---
slug: maintain-vibe-coded-app
title: "Adding Features to a Vibe-Coded App Six Months Later"
excerpt: "You shipped a vibe-coded app six months ago. Now you want to add a feature and the codebase looks like a stranger wrote it. Here's how to maintain it."
primaryKeyword: "maintain a vibe-coded app"
publishedAt: 2026-04-20
readingTimeMin: 7
author: "Robert Boylan"
tags:
  - vibe-coding
  - maintenance
  - indie-dev
  - ai-coding-tools
  - refactoring
---

You open the project for the first time in five months. A user emailed asking for a small feature. You agree, sit down to add it, and the codebase looks like a stranger wrote it. There's a folder called `helpers` and a folder called `utils` and you have no memory of why both exist. The component you need to edit is 800 lines long. Your AI tool, which built most of this, has no memory of any of it either. You're starting from zero and the project isn't.

Maintaining a vibe-coded app is the part nobody writes about. Every guide is for the build phase, where the AI is generating new code and the dopamine is high. The maintenance phase is quieter, slower, and where most indie projects die. Not because adding a feature is hard, but because the cold-start problem is brutal when you don't remember your own codebase and the AI never knew it.

Here's how to get back in.

## Why your old app feels like a stranger's

The first thing to make peace with: the code in front of you was written through a series of prompts, decisions, and AI suggestions that you no longer remember. That's not failure. That's how vibe-coding works.

When you're building, you're holding the whole thing in your head. The reason you put auth in a particular folder, the reason that one component is shaped weirdly, the reason there are three different button styles. All of that lives in your working memory while you're shipping. Two weeks after you stop, half of it is gone. Six months later, none of it is.

The AI model that helped you build it has no memory at all. Every session starts fresh. So you're staring at a codebase that was built by a partnership, and one of the partners has no recollection and the other has selective amnesia.

This is fine. It just means the first thing you do isn't add the feature. It's reconstruct the spec.

## Reconstructing the spec from a shipped app

You need a written description of what the app actually is before you let an AI tool touch it. This is non-negotiable. Skipping this step is the single most common reason maintenance sessions go sideways.

There are two ways to do this. The slow way: you read the code yourself, take notes, build a mental model, then write a one-page document covering what the app does, who it's for, what tech stack it uses, what features exist, and what the data model looks like.

The fast way: you let your AI tool read the codebase first and produce that document for you. In Cursor, this looks like opening the project, attaching the relevant top-level files (auth, main routes, a couple of representative components), and asking for a project overview. In Lovable, this looks like pasting in the deployed URL and asking what the app does. In Claude Code, this looks like running it against the repo and asking for a summary.

Both approaches end at the same place: a paragraph or two that says what the app is. Save it. You're going to paste it into every future session.
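Whichever route you take, it helps to have a fixed shape for the document so future sessions always find the same sections. A minimal skeleton might look like this (every value below is a placeholder to replace with your own):

```
# SPEC.md

## What this app is
One paragraph: what it does and who it's for.

## Tech stack
Framework, database, payments, hosting.

## Features that exist today
- Signup / login
- ...

## Data model
- users: id, email, created_at
- ...

## External services
- Stripe (payments), Resend (email), ...
```

The headings matter more than the contents. A predictable structure means you can paste the whole file into any AI tool and ask it to update one section without touching the rest.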

## Make the AI re-read everything before any change

Once you have the reconstructed spec, the next rule: the AI re-reads relevant files before it writes anything new. Always. This is the rule that prevents the most damaging maintenance failure mode, which is the AI [silently rebuilding a feature you already shipped](/blog/why-ai-keeps-rebuilding-features) because it didn't know the existing version was there.

In Cursor, "re-read" means attaching the relevant files explicitly to the prompt. In Claude Code, it means letting the agent navigate the codebase before you ask for the change. In Lovable or v0, it means pasting the relevant component code into the chat alongside the request.

A useful template for any maintenance prompt:

```
Here's the spec [paste].
Here's the current state of the relevant file [paste or attach].
The change I want is [describe].
Match the patterns and naming conventions already in the file.
Don't refactor anything that isn't directly required for the change.
```

That last line is the load-bearing one. Without it, AI tools love to "improve" your code while they're in there. They'll rename variables they consider unclear, restructure components they consider too long, swap libraries they consider out of fashion. None of which you asked for. All of which can break things you didn't think to test.

## When to refactor versus accept the cruft

You will be tempted, going through old code, to clean it up. Don't, unless the refactor is directly related to the feature you're adding.

The reason isn't laziness. It's that vibe-coded code is often weirder than dev-written code in ways that look like cruft but were the only way to make something work. A component might be 800 lines because the AI couldn't get a cleaner split to pass tests. A folder might be called `utils` and `helpers` because the AI ran out of context for the existing pattern and made a new one. Some of that is fine to clean up. Some of it has load-bearing weirdness you'll only discover by breaking it.

The rule of thumb: if the cruft is in your way for the current feature, refactor the smallest amount needed to land the change. If it's just bothering you, leave it alone. Maintenance is not the time for spring cleaning.

If you genuinely do need a bigger refactor, treat it as its own task. Reconstruct the spec, write the desired end state, and let the AI tool do the migration as one focused session. Mixing a refactor with a feature add is how you end up six hours in with neither working.

## Killing dead features cleanly

The other half of maintenance is removing things, not adding them. The half-finished features, the experimental flows, the dead routes that nobody clicks. These accumulate in vibe-coded apps faster than in hand-written ones, because the cost of generating a new feature is so low that you tend to leave the old ones lying around.

Before you remove anything, ask the AI to find every place the feature is referenced. Routes, components, database tables, environment variables, deploy hooks. AI tools are good at this kind of grep-and-summarise work and bad at remembering it for you. Get the full list before you start cutting.
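You can also do the sweep yourself before involving the AI, since it's one command. A sketch, using a hypothetical dead feature called "referrals" and placeholder files, of the kind of search that catches code, SQL, and env references in one pass:

```shell
# Set up a toy repo with three references to the dead feature.
# (Feature name, paths, and file contents are all placeholders.)
mkdir -p demo/src demo/migrations
echo "import { ReferralBanner } from './ReferralBanner';" > demo/src/App.tsx
echo "CREATE TABLE referrals (id serial);" > demo/migrations/001.sql
echo "REFERRAL_BONUS_CENTS=500" > demo/.env.example

# Case-insensitive (-i), recursive (-r), filenames only (-l):
# lists every file that mentions the feature, dotfiles included.
grep -ril "referral" demo
```

The output is your removal checklist. Paste it into the AI session so the cull works from an explicit list rather than the model's guess about where the feature lives.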

Then remove the feature in one focused session, run the test suite (you do have one, right?), and ship it as a small dedicated change. Don't bundle it with the new feature you're adding. If something breaks, you want to know which change caused it.
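The "small dedicated change" part can be as simple as a branch that contains nothing but the removal. A sketch, with placeholder branch and commit names and the same hypothetical "referrals" feature:

```shell
# One branch, one purpose: the removal and nothing else.
git init -q cleanup-demo
cd cleanup-demo
git checkout -qb remove-referrals   # dedicated branch for the cull

# ...delete every file and reference the sweep found...
echo "app without referrals" > app.txt
git add app.txt

# Run your test suite here before committing, e.g.:
# npm test

git -c user.email=you@example.com -c user.name="you" \
    commit -qm "Remove dead referrals feature"
```

If the deploy breaks after this ships, `git revert` on that one commit brings the feature back cleanly, which is exactly the isolation you lose by bundling it with new work.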

## The version of you in six months will thank you

Every maintenance session should leave the project slightly more maintainable than it was before. Not through aggressive refactoring, but through one small habit: update the spec.

When you finish today's feature, take three minutes. Open the reconstructed spec. Add a line about the new feature. Update the data model section if you changed it. Note any new external service you wired up. Save the spec back into the repo as `SPEC.md` or wherever feels natural.

Future-you, opening this project in another five months, will paste that spec into the AI tool and the cold start will take ten minutes instead of an afternoon. That's the actual maintenance moat. Not the code itself; the documented description of what the code is.

This is the same loop as the build phase, just played slowly. Every change refines the spec. Every spec makes the next change cheaper. Most vibe-coded apps die in maintenance because their authors never built that loop. The shipped app outlives the working memory that built it, and there's no document bridging the gap.

That's exactly the gap [Draftlytic was built to fill](/blog/how-to-use-draftlytic). When the spec is the artifact you maintain, not the codebase, every future change starts from a place where the AI can be useful again. The model didn't change. Your context did.
