---
slug: why-ai-keeps-rebuilding-features
title: "Why Your AI Coding Tool Keeps Rebuilding the Same Feature"
excerpt: "AI rebuilding features you already built is the most common vibe-coding frustration. Here's why it keeps happening and how to stop it."
primaryKeyword: "AI keeps rebuilding features"
publishedAt: 2026-05-02
readingTimeMin: 6
author: "Robert Boylan"
tags:
  - ai-coding-tools
  - context-window
  - prd
  - vibe-coding
  - indie-dev
---

You asked Cursor to add a notification banner. It added one. It also silently rewrote the entire auth flow you finished three weeks ago and spent two days getting right. The session ends. You check the app. Login is broken. You have no idea what changed.

That is the "AI keeps rebuilding features you already built" problem. It is the most common complaint from vibe-coders who have been at it for more than a few days, and it has nothing to do with the AI being bad. It has to do with the AI having no idea what is finished and what is not.

This post is about that specific failure mode and how to stop it.

## Why the AI has no memory of what you built

Every time you start a new conversation with Cursor, Claude Code, Lovable, or Bolt.new, the model starts fresh. It has no memory of the session where you got auth working. It has no memory of the careful prompt where you explained the user model. It sees the current codebase and whatever you type next, and that is it.

Even within a session, context windows (the amount of text a model can "hold in mind" at once) are finite. The longer a session runs, the more of the early conversation falls off the edge. Details you established in prompt three may be invisible by prompt fifteen.

This means the model is always making educated guesses about what the code in front of it is supposed to do. It sees an auth flow and has to decide: is this load-bearing production code, or scaffolding that was thrown in to unblock something and should now be replaced? Without guidance, it defaults to "rewrite whatever looks relevant." That is not a bug. That is the model doing its job with incomplete information.

The other half of this is that the model does not understand feature ownership. It does not have a mental model of "this file is part of the onboarding flow, which is done" versus "this file is a placeholder I want to replace." Code does not wear labels. If you do not give it labels, the model improvises.

## The specific moment it goes wrong

The failure almost never happens on the first day of a project. On day one, everything is in progress, and the model can rewrite freely because there is nothing yet worth protecting.

It starts happening after you have three or four features that actually work. You come back to add feature five, and the prompt is something like: "Add a weekly email digest for the user's activity." That sounds simple. But "weekly email digest" touches users, notification preferences, and potentially the email-sending path, which, depending on how your app is structured, may share code with the password reset flow you do not want touched.

The model doesn't know the password reset flow is sacred. You didn't say so. It sees "email" in the task and restructures everything related to email.

This is not a Cursor problem or a Claude Code problem or a v0 problem. It is an all-of-them problem because the failure comes from the prompt, not the model. [The AI getting worse over time in a long project is a related issue but a different one](/blog/why-ai-code-gets-worse-over-time). Rebuilding features is specifically about the model not knowing what is done versus what is in scope.

## How to tell the AI what is already finished

The most effective thing you can do is name what is complete and put it somewhere the model will see it.

Most vibe-coders keep the spec (the document that describes what they're building) as a rough notes file. They update it when they add features but never subtract from it. The spec reads like a wishlist, not a status board. When you paste it into a prompt, the AI treats everything in it as "to do."

Fix that. Add a "Completed features" section to your spec. List what is done by name. Then, in any prompt where you are touching related territory, add a short explicit line: "The auth flow is complete and working. Do not modify files in the auth folder."
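
Here is a minimal sketch of what that section could look like; the feature names and paths are placeholders for your own project:

```markdown
## Completed features (do not rebuild or restructure)

- Auth flow: email/password login, password reset (/src/features/auth/)
- Onboarding: three-step wizard, working and tested
- Link import: working but fragile, do not touch

## Up next

- Weekly email digest (new files only, under /src/features/digest/)
```

Paste the relevant lines at the top of any prompt that works near protected code.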

It sounds obvious. It works because you are giving the model the context it cannot infer from the code alone. The model is not stubborn. It is not trying to ruin your app. It is trying to be helpful with what it has. When you give it more, it does more with it.

Tools like Cursor let you pin files or keep a `.cursorrules` file. Claude Code reads project memory from a `CLAUDE.md` file. Lovable and Bolt.new have a notes or persistent-instructions field. Every tool has a version of this, and almost nobody uses it beyond the first setup.
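
A few lines in one of those files carry the same constraints into every session. A sketch, with hypothetical paths:

```text
The auth flow (/src/features/auth/) is complete and working. Do not modify it.
Do not restructure existing data models without asking first.
Prefer creating new files over rewriting existing ones.
```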

## Four habits that keep the AI from rewriting your finished work

**Lock your prompts to a scope.** Instead of "Add a weekly email digest," say: "Add a weekly email digest. Only create new files in `/src/features/digest/`. Do not modify any existing files in `/src/features/auth/` or `/src/lib/email.ts`." Narrow prompts get narrow edits, and the model follows explicit scope constraints better than you might expect.
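
One reusable shape for that kind of prompt, with placeholders you would swap for your own feature and paths:

```text
Task: <one feature, described concretely>
Allowed: create new files under /src/features/<feature>/
Do not modify: /src/features/auth/, /src/lib/email.ts, existing data models
If the task requires touching a protected file, stop and explain why first.
```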

**Ask the AI to describe what it will touch before it touches anything.** Before it writes a single line, ask: "Which files would you need to modify to implement this? List them and explain why each one." Read that list. If `/src/features/auth/login.tsx` appears on a prompt about email digests, ask why. Nine times out of ten the model will explain a dependency you didn't know about, or you will catch that it is about to do something you didn't intend.

**Commit often, and commit at feature checkpoints.** Not just at the end of the day. Every time you finish a feature and it works, commit. The commit message can be one line: "auth flow working." If the model later breaks auth, you can revert to that exact point without losing the two features you built after. Version control (Git, the system that saves snapshots of your code over time) is not just for teams. It is a safety net for solo vibe-coders specifically because AI sometimes reaches further than you expected.
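
In practice that is two commands at each checkpoint, plus one to get back when something breaks. The hashes below are illustrative:

```bash
# checkpoint after each working feature
git add -A
git commit -m "auth flow working"

# later: list your checkpoints
git log --oneline

# restore a broken folder to a known-good commit
git checkout <good-commit-hash> -- src/features/auth/

# or undo one bad commit entirely
git revert <bad-commit-hash>
```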

**Keep a spec with a clear done/to-do split.** A living spec that separates "what is built" from "what is next" gives the AI accurate context on every prompt. If you paste your spec into a session, the model knows not to reinvent things you already have. This is also just a good habit for your own clarity. [Planning before you build reduces all of the downstream friction](/blog/vibe-coding-why-planning-matters), including the friction of patching something the AI accidentally broke.

## Real examples of how this plays out

A founder building a bookmarking app asked Lovable to add tagging to saved links. Lovable added tags. It also restructured the data model for saved links, which broke the existing import feature. The import feature had taken four sessions to get right. It took two more to get it back.

A developer using Claude Code asked it to add a dark mode toggle. Claude Code rewrote the entire theming system from scratch because it inferred (correctly, in a vacuum) that the existing setup was improvised. The developer agreed the old setup was messy. But "messy and working" was better than "clean and broken" with a launch two days away.

In both cases the fix would have been the same. A one-line instruction in the prompt: "Do not restructure existing data models" or "Do not change the existing theme system, only add a toggle that switches between light and dark classes." The model would have followed it.

The pattern is always the same. The AI keeps rebuilding features because nobody told it those features were done.

## Spec discipline is the actual fix

There is a version of this advice that sounds like a lot of work: maintain a full written spec, keep a done/to-do list, write explicit constraints into every prompt. It is not that heavy in practice. It is ten extra seconds per prompt and a document you update as you go.

The cost of not doing it is the afternoon you spend recovering an auth flow that worked fine before you touched it.

A spec that distinguishes what is built from what is planned is the thing [a PRD is actually for](/what-is-a-prd) in a vibe-coding context. Not to satisfy some product process. Just to give the AI the signal it needs to not accidentally destroy your work.

Draftlytic is built around this: you describe your idea, it produces a structured spec with sections for your completed features alongside your planned ones, and it stays live as you build. You reference it in prompts instead of writing constraints from scratch each time. The AI knows what is done. It stops rebuilding the things that are already there.

You can also [export it as an implementation plan](/blog/export-implementation-plan) when you are ready to hand it to a tool for a full build pass, with completed features already flagged so the model skips them.

AI coding tools are good. They are also completely blind to your project history if you do not show it to them. Show it to them.
