---
slug: feature-descriptions-for-ai
title: "Writing Feature Descriptions an AI Won't Misinterpret"
excerpt: "Feature description for AI coding tools is harder than it looks. The title and one-liner the AI reads first set the shape; vague openings produce vague apps."
primaryKeyword: "feature description for AI"
publishedAt: 2026-04-08
readingTimeMin: 7
author: "Robert Boylan"
tags:
  - prd
  - ai-coding-tools
  - app-planning
  - prompt-engineering
  - feature-spec
---

You write down a feature called "Notifications" with a one-line description: "Send notifications to users about important events." You hand it to Lovable. It builds a browser-modal popup that fires every time a user does literally anything. Technically correct. Not what you meant.

The acceptance criteria might have caught this. The data model might have caught this. But the feature title and one-liner came first, set the AI's mental shape for the whole feature, and quietly steered every downstream decision toward "browser modals about everything." That's the feature description problem, and it's separate from the longer post on [acceptance criteria for AI coders](/blog/acceptance-criteria-for-ai-coders) because the title and one-liner happen earlier, get more visual weight, and are read by the AI first.

Here's how to write feature descriptions for AI that don't quietly produce the wrong app.

## What the AI fills in when the title is vague

"Notifications" is a hundred features. Browser push, email, in-app toast, SMS, in-app inbox, real-time alerts, daily digest. The AI will pick whichever one was most common in its training data, which is usually browser push (because that's what the JavaScript ecosystem has the loudest opinions about).

"Notifications" with a one-liner like "send notifications to users" hasn't reduced the space at all. The one-liner just restates the title in more words.

Compare with: "In-app toast notifications triggered when a user's daily check-in is marked complete or missed by 9pm. Toasts auto-dismiss after 4 seconds. No browser push, no email, no SMS in v1."

That description has done four things at once. It picked the channel (in-app toast). It pinned the trigger (check-in completion or 9pm miss). It set the behaviour (auto-dismiss after 4 seconds). It explicitly ruled out the alternatives the AI would otherwise have tried. The AI now has nothing to invent.
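To see how little is left open, here's a minimal sketch of the trigger logic that description implies. All names are hypothetical, and a real app would render the toast with whatever component library the project already uses:

```typescript
// Hypothetical sketch of the behaviour pinned by the description above.
// Only two triggers exist, and every toast auto-dismisses at 4 seconds.
type CheckInStatus = "complete" | "missed";

interface Toast {
  message: string;
  dismissAfterMs: number;
}

function toastForCheckIn(status: CheckInStatus): Toast {
  const message =
    status === "complete"
      ? "Daily check-in complete"
      : "You missed today's check-in";
  return { message, dismissAfterMs: 4000 }; // auto-dismiss after 4 seconds
}
```

Notice there's no branch for email, push, or SMS; the description ruled them out, so there's nothing for the AI to invent.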

You don't have to write the description that long for every feature. But the principle holds: if the title plus one-liner could describe twenty different features, you've written a category, not a feature.

## The verb test

A useful diagnostic. For every feature title and one-liner, ask whether the verbs are concrete enough that a junior engineer could build it without follow-up questions. If the verbs are "manage", "handle", "support", "enable", you've written a category.

Compare these pairs:

- "Manage user profiles" → "Edit name, email, and avatar from a Profile page; save with one button"
- "Support file uploads" → "Upload PDFs up to 10MB to S3 from the Documents page; show progress bar; one file at a time"
- "Handle subscriptions" → "Subscribe to monthly or yearly plan via Stripe Checkout; redirect to /welcome on success; surface error to user on failure"
- "Enable team collaboration" → "Invite up to 5 teammates by email; teammates see read-only project list; owner can revoke access from a Team settings page"

The right-hand descriptions aren't longer because they're padded. They're longer because they make decisions. Which file types? How big? Which payment provider? Which surface? How many teammates? Concrete verbs (edit, upload, subscribe, invite) force you to make those calls before the AI does.

## Naming features so they survive a refactor

This is a quieter rule that pays off later. Features get referenced again. They get renamed in conversation. The AI hears the rename and the original shape drifts.

You wrote "Notifications" in version one. Three weeks later you tell the AI "let's add email alerts to the notifications system." The AI now thinks "Notifications" includes email, which it didn't a moment ago. You've quietly redefined the feature without writing it down.

The fix is to name features after the user-facing thing, not the implementation:

- "Daily check-in reminder" beats "Notifications"
- "Document upload" beats "File handling"
- "Team invite" beats "User management"
- "Plan upgrade" beats "Subscriptions"

User-facing names hold their shape across conversations. Implementation-facing names slide as the AI generalises them.

## When precision is too much

There's a failure mode in the other direction: a description so specific it fights your stack.

If you write "Implement notifications using web-push library version 3.5.1 with VAPID keys and a service worker registered at /push-sw.js", you've removed the AI's ability to use whatever the right library actually is in your project. Maybe you're already using OneSignal. Maybe Lovable's default toast pattern is the right answer. The over-specification locks the AI into a path that may not match the rest of the codebase.

The rule of thumb: be specific about what the user sees and what triggers it. Be loose about how it's implemented, unless you have a real preference. "In-app toast triggered by X" is precise about behaviour. It lets the AI pick the right component or library to actually render the toast.

The same logic shows up in [why prompt engineering still matters](/blog/what-is-prompt-engineering): you're describing the goal, not the steps. Steps are the AI's job. Goals are yours.

## Templates for common feature types

A few starting shapes for the most common indie-app feature categories. None of these are exhaustive. They're a baseline that pulls a vague description into the precision range.

**Display feature ("show me X"):**

> [What] is shown [where] [when]. Each item shows [fields]. [Sorting / filtering rules]. [Empty state behaviour].

Example: "User's logged workouts are shown on the Dashboard route, ordered newest-first. Each row shows date, exercise count, and total volume. If no workouts exist, show a 'log your first workout' CTA."
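That example maps almost line-for-line onto code. A sketch of the render decision, with hypothetical types and a plain string standing in for the UI:

```typescript
// Hypothetical types for the Dashboard example above.
interface Workout {
  date: string; // ISO date, e.g. "2026-03-14"
  exerciseCount: number;
  totalVolume: number;
}

function dashboardRows(workouts: Workout[]): string[] {
  // The empty state is part of the spec, not an afterthought.
  if (workouts.length === 0) return ["CTA: log your first workout"];
  // Ordered newest-first, per the description.
  return [...workouts]
    .sort((a, b) => b.date.localeCompare(a.date))
    .map((w) => `${w.date} · ${w.exerciseCount} exercises · ${w.totalVolume} volume`);
}
```

Every line of the function answers a question the template forced you to answer first: what's shown, in what order, with what fields, and what happens when there's nothing to show.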

**Input feature ("let me create X"):**

> User can create a [thing] from [where]. Required fields: [list]. Optional fields: [list]. Validation: [rules]. On success: [what happens]. On failure: [what happens].

Example: "User can create a Workout from the New Workout button on the Dashboard. Required: date, at least one exercise. Optional: notes. Validation: date can't be in the future. On success: redirect to the Workout detail page. On failure: surface the validation error inline above the form."
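The validation rules in that example are already precise enough to sketch directly. A minimal version, with hypothetical names; the error strings would surface inline above the form per the description:

```typescript
// Hypothetical draft shape for the Workout example above.
interface WorkoutDraft {
  date: Date;
  exercises: string[]; // required: at least one
  notes?: string;      // optional
}

function validateWorkout(draft: WorkoutDraft, now: Date): string[] {
  const errors: string[] = [];
  if (draft.date.getTime() > now.getTime()) {
    errors.push("Date can't be in the future");
  }
  if (draft.exercises.length === 0) {
    errors.push("Add at least one exercise");
  }
  return errors; // empty array means the draft is valid
}
```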

**Lifecycle feature ("when X happens, do Y"):**

> When [trigger], the system [action]. The user sees [feedback]. If [edge case], [fallback].

Example: "When a user completes their daily check-in, the system increments their streak counter. The user sees a confetti animation and the new streak number. If the previous check-in was more than 36 hours ago, the streak resets to 1 instead of incrementing."
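The edge case in that description is the whole point: it's the kind of rule an AI will otherwise guess at. A sketch of the streak logic, with hypothetical names; the 36-hour window comes straight from the description:

```typescript
// Hypothetical sketch of the streak rule described above.
const RESET_WINDOW_HOURS = 36;

function nextStreak(currentStreak: number, hoursSinceLastCheckIn: number): number {
  if (hoursSinceLastCheckIn > RESET_WINDOW_HOURS) {
    return 1; // edge case from the spec: a long gap resets instead of incrementing
  }
  return currentStreak + 1;
}
```

Without the "more than 36 hours" clause in the description, the AI would have to invent a reset policy, and you'd find out which one it picked in production.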

These shapes are not magic; they're scaffolding. The point isn't to follow the template literally, it's to force yourself past "manage" and "handle" into concrete verbs and observable outcomes.

## Why this matters more than the acceptance criteria

Acceptance criteria are the rules the AI is supposed to honour. Feature descriptions are the shape the AI imagines first. Get the shape wrong and the criteria are repairing damage instead of preventing it.

When the AI rebuilds the same feature you already shipped, it's often because the original feature description was vague enough that the rebuild looks like a new feature to the model. There's a longer post on [why AI keeps rebuilding features](/blog/why-ai-keeps-rebuilding-features); a precise feature title is one of the simplest ways to prevent it. "Daily check-in reminder" is recognisable as the same feature next session. "Notifications" is fuzzy enough that another notification-shaped thing can slide into the same slot, and now you have two of them.

The deeper habit this builds: writing feature descriptions makes you decide what the feature is, which is the part most app builds skip. The AI is happy to build whatever you describe. The hard part is describing the actual thing, not a category that contains it.

That's the work Draftlytic was built to make easy. The questionnaire pushes you toward concrete verbs and user-facing names without you having to remember the whole rule set, and the export carries those decisions into every prompt downstream. The feature descriptions in the spec aren't the AI's problem to solve. They're the part the AI was waiting for you to figure out.
