---
slug: inside-an-exported-prd
title: "What's Inside an Exported PRD: A Section-by-Section Tour"
excerpt: "A guided tour of the Draftlytic PRD export: every section, what it's for, and why each one changes the code your AI tool produces."
primaryKeyword: "Draftlytic PRD export"
publishedAt: 2026-04-29
readingTimeMin: 7
author: "Robert Boylan"
tags:
  - draftlytic
  - prd
  - app-planning
  - ai-coding-tools
  - export
---

A friend pasted a Draftlytic PRD export into Lovable last week. Same idea he'd been trying to build for two evenings. Same model. The output was suddenly different. Not better in some vague way, actually different: the right tech stack picked the first time, the data model named the way he'd described it, two of the features built without him having to remind the model they existed.

He sent me a message that said "what is this thing actually doing?" Fair question. The Draftlytic PRD export is one markdown file, but it has eight or nine sections, and each one is doing a specific job. Some of those jobs you can guess. Some you wouldn't think to ask for, which is exactly why the AI tool drops them when you don't have a spec. (A PRD, by the way, is a product requirements document. A plan that says what the app should do before any code is written.)

This post is a tour of that file, section by section. If you've used Draftlytic and wondered what's inside the export, or you're considering it and want to know what you'd actually get, this is for you.

## The overview, target audience, and personas

The first three sections of a Draftlytic PRD are the "who" sections: the project name and a one-paragraph overview of what the app is, then the target audience, then up to a handful of named personas with their goals and frustrations.

This is the part of the spec that an AI coding tool, left to its own devices, will quietly drop after about three prompts. You describe a habit tracker for parents of toddlers, the model nods, and then somewhere around prompt four it's helping you build a slick gradient onboarding screen aimed at twenty-somethings. Not malicious. The model just doesn't carry the audience between turns the way you assume it does.

When the export gives the model a paragraph that says "Sam, 34, two kids under five, opens the app standing in the kitchen, has thirty seconds before something is on fire," the design choices change. Bigger tap targets. Less clever copy. Defaults that assume one-handed phone use. None of that gets engineered if the audience isn't on the page.

> **Target audience**
>
> Parents of toddlers (ages 1–4) tracking small daily wins. Time-poor, often distracted. Phone use is one-handed and interrupted. Not the quantified-self crowd.

The personas section sounds redundant when you read it back. It isn't. A persona is a worked example of the audience. The audience tells the model who it's for; the persona shows the model what one of those people does on a Tuesday morning. AI tools build noticeably different UIs when there's a Tuesday morning in the prompt.
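
Here's roughly the shape of a persona entry, reusing the Sam example from above (the exact layout is illustrative, not a fixed Draftlytic template):

```markdown
### Sam

34, two kids under five. Opens the app standing in the kitchen,
one-handed, with about thirty seconds before something is on fire.

**Goals:** log the day's habit without having to think about it.
**Frustrations:** apps that ask for a note or a mood rating before saving.
```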

## Features and acceptance criteria

This is the section that does the most to stop the AI from rebuilding things you already asked for.

Every feature in a Draftlytic PRD has a title, a short description, a priority, and acceptance criteria. The acceptance criteria are the part most other planning tools skip. They're the bullet list under each feature that describes what "done" looks like, in concrete behaviour.

Here's roughly what a feature block looks like in the export:

```markdown
### Daily check-in

A one-tap interaction to log that today's habit happened. Should feel
fast and forgiving on mobile.

**Acceptance criteria**

- A single primary button on the home screen marks today as done
- Tapping again undoes it without a confirmation prompt
- Streak count updates immediately, no full reload
- Works offline; syncs when the connection returns
```

Without acceptance criteria, "daily check-in" is a vibe. The AI tool makes its own call about what "feels fast" means, picks a confirmation modal you didn't ask for, and you spend prompts five and six explaining what you actually meant. With acceptance criteria, the model has a list of conditions it can check itself against. Confirmation modal? That breaks "tapping again undoes it without a confirmation prompt." It deletes the modal and moves on.

There's a longer post on [what good acceptance criteria look like for AI coding tools](/blog/acceptance-criteria-for-ai-coders) if you want to go deeper. The short version: criteria should be observable, not aspirational. "Loads fast" is aspirational. "Initial paint under 2 seconds on 4G" is observable.

The export also keeps your features in priority order, with completed features marked. That ordering is read by the AI tool as the build sequence, which matters more than people expect.
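
If it helps to picture it, the ordered feature list reads roughly like this (the priority labels and done-marker here are illustrative):

```markdown
## Features

1. Daily check-in (must-have) [done]
2. Streak view (must-have)
3. Gentle reminders (nice-to-have)
```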

## Tech stack, data model, and external services

These sections answer a different question: not what to build, but the shape of what's allowed.

The tech stack section lists the languages, frameworks, and platforms you've committed to. React, Postgres, deployed on Vercel, native iOS in Swift, whatever it is. This stops the model from helpfully suggesting Next.js when you've already picked Astro, or proposing a Firebase backend when you said Supabase. AI tools default to whatever they've seen most in training data, which is rarely your choice.
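
A tech stack block can be very short. For the habit tracker, it might be nothing more than this (the choices here are one example, not Draftlytic defaults):

```markdown
**Tech stack**

- Frontend: React with Vite, Tailwind for styling
- Backend: Supabase (Postgres, auth, storage)
- Hosting: Vercel
```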

The data model section is the one that surprises people. It's a list of the main entities your app needs (User, Habit, CheckIn) with their key fields. Not a full database schema. Just enough that when the model writes a `User` table, it includes the fields you said you'd need, with the names you used. This single section is why pasted Draftlytic exports tend to produce code with consistent naming on the first try. The model isn't guessing whether the field is `userId` or `user_id` or `owner`: it's reading.
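
Concretely, here's roughly what that section looks like for the habit tracker (the field names are illustrative, but the point is they'd match whatever you told Draftlytic):

```markdown
**Data model**

- **User**: id, email, displayName, createdAt
- **Habit**: id, userId, name, cadence, createdAt
- **CheckIn**: id, habitId, date, completedAt
```

When the model later writes the schema, `userId` is `userId` because that's what's on the page.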

External services covers the third-party dependencies you're already planning to use: Stripe for payments, Resend for email, Supabase for auth, Mux for video. Without this, the AI tool will sometimes invent a fake auth integration and write a hundred lines of code against it. With this, it knows to use the SDK you actually plan to install.

> **External services**
>
> - Supabase (auth, Postgres, storage)
> - Resend (transactional email)
> - Stripe (subscriptions, customer portal)

That tiny list saves a remarkable amount of "no, not that one" prompting.

## Design fields, and why every Lovable app looks the same without them

If you've used Lovable, v0, or Bolt.new much, you've probably noticed that apps generated with no design direction all converge on the same look. Soft gradients. Rounded corners. Stock-photo hero. Tasteful, but bland and indistinguishable.

That happens because the model has nothing to anchor on, so it falls back to the design language it's seen most in training data. Tell it nothing, get the average of everything.

The Draftlytic PRD includes a block of design fields that solve this directly. Specifically:

- **`design_style`**: the overall aesthetic (minimalist, playful, brutalist, premium-editorial)
- **`copy_tone`**: how the writing should sound (warm and human, dry and technical, punchy and irreverent)
- **`ux_patterns`**: the interaction conventions (single-page, wizard-style, dashboard-with-sidebar, mobile-first stack)
- **`theme_colors`**: your actual brand palette
- **`font_family`**: the typeface family or pairing
- **`first_user_action`**: the very first thing a user does after opening the app

That last one matters more than it sounds. "First user action" tells the AI what the empty state should be built around. An app whose first action is "log your first habit" needs a very different home screen from one whose first action is "invite a teammate." Models don't naturally pick this up from the feature list, because it lives in between features.
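
Filled in for the habit tracker, the design block might read something like this (every value here is illustrative):

```markdown
**Design**

- design_style: minimalist, calm
- copy_tone: warm and human, zero jargon
- ux_patterns: mobile-first stack, one primary action per screen
- theme_colors: #2D6A4F, #FEFAE0, #1B1B1B
- font_family: Inter
- first_user_action: log your first habit
```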

For more on what an AI tool actually needs to produce a non-generic UI, see [what a good Lovable app spec looks like](/blog/good-lovable-app-spec).

## Business and operational fields

This is the section the AI never thinks to ask about, and it's the one that changes the most about the resulting app.

Draftlytic exports include a small but loaded business block:

- **`revenue_stream`**: free, freemium, subscription, one-time purchase, ads, marketplace fees
- **`products`**: the actual tiers or SKUs (Free / Pro at $9, Studio at $29, etc.)
- **`admin_panel`**: whether you need a back-office UI to manage users, content, refunds

Why does this change the code? Because if the model knows the app is a $9/month subscription with three plans, it builds the auth flow with a billing relationship from day one. It scaffolds a `subscriptions` table. It thinks about cancellation states. If the model knows there's an admin panel, it doesn't bury moderation logic inside the user-facing app where you'll have to extract it later.
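
For the habit tracker, a filled-in business block might look like this (the tiers and prices are made up for illustration):

```markdown
**Business**

- revenue_stream: subscription
- products: Free (3 habits) / Pro at $4/month (unlimited habits, reminders)
- admin_panel: yes (user list, refund button, content moderation queue)
```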

When this section is missing, the AI tool builds a "free app" by default. Then prompt seven is you trying to retrofit Stripe into a codebase that wasn't designed for billing, and the model is now refactoring code it wrote two prompts ago. Painful, and avoidable.

The admin panel field is small but quietly enormous. Most indie apps need one and most first AI builds don't have one, because nobody asked for it explicitly. Putting one bullet in the spec ("Admin panel: yes, basic user list, refund button, content moderation queue") shifts the whole architecture to support it.

## What about the rest of the file?

A few smaller sections round out the export: notifications and how they should behave, constraints (anything you've explicitly ruled out, or platform limits), and the current milestone (the version you're working toward right now). The implementation plan export, which is its own document, picks up from there: see [how the implementation plan export complements the PRD](/blog/export-implementation-plan) if you want the full picture of how the two files work together.

The shape of a Draftlytic export is opinionated. Every section is there because, in testing, leaving it out led to AI tools quietly inventing the wrong answer. Some of those wrong answers (like the design defaulting to soft gradients) are obvious. Others (like the model assuming a free app and forgetting to scaffold billing) only become obvious three prompts deep, when undoing the assumption is expensive.

## So what does this mean for your next project?

Every section in the export is something the AI would otherwise invent. The audience: invented as "general users." The data model: invented from the feature names. The design: invented from training-data averages. The business model: invented as "free." Each invention is plausible, which is why you don't notice it. And each invention diverges, just slightly, from what you actually meant.

A PRD doesn't make the AI smarter. It just stops it from filling in the blanks with whatever was easiest. The work of pinning down those blanks is the part most people skip, because it's tedious and you can't quite remember all the moving parts. That's what [Draftlytic is for](/what-is-a-prd). You describe the idea, it asks the questions you wouldn't have thought to ask, and the export is what gets pasted into Cursor, Lovable, or wherever you're working, with all the sections the model needs to build the thing you actually had in mind.
