The anatomy, in order of what matters
Phase I proposals are structured documents. Every agency publishes a solicitation that specifies the sections, the page limits, and the evaluation criteria. The sections look similar across agencies — technical volume, commercialization, team, cost — but the weights and the reviewer behavior differ enough to matter. This piece walks through each section in the order a reviewer experiences it, explains what they are scoring, and gives a page-and-effort budget that assumes a typical 20-page DoD Phase I or a 15-page civilian Phase I.
The Eight Sections of a Phase I Proposal
Phase I reviewers weight technical merit 50-60% and commercialization 20-30%. Most proposals fail on commercialization — not because the science is weak, but because market analysis substitutes TAM slides for named customers.
The frame to keep in mind: a Phase I reviewer reads 20 to 40 proposals in a compressed review window. They do not read every word. They read the first page closely, scan headers and tables, and return to the scoring sections with a scoring rubric in hand. Every decision you make about structure — section order, header phrasing, table placement, figure count — is a decision about where their attention lands.
Section 1 — The technical volume (50-60% of the score)

The technical volume is the heart of the proposal. It is the section that reviewers score against the published evaluation criteria, and it is the section where most proposals are won or lost. At DoD the technical volume is capped at 20 pages (including figures). At civilian agencies it typically runs 10 to 15 pages. The cap is strict — reviewers are instructed to stop reading at the limit, and pages beyond the cap are discarded.
Inside the technical volume, the sub-structure is prescribed by the solicitation and is approximately:
| Sub-section | Typical pages | What reviewers grade |
|---|---|---|
| Identification of problem | 1-2 | Do you understand the topic as written? Do you understand the end-user need under the topic? |
| Phase I objectives | 1 | Are the objectives measurable? Do they map to the topic's required outcomes? |
| Phase I statement of work | 3-5 | Is the technical plan credible? Are tasks clearly defined with deliverables and schedule? |
| Related work / state of the art | 2-3 | Do you know the prior art? Is your approach meaningfully different or better? |
| Technical approach | 5-7 | Is the approach technically sound? Are the risks identified and managed? Does the PI clearly understand what they are proposing? |
| Relationship to Phase II | 1-2 | Is there a credible path from the Phase I prototype to a Phase II effort that carries it toward deployment? |
| Facilities and equipment | 0.5-1 | Can you execute? Access to compute, data, clearances, lab space? |
| Subcontractors / consultants | 0.5 | Who are the external contributors and what do they bring? |
The technical approach sub-section is where proposals are won. It should read like a senior engineer explaining an architecture to another senior engineer — specific, technical, anchored in real methods, and honest about risks. Vagueness loses. A reviewer who cannot tell from the technical approach what you will actually build will not score you as technically credible.
Section 2 — The project summary / abstract
The summary appears at the front of the technical volume and is often also submitted as a standalone 200-word abstract. It is the single most-read piece of writing in the proposal. Reviewers read it first, often twice, and return to it when forming their final score. Treat it as a contract with the reader: by the end of the abstract, the reviewer should know the problem, your approach, what is novel, and what you will deliver at the end of Phase I.
A strong summary has four sentences: the problem, the approach, the novelty, the deliverable. Each sentence should carry weight. The weakest summaries read like marketing copy — "we propose to develop an innovative AI solution." The strongest summaries read like a technical brief — "We will train a transformer-based anomaly detector on Navy maintenance telemetry, demonstrating a 30% reduction in false positives against the current threshold-based system, delivering a containerized inference pipeline integrated with NAVSEA's maintenance scheduler."
Section 3 — Commercialization (20-30% of the score)
Commercialization is the second-most-weighted section and is the section where first-time proposers most often fail. A commercialization section is not a market study. It is a credible plan for how this technology goes from Phase I prototype to real customers — inside and outside government — with the revenue and relationships to back the plan.
Reviewers are looking for three things. First, a realistic federal transition path with named program offices, specific contracts, and a dollar figure that is plausible. Second, a dual-use story for the commercial market with target customers, pricing, and a go-to-market plan. Third, evidence that the team has thought about commercialization seriously — letters of support, existing customer conversations, a capital plan if relevant. A commercialization section that offers nothing beyond generic language about "the growing AI market" scores at the bottom.
Page budget: 3-5 pages. Sections typically include federal opportunity, commercial opportunity, market size (done carefully — inflated TAM is a tell), revenue projection over 5 years, capitalization plan, and the team's commercialization experience.
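Where the solicitation asks for that 5-year revenue projection, keep the arithmetic traceable. A minimal sketch in Python, where every base figure and growth rate is a hypothetical placeholder rather than a benchmark from this piece:

```python
# Hypothetical 5-year revenue projection for a commercialization section.
# All base figures and growth rates are placeholder assumptions; replace
# them with numbers you can defend line by line.
federal_base = 250_000      # assumed year-1 federal revenue ($)
commercial_base = 50_000    # assumed year-1 commercial revenue ($)
federal_growth = 0.40       # assumed annual growth rate
commercial_growth = 0.80    # assumed annual growth rate

for year in range(1, 6):
    fed = federal_base * (1 + federal_growth) ** (year - 1)
    com = commercial_base * (1 + commercial_growth) ** (year - 1)
    print(f"Year {year}: federal ${fed:,.0f}, commercial ${com:,.0f}, "
          f"total ${fed + com:,.0f}")
```

The model's sophistication is beside the point; what matters is that a reviewer can trace each year's number back to a stated assumption.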
Section 4 — Team, key personnel, and bios
The team section is a credibility filter. Reviewers are asking whether the PI has done this kind of work before, whether the team collectively has the skills to execute, and whether the firm has the institutional capacity to deliver. For a small firm with a senior technical founder, the PI bio is the single most important paragraph in this section.
A strong PI bio reads like a resume written for a peer: credentials, years in the relevant domain, specific prior projects, published or delivered results. For a Kaggle Top 200 founder with 15 years of engineering tenure and a track record of production ML delivery at a prior employer, the bio should say exactly that, with concrete numbers. Reviewers discount generic claims; they reward specific prior delivery.
Page budget: 1-2 pages for PI and key personnel. A separate bio document is usually attached but the summary paragraph in the technical volume is what gets read.
Section 5 — Past performance and facilities
For a newly formed firm — Precision Delivery Federal LLC was formed in March 2026 — corporate past performance is thin by definition. That is an acceptable position to be in; reviewers know that many SBIR awardees are new firms. What they want to see is founder prior delivery described carefully. "The PI delivered production ML systems for a federal health agency under prior employment at Harmonia Holdings" is a credible statement. "Company has extensive federal past performance" from a firm that is 30 days old is not.
Facilities is usually a short paragraph: compute environment, security posture, any lab or data access. For an AI firm, the honest answer is often "we run on a FedRAMP-authorized cloud, we have access to the following datasets, we can stand up a secure research enclave in X days." Reviewers do not expect small firms to own SCIFs; they expect you to describe an executable environment.
Section 6 — The cost proposal (pass-fail, not scored)
Cost is evaluated for reasonableness, not scored competitively. A Phase I at the ceiling is treated the same as a Phase I at 90% of the ceiling if both are justified. Where cost hurts you is when it looks padded, when labor categories do not match the technical plan, or when indirect rates are unsupported.
The cost proposal has fixed line items: direct labor (hours by labor category × rate), fringe, overhead / indirect, materials, travel, consultants, subcontractors, other direct costs, and fee/profit. For Phase I, fee is typically capped at 7-9%. Indirect rates on a new firm are usually provisional — pick a defensible number (15-25% overhead, 10-15% G&A) and be ready to justify it if asked. See our cost proposal deep dive for the full treatment.
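As a worked example of how those line items compose, here is a minimal build-up sketch. The hours, labor rate, and fringe are illustrative assumptions; the overhead, G&A, and fee values sit inside the ranges above but are not agency figures:

```python
# Illustrative Phase I cost build-up. Hours, rates, and indirect rates
# are assumptions chosen inside the ranges discussed above.
direct_labor = 800 * 95.00                   # hours x blended hourly rate (assumed)
fringe = direct_labor * 0.25                 # assumed fringe rate
overhead = (direct_labor + fringe) * 0.20    # provisional overhead (15-25% range)
other_direct = 6_000                         # materials + travel (assumed)
subtotal = direct_labor + fringe + overhead + other_direct
g_and_a = subtotal * 0.12                    # provisional G&A (10-15% range)
total_cost = subtotal + g_and_a
fee = total_cost * 0.07                      # fee at the low end of the 7-9% cap
print(f"Cost ${total_cost:,.0f} + fee ${fee:,.0f} = price ${total_cost + fee:,.0f}")
```

Laying the build-up out this way also makes it easy to confirm that labor hours line up with the statement of work — the mismatch called out above as the thing that hurts.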
Section 7 — Letters of support
Letters of support are optional at most agencies but signal strongly when present. A letter from an actual end-user — a program office that says it has a problem this technology solves, or a paying commercial customer — can move a selectable proposal into selected. Generic letters from industry consultants or academic collaborators are weaker, and a proposal padded with six weak letters often reads worse than one with two strong letters.
The hierarchy (strongest to weakest): paying commercial customer willing to buy post-Phase-II > program office end-user with funding identified > program office end-user without funding > large industry partner with teaming agreement > academic collaborator > industry association or trade group. Write your letters to the hierarchy.
Section 8 — Supporting documents
Phase I proposals typically carry a set of supporting documents: SBC registration confirmation, bio/CV attachments, assertion of data rights, funding agreement certification, subcontract arrangements, and any agency-specific forms. These are compliance items. They do not score. They do, however, disqualify: a missing or improperly filed supporting document is the most common reason a proposal is rejected without being scored. Build a compliance checklist and run it twice before submission.
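That double-run checklist can be made mechanical with a short script. A sketch, assuming hypothetical filenames for the documents named above; the authoritative list is always the solicitation's:

```python
from pathlib import Path

# Hypothetical compliance checklist keyed to the supporting documents named
# above. Filenames are placeholders; map them to your own submission package.
REQUIRED = {
    "SBC registration confirmation": "sbc_registration.pdf",
    "Bio/CV attachments": "bios.pdf",
    "Assertion of data rights": "data_rights.pdf",
    "Funding agreement certification": "funding_cert.pdf",
    "Subcontract arrangements": "subcontracts.pdf",
}

def check_package(package_dir: str) -> list[str]:
    """Return the names of any required documents missing from the package."""
    root = Path(package_dir)
    return [name for name, fname in REQUIRED.items() if not (root / fname).exists()]

missing = check_package("./submission_package")
for name in missing:
    print(f"MISSING: {name}")
print("Package complete." if not missing else f"{len(missing)} item(s) to fix.")
```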
How reviewer attention actually distributes
A useful mental model: out of every 100 minutes a reviewer spends on a proposal, roughly 30 minutes goes to the technical volume (first page, technical approach, SOW), 15 minutes to commercialization, 10 minutes to team and bios, 10 minutes to the cost proposal for reasonableness, and the rest to section-hopping during scoring. First-time proposers often over-invest in the back half (facilities, supporting docs) and under-invest in the first page of the technical volume. That is backwards.
The practical implication: write the first page last, after the rest of the proposal is drafted, so the summary reflects what is actually in the document and hits every win theme. Put your strongest figure on page 2 or 3. Put a table — any table — in the technical approach section, because reviewers stop to read tables.
Page-count discipline
The 20-page cap is not aspirational. It is enforced. A proposal that tries to sneak 22 pages past the portal will either be rejected pre-review or have the extra pages discarded and scored with holes. Discipline starts at the outline: allocate pages per section before writing, write to the allocation, and cut aggressively. Tables and figures count toward the page limit at most agencies — check the solicitation. Margins and font sizes are prescribed (typically 1-inch margins, 11pt minimum body) and are enforced.
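The allocate-then-write step is mechanical enough to script. A sketch against the 20-page DoD cap, with per-section budgets that mirror the Section 1 table; the specific numbers are one plausible allocation, not a rule:

```python
# Page-budget check for a 20-page DoD technical volume. Budgets below are
# one plausible allocation within the ranges in the Section 1 table.
PAGE_CAP = 20
budget = {
    "Identification of problem": 2,
    "Phase I objectives": 1,
    "Statement of work": 4,
    "Related work / state of the art": 2,
    "Technical approach": 6,
    "Relationship to Phase II": 2,
    "Facilities and equipment": 1,
    "Subcontractors / consultants": 0.5,
}
allocated = sum(budget.values())
assert allocated <= PAGE_CAP, f"over budget: {allocated} vs cap {PAGE_CAP}"
print(f"Allocated {allocated} of {PAGE_CAP} pages; {PAGE_CAP - allocated} in reserve")
```

Re-run the same check against actual page counts as drafts land, and cut before the total reaches the cap.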
Frequently asked questions
How long can a Phase I technical volume be?
Most DoD Phase I technical volumes are capped at 20 pages. Civilian agencies cap at 10-15. Page count is strictly enforced.
How are Phase I proposals weighted?
Technical volume at 50-60%, then commercialization at 20-30%, then team and past performance at 10-20%.
Is the cost proposal scored?
Cost is evaluated for reasonableness, not scored. Padded or unsupported numbers hurt; a clean budget at the ceiling is fine.
Do letters of support matter?
Strong letters from actual end-users or paying customers move scores. Generic industry letters add little.
Do reviewers read every word?
No. They read the first page, skim the middle, and return to sections with a scoring rubric.
Can sections be reused across proposals?
Yes — company background, team bios, and facilities are reusable. Technical approach and topic-relevance must be custom.