
SBIR win rates by agency in 2026: real numbers

Phase I win rates across the eleven SBIR agencies, the structural factors that drive differences, and where a small AI firm should realistically compete. No agency is "easy" — but some are more legible than others.

Why win rate is the wrong single number — but still useful

"Win rate" is the first number founders ask about when they look at SBIR. It is also the number that most misleads them. Win rate is an aggregate across wildly heterogeneous topics — some receive two proposals, some receive fifty. A 15 percent aggregate rate can hide a topic where one of fifty proposals wins and another topic where two of three win. Founders who plan around the aggregate number miss the texture that actually determines their outcome.

Still, the aggregate rates are worth knowing. They establish a base rate for portfolio sizing. A firm that thinks SBIR win rate is 40 percent will submit far too few proposals. A firm that thinks it is 5 percent will burn out chasing impossible odds. The truth, across agencies, is in the low-to-mid teens, with meaningful variance — and the factors driving that variance are predictable enough to act on.

Aggregate win rate sets your portfolio math. Topic-level dynamics decide whether you win any given cycle. You need both numbers to plan.

The headline numbers

Reported Phase I win rates across the eleven SBIR agencies have been remarkably stable over the past five cycles. The table below shows approximate ranges based on SBA's annual reports to Congress, agency-published statistics, and cycle-level data. Rates are Phase I, not Phase II, and are aggregated across topics — specific topics may be much tighter or much looser.

  • DoD (Army): 12-18%. Highest topic volume in DoD (80-120 topics per cycle); competition varies widely by topic specificity.
  • DoD (Navy): 10-16%. Strong Phase III transition rate; topics tend to map to named programs of record.
  • DoD (Air Force / Space Force): 10-20%. Open Topic cycles (when active) and Phase I pitch days inflate the top of the range; traditional topics run closer to 10-12%.
  • DoD (DARPA): 8-15%. Fewer topics, more specialized; incumbents in specific research areas win disproportionately.
  • DoD (CDAO, SOCOM, MDA, DTRA, OSD): 8-14%. Smaller topic counts, narrower scope, often tied to specific program offices.
  • NIH: 14-22%. One of the highest reported rates, partly because NIH uses a more permissive "fundable" score band.
  • NSF: 15-25%. Historically the highest aggregate rate; the two-stage review (pitch, then full proposal) filters heavily before the full proposal.
  • DOE: 13-22%. Topics tied to national lab research directions; lab collaboration is a strong signal.
  • NASA: 10-17%. Fewer topics, tight coupling to mission directorates; center relationships matter.
  • DHS: 10-15%. Smaller program; component-driven topics (TSA, CBP, CISA, FEMA).
  • USDA, ED, DOT, DOC, EPA: 8-20%. Smaller SBIR programs with wide variance by cycle; less competition at some, more specialized at others.

Two honest caveats. First, these rates are agency-reported and the reporting methodology varies — "win rate" at NSF means something slightly different from "win rate" at Navy. Second, any given cycle can swing rates by three to five percentage points when funding shrinks or submission volume spikes.

What drives the differences

Topic specificity and direction

DoD topics are written by program offices and typically describe a specific operational problem with a known customer. NSF topics are broad technology areas with open invitation. NIH topics live in between. The more specific the topic, the more an incumbent or close-to-the-topic small business has an advantage, and the higher the implicit win rate for that firm. A first-time firm with no connection to the program office faces a lower effective rate than the published aggregate.

Solicitation style — directed vs. open

Directed solicitations (most DoD and civilian topics) ask for a specific capability against a specific need. Open topics (Air Force Open Topic when it runs, NSF's broad agency topics) invite the firm to propose the problem. Open topics have higher effective win rates for well-positioned firms because the firm is not fighting incumbents on the incumbent's home turf.

Review methodology

NIH runs a peer review with named reviewers, scored on impact, approach, innovation, investigator, and environment. NSF uses a pitch-plus-proposal two-stage review. DoD uses technical review boards composed of program office staff. Each has its own biases. NIH review prefers credentialed PIs and clear scientific methodology. NSF review rewards commercial potential and team credibility. DoD review rewards mission fit and transition potential. Matching your proposal style to the reviewer is a bigger lever than most first-time firms realize.

Funding level and topic count

Agencies with large budgets and many topics (DoD, NIH) see more competition per topic on average. Agencies with smaller budgets and fewer topics can look more forgiving in aggregate but also have thinner opportunity pipelines.

Incumbency effects

Every agency has firms that are "known" to the program offices. They have won before, delivered well, and their names register when a proposal crosses a reviewer's desk. Incumbents win more. That is not corruption — it is rational risk management by the government. The practical implication is that first wins are harder than subsequent wins, and a firm's fifth proposal to the same agency has materially different odds than its first.

The topic-level reality

Under the aggregate, topics behave very differently. A sample of recent cycles gives a feel:

  • A specific Army topic on predictive maintenance for rotary-wing aviation received 42 proposals and awarded 3 — effective rate 7%.
  • A Navy topic on a niche electromagnetic spectrum application received 6 proposals and awarded 2 — effective rate 33%.
  • An NSF Phase I topic in the broad AI space received over 400 pitch submissions with a funded Phase I cohort of 45 — effective rate approximately 11%.
  • An NIH topic at NCI on imaging biomarkers received 18 proposals with 4 awards — effective rate 22%.

The variance across topics within a single agency is larger than the variance between agencies. Which means: agency selection matters less than topic selection within an agency. The firm that reads every topic in its capability area, identifies the three that will receive the fewest high-quality proposals, and writes tight proposals to those three will outperform the firm that writes one proposal to every agency.
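
The effective rates in the sample above are simple ratios of awards to proposals. A quick sketch, ranking the sample topics by odds (topic labels shortened; counts taken from the bullets, with the NSF pitch count rounded to 400):

```python
# (proposals, awards) for the sample topics above
topics = {
    "Army predictive maintenance": (42, 3),
    "Navy EM spectrum niche": (6, 2),
    "NSF broad AI (pitches)": (400, 45),
    "NIH NCI imaging biomarkers": (18, 4),
}

# Rank by effective rate, best odds first
for name, (proposals, awards) in sorted(
    topics.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True
):
    print(f"{name}: {awards}/{proposals} = {awards / proposals:.0%}")
```

The spread — 7% to 33% within a handful of topics — is exactly the within-agency variance the paragraph above describes.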

Phase II conversion is where the money is

Phase I win rates get the attention, but Phase II conversion rates — the probability that a Phase I winner gets a Phase II — are higher and matter more for revenue. Across agencies, Phase II conversion typically ranges from 40 to 60 percent. DoD and NIH are on the higher end. The conversion is not automatic — you must apply for Phase II and submit on time — but conditional on a clean Phase I delivery and a credible commercialization plan, the rate is much higher than Phase I.

The practical implication: portfolio math should be built on Phase I-to-Phase II-to-revenue, not Phase I in isolation. A firm that wins two Phase Is at $150K each and converts both to Phase IIs at $1M each has very different economics than a firm that wins three Phase Is and converts none.
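
The two-firm comparison in that last sentence, using the paragraph's illustrative award sizes ($150K Phase I and $1M Phase II; actual ceilings vary by agency and year):

```python
PHASE_I_AWARD = 150_000     # illustrative Phase I size from the text
PHASE_II_AWARD = 1_000_000  # illustrative Phase II size from the text

def sbir_revenue(phase_i_wins: int, phase_ii_conversions: int) -> int:
    """Total award revenue for a given Phase I / Phase II outcome."""
    return phase_i_wins * PHASE_I_AWARD + phase_ii_conversions * PHASE_II_AWARD

firm_a = sbir_revenue(2, 2)  # two Phase Is, both convert
firm_b = sbir_revenue(3, 0)  # three Phase Is, none convert
print(f"Firm A: ${firm_a:,}   Firm B: ${firm_b:,}")
```

Firm A ends up with roughly five times Firm B's revenue despite winning fewer Phase Is, which is why conversion, not Phase I count, drives the economics.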

Portfolio sizing: how many proposals per cycle

Given a 10-15% agency-aggregate rate and treating each proposal as an independent draw, the at-least-one-win probability 1 - (1 - p)^n suggests:

  • 3 proposals: ~27-39% probability of at least one win. Too few to plan around.
  • 6 proposals: ~47-62% probability of at least one win. Minimum viable for a firm that needs SBIR revenue.
  • 10 proposals: ~65-80% probability of at least one win. The right floor for a firm treating SBIR as its main revenue path.
  • 20+ proposals: ~88-96%, with diminishing returns on marginal quality. The portfolio benefits if you can maintain quality, but each marginal proposal is harder to write well.
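
These probabilities follow from the standard at-least-one-success formula, treating each proposal as an independent draw at the aggregate rate — a simplification that ignores quality differences and topic selection, both of which shift your true odds. A minimal sketch:

```python
def p_at_least_one(p: float, n: int) -> float:
    """Probability of at least one win from n independent proposals,
    each with per-proposal win probability p."""
    return 1 - (1 - p) ** n

# Bracket the 10-15% aggregate Phase I rate
for n in (3, 6, 10, 20):
    low, high = p_at_least_one(0.10, n), p_at_least_one(0.15, n)
    print(f"{n:2d} proposals: {low:.0%}-{high:.0%} chance of at least one win")
```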

The bottleneck is almost never submission capacity — it is proposal quality. A firm that can write one excellent proposal per week can sustain 12 per cycle with a small authoring team. A firm that outsources proposal writing and cuts quality to scale volume usually sees its aggregate win rate drop below the published rate.

Where AI firms should compete in 2026

For a small AI firm with real technical depth but no federal incumbency, the highest-leverage agencies in 2026 are:

  • Army — volume, predictable cadence, xTechSearch alongside DSIP. Predictive maintenance, logistics, and intelligence processing are warm topic areas.
  • Navy — volume plus the strongest Phase III track record. NAVWAR, NAVAIR, and ONR all buy AI with follow-on funding in play.
  • NIH — permissive peer review and generous Phase II. Institute-specific SBIR topics are published and often less crowded than headline DoD topics.
  • NSF — pitch-plus-proposal structure means wasted effort is capped. Commercialization focus favors founders who can talk about markets as well as code.
  • DOE — national lab collaboration is a strong signal. Topics are technical and reward genuine engineering depth.

Where reported rates mislead

A few traps to avoid:

  • Agency rate is an average over many topics. Your actual rate depends on the specific topics you choose.
  • First-time firms face a lower effective rate. Incumbency effects are real. The aggregate includes established firms and inflates the implicit first-proposal rate.
  • Quality matters more than count. Three excellent proposals beat twelve mediocre ones on every dimension — win rate, reviewer feedback, and agency reputation.
  • Phase II is where the math works. Phase I dollars are too small to sustain a firm. Phase II conversion rate is the real number to plan around.
  • Reauthorization cycles reshape rates. Lapses, continuing resolutions, and new set-asides (like the strategic initiative set-aside) change the denominator year to year (see our reauthorization breakdown).

What this means for planning

A realistic first-year SBIR plan for a small AI firm looks something like:

  • Pick two or three target agencies based on mission fit and existing past performance.
  • Plan to submit 8 to 12 Phase I proposals in the first year across those agencies.
  • Expect 1 to 2 Phase I wins. More is upside; zero is a signal to re-evaluate, not to quit.
  • Budget for 3 to 6 months of Phase I work on each win, with Phase II application in parallel.
  • Aim for 40-50% Phase II conversion on your Phase I wins. Below that, look hard at what the reviewers said.
  • By the end of year two, the firm should be on its second or third Phase II, with Phase III discussions open with at least one agency.
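
As a sanity check on this plan, a rough expected-value model with midpoint assumptions (10 proposals, a 12.5% Phase I rate, 45% Phase II conversion, and illustrative award sizes; every number here is an assumption, not program data):

```python
n_proposals = 10
p_phase_i = 0.125          # midpoint of the 10-15% aggregate rate
p_phase_ii = 0.45          # midpoint of the 40-50% conversion target
phase_i_award = 150_000    # illustrative Phase I size
phase_ii_award = 1_000_000 # illustrative Phase II size

expected_phase_i_wins = n_proposals * p_phase_i               # 1.25
expected_phase_ii_wins = expected_phase_i_wins * p_phase_ii   # ~0.56
expected_revenue = (expected_phase_i_wins * phase_i_award
                    + expected_phase_ii_wins * phase_ii_award)

print(f"Expected Phase I wins: {expected_phase_i_wins:.2f}")
print(f"Expected Phase II wins: {expected_phase_ii_wins:.2f}")
print(f"Expected award revenue: ${expected_revenue:,.0f}")
```

The expectation of roughly one Phase I win is consistent with the "1 to 2 wins, zero is a signal to re-evaluate" guidance above.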

Bottom line

SBIR win rates in 2026 sit in the low-to-mid teens on aggregate, with meaningful differences across agencies and much larger differences across topics. The firms that win consistently treat SBIR as a disciplined portfolio: sensible agency selection, careful topic selection, high proposal quality, and a Phase II-to-Phase III plan from day one. The firms that lose treat it as a lottery. The difference is not talent. It is discipline.

Frequently asked questions

What is a typical SBIR Phase I win rate in 2026?

Across the eleven participating agencies, Phase I rates range from roughly 8% to 25%. The federal average has sat in the low-to-mid teens for years. Topic-level rates can be much tighter or much looser.

Which agency has the highest SBIR win rate?

NSF and DOE historically report the highest aggregate rates. But the aggregate is misleading — a broad DoD topic drawing 50 proposals is harder than a narrow NSF topic drawing 20. Topic-level selection matters more.

What is the DoD SBIR win rate?

DoD component rates run roughly 10-18% depending on component and cycle. Army and Navy see the most volume. Air Force Open Topic rates can be higher; directed topics are comparable to the rest of DoD.

Does win rate improve after you win once?

Yes, substantially. Second-time proposers see materially higher rates than first-time proposers at most agencies. Phase II conversion rates (40-60%) are dramatically higher than Phase I.

How many proposals should a small AI firm submit per cycle?

Enough to survive the base rate. At a 10-15% Phase I rate, three independent proposals give roughly a 27-39% chance of at least one win; eight to twelve push that to roughly 57-86%. First-year firms chronically under-submit.

Is it better to submit to one agency repeatedly or spread across agencies?

Concentrate. Agency incumbency compounds — winning twice with the same program office builds a relationship that a single proposal across ten agencies never does. Two or three target agencies is the right diversification, not eleven.
