
DoD SBIR 26.1: how to pick a topic from 115 options

A working rubric for reading, scoring, and filtering the 115 topics DoD released on April 13, 2026. Designed for the small business that has a week to decide and a month to write.

Why this matters right now

DoD SBIR solicitation 26.1 opened on April 13, 2026, the same day the reauthorization bill was signed into law. The release contains 115 topics spread across Army, Navy, Air Force, Space Force, DARPA, DTRA, MDA, and other DoD components. Topics are listed on the Defense SBIR/STTR Innovation Portal (DSIP). The window is compressed because the lapse pushed what would have been a late-2025 or early-2026 solicitation into mid-April, and close dates follow the normal rhythm from there.

If you try to read 115 topics the way you read three, you will drown. This post is the read-and-score process I use. It is opinionated. It is also not fancy — the whole thing fits in a spreadsheet, and a good first pass takes four to six hours.

The question is not "which topic is interesting." It is "which topic is interesting and written for a firm that looks like mine."

How to read a DoD topic

Every DoD topic follows roughly the same shape. Learn the shape and you can skim fast.

Objective

One to three sentences at the top. States the technical outcome the sponsor wants. If the objective is fuzzy ("improve capability in area X"), the topic is often under-specified — that can be an opportunity or a trap depending on whether you have direct access to the sponsor to clarify scope.

Description

The longest block. Explains the operational problem, prior attempts, why those failed or stopped short, and what the sponsor believes a successful solution looks like. Read it paragraph by paragraph. Underline every concrete technical requirement (data rates, accuracy thresholds, platform constraints, environment conditions). These are the rubric the evaluator will use, whether the sponsor admits it or not.

Phase I deliverables

Usually a feasibility study, sometimes a prototype at TRL 3 or 4, always a final report. Look for the phrase "demonstrate feasibility" vs. "build a prototype" — the former lets you lean on paper analysis, the latter forces you to budget for hardware or compute.

Phase II expectations

The section people skip. Don't. Phase II language tells you whether the sponsor has actual transition intent. Concrete transition language ("will integrate with Program of Record X," "the PEO has identified this as a gap," "anticipated Phase II ceiling of $Y") signals a real pipeline. Vague transition language ("potential integration with future systems") is a warning sign — the topic exists, but the money and the program manager to sustain Phase II may not.

References

The reading list at the bottom. A small but real signal. If the references are to in-house AFRL/ARL/NRL reports from the last 24 months, a specific government researcher is almost certainly behind the topic, and your proposal needs to respect what they have already found. If the references are all commercial sources, the topic is more open.

The scoring rubric

Score every candidate topic on five axes. Two thresholds rule most decisions — below a threshold, the topic is a no regardless of how interesting it is.

  • Technical fit (max 25) — Can you point to specific past work, team members, or prior outputs that directly map to this topic's Phase I deliverables? If you have to hand-wave, it is not a fit. This axis is a hard gate.
  • Transition viability (max 25) — Does the topic name a Program of Record, PEO, or identified program office? Is the Phase II language concrete or vague? Can you tell who the eventual customer is?
  • Dual-use potential (max 20) — Does the technology have a defensible commercial market in addition to the defense use case? Dual-use improves both Phase II scoring and your business resilience if Phase III stalls.
  • Competitive density (max 15) — How many small businesses are likely to submit here? Broad topics (generic LLM, generic cyber) draw hundreds. Narrow topics with specific domain depth draw tens. Narrow is usually better.
  • Resource fit (max 15) — Can you deliver Phase I inside the ceiling and the period of performance without buying equipment or hiring? If Phase I requires GPU clusters you do not have budgeted, the topic gets discounted.

A topic under 60 out of 100 drops out. A topic under 15 on technical fit drops out regardless of total. The remaining set is your shortlist.
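The rubric and its two drop rules fit in a few lines of code as easily as in a spreadsheet. A minimal sketch — the axis maxima and thresholds are the ones above; the topic numbers and scores are made-up examples:

```python
from dataclasses import dataclass

@dataclass
class Score:
    topic: str
    technical_fit: int   # /25, hard gate
    transition: int      # /25
    dual_use: int        # /20
    competition: int     # /15
    resources: int       # /15

    @property
    def total(self) -> int:
        return (self.technical_fit + self.transition + self.dual_use
                + self.competition + self.resources)

    def keep(self) -> bool:
        # Hard gate: technical fit under 15 drops the topic regardless
        # of total. Otherwise the total must reach 60.
        return self.technical_fit >= 15 and self.total >= 60

scored = [
    Score("A26-001", 22, 18, 15, 10, 12),  # total 77 -> shortlist
    Score("A26-002", 12, 25, 20, 15, 15),  # total 87, but fit 12 -> drop
    Score("A26-003", 16, 10, 8, 8, 10),    # total 52 -> drop
]
shortlist = sorted((s for s in scored if s.keep()),
                   key=lambda s: s.total, reverse=True)
print([s.topic for s in shortlist])  # -> ['A26-001']
```

Note the second example: a near-perfect score on every other axis cannot rescue a topic that fails the technical-fit gate.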

Filtering 115 down to 3

You will not manually score 115 topics. You should not try. The filtering is a funnel.

Pass 1 — Keyword exclusion (30 minutes)

Pull the DSIP topic list as a table. Mark every topic where the title and objective use language that is outside your technical center of gravity. If your team builds LLM systems, topics on hypersonic materials or undersea acoustics are out. This is almost always 50 to 70 percent of topics.
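Pass 1 is mechanical enough to script. A minimal sketch, assuming your DSIP export is a list of records with title and objective fields — the field names and the keyword lists here are illustrative, not from the solicitation:

```python
# Keywords that define your technical center of gravity (illustrative
# for an LLM-focused team) and domains that are clearly out of scope.
IN_SCOPE = {"llm", "language model", "machine learning", "nlp", "agent"}
OUT_OF_SCOPE = {"hypersonic", "undersea", "acoustic", "propellant"}

def survives_pass1(topic: dict) -> bool:
    """Keep a topic only if its title/objective text touches our center
    of gravity and avoids clearly foreign domains."""
    text = (topic["title"] + " " + topic["objective"]).lower()
    if any(kw in text for kw in OUT_OF_SCOPE):
        return False
    return any(kw in text for kw in IN_SCOPE)

topics = [
    {"title": "LLM triage for cyber logs",
     "objective": "Apply language models to log exploitation."},
    {"title": "Hypersonic leading-edge materials",
     "objective": "Novel thermal protection approaches."},
]
shortlist = [t for t in topics if survives_pass1(t)]
print([t["title"] for t in shortlist])  # -> ['LLM triage for cyber logs']
```

Crude keyword matching will misfile a few topics, which is fine — the point of Pass 1 is to cut volume cheaply, and Pass 2's objective read catches the stragglers.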

Pass 2 — Objective read (2 to 3 hours)

For the 30 to 60 topics that survive keyword exclusion, read the one-paragraph objective. Mark the ones where your first reaction is "I know exactly who on the team would work on this." Be honest with yourself. If you hesitate, it is a no.

Pass 3 — Full score (2 to 3 hours)

For the 10 to 15 topics that survive objective read, read the full description, Phase I deliverables, Phase II language, and references. Score each on the rubric. Sort by total score. The top three to five become your write list.

Pass 4 — Gut check

For the top candidates, ask three questions:

  • Who on my team signs the technical volume and presents at a kickoff?
  • Do I have a plausible Phase II story that ends with a named program office buying?
  • If I do not win this, do I regret the time?

If any answer is weak, drop the topic.

Rule of thumb

Three focused submissions beat twelve diluted ones.

Per-firm caps in the 2026 reauthorization explicitly penalize high-volume submitters. Evaluators read dozens of proposals per topic — the ones that win are the ones where the team's fit is obvious in the first page. Dilution is expensive.

Common mistakes

Four mistakes account for most of the bad submissions I have seen or written in past cycles.

1. Chasing topics outside your technical center

The topic sounds cool. The team has zero direct experience. You tell yourselves you can learn it in Phase I. Evaluators have read hundreds of proposals in that area and can detect unfamiliarity within the first technical paragraph. Save the effort for a topic you already know.

2. Assuming the listed PoC is the topic author

DoD topics are often drafted by a technical lead and routed through a contracting office. The listed point of contact may be a KO, a program manager, or a technical representative who did not write the text. Pre-solicitation questions should be specific enough that you learn from the answer regardless of who responds.

3. Writing for the topic text instead of the topic intent

The description is a snapshot of what the sponsor wrote months ago. The evaluator reads your proposal with what the sponsor knows today. Your proposal should address the text exactly and then add one or two paragraphs that demonstrate you understand where the problem has moved since the topic was written. That signals domain depth.

4. Burying the transition story

Phase I wins are decided on feasibility; Phase II wins are decided on transition. A Phase I proposal that does not articulate a credible Phase II transition path is starting the race 20 points behind. Put the Phase III commercial path in the first two pages. Do not bury it in the back.

The 30-day sprint from topic pick to submission

If you finish scoring this weekend, here is a defensible four-week sprint.

  • Week 1: Final topic selection. One-page technical outline for each of three topics. Pre-solicitation questions submitted where DSIP allows.
  • Week 2: First draft of technical volume for topic #1. Begin technical volume for topic #2. Cost volume draft for topic #1.
  • Week 3: Technical volume drafts complete for all three. Internal red-team review. Resolve any open questions from topic PoCs. Cost volumes complete.
  • Week 4: Final edits. Compliance matrix check against solicitation. Formatting pass. Submission through DSIP at least 48 hours before deadline (portals fail at the wire).

Four weeks is tight but not unrealistic for a disciplined small business with two to three domain experts and an administrative lead. It is unrealistic if you are starting from zero on topic research during week one.

Where AI/ML topics cluster in 26.1

Without naming specific topic numbers (check DSIP for the live list), AI/ML-heavy topics in recent DoD cycles have clustered in five areas. Expect 26.1 to look similar:

  • Agentic systems for decision support. Mission planning, logistics, C2. Often DARPA or service-level AI shops.
  • Computer vision and ISR exploitation. Overhead, airborne, ground. Usually Air Force, Space Force, or Army.
  • Autonomy and swarm control. Navy and Air Force have led here for three cycles.
  • Cybersecurity LLM applications. Threat triage, code analysis, log exploitation. Cross-service.
  • Model assurance and evaluation. Red-teaming, adversarial robustness, prompt injection. Growing share every cycle.

Bottom line

115 topics is a lot. Most of them are not for you. The ones that are for you are usually obvious within the first sixty seconds of reading the objective. Build the funnel, do the scoring, write three strong proposals. A disciplined small business that submits three well-matched proposals per cycle will outperform a firm that submits fifteen.

Frequently asked questions

How many topics are in DoD SBIR 26.1?

115 topics released April 13, 2026 across Army, Navy, Air Force, Space Force, DARPA, DTRA, and other DoD components. Topics are hosted on DSIP.

How many topics should a small business submit to?

For most small businesses, three focused submissions beat twelve diluted ones. Per-firm caps in the 2026 reauthorization explicitly penalize high-volume submitters.

Is the topic point of contact always the person who wrote the topic?

Not always. DoD topics are often written by a technical lead and routed through a contracting shop. The listed point of contact may be a KO, a program manager, or a technical representative. Write your pre-solicitation questions so you learn something useful regardless of who responds.

What TRL is DoD SBIR 26.1 looking for?

Phase I typically targets TRL 2 to 4. Phase II targets TRL 4 to 6. Each topic states its own TRL expectations — read them.

What is dual-use and why does it matter?

Dual-use means the technology has a defensible commercial market in addition to the defense use case. Dual-use topics are scored more favorably on Phase II transition and Phase III commercialization criteria, and dual-use firms are more resilient if a defense program office pulls back.

Can I submit to the same topic twice with different technical approaches?

No. One firm, one proposal per topic. If your team has two competing approaches, pick the stronger one before submission.


Got a topic picked?

We co-write the technical volume on a small number of proposals per cycle. Send the topic number and where you are strong.

Start the conversation
UEI Y2JVCZXT9HP5 · CAGE 1AYQ0 · NAICS 541512 · SAM.gov active