The pre-release window: what it is and why it matters
Every DoD SBIR solicitation moves through three stages. First, topics are pre-released: made public for review and TPOC outreach, typically three to four weeks before the solicitation opens. Second, the solicitation opens and proposals can be submitted, usually within a 30-day window. Third, the solicitation closes and proposals go into review. Most firms that lose at SBIR lose it in the second stage, under the clock, writing against topics they first read the week the solicitation opened. Firms that win concentrate their effort in the first stage, before the clock starts.
The structural reason is simple. During pre-release, TPOCs can be contacted directly. They will take a three-sentence question by email and send you a three-paragraph answer about what the topic is really looking for. That exchange is gold. It tells you what the program office cares about, what approaches have already been tried, what reviewers will look for, and often which topics are "warm" (real customer pull) versus which are placeholders. Once the solicitation opens, that direct channel closes. Questions must go through the formal DSIP Q&A system and get sanitized answers that every proposer sees. The information advantage evaporates.
DSIP: navigating topic lists
DSIP, the Defense SBIR/STTR Innovation Portal at dodsbirsttr.mil, is the official site for DoD SBIR topics and submissions. The topic list view, filtered by component, keyword, and status, is where scouting begins. At pre-release, export the full topic list for the components you care about. For an AI firm, that usually means Army (DEVCOM and the PEOs), Navy (NAVAIR, NAVSEA, ONR), Air Force (AFRL and AFWERX, tracked separately), DARPA, SOCOM, CDAO, and OSD. Each component publishes between 10 and 120 topics per cycle.
Inside each topic listing, the fields that matter are: the topic number (tracks across cycles — repeat topic numbers signal agency commitment), the TPOC name and email, the technical description, the Phase I and Phase II objectives, and the references section. The references section is often ignored and is frequently the most informative part of the topic — it tells you what the program office has already funded, what papers they consider foundational, and what terminology they expect proposers to use.
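To make the export step concrete, here is a minimal sketch of loading and filtering a pre-release topic export; the CSV column names, components, and keywords are illustrative assumptions, since the actual DSIP export layout varies by cycle.

```python
# Minimal sketch: represent and filter a DSIP topic export.
# The CSV column names below are illustrative assumptions; map them to
# whatever the actual export uses in a given cycle.
import csv
from dataclasses import dataclass

@dataclass
class Topic:
    number: str        # tracks across cycles; repeats signal agency commitment
    component: str     # e.g., "Army", "Navy", "AFRL"
    title: str
    tpoc_name: str
    tpoc_email: str
    description: str
    references: str    # often the most informative field

def load_topics(path: str) -> list[Topic]:
    with open(path, newline="", encoding="utf-8") as f:
        return [
            Topic(
                number=row.get("Topic Number", ""),
                component=row.get("Component", ""),
                title=row.get("Title", ""),
                tpoc_name=row.get("TPOC Name", ""),
                tpoc_email=row.get("TPOC Email", ""),
                description=row.get("Description", ""),
                references=row.get("References", ""),
            )
            for row in csv.DictReader(f)
        ]

def scout(topics: list[Topic], components: set[str], keywords: list[str]) -> list[Topic]:
    """Keep topics from the chosen components that mention any keyword."""
    kws = [k.lower() for k in keywords]
    return [
        t for t in topics
        if t.component in components
        and any(k in (t.title + " " + t.description).lower() for k in kws)
    ]

# Example: shortlist Army and Navy topics touching anomaly detection or autonomy.
# hits = scout(load_topics("prerelease_topics.csv"), {"Army", "Navy"},
#              ["anomaly detection", "autonomy"])
```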
DSIP also publishes a Q&A thread per topic once the solicitation opens. Reading the Q&A thread for the previous cycle's version of the same topic (when one exists) is an undervalued research move: it shows what proposers asked clarifying questions about and which misconceptions the program office corrected.
AFWERX open topics vs solicitation topics
The Air Force runs two SBIR programs that function very differently. AFWERX Open Topic — when active — invites firms to propose their own problem aligned to an Air Force or Space Force need. Solicitation topics are written by program offices and published through the standard DoD SBIR cycle. Both are accessible through afwerx.com and DSIP.
AFWERX Open Topic rules and availability have changed several times over the past few cycles. Current practice as of 2026 includes structured Open Topic calls under specific operational themes (e.g., autonomy, networking, spectrum) rather than fully unconstrained open topics. The AFWERX portal publishes active calls, eligibility rules, and sample winning pitches. Open Topic proposals are typically shorter than traditional SBIR proposals and use a pitch-plus-proposal two-stage format similar to NSF, which lowers the cost of attempting a proposal.
For a small AI firm, AFWERX Open Topic (when active) is the single most accessible front door to DoD because the firm controls the problem framing. The win rate is harder to estimate because the structure keeps changing, but it has historically been in the 10 to 20 percent range for advancing past the pitch stage. The trade-off is that Phase III transition paths are more variable: a traditional solicitation topic has a named program office pulling for the capability, while an Open Topic winner must cultivate that customer during Phase I.
SAM.gov forecast for civilian SBIR
Civilian SBIR agencies do not use DSIP. NIH uses eRA Commons and its own Program Announcements; NSF uses its own Project Pitch process; DOE uses the SBIR Exchange. But SAM.gov is the cross-agency forecast ground truth. The SAM.gov opportunity forecast flags upcoming SBIR solicitations months before release — usually before the agency's own portal publishes them. For civilian agencies with less predictable cycles than DoD, the SAM.gov forecast is the earliest useful signal.
The practical workflow: set saved searches on SAM.gov for "SBIR" filtered by the agencies you care about (HHS, NSF, DOE, etc.), and review the forecast list weekly. Pair the forecast with each agency's program announcement page. NIH publishes standard SBIR Omnibus solicitations plus institute-specific topics. NSF publishes topic categories that update annually. DOE publishes annual SBIR solicitations with clearly enumerated technical topics.
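For firms that want to automate the weekly review, a minimal sketch follows, assuming SAM.gov's public Get Opportunities search API (v2). The endpoint, parameter names, and response fields should be verified against current SAM.gov API documentation, and an API key tied to your SAM.gov account is required.

```python
# Sketch: weekly poll of SAM.gov for SBIR-related opportunity notices.
# Assumes the public "Get Opportunities" v2 search endpoint; confirm the
# endpoint, parameter names, and response shape against current SAM.gov docs.
import datetime as dt
import requests

SEARCH_URL = "https://api.sam.gov/opportunities/v2/search"  # verify against docs

def fetch_sbir_notices(api_key: str, days_back: int = 7) -> list[dict]:
    today = dt.date.today()
    params = {
        "api_key": api_key,
        "title": "SBIR",  # keyword filter; pair with agency filters as needed
        "postedFrom": (today - dt.timedelta(days=days_back)).strftime("%m/%d/%Y"),
        "postedTo": today.strftime("%m/%d/%Y"),
        "limit": 100,
    }
    resp = requests.get(SEARCH_URL, params=params, timeout=30)
    resp.raise_for_status()
    # "opportunitiesData" is the documented results key at the time of writing;
    # adjust if the response schema has changed.
    return resp.json().get("opportunitiesData", [])

if __name__ == "__main__":
    for notice in fetch_sbir_notices(api_key="YOUR_KEY"):
        print(notice.get("postedDate"), notice.get("title"))
```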
Topic quality signals
Not every topic is worth pursuing. The signals that indicate a topic is worth a serious proposal effort are learnable and consistent across cycles.
Specificity. A topic that names a program, a platform, and a specific technical problem (e.g., "anomaly detection for F-35 propulsion health monitoring") will attract fewer proposals than a broad topic ("AI for aviation maintenance"). Narrow topics are easier to win if you are aligned to them; broad topics have lower effective win rates because every vaguely qualified firm submits.
TPOC responsiveness. Email the TPOC during pre-release. If they reply substantively within a week, the topic has real program pull. If they reply with a one-line "please review the topic statement" or do not reply at all, the topic is weaker: likely a placeholder or a diffuse interest without a strong internal champion.
Capability alignment. Does your team's actual delivered work match the topic's technical core, not just the topic's keywords? Agencies are experienced at spotting firms that learned the topic vocabulary last week. A topic about acoustic classification will select for firms with acoustic classification past work, not firms with generic ML past work and a copied glossary.
Recurrence. If the topic number or an obvious topic twin appeared in the previous one or two cycles, the agency is signaling durable interest. Even if the topic did not get a good proposal last cycle, the program office is still looking. Recurring topics often have better Phase II conversion because the program office has already internalized the need.
Award size-to-scope ratio. A Phase I capped at $250K with a scope written for $750K is a warning. The winning firm will over-promise and under-deliver, the program office will be frustrated, and Phase II will be awkward. A topic where the scope is sized to the award is a better setup for a successful execution and conversion.
On-time award history. Some agencies award Phase I on time, some slip. Navy, Army, and Air Force are generally reliable. Some civilian agency cycles have slipped 4-6 months on Phase I award. For a small firm living on Phase I cash flow, award timing is not a detail — it is a planning input.
The TPOC outreach play
The single most valuable move in the scouting cycle is the TPOC email during pre-release. Most proposers do not send it. The ones who do get an information advantage that cannot be replicated once the solicitation opens.
The pattern that works: a short, specific email — four to six sentences — that introduces the firm in one sentence, shows you read the topic closely in two sentences, asks a specific clarifying question or two, and proposes a written reply (not a call). The specificity is what earns the reply. Generic "can you tell me more about the topic" emails get generic replies. "The topic references X approach, but approach Y might better address the degraded GPS use case mentioned in the objectives — is the topic open to proposals built on Y?" gets a substantive reply.
Critical: ask for a written reply, not a phone call. TPOCs are busy, fair procedures require equal access, and a written exchange can be referenced later in the proposal. A phone call puts the TPOC in an awkward position, does not produce a paper trail, and often yields less useful information than a written exchange.
Log every TPOC interaction: who, when, what was asked, and what was answered. That log becomes the raw material for the proposal's approach section and for the transition narrative.
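A rough sketch of both pieces, purely illustrative: a fill-in template that follows the four-to-six-sentence pattern above, and a minimal structure for the interaction log. None of the names, topic numbers, or field choices here are prescribed by any agency.

```python
# Sketch: TPOC outreach template and interaction log.
# The template wording, topic number, and log fields are illustrative only.
from dataclasses import dataclass
from datetime import date

EMAIL_TEMPLATE = """Subject: Pre-release question on topic {topic_number}

Dear {tpoc_name},

{firm_name} builds {one_line_capability}. Topic {topic_number} references
{referenced_approach} for {stated_objective}; our read is that {your_reading}.
{specific_question} A brief written reply would be greatly appreciated.

Thank you,
{sender_name}
"""

@dataclass
class TpocInteraction:
    topic_number: str
    tpoc_name: str
    date_sent: date
    question: str
    reply_summary: str = ""   # fill in when the reply arrives

# Example: render a draft and open the log entry (all values invented).
draft = EMAIL_TEMPLATE.format(
    topic_number="N252-001",
    tpoc_name="Dr. Example",
    firm_name="Acme AI",
    one_line_capability="acoustic classification models for maritime sensors",
    referenced_approach="approach X",
    stated_objective="the degraded-GPS use case",
    your_reading="approach Y may better address that objective",
    specific_question="Is the topic open to proposals built on Y?",
    sender_name="A. Founder",
)
log = [TpocInteraction("N252-001", "Dr. Example", date.today(),
                       question="Openness to approach Y")]
```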
Topic scoring for a shortlist
By the end of pre-release, a serious scouting effort has reviewed 100 to 400 topics across the relevant components and reduced them to a shortlist of 10 to 20 for serious consideration. A workable rubric scores each shortlisted topic 1 to 5 on six axes, with 5 always the favorable end (so low competition density and low effort cost score high):
- Capability fit — how well your actual past work maps to the topic core.
- TPOC signal — based on pre-release outreach quality.
- Topic specificity — narrow topics score higher.
- Phase III visibility — is there a named customer and a plausible transition path.
- Competition density — your estimate of how many serious firms will submit.
- Effort cost — proposal effort required, including any teaming or past performance assembly.
Sum the scores and rank. A total above 20 is a strong "yes, write this." A total of 15-20 is "maybe, write if capacity allows." Below 15, skip. A first-year firm running a 10-proposal portfolio should not need to dip below 18 in the pool; if the pool does not hold 10 topics above 18, the portfolio should shrink rather than be padded with weaker entries.
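A minimal sketch of the rubric as code, using the six axes and the thresholds just described; the topic numbers and scores in the example are invented for illustration.

```python
# Sketch: score pre-release topics on the six axes above (1-5 each, max 30)
# and sort into write / maybe / skip bins. Higher is better on every axis,
# so low competition density and low effort cost score high.
AXES = ("capability_fit", "tpoc_signal", "specificity",
        "phase3_visibility", "competition", "effort_cost")

def total(scores: dict[str, int]) -> int:
    assert set(scores) == set(AXES) and all(1 <= v <= 5 for v in scores.values())
    return sum(scores.values())

def decision(score: int) -> str:
    if score > 20:
        return "write"      # strong yes
    if score >= 15:
        return "maybe"      # write if capacity allows
    return "skip"

# Illustrative candidates only; real scores come from your own scouting notes.
candidates = {
    "A254-017": {"capability_fit": 5, "tpoc_signal": 4, "specificity": 4,
                 "phase3_visibility": 4, "competition": 3, "effort_cost": 3},
    "N254-102": {"capability_fit": 2, "tpoc_signal": 2, "specificity": 3,
                 "phase3_visibility": 2, "competition": 3, "effort_cost": 4},
}
shortlist = sorted(((total(s), t) for t, s in candidates.items()), reverse=True)
for score, topic in shortlist:
    print(topic, score, decision(score))
```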
Managing a portfolio of 8-15 topics simultaneously
Writing proposals is not the bottleneck most first-year firms expect. Coordination is. Managing 10 topics in parallel means tracking 10 TPOCs, 10 past performance assemblies, 10 cost proposals, and 10 compliance checklists. A shared tracker with status per topic (scouting / TPOC engaged / drafting / reviewed / submitted), owner, and deadline is mandatory. Without it, proposals slip on small mistakes — missed certifications, wrong page counts, expired SAM.gov registrations — that have nothing to do with proposal quality.
The proposal production cadence that works: weeks 1-2, draft the technical narrative; week 3, draft the cost proposal and assemble past performance; week 4, internal review and compliance check; week 5, final edits and submission. A five-week cadence only fits a 30-day submission window if drafting starts during pre-release. For 10 proposals, that means staggering starts, keeping roughly three proposals in flight at any given time, with the earliest submitting in week 4 and the last in week 5.
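A minimal sketch of the tracker and the stagger, under the assumption of a two-week pre-release head start and a four-week minimum runway per proposal; the field names, statuses, and dates are illustrative, not a prescribed format.

```python
# Sketch: one tracker row per topic plus staggered draft-start dates so roughly
# `batch_size` proposals are in flight at once. All values are illustrative.
from dataclasses import dataclass
from datetime import date, timedelta

STATUSES = ("scouting", "tpoc_engaged", "drafting", "reviewed", "submitted")

@dataclass
class TrackerRow:
    topic_number: str
    status: str          # one of STATUSES
    owner: str
    deadline: date       # solicitation close date

def staggered_starts(open_date: date, close_date: date, n_proposals: int,
                     batch_size: int = 3, min_runway_days: int = 28) -> list[date]:
    """Assign draft-start dates in batches so every proposal keeps at least
    `min_runway_days` before close. Earlier batches begin during pre-release."""
    first = open_date - timedelta(days=14)               # pre-release head start
    last = close_date - timedelta(days=min_runway_days)  # latest safe start
    n_batches = -(-n_proposals // batch_size)            # ceiling division
    step_days = max((last - first).days // max(n_batches - 1, 1), 0)
    return [first + timedelta(days=step_days * (i // batch_size))
            for i in range(n_proposals)]

# Example: 10 proposals against a solicitation open June 1 and closing July 1.
close = date(2026, 7, 1)
starts = staggered_starts(date(2026, 6, 1), close, 10)
tracker = [TrackerRow(f"TOPIC-{i:02d}", "scouting", "founder", close)
           for i, _ in enumerate(starts, 1)]
for row, start in zip(tracker, starts):
    print(row.topic_number, "start drafting", start, "submit by", row.deadline)
```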
The team pattern that scales: one proposal lead (writes the technical narrative), one reviewer (reads for compliance and clarity), and one coordinator (manages submissions, certifications, and cost). A solo founder can do all three at 3-4 proposals per cycle; at 8+, the coordinator role needs a dedicated pair of hands.
Bottom line
SBIR wins are manufactured in pre-release. The firms that know this invest in topic scouting, TPOC outreach, and scoring before the solicitation opens. The firms that do not invest there end up under the clock, writing less competitive proposals against topics they do not fully understand. The scouting system described here is not sophisticated — a weekly DSIP review, a TPOC email template, a scoring rubric, and a shared tracker. What makes it work is consistency. Most firms run an ad hoc scouting effort and miss the window. A firm that runs the system every cycle compounds an information advantage that shows up directly in Phase I win rate by cycle three.
Frequently asked questions
When can TPOCs be contacted?
For DoD SBIR topics, Technical Points of Contact can be contacted directly during the pre-release window, which runs from topic publication until the solicitation opens for proposal submission. Once the solicitation opens, direct TPOC contact is prohibited and questions must go through DSIP's Q&A system.
What is DSIP and what does it cover?
DSIP (dodsbirsttr.mil) is the official DoD SBIR topic and submission portal. It publishes component solicitations, topic listings, Q&A, and proposal submissions. For serious scouting, export the topic list at pre-release and score each topic against your capability map. Civilian SBIR agencies use their own portals.
How many proposals should a first-year firm submit per cycle?
For a first-year firm, 8 to 12 serious submissions per DoD cycle is the viable range: enough to survive the base rate, few enough to write well. Mature firms running a dedicated proposal operation can go to 20 or more. The portfolio should concentrate on two to three target components rather than spreading across all eleven agencies.