Every GovCon opportunity-discovery tool ranks opportunities. Filters, saved searches, relevance scoring, even the occasional “smart match.” They all produce a ranked list. None of them, as of this writing, produce a ranked list where each row carries a per-factor English explanation of its rank.
We think this is the single most important product decision in the category. Not the ranking — the reasoning surface.
The bid/no-bid decision is high-stakes
A small federal contractor spends somewhere between $5,000 and $50,000 to respond to a non-trivial solicitation. The cost isn’t the writing; it’s the capture-manager time, the subject-matter-expert pull, the pricing analysis, the compliance review, and the opportunity cost of the deals that didn’t get pursued because this one ate the week.
Against that backdrop, “here is a list of opportunities, sorted by a score we won’t explain” is a non-starter. No bid manager is going to trust a black-box ranking with a five-figure decision.
The keyword baseline was bad but it was legible
The last generation of tools at least had the virtue of legibility. You set up a saved search on a NAICS code and a keyword. You got back every solicitation matching those filters. The ranking was garbage — it was chronological, or alphabetical, or a relevance score nobody trusted — but the inclusion criteria were legible. You could read a result and immediately answer “why is this here?”
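That baseline is simple enough to sketch. The following is a hypothetical illustration of the legacy "saved search" mechanic, not any vendor's actual API; the types and field names are assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Solicitation:
    title: str
    naics: str
    posted: date

def saved_search(solicitations, naics, keyword):
    """Return every solicitation matching the filters, newest first.

    The inclusion criteria are legible: a result is present because its
    NAICS matches and its title contains the keyword. Nothing else.
    """
    hits = [s for s in solicitations
            if s.naics == naics and keyword.lower() in s.title.lower()]
    return sorted(hits, key=lambda s: s.posted, reverse=True)
```

The ranking here is still arbitrary (chronological), but a bid manager can read any row and reconstruct exactly why it appeared.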
A black-box match score is worse than a legible keyword search, for the same reason a recommendation system that refuses to tell you why it recommended something is worse than a staff pick.
A reason, per factor
Our scorer blends hard qualifiers — the things deterministic rules are actually good at, like set-aside eligibility and NAICS — with the softer "does this look like work you do" dimensions where most of the real signal lives. Every factor produces a number and a sentence.
The sentence is the product. The number is an implementation detail.
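The per-factor shape can be sketched as follows. This is a minimal illustration of the idea, not our production scorer; the factor names, fields, and equal weighting are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class FactorResult:
    name: str
    score: float  # the implementation detail
    reason: str   # the product: a per-factor English explanation

def score_naics(opp, profile):
    # Deterministic qualifier: either the NAICS code matches or it doesn't.
    if opp["naics"] in profile["naics_codes"]:
        return FactorResult("naics", 1.0,
            f"NAICS {opp['naics']} is one of your registered codes.")
    return FactorResult("naics", 0.0,
        f"NAICS {opp['naics']} is not in your registered codes.")

def score_set_aside(opp, profile):
    # Deterministic qualifier: eligibility is a rule, not a judgment call.
    if opp["set_aside"] == "none" or opp["set_aside"] in profile["certifications"]:
        return FactorResult("set_aside", 1.0,
            "You are eligible under this solicitation's set-aside.")
    return FactorResult("set_aside", 0.0,
        f"Requires {opp['set_aside']} certification, which your profile lacks.")

def score_opportunity(opp, profile):
    # Blend the factors; each row carries its reasons alongside its number.
    factors = [score_naics(opp, profile), score_set_aside(opp, profile)]
    total = sum(f.score for f in factors) / len(factors)
    return total, [f.reason for f in factors]
```

The softer "does this look like work you do" factors would slot in as additional functions returning the same `FactorResult` shape; what matters is that no factor is allowed to emit a number without a sentence.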