Methodology | BuildvBuy

Methodology

How a BuildvBuy recommendation is made.

Every report is a research artifact, not a sales pitch. The methodology is new because the answer has moved: AI tooling has reshaped what a small team can actually ship by renewal time. This page explains the shape of the work behind a score, what we are honest about not knowing, and the structural reason we can be honest about any of it.

Independent: No vendor pays for placement or coverage.
Buyer-funded: Subscriptions from the people reading the reports.
Unreviewed: No vendor sees a report before it is published.

What we weigh

Four questions, asked of every product, in the same order.

The same four questions sit behind every BvB Score. The exact way each answer flows into the final number is the part we keep under the hood. The questions themselves are public.

  1. Can a small team actually build it?

    Substance, not surface — judged against what a small team can ship with the AI tooling it has now, not what was buildable three years ago. The schema underneath the screens, the integration tail, the tests that catch the things that break in year two. A vendor with two hundred surface features and four essential ones is judged on the four.

    Not feature parity. Not whether AI tools could mock up the homepage in a weekend.

  2. Does the math pay back over three years?

    Engineer-quarters, on-call rotation, integration work, data migration, the year-three maintenance tail. Run against the renewal price, not the marketing price.

    Not licensing cost alone. A build that ships in six months and costs more to keep running is worse than the line item it replaced.

  3. Can the team keep it running once the original engineer leaves?

    Incident handling, dependency upgrades, on-call burden, the rebuild that quietly happens when knowledge transfer fails. Day 2 is where most internal builds become Day 1 again.

    Not Day 1 reliability. Day 1 is table stakes.

  4. What does the vendor actually own that a buyer can't reach?

    Proprietary data, network liquidity, regulatory licenses, frontier research, exclusive integrations. The things a small team building from scratch cannot acquire with time and effort.

    Not brand. Not incumbency. Salesforce has brand; that is not a moat. Stripe has acquirer relationships; that is.
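The second question reduces to arithmetic that can be sketched directly. A minimal sketch in TypeScript: the cost categories mirror the list above, but the field names, the loaded cost per engineer-quarter, and the example figures are illustrative assumptions, not BvB's actual model.

```typescript
// Hypothetical sketch of the three-year build-vs-buy arithmetic.
// All names and rates here are illustrative assumptions.

interface BuildEstimate {
  engineerQuarters: number; // initial build effort
  integrationCost: number;  // one-time wiring into existing systems
  migrationCost: number;    // one-time data migration
  quarterlyRunCost: number; // on-call, upgrades, the year-three maintenance tail
}

const COST_PER_ENGINEER_QUARTER = 60_000; // assumed loaded cost

// Total cost of building, then running the build for three years
// (twelve quarters of maintenance).
function threeYearBuildCost(b: BuildEstimate): number {
  return (
    b.engineerQuarters * COST_PER_ENGINEER_QUARTER +
    b.integrationCost +
    b.migrationCost +
    12 * b.quarterlyRunCost
  );
}

// Run against the renewal price, not the marketing price.
function buildPaysBack(b: BuildEstimate, annualRenewalPrice: number): boolean {
  return threeYearBuildCost(b) < 3 * annualRenewalPrice;
}
```

With a four-quarter build and a high renewal price the build pays back; halve the renewal price and it no longer does, which is the shape of "the math never catches up."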

Under the hood

An algorithm with a signature.

The engine is deterministic; the interface is public. Below is the shape of the function that produces every score: its inputs, its sequence, and what it returns. What it weighs stays under the hood.

bvbScore(entity) {
  classify    archetype
  decompose   components
  estimate    effort, tco
  adjudicate  caps, drag, regression
  return      { score, band, confidence }
}
deterministic. no LLM in the score path.

The output is a number. The argument is the evidence behind it.
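The pseudocode above can be expanded into a runnable skeleton. The stage names come from the page; every stage body, weight, and threshold below is a placeholder assumption, since the actual weighting is deliberately kept under the hood.

```typescript
// Skeleton of the deterministic score path. Stage names mirror the
// pseudocode above; stage bodies and thresholds are placeholders.

type Band = "Build" | "Buy" | "Adapt";
type Confidence = "High" | "Medium" | "Low";

interface Verdict {
  score: number; // 0-100
  band: Band;
  confidence: Confidence;
}

interface Entity {
  name: string;
  evidenceSources: number;
}

function bvbScore(entity: Entity): Verdict {
  // classify: pick an archetype (placeholder: everything is "saas")
  const archetype = "saas";

  // decompose: break the product into essential components (placeholder list)
  const components = archetype === "saas" ? ["schema", "integrations", "tests"] : ["core"];

  // estimate: effort and TCO from the components (placeholder arithmetic)
  const effort = components.length * 10;
  const tco = effort * 3;

  // adjudicate: apply caps, drag, and regression checks (placeholder clamp)
  const score = Math.max(0, Math.min(100, 100 - tco));

  // confidence from evidence breadth alone (placeholder thresholds)
  const confidence: Confidence =
    entity.evidenceSources >= 8 ? "High" :
    entity.evidenceSources >= 4 ? "Medium" : "Low";

  const band: Band = score > 70 ? "Build" : "Buy";
  return { score, band, confidence };
}
```

The point of the shape, not the placeholder math: the same input always yields the same output, with no model call anywhere in the path, so a score can be recomputed and audited later.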

A recommendation, anatomized

What a recommendation looks like.

The score is the artifact; the bars show how each question landed; the paragraph below is the actual lever that decided the recommendation.

Mid-market CRM — fictional

A four-year incumbent with a deep integration footprint.

BvB 41/100: Buy
Can a small team build it? Yes, in pieces
Three-year math: Buy is cheaper
Running it long: Hard at scale
Vendor's real moat: Integration lock-in

What tipped it

The buyer has spent four years wiring this CRM into a dozen internal tools and two pieces of revenue infrastructure. Building an internal replacement is feasible; rebuilding the integration mesh costs more than two full renewal cycles, and the math never catches up. Buy at renewal, negotiate hard on per-seat pricing.

What you are not seeing. The same recommendation also runs through adjustment checks for hard barriers, low-priced incumbents, switching cost, and thin evidence. None fired here. When they do, the report says so, and shows the score before and after the adjustment.
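The four adjustment checks can be modeled as named rules, each of which may or may not fire. The check names come from the paragraph above; the trigger conditions and score deltas are invented for illustration.

```typescript
// Hypothetical sketch of the post-score adjustment checks. The four
// check names come from the report; triggers and deltas are placeholders.

interface Context {
  hardBarrier: boolean;         // e.g. a regulatory license the buyer can't get
  incumbentAnnualPrice: number;
  switchingCost: number;
  evidenceSources: number;
}

interface Adjustment {
  name: string;
  fires: (c: Context) => boolean;
  delta: number; // applied to the score when the check fires
}

const CHECKS: Adjustment[] = [
  { name: "hard barrier",         fires: c => c.hardBarrier,                   delta: -20 },
  { name: "low-priced incumbent", fires: c => c.incumbentAnnualPrice < 50_000, delta: -10 },
  { name: "switching cost",       fires: c => c.switchingCost > 100_000,       delta: -10 },
  // Thin evidence withholds the verdict rather than moving the score.
  { name: "thin evidence",        fires: c => c.evidenceSources < 3,           delta: 0 },
];

// Returns the adjusted score plus the audit trail the report publishes:
// which checks fired, and the score before and after.
function adjudicate(score: number, c: Context) {
  const fired = CHECKS.filter(chk => chk.fires(c));
  const after = Math.max(0, score + fired.reduce((s, chk) => s + chk.delta, 0));
  return { before: score, after, fired: fired.map(chk => chk.name) };
}
```

In the fictional CRM report above no check fired, so before and after are equal and the trail is empty; when one does fire, both numbers appear in the report.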

When we say we don't know

A confidence label is not a hedge. It is a contract.

Every score carries one of three confidence labels — High, Medium, or Low. The label is set by the breadth, recency, and corroboration of the evidence behind the score. When it lands on Low, the report withholds the verdict and shows what is missing instead. The label is the feature.

High: Recommendation provided with conviction. The evidence is wide, current, and corroborated across independent sources.
Medium: Recommendation provided, labelled provisional. The evidence is enough to commit to a number, not enough to commit hard.
Low: Recommendation withheld. The score is shown on its own; the verdict waits for better evidence.
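The contract can be sketched as a function from evidence to label. The three criteria (breadth, recency, corroboration) and the withhold-on-Low behaviour come from the page; the numeric thresholds are invented for illustration.

```typescript
// Hypothetical mapping from evidence quality to a confidence label.
// Criteria names come from the page; thresholds are placeholders.

interface Evidence {
  sources: number;                   // breadth
  monthsSinceNewest: number;         // recency
  independentCorroborations: number; // corroboration
}

type Label = "High" | "Medium" | "Low";

function confidenceLabel(e: Evidence): Label {
  const wide = e.sources >= 8;
  const current = e.monthsSinceNewest <= 6;
  const corroborated = e.independentCorroborations >= 2;
  if (wide && current && corroborated) return "High";
  if (e.sources >= 4 && e.monthsSinceNewest <= 12) return "Medium";
  return "Low";
}

// On Low, the verdict is withheld: the score may still be shown,
// but the recommendation waits for better evidence.
function publish(score: number, e: Evidence): { score: number; verdict: string | null } {
  const label = confidenceLabel(e);
  return label === "Low"
    ? { score, verdict: null } // the report shows what is missing instead
    : { score, verdict: `Recommendation (${label})` };
}
```

The time-series example below lands in the Low branch: three sources, one stale by eleven months, is below any plausible threshold for committing to a verdict.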

What insufficient evidence looks like

Time-series databases — fictional

An open-source project with a sparse evidence base.

BvB /100: Score withheld

Three primary sources, one of them stale by eleven months. Enough to estimate a range, not enough to commit to a number. The page declines the verdict and shows what is missing instead.

When we get it wrong

Corrections live in public, on the original report, forever.

Every published correction lives at /corrections, annotated to the original report. Reports are not quietly unpublished, and prior scores are never silently overwritten. The history is part of the artifact.

The fastest way to know whether to trust a methodology is to read the list of mistakes it has admitted to.

How we are paid

The structural reason a recommendation can read this honestly.

Independence is not a virtue we claim. It is a consequence of who writes the check. The category we displace is paid by the vendors it covers. We are paid by the people reading the report. That is the whole wedge.

Who funds it
Buyers, by subscription. Vendors cannot pay for placement, coverage, or favorable treatment, at any tier.
Editorial control
Stays with the BuildvBuy editorial team. Reports are not sent to the products they cover for pre-publication review.
What 'leader' means here
A BvB Score above 70, at High confidence. The label is earned by the math; nothing else qualifies. There is no paid tier.
When Buy is the answer
Buy gets the same prominence in the report as Build. If the renewal price is low enough that a build does not pay back, the report says so plainly.
When we change our mind
Recommendations are recalibrated as the AI-era cost shift moves. A flip is annotated, dated, and explained against the prior version.
A defensible answer for Build, Buy, or Adapt.

The methodology, applied.

The proof of any methodology is the artifact it produces. Read a real report, or read the corrections log first if that is the faster way for you to decide whether to trust the next one.

Independent. Buyer-aligned. Transparent by default.

© 2026 BuildvBuy. Independent analyses backed by transparent methodology. Not professional advice.