
Commercial Diligence: A Practical Framework for Harvey Balls Scoring

Published on November 7, 2025

Written for acquirers of Lower-Middle-Market (LMM) software companies: Private Equity, holding companies, and strategic buyers.

TL;DR

Harvey Balls add little value to diligence unless the scores rest on rigor that goes beyond subjective impression. Use a simple, repeatable rubric tied to observable evidence so scores withstand investment committee scrutiny and guide post‑close priorities.

  • Tie each score to a defined 5‑level scale and a minimum evidence tier (≥ Tier 2) before assigning any half or full ball.
  • Score relatively (vs. named competitors) to avoid inflating absolute claims.
  • Capture artifacts per cell to create an audit‑ready, easily retrievable evidence trail.
  • Summarize to an evidence‑weighted grid that reduces bias, de‑risks valuation, and sharpens post‑close execution.

1. Scoring Rubric and Scale

Harvey Balls work when everyone scores the same way. Anchor to a five‑level, relative scale across a defined peer set and keep the unit of analysis consistent (capabilities, not marketing bullets).

Scale definition (relative to named competitors)

  • Level 0 (○) — Absent: capability not present or only aspirational. Minimum evidence: Tier 1 (claim + artifact) or better, confirming absence/roadmap only.
  • Level 1 (◔) — Partial: exists in limited scope; lacks depth or reliability. Minimum evidence: Tier 2 (docs/API).
  • Level 2 (◑) — Competitive: meets common requirements at parity with peers. Minimum evidence: Tier 2 plus 1 corroborating artifact.
  • Level 3 (◕) — Strong: clearly deeper/broader than the median peer. Minimum evidence: Tier 3 (third‑party/integration).
  • Level 4 (●) — Best‑in‑set: differentiated capability with customer‑verifiable impact. Minimum evidence: Tier 4 (verifiable usage/metric).
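
One way to keep everyone scoring the same way is to treat the scale itself as data that tooling can enforce. Below is a minimal Python sketch of the scale definition above; the `LevelSpec` and `RUBRIC` names are illustrative, not a SuiteCompete API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LevelSpec:
    glyph: str               # standard Harvey Ball symbol for this fill level
    label: str
    min_tier: int            # minimum evidence tier from the scale definition
    corroboration: int = 0   # additional corroborating artifacts required

# Direct encoding of the scale definition above; keys are the 0-4 levels.
RUBRIC = {
    0: LevelSpec("○", "Absent",      min_tier=1),
    1: LevelSpec("◔", "Partial",     min_tier=2),
    2: LevelSpec("◑", "Competitive", min_tier=2, corroboration=1),
    3: LevelSpec("◕", "Strong",      min_tier=3),
    4: LevelSpec("●", "Best-in-set", min_tier=4),
}
```

Encoding the minimums once means later checks (Sections 2 and 3) can reference them instead of restating them in prose.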

Implementation notes:

  • Score relatively: ask “vs. this peer set today,” not “in absolute terms.”
  • Keep the grid scoped to a concrete capability taxonomy (verbs/outcomes), e.g., “Auto‑reconcile payments,” not “Great finance features.”

Deliverable: a single grid where each cell has a Harvey Ball (easy to do in SuiteCompete) with clickable supporting evidence.
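
As a quick sanity check before the grid lives in tooling, the same data renders cleanly as plain text using the standard Harvey Ball glyphs. A short sketch with made-up scores and peer names:

```python
# Harvey Ball glyphs, indexed by level 0 (empty) through 4 (full).
GLYPHS = "○◔◑◕●"

# Hypothetical scores: {capability: {competitor: level}}.
scores = {
    "Auto-reconcile payments": {"TargetCo": 3, "Peer A": 2, "Peer B": 4},
    "Role-based access":       {"TargetCo": 2, "Peer A": 2, "Peer B": 1},
}
competitors = ["TargetCo", "Peer A", "Peer B"]

width = max(len(cap) for cap in scores)   # widest capability name
print(" " * width, *competitors)          # header row
for capability, row in scores.items():
    cells = [GLYPHS[row[c]].center(len(c)) for c in competitors]
    print(capability.ljust(width), *cells)
```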

2. Evidence Standards and Audit Trail

Your grid is only as good as the proof behind it. Use a simple, repeatable standard so any reviewer can reproduce a cell in under two minutes.

Evidence tiers (highest = strongest conviction that your Harvey Ball score is defensible):

  • Tier 0 — Marketing claim only (website, deck). Do not use it to justify scores above 1.
  • Tier 1 — Claim + static artifact (screenshot, PDF). Weak; use only for 0–1 scores.
  • Tier 2 — Interactive docs/API/reference that demonstrate how it works.
  • Tier 3 — External corroboration (integration marketplace, partner listing, analyst write‑up).
  • Tier 4 — Customer‑verifiable usage or outcome (changelog with metrics, demo showing real data, independent testimonial with specifics).
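
These tiers can be enforced mechanically at review time. A minimal sketch, restating the minimums from the Section 1 table; the `Artifact` record and `is_defensible` names are illustrative, and reading "Tier 2 + 1 corroborating artifact" as two qualifying artifacts is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    url: str
    tier: int       # 0-4, per the tiers above
    note: str = ""  # short blurb for the audit trail

# Minimum evidence tier per level, restating the Section 1 table.
MIN_TIER = {0: 1, 1: 2, 2: 2, 3: 3, 4: 4}

def is_defensible(level: int, evidence: list[Artifact]) -> bool:
    """Check a proposed Harvey Ball level against its attached evidence."""
    qualifying = [a for a in evidence if a.tier >= MIN_TIER[level]]
    # Assumption: "Tier 2 + 1 corroborating artifact" = two qualifying artifacts.
    required = 2 if level == 2 else 1
    return len(qualifying) >= required

# A proposed "Strong" (3) backed only by a screenshot should fail review.
assert not is_defensible(3, [Artifact("https://example.com/shot.png", tier=1)])
assert is_defensible(3, [Artifact("https://example.com/marketplace", tier=3)])
```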

Where to keep it: a system of record beats a spreadsheet that gets forwarded, duplicated, or forked. SuiteCompete can centralize Harvey Ball grids with auto‑gathered evidence URLs and short blurbs to speed refinement of fill levels.

3. Calibration and Governance

Consistency is the control. Calibrate reviewers, monitor drift, and manage changes explicitly.

Calibration protocol:

  1. Pick a 50‑cell sample (5 capabilities × 10 competitors).
  2. Two analysts score independently using the rubric and capture evidence.
  3. Reconcile differences ≥1 level; refine wording in the rubric for ambiguous cases.
  4. Repeat until inter‑rater agreement ≥80% within ±1 level.
  5. Lock the rubric (version/date) and start full scoring.
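
Step 4's agreement threshold is straightforward to compute. A small sketch, assuming the two analysts' scores arrive as parallel lists of levels per cell (the data below is made up):

```python
def agreement_within_one(a: list[int], b: list[int]) -> float:
    """Share of cells where two analysts' scores differ by at most one level."""
    assert len(a) == len(b)
    return sum(abs(x - y) <= 1 for x, y in zip(a, b)) / len(a)

# Hypothetical calibration pass over a 50-cell sample.
analyst_1 = [0, 1, 2, 3, 4, 2, 2, 1, 3, 0] * 5
analyst_2 = [0, 2, 2, 3, 2, 2, 1, 1, 4, 0] * 5

score = agreement_within_one(analyst_1, analyst_2)
print(f"Agreement within ±1 level: {score:.0%}")  # lock the rubric once ≥ 80%
```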

Change and drift management: re‑calibrate after major taxonomy changes.

Quiet nudge: if you decide to operationalize this, choose a workflow that makes it trivial to attach sources to cells and to render a presentation‑ready view without exporting, forking, or duplicating. It would be our pleasure to serve you with SuiteCompete.