The Resolution Ledger: The Missing Product Behind Outcome-Based Pricing
Outcome pricing sounds simple until a customer asks why you billed them for 3,214 resolutions last month. A resolution ledger is the boring, spreadsheet-shaped product that makes outcome-based AI support pricing auditable and defensible.
If you can't explain why you billed for it, you shouldn't be charging for it. It's that simple.
Outcome pricing sounds simple until a customer asks: why did you bill me for 3,214 resolutions last month?
If your answer is "trust us," the next conversation is going to be about contract terms, not renewal.
This is the part of hybrid AI support pricing that most teams skip. They spend weeks designing the pricing stack, defining tiers, and setting per-resolution rates. Then they launch without building the one product that makes the whole model defensible: a resolution ledger.
A resolution ledger is a monthly report that answers what was resolved, why each one counted, what evidence backs it up, what reopened, and what got escalated. It's not just a billing document. It's the thing that turns "we resolved your tickets with AI" into something a finance team can actually verify.
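Those five questions map cleanly onto a per-resolution record. A minimal sketch in Python; the class and field names here are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum

class Verification(Enum):
    """The three ways a resolution can earn its place in the ledger."""
    COOL_DOWN = "closed_through_cool_down"   # ticket stayed closed through the window
    CONFIRMED = "customer_confirmed"          # customer explicitly said it was fixed
    TELEMETRY = "telemetry_success"           # a product signal confirmed success

@dataclass
class ResolutionRecord:
    ticket_id: str
    intent: str                    # e.g. "password_reset"
    verification: Verification     # why this one counted
    evidence_ref: str              # pointer to the transcript or signal backing it up
    reopened: bool = False         # did the same issue come back?
    escalated: bool = False        # handed to a human instead?
```

Every ledger section below is an aggregation over records shaped like this, which is what makes the report reproducible rather than hand-assembled.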
---
What goes in the ledger
Outcome summary
Total verified resolutions. Total AI conversations. Resolution rate. Overage count and which tier applied.
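These line items fall out of basic arithmetic once you have verified counts. A sketch with hypothetical tier terms; the allowance, overage rate, and base fee are made-up inputs, not rates from the source:

```python
def outcome_summary(verified: int, conversations: int,
                    allowance: int, overage_rate: float, base_fee: float) -> dict:
    """Compute the outcome-summary line items for one billing month."""
    overage = max(0, verified - allowance)
    total_charge = base_fee + overage * overage_rate
    return {
        "verified_resolutions": verified,
        "resolution_rate": verified / conversations,        # share of AI conversations resolved
        "overage_count": overage,
        "blended_price_per_resolution": total_charge / verified,
    }

# Hypothetical month: 3,214 verified resolutions against a 3,000-resolution allowance.
summary = outcome_summary(verified=3214, conversations=5000,
                          allowance=3000, overage_rate=1.50, base_fee=4500.0)
```

The blended effective price per resolution is the number finance teams tend to track month over month, since it shows whether overages are eroding the deal.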
Category breakdown
Top intents by volume, resolution rate by intent, escalation rate by intent. This is the section where customers can actually see which types of issues the AI handles well and which ones still need a human.
Verification evidence
For each counted resolution, one of three verification methods should have applied: the ticket was closed and stayed closed through the cool-down window, the customer explicitly confirmed it was fixed, or a telemetry signal confirmed success. Report the breakdown by method. Throw examples in an appendix for customers who want to dig in.
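The breakdown by method is a one-pass tally. A sketch, assuming each counted resolution carries a method label (the label strings are illustrative):

```python
from collections import Counter

def verification_breakdown(methods: list[str]) -> dict[str, float]:
    """Share of counted resolutions per verification method."""
    counts = Counter(methods)
    total = sum(counts.values())
    return {method: n / total for method, n in counts.items()}

# Hypothetical month: 70 cool-down closes, 20 explicit confirmations, 10 telemetry signals.
breakdown = verification_breakdown(
    ["cool_down"] * 70 + ["customer_confirmed"] * 20 + ["telemetry"] * 10
)
```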
Reopen and quality signals
Reopen rate overall and by intent. Repeat contact rate for the same issue. This is the section that keeps everyone honest. If you're counting resolutions but the same customers keep coming back about the same problems, the ledger will surface that before the customer brings it up themselves.
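Computing reopen rate overall and by intent reduces to a grouped count. A sketch, assuming records carry an intent and a reopened flag (the dict shape is illustrative):

```python
from collections import defaultdict

def reopen_rates(records: list[dict]) -> tuple[float, dict[str, float]]:
    """Overall reopen rate plus a per-intent breakdown."""
    by_intent = defaultdict(lambda: [0, 0])  # intent -> [reopened count, total count]
    for r in records:
        by_intent[r["intent"]][0] += r["reopened"]
        by_intent[r["intent"]][1] += 1
    overall = sum(v[0] for v in by_intent.values()) / len(records)
    return overall, {intent: v[0] / v[1] for intent, v in by_intent.items()}

records = [
    {"intent": "billing", "reopened": True},
    {"intent": "billing", "reopened": False},
    {"intent": "login", "reopened": False},
]
overall, per_intent = reopen_rates(records)
```

A per-intent view matters because a healthy overall rate can hide one intent that reopens constantly.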
Escalation integrity
What percentage of escalations included full context. How much time the pre-filled case packs saved. What the top escalation reasons were: missing data, permissions issues, edge cases that fell outside the autonomy boundary.
---
The ledger protects you too
Outcome pricing creates incentives on both sides. Some customers will route everything through the AI agent to cut headcount costs. Others will dispute counts when budgets get tight.
The ledger gives you something to point to: what counted, why, how you prevent double-counting, and what the quality signals look like next to the volume numbers. It turns a pricing argument into a shared document both sides can reference.
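"How you prevent double-counting" is easiest to defend when the mechanism is explicit. One common approach, sketched here as an assumption rather than a prescription, is an idempotency key: at most one billable resolution per ticket per billing period, so a ticket that reopens and re-resolves counts once:

```python
def billable_resolutions(events: list[dict]) -> list[dict]:
    """Keep the first resolution event per ticket within a billing period.
    Assumes events arrive in chronological order."""
    seen: set[str] = set()
    billable = []
    for event in events:
        if event["ticket_id"] not in seen:
            seen.add(event["ticket_id"])
            billable.append(event)
    return billable

# Ticket T-2 resolves, reopens, and resolves again: it bills once, not twice.
events = [{"ticket_id": "T-1"}, {"ticket_id": "T-2"}, {"ticket_id": "T-2"}]
```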
---
A ledger template you can use
Monthly AI Support Ledger
Outcomes: Verified resolutions, included allowance, overage count, blended effective price per resolution.
Performance: Verified resolution rate, reopen rate (seven-day window), escalation rate, escalation context preservation rate.
Top intents: A table with each intent, its volume, verified resolution percentage, reopen percentage, and escalation percentage.
Verification methods: Percentage resolved by cool-down window, customer confirmation, and telemetry success.
Notes: New intents added this month. Intents moved to human-first. Governance changes (policy updates, permission changes).
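The top-intents table in this template is a straight aggregation over per-resolution records. A sketch; the record shape and field names are illustrative assumptions:

```python
from collections import defaultdict

def top_intents_table(records: list[dict]) -> list[dict]:
    """One row per intent: volume, verified %, reopen %, escalation %.
    Each record (illustrative): intent plus verified/reopened/escalated booleans."""
    buckets = defaultdict(list)
    for r in records:
        buckets[r["intent"]].append(r)
    rows = []
    for intent, group in buckets.items():
        n = len(group)
        rows.append({
            "intent": intent,
            "volume": n,
            "verified_pct": 100 * sum(r["verified"] for r in group) / n,
            "reopen_pct": 100 * sum(r["reopened"] for r in group) / n,
            "escalation_pct": 100 * sum(r["escalated"] for r in group) / n,
        })
    return sorted(rows, key=lambda row: row["volume"], reverse=True)

records = [
    {"intent": "billing", "verified": True, "reopened": False, "escalated": False},
    {"intent": "billing", "verified": True, "reopened": True, "escalated": False},
    {"intent": "login", "verified": False, "reopened": False, "escalated": True},
]
table = top_intents_table(records)
```

Sorting by volume keeps the table focused on where the money actually is.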
---
The bottom line on ledgers
Outcome pricing isn't really a pricing decision. It's a product decision. You either build the infrastructure to explain what you're billing for, or you don't. And if you don't, you'll end up retreating to flat subscription pricing where margins take the hit every time AI adoption grows.
The resolution ledger is boring on purpose. It's a spreadsheet-shaped product that makes the whole model work.