You are two days from a board meeting. You pull your MRR from Stripe — it says $127,000. Then you open ChartMogul to prep the deck slides — it says $119,400. Then your finance lead sends over a spreadsheet she reconciled last night — her number is $131,200. Three systems. Three numbers. Three versions of what is supposed to be the single most important metric in your business.
This is a structural problem in how SaaS companies collect, store, and calculate their financial data. And it is more common than most founders want to admit.
Here's why SaaS metrics accuracy is an infrastructure problem. When the raw data feeding your metrics is scattered across disconnected systems, each applying its own definitions and logic, the outputs will always contradict each other. And every decision you make downstream — from budget allocation to fundraising narratives to product investment — is built on numbers you cannot trust.
Working with SaaS companies doing $1M to $10M in ARR, I see the same pattern repeatedly. The founders know the formulas. They understand what MRR, churn, and LTV are supposed to measure. What they don't realise is that the data underneath those formulas is quietly wrong.
Part 1: Three lies your billing platform tells you
Your billing platform isn't lying on purpose. But it was built to process payments, not to be your source of financial truth. When you treat it as your metrics engine, three specific problems emerge.
Lie 1: Your MRR includes revenue that isn't recurring
Most billing platforms, including Stripe, treat every active charge as MRR unless you manually exclude it. Setup fees, onboarding charges, one-time consulting add-ons, and professional services invoices all get lumped into the same number. The result is an MRR figure that overstates your predictable revenue — sometimes significantly.
If you've charged three enterprise clients $5,000 each for onboarding this month, your MRR calculation is inflated by $15,000. That's not recurring. It will not repeat next month. But your dashboard doesn't know that.
Annual contracts create another distortion. When a customer pays $24,000 upfront for an annual plan, Stripe records a $24,000 charge that month. Some analytics tools correctly normalise this to $2,000/month MRR. Others don't — or they normalise inconsistently depending on how the subscription was configured. If you have a mix of monthly and annual billing, there's a real chance your MRR number is wrong simply because of how charges are categorised.
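A minimal sketch of that normalisation logic, assuming a simplified charge record (the field names here are illustrative, not Stripe's actual schema):

```python
from dataclasses import dataclass

# Hypothetical charge record; adapt field names to your billing export.
@dataclass
class Charge:
    amount: float          # total amount charged this month, in dollars
    recurring: bool        # False for setup fees, onboarding, services
    interval_months: int   # 1 for monthly plans, 12 for annual prepay

def normalized_mrr(charges: list[Charge]) -> float:
    """MRR = recurring charges only, with annual prepay spread over 12 months."""
    return sum(c.amount / c.interval_months for c in charges if c.recurring)

charges = [
    Charge(49, True, 1),        # monthly subscription
    Charge(24_000, True, 12),   # annual contract paid upfront
    Charge(5_000, False, 1),    # one-time onboarding fee
]
# naive "every charge is MRR": 49 + 24,000 + 5,000 = 29,049
# normalised: 49 + 2,000 = 2,049
```

The gap between the two numbers is exactly the distortion described above: one-time fees counted as recurring, and an annual prepayment booked as a single month's revenue.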
In eight of the last twelve SaaS audits I ran, MRR was overstated by more than 10%. The source was always non-recurring line items treated as subscription revenue.
Lie 2: Your churn mixes voluntary and involuntary together
Research from Paddle shows that roughly 20–40% of MRR churn across SaaS comes from failed credit cards, not from customers deliberately choosing to leave. That is involuntary churn: expired cards, insufficient funds, bank declines. Voluntary churn, a customer actively deciding to cancel, is a different problem with a different fix.
When your churn rate bundles these two categories together, it becomes useless for product decisions. Your product team sees a 6% monthly churn rate and assumes customers are unhappy. In reality, 2.5% of that might be preventable with a proper dunning sequence and payment recovery system. The product is fine. The billing infrastructure is not.
Enterprise targets for gross MRR churn sit below 1%. SMBs should aim for 2–2.5%. But if you're not separating voluntary from involuntary, you have no idea where you actually stand against these benchmarks.
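A sketch of the separation, assuming your billing tool emits a cancellation reason with each churned subscription (the reason codes below are hypothetical; use whatever your tool actually records):

```python
def split_churn(cancellations: list[dict], start_mrr: float) -> dict:
    """Split monthly MRR churn into voluntary and involuntary components.

    Each cancellation is a dict with 'mrr' and 'reason'. The reason
    taxonomy here is an assumption, not a standard.
    """
    INVOLUNTARY = {"payment_failed", "card_expired", "bank_decline"}
    vol = sum(c["mrr"] for c in cancellations if c["reason"] not in INVOLUNTARY)
    invol = sum(c["mrr"] for c in cancellations if c["reason"] in INVOLUNTARY)
    return {
        "voluntary_rate": vol / start_mrr,
        "involuntary_rate": invol / start_mrr,
    }

cancellations = [
    {"mrr": 2_100, "reason": "switched_to_competitor"},
    {"mrr": 1_400, "reason": "payment_failed"},
    {"mrr": 1_100, "reason": "card_expired"},
]
rates = split_churn(cancellations, start_mrr=100_000)
# voluntary 2.1%, involuntary 2.5%: a single headline rate hides this split
```

A product team looking at the combined rate would start a retention initiative; a team looking at the split would fix the dunning sequence first.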
Lie 3: Your LTV model assumes linear behaviour
The standard SaaS LTV calculation: take the average revenue per user, divide by your monthly churn rate. Simple formula — and almost always wrong. The problem is that it treats all customers as identical. A customer acquired through a Google Ads campaign behaves nothing like one who came through a product-led growth loop. An SMB on your $49/month plan has a completely different retention curve than an enterprise client on a $2,000/month contract. Averaging them produces a number that describes no actual customer in your business.
When your LTV model is wrong, your LTV:CAC ratio is wrong. And when your LTV:CAC ratio is wrong, every investment decision built on top of it — how much to spend on acquisition, which channels to scale, when to hire — is distorted at the foundation.
Part 2: Why the data infrastructure layer is the real problem
Most SaaS companies have their financial data scattered across five to eight disconnected systems. Billing lives in Stripe or Chargebee. Customer relationships live in HubSpot or Salesforce. Product usage lives in Mixpanel or Amplitude. Support data lives in Intercom or Zendesk. Accounting lives in QuickBooks or Xero. Ad spend lives in Google and Meta.
Each system was built to serve its own purpose — and each one defines core concepts differently. What counts as an "active customer" in Stripe differs from what counts as an "active user" in Mixpanel. What Stripe calls a "subscription" might not match what your CRM considers an "account." When ChartMogul calculates your MRR, it applies its own normalisation logic. When your finance lead builds a spreadsheet, she applies a third set of definitions.
This is the root of the problem. You don't have one data infrastructure. You have five to eight competing data islands, each producing its own version of reality, and they never agree.
The fix is not better formulas. It is not a more expensive analytics tool layered on top of broken data. It is a unified data layer where every metric is calculated from the same raw data, using the same definitions, applied consistently.
Part 3: The five metrics that break first
If your data infrastructure has the fragmentation problem described above, these are the five metrics most likely to be materially wrong. In every audit I conduct, at least three of these are inaccurate.
1. Gross MRR vs. Net MRR confusion
Gross MRR churn measures only the revenue lost from cancellations and downgrades. Net MRR churn factors in expansion revenue from upgrades and cross-sells. These are fundamentally different numbers. If your net MRR churn looks healthy at −1% (negative, meaning you're growing from existing customers), that might be masking a gross MRR churn of 5%. You're losing customers at an alarming rate, but expansion revenue is papering over the hole. Eventually, the churn catches up.
2. Churn rate denominator problems
Are you dividing churned customers by the number at the start of the period? Or by the average during the period? This isn't trivial — the choice changes your churn rate by 10–30% in either direction. Most companies don't even know which method their analytics tool uses. And different tools in the same company often use different denominators, which is why the numbers never agree.
3. Net revenue retention inflation
NRR above 100% is the metric most strongly correlated with SaaS company valuation. Above 110% is considered strong. Above 120% is exceptional. But NRR can be artificially inflated by forced plan upgrades, pricing increases applied without product changes, or billing corrections that look like expansion. If your NRR is 115% but most of that "expansion" came from a company-wide price increase rather than genuine product adoption, investors will see through it during due diligence.
4. LTV:CAC ratio distortion
The benchmark is 3:1. But this ratio is only as accurate as its inputs. If your LTV is calculated on a flawed churn rate, and your CAC excludes sales team salaries, benefits, and tooling costs, your "healthy" 4:1 ratio might actually be 1.5:1 when calculated correctly.
5. CAC payback period miscalculation
Under 12 months is the target. Above 18 months signals capital-intensive growth. But most SaaS companies calculate CAC payback on blended numbers rather than segmenting by channel, plan tier, or acquisition cohort. A blended 10-month payback might consist of organic at 3 months and paid at 22 months. Knowing the difference is the difference between scaling profitably and burning cash.
Part 4: How to fix it
The infrastructure required to get your metrics right is accessible, affordable, and does not require a team of data engineers.
- Centralise into a data warehouse. BigQuery or Snowflake. Connect Stripe, your CRM, product analytics, and support tools using automated pipelines (Airbyte, Fivetran, or Stitch). For a SaaS company doing $1M to $5M in ARR, the warehouse itself costs roughly $50–$200/month. This is not an enterprise-level investment.
- Define your metric definitions in code. Write SQL models that codify exactly what "MRR" means for your business — net of refunds? Annual contracts pro-rated monthly? Excluding one-time fees? Excluding free trials? These are business decisions that should be encoded in logic, not left to the interpretation of whichever analyst happens to pull the number. Tools like dbt make this process structured, version-controlled, and auditable.
- Separate voluntary from involuntary churn. Tag every cancellation by reason. Failed payments go into a dunning recovery bucket, not your product churn metric. This single separation changes most SaaS companies' churn picture dramatically.
- Build cohort-level LTV models. Stop using company-wide averages. Calculate LTV by acquisition channel, plan tier, and signup month. You'll discover that some cohorts are 3–5× more valuable than others. When you see this segmented, budget allocation becomes obvious instead of political.
- Automate the reporting layer. Connect your warehouse to a dashboard that updates daily. No more Monday morning spreadsheet reconciliation. One source of truth, updated automatically, visible to everyone who needs it. When the finance lead, the founder, and the board all see the same numbers from the same source, the conversation shifts from "whose spreadsheet is right?" to "what should we do about it?"
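As one illustration of the cohort-level approach above, a minimal sketch that groups realised revenue per customer by acquisition channel and signup month (the field names are assumptions; adapt them to your warehouse schema):

```python
from collections import defaultdict

def cohort_ltv(customers: list[dict]) -> dict:
    """Average realised revenue per customer by (channel, signup_month).

    Each record needs 'channel', 'signup_month', and 'revenue_to_date'.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for c in customers:
        key = (c["channel"], c["signup_month"])
        totals[key] += c["revenue_to_date"]
        counts[key] += 1
    return {key: totals[key] / counts[key] for key in totals}

customers = [
    {"channel": "organic", "signup_month": "2025-01", "revenue_to_date": 1_800},
    {"channel": "organic", "signup_month": "2025-01", "revenue_to_date": 2_200},
    {"channel": "paid", "signup_month": "2025-01", "revenue_to_date": 600},
]
ltvs = cohort_ltv(customers)
# {('organic', '2025-01'): 2000.0, ('paid', '2025-01'): 600.0}
```

In practice this logic belongs in a version-controlled SQL model (dbt or similar) rather than an ad hoc script, so the grouping definitions are as auditable as the metric definitions themselves.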
Final: What accurate metrics make possible
Board meetings become strategic. Instead of spending 30 minutes reconciling numbers and defending methodology, you spend the entire meeting discussing what the data means and what to do about it. One SaaS founder told me his board prep went from 12 hours to under two after centralising his data — not because the presentation got simpler, but because the numbers were defensible from a single source.
Budget allocation becomes evidence-based. When you can see that organic search customers have a 6:1 LTV:CAC ratio while paid social sits at 1.8:1, the budget conversation changes from opinion to math. You stop arguing about gut feelings and start arguing about thresholds.
Investor confidence compounds. Investors evaluate metric consistency across reporting periods. When your MRR definition changes between board meetings because different people pulled the number from different tools, it erodes trust even if the business is performing well. Clean, consistent, automated metrics signal operational maturity — and operational maturity is what separates companies that raise at strong multiples from those that scramble.
Revenue leakage becomes visible. When your contracted revenue, billed revenue, and collected revenue all live in the same warehouse, gaps become obvious. The $50K in unbilled overages, the three enterprise clients whose price increases were never applied, the dunning sequences that stopped running after a system update — these leaks only become visible when you have a unified data layer to expose them.
If your metrics are potentially wrong — and in my experience, about 70% of the time they need material correction — the SaaS Metrics Framework engagement I run starts with a full metric audit before building anything new. Two to three days of focused SQL work on your billing system. Clear findings, clear fixes.
Sources. Paddle "SaaS churn benchmarks" (2024); MGI Research "Revenue leakage report" (2025); SaaS Capital Q4 2025 benchmarks; author's audit sample of 12 UK/EU SaaS companies, seed to Series A, 2024–2026.
