Your Dashboard Isn’t Wrong - Your KPI Logic Is

A dashboard got called “wrong” in one of my meetings, and for a minute I thought we had a data issue. We didn’t. The refresh had run, the SQL hadn’t failed, the chart was pulling from the right table, and the totals were exactly what the logic told them to be. But finance was looking at one number, operations had another in a spreadsheet, and the dashboard was showing a third. Same business, same week, same metric name, different answers.

That was the moment I realised:

:::info
Most dashboard fights are not really about dashboards.
:::

They are about data definitions people thought were right but weren’t. Everyone says they want one source of truth, but the truth usually falls apart much earlier than the visual layer, and it differs from team to team: one team defines revenue by booking date, another by completion date, and nobody thinks that difference is important enough to document until they see the numbers diverge.

That is why I’ve become a lot more skeptical of complaints like “the dashboard is wrong.” Sometimes it is wrong. More often, it is doing exactly what it was built to do, and the real problem is that nobody agreed properly on what the number was supposed to mean in the first place.

Why this happened

Most KPI logic starts life in a messy way. It begins with a reasonable business question: how many active customers do we have, what was revenue last week, what is our conversion rate. Then somebody writes a query, somebody else copies it into a report, someone downstream changes one filter, and within a few months a metric that sounded quite simple has split into three unofficial versions.

Nobody plans for that to happen; it just does. I didn’t plan for it either. Marketing counted customers by login, Finance counted them by transaction, and Operations excluded paused accounts. Revenue got defined one way for finance and another way for operations, because both definitions are useful for different purposes. Eventually, all of them ended up on different dashboards with the same label, as the sketch below shows.

:::info
At that point, the dashboard was not doing analytics anymore. It was hosting a never-ending argument.
:::
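To make that split concrete, here is a minimal sketch of how three teams can each count “active customers” defensibly and still disagree. The table and column names (fact_logins, fact_transactions, dim_customers, is_paused) are illustrative, not from any real schema:

```sql
-- Marketing: a customer is active if they logged in within 30 days
select count(distinct customer_id) as active_customers
from fact_logins
where login_at >= current_date - interval '30 days';

-- Finance: a customer is active if they transacted within 30 days
select count(distinct customer_id) as active_customers
from fact_transactions
where transacted_at >= current_date - interval '30 days';

-- Operations: transacting customers, excluding paused accounts
select count(distinct t.customer_id) as active_customers
from fact_transactions t
join dim_customers c
  on c.customer_id = t.customer_id
where t.transacted_at >= current_date - interval '30 days'
  and c.is_paused = false;
```

All three queries are reasonable, and all three return a different number under the same metric name. None of them is a bug; the bug is that the difference was never written down.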
The part nobody admits

Professionals love to say they want “one source of truth,” but what they usually mean is “one source of data.” That is not the same thing.

:::warning
You can have one warehouse, one pipeline, and one BI tool (as in my case) and still have a mess on your hands if nobody agreed on the logic between the raw data and the metric shown to the business. That gap is where trust breaks and ambiguity creeps in.
:::

You see it when someone asks why the dashboard does not match a spreadsheet. You see it when a stakeholder says, “That’s not how we define churn.” You see it when a weekly report gets derailed by ten minutes of metric debate before anybody even talks about what changed.

The problem isn’t that people are too picky; it’s that the KPI was never stable enough to survive change with the business.

A simple test I use now

When a KPI keeps causing trouble, I ask four blunt questions:

1. What exactly are we counting? Customer, order, account, session, product, case, day? If the counting unit is fuzzy, the KPI is already unstable.
2. What gets excluded? Refunds, test accounts, cancelled records, internal traffic, duplicate rows, partial periods. If exclusions are not explicit, expect fights.
3. What date are we using? Booking date, event date, invoice date, resolution date, payment date. Half of KPI confusion comes from time logic that never got written down properly.
4. Who owns the definition? Not who built the query, but who has the authority to say, “This is the definition, this is when it changed, and this is the one version other reports should reuse.”

If those four questions do not have sharp answers, the metric is not ready for a dashboard.

What this looks like in practice

Let’s take completed revenue as an example. Every product team likes the sound of it, but almost nobody means the same thing by “completed revenue.”

Here is a simple version of how I would define it in code, if I wanted the logic to be clear enough that people could argue with the definition, not the dashboard.

```sql
with base_orders as (
    select
        order_id,
        customer_id,
        completed_at::date as order_day,
        gross_revenue,
        coalesce(refund_amount, 0) as refund_amount,
        status,
        is_test_order
    from fact_orders
),

kpi_ready_orders as (
    select
        order_id,
        customer_id,
        order_day,
        gross_revenue - refund_amount as net_revenue
    from base_orders
    where status = 'completed'
      and is_test_order = false
),

daily_completed_revenue as (
    select
        order_day,
        count(distinct order_id) as completed_orders,
        sum(net_revenue) as completed_revenue
    from kpi_ready_orders
    group by order_day
)

select *
from daily_completed_revenue
order by order_day;
```

That query is not clever; it is useful because it makes the assumptions visible.

:::tip
Completed means status = 'completed'.
Revenue means gross_revenue - refund_amount.
Test orders are excluded.
The grain is daily, based on completed_at.
:::

Now somebody can disagree properly. They can say, “We should use invoice date instead,” or “Refunds should be reported separately.” Fine, that is a real business discussion. What they should not be doing is discovering those assumptions accidentally after the dashboard is already live.

Dashboard layer

This is the flow I keep seeing: the complaint lands at the dashboard layer, because that is what people see, but the damage usually happened in the KPI definition layer.

That is the layer that decides:

- what counts
- what gets filtered out
- what date matters
- which edge cases are included
- whether the same metric name means the same thing everywhere

If that layer is weak, the dashboard has no chance. It can only display the confusion more neatly.

The measurable result

I did not measure success here by prettier charts or fewer comments about formatting. The real metric is how often the number gets challenged. A useful result statement for this kind of story is:

:::info
After moving disputed KPIs into a shared logic layer and forcing reports to reuse the same calculation path, metric clarification threads dropped by [X%] over [Y weeks], and recurring review time fell by [Z hours] per month.
:::

That is the outcome that matters: less debate, less rework, and faster decisions.
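To close the loop, here is a minimal sketch of what “moving the logic into a shared layer” can look like, assuming the daily_completed_revenue query above is published as a warehouse view. The view name and the dashboard query are illustrative, not a prescribed implementation:

```sql
-- Publish the agreed definition once, in the warehouse,
-- so every report reuses the same calculation path.
create or replace view kpi_daily_completed_revenue as
select
    completed_at::date as order_day,
    count(distinct order_id) as completed_orders,
    sum(gross_revenue - coalesce(refund_amount, 0)) as completed_revenue
from fact_orders
where status = 'completed'
  and is_test_order = false
group by completed_at::date;

-- The dashboard query becomes deliberately boring: it selects from
-- the shared view instead of redefining the metric one more time.
select order_day, completed_revenue
from kpi_daily_completed_revenue
where order_day >= current_date - interval '28 days'
order by order_day;
```

When someone wins the argument for invoice date or separate refund reporting, the change happens once, in the view, and every report that reuses it moves together.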