
2026-04-23 / 9 MIN READ

Dashboard design for operators: what founders actually read


Most DTC exec dashboards are built to pass an imaginary peer review. They have twelve KPI tiles across the top, a dense sparkline grid, a leaderboard, and a geographic heatmap nobody has looked at twice. They were built by a data person for a data person, and the operator who actually has to make decisions off them opens the dashboard maybe twice a week for 90 seconds each time.

This is the contrarian take. The twelve-tile dashboard is a trap. What operators actually read is three to five numbers, a direction arrow, and one question answered. Everything else is noise masquerading as rigor.

I have been on both sides of this. Building dashboards that looked impressive and nobody read. Building dashboards that looked sparse and got pulled up in every Monday meeting. The second kind wins. Here is what I learned about why.

This essay fits into the warehouse-first analytics rebuild hub at stage 5, the dashboard layer. It pairs with the Looker Studio DTC templates decision log on which dashboards to build versus buy.

The default 12-tile exec dashboard, reconstructed:

| Metric   | Value  | Δ vs last week |
|----------|--------|----------------|
| Revenue  | $142K  | -3%            |
| Orders   | 1,483  | -2%            |
| AOV      | $95.75 | +1%            |
| CR       | 2.8%   | -0.1%          |
| CAC      | $42    | +$3            |
| LTV      | $128   | +$1            |
| ROAS     | 3.05   | -0.08          |
| Returns  | 4.1%   | +0.2%          |
| Repeat % | 38%    | +1             |
| New Cust | 1,240  | -18%           |
| Email %  | 22%    | +0.5           |
| Geo Top  | US     | -              |

12 equal-weight tiles. Eye bounces. Nothing is most important. Avg read time: 10 seconds.

Hiding data is the design. Everything else is below the fold.

The 90-second test

Open your current exec dashboard. Start a timer. Read it as if you were the founder checking on the business between meetings. Stop the timer when you have absorbed enough to feel informed.

For most dashboards I have audited, that number is 8 to 14 seconds. Not 90. The founder is not reading twelve tiles. They are glancing at three, maybe four, that grab their eye.

So the design question is not "what are the twelve most important metrics?" It is "what are the three numbers the founder actually glances at, and how do I make those three inform a decision instead of merely confirming that the business still exists?"

What founders actually read

From watching operators use dashboards in real meetings, the pattern is consistent.

Number 1: this week's revenue versus plan. Are we on track? Yes/no/by how much. One number, one delta, one arrow. If the answer is no, they want to know why. If the answer is yes, they move on.

Number 2: new customer count versus last week. Is acquisition still working? This is the leading indicator. Revenue can be propped up by repeat buyers for a couple of weeks while acquisition silently falls off a cliff.

Number 3: 90-day LTV of last month's acquisition cohort versus the prior cohort. Is the quality of the traffic we are buying degrading? This is the existential-risk number. Most DTC collapses I have watched followed the same script: this number trended down for six months and nobody was looking.

Three numbers. That is the full dashboard for 80 percent of operator-level decisions. Everything else is a deep-dive for when one of these three raises a flag.
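As a sketch, all three numbers fall out of a few lines of pandas. The weekly summary table and its column names here are hypothetical (not a standard schema), and the values are illustrative:

```python
# Sketch: computing the three operator numbers from a hypothetical
# weekly summary table. Column names are assumptions, not a standard.
import pandas as pd

weekly = pd.DataFrame({
    "week": pd.to_datetime(["2026-04-06", "2026-04-13", "2026-04-20"]),
    "revenue": [151_000, 146_000, 142_000],
    "plan": [150_000, 150_000, 150_000],
    "new_customers": [1_510, 1_505, 1_240],
})
# Monthly acquisition cohorts with their 90-day LTV (illustrative values).
cohorts = pd.DataFrame({
    "cohort_month": ["2026-01", "2026-02"],
    "ltv_90d": [100.0, 96.0],
})

latest, prior = weekly.iloc[-1], weekly.iloc[-2]

# 1. This week's revenue vs plan: one number, one delta, one arrow.
rev_vs_plan = latest["revenue"] - latest["plan"]

# 2. New customers vs last week: the leading indicator for acquisition.
new_cust_wow = (latest["new_customers"] - prior["new_customers"]) / prior["new_customers"]

# 3. 90-day LTV of the newest cohort vs the prior one: traffic quality.
ltv_delta = cohorts["ltv_90d"].iloc[-1] - cohorts["ltv_90d"].iloc[-2]

print(f"Revenue vs plan: {rev_vs_plan:+,.0f}")    # -8,000
print(f"New customers WoW: {new_cust_wow:+.0%}")  # -18%
print(f"Cohort LTV delta: {ltv_delta:+.0f}")      # -4
```

Everything else on the page is drill-down material for when one of these three raises a flag.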

The tile-grid fallacy

The default dashboard pattern is a grid of KPI tiles at the top: Revenue, Orders, AOV, Conversion Rate, CAC, LTV, ROAS, Return Rate. Each tile has a number, a delta, and a sparkline.

The problem is that a grid flattens priority. Twelve tiles say "these are all equally important." Nothing is more important than anything else, so nothing is important at all. The eye bounces between them looking for the one that matters.

The fix is to stop using a grid. Put one number at the top, huge. Put two supporting numbers underneath it, smaller. Everything else goes below the fold or in a separate "details" page that the operator never opens.

This feels wrong. It feels like you are hiding data. You are. You are hiding data the operator was not going to read anyway, in service of the data they will.

The "so what" column

Every number on an operator dashboard should answer "so what." If revenue is down 3 percent week-over-week, the dashboard should either tell me why or tell me that it is noise.

The design primitive I use: every KPI tile has a one-sentence interpretation underneath the number. Not a rule-engine-generated explanation. A human-written sentence that says "this is noise" or "this is the promo ending" or "this is concerning."

This week's revenue: $142K (-3% vs last week)
  ↳ Promo ended Tuesday. Expected bounce-back next week.

New customers: 1,240 (-18% vs last week)
  ↳ Meta ad spend paused Friday. This is the consequence.

90-day cohort LTV: $96 (-$4 vs prior cohort)
  ↳ Flat month over month. Still in the healthy band.

The interpretation is the dashboard. The number is just the evidence. Most dashboards skip the interpretation and ask the reader to do the thinking every time they load the page, which is why the dashboard gets loaded twice a week instead of daily.

Is this more work? Yes. Someone has to write the interpretation line each week. That is a 10-minute discipline. It is also what separates a dashboard that gets used from a dashboard that gets ignored.
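One way to make the discipline structural is to make the interpretation a required field, so a tile literally cannot render without its human-written sentence. A minimal sketch; the `Tile` class is hypothetical, not a real library:

```python
# Sketch of the tile-plus-interpretation primitive. The "so what" line
# is written by a human each week; the code only refuses to render
# without it.
from dataclasses import dataclass

@dataclass
class Tile:
    label: str
    value: str
    delta: str
    so_what: str  # human-written weekly; never rule-engine-generated

    def render(self) -> str:
        if not self.so_what.strip():
            raise ValueError(f"{self.label}: interpretation line is required")
        return f"{self.label}: {self.value} ({self.delta})\n  ↳ {self.so_what}"

tile = Tile(
    label="This week's revenue",
    value="$142K",
    delta="-3% vs last week",
    so_what="Promo ended Tuesday. Expected bounce-back next week.",
)
print(tile.render())
```

The design choice worth keeping is the hard failure on a missing `so_what`: the tile without its sentence is exactly the dashboard that gets loaded twice a week instead of daily.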

The charts that earn their keep

Not all charts. A few.

The trend line for the single most important metric. Usually weekly revenue over the last 13 weeks. The founder's eye reads the line shape (up, flat, down, wobble) before reading any numbers.

The cohort LTV grid. Cohort LTV from Shopify raw data in a simple month-by-horizon grid. The founder scans the most recent rows and the most recent columns. The shape of the grid (numbers getting bigger from left to right, newer rows starting at the same place as older ones) is the story.
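A sketch of how that month-by-horizon grid falls out of a flat orders table with pandas. The table and its column names are assumptions and the numbers are toy values; the point is the pivot-then-cumsum shape:

```python
import pandas as pd

# Flat orders table (toy values); customer_id / order_month / revenue
# are assumed column names, not a real Shopify export schema.
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "order_month": pd.PeriodIndex(
        ["2026-01", "2026-02", "2026-01", "2026-03", "2026-02"], freq="M"),
    "revenue": [60.0, 40.0, 80.0, 30.0, 70.0],
})

# A customer's cohort is the month of their first order.
orders["cohort"] = orders.groupby("customer_id")["order_month"].transform("min")
# Horizon = whole months since acquisition (0, 1, 2, ...).
orders["horizon"] = (orders["order_month"] - orders["cohort"]).apply(lambda off: off.n)

# Revenue by cohort x horizon, cumulated left to right, per customer.
# Caveat: fill_value=0 keeps a young cohort's future columns flat rather
# than blank; a production version would mask horizons not yet elapsed.
cohort_size = orders.groupby("cohort")["customer_id"].nunique()
grid = (orders.pivot_table(index="cohort", columns="horizon",
                           values="revenue", aggfunc="sum", fill_value=0)
              .cumsum(axis=1)
              .div(cohort_size, axis=0))
print(grid)  # rows: cohort month, columns: months since acquisition
```

Reading left to right, numbers grow as cohorts mature; reading top to bottom, newer rows should start where older ones did. Divergence in either direction is the story.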

The CAC-payback curve. How many days does it take for a new customer's cumulative revenue to exceed their acquisition cost? The number that matters is "payback day," the X-intercept of the curve. The curve itself is not what the operator reads; they read "payback at day 72" versus "payback at day 84 last quarter."
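Reading payback day off the curve is just a first-crossing search. The curve below is made up (a linear ramp), chosen only so the crossing lands at the day-72 figure used as an example above:

```python
import numpy as np

cac = 42.0  # blended acquisition cost (illustrative)
# Cumulative revenue per new customer over days 0..119: a made-up linear
# ramp starting at the first-order contribution.
days = np.arange(120)
cum_revenue = 12.0 + 0.42 * days

# First day the cumulative curve crosses CAC. Assumes the curve does
# cross within the window (np.argmax on all-False would wrongly give 0).
payback_day = int(np.argmax(cum_revenue >= cac))
print(f"payback at day {payback_day}")
```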

That is most of the charts. Not twelve. Three.

What to kill, mercifully

Things I remove from every operator dashboard I audit:

  • Any tile whose number does not drive a weekly decision
  • Geographic heatmaps (on the exec dashboard; build a separate one for the marketing team)
  • Funnel diagrams (Funnel numbers are useful, but the Sankey diagram is not the way to present them)
  • Leaderboards of individual SKUs or campaigns (belongs on a product or marketing dashboard, not on the exec one)
  • "This month versus last month" comparisons where both periods include a major promo (comparing two noisy numbers is noise)
  • Any KPI with fewer than 12 months of history (you do not know what "normal" looks like yet)

This is unpopular. Every removed tile had someone who wanted it. That is fine. The test is not "does someone want this" but "does the founder read it weekly." Most removed tiles fail the test.

The worst dashboard is the one that technically has every answer. The best one is the one that makes three decisions easy.

The "make one decision" dashboard pattern

A pattern I keep coming back to. Every operator dashboard should be designed to make exactly one decision easy.

  • The exec dashboard: "are we on track, and if not, what do we need to address first?"
  • The marketing dashboard: "where should we put next week's spend?"
  • The merchandising dashboard: "what should we promote, what should we discount, what should we cull?"
  • The retention dashboard: "is the repeat-rate curve healthy?"

Four dashboards. Each makes one decision easy. None tries to make all four.

The reason this works: operators do not make decisions from dashboards. They confirm decisions with dashboards. The decision is already half-formed in their head. The dashboard's job is to give the confirmation quickly and make the next step obvious. A dashboard that tries to serve too many decisions serves none of them well.

FAQ

What if different stakeholders want different numbers?

Build different dashboards. Do not try to serve the CEO, the CFO, the CMO, and the CTO from one dashboard. Each person's question is different, and the dashboard should be designed to answer their question in 30 seconds. Shared infrastructure underneath (same warehouse, same dbt models); different dashboards on top.

How do I get buy-in to remove tiles?

Run the 90-second test with the stakeholder watching. Walk them through the current dashboard, then through a hypothetical three-number version. Most people recognize that the first one is a cognitive tax. The political work is easier than it feels.

Isn't more data always better?

For the underlying warehouse, yes. For the dashboard, no. More data on a dashboard increases cognitive load and decreases read frequency. The warehouse should have everything; the dashboard should surface the subset the operator needs for the decision they are making.

What about drill-down from summary to detail?

Good drill-down is great. Bad drill-down is worse than no drill-down. If the drill-down takes more than one click and three seconds, nobody uses it. Metabase and Hex handle drill-down well; Looker Studio's click-through-to-another-report pattern is workable but clunky.

Should the exec dashboard be live or refreshed daily?

Daily is fine for 90 percent of DTC exec decisions. Live monitors belong on separate pages for live-sensitive decisions (a flash sale, a viral moment). Trying to make the exec dashboard live usually adds cost and latency without adding value.

What to try this week

Pull up your current exec dashboard. Run the 90-second test. Note which three tiles your eye actually lands on. Then build a new version with only those three tiles plus one trend line, and put a one-sentence interpretation under each. Ship it alongside the old one for two weeks and ask the founder which they open.

If the 90-second test reveals that your current dashboard has no clear lead number, the deeper issue is that the three-number operator framework has not been adopted in the first place. A DTC Stack Audit helps scope the warehouse-plus-dashboard rebuild that makes that framework executable.

Sources and specifics

  • The 8-to-14-second dashboard-read observation is from watching operators in real meetings across multiple engagements, not a published benchmark.
  • The three-number framework (revenue vs plan, new customers, cohort LTV direction) is what held up across engagements; your brand may need a different three.
  • Cohort LTV grid reading pattern draws from cohort LTV from Shopify raw data.
  • The "make one decision" framing is borrowed loosely from the Amazon PR/FAQ culture and adapted to dashboard design.
  • Pattern drawn from the Q1 2026 analytics engine case study, where a handful of well-designed dashboards replaced multiple browser tabs of weekly CSV exports.


Let us talk

If something in here connected, feel free to reach out. No pitch deck, no intake form. Just a direct conversation.

Get in touch.