A dashboard is mostly a saved collection of queries with visual aids for interpreting them.
Staleness, which Majors alludes to, is one problem.
The visuals being a poor aid to interpretability is another.
Usually these problems coexist and compound: there are too many charts, half of them are out of date, half of them I don’t know how to interpret, and I’m not sure which half is which.
It would be interesting to revisit some of these problems in the AI era, because often the missing layer is a guide and runbook (which maybe an agent could provide) that works better than “page everyone who might know why this chart looks bad.”
Reminds me of another fun dashboard question, which is: “how many different ‘single sources of truth’ does your company have, and how many are true?”
Dashboards, like any reductive representation of reality, are inherently limited. They highlight certain aspects while obscuring others, capturing specific data points while ignoring others. This is similar to how a map simplifies a territory or an image represents, but is not, the actual object (like in Magritte's famous painting "The Treachery of Images").
The problem is that dashboards are static snapshots of a dynamic reality. They are built on past understanding, and as systems evolve, they can become outdated and misleading. This can happen in several ways; two common ones are:
- Data drift: The underlying meaning of the data changes. For example, a bug might cause app crashes to be misreported, rendering crash-related metrics unreliable.
- Blind spots: Dashboards can't capture what they're not designed to measure. If user needs shift, a dashboard focused on existing feature usage won't reveal those changes.
This limitation isn't unique to dashboards. Any projection from a higher-dimensional space (reality) to a lower-dimensional representation (dashboard) will inevitably lose information. The problem is exacerbated when the representation doesn't adapt to the changing reality.
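The projection argument can be made concrete with a toy sketch (names and numbers here are purely illustrative, not from any real dashboard): once you drop a dimension, two genuinely different system states can become indistinguishable in the view.

```python
# Toy illustration: projecting a 3-D "reality" down to a 2-D "dashboard view"
# loses information, so distinct system states can render identically.

def project(point):
    """Keep only the first two coordinates, discarding the third."""
    x, y, _ = point
    return (x, y)

# Hypothetical states: (signups, crashes, p99 latency in ms)
state_a = (10, 2, 50)
state_b = (10, 2, 950)  # same signups and crashes, wildly different latency

# A dashboard that only charts signups and crashes shows both states
# as the same picture, even though one system is in trouble:
assert project(state_a) == project(state_b)
```

The point isn't that the projection is wrong, just that it was chosen at design time, and the dimension it discards may be exactly the one that matters later.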
We’re working on a static site generator for data analysts called Evidence. It’s an alternative to conventional BI tools.
Procedurally generating pages from data and linking them together is a core part of the offering. In many applications this is far easier for users than presenting them with a conventional filter interface.
It’s mostly a light criticism of the Honeycomb perspective. All the value is presumably in the outbound links to thinking about dashboards that the author laments are being ignored.
Not sure about that... to me it's more that a dashboard has to be seen as some kind of ever-changing "scrap paper"... it shouldn't be a "product", it should always be evolving with your business / your goals / your tracking etc