
Workshop: Faster Analytics with Metabase & AI

45 minutes

Guest

Stephen Tracy

CDO & Udemy Course Creator

Stephen is a data and analytics practitioner with 15+ years of experience across data science, market research, and AI. He’s a former Chief Data Officer at North One and co-founder of Milieu Insight, and has taught tens of thousands of students through his data courses. He also hosts the Empirical Storytellers podcast, where he talks with data and AI leaders about how decisions actually get made.

Summary

Most centralized data teams hit the same bottleneck: questions come in through Slack, become tickets, and move through a familiar cycle of scoping, SQL, charting, and follow-up requests. A lot of analyst time goes to routine questions that are important but repetitive.

In this workshop, Stephen Tracy showed how to shorten that loop with Metabase and AI. Stakeholders ask questions in plain language, Metabot generates queries and charts, and analysts step in where judgment matters most: definitions, edge cases, and messy joins.

The setup was intentionally practical: Metabase Open Source in Docker, Slack connected through Metabot, and an external model provider like Anthropic (with OpenRouter as another option).
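
For reference, a minimal local setup along those lines might look like the sketch below. This is not the exact sequence from the session: the image name and port are Metabase's standard defaults, and the provider keys are placeholders you'd supply yourself.

```bash
# Run Metabase Open Source locally (official image, default port 3000)
docker run -d -p 3000:3000 --name metabase metabase/metabase

# Remaining steps happen in the Metabase admin UI, not on the command line:
# - connect your database
# - enable Metabot and paste your model provider key
#   (e.g. an Anthropic API key, or an OpenRouter key)
# - install the Metabase Slack app so questions can be asked in-channel
```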

One core takeaway: scope matters more than coverage. Giving Metabot access to your entire schema usually hurts both quality and speed. A curated collection of trusted dashboards and saved questions gives it better context and leads to more reliable answers.

Stephen also emphasized that the semantic layer is the real foundation. In Metabase Data Studio, clear table and column descriptions, defined relationships, and shared business language (segments, measures, glossary terms) make AI outputs more accurate and easier to trust.
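
As an illustration, descriptions can also be set programmatically through Metabase's REST API instead of by hand. A sketch, with placeholder credentials and a hypothetical table ID; the session endpoint and `X-Metabase-Session` header are part of Metabase's standard API, and whether to script this or work directly in Data Studio is a judgment call:

```bash
# Authenticate and capture a session token (placeholder credentials; requires jq)
TOKEN=$(curl -s -X POST http://localhost:3000/api/session \
  -H "Content-Type: application/json" \
  -d '{"username": "admin@example.com", "password": "secret"}' \
  | jq -r '.id')

# Attach a human-readable description to a table (hypothetical table ID 12)
curl -s -X PUT http://localhost:3000/api/table/12 \
  -H "Content-Type: application/json" \
  -H "X-Metabase-Session: $TOKEN" \
  -d '{"description": "One row per completed order. Excludes test accounts."}'
```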

Inside Metabase, you can ask questions and get back queries and charts quickly, but useful results still need review and saving. In Slack, the workflow becomes more collaborative: replies stay in-thread, and each answer links back to the underlying Metabase question so teams can inspect, reuse, and improve it.

For sharing outcomes, Stephen highlighted Documents as a better fit than dashboards for fast-moving analysis. Dashboards are best for stable monitoring; Documents work better for evolving investigations where you need charts, narrative context, and iteration in one place.

He closed with practical guidance on model selection and operations: smaller models can struggle with real business questions, larger ones can be unnecessarily expensive, and mid-sized models (like Claude Sonnet) are often a strong balance. If things fail, token or rate limits are a common culprit, and local installs benefit from more memory and richer logging.
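
On the local-install point, a common way to give a Docker-based Metabase more headroom is through the JAVA_OPTS environment variable, which Metabase's docs also use for pointing the JVM at a custom log4j2 config. The heap size and paths below are illustrative values, not recommendations from the session:

```bash
# More JVM heap plus a custom log4j2 config for richer logging
# (heap size and file paths are examples, not tuned values)
docker run -d -p 3000:3000 --name metabase \
  -e JAVA_OPTS="-Xmx4g -Dlog4j.configurationFile=file:/metabase/log4j2.xml" \
  -v /path/to/log4j2.xml:/metabase/log4j2.xml \
  metabase/metabase
```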