
Supersimple CEO on the Future of Data Analytics and Explainable AI

Marko Klopets, CEO of Supersimple, discusses the state of data analytics, the role of explainable AI, and the future of business intelligence.



Efficiency and profitability are king in 2024, and data holds the context needed for winning decisions: how customers behave, how teams operate, and where revenue and expenses come from. Until recently, however, data teams were the sole gatekeepers of the data analytics and Business Intelligence (BI) work needed to unlock the critical insights buried within the business. With the rise of AI, it is now possible to keep track of more things than ever, test a near-infinite number of hypotheses each day, and lower the barrier for non-technical people to get answers to complex ad-hoc questions.


We spoke further with Supersimple co-founder and CEO Marko Klopets about the intersection of data analytics, business intelligence, and the concept of “explainable AI.”


Q: What is the state of today’s data analytics and business intelligence (BI) tools, and why are they not good enough for organizations?


A: These kinds of data tools are, in a sense, a very mature market – they’ve been around for several decades. Even though "business intelligence" sounds fancy, these tools are generally glorified dashboard-builders.


These legacy tools solve the problem of customizing report visuals quite well. The problem is that staring at static KPI dashboards isn't very useful. Accurate data-informed decisions instead require deep dives into complex ad-hoc questions. This is best done iteratively: testing hypotheses, comparing, zooming in, and then back out.


Over the past few years, the rest of the data landscape has grown significantly. The "modern data stack" means we now have countless tools for every little part of the data stack. A few years ago, databases couldn't handle more complex, ad-hoc analytical queries; today's robust cloud data warehouses scale up almost instantly and near-infinitely.

Despite these advances, the last mile of making data useful is still stuck in the early 2000s.


Non-technical people can't get answers without asking the data team. Data teams are swamped and cannot keep up with everyone's requests, which means people end up asking fewer questions and getting fewer insights.


For non-trivial work, those who can code fall back on something like a Jupyter Notebook. There, they start from a blank slate, often creating duplicate definitions and wasting time.


Q: What is a proposed solution to these issues, and why is it essential in today’s market? 


A: Back in 2021, we started building Supersimple to help people get real insights from their data. We doubled down on the bet that the actual value of data comes from answering deep, specific questions – not from static dashboards. We designed the platform around letting anybody on the team answer complex ad-hoc questions.


One of the challenges is that it is normally all too easy for non-data experts to make mistakes. We solve for that in a few different ways. First, we have a semantic data modeling layer that data teams use to clean up data and centralize definitions.
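To make the idea of a semantic layer concrete, here is a minimal sketch of what "centralizing definitions" can look like: the data team declares tables and metrics once, and every downstream question reuses those definitions. All names and the model format here are invented for illustration; this is not Supersimple's actual modeling layer.

```python
# Hypothetical semantic-layer definition (illustrative only).
# The data team writes this once; everyone else reuses it.
SEMANTIC_MODEL = {
    "entities": {
        "customers": {"table": "crm.customers", "primary_key": "id"},
        "invoices": {"table": "billing.invoices", "primary_key": "id"},
    },
    "metrics": {
        # "Revenue" is defined exactly once, so every chart and
        # every answer across the company agrees on what it means.
        "revenue": {"entity": "invoices", "expr": "SUM(amount)"},
        "active_customers": {
            "entity": "customers",
            "expr": "COUNT(DISTINCT id)",
            "filter": "status = 'active'",
        },
    },
}

def metric_expr(name: str) -> str:
    """Look up the one canonical SQL expression for a metric."""
    return SEMANTIC_MODEL["metrics"][name]["expr"]
```

Because every consumer goes through the same lookup, there is no way for two dashboards to silently disagree on what "revenue" means.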


Thanks to that same data model, we can also let users think at a higher abstraction layer than database joins. We built a small set of simple no-code data exploration steps that users can combine to accomplish complex things. This way, nobody needs to worry about accidentally duplicating rows or messing up their join keys.
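The composability described above can be sketched as a small pipeline of typed steps applied in order. The step names and shapes below are invented for illustration, not Supersimple's actual step set; the point is that each step is safe on its own, so stacking them can never duplicate rows or break a join.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical exploration steps (names invented for illustration).
@dataclass
class FilterStep:
    column: str
    value: object   # keep only rows where column equals this value

@dataclass
class CountByStep:
    column: str     # count rows per distinct value of this column

def run(steps, rows):
    """Apply each step in order over a list of row dicts."""
    result = rows
    for step in steps:
        if isinstance(step, FilterStep):
            result = [r for r in result if r[step.column] == step.value]
        elif isinstance(step, CountByStep):
            result = dict(Counter(r[step.column] for r in result))
    return result

rows = [
    {"plan": "pro", "active": True},
    {"plan": "pro", "active": False},
    {"plan": "free", "active": True},
]
run([FilterStep("active", True), CountByStep("plan")], rows)
# → {"pro": 1, "free": 1}
```

A user exploring data just appends, edits, or removes steps; the sequence itself is the full, readable history of how an answer was produced.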


At the same time, we want the same platform to work for technical teams, too. This is because it's important to have everyone on the same page, using the same definitions. Making it easy for people to collaborate from all across the company is also critical. This is why we also put a lot of effort into making it easy for data teams to empower others and answer their questions.


Q: How does GenAI fit into the picture to create more value concerning data analytics and BI tools? Talk about your concept of explainable AI.


A: There is an incredible number of companies currently doing the obvious thing – using large language models (LLMs) to generate SQL queries. Eighteen months ago, this made for a great Twitter demo. The problem with generating SQL using an LLM is two-fold.


First, people need to understand the data they're looking at before they can trust it and use it. If an experienced data scientist hands you a report whose results surprise you, you will ask follow-up questions! Before drawing conclusions, people want to know what the data means and where it comes from. Even if you're technical and can read SQL, debugging someone else's query isn't trivial: it's easier for me to write 20 lines of SQL than to read someone else's 20 lines and make sure they do exactly the right thing. And not everybody can read or write SQL.


Second, although LLMs have progressed greatly over the past few years, they are just not good at outputting accurate SQL that you can blindly trust.


We have a feature that lets you ask data questions in plain English. However, the way our platform handles this is very different from text-to-SQL. We have an ensemble of fine-tuned language models that interact with our no-code app to answer users' questions. The model effectively clicks buttons and leaves a trail in our UI. Anybody can then read through these steps, top to bottom, to understand precisely what it did.


Occasionally, the model will not do precisely what you wanted it to! Humans misspeak, and data requests can be complex to describe. If you need to change something or go deeper, we can give you a few options. Sometimes, it makes sense for you to use natural language again to ask a follow-up question. Other times, using a structured UI and clicking one or two buttons will make more sense. For example, I would much rather click the "remove" button on a filter step to remove a filter, rather than type out a complete sentence request.


Interestingly, this type of model also outperforms those just using GPT-4 to generate SQL. This is partly because the AI doesn't need to worry about all the database complexities involved. Much of that work is offloaded to our so-called "query engine," which turns the no-code steps into executable SQL.
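The division of labor described here can be sketched as a tiny compiler: the model only chooses high-level steps, and a deterministic "query engine" turns those steps into SQL. The step format and function below are invented for illustration and are not Supersimple's actual engine.

```python
# Hypothetical query engine: compiles no-code steps into SQL, so the
# language model never has to write SQL itself (illustrative only).
def compile_steps(table: str, steps: list[dict]) -> str:
    where, group, selects = [], [], ["*"]
    for step in steps:
        if step["kind"] == "filter":
            # repr() quotes string values; a real engine would use
            # bound parameters instead of string interpolation.
            where.append(f"{step['column']} {step['op']} {step['value']!r}")
        elif step["kind"] == "count_by":
            group = [step["column"]]
            selects = [step["column"], "COUNT(*) AS n"]
    sql = f"SELECT {', '.join(selects)} FROM {table}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    if group:
        sql += " GROUP BY " + ", ".join(group)
    return sql

compile_steps("invoices", [
    {"kind": "filter", "column": "status", "op": "=", "value": "paid"},
    {"kind": "count_by", "column": "plan"},
])
# → "SELECT plan, COUNT(*) AS n FROM invoices WHERE status = 'paid' GROUP BY plan"
```

Because SQL generation is deterministic, the model's output stays small, auditable, and easy to correct: fixing an answer means editing a step, not debugging a query.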


I think this kind of explainability in AI is critical in any workplace context. The fact that this can give you better model performance is a nice bonus!


Q: What is the future of BI, and why is it essential for developers to pay attention?


A: On the data stack side, we will see another re-bundling of many of the tools that the "modern data stack" recently unbundled. Evaluating, picking, and wiring together dozens of tools is infeasible for most data teams and companies.


In BI and analytics, we will see the role of technical people changing – for the better. Today, data teams in practice act as a translation layer between people and machines. We're finally starting to see a trend where technical folks can empower others to do this data work themselves, without quality dropping. That frees up their time for more strategic and exciting work – instead of the current constant firefighting.
