Users can perform ad-hoc analysis, at speed, using natural language.
Kinetica just introduced the first analytics database to integrate with ChatGPT. This gives users, even “citizen data scientists” and business owners, the opportunity to ask any question of their proprietary data.
I asked Nima Negahban, Cofounder and CEO at Kinetica, some questions leading up to the release of their new offering.
Explain how the conversational querying of databases works.
Conversational querying involves the use of a generative AI interface with a real-time analytic database. Generative AI can take natural language and convert it to SQL, which is then run, and an answer is returned. What's key is that the database must answer ad-hoc questions quickly, not just questions that are known in advance and optimized through tedious data engineering to come back quickly.
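The pipeline described above can be sketched in a few lines. This is a minimal illustration, not Kinetica's implementation: the `llm_to_sql` helper is a hypothetical stand-in for a generative AI call (a real system would send the question plus the table schema to the model), and the database is an in-memory SQLite table used purely for demonstration.

```python
import sqlite3

def llm_to_sql(question: str) -> str:
    """Hypothetical stand-in for a generative AI model that translates
    natural language into SQL. Hard-coded here for illustration only."""
    return ("SELECT region, SUM(amount) AS total FROM sales "
            "GROUP BY region ORDER BY total DESC")

def ask(conn: sqlite3.Connection, question: str):
    sql = llm_to_sql(question)           # natural language -> SQL
    return conn.execute(sql).fetchall()  # run the query, return the answer

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 120.0), ("West", 80.0), ("East", 30.0)])

print(ask(conn, "Which region has the highest total sales?"))
# [('East', 150.0), ('West', 80.0)]
```

The generative step is interchangeable; the point the answer makes is that the database behind it must execute whatever ad-hoc SQL comes back, quickly, without queries having been tuned in advance.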
How will conversational querying change how we query databases going forward?
Soon, the predominant way to query will be through natural language, not Structured Query Language (SQL). SQL is a skill that most people do not have, so many more people will now be able to ask questions of their data. People will come to expect that they can ask any question of their data and get immediate responses.
What are some of the business problems that the conversational querying of databases will solve?
Conversational querying can solve business problems by enabling intuitive and dynamic data exploration, expanding user access, and improving decision-making. By ingesting large amounts of streaming data, we are able to ensure answers include the most up-to-date information, for questions like, “What is the real-time status of our inventory, and should we reroute delivery vehicles to reduce out-of-stocks?”
Do you have some specific use cases?
Users can interact with data generated by IoT devices, such as sensor data, telemetry data, or device logs, in a conversational manner to monitor performance, identify anomalies, or trigger actions for remote operations or maintenance.
Within the supply chain, users can ask questions about inventory levels, supplier performance, or demand forecasts in a conversational manner to optimize supply chain operations, identify bottlenecks, or even solve for re-routing of delivery fleets in real time.
For fraud detection, users can leverage conversational querying to analyze complex data from diverse sources, such as transaction and log data, and apply graph analytics techniques to detect anomalies, patterns, or suspicious connections in real time.
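One of the graph techniques alluded to above can be shown concretely. This is an illustrative sketch under simple assumptions (an edge list of transfers between accounts, with one account already flagged), not Kinetica's method: a connected-component traversal surfaces every account linked, directly or indirectly, to the flagged one.

```python
from collections import defaultdict

def connected_component(edges, start):
    """Return the set of accounts reachable from `start`
    in an undirected transaction graph."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node] - seen)
    return seen

# Transfers between accounts; "mule1" is a flagged account (made-up data).
transfers = [("alice", "bob"), ("bob", "mule1"), ("carol", "dave")]
print(connected_component(transfers, "mule1"))
# the component containing the flagged account: alice, bob, mule1
```

In a conversational setting, a question like “which accounts are connected to this flagged one?” would be translated into the equivalent graph query and run against live transaction data.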
What's the biggest challenge a user will need to overcome to be successful querying their databases conversationally?
Putting a generative AI front end on a database doesn't by itself make it conversational. There will be some disillusionment when users are told they can only ask canned questions or have to wait hours for an answer to come back. Organizations are going to need to overcome technical debt associated with siloed data and analytics, batch architectures, and complex data pipelines that restrict analytic agility.
How will this make developers' lives simpler and easier?
Conversational querying will enable developers to interact with data sources using natural language, eliminating the need for complex query languages or code. This simplifies the querying process and reduces the need for developers to learn and use specialized query languages, making it more accessible and intuitive for non-experts. It also removes the burden of building and maintaining tedious data pipelines, allowing developers to spend less time on the plumbing of an analytic platform.
Is there anything else developers need to know about conversational querying of databases?
The future is here. Anyone can start trying it at kinetica.com/sqlgpt.