For decades, SQL has been the universal language organizations use to query their data. Simple in its basic syntax yet powerful in its expressiveness, SQL has survived every technological wave—from the relational databases of the 1980s to modern data warehouses—remaining the fundamental tool developers, analysts, and systems of all kinds use to access structured information.
It’s no surprise, then, that when generative AI began transforming software development, SQL became one of the first areas of experimentation. The promise was compelling: AI agents capable of answering questions in natural language by automatically translating them into SQL queries, eliminating the technical barrier between those who have a question and those who know how to extract the answer from a database.
The problem no one expected
In theory, it works. In practice, something goes wrong.
Modern AI agents—even sophisticated ones like Claude—are actually very good at writing syntactically correct SQL. They can construct complex joins, advanced aggregations, and well-formed subqueries. The problem isn’t grammar. The problem is meaning.
An agent that doesn’t know your domain doesn’t know that the status column in the orders table follows a specific business logic. It doesn’t know that certain fields are deprecated and shouldn’t be used in current analyses. It doesn’t know that two seemingly similar tables represent radically different concepts in your organization’s context. The result? Queries that are technically flawless but return incorrect answers. And when faced with ambiguity, the agent doesn’t ask for clarification—it guesses.
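To make this concrete, here is a purely hypothetical illustration (the table and column values are invented, not taken from the talk). Suppose an agent is asked to count "active orders" against a schema where status holds numeric lifecycle codes:

```sql
-- Hypothetical schema: orders.status stores numeric codes
-- (1 = placed, 2 = shipped, 3 = cancelled), and the old
-- is_active flag is deprecated.

-- What an agent plausibly generates: syntactically valid,
-- but it guesses a string value that never occurs in the data.
SELECT COUNT(*) FROM orders WHERE status = 'active';

-- What the business actually means by "active orders":
SELECT COUNT(*) FROM orders WHERE status IN (1, 2);
```

Both queries run without error; only one answers the question. Nothing in the schema alone tells the agent which.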
This is exactly the problem organizations encounter when they try to build AI analysts on top of real, complex databases. The tool seems to work in demos, then fails in production—not with obvious errors, but with plausible yet incorrect answers, which are the most dangerous kind.
The solution: give the agent a map of the territory
The diagnosis, as Kris Jenkins explains, is clear: the agent knows SQL syntax, but it doesn’t know your software. It’s like hiring an expert in Italian grammar and expecting them to understand how your company operates.
The solution is equally clear: you need to teach it the domain. Not through improvised prompts or long informal instructions, but through a structured tool—the semantic model. A file that encodes all the tacit knowledge an experienced new hire would accumulate during their first six months: how tables relate to each other, what column names really mean, which types of queries make sense, which patterns are correct, and which should be avoided.
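As a rough sketch of what such a file can contain (this is an illustrative YAML layout with invented field names, not the actual OSI format), a semantic model might declare what tables mean, how they join, which columns are deprecated, and which query patterns are known to be correct:

```yaml
# Illustrative semantic-model sketch; the structure and field
# names here are assumptions, not the OSI standard's schema.
tables:
  orders:
    description: One row per customer order; source of truth for sales.
    columns:
      status:
        description: Numeric lifecycle code. 1 = placed, 2 = shipped, 3 = cancelled.
      is_active:
        deprecated: true
        note: Superseded by status; do not use in new analyses.
    relationships:
      - join: orders.customer_id = customers.id
        type: many_to_one
verified_queries:
  - question: How many active orders do we have?
    sql: SELECT COUNT(*) FROM orders WHERE status IN (1, 2)
```

The point of a structured file over ad-hoc prompt text is that it can be versioned, reviewed, and shared across every agent that touches the database.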
It’s the difference between handing someone a detailed map of the territory and letting them wander around hoping they’ll eventually find the right path.
In his talk at Codemotion Rome 2026, Jenkins will guide the audience through the practical details of semantic models and the OSI standard. Lead Developer Advocate at Snowflake and host of the Developer Voices podcast, he has also been CTO of a gold trading company, a Haskell specialist contractor, and a hackathon organizer. He will explain why standardization in this area is essential for scalability and demonstrate concrete techniques for building effective semantic models quickly.
The ultimate goal is both simple and ambitious: an AI-based database analyst that is reliable and effective from day one—not after months of fixes and prompt engineering.
Come and see it in person. The talk “Beyond SQL Generation: How to Teach Agents What Your Database Means” by Kris Jenkins will take place at Codemotion Rome 2026. If you work with AI agents, data, and databases, it’s a session you won’t want to miss.

