
Airtable is applying its data-first design philosophy to AI agents with Tuesday's debut of Superagent, a standalone research agent that deploys teams of specialized AI agents working in tandem to complete research tasks.
The technical innovation lies in how Superagent's orchestration maintains context. Early agent systems used simple model routing, where an intermediary filtered information between models. The Airtable orchestrator instead maintains full visibility into the entire execution journey: the initial plan, execution steps and sub-agent results. The result is what co-founder Howie Liu calls "a parallel journey" where the orchestrator makes all the decisions along the way.
"It ultimately comes down to how you take advantage of the model’s self-reflection capability," Liu told VentureBeat. Liu founded Airtable more than a dozen years ago with a cloud-based relational database at its core.
Airtable built its business on a singular bet: Software should adapt to how people work, not the other way around. That philosophy has fueled the growth of more than 500,000 organizations, including 80% of the Fortune 100, using its platform to build custom applications that fit their workflows.
The Superagent technology is an evolution of capabilities originally developed by DeepSky (formerly known as Gradient), which Airtable acquired in October 2025.
From structured data to free-form agents
Liu framed Airtable and Superagent as complementary form factors that address different business needs. Airtable provides the structured foundation, while Superagent handles unstructured research tasks.
"Obviously we start with a data layer. It goes by the name Airtable: It is a table of data," Liu said.
The platform evolved as scaffolding around the core database with workflow capabilities, automation, and interfaces that scale to thousands of users. "I think Superagent is a complementary form factor, less structured," Liu said. "These agents are, by nature, very free form."
The decision to create free-form capabilities reflects the industry’s learnings about using more capable models. Liu said that as models become smarter, the best way to use them is to have fewer restrictions on how they run.
How Superagent’s multi-agent system works
When a user submits a query, the orchestrator creates a visual plan that breaks complex research into smaller workstreams. Researching a company for investment, for example, might be divided into team research, funding history and competitive landscape analysis. Each workstream is delegated to a specialized agent that executes independently; the system coordinates their work, and each contributes its part to the whole.
While Airtable describes Superagent as a multi-agent system, it relies on a central orchestrator that plans, dispatches and monitors subtasks, a more controlled model than fully autonomous agents.
The sub-agent approach aggregates results without polluting the context of the main orchestrator. Superagent uses several frontier models for different sub-tasks, including models from OpenAI, Anthropic and Google.
This design solves two problems: It keeps context windows manageable by compiling cleaned results without pollution, and it enables adaptation during execution.
"Maybe it tried to do a research task in a certain way that didn’t work, didn’t find the right information, and then it decided to try something else," Liu said. "It knows it tried the first thing and it didn’t work. So it won’t make the same mistake again."
Why data semantics determine agent performance
From the builder's perspective, Liu argues that agent performance depends more on the quality of the data structure than on model selection or prompt engineering. He bases this on Airtable's experience building an internal data analysis tool.
Internal testing of the tool revealed that data preparation consumes more effort than agent configuration.
"We have found that the hardest part to get right is not the actual agent harness, but most of the special sauce has to do with massaging the data semantics," Liu said. "Agents really benefit from good data semantics."
The data preparation work focuses on three areas: structuring the data so agents can find the right tables and fields, documenting what the fields represent, and ensuring agents can use them reliably in queries and analysis.
What businesses need to know
For organizations evaluating multi-agent systems or building custom implementations, Liu's experience points to several technical priorities.
Data architecture comes before agent deployment. Internal experimentation showed that businesses should expect data preparation to consume more resources than agent configuration. Organizations with unstructured data or poor schema documentation will struggle with agent reliability and accuracy regardless of model sophistication.
Context management is critical. Simply stitching together different LLMs into an agent workflow is not enough: There must be a proper context orchestrator that maintains state and information with a view of the entire workflow.
Relational databases are important. A relational database architecture provides cleaner semantics for agent navigation than document stores or unstructured repositories. Organizations that standardize on NoSQL for performance reasons should consider maintaining relational views or schemas for agent consumption.
Orchestration requires planning capabilities. Just as a relational database uses a query planner to optimize results, agentic workflows require an orchestration layer that plans and manages results.
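The query-planner analogy above amounts to keeping one authoritative record the planner consults before every decision. This sketch is an assumption about the pattern, not Airtable's code; the class and task names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class OrchestratorState:
    """Single source of truth for the planning layer: the plan plus
    everything executed so far, consulted before each next step."""
    plan: list[str]
    steps: list[dict] = field(default_factory=list)

    def record(self, task: str, outcome: str) -> None:
        self.steps.append({"task": task, "outcome": outcome})

    def remaining(self) -> list[str]:
        # Only tasks that completed successfully drop off the plan,
        # so a failed step stays visible and can be re-planned.
        done = {s["task"] for s in self.steps if s["outcome"] == "ok"}
        return [t for t in self.plan if t not in done]

state = OrchestratorState(plan=["team research", "funding history"])
state.record("team research", "ok")
state.record("funding history", "failed")
```

Like a query planner's execution statistics, the `steps` log lets the orchestration layer optimize what it does next instead of blindly replaying the original plan.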
"So the punchline and the short version is that a lot of it comes down to a very well-planned and executed orchestration layer for the agent, and it can fully leverage the models for what they’re good at," Liu said.








