Databricks’ serverless database cuts app development from months to days as companies prepare for agentic AI



Five years ago, Databricks coined the term ‘data lakehouse’ to describe a new type of data architecture that combines a data lake with a data warehouse. Both the term and the architecture are now common across the data industry for analytics workloads.

Now, Databricks is looking to create a new category with its Lakebase service, which is generally available today. While the data lakehouse construct deals with OLAP (online analytical processing) databases, Lakebase is about OLTP (online transaction processing) and operational databases. The service has been in preview since June 2025 and is built on technology from Databricks’ acquisition of PostgreSQL database provider Neon. It was further enhanced in October 2025 with the acquisition of Mooncake, which brings capabilities that help link PostgreSQL to lakehouse data formats.

Lakebase is a serverless operational database that represents a fundamental rethinking of how databases work in the age of autonomous AI agents. Early adopters, including easyJet, Hafnia and Warner Music Group, have cut application delivery times by 75 to 95%, but the more profound shift is architectural: Lakebase positions databases as ephemeral, self-service infrastructure that AI agents can provision and manage without human intervention.

This is not just another managed Postgres service. Lakebase treats operational databases as lightweight, disposable compute running on data lake storage, rather than as monolithic systems that require careful capacity planning and database administrator (DBA) oversight.

"In fact, to break the vibe coding trend, you need developers to believe that they can actually create new apps very quickly, but you also need the central IT team, or DBAs, to be comfortable with the tsunami of apps and databases," Databricks co-founder Reynold Xin told VentureBeat. "Classic databases don’t really scale to that because they can’t afford to put a DBA per database and per app."

92% faster delivery: From two months to five days

Beyond the agentic vision, the production numbers show immediate impact. Hafnia cut delivery time for production-ready applications from two months to five days – a 92% reduction – by using Lakebase as the transaction engine for its internal operations portal. The shipping company has moved beyond static BI reports to real-time business applications for fleet, commercial and finance workflows.

EasyJet consolidated more than 100 Git repositories into just two and cut development cycles from nine months to four – a 56% reduction – while building a web-based revenue management hub on Lakebase to replace a decade-old desktop app and one of Europe’s largest legacy SQL Server environments.

Warner Music Group moves insights directly into production systems on a unified foundation, while Quantum Capital Group uses Lakebase to maintain consistent, managed data for identifying and evaluating oil and gas investments – eliminating the data duplication that previously forced teams to maintain multiple copies in different formats.

The acceleration comes from eliminating two major bottlenecks: cloning databases for test environments and maintaining ETL pipelines that sync operational and analytical data.

Technical architecture: Why it’s not just managed Postgres

Traditional databases couple storage and compute: organizations provision a database instance with built-in storage and scale by adding more instances or more storage. AWS Aurora innovated by separating these layers with a proprietary storage layer, but that storage remains locked inside AWS’s ecosystem and is not independently accessible for analytics.

Lakebase takes the separation of storage and compute to its logical conclusion by placing storage directly in the data lakehouse. The compute layer runs essentially vanilla PostgreSQL – maintaining full compatibility with the Postgres ecosystem – but every write lands in lakehouse storage in formats that Spark, Databricks SQL and other analytics engines can query directly, without ETL.
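
To make that concrete, here is a minimal sketch of what the pattern looks like from a developer’s seat: a transactional write through an ordinary Postgres driver, followed by an analytical read of the same table from a SQL warehouse. Everything here is hypothetical – the hostnames, credentials and the orders table are invented, the sketch uses psycopg2 and the databricks-sql-connector as illustrative stand-ins, and the exact freshness and sync semantics are product details this sketch does not capture.

```python
# Illustrative sketch only -- hostnames, credentials and the orders table are hypothetical.
import psycopg2                      # standard Postgres driver: the OLTP side speaks vanilla Postgres
from databricks import sql as dbsql  # databricks-sql-connector: the analytics side

# 1) Transactional write over the ordinary Postgres wire protocol.
#    (psycopg2's connection context manager commits the transaction on exit.)
with psycopg2.connect(
    host="my-lakebase-instance.example.com",  # hypothetical endpoint
    dbname="appdb", user="app_user", password="...", sslmode="require",
) as pg:
    with pg.cursor() as cur:
        cur.execute(
            "INSERT INTO orders (customer_id, amount) VALUES (%s, %s)",
            (42, 99.50),
        )

# 2) Analytical read of the same data from a SQL warehouse -- no ETL pipeline
#    in between, because the storage layer already lives in the lakehouse.
with dbsql.connect(
    server_hostname="my-workspace.cloud.databricks.com",  # hypothetical workspace
    http_path="/sql/1.0/warehouses/abc123",               # hypothetical warehouse
    access_token="...",
) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id")
        for row in cur.fetchall():
            print(row)
```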

"The unique technical insight is that data lakes decouple storage from computing, which is good, but we need to introduce data management capabilities such as data lake management and transaction management," Xin explained. "We’re actually not that different from the lakehouse concept, but we’re building lightweight, ephemeral computing for OLTP databases on top."

Databricks built Lakebase on technology from the Neon acquisition, but Xin emphasized that the company is significantly expanding on Neon’s original capabilities to create something fundamentally different.

"They don’t have business experience, and they don’t have cloud scale," Xin said. "We brought the novel architectural ideas of the Neon team with the infrastructure strength of Databricks and combined them. So now we have created a super scalable platform."

From hundreds of databases to millions: Built for agentic AI

Xin outlined a vision, tied directly to the economics of AI coding tools, that explains why Lakebase’s design goes beyond today’s use cases. As development costs drop, businesses will shift from buying hundreds of SaaS applications to building millions of internal applications.

"As the cost of software development decreases, which we are seeing today due to AI coding tools, it will shift from the proliferation of SaaS in the last 10 to 15 years to the proliferation of in-house application development," Xin said. "Instead of building perhaps hundreds of applications, they will build millions of customized apps over time."

This creates a fleet-management problem that is impossible to solve with traditional methods. You cannot hire enough DBAs to manually provision, monitor and troubleshoot thousands of databases. Xin’s solution: treat database management itself as a data problem rather than an operational one.

Lakebase stores all telemetry and metadata – query performance, resource usage, connection patterns, error rates – directly in the lakehouse, where it can be analyzed with standard data engineering and data science tools. Instead of configuring database-specific monitoring dashboards, data teams query the telemetry with SQL or run machine learning models over it to spot outliers and predict issues.
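
As a sketch of what treating fleet health as an analytics problem could look like, the snippet below flags databases whose latency or error rate deviates sharply from fleet norms using a simple z-score. The column names, sample rows and threshold are all invented for illustration; in practice the telemetry would be read from lakehouse tables rather than constructed in-line.

```python
# Hypothetical fleet-telemetry analysis: flag outlier databases with a z-score.
import pandas as pd

# In a real deployment this frame would be loaded from telemetry tables
# in the lakehouse; these rows are made up for illustration.
telemetry = pd.DataFrame({
    "database":       ["db-001", "db-002", "db-003", "db-004", "db-005"],
    "p95_latency_ms": [12.0, 15.0, 11.0, 240.0, 14.0],
    "error_rate":     [0.001, 0.002, 0.001, 0.09, 0.001],
})

# Standard-score each metric against the fleet and flag large deviations.
for metric in ["p95_latency_ms", "error_rate"]:
    z = (telemetry[metric] - telemetry[metric].mean()) / telemetry[metric].std()
    telemetry[f"{metric}_outlier"] = z.abs() > 1.5  # threshold is a tunable assumption

# Show only the databases that misbehave on at least one metric.
print(telemetry[telemetry.filter(like="_outlier").any(axis=1)])
```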

"Instead of creating a dashboard for every 50 or 100 databases, you can look at the chart to see if there is bad behavior," Xin explained. "Database management is very similar to an analytics problem. You look at outliers, you look at trends, you try to understand why things happen. This is how you manage scale when agents create and destroy databases programmatically."

The implications extend to the autonomous agents themselves. An AI agent experiencing performance issues can query the telemetry to diagnose the problem – treating database operations as just another data analysis task rather than one requiring specialized DBA knowledge. Database management becomes something agents can do for themselves, using the same data analysis capabilities they already have.

What this means for enterprise data teams

Lakebase’s design signals a fundamental change in how enterprises think about operational databases – not as precious, carefully managed infrastructure requiring specialized DBAs, but as ephemeral, self-service resources that scale programmatically like cloud compute.

That shift matters whether or not autonomous agents arrive as quickly as Databricks imagines, because the architecture’s underlying principle – treating database management as an analytics problem rather than an operational one – changes the skill sets and team structures that businesses need.

Data leaders should pay attention to the convergence of operational and analytical data happening across the industry. When writes to an operational database can be queried by analytics engines without ETL, the traditional boundary between transactional systems and data warehouses breaks down. This unified architecture reduces the operational overhead of maintaining separate systems, but it also requires rethinking data team structures that were built around those boundaries.

When the lakehouse was launched, competitors rejected the concept before adopting it themselves. Xin expects a similar trajectory for Lakebase.

"It just makes sense to separate storage and compute and put all the storage in the lake — it enables more capabilities and possibilities," he said.


