A new AI lab called Flapping Airplanes launched Wednesday with $180 million in seed funding from Google Ventures, Sequoia, and Index. The founding team is impressive, and the goal of finding a less data-hungry way to train large models is a particularly interesting one.
Based on what I’ve seen so far, I’d rate them a Level Two on the trying-to-make-money scale.
But there was something really exciting about the Flapping Airplanes project that I couldn’t quite put my finger on until I read this post from Sequoia partner David Cahn.
As Cahn describes it, Flapping Airplanes is one of the first labs to move beyond scaling, the relentless accumulation of data and compute that has defined much of the industry to this day:
The scaling paradigm argues for dedicating as much of society’s resources as is economically feasible to scaling up current LLMs, in the hope that this will lead to AGI. The research paradigm argues that we are still 2-3 research breakthroughs away from “AGI”-level intelligence, and that we should therefore dedicate resources to long-term research, especially projects that may take 5-10 years to complete.
(…)
A compute-first approach prioritizes cluster scale above all else, and favors short-term wins (on the order of 1-2 years) over long-term bets (on the order of 5-10 years). A research-first approach spreads its bets across time horizons, and must be willing to make many bets, each with a small probability of paying off, but which together expand the frontier of what is possible.
The compute-first camp may be right, and it may be pointless to focus on anything other than ever-larger server buildouts. But with so many companies already headed in that direction, it’s nice to see someone heading the other way.