A Yann LeCun–Linked Startup Charts a New Path to AGI


If you ask Yann LeCun, Silicon Valley has a problem with groupthink. Since leaving Meta in November, the researcher and AI luminary has taken aim at the orthodox view that large language models (LLMs) will lead us to artificial general intelligence (AGI), the threshold at which computers will match or exceed human intelligence. Everyone, he declared in a recent interview, is "LLM-pilled."

On January 21, San Francisco-based Logical Intelligence appointed LeCun to its board. Building on a theory conceived by LeCun two decades ago, the startup claims to have developed a different form of AI, one better equipped to learn, reason, and self-correct.

Logical Intelligence has developed what is known as an energy-based model (EBM). While LLMs effectively predict the most likely next word in a sequence, EBMs absorb a set of constraints (say, the rules of sudoku) and complete a task within them. This method should eliminate errors and require less computation, since there is less trial and error.
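To make the contrast concrete, here is a minimal, hypothetical sketch of the energy-based idea in Python: define an energy function that counts constraint violations (here, sudoku row, column, and box rules) and search for a candidate grid that drives the energy to zero. Every name here is illustrative; this is a generic stochastic-search toy, not Logical Intelligence's actual architecture.

```python
import random

def energy(grid):
    """Count constraint violations: duplicate digits in any row,
    column, or 3x3 box. A valid solution has energy 0."""
    units = [[grid[r][c] for c in range(9)] for r in range(9)]            # rows
    units += [[grid[r][c] for r in range(9)] for c in range(9)]           # columns
    units += [[grid[3*br + r][3*bc + c] for r in range(3) for c in range(3)]
              for br in range(3) for bc in range(3)]                      # boxes
    return sum(9 - len(set(unit)) for unit in units)

def solve(puzzle, steps=200_000):
    """Toy energy minimization: fill each row with its missing digits
    (so rows stay conflict-free), then propose swaps of free cells
    within a row, accepting moves that lower the energy and,
    occasionally, ones that raise it, to escape local minima."""
    fixed = [[puzzle[r][c] != 0 for c in range(9)] for r in range(9)]
    grid = [row[:] for row in puzzle]
    for r in range(9):
        missing = [d for d in range(1, 10) if d not in grid[r]]
        random.shuffle(missing)
        for c in range(9):
            if grid[r][c] == 0:
                grid[r][c] = missing.pop()
    e = energy(grid)
    for _ in range(steps):
        if e == 0:
            return grid  # all constraints satisfied
        r = random.randrange(9)
        free = [c for c in range(9) if not fixed[r][c]]
        if len(free) < 2:
            continue
        c1, c2 = random.sample(free, 2)
        grid[r][c1], grid[r][c2] = grid[r][c2], grid[r][c1]
        e_new = energy(grid)
        if e_new <= e or random.random() < 0.005:
            e = e_new  # accept the move
        else:
            grid[r][c1], grid[r][c2] = grid[r][c2], grid[r][c1]  # reject
    return grid  # may still contain conflicts if the search stalls
```

The point of the sketch is the shape of the computation: the system scores whole candidate solutions against explicit constraints and descends toward zero energy, rather than generating an answer one token at a time.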

The startup’s debut model, Kona 1.0, can solve sudoku puzzles several times faster than the world’s leading LLMs, even though it runs on a single Nvidia H100 GPU, founder and CEO Eve Bodnia told WIRED in an interview. (In this test, the LLMs are barred from using coding capabilities that would let them brute-force the puzzle.)

Logical Intelligence claims to be the first company to build a working EBM, an idea that until now existed only in academic research. The aim is for Kona to address thorny problems such as optimizing energy grids or automating sophisticated manufacturing processes, in settings with zero tolerance for error. “None of these tasks are related to language. They are outside of language,” Bodnia said.

Bodnia expects Logical Intelligence to work closely with AMI Labs, a Paris-based startup recently launched by LeCun that is developing yet another form of AI: a so-called world model, designed to understand the physical world, maintain continuous memory, and anticipate the results of its actions. The road to AGI, Bodnia argues, starts with layering these different types of AI: LLMs will interface with people in natural language, EBMs will take on reasoning tasks, and world models will help robots move in 3D space.
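Bodnia's layering argument can be read as a pipeline. As a rough illustration only (the interfaces below are invented for this sketch, not anything the companies have published), it might look like this:

```python
from dataclasses import dataclass

# Hypothetical interfaces for the layered architecture Bodnia describes.
# None of these reflect real APIs from Logical Intelligence or AMI Labs.

@dataclass
class Task:
    constraints: list[str]  # formal constraints extracted from the request
    objective: str

def llm_interface(user_request: str) -> Task:
    """Language layer: turn a natural-language request into a formal task.
    (Stubbed; a real system would call an LLM here.)"""
    return Task(constraints=["demand <= capacity", "no line overload"],
                objective="minimize cost")

def ebm_reason(task: Task) -> dict:
    """Reasoning layer: search for a solution that satisfies the
    constraints at minimal energy. (Stubbed.)"""
    return {"schedule": "plant A on, plant B standby"}

def world_model_act(plan: dict) -> list[str]:
    """Embodiment layer: translate the plan into physical actions whose
    consequences a world model could predict. (Stubbed.)"""
    return [f"execute: {plan['schedule']}"]

if __name__ == "__main__":
    task = llm_interface("Balance tomorrow's grid load as cheaply as possible")
    plan = ebm_reason(task)
    for action in world_model_act(plan):
        print(action)
```

Each layer handles what it is suited for: language in and out at the edges, constraint-bound reasoning in the middle, and physical prediction where actions meet the world.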

Bodnia spoke to WIRED via videoconference from his office in San Francisco this week. The following interview has been edited for clarity and length.

WIRED: I’m going to ask about Yann. Tell me how you met, his role in directing Logical Intelligence research, and what his role on the board will entail.

Bodnia: Yann has a lot of academic experience as a professor at New York University, but he has also been exposed to real industry through Meta and other partners for many, many years. He has seen both worlds.

For us, he is the only expert in energy-based models and various types of related architectures. When we started working on this EBM, he was the only person I could talk to. He helped our technical team navigate some directions. He is very hands-on. Without Yann, I can’t imagine we would have scaled this fast.

Yann talks about the potential limitations of LLMs and which model architectures are likely to advance AI research. Where do you stand?

LLMs are a big guessing game. So you need a lot of computing. You take a neural network, feed it pretty much all the garbage from the internet, and try to teach it how people communicate with each other.

When you speak, your language appears intelligent to me, but not because of the language itself. Language is an expression of whatever is in your brain. My reasoning takes place in a kind of abstract space that I decode into language. I feel like people are trying to reverse engineer intelligence by simulating intelligence.


