Saturday, March 7, 2026

Yann LeCun-affiliated startup charts a new path to AGI


If you ask Yann LeCun, Silicon Valley has a groupthink problem. The researcher and AI luminary left Meta in November, at odds with the orthodox view that large language models (LLMs) will lead us to artificial general intelligence (AGI), the threshold at which computers match or exceed human intelligence. Everyone, he declared in a recent interview, was “swamped with LLMs.”

On January 21, San Francisco-based startup Logical Intelligence appointed LeCun to its board of directors. Drawing on a theory LeCun pioneered two decades ago, the startup claims to have developed a different form of artificial intelligence, one better equipped to learn, reason, and self-correct.

Logical Intelligence has developed what it calls an Energy-Based Reasoning Model (EBM). While LLMs effectively predict the most likely next word in a sequence, EBMs take in a set of constraints – say, the rules of Sudoku – and solve the task within those limits. This approach is designed to eliminate errors and require far less computation, because it involves less trial and error.

The startup’s debut model, Kona 1.0, can solve Sudoku puzzles many times faster than the world’s leading LLMs, even though it runs on just a single Nvidia H100 GPU, founder and CEO Eve Bodnia says in an interview with WIRED. (For this test, the LLMs were not allowed to use coding abilities that would let them “brute force” the puzzle.)

Logical Intelligence claims to be the first company to build a working EBM, an idea that previously existed only in academic research. The plan is for Kona to solve thorny problems, such as optimizing power grids or automating sophisticated manufacturing processes, in settings with no tolerance for error. “None of these tasks are related to language. It’s nothing to do with language,” says Bodnia.

Bodnia expects Logical Intelligence to work closely with AMI Labs, a Parisian startup recently founded by LeCun that is developing yet another form of artificial intelligence: a so-called world model, designed to understand physical space, demonstrate persistent memory, and predict the consequences of its actions. Bodnia says the path to AGI begins with layering different types of artificial intelligence: LLMs will communicate with humans in natural language, EBMs will handle reasoning tasks, and world models will help robots take actions in 3D space.

This week, Bodnia spoke to WIRED via video conference from her office in San Francisco. The following interview has been edited for length and clarity.

WIRED: I should ask about Yann. Tell me about how you met, his role in leading research at Logical Intelligence, and what his role will be on the board.

Bodnia: Yann has extensive academic experience as a professor at NYU, but he also has many, many years of exposure to real industry through Meta and elsewhere. He has seen both worlds.

For us, he is the foremost expert in energy-based models and the various related architectures. When we started working on this EBM, he was the only person I could talk to. He helps our technical team take specific directions. He has been very, very committed. Without Yann, I can’t imagine us growing so quickly.

Yann talks openly about the potential limitations of LLMs and about which model architectures are most likely to accelerate AI research. Where do you stand?

LLMs are a huge guessing game. That’s why you need so much compute. You take a neural network, feed it almost all the junk on the internet, and try to teach it how people communicate with each other.

When you speak, your language is intelligible to me, but not because of the language itself. Language is a manifestation of what is in your brain. My reasoning takes place in some abstract space that I then decode into language. I feel like people are trying to reverse-engineer intelligence by imitating it.
