Friday, March 20, 2026

The Godmother of Artificial Intelligence Wants Everyone to Be a World Builder


According to tech experts and professional skeptics fascinated by the market, the AI bubble has burst and winter is back. Fei-Fei Li doesn't believe it. In fact, Li, who has earned the nickname "the godmother of AI," is betting on the opposite. She is on partial leave from Stanford University to cofound a company called World Labs. While today's generative AI is built on language, the company envisions a frontier where systems construct complete worlds with the physics, logic, and rich detail of our physical reality. That's an ambitious goal, and despite the naysayers who insist that progress in AI has stalled, World Labs is on the fast track to funding. The startup is probably a year away from having a product, and it's not at all clear how well it will work, or when or whether it will appear at all, but investors have put in $230 million and reportedly value it at more than a billion dollars.

About a decade ago, Li helped AI get off the ground by creating ImageNet, a custom database of digital images that allowed neural networks to become much smarter. She thinks today's deep-learning models need a similar boost if AI is to create real worlds, whether realistic simulations or entirely imagined universes. Future George R. R. Martins could compose their dream worlds as prompts rather than prose, which could then be rendered and explored. "Computers see the physical world through cameras, and the computer brain behind the cameras," Li says. "Turning that vision into reasoning, generation, and ultimately interaction requires understanding the physical structure and the physical dynamics of the physical world. That technology is called spatial intelligence." World Labs calls itself a spatial intelligence company, and its fate will help determine whether that term becomes a revolution or a punch line.

Li has been fascinated by spatial intelligence for years. While everyone else was going crazy over ChatGPT, she and her former student, Justin Johnson, were excitedly chatting on the phone about the next iteration of AI. “The next decade is going to be about creating new content that takes computer vision, deep learning, and AI out of the world of the internet and embeds it in space and time,” says Johnson, who is now an assistant professor at the University of Michigan.

Li decided to start the company in early 2023 after having dinner with Martin Casado, a pioneer in virtual networking who is now a partner at Andreessen Horowitz, a VC firm known for its near-messianic embrace of AI. Casado sees AI following a path similar to that of computer games, which started with text, moved to 2D graphics, and now boast dazzling 3D imagery. Spatial intelligence will be the engine of that change. Ultimately, he says, "You can take your favorite book, throw it into a model, and then literally walk inside and watch it play out in real time, in an immersive way." Casado and Li agreed that the first step is moving from large language models to large world models.

Li began assembling a team, with Johnson as a cofounder. Casado suggested two more people. One was Christoph Lassner, who had worked at Amazon, Meta's Reality Labs, and Epic Games. He invented the Pulsar rendering scheme, which led to the celebrated technique called 3D Gaussian Splatting. It sounds like an indie band at an MIT toga party, but it's really a way to synthesize entire scenes, as opposed to individual objects. Casado's other suggestion was Ben Mildenhall, who created a powerful technique called NeRF, short for neural radiance fields, that turns two-dimensional pixel images into three-dimensional graphics. "We brought real objects into VR and made them look perfectly realistic," Mildenhall says. He left his position as a senior research scientist at Google to join Li's team.

One obvious goal of a large world model would be to give robots a sense of, well, the world. That is indeed on World Labs' agenda, but not yet. The first phase is to build a model with a deep understanding of three-dimensionality, physicality, and the concepts of space and time. Then there will be a phase in which the models support augmented reality. After that, the company could move into robotics. If that vision pans out, large world models could power autonomous cars, automated factories, and maybe even humanoid robots.
