Tuesday, March 10, 2026

The next frontier of AI? A consciousness algorithm


As a journalist who covers artificial intelligence, I hear from countless people who seem completely convinced that ChatGPT, Claude, or some other chatbot has achieved “consciousness.” Or “awareness.” Or – my personal favorite – “a mind of its own.” It’s true that the Turing test was passed some time ago, but unlike rote intelligence, these qualities can’t be ascertained so easily. Large language models will claim to think for themselves, and even describe inner torment or profess undying love, but such statements do not imply interiority.

Could they ever? Many AI creators don’t speak in these terms. They are too busy chasing a performance benchmark known as “artificial general intelligence,” which is a purely functional category and has nothing to do with a machine’s potential experience of the world. So – although I’m skeptical – I thought that spending time with a company that thinks it can crack the code of consciousness itself might be eye-opening, and maybe even enlightening.

Conscium was founded in 2024 by British artificial intelligence researcher and entrepreneur Daniel Hulme, and its advisors include an impressive group of neuroscientists, philosophers, and experts in animal consciousness. When we first spoke, Hulme was a realist: there are good reasons to doubt whether language models are capable of consciousness. Crows, octopuses, and even amoebas can interact with their surroundings in ways that chatbots cannot. Experiments also suggest that AI utterances do not reflect consistent internal states. As Hulme put it, echoing the broad consensus: “Large language models are very primitive representations of the brain.”

But – and it is a massive but – everything hinges on what consciousness actually is. Some philosophers argue that consciousness is too subjective to ever be studied or reproduced, but Conscium is betting that if it exists in humans and other animals, it can be detected, measured, and built into machines.

There are competing and overlapping ideas about the key features of consciousness, including the ability to sense and “feel,” awareness of oneself and one’s surroundings, and what is known as metacognition, the ability to think about one’s own thought processes. Hulme believes that a subjective experience of consciousness arises when these phenomena are combined, much as the illusion of movement arises when one flips through the images in a flip book. But how to identify the components of consciousness – the individual frames, as it were – and the force that connects them? You turn AI against itself, says Hulme.

Conscium aims to break conscious thought down to its most basic form and recreate it in the laboratory. “There must be something that consciousness is made of, from which it evolved,” said Mark Solms, a South African psychoanalyst and neuropsychologist involved in the Conscium project. In his 2021 book The Hidden Spring, Solms proposed a provocative new way of thinking about consciousness. He argued that the brain uses perception and action in a feedback loop designed to minimize surprise, generating hypotheses about the future that are updated as new information arrives. The idea is based on the “free energy principle” developed by Karl Friston, another notable, if controversial, neuroscientist (and another Conscium advisor). Solms further suggests that in humans this feedback loop has evolved into a system mediated by emotions, and that it is feelings that give rise to sentience and awareness. The theory is supported by the fact that damage to the brainstem, which plays a key role in regulating emotions, appears to cause a loss of consciousness in patients.
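To make the idea of a surprise-minimizing feedback loop concrete, here is a minimal toy sketch. It is an illustration invented for this article, not Solms’ or Conscium’s actual model: an agent holds a hypothesis about a hidden quantity in its world, perceives it noisily, and nudges the hypothesis to reduce prediction error (“surprise”) on every cycle.

```python
import random

def run_agent(hidden_value: float, steps: int = 200,
              learning_rate: float = 0.1, noise: float = 0.5,
              seed: int = 0) -> float:
    """Toy perception-action loop: minimize surprise about one hidden value.

    The names and parameters here are hypothetical, chosen only to
    illustrate the prediction-error-minimizing loop described above.
    """
    rng = random.Random(seed)
    hypothesis = 0.0  # the agent's current belief about the world
    for _ in range(steps):
        observation = hidden_value + rng.gauss(0, noise)  # noisy perception
        error = observation - hypothesis                  # prediction error ("surprise")
        hypothesis += learning_rate * error               # update belief to reduce surprise
    return hypothesis

if __name__ == "__main__":
    # After enough cycles, the agent's belief converges near the hidden value.
    print(run_agent(hidden_value=3.0))
```

Friston’s actual framework involves far richer generative models and action that changes the world to match predictions, but the basic contract is the same: hypothesize, perceive, compare, update.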

At the end of his book, Solms suggests a way to test his theories in the laboratory. Now, he says, he has done just that. He hasn’t published the paper yet, but he showed it to me. Did it break my brain? Yes, a little. Solms’ artificial agents live in a simple, computer-simulated environment and are controlled by algorithms built around the kind of feeling-mediated Fristonian loop that Solms proposes as the basis of consciousness. “I have several reasons for doing this research,” Solms said. “First of all, it’s damn intriguing.”

Solms’ laboratory world is constantly changing and demands constant modeling and adaptation. The agents’ experience of this world is mediated through simulated responses such as fear, excitement, and even pleasure. In short, they are feeling robots. Unlike the AI agents everyone talks about today, Solms’ creations literally want to explore their surroundings; to understand them properly, you need to try to imagine how they “feel” in their own little world. Solms believes that it should ultimately be possible to combine the approach he is developing with a language model, thereby creating a system that can talk about its own sensory experiences.
