In one paper Eleos AI has published, the nonprofit argues for assessing AI consciousness through the lens of "computational functionalism." A similar idea was once championed by Hilary Putnam, though he criticized it later in his career. The theory suggests that human minds can be thought of as specific kinds of computational systems. From there, you can figure out whether other computational systems, such as chatbots, have indicators of sentience similar to those of humans.
Eleos AI said in the paper that "a major challenge in applying" this approach "is that it involves significant judgment calls, both in formulating the indicators and in evaluating their presence or absence in AI systems."
Model welfare is, of course, a nascent and still-evolving field. It has plenty of critics, including Mustafa Suleyman, the CEO of Microsoft AI, who recently published a blog post about "Seemingly Conscious AI."
"It is both premature, and frankly dangerous," Suleyman wrote, referring generally to model welfare research. "All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society."
Suleyman wrote that there is "no evidence today" that conscious AI exists. He linked to a paper Long coauthored in 2023 that proposed a new framework for evaluating whether an AI system has "indicator properties" of consciousness. (Suleyman did not respond to WIRED's request for comment.)
I spoke with Long and Campbell shortly after Suleyman published his blog post. They told me that, while they agreed with much of what he said, they don't believe model welfare research should cease to exist. Rather, they argue that the harms Suleyman cites are the exact reasons why they want to study the topic.
"When you have a big, confusing problem or question, one way to guarantee you're not going to solve it is to throw up your hands and be like, 'Oh, this is too complicated,'" Campbell says. "I think we should at least try."
Testing Consciousness
Model welfare researchers mainly concern themselves with questions of consciousness. If we can prove that you and I are conscious, they argue, then the same logic could be applied to large language models. To be clear, neither Long nor Campbell thinks that AI is conscious today, and they're not sure it ever will be. But they want to develop tests that would allow us to prove it.
"The delusions come from people who are concerned with the actual question, 'Is this AI conscious?' And having a scientific framework for thinking about that, I think, is just robustly good," Long says.
But in a world where AI research can be packaged into sensational headlines and social media videos, heady philosophical questions and mind-bending experiments can easily be misconstrued. Take what happened when Anthropic published a safety report showing that Claude Opus 4 may take "harmful actions" in extreme circumstances, such as blackmailing a fictional engineer to prevent being shut down.
