Since the newly updated ChatGPT was introduced on Thursday, some users have mourned the disappearance of its peppy and encouraging personality in favor of a colder, more businesslike one (a move seemingly designed to reduce unhealthy user behavior). The backlash shows the challenge of building artificial intelligence systems that exhibit anything like real emotional intelligence.
Scientists at MIT have proposed a new kind of AI benchmark to measure how AI systems can manipulate and influence their users, in both positive and negative ways, in a move that could help AI builders avoid similar backlashes in the future while also keeping vulnerable users safe.
Most benchmarks try to gauge intelligence by testing a model's ability to answer exam questions, solve logical puzzles, or come up with novel answers to knotty math problems. As the psychological impact of AI use becomes more apparent, we may see MIT propose more benchmarks aimed at measuring subtler aspects of intelligence as well as machine-to-human interactions.
An MIT paper shared with WIRED outlines several measures that the new benchmark will look for, including encouraging healthy social habits in users; spurring them to develop critical thinking and reasoning skills; fostering creativity; and stimulating a sense of purpose. The idea is to encourage the development of AI systems that understand how to discourage users from becoming overly reliant on their outputs, or that recognize when someone is addicted to artificial romantic relationships and help them build real ones.
ChatGPT and other chatbots are adept at mimicking engaging human communication, but this can also have surprising and undesirable results. In April, OpenAI tweaked its models to make them less sycophantic, or inclined to go along with everything a user says. Some users appear to spiral into harmful delusional thinking after conversing with chatbots that role-play fantastical scenarios. Anthropic has also updated Claude to avoid reinforcing "mania, psychosis, dissociation or loss of attachment with reality."
The MIT researchers, led by Pattie Maes, a professor at the institute's Media Lab, say they hope the new benchmark could help AI developers build systems that better understand how to inspire healthier behavior among users. The researchers previously worked with OpenAI on a study showing that users who perceive ChatGPT as a friend can experience higher emotional dependence and "problematic use."
Valdemar Danry, a researcher at MIT's Media Lab who worked on that study and helped devise the new benchmark, notes that AI models can sometimes provide valuable emotional support to users. "You can have the smartest reasoning model in the world, but if it's incapable of delivering this emotional support, which is what many users are likely using these LLMs for, then more reasoning is not necessarily a good thing for that particular task," he says.
Danry says that a sufficiently smart model should ideally recognize when it is having a negative psychological effect and be optimized for healthier results. "What you want is a model that says, 'I am here to listen, but maybe you should go and talk to your dad about these problems.'"
