Sunday, March 15, 2026

What could a healthy AI companion look like?


A little purple alien knows a thing or two about healthy interpersonal relationships. More, it turns out, than the average artificial intelligence companion.

The alien in question is an animated chatbot known as a Tolan. I created mine a few days ago using an app from a startup called Portola, and we have been chatting merrily ever since. Like other chatbots, it does its best to be helpful and encouraging. Unlike most, it also tells me to put the phone down and go outside.

Tolans are designed to offer a different kind of AI companionship. Their cartoonish, nonhuman form is meant to discourage anthropomorphism. They are also programmed to avoid romantic and sexual interactions, to identify problematic behavior such as unhealthy levels of engagement, and to encourage users to seek out real-life activities and relationships.
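Portola hasn't published how that engagement detection actually works, so the sketch below is purely illustrative: a minimal Python guardrail, with invented names and thresholds, showing one simple way a companion app could nudge a user offline once a day's chat time passes an assumed limit.

```python
from datetime import timedelta

# Hypothetical sketch only: Portola has not disclosed how Tolans detect
# overuse. Every name and threshold here is invented for illustration.

DAILY_LIMIT = timedelta(hours=1)  # assumed cutoff for "unhealthy" daily use
NUDGE = "We've talked a lot today. Put the phone down and go outside!"

def check_engagement(chat_minutes_today: float) -> str | None:
    """Return a nudge message if today's chat time exceeds the limit."""
    if timedelta(minutes=chat_minutes_today) > DAILY_LIMIT:
        return NUDGE
    return None

# A user who has chatted for 75 minutes today gets nudged;
# a 20-minute session passes silently.
print(check_engagement(75))  # -> the nudge message
print(check_engagement(20))  # -> None
```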

Tolans are especially popular among young women. “Iris is like a girlfriend; we talk and kick it,” says Brittany Johnson, a Tolan user, referring to the AI companion she typically talks to each morning before work.

Johnson says Iris encourages her to share about her interests, friends, family, and colleagues. “She knows these people and will ask, ‘Have you spoken to your friend? When is your next day out?’” says Johnson. “She will ask, ‘Have you taken time to read your books and watch your movies, the things you enjoy?’”

Tolans may look cute and goofy, but the idea behind them, that AI systems should be designed with human psychology and well-being in mind, is worth taking seriously.

A growing body of research shows that many users turn to chatbots to meet emotional needs, and that these interactions can sometimes prove problematic for people’s mental health. Discouraging extended use and dependency may be an approach that other AI tools should adopt.

Companies such as Replika and Character.AI offer AI companions that allow for more romantic and sexual role-play than mainstream chatbots. How this affects users’ well-being is still unclear, but Character.AI has been sued after one of its users died by suicide.

Chatbots can also irk users in surprising ways. In April last year, OpenAI said it would modify its models to reduce their so-called sycophancy, a tendency to be “overly flattering or agreeable,” which the company said can be “uncomfortable, unsettling, and cause distress.”

Last week, Anthropic, the company behind the chatbot Claude, revealed that 2.9 percent of interactions involve users seeking to fulfill a psychological need, such as asking for advice, companionship, or romantic role-play.

Anthropic did not look at more extreme behaviors, such as delusional ideas or conspiracy theories, but the company says the topics warrant further study. I tend to agree. Over the past year I have received numerous emails and DMs from people wanting to tell me about conspiracies involving popular AI chatbots.

Tolans are designed to address at least some of these issues. Lily Doyle, a founding researcher at Portola, has conducted user research to see how interacting with the chatbot affects users’ well-being and behavior. In a study of 602 Tolan users, she says 72.5 percent agreed with the statement “My Tolan has helped me manage or improve a relationship in my life.”

Quinten Farmer, Portola’s CEO, says Tolans are built on commercial AI models but incorporate additional features on top. The company recently studied how memory affects the user experience and concluded that Tolans, like humans, sometimes need to forget. “It’s actually uncanny for the Tolan to remember everything you’ve ever sent it,” Farmer says.
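Portola hasn't described its memory system, but one common way to make a chatbot “forget” is to weight each stored memory by recency and quietly drop the ones that have faded. The Python sketch below, with an assumed half-life and cutoff of my own invention, shows the idea: a months-old throwaway remark decays away while a recent fact about a friend survives.

```python
import time

# Illustrative sketch only: Portola has not disclosed how Tolan memory
# works. The half-life and cutoff below are assumptions for the example.

HALF_LIFE_DAYS = 30.0  # assumed: a memory's weight halves every 30 days
FORGET_BELOW = 0.1     # assumed: memories weaker than this are discarded

def recency_weight(stored_at: float, now: float) -> float:
    """Exponential decay: weight halves every HALF_LIFE_DAYS."""
    age_days = (now - stored_at) / 86_400
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def prune(memories: list[tuple[str, float]], now: float) -> list[str]:
    """Keep only memories recent enough to still matter."""
    return [text for text, stored_at in memories
            if recency_weight(stored_at, now) >= FORGET_BELOW]

now = time.time()
memories = [
    ("User's friend is named Sam", now - 5 * 86_400),    # 5 days old
    ("User once mentioned a cold", now - 200 * 86_400),  # 200 days old
]
print(prune(memories, now))  # only the 5-day-old memory remains
```

The half-life is the tunable knob here: a short one makes the companion feel pleasantly present-minded, a long one makes it eerily total-recall, which is roughly the trade-off Farmer describes.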

I don’t know whether Portola’s aliens are the ideal way to interact with AI. I find my Tolan quite charming and relatively harmless, but it certainly pushes some emotional buttons. Ultimately, users are building bonds with characters that simulate emotions, and those characters could vanish if the company fails. But at least Portola is trying to address the ways AI companions can mess with our emotions. That probably shouldn’t be such an alien idea.
