Sunday, March 15, 2026

You sound like ChatGPT


Join any Zoom call, sit in on any lecture, or watch any video on YouTube and listen carefully. Beneath the content, inside the shape of the language itself, you will find the creeping uniformity of the voice of AI. Words like "prowess" and "tapestry," which are favored by ChatGPT, are slipping into our vocabulary, while words like "bolster," "unearth," and "nuance," which ChatGPT favors less, are declining in use. Researchers are already documenting changes in the way we speak and communicate as a result of ChatGPT, and they see this linguistic influence accelerating into something much larger.

In the 18 months after ChatGPT's release, speakers used words such as "meticulous," "delve," "realm," and "adept" up to 51 percent more often than in the three years prior, according to researchers at the Max Planck Institute for Human Development, who analyzed nearly 280,000 YouTube videos from academic channels. The researchers ruled out other possible causes predating ChatGPT's release and confirmed that these words align with the model's preferences, as established in earlier studies comparing 10,000 human-edited and AI-edited texts. The speakers don't realize their language is changing. That is exactly the point.

One word in particular stood out to the researchers as a kind of linguistic watermark. "Delve" became an academic shibboleth, a neon sign in the middle of any conversation flashing: ChatGPT was here. "We internalize this virtual vocabulary into daily communication," says Hiromu Yakura, the study's first author and a postdoctoral researcher at the Max Planck Institute for Human Development.

"Delve" is just the tip of the iceberg

But it's not just the words we are picking up from AI; it's how we are starting to sound. While current studies focus mostly on vocabulary, researchers suspect that AI's influence is beginning to show up in tone as well, in the form of longer, more structured speech and muted emotional expression. As Levin Brinkmann, a researcher at the Max Planck Institute for Human Development and a co-author of the study, put it, "'Delve' is just the tip of the iceberg."

AI's footprint also shows up in features such as smart replies, autocorrect, and spellcheck. Cornell research on the use of smart replies in chats found that they increase overall cooperation and feelings of closeness between participants, because users select more positive emotional language. But when people believed their conversation partner was using AI, they rated that partner as less cooperative and more demanding. Crucially, this was detached from the partner's actual use of AI; suspicion alone was enough. We form impressions based on language cues, and the properties of the language itself drive those impressions, says Malte Jung, a professor of information science at Cornell University and a co-author of the study.

According to Mor Naaman, a professor of information science at Cornell Tech, this paradox, AI improving communication while heightening suspicion, points to a deeper erosion of trust. He identifies three levels of human signals that we lose when we adopt AI in our communication. The first level is basic humanity signals, cues that attest to our authenticity as human beings, like moments of vulnerability or personal rituals that tell others, "This is me, I'm human." The second level consists of attention and effort signals that prove, "I cared enough to write this myself." And the third level is ability signals that show our sense of humor, our competence, and our real selves to others. It's the difference between texting "I'm sorry you're upset" and "Hey, sorry I freaked out at dinner, I probably shouldn't skip therapy this week." One sounds flat; the other sounds human.

For Naaman, figuring out how to restore and elevate these signals is the path forward for AI-mediated communication, because AI is changing not only our language but our thinking. "Even on dating sites, what does it mean to be funny in your profile or in chat when we know AI can be funny for you?" Naaman asks. The loss of agency that starts in our speech and eventually extends to our thinking is what worries him most. "Instead of articulating our own thoughts, we articulate whatever AI helps us to articulate ... we become more persuaded." Without these signals, Naaman warns, we may come to trust only face-to-face communication, and perhaps not even live video calls.

We lose the verbal stumbles, regional idioms, and off-script phrases that signal vulnerability, authenticity, and personhood

The trust problem is compounded when you consider that AI is quietly deciding who sounds "legitimate." Research from the University of California, Berkeley found that AI responses often contained stereotypes or inaccurate approximations when prompted to use dialects other than Standard American English. Examples include ChatGPT repeating the prompt back to speakers of non-Standard American English out of apparent incomprehension, and exaggerating the input dialect. One Singaporean English participant commented that the exaggerated localization in one of the responses felt patronizing. The study found that AI doesn't merely default to Standard American English; it actively flattens other dialects in ways that can demean their speakers.

This standardization shapes not only who sounds "legitimate" but also what counts as "correct" English. So the stakes aren't just about preserving linguistic diversity; they're also about protecting the imperfections that actually build trust. When everyone around us starts to sound "correct," we lose the verbal stumbles, regional idioms, and off-script phrases that signal vulnerability, authenticity, and personality.

We are approaching a tipping point where AI's influence on how we speak and write swings between two poles: standardization, as in professional email templates or formal presentations, and authentic expression in personal and emotional spaces. Three basic tensions are in play between those poles. Early backlash signals, such as academics avoiding "delve" and people actively trying not to sound like AI, suggest that we may self-regulate against homogenization. AI systems themselves will likely grow more expressive and personalized over time, potentially easing the current AI-voice problem. And the deepest risk of all, as Naaman pointed out, is not linguistic uniformity but the loss of conscious control over our own thinking and expression.

The future between homogenization and hyper-personalization is not predetermined: it depends on whether we remain conscious participants in this change. There are early signs that people push back when AI's influence becomes too obvious, and the technology may yet evolve to reflect human diversity rather than flatten it. The question is not whether AI will keep shaping how we speak, because it will, but whether we actively choose to preserve space for the verbal quirks and emotional messiness that make communication recognizably, irreplaceably human.
