Friday, March 13, 2026

“Vibe hacking” is now a top AI threat


“Agentic AI systems are being weaponized.”

That’s one of the first lines of Anthropic’s new Threat Intelligence report, out today, which details the wide range of cases in which Claude, and likely many other leading AI agents and chatbots, is being abused.

First up: “vibe hacking.” According to Anthropic, one sophisticated cybercrime ring it recently disrupted used Claude Code, the company’s AI coding agent, to extort data from at least 17 organizations around the world within one month. The targets included healthcare organizations, emergency services, religious institutions, and even government entities.

“What would otherwise have required a team of sophisticated actors to conduct can now be conducted by a single person, with the assistance of agentic systems,” Jacob Klein, head of Anthropic’s threat intelligence team, told The Verge in an interview. He added that in this case, Claude “executed the operation end to end.”

Anthropic wrote in the report that in cases like this, AI “serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.” For instance, Claude was specifically used to write “psychologically targeted extortion demands.” The cybercriminals then worked out how much the data, which included healthcare records, financial information, government credentials, and more, would be worth on the dark web, and made ransom demands exceeding $500,000, according to Anthropic.

“That’s the most sophisticated use of agents I’ve seen … by cybercriminals,” Klein said.

In another case study, Claude helped North Korean IT workers fraudulently land jobs at Fortune 500 companies in the US in order to fund the country’s weapons program. Usually in such cases, North Korea tries to leverage people who have been to college, have IT experience, or have some ability to communicate in English, according to Klein, but he said that in this case, AI has made the barrier much lower for people in North Korea to pass technical interviews at big tech companies and then hold onto the jobs.

With Claude’s help, Klein said, “we’re seeing people who don’t know how to write code, don’t know how to communicate professionally, don’t know much about the English language or the culture, who are just asking Claude to do everything … and then once they land the job, most of the work they do is with Claude, and that keeps them employed.”

Another case study involved romance scams. A Telegram bot with more than 10,000 monthly users advertised Claude as a “high EQ model” for help generating emotionally intelligent messages, ostensibly for scams. That allowed non-native English speakers to write convincing, fluent messages to gain the trust of victims in the US, Japan, and Korea and then ask them for money. One example in the report showed a user uploading a photo of a man in a tie and asking how best to compliment him.

In the report, Anthropic itself admits that although the company has “developed sophisticated safety and security measures to prevent the misuse” of its AI, and while those measures are “generally effective,” bad actors still sometimes manage to find ways around them. Anthropic says AI has lowered the barriers to sophisticated cybercrime, and that bad actors are using the technology to profile victims, automate their practices, create false identities, analyze stolen data, steal credit card information, and more.

Each case study in the report adds to the growing body of evidence that AI companies, try as they might, often can’t keep up with the societal risks associated with the technology they create and deploy. “Although specific to Claude, the case studies presented below likely reflect consistent patterns of behavior across all frontier AI models,” the report states.

For each case study, Anthropic said it banned the associated accounts, built new classifiers or other detection measures, and shared information with the appropriate government agencies, such as intelligence agencies or law enforcement, Klein confirmed. He also said the case studies his team has seen are part of a broader shift in AI risk.

“There’s this shift that’s happening where AI systems are not just a chatbot, because they can now take many steps,” Klein said, adding: “They’re able to actually conduct actions or activities, like what we’re seeing here.”


