Friday, March 13, 2026

Anthropic has new rules for a more dangerous AI landscape


Anthropic has updated the usage policy for its Claude AI chatbot in response to growing safety concerns. In addition to introducing stricter cybersecurity rules, Anthropic now specifies some of the most dangerous weapons that people should not develop with Claude.

Anthropic doesn't highlight the changes to its weapons policy in the post summarizing the update, but a comparison between the company's old usage policy and its new one reveals a notable difference. While Anthropic previously prohibited using Claude to "produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life," the updated policy specifically prohibits the development of chemical, biological, radiological, and nuclear (CBRN) weapons.

In May, Anthropic implemented "AI Safety Level 3" protections alongside the launch of its new Claude Opus 4 model. The safeguards are designed to make the model harder to jailbreak and to help prevent it from assisting with the development of CBRN weapons.

In its post, Anthropic also acknowledges the risks posed by agentic AI tools, including Computer Use, which lets Claude take control of a user's computer, and Claude Code, a tool that embeds Claude directly in a developer's terminal. "These powerful capabilities introduce new risks, including potential for scaled abuse, malware creation, and cyberattacks," Anthropic writes.

The AI startup is responding to these potential risks by adding a new "Do Not Compromise Computer or Network Systems" section to its usage policy. This section includes rules against using Claude to discover or exploit vulnerabilities, create or distribute malware, develop tools for denial-of-service attacks, and more.

Additionally, Anthropic is loosening its policy on political content. Instead of banning the creation of all content related to political campaigns and lobbying, Anthropic now only prohibits using Claude for "use cases that are deceptive or disruptive to democratic processes, or involve voter and campaign targeting." The company also clarified that its requirements for "high-risk" use cases, which come into play when people use Claude to make recommendations to individuals or customers, apply only to consumer-facing scenarios, not business use.
