Sep 13

Anthropic Expands Safety Rules for Claude AI

Anthropic updates Claude’s policy, banning its use for CBRN weapons and adding strict cybersecurity rules amid rising AI safety concerns.

Originally reported by The Verge
Anthropic has revised the usage policy for its Claude chatbot to address rising safety concerns. The AI startup now explicitly bans the use of Claude to aid in the development of biological, chemical, radiological, or nuclear weapons. This expansion builds on its earlier rule, which prohibited the creation or distribution of weapons, explosives, or harmful materials in general. The update now specifically calls out high-yield explosives and CBRN weapons, signaling a stronger stance against misuse.

The company made this change alongside broader safeguards aimed at limiting dangerous applications. In May, Anthropic introduced "AI Safety Level 3" with the launch of its Claude Opus 4 model, designed to make the chatbot harder to manipulate and to prevent it from assisting with weapon development.

Anthropic also highlighted risks tied to its new agentic AI tools, such as Computer Use, which allows Claude to control a user's computer, and Claude Code, which integrates the AI directly into a developer's terminal. These features, while powerful, raise concerns about possible large-scale abuse, including malware creation and cyberattacks. To address this, the company added a new section to its policy titled "Do Not Compromise Computer or Network Systems." It bans the use of Claude for hacking, discovering vulnerabilities, building malware, or conducting denial-of-service attacks.

In addition to tightening security rules, Anthropic has eased restrictions on political content. Instead of blocking all campaign-related uses, it will now prohibit only scenarios that could mislead voters, disrupt democratic processes, or involve voter targeting. The company also clarified that its high-risk use case requirements apply only to consumer-facing services, not to internal business applications.

The updated policy reflects Anthropic's attempt to balance innovation with responsibility as AI tools become more capable.
By explicitly ruling out harmful uses while loosening certain political limits, the company is aiming to address both public safety concerns and practical needs for developers.
By Editorial Staff, Editor

