Mar 8

AI's Roadmap: Will It Be Heard?

Originally reported by TechCrunch

The recent rupture between Washington and Anthropic underscored the glaring absence of clear regulations for artificial intelligence. In response, a bipartisan collective of experts has formulated a comprehensive framework outlining responsible AI development, a task the government has yet to undertake.

Titled "The Pro-Human Declaration," this document was completed prior to last week's confrontation between the Pentagon and Anthropic. The synchronicity of these events was keenly noted by all participants.

Max Tegmark, an MIT physicist and AI researcher instrumental in organizing this initiative, observed in an interview, "There’s something quite remarkable that has happened in America just in the last four months." He highlighted recent polling data indicating that "95% of all Americans oppose an unregulated race to superintelligence."

This recently released document, endorsed by hundreds of experts, former government officials, and prominent public figures, begins by asserting that humanity stands at a critical juncture. It describes one trajectory, labeled "the race to replace," where humans are progressively displaced—first as workers, then as decision-makers—as power consolidates within unaccountable institutions and their AI systems. The alternative path envisions AI serving to vastly amplify human capabilities.

Achieving this more optimistic future hinges on five fundamental principles: maintaining human oversight, preventing the undue concentration of power, safeguarding the human experience, upholding individual liberties, and ensuring the legal accountability of AI developers. The declaration includes robust provisions such as an outright moratorium on superintelligence development until scientific consensus confirms its safety and robust democratic consent is secured. It also mandates emergency off-switches for potent AI systems and prohibits architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown.

The timing of the declaration's publication underscored its immediate relevance. During the same week, Defense Secretary Pete Hegseth classified Anthropic—a company whose AI is already deployed on classified military systems—as a "supply chain risk." This designation, typically applied to entities with connections to adversarial nations like China, came after Anthropic declined to grant the Pentagon unrestricted access to its technology. Shortly thereafter, OpenAI finalized its own agreement with the Defense Department, a deal that legal experts widely anticipate will be challenging to effectively implement. These events collectively exposed the significant repercussions of Congressional inertia regarding AI governance.

As Dean Ball, a senior fellow at the Foundation for American Innovation, remarked to The New York Times, "This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems."

Tegmark drew a relatable analogy during our discussion: "You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe," he explained, "because the FDA won’t allow them to release anything until it’s safe enough."

Washington's political infighting seldom generates sufficient public momentum to enact legislative change. However, Tegmark identifies child safety as the most probable catalyst to overcome the current deadlock. The declaration, in fact, advocates for mandatory pre-deployment testing of AI products—especially chatbots and companion applications targeting younger users—to assess risks such as increased suicidal ideation, the worsening of mental health conditions, and emotional manipulation.

"If some creepy old man is texting an 11-year-old pretending to be a young girl and trying to persuade this boy to commit suicide, the guy can go to jail for that," Tegmark stated. "We already have laws. It’s illegal. So why is it different if a machine does it?"

Tegmark anticipates that once the precedent of pre-release testing for products aimed at children is set, its application will almost certainly broaden. He elaborated, "People will come along and be like — let’s add a few other requirements. Maybe we should also test that this can’t help terrorists make bioweapons. Maybe we should test to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government.”

The diverse composition of the coalition itself strengthens its argument. Endorsements range from former Trump advisor Steve Bannon to Susan Rice, who served as U.S. National Security Advisor and policy advisor under President Obama. Former Joint Chiefs Chairman Mike Mullen is also a signatory, alongside various progressive faith leaders.

"What they agree on, of course, is that they’re all human," Tegmark concluded. "If it’s going to come down to whether we want a future for humans or a future for machines, of course they’re going to be on the same side."
