Mar 1

Anthropic's Self-Woven Snare

Originally reported by TechCrunch

A breaking news alert interrupted an interview Friday afternoon: the Trump administration was severing its ties with Anthropic, the San Francisco-based artificial intelligence company. Founded in 2021 by Dario Amodei and other former OpenAI researchers who departed due to safety concerns, Anthropic found itself blacklisted from Pentagon contracts. Defense Secretary Pete Hegseth invoked a national security law, typically used to counter foreign supply chain threats, after Amodei reportedly refused to permit Anthropic's technology for mass surveillance of U.S. citizens or for autonomous armed drones capable of selecting and eliminating targets without human intervention.

This development sent shockwaves through the industry. Anthropic now faces the loss of a contract valued at up to $200 million and will be barred from collaborations with other defense contractors. The President himself amplified the directive via a Truth Social post, instructing all federal agencies to "immediately cease all use of Anthropic technology." In response, Anthropic has indicated it will challenge the Pentagon's decision in court, arguing that the supply-chain-risk designation is legally unsound and has "never before publicly applied to an American company."

For nearly a decade, Max Tegmark, a Swedish-American physicist and MIT professor, has been a vocal critic of the AI industry, warning that the rapid advancement of AI systems is outstripping humanity's capacity to govern them. Tegmark founded the Future of Life Institute in 2014 and notably co-organized a 2023 open letter, which garnered over 33,000 signatures, including Elon Musk's, calling for a moratorium on advanced AI development.

Tegmark's perspective on the Anthropic crisis is uncompromising: he believes the company, much like its industry counterparts, is largely responsible for its current predicament. His analysis traces the issue not to the Pentagon's recent actions, but to a collective decision made years prior across the AI sector to resist binding regulation. While companies like Anthropic, OpenAI, and Google DeepMind have consistently pledged responsible self-governance, Anthropic itself, earlier this week, abandoned a core tenet of its own safety commitment—its promise to withhold increasingly powerful AI systems until confident they would cause no harm.

Without robust regulatory frameworks, Tegmark asserts, there is little to restrain these companies, and just as little to shield them from demands like the Pentagon's. The following is an edited excerpt from our interview, with the full conversation available next week on TechCrunch's StrictlyVC Download podcast.

When asked for his initial reaction to the news concerning Anthropic, Tegmark responded, "The road to hell is paved with good intentions." He reflected on the stark contrast between the early optimism for AI—envisioned as a tool to cure cancer, foster prosperity, and strengthen America—and the present reality where the U.S. government is at odds with a company for its refusal to deploy AI for domestic mass surveillance or autonomous killer robots that operate without any human input.

Questioned on the apparent contradiction of Anthropic, a company built on a "safety-first" identity, collaborating with defense and intelligence agencies (a relationship dating back to at least 2024), Tegmark affirmed it was indeed contradictory. Offering a "cynical take," he noted Anthropic's adept marketing of its safety focus. However, he argued that a factual examination reveals that Anthropic, OpenAI, Google DeepMind, and xAI have all extensively discussed safety but none have actively supported binding safety regulations akin to those in other industries. Furthermore, he pointed out that all four companies have reneged on their own promises: Google abandoned its "Don't be evil" slogan and a subsequent commitment not to cause harm with AI, enabling its use for surveillance and weapons. OpenAI removed the word "safety" from its mission statement. xAI disbanded its entire safety team. And Anthropic, just this week, rescinded its most crucial safety pledge—its promise not to release powerful AI systems until their harmlessness was assured.

Delving into how companies with such prominent safety commitments arrived at this juncture, Tegmark explained that these firms, particularly OpenAI and Google DeepMind, and to some extent Anthropic, have consistently lobbied against AI regulation, asserting, "Just trust us, we're going to regulate ourselves." Their lobbying efforts proved successful, resulting in a landscape where AI systems in America are less regulated than sandwiches. He drew a vivid analogy: a health inspector would shut down a sandwich shop with 15 rats until the problem was fixed. Yet an AI developer offering "AI girlfriends for 11-year-olds," a product category already linked to suicides, or building "superintelligence which might overthrow the U.S. government," faces no regulatory impediment whatsoever, so long as they aren't selling sandwiches.

This regulatory vacuum, Tegmark asserted, is where these companies collectively bear responsibility. Had they transformed their early "safe and goody-goody" promises into U.S. law, binding even their less scrupulous competitors, the current situation might have been averted. He warned that such corporate amnesty inevitably leads to calamities like thalidomide, tobacco companies targeting children, and asbestos causing lung cancer. It is, he concluded, ironic that their resistance to establishing clear legal boundaries for AI is now rebounding to their detriment.

He further elaborated that the absence of a law against developing AI that could harm Americans leaves the government free to request such capabilities. Had these companies proactively advocated for such legislation, they would not be in this predicament. "They really shot themselves in the foot," Tegmark stated.

Addressing the common counter-argument from AI companies' lobbyists—that a "race with China" necessitates rapid, unregulated development, otherwise Beijing will gain an advantage—Tegmark dismissed its validity. He highlighted the formidable lobbying power of AI companies, now exceeding that of the fossil fuel, pharmaceutical, and military-industrial sectors combined. He then countered the "China" argument by pointing out that China is actively moving to ban AI girlfriends and all anthropomorphic AI, not to appease the U.S., but because they perceive such technologies as detrimental to Chinese youth and national strength—a concern equally relevant to American youth.

Tegmark further challenged the notion of a "race to build superintelligence to win against China," arguing that humanity currently lacks the ability to control superintelligence, meaning the default outcome could be humanity losing control of Earth to "alien machines." He pointed out the Chinese Communist Party's strong emphasis on control, questioning who would believe Xi Jinping would tolerate a Chinese AI company developing something capable of overthrowing the government. This risk, he stressed, is equally pertinent to the American government, rendering superintelligence a national security threat rather than an asset.

When asked if this framing of superintelligence as a national security threat, rather than an asset, was gaining traction in Washington, Tegmark expressed optimism. He suggested that if national security officials consider Dario Amodei's vision of a "country of geniuses in a data center," they might begin to view such an entity as a threat to the U.S. government. He believes that the U.S. national security community will soon recognize that uncontrollable superintelligence is a danger, not a tool. He drew an analogy to the Cold War: the U.S. achieved economic and military dominance over the Soviet Union without engaging in a suicidal nuclear arms race. The same logic, he argued, applies to AI.

Regarding the broader pace of AI development and the proximity to the advanced systems he described, Tegmark noted that just six years ago, most AI experts predicted human-level mastery of language and knowledge by 2040 or 2050—predictions that proved incorrect, as that level has already been achieved. AI has rapidly progressed from high school to college, PhD, and even university professor levels in certain fields. Last year, AI secured the gold medal at the International Mathematical Olympiad, one of the most challenging human endeavors. He referenced a recent paper he co-authored with Yoshua Bengio, Dan Hendrycks, and other leading researchers, which rigorously defined Artificial General Intelligence (AGI). According to this definition, GPT-4 was 27% of the way to AGI, and GPT-5 was 57% there. While not yet complete, this rapid leap from 27% to 57% suggests that AGI's arrival "might not be that long."

He shared that he cautioned his MIT students the previous day that even if it takes four more years, by their graduation, they "might not be able to get any jobs anymore." He underscored that it is "certainly not too soon to start preparing for it."

With Anthropic now blacklisted, the question arises as to how other AI giants will react: will they stand in solidarity, refusing similar demands, or will a company like xAI step in to claim the contract Anthropic rejected? (Editor’s note: Hours after this interview, OpenAI announced its own deal with the Pentagon.)

Tegmark noted that the previous night, Sam Altman of OpenAI publicly affirmed his support for Anthropic and stated he shared the same "red lines," a stance Tegmark admired for its courage. Google, however, had remained silent at the time of the interview, which Tegmark found "incredibly embarrassing" for the company, believing many of its staff would agree. No statement had yet emerged from xAI. He concluded that this moment would reveal "everybody's true colors."

In response to whether a positive outcome is still possible, Tegmark expressed a "strange" sense of optimism. He posited an "obvious alternative": treating AI companies like any other industry, ending the "corporate amnesty." This would necessitate requirements such as clinical trials before releasing powerful AI systems, compelling companies to demonstrate control to independent experts. Such a path, he argued, could usher in a "golden age" of AI, delivering its benefits without the accompanying existential dread. While acknowledging this is not the current trajectory, he stressed that it remains an achievable one.

Editorial Staff, Editor

The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.
