
AI's Political Resistance Rises

Source: The Verge

March 4, 2026

In early January, a diverse assembly of approximately 90 political, community, and thought leaders convened at a New Orleans Marriott for a clandestine conference focused on artificial intelligence. The secrecy was so profound that attendees only discovered the full guest list upon entering the room. This unique gathering saw church leaders and conservative academics seated alongside labor union representatives, while progressive strategists, known for backing Bernie Sanders, shared space with prominent MAGA commentators. The AI thought leaders who orchestrated the event reportedly hoped for a productive, non-confrontational dialogue among the ideologically varied participants.

This past Wednesday, the Future of Life Institute (FLI), a highly respected authority in AI safety, unveiled the culmination of that meeting: the Pro-Human AI Declaration. This succinct document outlines five core guidelines emphasizing that AI development must prioritize humanity. Key tenets include preventing the concentration of power among the elite, safeguarding the well-being of children, families, and communities, and preserving human agency and liberty. The Declaration stands out for garnering an exceptionally broad spectrum of signatories, arguably the most diverse ever seen on a single political document.

Powerful civic organizations, extending far beyond the technology sector, have endorsed the Declaration. These include major unions such as the AFL-CIO, the American Federation of Teachers, and the Screen Writers Guild; religious entities like the G20 Interfaith Forum Association and the Congress of Christian Leaders; political groups like the Progressive Democrats of America (who supported Bernie Sanders' 2016 presidential bid); conservative think tanks such as the Institute for Family Studies; and advocacy organizations like Parents RISE!.

Individual signatories further illustrate this wide ideological reach, encompassing figures like former presidential candidate Ralph Nader, AFT president Randi Weingarten, Signal Foundation president Meredith Whittaker, The Blaze’s Glenn Beck, War Room’s Steve Bannon, Virgin Group founder Sir Richard Branson, former National Security Advisor Susan Rice, members of SAG-AFTRA, and leaders of major evangelical organizations. Additional endorsements are anticipated in the coming days.

The meeting itself was conducted under the Chatham House Rule, meaning the list of attendees remains private. However, participants who spoke to The Verge about their experience revealed they were invited by Max Tegmark, FLI co-founder and an MIT professor recognized on the TIME 100 AI list. Randi Weingarten, a powerful advocate for teachers' unions, told The Verge in a phone interview, "We spent a lot of time talking to him over the course of the last few months." Although she could not attend in person, Weingarten played a role in drafting the document and discovered remarkable parallels between FLI's perspective and the AFT’s own "common sense guardrails" for AI in schools, noting, "We’ve been on parallel tracks for quite a while without knowing it."

Joe Allen, co-founder of Humans First and a former correspondent for Bannon’s War Room, also confirmed to The Verge that Tegmark had invited him to New Orleans, following an earlier proof-of-concept meeting in Manhattan. Despite the initially jarring diversity of attendees and lingering political tensions, Allen expressed surprise at the swift consensus on critical issues. These included the principles that lethal weapons should not be left solely to AI control, that AI companies must not exploit children’s emotional attachments for profit, and that AI should not be granted legal personhood. Notably, even the least popular position within the Declaration still received approval from 94% of attendees.

Allen drew an analogy, stating, "I think about it like, if there’s knowledge that there’s poison in the water supply, or that drugs are flooding schools — anything like that, in general — most people are going to be against it and it isn’t partisan." He acknowledged that AI presents a slightly more complex challenge, with public opinion on specific AI models often dividing along party lines (e.g., Grok as "based" AI versus Anthropic as "woke" AI). However, to Allen, such distinctions are ultimately meaningless, questioning, "Like, what does ‘based’ and ‘woke’ even mean at this point?"


This recent initiative contrasts with FLI's earlier endeavors. Nearly a decade prior, FLI had outlined a more optimistic framework for AI research—specifically, 23 principles crafted during the 2017 Asilomar Conference for Beneficial AI. That event attracted over 100 leading figures from the tech world, with signatories and endorsers including AI pioneers like Sam Altman, Elon Musk, and Demis Hassabis, luminaries such as Stephen Hawking and Ray Kurzweil, and representatives from major corporations like Google, Intel, and Apple.

However, for the Pro-Human AI Declaration, industry representatives, particularly those at the level of Altman and Musk, were deliberately excluded. Emilia Javorsky, director of FLI's Futures Program, explained to The Verge that this was "a very deliberate design choice." She observed that corporate interests frequently dominate discussions at AI impact conferences "just by nature of their size and weight and funding capabilities." Instead, invitations were extended to civil society organizations, all of whom are grappling with significant disruption from AI and share a frustration with Big Tech's perceived indifference to their concerns.

Anthony Aguirre, another FLI co-founder and a distinguished cosmology professor at UC Santa Cruz, stressed that this declaration is not an attempt to revise the Asilomar Principles. Rather, it serves as a sober acknowledgment of a new, darker reality. In this new landscape, former colleagues now helm major corporations, relentlessly pursuing artificial general intelligence to outpace rivals and satisfy shareholders, often at the expense of safety. The power to shape AI's trajectory has become increasingly concentrated, a trend further exacerbated by the aggressive deregulation policies of the previous administration. Aguirre told The Verge, "Other than the overall mass of humanity, there was one entity that would have put meaningful control on what they could do, and that was the US government. Now that it’s backing them and wants to keep them unrestrained, the only thing that’s a real threat are other companies."


Javorsky noted that, in the absence of Big Tech influence and intense public scrutiny, the speed with which this diverse group converged on shared issues and conclusions was remarkable. Throughout the conference, she consistently heard the urgent refrain: "‘We will not have the luxury of debating all of those other issues if we don’t get this thing right. So let’s get this thing right.’"

In Weingarten's assessment, the Declaration functions as the mission statement for what she termed a "key demanding coalition"—a strategic alliance of political adversaries—designed to coordinate their efforts against a government that often prioritizes corporate enterprise over societal well-being. She emphasized, "What is really important is that there are other people who have said, let’s try to create a bigger coalition to say that we need humanity to be at the center of AI." While the AFT alone could advocate for child safety in AI, its capacity to pressure lawmakers is limited. If it unites with multiple trade unions, religious organizations, and bipartisan allies, however, lawmakers will face significant pressure. Weingarten concluded, "If the government won’t do it, then the people have to force the government to do it. And you start with a statement of principles."

Allen articulated his belief in the Declaration's power to inspire: "If there’s one statement I would make about the whole thing, which is what I said to the group when I had their attention, is that no one is going to engineer a pro-human movement. The only thing you can do is inspire it." He expressed confidence that such a document "should inspire a pro-human movement. Like a fundamental document that’s setting the tone…There’s no amount of social engineering, or money, or media, or any of that, that’s really gonna do it."

The precise manifestation of this movement remains undefined, particularly in terms of electoral impact. FLI is currently running an ad campaign titled “Protect What’s Human,” though as a 501(c)(3) organization, it is legally prohibited from endorsing or campaigning for candidates or ballot initiatives. However, a February poll conducted with Tavern Research assessed voter support for the Declaration's principles. Despite exhibiting clear partisan divisions in their voting preferences and party affiliations, respondents supported the Declaration's statements by wide margins. Even the least popular principles—that AI must not foster monopolies or concentrate control—still garnered 69% support. The most endorsed principle—that humans must retain control of AI and prevent it from harming children, families, and communities—achieved 80% support.

For Javorsky, these poll results unequivocally validated the conference's conclusions. She remarked, "It’s one thing to have a whole bunch of civil society actors in a room together and think something’s representative. But you have to actually validate those with real people. This is actually resonating with them."

The declaration arrives against a backdrop of turbulent events: Anthropic's recent discussions about its AI potentially achieving consciousness, its dispute with the Pentagon over military use of AI for autonomous lethal weapons without human oversight, OpenAI's subsequent move to secure its own Pentagon contract, the reported use of Anthropic-powered tools in a high-profile assassination, emerging reports of AI-driven layoffs, and increasing revelations about the Pentagon's extensive surveillance demands. Against that backdrop, Alan Minsky, CEO of the Progressive Democrats of America and a meeting attendee, told The Verge that he anticipates no significant political opposition to the declaration from either the left or the right.

Minsky sharply criticized certain tech leaders, stating, "Altman and Musk, certainly, have taken a flippant manner towards what are serious threats to communities: the psychological deterioration of a population that lives increasingly online, the impact of continual economic maldistribution of wealth, and, of course, contempt for the idea that basic protection must come before profits." He added, "The risk of an existential threat to humanity is no longer something they even blink at. As the public realizes that this is their attitude, that they have utter contempt for the average person’s welfare — yes, we think the public will be on our side."

Editorial Staff
