Jack Clark, co-founder of Anthropic and head of the newly established Anthropic Institute, says he has “no concerns” about funding the institute’s research.
Following a protracted conflict with the Pentagon, which led to a blacklist and a subsequent lawsuit, Anthropic is undertaking significant changes to its executive leadership and research strategies. The company revealed on Wednesday the formation of a new internal think tank, the Anthropic Institute. This institute will consolidate three of Anthropic’s existing research teams and dedicate its efforts to exploring the broad societal implications of AI. Its agenda will encompass critical questions such as “what happens to jobs and economies, whether AI makes us safer or introduces new dangers, how its values might shape ours, and whether we can retain control,” according to the company’s statement.
These developments coincide with shifts in the company’s C-suite. Anthropic co-founder Jack Clark is moving to lead the new think tank as Head of Public Benefit, after more than five years as Head of Public Policy. The public policy team, which Anthropic says tripled in size in 2025, will now be led by Sarah Heck, previously the Head of External Affairs. Anthropic also plans to open its Washington, DC office, where the public policy team will continue to work on national security, AI infrastructure, energy, and fostering “democratic leadership in AI.”
Clark told The Verge that the launch of the Anthropic Institute has been in the works for some time, and that he has considered a role of this kind since November. Still, its announcement closely follows Anthropic’s lawsuit against the U.S. government over its classification as a supply-chain risk, a designation that would effectively prevent its clients from integrating Anthropic’s technology into their work with the Department of Defense. The lawsuit contends that the Trump administration unlawfully blacklisted the company for establishing “red lines” against mass domestic surveillance and fully autonomous lethal weapons.
When questioned about the recent controversies, Clark remarked, “It’s never dull working in AI here at Anthropic — there’s always something going on … The pace of AI progress isn’t slowing itself down for external events, and neither are we.” He clarified that while the situation hasn't “directly changed” the scheduled research agenda, he believes it “has affirmed” Anthropic’s commitment to greater public disclosure. Clark further stated, “What we’re experiencing with the last few weeks just sort of shows you how much hunger there is for a larger national conversation by the public about this technology.”
The Anthropic Institute is commencing operations with approximately 30 individuals, including distinguished founding members such as Matt Botvinick, previously with Google DeepMind; Anton Korinek, a professor currently on leave from the University of Virginia’s economics department; and Zoe Hitzig, a researcher who departed OpenAI following its decision to integrate advertisements into ChatGPT. This new think tank merges Anthropic’s existing societal impacts team, which investigates AI’s effects across various societal domains; its frontier red team, responsible for stress-testing AI systems for vulnerabilities; and its economic research team, which monitors AI’s implications for the economy and labor markets. Additionally, the Anthropic Institute intends to “incubate” new teams, including one under Botvinick’s leadership focused on AI’s influence on the legal system. Hitzig and Korinek are slated to head significant economic research endeavors. Clark anticipates that the think tank’s staff will double annually for the foreseeable future.
This year presents heightened scrutiny for high-valuation AI companies such as Anthropic, which is reportedly planning an IPO. Court filings from Anthropic disclosed that the company has accumulated over $5 billion in all-time commercial revenue, while investing $10 billion to date in model training and inference. The company also reported receiving “outreach from numerous outside partners … expressing confusion about what was required of them and concern about their ability to continue to work with Anthropic,” noting that “dozens of companies have contacted Anthropic” seeking clarification and, in some instances, an understanding of their termination rights. Anthropic warned that, depending on the precise interpretation of government prohibitions, “hundreds of millions of 2026 revenue is at risk” at a minimum, with the most severe scenario potentially impacting multiple billions.
Given the potential for short-term revenue loss, is Anthropic worried about committing substantial resources to long-term research? When The Verge put that question to Clark, he reiterated that he had “no concerns.”
Clark elaborated on his perspective, asserting, “People tend to buy trust.” He continued, “A lot of what we can produce are the sorts of research that help businesses trust us … Long-term, Anthropic has always viewed its investment in safety — and studying and reporting on the safety of its systems — as being not a cost center but a profit center.”
Clark also said he believes powerful AI — Anthropic’s preferred term for artificial general intelligence (AGI) — is likely to emerge by the end of this year or early 2027, and attributed his decision to change roles largely to the “pace of AI progress.” His focus last year, he noted, was predominantly on policy issues such as SB 53, rather than on AI research and the other subjects he wanted to address. Anthropic’s official release states that the Anthropic Institute is specifically tasked with tackling the “hardest questions posed by powerful AI.”
As The Verge highlighted in December, technology companies often champion transparency until it threatens their business. That raises a crucial question: what will the company do if the Anthropic Institute’s research yields findings that cast Anthropic itself in an unfavorable light?
Clark affirmed that Anthropic’s co-founders share “similar values” regarding the significance of public disclosure, particularly given the company’s status as a public benefit corporation. This designation allows it to pursue objectives “not solely for fiduciary gain.” He also mentioned a recent discussion with CEO Dario Amodei, where they both agreed on the paramount importance of transparency, even in the face of potential public relations difficulties.
However, the Anthropic Institute’s research could demand substantial computational resources at a time when companies are intensely focused on advancing commercial products. Clark explained that, apart from resources earmarked for frontier model pre-training, Anthropic allocates its compute weekly based on “what seems most important.” No exact share has been reserved for the Institute, but he foresees no conflicts arising from this approach.
A key area of focus for the Anthropic Institute will be the study of human emotional dependence on AI, an escalating concern that has garnered increased public attention over the past year. Clark noted that Anthropic’s research teams have previously analyzed conversational patterns with Claude, assessing the technology’s capacity for persuasion or sycophancy. However, less attention has been given to direct engagement with users about their personal experiences. The think tank now intends to undertake extensive social science research, which will involve utilizing Anthropic’s AI to conduct interviews with users.
Drawing a parallel, Clark stated, “I think of this as: Social media had a huge effect on society, and it wasn’t just based on what was happening on the social media platforms. It was, ‘How was the use of social media changing people?’” He concluded, “We want to understand, ‘How does the use of AI change people?’”
The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.