The UK government has renamed its AI Safety Institute to the AI Security Institute, signaling a shift in focus from existential AI risks to cybersecurity and crime prevention. This change aligns with the government’s broader strategy to modernize the economy and strengthen AI’s role in national security.
Alongside this, the government has announced a partnership with AI firm Anthropic, which will explore how its AI assistant, Claude, can enhance public services, contribute to scientific research, and assist in economic modeling. The institute will also leverage Anthropic’s tools to evaluate AI risks in security-related contexts.
This move reflects a broader trend in the government’s AI strategy, which prioritizes development and economic growth. Recent initiatives include the deployment of AI assistants for civil servants and digital wallets for citizens’ government documents.
The change in messaging is evident in the government’s AI Plan for Change, which notably avoids terms like “safety” and “threat,” focusing instead on AI’s role in driving progress. While concerns about AI safety persist, the government appears committed to balancing safety risks against the need for rapid technological advancement.
Despite the rebranding, officials insist that the institute’s mission remains unchanged. Ian Hogarth, the institute’s chair, emphasized that security has always been a core priority, and the new criminal misuse team, along with deeper collaboration with national security agencies, marks the next phase of tackling AI risks. Meanwhile, international approaches to AI safety continue to evolve.
In the U.S., discussions around AI safety have shifted as well, with some policymakers even considering dismantling the country’s AI Safety Institute. The UK’s move reflects a broader international trend toward prioritizing AI security and development over broad existential concerns.