The UK government has renamed its AI Safety Institute to the AI Security Institute, signaling a shift in focus from existential AI risks to cybersecurity and crime prevention. This change aligns with the government’s broader strategy to modernize the economy and strengthen AI’s role in national security.
Alongside this, the government has announced a partnership with AI firm Anthropic, which will explore how its AI assistant, Claude, can enhance public services, contribute to scientific research, and assist in economic modeling. The institute will also leverage Anthropic’s tools to evaluate AI risks in security-related contexts.
This move reflects a broader trend in the government’s AI strategy, which prioritizes development and economic growth. Recent initiatives include the deployment of AI assistants for civil servants and digital wallets for citizens’ government documents.
The change in messaging is evident in the government’s AI Plan for Change, which notably avoids terms like “safety” and “threat,” focusing instead on AI’s role in driving progress. While concerns about AI safety persist, the government appears committed to balancing those risks with the need for rapid technological advancement.
Despite the rebranding, officials insist that the institute’s mission remains unchanged. Ian Hogarth, the institute’s chair, emphasized that security has always been a core priority, and that the new criminal misuse team, along with deeper collaboration with national security agencies, marks the next phase of tackling AI risks. Meanwhile, international approaches to AI safety continue to evolve.
In the U.S., discussions around AI safety have shifted as well, with some policymakers even considering dismantling the country’s AI Safety Institute. The UK’s move suggests a global shift toward prioritizing AI security and development over broad existential concerns.