Google's recent technical report on its powerful AI model, Gemini 2.5 Pro, published weeks after the model's release, has drawn criticism for its lack of detailed safety evaluations. While such reports are typically seen as crucial for ensuring the transparency and safety of AI models, experts noted that Google's report fails to provide sufficient information on the model's potential risks. The report notably omits findings from some of the company's safety tests, particularly its "dangerous capability" evaluations, which Google reserves for a separate audit.
Unlike some of its competitors, Google's policy is to release reports only once a model is considered fully developed and not experimental. However, experts, including Peter Wildeford of the Institute for AI Policy and Strategy, have expressed concerns about the sparse details in the Gemini 2.5 Pro report. Wildeford pointed out that it's impossible to assess whether Google's safety measures meet its public promises, making it hard to gauge the model's true safety.
Thomas Woodside from the Secure AI Project also voiced disappointment, highlighting that Google's commitment to timely, thorough safety evaluations remains uncertain. Google has yet to release a safety report for Gemini 2.5 Flash, a smaller version of the model introduced recently, further fueling doubts among experts.

Google's history of delayed or insufficient safety reports is not unique in the AI industry. Meta and OpenAI have faced similar criticism for their own AI model evaluations. Despite Google's previous assurances to regulators about providing transparent safety reports, the inconsistency in releasing timely safety evaluations for key AI models raises concerns about the company's commitment to AI safety.