Google Faces Criticism for Lack of Transparency in AI Safety Reporting
Google recently released a technical report on its new AI model, Gemini 2.5 Pro, weeks after its launch. The report, however, has been criticized for being light on details, making it challenging for experts to assess the potential risks posed by the model.
Technical reports play a crucial role in unveiling information about AI that companies may not widely publicize. They are considered essential for independent research and safety evaluations within the AI community.
Google’s safety reporting approach differs from that of some of its competitors, as it publishes technical reports only once a model has moved past the experimental stage. The company also does not include findings from all of its dangerous capability evaluations in these reports, reserving them for a separate audit.
Despite these practices, experts have expressed disappointment over the lack of information in the Gemini 2.5 Pro report. Notably, the report does not mention Google’s Frontier Safety Framework (FSF), which aims to identify future AI capabilities that could cause severe harm.
Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, criticized the sparse nature of the report, stating that it hinders the verification of Google’s commitments to safety and security. Thomas Woodside, co-founder of the Secure AI Project, also raised concerns about the timeliness of supplemental safety evaluations from Google.
Google has promised a report for its Gemini 2.5 Flash model but has yet to release one. The delay has led to calls for more frequent updates and greater transparency in Google’s reporting practices.
Google is not the only tech company facing scrutiny over transparency in AI safety reporting. Meta and OpenAI have also been accused of providing insufficient information in their safety evaluations for new AI models.
Google has made commitments to regulators and governments regarding AI safety testing and reporting, which makes transparency in the development and deployment of its models all the more important. Kevin Bankston, a senior adviser on AI governance, described the trend of vague and sporadic reports as a “race to the bottom” in AI safety.
Despite the lack of detail in its technical reports, Google has stated that it conducts safety testing and adversarial red teaming for its models before release.