UK Lawmakers Accuse Google of Breaking AI Safety Pledge with Gemini 2.5 Pro Release
Sixty U.K. lawmakers accused Google DeepMind of violating international AI safety commitments by releasing Gemini 2.5 Pro without comprehensive public safety disclosures. The allegations center on Google's failure to "publicly report" system capabilities and risk assessments, as pledged at the May 2024 international AI summit co-hosted by the U.K. and South Korea.
Key concerns include:
- Releasing the model without detailed safety information
- Not immediately clarifying external testing processes
- Treating safety commitments as optional rather than mandatory
Notable signatories, including Baroness Beeban Kidron and former Defence Secretary Des Browne, warned that such practices could set a dangerous precedent in AI development.
The accusations highlight a broader industry trend where major AI companies appear to be retreating from comprehensive safety reporting, potentially undermining international AI governance efforts.
Sources (4)
- 60 U.K. Lawmakers Accuse Google of Breaking AI Safety Pledge (2025-08-29)
- AI Safety Institute Overview (2024-02-01)
- British Lawmakers Accuse Google DeepMind of 'Breach of Trust' Over Delayed Gemini 2.5 Pro Safety Report (2025-08-29)
- Google DeepMind Accused of Breaking AI Safety Commitments by UK Lawmakers (2024-03-25)