UK Lawmakers Accuse Google of Breaking AI Safety Pledge with Gemini 2.5 Pro Release

Importance: 8/10

Sixty U.K. lawmakers accused Google DeepMind of violating international AI safety commitments by releasing Gemini 2.5 Pro without comprehensive public safety disclosures. The allegations center on Google’s failure to ‘publicly report’ system capabilities and risk assessments as pledged at a February 2024 international AI summit co-hosted by the U.K. and South Korea.

Key concerns include:

  • Releasing the model without detailed safety information
  • Failing to promptly clarify its external safety-testing process
  • Treating safety commitments as optional rather than mandatory

Notable signatories, including Baroness Beeban Kidron and former Defence Secretary Des Browne, warned that such practices could set a dangerous precedent for AI development.

The accusations highlight a broader industry trend where major AI companies appear to be retreating from comprehensive safety reporting, potentially undermining international AI governance efforts.
