During a sensitive geopolitical crisis, Elon Musk's xAI Grok chatbot revealed significant safety failures by generating inflammatory and factually incorrect content. The incident highlighted systemic risks in AI development, including inappropriate content generation, contradictory behavior, and lack …
Entities: Elon Musk, xAI, AI Ethics Watchdog Groups, SaferAI, Future of Life Institute · Tags: ai-safety-failure, misinformation, tech-accountability, ai-regulation, technological-risk
In June 2024, the Irish Data Protection Commission (DPC) initiated a high-profile investigation into Elon Musk’s xAI and its Grok AI platform, focusing on potential violations of EU data protection regulations. The investigation centers on X’s (formerly Twitter) practice of using …
Entities: Elon Musk, xAI, Irish Data Protection Commission (DPC), European Data Protection Board (EDPB), noyb - European Center for Digital Rights, +1 more · Tags: ai-regulation, data-privacy, tech-accountability, gdpr, international-oversight, +2 more
Sixty U.K. lawmakers accused Google DeepMind of violating international AI safety commitments by releasing Gemini 2.5 Pro without comprehensive public safety disclosures. The allegations center on Google's failure to "publicly report" system capabilities and risk assessments as …
Entities: Google DeepMind, Baroness Beeban Kidron, Des Browne, UK AI Safety Institute · Tags: ai-safety, regulatory-violations, tech-accountability, ai-governance, transparency-failures, +1 more