OpenAI formally signed a Memorandum of Understanding (MOU) with the U.S. AI Safety Institute at NIST, establishing an unprecedented framework for pre-release AI model testing and safety evaluation. The agreement represents a strategic approach to industry self-regulation, allowing OpenAI to …
Entities: Sam Altman, OpenAI, U.S. AI Safety Institute, NIST, Elizabeth Kelly, +1 more · Tags: ai-governance, regulatory-capture, ai-safety, tech-oligarchy, government-partnerships, +1 more
Following months of scrutiny, xAI announces updates to Grok AI’s content moderation and bias mitigation strategies. Key changes include halting the generation of election misinformation, blocking inappropriate image generation, and committing to more transparent safety frameworks. These updates come …
Entities: Elon Musk, xAI, AI Ethics Review Board, U.S. AI Safety Institute · Tags: ai-safety, tech-ethics, corporate-accountability, artificial-intelligence-regulation, government-ai-policy
xAI, Elon Musk’s AI company, encountered significant criticism for its lack of transparency and safety protocols surrounding the Grok AI system. Despite Musk’s repeated warnings about AI dangers, xAI failed to publish required safety reports for Grok 4. Leading AI safety researchers from …
Following extensive criticism from AI safety researchers, xAI announces partial updates to Grok AI’s safety mechanisms in July 2024. The changes come after months of scrutiny over the chatbot’s controversial outputs, including instances of antisemitic content and problematic responses. …
Entities: Elon Musk, xAI, Samuel Marks, Boaz Barak, Scott Wiener, +1 more · Tags: ai-safety, corporate-accountability, tech-regulation, ai-ethics, content-moderation, +1 more
At the AI Seoul Summit in May 2024, xAI committed to the Frontier AI Safety Commitments, agreeing to provide transparency around model capabilities, risk assessments, and potential inappropriate use cases. However, the company faced significant criticism for not fully disclosing its safety …
Elon Musk’s xAI faced intense scrutiny after releasing Grok AI without comprehensive safety documentation; the model generated antisemitic content and pulled opinions directly from Musk’s social media posts. Despite these controversies, xAI secured a $200 million Pentagon contract in …
Entities: Elon Musk, xAI, AI Safety Researchers, GSA AI Safety Team, Department of Defense, +1 more · Tags: ai-safety, tech-communication, corporate-accountability, election-technology, government-ai-contracts, +1 more
A consortium of AI safety researchers published a comprehensive preliminary assessment of Grok AI, highlighting significant concerns about its content generation capabilities and ethical safeguards. The report identified multiple instances where the model could generate potentially harmful or …
Entities: xAI, Independent AI Safety Researchers, Elon Musk, Samuel Marks, Dan Hendrycks, +1 more · Tags: ai-safety, tech-ethics, algorithm-evaluation, artificial-intelligence, institutional-capture
Following Grok’s launch by Elon Musk’s xAI in December 2023, AI ethics researchers and David Rozado’s political-compass analysis revealed significant safety and bias vulnerabilities. The chatbot demonstrated potential for generating controversial and politically skewed content, with …
Entities: AI Safety Researchers, David Rozado, xAI, Elon Musk · Tags: ai-safety, tech-ethics, algorithmic-bias, political-bias, machine-learning, +1 more
Elon Musk’s xAI launches Grok, an AI chatbot positioned as a ‘maximum truth-seeking’ alternative to existing AI assistants. Developed in just four months, Grok was introduced to a limited audience of X Premium users in November 2023. The chatbot was designed to answer ‘spicy …