xAI, Elon Musk’s AI company, drew significant criticism for its lack of transparency and safety protocols surrounding the Grok AI system. Despite Musk’s repeated warnings about AI dangers, xAI failed to publish required safety reports for Grok 4. Leading AI safety researchers from …
Following extensive criticism from AI safety researchers, xAI announced partial updates to Grok AI’s safety mechanisms in July 2025. The changes came after months of scrutiny over the chatbot’s controversial outputs, including instances of antisemitic content and other problematic responses. …
At the AI Seoul Summit in May 2024, xAI committed to the Frontier AI Safety Commitments, agreeing to provide transparency around model capabilities, risk assessments, and potential inappropriate use cases. However, the company faced significant criticism for not fully disclosing its safety …
A consortium of AI safety researchers published a preliminary assessment of Grok AI, highlighting significant concerns about its content generation capabilities and ethical safeguards. The report identified multiple instances where the model could generate potentially harmful or …