The Trump administration’s Department of Government Efficiency (DOGE) developed an AI tool aimed at eliminating 50% of federal regulations by January 2026, using AI-driven analysis to target approximately 100,000 rules across multiple agencies.
Donald Trump, Trump Administration, Elon Musk, Department of Government Efficiency (DOGE) | kleptocracy, ai-governance, deregulation, trump-administration
Anthropic achieved comprehensive FedRAMP High certification for Claude across multiple cloud platforms, enabling secure AI deployment in all three branches of the U.S. government. Through a groundbreaking $1 OneGov deal with the General Services Administration (GSA), Anthropic offers Claude AI …
Anthropic, Palantir, U.S. intelligence agencies, defense agencies, FedRAMP | ai-governance, national-security, surveillance-tech, regulatory-capture, classified-systems
Meta announced a groundbreaking policy shift, making its open-source Llama AI models available to U.S. government agencies and defense contractors. Partnering with Accenture, AWS, Anduril, Lockheed Martin, Microsoft, Oracle, Scale AI, and others, Meta opened its technology for national …
Meta, Mark Zuckerberg, U.S. government agencies, Lockheed Martin, Palantir | ai-governance, military-contracts, open-source, regulatory-capture, national-security
OpenAI formally signed a Memorandum of Understanding (MOU) with the U.S. AI Safety Institute at NIST, establishing an unprecedented framework for pre-release AI model testing and safety evaluation. The agreement represents a strategic approach to industry self-regulation, allowing OpenAI to …
Sam Altman, OpenAI, U.S. AI Safety Institute, NIST, Elizabeth Kelly | ai-governance, regulatory-capture, ai-safety, tech-oligarchy, government-partnerships
xAI, Elon Musk’s AI company, faced significant criticism over the lack of transparency and safety protocols surrounding its Grok AI system. Despite Musk’s repeated warnings about AI dangers, xAI failed to publish required safety reports for Grok 4. Leading AI safety researchers from …
A bipartisan group of congressional legislators launched a multi-agency investigation into xAI’s Grok AI. The investigation uncovered systemic issues with algorithmic bias, content-generation risks, and problematic government contracting practices. Key …
Elon Musk, xAI, Senator Elizabeth Warren, Rep. Don Bacon, Rep. Tom Suozzi | ai-governance, technology-regulation, congressional-investigation, algorithmic-bias, national-security
U.S. House Oversight Democrats, led by Representatives Robert Garcia and Stephen Lynch, launched an investigation into Grok AI’s development practices. The investigation scrutinized potential risks to the public information ecosystem, including concerns about privacy, …
Robert Garcia, Stephen Lynch, Elon Musk, xAI, House Oversight Committee | ai-governance, technology-regulation, artificial-intelligence, cybersecurity, congressional-oversight
Sixty U.K. lawmakers accused Google DeepMind of violating international AI safety commitments by releasing Gemini 2.5 Pro without comprehensive public safety disclosures. The allegations centered on Google’s failure to ‘publicly report’ system capabilities and risk assessments as …
Google DeepMind, Baroness Beeban Kidron, Des Browne, UK AI Safety Institute | ai-safety, regulatory-violations, tech-accountability, ai-governance, transparency-failures
Researchers documented Grok AI’s systematic bias and hallucination problems, revealing significant gaps in ethical training and content moderation. Multiple safety incidents emerged, including misinformation about political candidates, offensive content about racial …
Elon Musk, xAI, AI Safety Researchers, Center for Advancing Safety of Machine Intelligence, Northwestern University | ai-safety, algorithmic-bias, tech-ethics, ai-governance, misinformation
AI safety researchers published a preliminary analysis highlighting significant risks in Grok’s design, including inconsistent content filtering, potential for generating misleading information, and minimal ethical constraints. Northwestern University’s Center for Advancing Safety of …