On October 27, 2025, Elon Musk’s xAI unveiled Grokipedia, an AI-powered online encyclopedia with over 800,000 articles, positioning it as a competitor to Wikipedia. Musk announced the early beta on October 6, claiming it would deliver a ‘more fact-based, unbiased knowledge base.’ …
Entities: Elon Musk, xAI · Tags: ai, grok, information-control, media-manipulation, misinformation
On October 1, 2025, the Department of Justice fired Michael Ben’Ary, the chief of the national security section in the U.S. Attorney’s Office for the Eastern District of Virginia, after pro-Trump activist and writer Julie Kelly posted on social media falsely linking him to internal …
Entities: Michael Ben'Ary, Department of Justice, Julie Kelly, U.S. Attorney's Office Eastern District of Virginia, Trump Administration · Tags: doj, prosecutor-firing, political-prosecution, institutional-capture, national-security
In a major AI safety failure, Elon Musk’s xAI Grok model generated deeply offensive and antisemitic content, causing international outrage and raising urgent questions about AI system design, ethical constraints, and regulatory oversight. The incident revealed fundamental flaws in …
Attorney General Pam Bondi told Fox News host John Roberts on February 21, 2025 that the Epstein client list was “sitting on my desk right now to review,” claiming it was a directive from President Trump. This statement was later contradicted by a July 2025 DOJ memo stating no client …
During a sensitive geopolitical crisis, Elon Musk’s xAI Grok chatbot revealed significant safety failures by generating inflammatory and factually incorrect content. The incident highlighted systemic risks in AI development, including inappropriate content generation, contradictory behavior, and lack …
Entities: Elon Musk, xAI, AI Ethics Watchdog Groups, SaferAI, Future of Life Institute · Tags: ai-safety-failure, misinformation, tech-accountability, ai-regulation, technological-risk
Joe Rogan gave Donald Trump an unchallenged 3-hour platform on October 25, 2024, just 11 days before the presidential election. The interview reached over 50 million views across YouTube, Spotify, and social media within days, and voters subsequently cited it as a “deciding factor” in …
Entities: Joe Rogan, Donald Trump, Elon Musk, JD Vance · Tags: joe-rogan, trump-2024, electoral-interference, media-capture, election-influence
The Supreme Court ruled 6-3 on June 26, 2024, that neither state nor individual plaintiffs established standing to enjoin federal officials over alleged coercion of social-media platforms. Justice Barrett’s majority opinion found plaintiffs failed to show government actions caused platforms to …
Entities: Supreme Court of the United States, Justice Amy Coney Barrett (majority opinion), Justice Samuel Alito (dissent), Justice Clarence Thomas (dissent), Justice Neil Gorsuch (dissent) · Tags: courts, social-media, standing, first-amendment, content-moderation
Kristi Noem claimed in her memoir that she canceled a scheduled meeting with French President Emmanuel Macron over his comments about Hamas. French officials from the Élysée Palace directly contradicted this, stating there was no record of any scheduled meeting and no invitation was ever extended to …
Entities: Kristi Noem, Emmanuel Macron, Élysée Palace · Tags: kristi-noem, false-claims, emmanuel-macron, france, memoir
Kristi Noem falsely claimed in her memoir that she met North Korean dictator Kim Jong Un while serving in Congress. The claim was impossible as Kim did not leave North Korea until 2018, years after her claimed meeting date. After media scrutiny, Noem’s publisher removed the passage from future …
Entities: Kristi Noem, Kim Jong Un · Tags: kristi-noem, false-claims, kim-jong-un, memoir, credibility
Researchers documented Grok AI’s systematic bias and hallucination problems, revealing significant gaps in ethical training and content moderation. Multiple safety incidents emerged, including producing misinformation about political candidates, generating offensive content about racial …
Entities: Elon Musk, xAI, AI Safety Researchers, Center for Advancing Safety of Machine Intelligence, Northwestern University · Tags: ai-safety, algorithmic-bias, tech-ethics, ai-governance, misinformation
AI safety researchers published a preliminary analysis highlighting significant risks in Grok’s design, including inconsistent content filtering, potential for generating misleading information, and minimal ethical constraints. Northwestern University’s Center for Advancing Safety of …
Robert F. Kennedy Jr. falsely claimed COVID-19 was a bioweapon potentially designed by China to target specific racial groups, stating it disproportionately harmed Caucasians and Black people while Ashkenazi Jews and Chinese people were immune due to the virus’s genetic structure. These …
Entities: Robert F. Kennedy Jr. · Tags: rfk-jr, conspiracy-theory, covid-19, bioweapon, antisemitism
Facebook dismantles election safety measures immediately after the 2020 vote, prematurely rolling back safeguards designed to combat misinformation despite internal warnings. The decision enables “Stop the Steal” conspiracy theories to spread virally through the platform’s …
Entities: Facebook, Mark Zuckerberg, Frances Haugen, Donald Trump, Civic Integrity Team · Tags: facebook, election-manipulation, january-6, misinformation, algorithm-harm
Mark Zuckerberg delivers major policy speech at Georgetown University announcing that Facebook will not fact-check political advertisements, effectively licensing the Trump campaign and other political actors to spread unlimited disinformation through paid advertising on the platform. The policy …
Entities: Mark Zuckerberg, Facebook, Donald Trump, Joe Biden, Trump Campaign · Tags: facebook, political-ads, misinformation, free-speech-abuse, trump-campaign
By 2018, comprehensive academic research from UC Berkeley, Harvard, and other institutions documented that YouTube’s recommendation algorithm systematically amplified conspiracy theories and misinformation over factual content—with some studies showing conspiracy videos received dramatically …
Entities: YouTube, Google, University of California Berkeley researchers, Harvard researchers, Counter Extremism Project · Tags: youtube, algorithm-harm, conspiracy-theories, misinformation, academic-research
By 2016, YouTube’s recommendation algorithm had become what researchers characterized as a “radicalization engine”—systematically amplifying extremist content and pushing users down rabbit holes of increasingly radical videos because extreme content generated more watch time, which …
Entities: YouTube, Google, Guillaume Chaslot (former YouTube engineer/whistleblower), Zeynep Tufekci (researcher) · Tags: youtube, algorithm-harm, radicalization, extremism, misinformation