Facebook Algorithm Amplifies Myanmar Military Hate Speech Enabling Rohingya Genocide
Facebook’s engagement-maximizing algorithm proactively amplifies Myanmar military’s anti-Rohingya hate speech and genocide propaganda, directly contributing to systematic ethnic cleansing that kills thousands and displaces over 700,000 Rohingya Muslims. The platform’s surveillance capitalism business model prioritizes engagement over human safety despite years of warnings from civil society.
Algorithmic Amplification of Genocide Propaganda
Beginning in August 2017, Myanmar’s military forces launched a systematic campaign of ethnic cleansing against Rohingya Muslims in Rakhine state, unlawfully killing thousands, raping women and girls, burning entire villages, and forcing over 700,000 people to flee to Bangladesh. Facebook’s algorithms played a determinative role in enabling this genocide by proactively amplifying and promoting military propaganda and hate speech that incited mass violence.
Meta’s engagement-optimization algorithms functioned as a force multiplier for genocide propaganda: over 70% of the video views of one prominent anti-Rohingya hate figure came from “chaining” - the platform’s recommendation feature that automatically suggests related videos to users who had been watching other content. In the lead-up to the 2017 atrocities, these algorithms disproportionately amplified the most inflammatory material, treating hate speech as highly engaging content that kept users on the platform longer and generated more advertising revenue.
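The structural dynamic described above can be illustrated with a deliberately simplified sketch. The code below is not Meta’s ranking system; the signals, weights, and names are hypothetical. It shows only the general pattern investigators criticized: when a feed is ordered purely by predicted engagement, the content that provokes the strongest reactions is systematically given the widest reach, with no term in the objective that distinguishes outrage from any other form of engagement.

```python
# Toy model of engagement-optimized feed ranking. Illustrative only:
# the fields, weights, and scoring formula are invented assumptions,
# not Meta's actual implementation.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_comments: float    # model-predicted number of comments
    predicted_shares: float      # model-predicted number of shares
    predicted_watch_seconds: float  # model-predicted watch time


def engagement_score(post: Post) -> float:
    # The objective rewards predicted engagement and nothing else,
    # so posts that provoke heavy commenting and sharing rank highest
    # regardless of whether that engagement is driven by outrage.
    return (3.0 * post.predicted_comments
            + 2.0 * post.predicted_shares
            + 0.01 * post.predicted_watch_seconds)


def rank_feed(candidates: list[Post]) -> list[Post]:
    # Order candidate posts so the highest-engagement items surface first.
    return sorted(candidates, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = rank_feed([
        Post("local_news", 2, 1, 40),
        Post("inflammatory_rumor", 50, 30, 90),
    ])
    print([p.post_id for p in feed])  # the inflammatory post ranks first
```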
The Tatmadaw (Myanmar’s armed forces) exploited Facebook’s algorithmic amplification systematically, with hundreds of military personnel operating fake accounts and seemingly innocuous entertainment pages to flood the platform with anti-Rohingya content. Facebook’s algorithms amplified this coordinated propaganda campaign, pushing military disinformation and hate speech to millions of Myanmar users who increasingly relied on Facebook as their primary source of news and information.
Ignored Warnings and Catastrophic Moderation Failures
Civil society organizations repeatedly warned Facebook from 2013 to 2017 that the platform was fueling ethnic violence and contributing to conditions for genocide, warnings that the company systematically ignored. As late as mid-2014, Facebook employed only one Burmese-speaking content moderator - based in Dublin, Ireland - to monitor posts from Myanmar’s 1.2 million active users, a staffing level that made effective moderation impossible.
Human rights organizations documented that Facebook was being weaponized to incite violence against the Rohingya, explicitly warning the company’s employees that the platform was fueling an impending genocide, much as radio propaganda had done in the Rwandan genocide. Despite these urgent warnings, Facebook failed to invest in Burmese-language content moderation, failed to adjust its algorithms to deprioritize inflammatory content, and failed to take action against military-linked accounts spreading genocidal propaganda.
Internal Meta documents from August 2019 acknowledged: “We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook are affecting societies around the world. We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.” Yet the company took no action to fundamentally alter these engagement-maximizing mechanisms.
Surveillance Capitalism Business Model as Genocide Enabler
Facebook’s role in the Rohingya genocide demonstrates how surveillance capitalism business models create systematic incentives for algorithmic amplification of violence. The platform’s fundamental design - maximizing engagement to generate advertising revenue - treats inflammatory content advocating violence as desirable because it keeps users on the platform longer. This business model left Facebook’s algorithms structurally incapable of preventing genocide, even after human rights organizations explicitly warned the company.
The UN Independent International Fact-Finding Mission on Myanmar concluded in 2018 that “Facebook has been a useful instrument for those seeking to spread hate” and that the platform had “turned into a beast” in Myanmar. UN investigators determined that Facebook played a “determining role” in the genocide against the Rohingya, with the platform’s algorithmic amplification of military hate speech creating conditions that enabled systematic atrocities.
The United States government formally declared the Myanmar military’s actions against the Rohingya to constitute genocide in 2022. Legal actions in U.S. and UK courts accuse Facebook of negligence that facilitated genocide, seeking over $150 billion in compensation on behalf of victims. Amnesty International has called for Meta to pay reparations for its role in the violent repression of Rohingya Muslims, emphasizing that the company’s profit-driven algorithmic systems made mass atrocities more likely and more severe.
As of 2025, Meta has paid no reparations and faced no criminal accountability for its algorithmic facilitation of genocide, demonstrating the impunity enjoyed by tech platforms even when their systems directly contribute to mass atrocities and crimes against humanity.
Sources (4)
- Facebook algorithms promoted anti-Rohingya violence in Myanmar (2022-09-29) [Tier 1]
- Amnesty report finds Facebook amplified hate ahead of Rohingya massacre (2022-09-29) [Tier 1]
- Facebook's systems promoted violence against Rohingya (2022-09-29) [Tier 1]
- Facebook and Genocide in Myanmar (2023-06-15) [Tier 1]