Kennedy School Review

This Time is Different, or So They Say

We are less than four months away from a highly contentious election in Myanmar. The pro-military Union Solidarity and Development Party is desperate to make a comeback, the case accusing Myanmar of genocide continues at the International Court of Justice, the repatriation of Rohingya refugees remains an open question, and COVID-19 has delayed the peace talks meant to end the 71-year-long civil war.

As we draw closer to election day, incidents of disinformation and hate speech are both expected to rise.

Facebook’s history in the country raises concerns about its ability to moderate content during this election. For the majority of Myanmar’s 21 million internet users, Facebook equals the internet. After 2017—a year in which 25,000 Rohingyas were killed and 700,000 were displaced—Facebook received widespread criticism for its sluggish response to the hate speech targeted at minorities. The UN Independent Fact-Finding Mission on Myanmar explicitly called Facebook out for its “determining role” in the crisis. Business for Social Responsibility (BSR), a nonprofit that Facebook commissioned to run a human rights impact assessment, came back with similar findings.

But Facebook says 2020 will be different. In March, Facebook’s Public Policy Director for Southeast Asia, Rafael Frankel, proudly spoke about how the company is preparing for the election. It has established a team to work exclusively on Myanmar, he said, and has hired 100 native Burmese-language speakers to moderate content. Additionally, Frankel claims that Facebook has taken down hundreds of pages and groups that coordinate to mislead others (“coordinated inauthentic behavior”), has become four times better at proactively detecting hate speech, and is now collaborating with important stakeholders including Myanmar’s national election commission.

Impressive as it may be, this list conceals the many challenges and open questions that Facebook continues to reckon with. For all its self-proclaimed improvements, Facebook’s response is clouded by a fundamental lack of transparency, raising questions about whether it is truly ready for what’s to come.

Yes, Facebook took down hundreds of pages involved in coordinated inauthentic behavior; however, its methods lack consistency. In late 2019, students at Annie Lab, a fact-checking project at The University of Hong Kong, discovered that Facebook had overlooked a network of Myanmar-related pages engaged in coordinated, pro-military messaging campaigns. They linked these sources to Russia, an important ally of the Myanmar military, and found that the content had reached 4 million followers. If external parties with little to no access to Facebook datasets uncovered such major enforcement gaps, we can’t help but wonder how reliable the platform’s capabilities are and why Frankel did not acknowledge these limitations.

What’s equally worrying is Facebook’s flawed measure of success.

Facebook’s improvements in proactive detection of hate speech—the ability to identify and take down hate speech before users report it—are undoubtedly impressive. Successfully applying machine learning algorithms to such content is exceptionally challenging given the nuances in human language.

However, in the case of Myanmar, this metric is not particularly telling. With low levels of digital literacy and widespread fear of a discriminatory state—a context that Facebook is aware of—vulnerable groups likely grossly underreport hate speech against them. Catching up to user reports may indicate progress, but it does not imply success.

What we really need data on is prevalence—the frequency at which hate speech is viewed by users. Here, Facebook’s Community Standards Enforcement Report leaves a placeholder. Apparently, it “cannot estimate this metric right now.” If we don’t know how often users see hate speech online (and how that has changed over time), can we really affirm that things have improved?

Finally, in the 20 months since BSR released its findings, Facebook has not once provided an update on its plans to act on the report’s recommendations—which the company itself solicited. Whether these plans are relatively straightforward (a Myanmar-specific version of the Community Standards Enforcement Report), highly sensitive (election-specific scenario planning), or incredibly complex (the stand-alone human rights policy), the platform has largely left us in the dark. As we wait for updates, we are left grappling with the same question yet again: Has Facebook allocated sufficient resources to Myanmar?

In November 2018, Alex Warofka, Product Policy Manager at Facebook, said: “We know we need to do more to ensure we are a force for good in Myanmar.” Twenty months later, this still rings true.

Time is running out. We must urge journalists and policymakers to ask tough questions and demand that Facebook be more transparent about its election integrity plan for Myanmar. If we fail to do so, we too may be complicit in enabling violence and hate.

 

Sanjana Rajgarhia is a Harvard Kennedy School graduate who is passionate about technology policy and human rights. She has lived in Southeast Asia for over eight years and has worked with a variety of technology startups in the region, as well as the United Nations Secretariat in Bangkok, Amnesty International, and Harvard’s Shorenstein Center on Media, Politics and Public Policy.

 

Edited by: Derrick Flakoll

Photo by: Sippakorn Yamkasikorn