In just 21 days, a Facebook-led experiment in India reveals a dark truth

iNDICA NEWS BUREAU-

Facebook in India has been selective in curbing hate speech, misinformation, and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press.

The social media giant set up a test account in India in February 2019 to determine how its own algorithms affect what people see in one of its fastest-growing and most important overseas markets. The results stunned the company’s own staff and cast doubt over the company’s motivations and interests.

The internal company documents on India highlight Facebook’s constant struggles in quashing abusive content on its platforms in the world’s biggest democracy and the company’s largest growth market.

The files show that Facebook has been aware of the problems for years, raising questions over whether it has done enough to address these issues.

Many critics and digital experts say it has failed to do so, especially in cases where members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party, the BJP, are involved.

The leaked documents include a trove of internal company reports on hate speech and misinformation in India, much of it intensified by the platform’s own “recommended” feature and algorithms.

They also include company staffers’ concerns over the mishandling of these issues and their discontent over the viral “malcontent” on the platform.

According to the documents, Facebook saw India as one of the most “at-risk countries” in the world and identified both Hindi and Bengali languages as priorities for “automation on violating hostile speech.” Yet, Facebook didn’t have enough local language moderators or content-flagging in place to stop misinformation that at times led to real-world violence.

The Facebook employee who created the test account kept it live for three weeks, a period during which an extraordinary event shook India: a militant attack in disputed Kashmir killed more than 40 Indian soldiers, bringing the country to the brink of war with rival Pakistan.

Within those three weeks, the new user’s feed turned into a maelstrom of fake news and incendiary images. There were graphic photos of beheadings, doctored images of Indian airstrikes against Pakistan and jingoistic scenes of violence. One group for “things that make you laugh” included a fake news story claiming that 300 terrorists had died in a bombing in Pakistan.

“I’ve seen more images of dead people in the past 3 weeks than I’ve seen in my entire life total,” one staffer wrote, according to a 46-page research note that’s among the trove of documents released by Facebook whistleblower Frances Haugen.

Seemingly benign and innocuous groups recommended by Facebook quickly morphed into something else altogether, where hate speech, unverified rumors and viral content ran rampant.

The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of the content was extremely graphic.

The test proved telling because it was designed to focus exclusively on Facebook’s role in recommending content. The trial account used the profile of a 21-year-old woman living in Jaipur and hailing from Hyderabad. The user followed only pages or groups recommended by Facebook or encountered through those recommendations. The author of the research note termed the experience an “integrity nightmare.”

It sparked deep concerns over what such divisive content could lead to in the real world, where local news outlets at the time were reporting on Kashmiris being attacked in the fallout.

“Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the researcher asked in their conclusion.

While Haugen’s disclosures have painted a damning picture of Facebook’s role in spreading harmful content in the U.S., the India experiment suggests that the company’s influence globally could be even worse. Most of the money Facebook spends on content moderation is devoted to English-language content in countries like the U.S.

The memo, circulated to other employees, did not answer that question. But it did expose how the platform’s own algorithms and default settings played a part in spurring such malcontent. The employee noted that there were clear “blind spots,” particularly in “local language content.”

Even though the research was conducted during three weeks that weren’t an average representation, the researcher acknowledged that it did show how such “unmoderated” and problematic content “could totally take over” during “a major crisis event.”

In January 2019, a month before the test user experiment, another assessment raised similar alarms about misleading content.

In a presentation circulated to employees, the findings concluded that Facebook’s misinformation tags weren’t clear enough for users, underscoring that it needed to do more to stem hate speech and fake news. Users told researchers that “clearly labeling information would make their lives easier.”

Again, it was noted that the platform didn’t have enough local language fact-checkers, which meant a lot of content went unverified.

Alongside misinformation, the leaked documents reveal another problem plaguing Facebook in India: anti-Muslim propaganda, especially by Hindu-hardline groups.

India is Facebook’s largest market with over 340 million users — nearly 400 million Indians also use the company’s messaging service WhatsApp. But both have been accused of being vehicles to spread hate speech and fake news against minorities.

In February 2020, these tensions came to life on Facebook when a politician from Modi’s party uploaded a video on the platform in which he called on his supporters to remove mostly Muslim protesters from a road in New Delhi if the police didn’t. Violent riots erupted within hours, killing 53 people. Most of them were Muslims. Only after thousands of views and shares did Facebook remove the video.

In April, misinformation targeting Muslims again went viral on its platform as the hashtag “Coronajihad” flooded news feeds, blaming the community for a surge in COVID-19 cases. The hashtag was popular on Facebook for days but was later removed by the company.

The documents reveal that company leadership dithered on the decision, prompting concerns from some employees; one wrote that Facebook was only designating non-Hindu extremist organizations as “dangerous.”

However, the company has also repeatedly tangled with the Indian government over its practices there. New regulations require that Facebook and other social media companies identify individuals responsible for their online content — making them accountable to the government.

Facebook and Twitter Inc. have fought back against the rules. On Facebook’s WhatsApp platform, viral fake messages circulated about child kidnapping gangs, leading to dozens of lynchings across the country beginning in the summer of 2017, further enraging users, the courts and the government.

The Facebook report ends by acknowledging its own recommendations led the test user account to become “filled with polarizing and graphic content, hate speech and misinformation.”

It sounded a hopeful note that the experience “can serve as a starting point for conversations around understanding and mitigating integrity harms” from its recommendations in markets beyond the U.S.

“Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the tester asked.