Avaaz: Facebook’s algorithm drove 3.8 billion views of health misinformation

Despite concerted efforts by Facebook to combat fake news and propaganda, the social network has enabled 3.8 billion views of health misinformation. According to a new report from Avaaz, a global citizens’ movement that monitors election freedom and disinformation, websites containing such misinformation received almost four times as many views as the sites of certified health organizations such as the WHO and CDC.

Earlier this year, Facebook announced partnerships with such organizations that would include efforts to drive users toward reliable information while weeding out disinformation related to COVID-19. While these efforts may have had some impact, Avaaz found that overall misinformation related to health issues received 460 million views in April 2020, and 3.8 billion over the past year.

The data in the new study is the latest evidence that Facebook officials are failing to control rampant disinformation and propaganda on the platform that has 2.7 billion users. Even as Facebook CEO Mark Zuckerberg has repeatedly vowed to crack down, most recently by removing accounts related to the QAnon conspiracy theory, the vast reach of the platform and its algorithm designed to fuel engagement with emotional content continue to leave it open to widespread exploitation.

“This is a kind of a pattern in Facebook,” said Avaaz researcher Luca Nicotra. “Kind of going in the right direction, but kind of falling short. I think what is interesting about this latest report is that we’re looking at what’s basically left on the platform after everything they’ve done.”

Fighting Facebook Disinformation

In recent years, Avaaz has been campaigning to convince Facebook to take aggressive steps to fight disinformation. In reports on election disinformation, France’s Yellow Vest protests, and the Spanish elections, Avaaz has uncovered abuses on the Facebook platform that the company subsequently addressed by removing accounts or changing policies.

Earlier this year, Avaaz revealed gaps in Facebook’s efforts to fight COVID-19 disinformation, and Facebook then announced it would retroactively send alerts to any users who had interacted with content subsequently labeled misleading. Nicotra said the correction effort has been promising but remains too small and sporadic to be effective.

For instance, sometimes users are notified that they may have interacted with misinformation, but they are not told specifically what the content was or what the correct information is.

In addition to expanding this correction effort, Avaaz called on Facebook to “detox” its algorithm by downgrading posts from misinformation actors so that their reach falls by 80%. Nicotra said he’s less concerned with Facebook removing misinformation or the accounts that generate it, because removal allows other actors to claim censorship and weaponize the company’s actions to gain further attention.

Far more critical is the need to “detox” the algorithm to reduce the ability for such content to spread, he said.

“Stop giving these pages free promotion,” Nicotra said. “You know that your algorithm loves divisive content and misinformation is in that category. Zuckerberg himself has said that we know the algorithm, if left without constraint, will push this content over and over and this is what we’re seeing. There is an issue at the DNA of the platform and they need to have the courage to tackle it.”

The new report drew on data compiled by NewsGuard, a news-rating company that identifies websites and publishers that create misleading content. Avaaz focused on five countries: the United States, the United Kingdom, France, Germany, and Italy. Because the analysis was limited to these countries, the numbers cited represent only a fraction of the impact such posts had worldwide.

Falling Short

In drilling down into health misinformation, Avaaz found that Facebook’s efforts were having minimal impact. For instance, only 16% of health misinformation identified by Facebook had received a warning label. The other 84% had no labels and were still circulating widely.

“This investigation is one of the first to measure the extent to which Facebook’s efforts to combat vaccine and health misinformation on its platform have been successful, both before and during its biggest test yet: the coronavirus pandemic,” the report says. “It finds that even the most ambitious among Facebook’s strategies are falling short of what is needed to effectively protect society.”

The health misinformation still found on the platform included articles such as:

  • 8.4 million views for a story claiming that a Bill Gates-backed polio vaccination program has left half a million children in India paralyzed.
  • 4.5 million views for stories about phony cures.
  • 2.4 million views for a story making false claims about the effectiveness of quarantines.
  • 13.4 million views for a post linking 5G networks to health problems.

These stories are being driven by sites such as RealFarmacy.com, which has more than 1.1 million followers on its Facebook page and shares stories about quarantine protests and discredited coronavirus cures. Another account, GreenMedInfo, has 540,000 followers and has recently said it is under threat of being deleted.

Such pages are key sources for spreading misinformation, according to Avaaz, accounting for 43% of estimated views. Avaaz identified 42 Facebook accounts that spread health misinformation and together have 28 million followers.

In addition, these sites often interact to amplify their messages and make it harder for Facebook to track the spread of misinformation.

The fact that so many of these misinformation sites have been active for several years and operate out in the open as public Pages, rather than in closed Groups, makes Facebook’s lack of action all the more disturbing, Avaaz said in its report.

“The findings of this report indicate Facebook is still failing at preventing the amplification of misinformation and the actors spreading it,” the report says. “Specifically, the findings in this section of the report strongly suggest that Facebook’s current algorithmic ranking process is either potentially being weaponized by health misinformation actors coordinating at scale to reach millions of users, and/or that the algorithm remains biased towards the amplification of misinformation, as it was in 2018. The findings also suggest that Facebook’s moderation policies to counter this problem are still not being applied effectively enough.”
