Facebook is battling its gravest crisis since the Cambridge Analytica scandal after a whistleblower who accused the company of placing “profit over safety” shed light on its inner workings through thousands of pages of leaked memos.
The documents were disclosed to US regulators and provided to Congress in redacted form by Frances Haugen’s legal counsel. A consortium of news organisations, including the Financial Times, has obtained the redacted versions received by Congress.
Earlier this month, Haugen testified before Congress that the social media company does not do enough to ensure the safety of its 2.9bn users, plays down the harm it can cause to society and has repeatedly misled investors and the public. The Wall Street Journal also ran a series of articles based on the documents called the Facebook Files.
Here are four surprising revelations the documents contain:
Facebook has a huge language problem
Facebook is often accused of failing to moderate hate speech on its English-language sites, but the problem is far worse in countries that speak other languages, even though the company promised to invest more after being blamed for facilitating genocide in Myanmar in 2017.
One 2021 document warned of the company’s very low number of content moderators for Arabic dialects spoken in Saudi Arabia, Yemen and Libya. Another study of Afghanistan, where Facebook has 5m users, found that even the pages explaining how to report hate speech were incorrectly translated.
The failings occurred even though Facebook’s own research marked some of the countries as “high risk” because of their fragile political landscape and frequency of hate speech.
According to one document, the company allocated 87 per cent of its budget for developing its misinformation detection algorithms to the US in 2020, versus 13 per cent to the rest of the world.
Haugen said Facebook should be transparent on the resources it has by country and language.
Facebook often does not understand how its algorithms work
Several documents show Facebook stumped by its own algorithms.
One September 2019 memo found that men were being served up 64 per cent more political posts than women in “nearly every country”, with the disparity particularly pronounced in African and Asian countries.
While men were more likely to follow accounts producing political content, the memo said Facebook’s feed ranking algorithms had also played a significant role.
A memo from June 2020 found it was “virtually guaranteed” that Facebook’s “major systems do show systemic biases based on the race of the affected user”.
The author suggested that the news feed ranking may be more influenced by people who share frequently than by those who share and engage less often, and that this behaviour may correlate with race. The result is that content from certain races is prioritised over others.
As its AI failed, Facebook made it harder to report hate speech
Facebook has long said its artificial intelligence programs can spot and take down hate speech and abuse, but the files show the limits of those systems.
According to a March 2021 note by a group of researchers, the company takes action on as little as 3 to 5 per cent of hate speech and 0.6 per cent of violent content. Another memo suggests it may never manage to get beyond 10 to 20 per cent, because it is “extraordinarily challenging” for AI to understand the context in which language is used.
Nevertheless, in 2019 Facebook had already decided to rely more heavily on AI and to cut the money it was spending on human moderation of hate speech. In particular, the company made it harder for users to report hate speech and to appeal against moderation decisions.
Facebook said “when combating hate speech on Facebook, our goal is to reduce its prevalence, which is the amount of it that people actually see”. It added that hate speech accounts for only 0.05 per cent of what users view, a figure it has reduced by 50 per cent over the past three quarters.
Facebook fiddled while the Capitol burned
The documents reveal Facebook’s struggle to contain the explosion of hate speech and misinformation on its platform around the January 6 riot in Washington, which prompted internal turmoil.
Memos show that the company switched off certain emergency safeguards in the wake of the November 2020 election, only to scramble to turn some back on again as the violence flared. One internal assessment found that the swift implementation of measures was hindered by waiting for sign-off from the policy team.
Even proactive actions failed to have the desired effect. In October 2020, Facebook publicly announced it would stop recommending “civic groups”, which discuss social and political issues. However, because of technical difficulties in implementing the change, 3m US users were still recommended at least one of the 700,000 identified civic groups on a daily basis between mid-October 2020 and mid-January 2021, according to one research note.
Facebook declined to comment on some of the specifics of the allegations, instead saying that it did not put profit ahead of people’s safety or wellbeing and that “the truth is we’ve invested $13bn and have over 40,000 people to do one job: keep people safe on Facebook”.