The tech company Meta has quietly laid off workers specializing in identifying disinformation on its flagship platform Facebook, CNN reported on July 11, 2023.
According to the news network:
Several members of the team that countered mis- and disinformation in the 2022 US midterms were laid off last fall and this spring, a person familiar with the matter said. The staffers are part of a global team that works on Meta’s efforts to counter disinformation campaigns seeking to undermine confidence in or sow confusion around elections.
CNN reported that most of those cut worked as “content review” specialists focusing on election-related posts.
The layoffs came to light shortly after Agence France-Presse reported that in May 2023, Meta made it easier for Facebook users to bypass fact-checking notices from news organizations:
Under a new “content reduced by fact-checking” option that now appears in Facebook’s settings, users have flexibility to make debunked posts appear higher or lower in the feed or maintain the status quo.
Fact-checked posts can be made less visible with an option called “reduce more.” That, according to the platform’s settings, means the posts “may be moved even lower in feed so you may not see them at all.”
Another option labeled "don't reduce" has the opposite effect, moving such content higher in a user's feed and making it more likely to be seen.
Previously, being debunked by one of Meta’s fact-checking “partners” automatically caused content to be downranked in Facebook’s algorithm, making it less likely to appear.
"We're giving people on Facebook even more power to control the algorithm that ranks posts in their feed," a company spokesperson told the French outlet. "We're doing this in response to users telling us that they want a greater ability to decide what they see on our apps."
In June 2023 Meta chief executive officer Mark Zuckerberg blamed a vague "establishment" for what he claimed was a push against COVID-19-related information that "ended up being more debatable or true." The company never responded to our request for more specific information on what he meant by that claim.
This claim also marked a further reversal from the company's previous boasts about Facebook's efforts to fight disinformation related to the pandemic. As Ars Technica reported in May 2020:
In a call, he told media that, in the month of April alone, Facebook’s fact-checkers put 50 million warning labels on COVID-19 content shared to the platform. Those labels were super effective, he crowed: 95 percent of the time, viewers didn’t click through to content that had been warned to be false.
He backtracked from that language just two weeks later, telling the right-wing Fox network that he believed that “Facebook shouldn’t be the arbiter of truth of everything that people say online.”
News of the layoffs followed Meta's retaliation against legislation designed to make it pay for news-related content it scrapes from publishers; for example, it has restricted Canadian users from accessing news through its platforms, and it has complained, without offering proof, that a similar new California law would amount to "paying into a slush fund."
Similarly, one of the company's executives, Adam Mosseri, said that Meta would not "court" news-related content on its new platform, Threads, which has been promoted as a rival to Twitter.
“We won’t discourage or down-rank news or politics, we just won’t court them the way we have in the past,” Mosseri said on the platform, according to Business Insider. “If we are honest, we were too quick to promise too much to the industry on Facebook in the early 2010s, and it would be a mistake to repeat that.”
Meta has claimed that Threads drew 100 million users within a week of its July 2023 launch. But as The Guardian reported, the app is not available in the European Union because of questions over whether it could violate that group of nations' Digital Markets Act, and experts are already concerned about user privacy. According to the Guardian:
Carissa Véliz, an associate professor at the University of Oxford, referred to Meta’s use of ads targeting users based on specific information as “surveillance advertising.”
“The company is trying to collect as much data as possible and trying to continue in the same direction as it has from the very start despite all the scandals, despite the public backlash, despite warnings from regulators, despite fines,” said Véliz, who is part of the university’s Institute for Ethics in AI. “It’s not reimagining its business model to make it a more respectful business model towards users.”
Meta did not respond to our request for more information.