
Is Twitter Withholding a ‘Nazi-Hunting’ Algorithm for Fear of Inadvertently Banning Republican Lawmakers?

On April 25, 2019, a Twitter user posted a claim that the social media platform possessed a “Nazi-hunting” algorithm but opted not to employ it, specifically because it would also filter the tweets or accounts of some Republican lawmakers:

In a subsequent tweet, the user (Allen) cited the Jewish Daily Forward as a source for the information, but the linked piece in turn sourced its claims from a Vice/Motherboard item published the same day as his popular tweet. Vice’s piece was titled “Why Won’t Twitter Treat White Supremacy Like ISIS? Because It Would Mean Banning Some Republican Politicians Too,” and its subheading immediately cast some doubt on the claim:

A Twitter employee who works on machine learning believes that a proactive, algorithmic solution to white supremacy would also catch Republican politicians.

Allen said that Twitter “won’t be employing their Nazi-hunting algorithm,” implying such a thing already existed. But the actual content of the Vice article concerned speculative, third-hand conversations as well as inferences based on other content-filtering technologies used by Twitter.

According to the first portion of Vice’s coverage (from which the tweet’s claims were originally derived), the assumption that Republicans would be filtered out by such an algorithm arose from a broken-up, two-part discussion. Although Vice stood by that interpretation, it did not report that the technology already existed:

At a Twitter all-hands meeting on March 22 [2019], an employee asked a blunt question: Twitter has largely eradicated Islamic State propaganda off its platform. Why can’t it do the same for white supremacist content?

An executive responded by explaining that Twitter follows the law, and a technical employee who works on machine learning and artificial intelligence issues went up to the mic to add some context. (As Motherboard has previously reported, algorithms are the next great hope for platforms trying to moderate the posts of their hundreds of millions, or billions, of users.)

With every sort of content filter, there is a tradeoff, he explained. When a platform aggressively enforces against ISIS content, for instance, it can also flag innocent accounts as well, such as Arabic language broadcasters. Society, in general, accepts the benefit of banning ISIS for inconveniencing some others, he said.

In separate discussions verified by Motherboard, that employee said Twitter hasn’t taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians.

The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material. Banning politicians wouldn’t be accepted by society as a trade-off for flagging all of the white supremacist propaganda, he argued.

There is no indication that this position is an official policy of Twitter, and the company told Motherboard that this “is not [an] accurate characterization of our policies or enforcement—on any level.” But the Twitter employee’s comments highlight the sometimes overlooked debate within the moderation of tech platforms: are moderation issues purely technical and algorithmic, or do societal norms play a greater role than some may acknowledge?
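The tradeoff the employee described is, in machine learning terms, the familiar balance between false positives and false negatives in automated content classification. The following is a minimal, purely hypothetical sketch in Python (it does not describe any actual Twitter system; the keyword weights and example posts are invented for illustration): an aggressive threshold catches more of the targeted content but also flags a benign post that happens to share vocabulary, while a stricter threshold avoids that collateral flag but misses some targeted content.

# Hypothetical keyword-weight "classifier" -- real systems use trained models,
# not keyword lists; this only illustrates the threshold tradeoff.
FLAGGED_TERMS = {"propaganda": 2, "recruit": 2, "attack": 2}

def score(text):
    """Sum the weights of flagged terms that appear in the text."""
    words = set(text.lower().split())
    return sum(weight for term, weight in FLAGGED_TERMS.items() if term in words)

# Invented example posts: two "targeted" and one benign news post.
posts = [
    ("join us and recruit for the attack", "targeted"),
    ("spread the propaganda far and wide", "targeted"),
    ("news report: officials condemn propaganda", "benign"),
]

for threshold in (2, 4):
    flagged = [label for text, label in posts if score(text) >= threshold]
    print(f"threshold={threshold}: flagged {flagged}")

# threshold=2 flags all three posts, including the benign news post;
# threshold=4 flags only the first targeted post and misses the second.

None of this implies that such a tool exists at Twitter; it only illustrates why an engineer might argue that aggressive automated filtering tends to produce collateral flags.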

The article then segued into an editorial examination of norms and hateful content, but it did not address the fact that the core portion about algorithms appeared to consist of cobbled-together inferences.

To recap: on the day Vice’s item appeared, Allen summarized it by claiming that Twitter refused to deploy a “Nazi-hunting” algorithm because such a tool would be unable to distinguish between white supremacists and Republicans. The body of the source article, however, reported speculative discussions about whether algorithms would lump the two together, with no evidence suggesting that was definitively the case.

A Twitter employee purportedly argued that such a tool could mistake Republican content for white supremacist content, as part of a larger point that the banning of politicians would not be accepted as a potential tradeoff. But no part of the unnamed person’s argument indicated that such a tool existed, only that it could. Nor was any argument presented that, if such a tool were created and politicians were somehow “banned” by it, those missteps could not be manually overridden.

It is possible that Twitter internally decided a tool to filter Nazi content might ensnare some Republican politicians, but nothing in the source material indicated that the concept was anything but speculative.