In June 2021, an indifferently punctuated January 2020 Facebook “golliwog” post was widely shared — along with a demonstrably false claim that the offensive content was at risk of being censored:
The engagement-baiting text, which was part of an apparent screenshot of yet another Facebook post, read:
How far can I travel around Facebook?? Before it’s gets me banned come on help me travel!!!!
A comment visible at the top, added on June 24 2021, was also clearly racist:
Is that the new royal baby.?
A December 2015 Reddit thread documented the longevity and efficacy of the golliwog gambit:
In fact, Facebook posts using a golliwog figure to generate engagement were so common and broadly circulated that the wording of the post above appeared in a 2019 employment tribunal case stemming from one user’s decision to share the post on Facebook:
[Individual A] and [Individual B] were security officers at [workplace]. [Individual B] had a Facebook account. She shared an image on her Facebook page of a golliwog accompanied by the message ‘Let’s see how far he can travel before Facebook takes him off’. The image was shared with people who were on her list of Facebook friends … [Individual A] complained to his line manager that racist images were being circulated in the workplace.
Moreover, the post very clearly fell into “engagement bait” territory. In December 2017, Facebook claimed it was targeting and sanctioning engagement bait:
People have told us that they dislike spammy posts on Facebook that goad them into interacting with likes, shares, comments, and other actions. For example, “LIKE this if you’re an Aries!” This tactic, known as “engagement bait,” seeks to take advantage of our News Feed algorithm by boosting engagement in order to get greater reach. So, starting this week [in December 2017], we will begin demoting individual posts from people and Pages that use engagement bait.
As of June 30 2021, the post had been shared more than 545,000 times. It “traveled” on the platform for 525 days, or roughly one year and five months, in which time Facebook elected not to ban the image.
What Are Golliwogs and Why Do They Exist?
While there are slight differences in definitions, it is clear golliwogs were intended as caricature. Wikipedia filed its “Golliwog” page under categories including “Anti-African and anti-black slurs” and “Stereotypes of African Americans,” and described the object in an introductory paragraph:
The golliwog, golliwogg or golly is a doll-like character – created by cartoonist and author Florence Kate Upton – that appeared in children’s books in the late 19th century, usually depicted as a type of rag doll. It was reproduced, both by commercial and hobby toy-makers, as a children’s toy called the “golliwog”, a portmanteau of golly and polliwog, and had great popularity in the UK and Australia into the 1970s. The doll is characterised by jet black skin, eyes rimmed in white, exaggerated red lips and frizzy hair, a blackface minstrel tradition.
Merriam-Webster defined “golliwog” as a “type of black rag doll with exaggerated features and colorful clothing that was formerly popular as a children’s toy in Britain and Australia.” A search on Britannica.com did not return an entry for golliwog dolls, but the top result was its page for “Blackface minstrelsy.”
A November 2000 paper by Dr. David Pilgrim, featured by the Jim Crow Museum of Racist Memorabilia, provided an extensive history of golliwog figures, their role in anti-Black racism, and their eventual decline in popularity:
The Golliwog (originally spelled Golliwogg) is the least known of the major anti-black caricatures in the United States. Golliwogs are grotesque creatures, with very dark, often jet black skin, large white-rimmed eyes, red or white clown lips, and wild, frizzy hair. Typically, it’s a male dressed in a jacket, trousers, bow tie, and stand-up collar in a combination of red, white, blue, and occasionally yellow colors. The golliwog image, popular in England and other European countries, is found on a variety of items, including postcards, jam jars, paperweights, brooches, wallets, perfume bottles, wooden puzzles, sheet music, wall paper, pottery, jewelry, greeting cards, clocks, and dolls. For the past four decades Europeans have debated whether the Golliwog is a lovable icon or a racist symbol.
[…]
In the 1960s relations between blacks and whites in England were often characterized by conflict. This racial antagonism resulted from many factors, including: the arrival of increasing numbers of colored immigrants; minorities’ unwillingness to accommodate themselves to old patterns of racial and ethnic subordination; and, the fear among many whites that England was losing its national character. British culture was also influenced by images — often brutal — of racial conflict occurring in the United States.
In this climate the Golliwog doll and other Golliwog emblems were seen as symbols of racial insensitivity. Many books containing Golliwogs were withdrawn from public libraries, and the manufacturing of Golliwog dolls dwindled as the demand for Golliwogs decreased. Many items with Golliwog images were destroyed. Despite much criticism, James Robertson & Sons did not discontinue its use of the Golliwog as a mascot. The Camden Committee for Community Relations led a petition drive for signatures to send to the Robertson Company. The National Committee on Racism in Children’s Books also publicly criticized Robertson’s use of the Golly in its advertising. Other organizations called for a boycott of Robertson’s products; nevertheless, the company has continued to use the Golliwog as its trademark in many countries, including the United Kingdom, although it was removed from Robertson’s packaging in the United States, Canada, and Hong Kong.
In many ways the campaign to ban Golliwogs was similar to the American campaign against Little Black Sambo …
Pilgrim added context about the perception of golliwog figures in the UK and Australia:
The claim that Golliwogs are racist is supported by literary depictions by writers such as Enid Blyton. Unlike Florence Upton’s, Blyton’s Golliwogs were often rude, mischievous, elfin villains. In Blyton’s book, Here Comes Noddy Again (1951), a Golliwog asks the hero for help, then steals his car. Blyton, one of the most prolific European writers, included the Golliwogs in many stories, but she only wrote three books primarily about Golliwogs: The Three Golliwogs (1944), The Proud Golliwog (1951), and The Golliwog Grumbled (1955). Her depictions of Golliwogs are, by contemporary standards, racially insensitive. An excerpt from The Three Golliwogs is illustrative:
Once the three bold golliwogs, Golly, Woggie, and N*****, decided to go for a walk to Bumble-Bee Common. Golly wasn’t quite ready so Woggie and N***** said they would start off without him, and Golly would catch them up as soon as he could. So off went Woggie and N*****, arm-in-arm, singing merrily their favourite song — which, as you may guess, was Ten Little N***** Boys. (p. 51)
Again, the Facebook post spread freely for more than 17 months and over half a million shares, untouched by the censorship the original poster claimed would occur.
Facebook Community Standards and What Is Really Getting ‘Banned’
As indicated, the post in question was engagement bait, intended at the very least to drive up share counts under the idle threat of Facebook censorship; most people sharing it were likely aware that the proposed “ban” would be due to the racist nature of the once-popular golliwog figure.
By contrast, Facebook’s position on racist content and race-related discussions prompted debate and news reporting for several years before the post’s appearance in January 2020 and its recirculation in June 2021.
On June 28 2017, ProPublica covered how Facebook’s censorship affected conversations about racism at all levels:
In the wake of a terrorist attack in London earlier this month [in June 2017], a U.S. congressman wrote a Facebook post in which he called for the slaughter of “radicalized” Muslims. “Hunt them, identify them, and kill them,” declared U.S. Rep. Clay Higgins, a Louisiana Republican. “Kill them all. For the sake of all that is good and righteous. Kill them all.”
Higgins’ plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.
But a May [2017] posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response.
“All white people are racist. Start from this reference point, or you’ve already failed,” Delgado wrote. The post was removed and her Facebook account was disabled for seven days.
The ProPublica report added:
One Facebook rule, which is cited in the documents but that the company said is no longer in effect, banned posts that praise the use of “violence to resist occupation of an internationally recognized state.” The company’s workforce of human censors, known as content reviewers, has deleted posts by activists and journalists in disputed territories such as Palestine, Kashmir, Crimea and Western Sahara.
One document trains content reviewers on how to apply the company’s global hate speech algorithm. The slide identifies three groups: female drivers, black children and white men. It asks: Which group is protected from hate speech? The correct answer: white men.
[…]
Didi Delgado, whose post stating that “white people are racist” was deleted, has been banned from Facebook so often that she has set up an account on another service called Patreon, where she posts the content that Facebook suppressed. In May [2017], she deplored the increasingly common Facebook censorship of black activists in an article for Medium titled “Mark Zuckerberg Hates Black People.”
Facebook also locked out Leslie Mac, a Michigan resident who runs a service called SafetyPinBox where subscribers contribute financially to “the fight for black liberation,” according to her site. Her offense was writing a post stating “White folks. When racism happens in public — YOUR SILENCE IS VIOLENCE.”
In April 2019 (nearly two years later), USA Today published “Facebook while black: Users call it getting ‘Zucked,’ say talking about racism is censored as hate speech.” That article began with a similar story of posts objecting to racism landing users in “Facebook jail”:
It was spirit week, and Carolyn Wysinger, a high school teacher in Richmond, California, was cheerfully scrolling through Facebook on a break between classes. Her classroom, with its black-and-white images of Martin Luther King Jr. and Che Guevara and a “Resist Patriarchy” sign, was piled high with colorful rolls of poster paper, the whiteboard covered with plans for pep rallies.
A post from poet Shawn William caught her eye. “On the day that Trayvon would’ve turned 24, Liam Neeson is going on national talk shows trying to convince the world that he is not a racist.” While promoting a revenge movie, the Hollywood actor confessed that decades earlier, after a female friend told him she’d been raped by a black man she could not identify, he’d roamed the streets hunting for black men to harm.
For Wysinger, an activist whose podcast The C-Dubb Show frequently explores anti-black racism, the troubling episode recalled the nation’s dark history of lynching, when charges of sexual violence against a white woman were used to justify mob murders of black men.
“White men are so fragile,” she fired off, sharing William’s post with her friends, “and the mere presence of a black person challenges every single thing in them.”
It took just 15 minutes for Facebook to delete her post for violating its community standards for hate speech. And she was warned if she posted it again, she’d be banned for 72 hours.
The piece explained that Facebook’s uneven application of its “Community Standards” had severe consequences for nonprofit agencies and small businesses, as well as for individuals. The article included a detail about Facebook reversing one such ban when contacted by USA Today:
[Users] call [bans and lockouts] getting “Zucked” and black activists say these bans have serious repercussions, not just cutting people off from their friends and family for hours, days or weeks at a time, but often from the Facebook pages they operate for their small businesses and nonprofits.
A couple of weeks ago [in April 2019], Black Lives Matter organizer Tanya Faison had one of her posts removed as hate speech. “Dear white people,” she wrote in the post, “it is not my job to educate you or to donate my emotional labor to make sure you are informed. If you take advantage of that time and labor, you will definitely get the elbow when I see you.” After being alerted by USA TODAY, Facebook apologized to Faison and reversed its decision.
Additionally, the story indicated that Facebook was fully aware that its policies disproportionately affected people of color, but elected not to amend those policies to allow anti-racist content:
In late 2017 and early 2018, Facebook explored whether certain groups should be afforded more protection than others. For now, the company has decided to maintain its policy of protecting all racial and ethnic groups equally, even if they do not face oppression or marginalization, says Neil Potts, public policy director at Facebook. Applying more “nuanced” rules to the daily tidal wave of content rushing through Facebook and its other apps would be very challenging, he says.
News articles spanning several years pointed to a system of censorship that appeared to be stacked, intentionally or unintentionally, against people of color speaking out against racism. In July 2020, NBC News reported that the problem was distressingly and quantifiably unfair to Facebook users from marginalized groups:
In mid-2019, researchers at Facebook began studying a new set of rules proposed for the automated system that Instagram uses to remove accounts for bullying and other infractions.
What they found was alarming. Users on the Facebook-owned Instagram in the United States whose activity on the app suggested they were Black were about 50 percent more likely under the new rules to have their accounts automatically disabled by the moderation system than those whose activity indicated they were white, according to two current employees and one former employee, who all spoke on the condition of anonymity because they weren’t authorized to talk to the media.
The findings were echoed by interviews with Facebook and Instagram users who said they felt that the platforms’ moderation practices were discriminatory, the employees said.
The researchers took their findings to their superiors, expecting that it would prompt managers to quash the changes. Instead, they were told not to share their findings with co-workers or conduct any further research into racial bias in Instagram’s automated account removal system. Instagram ended up implementing a slightly different version of the new rules but declined to let the researchers test the new version.
It was an episode that frustrated employees who wanted to reduce racial bias on the platform but one that they said did not surprise them. Facebook management has repeatedly ignored and suppressed internal research showing racial bias in the way that the platform removes content, according to eight current and former employees, all of whom requested anonymity to discuss internal Facebook business.
The lack of action on this issue from the management has contributed to a growing sense among some Facebook employees that a small inner circle of senior executives — including Chief Executive Mark Zuckerberg, Chief Operating Officer Sheryl Sandberg, Nick Clegg, vice president of global affairs and communications, and Joel Kaplan, vice president of global public policy — are making decisions that run counter to the recommendations of subject matter experts and researchers below them, particularly around hate speech, violence and racial bias, the employees said.
In a July 2020 press release, Facebook claimed it detected and removed “three million pieces of hate speech each month, or more than 4,000 per hour.” Presumably, that count included content like the examples excerpted above.
Summary
An enormously viral Facebook post featuring a racist “golliwog doll,” shared in January 2020, implied to its readers that Facebook’s overzealous censors would target and remove the image, encouraging users to share the post in order to “protest” oversensitivity and an overbearing climate of political correctness. In reality, Facebook’s application of censorship measurably affected anti-racist content; posts stating that “white men are so fragile” led to people being locked out of personal and professional accounts, while the golliwog figure continued to spread freely through the usual channels. Although the golliwog figure was unlikely to be subjected to Facebook censorship, a pattern of sanctioning users of color suggested that anyone who shared it with commentary about white fragility would likely find themselves “Zucked.”
- Golliwog Facebook post
- I've just seen a post on Facebook asking for a million likes to prove that Golliwogs aren't racist | Reddit
- Fighting Engagement Bait on Facebook | Facebook Newsroom
- Golliwog Facebook post | Legal
- Golliwog | Wikipedia
- Golliwog | Merriam-Webster
- Blackface minstrelsy | Britannica
- The Golliwog Caricature | Jim Crow Museum of Racist Memorabilia
- Facebook’s Secret Censorship Rules Protect White Men From Hate Speech But Not Black Children | ProPublica
- Facebook while black: Users call it getting 'Zucked,' say talking about racism is censored as hate speech | USA Today
- Facebook ignored racial bias research, employees say | NBC News
- Sharing Our Actions on Stopping Hate | Facebook Newsroom