Mass Shooter Media Myth

In February 2019, television personality John Stossel released a video to Facebook Watch entitled “Media Hype Questionable Gun Control Study/MASS SHOOTER MEDIA MYTH.”

That video was first shared to YouTube on December 18, 2018. The clip runs just under five and a half minutes; in it, Stossel introduces a segment challenging a purported “media myth” involving mass shootings in the United States.

The Content of the Video, “Mass Shooter Media Myth”

What exactly is the referenced “media myth”? It is difficult to articulate based on the content of the video itself. The clip begins with a montage of news personalities and politicians, including former United States President Barack Obama, referencing a widely cited study authored by University of Alabama criminology professor Adam Lankford, “Public Mass Shooters and Firearms: A Cross-National Study of 171 Countries.” Lankford’s study was published in the bimonthly peer-reviewed criminology journal Violence and Victims in January 2016.

In the segment, Stossel quickly cedes his platform to guest John Lott, a prominent advocate for gun rights and author of the books More Guns, Less Crime and The Bias Against Guns. Much of the video involves Lott posing questions about Lankford’s peer-reviewed study, but not answering those questions.

Stossel provides a summary for the clip in an editorial about the segment, where Lott’s statements are characterized as disputing the “myth” that the United States has the most mass shootings of any country in the world:

A few years ago, much of the media claimed that the U.S. has “the most mass shootings of any country in the world.” President Barack Obama added it’s “a pattern now that has no parallel anywhere else.”

CNN and the L.A. Times wrote about “Why the U.S. Has the Most Mass Shootings.” (“The United States has more guns.”)

But the United States doesn’t have the most mass shootings, says Lott. It’s a myth created by University of Alabama associate professor Adam Lankford, a myth repeated by anti-gun media in hundreds of news stories.

“Lankford claimed that since 1966 there were 90 mass public shooters in the United States, more than any other country,” says Lott. “Lankford claimed ‘complete data’ were available from 171 countries.”

But how could that be? Many governments don’t collect such data and even fewer have information from before the days of the internet.

A shooting in say, India, would likely be reported only in local newspapers, in a local dialect. How would Lankford ever find out about it? How did he collect his information? What languages did he search in?

He won’t say.

“That’s academic malpractice,” says Lott.

The roughly five-and-a-half-minute segment disputing a peer-reviewed study arguably constituted a “Gish Gallop,” a rhetorical strategy that raises numerous debate points without resolving any of them, both to confuse viewers and to ensnare an opponent in an insurmountable number of questions. RationalWiki describes the tactic as follows:

The Gish Gallop (also known as proof by verbosity) is the fallacious debate tactic of drowning your opponent in a flood of individually-weak arguments in order to prevent rebuttal of the whole argument collection without great effort. The Gish Gallop is a belt-fed version of the on the spot fallacy, as it’s unreasonable for anyone to have a well-composed answer immediately available to every argument present in the Gallop … the Galloper need only win a single one out of all his component arguments in order to be able to cast doubt on the entire refutation attempt. For this reason, the refuter must achieve a 100% success ratio (with all the yawn-inducing elaboration that goes with such precision). Thus, Gish Galloping is frequently employed (with particularly devastating results) in timed debates. The same is true for any time- or character-limited debate medium, including Twitter and newspaper editorials.


Gish Gallops are almost always performed with numerous other logical fallacies baked in. The myriad component arguments constituting the Gallop may typically intersperse a few perfectly uncontroversial claims — the basic validity of which are intended to lend undue credence to the Gallop at large — with a devious hodgepodge of half-truths, outright lies, red herrings and straw men — which, if not rebutted as the fallacies they are, pile up into egregious problems for the refuter.

There may also be escape hatches or “gotcha” arguments present in the Gallop, which are — like the Gish Gallop itself — specifically designed to be brief to pose, yet take a long time to unravel and refute.

However, Gish Gallops aren’t impossible to defeat — just tricky (not to say near-impossible for the unprepared). Upon closer inspection, many of the allegedly stand-alone component arguments may turn out to be nothing but thinly-veiled repetitions or simple rephrasings of the same basic points — which only makes the list taller, not more correct (hence; “proof by verbosity“). This essential flaw in the Gallop means that a skilled rebuttal of one component argument may in fact be a rebuttal to many.

John Lott’s Rebuttal

Lott previously self-published a refutation of the Lankford study [PDF] to the open-access site SSRN (formerly known as the Social Science Research Network) in August/September 2018. SSRN does not offer peer review, and we were unable to locate any subsequent version of that work in any other publication.

In his abstract, Lott writes that Lankford’s disinclination to answer his queries “raises real concerns about Lankford’s motives.” However, Lott is himself an advocate of gun rights, a position that carries motives of its own. Additionally, Lankford’s work passed peer review, suggesting that his methodology was examined and found credible by other researchers. Moreover, the abstract of Lott’s paper points out potential inconsistencies in Lankford’s research, not substantiated failings:

Lankford claims to have “complete” data on such shooters in 171 countries. However, because he has neither identified the cases nor their location nor even a complete description on how he put the cases together, it is impossible to replicate his findings.

It is particularly important that Lankford share his data because of the extreme difficulty in finding mass shooting cases in remote parts of the world going back to 1966. Lack of media coverage could easily lead to under-counting of foreign mass shootings, which would falsely lead to the conclusion that the U.S. has such a large share.

Lott posits that Lankford undercounted mass shootings in countries other than the United States, concluding that such a misstep would “falsely lead” to inaccurate numbers. But he also maintains that Lankford’s lack of response to his queries makes the findings “impossible” to replicate. To be clear, the holes in the research Lott describes certainly would call Lankford’s findings into question, but Lott himself is unable to corroborate his own suspicions. Lott concludes his paper by contradicting himself, claiming that Lankford “refused” to share research that Lott elsewhere indicates Lankford did share with other media outlets. Finally, Lott references his own paper to claim that gun ownership is not correlated with mass shootings, reiterating findings he simultaneously maintains are impossible to verify:

After compiling this data for the 15 years from 1998 through 2012, the last fifteen years studied by Lankford, it is clear that he missed an enormous number of cases. We have found about fifteen times more shooters in 15 years than Lankford claimed to find in 47 years. But however one counts these cases, the United States is well below the average country regarding either the frequency or murder rate from these attacks or their deadliness.

This data not only has implication for how the United States compares to other countries but also to previous claims about what might be responsible for these attacks. For example, Lankford’s claim that higher rates of gun ownership are associated with more mass public shooters completely disappears when this more complete data on mass public shooters is used (Lott, 2018).

The final line of Lott’s paper speaks directly to the conclusion of Lankford’s study, which stated:

The United States and other nations with high firearm ownership rates may be particularly susceptible to future public mass shootings, even if they are relatively peaceful or mentally healthy according to other national indicators.

A larger editorial point made in the video is that a “media myth” has convinced Americans that mass shootings are a bigger problem than they actually are, a claim that hinges in part on calculations in which opinion plays a large role.

Prior Controversies and Debates Involving John Lott

Lott’s circular referencing is not unique to the 2018 paper excerpted here. In 2014, Scientific American published a rebuttal to his critique of a piece it had published about gun violence. In that rebuttal, the author of the original article notes that Lott not only cites his own work, but that the cited material is largely irrelevant to the topic at hand. She additionally maintains that Lott cherry-picks his source material and ignores or elides all information that does not support his predetermined conclusion:

John R. Lott, Jr., is wrong in his claims. He asserts “two thirds of the peer-reviewed, published literature shows that concealed carry laws help reduce crime.” This figure comes from a 2012 paper Lott himself wrote for the Maryland Law Review. In it he asserts that 18 peer-reviewed studies show right-to-carry laws reduce violent crime but only 11 suggest a different result.

But his two-thirds claim is false. Many of these 18 supposed pro-carry studies are off-topic. One is a paper by Lott on gun storage laws that has nothing to do with concealed carry. A second paper investigates how abortion relates to crime, a third concerns laws that prevent minors from owning guns—again, irrelevant to concealed carry. Lott also includes the second edition of his own book as one of these 18 peer-reviewed studies.

In total, one third of his pro–concealed-carry citations refer to his own work. Not only does Lott inflate the number of studies that support his thesis, but he also completely omits many peer-reviewed studies that belong on the other side.

Lott previously came under fire after a blogger uncovered his admitted engagement in astroturfing and sock puppetry relating to his work. A 2003 piece reported that an individual named “Mary Rosh” was in fact a covert identity of Lott’s:

Meanwhile, several of the bloggers who had been writing about the controversy — a group that included me — drew the ire of someone called Mary Rosh. Rosh, who identified herself as a former student of Lott’s who had long admired his fairness and rigor, said that it was irresponsible to post links to the survey debate without calling Lott first. This sounded odd, not only because bloggers very seldom do that kind of background research before posting a link, but because Lott had made precisely the same criticism several times in e-mails to bloggers covering the story.

A Google search revealed that Rosh had for several years been a prolific contributor to Usenet forums, where she regularly and vociferously defended the work of Lott. On a whim, I compared the I.P. address on Rosh’s comment to the one on an e-mail Lott had sent me from his home. They were the same.

I posted all of this, and to his credit Lott confessed. “The MaRyRoSh pen name account,” he explained, “was created years ago for an account for my children, using the first two letters of the names of my four sons.”

The controversy stemmed from a dispute in which critics said that Lott had fabricated the results of a survey. In a parallel to Lott’s accusations against Lankford, critics pressed him to substantiate his methodology on late-1990s and early-2000s research. Lott maintained that a “hard drive crash” had destroyed his records, and that he was therefore unable to substantiate his claims about his purported findings. The dispute led even personalities who espoused the same point of view to call the existence of the survey into question:

Lott cannot provide details of the survey, saying he lost all his data in a hard drive crash. When asked for funding records or other evidence, Lott said he paid for the project himself, used volunteer students, had them make calls from their own phones, and did not discuss the survey design with anyone else. Lott says he can’t remember the names of the volunteers. Meanwhile, nine published surveys showed between 21% and 67% of gun uses required shooting (far more than the 2% Lott suggests).

In fact, the only evidence Lott can provide is that a couple colleagues vaguely remember him mentioning such a survey.

All this has led even extreme conservatives like Michelle Malkin to suggest Lott never conducted the survey. But even if Lott did conduct such a survey, it was unethical of him to cite other sources for the number and continue to publicly cite its results after all of the original evidence had been lost.

Mother Jones has alleged that Lott also publishes questionable immigration-related material, adding that he has “a long history of terrible and misleading research.” In June 2017, Pacific Standard published a piece claiming that Lott’s long-held assertion that high rates of gun ownership lead to lower crime rates had been thoroughly debunked in a National Bureau of Economic Research (NBER) study:

The NBER analyzed 33 states that adopted right-to-carry laws — which allow anyone who meets minimal requirements to obtain concealed carry permits — between 1977 and 2014. They found that, 10 years after the laws were passed, violent crime rates actually increased by up to 15 percent. And while it’s true that, nationally, violent crime rates have fallen as right-to-carry laws have proliferated, those drops were not evenly distributed: The decrease in violent crime in states that didn’t adopt right-to-carry laws was four times greater than in those that did.

Comparing Lankford’s Research with Lott’s

At the time Lott self-published his refutation of the Lankford study, the Washington Post fact-checked his claims in a piece titled “Does the U.S. lead the world in mass shootings?”

Lott has a controversial reputation. Jaclyn Schildkraut, associate professor at the State University of New York at Oswego and co-author of “Mass Shootings: Media, Myths, and Realities,” said her review of Lott’s report suggests he “drastically inflated the number of international shootings and minimized the context of the U.S. mass shootings picture.” She said his count of 43 incidents in the United States was too low; her count for the same period, with a different definition for mass shooting, is 188. Lott said he used the same definition as indicated by Lankford in his report.

In assessing the data, the article provided extensive detail from its exchanges with Lott. Many of the conversations involved the inclusion of acts of terrorism, such as a 2008 attack in Mumbai that resulted in 168 deaths. Lankford did not include the Mumbai incident in his totals, and Lott did. Consequently, the paper requested that Lott adjust his tallies to exclude the Mumbai attack and other instances of terrorism:

When The Fact Checker spotted the Mumbai attack on Lott’s list, we asked Lott to remove terrorism cases from the totals for the four countries listed by Lankford — the Philippines, Russia, Yemen and France. Our hope was to provide as much an apples-to-apples comparison as possible.

Without terrorism cases, Lott’s count of shooters fell dramatically. In the Philippines, the number of shooters fell from 120 to 11, in Russia from 65 to 21 and in Yemen from 65 to 3. Only France did not have a significant decline, going from 5 to 4. This is for the 1998-2012 period, and with the exception of Russia, the number of shooters is lower than Lankford’s calculations for 1966-2012.

Lankford found 54 mass shootings outside the United States between 1966 and 2012, a yearly rate of 1.149. In contrast, Lott located 21 mass shooters outside the United States between 1998 and 2012, a rate of 1.4 annually. But part of the difficulty in formulating a firm comparison stemmed from the differing time periods (1998-2012 versus 1966-2012).
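The per-year rates quoted above follow directly from the counts and time spans reported in the Washington Post fact check; a quick calculation illustrates how the differing periods complicate any head-to-head comparison:

```python
# Annual rates implied by the two data sets, as reported in the
# Washington Post fact check. The counts and time spans come from the
# article; the division is the only step added here.
lankford_count, lankford_years = 54, 47  # 1966-2012, outside the U.S.
lott_count, lott_years = 21, 15          # 1998-2012, outside the U.S.

lankford_rate = lankford_count / lankford_years
lott_rate = lott_count / lott_years

print(f"Lankford: {lankford_rate:.3f} shootings per year")  # 1.149
print(f"Lott:     {lott_rate:.3f} shooters per year")       # 1.400
```

Because the two rates are averaged over different spans (47 years versus 15), they are not directly comparable: a lower count over a shorter window can still yield a higher annual rate.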


Stossel produced and distributed a segment of Stossel TV called “Mass Shooter Media Myth” or “Media Hype Questionable Gun Control Study,” premised on the notion that Lott had nullified Lankford’s peer-reviewed and widely referenced 2016 study, “Public Mass Shooters and Firearms: A Cross-National Study of 171 Countries.” We were unable to determine when Stossel’s clip first aired, but the National Rifle Association (NRA) promoted Lott’s counter-research in September 2018.

Lott is a well-known advocate for gun rights, but he has separately been embroiled in research-related controversies including repeated allegations of cherry-picking as well as an admission that he has used pseudonymous sockpuppet accounts in order to attempt to influence public opinion. Against that backdrop, Lott’s claim that Lankford’s research was flawed should be taken with a grain of salt.

On the other hand, Lankford appears to have specifically avoided engaging with Lott about his criminology study, to the point of declining to assist fact-checkers in comparing his study with Lott’s paper. Nevertheless, Lankford’s work was examined and reviewed prior to its publication in the journal Violence and Victims in January 2016. Lott contends that Lankford was opaque about his methodology; that appears to be accurate insofar as Lankford pointedly avoided engaging with Lott during Lott’s efforts to disprove his findings.

But that does not itself suggest that Lankford somehow gamed the peer-review process or that others found significant flaws in his work. Lankford did at times provide additional detail to organizations such as the New York Times, and his unwillingness to parse his findings seemed to reflect a broader reluctance to be drawn into an academic dispute with Lott or to draw the ire of the NRA. In November 2017, Lankford discussed his methodology and classifications at length with an interviewer.

In Stossel’s own editorial about the segment, he says that he is unsurprised that Lankford adopted a position of stonewalling Lott, describing the latter as “pro-gun.” But Stossel also says viewers should “remember the wisdom of the Second Amendment” in his piece, and describes Lankford as a “gun control advocate.” This appears to be a mischaracterization: we were unable to find any information describing Lankford as anything but a criminology professor, and barring evidence Stossel neglected to mention, he seemed to infer a political position based on Lankford’s research alone:

I’m not surprised that Lankford didn’t reply to Lott’s emails. Lott is known as pro-gun. (He wrote the book “More Guns, Less Crime.”) But Lankford also won’t explain his data to me, The Washington Post or even his fellow gun control advocates.

Stossel was clearly aware that the Washington Post had fact-checked and adjusted Lott’s data set to remove cases of terrorism and crimes not typically classified as a “mass shooting.” But it’s not clear he approached the segment with those adjustments in mind, and he clearly espoused an editorial view in line with Lott’s.

By any reasonable deduction, Lankford has stonewalled Lott’s attempts to draw him into a debate over the 2016 study. But that does not discount the fact that Lankford’s research passed the peer review process prior to its publication in a criminology journal. By contrast, Lott self-published a paper to a repository in August/September 2018, and his work does not appear to have withstood the same pre-publication vetting. Lott also appeared in a five-and-a-half minute segment about his position, which was largely editorial in nature.

When Lott issued his rebuttal, the Washington Post endeavored to fact-check both his work and Lankford’s. Lankford once again declined to be involved, resulting in a somewhat lopsided comparison. The Post made several adjustments to Lott’s work and revised his totals downwards, but was unable to apply similar adjustments to Lankford’s figures due to his non-involvement. Lott went on to appear in Stossel’s video after the Washington Post evaluated his assessments.

Although Lott has repeatedly asserted that Lankford’s work was flawed, he also said that he was unable to reproduce the findings without access to the underlying data. We found no other reason to specifically doubt the findings apart from Lott’s desire to do so. It is possible that Lankford somehow erred in his work, but Lott’s primary and somewhat circular position was that Lankford’s refusal to engage him in academic debate was an indication that his work was flawed. The possibility remained that Lankford simply decided not to link his work with Lott’s in the public discourse, and that he remained resolute in that decision even when Lott brought the Washington Post with him.