Statistical Deception: The Case of William English
Statistics imply precise measurement; this gives them an air of authority.
Case in point: The "toning" footwear Reebok began selling in 2009. Reebok claimed that merely wearing the shoes strengthened hamstrings and calves by 11%. One TV commercial announced that they tone the buttocks "up to 28% more than regular sneakers, just by walking." Reebok sold more than 5 million pairs the next year.
Reebok now says that their shoes provide "support", but you won't see promises of a 28% firmer butt. In 2011, the company agreed to a $25 million settlement with the F.T.C. over false advertising. Those 11% and 28% statistics appear to have come from an unpublished study, commissioned by Reebok, that relied on five participants and weak methods. The figures sound authoritatively precise, but they turn out to be worthless.
The fallout from Reebok's deceptiveness seems minimal (loss of revenue; disappointed customers). In this newsletter I'll be discussing a pair of shoddy, unpublished, deceptively-presented studies by the same author that may have weightier consequences, because they conclude that being armed makes private citizens safer.
Deception is a strong word, but I'm sticking with it, because it's on display everywhere from the statistical minutiae in these studies to an opinion piece published by the author in The Wall Street Journal today.
If you're a gun rights advocate, don't let my liberal biases chase you away. This newsletter doesn't address the gun control debate or attempt to persuade you of anything. This is simply a case study of one scholar's deceptiveness with data. You don't want this guy on your team.
Some background
I first learned of the scholar in question from an investigative report by Mike McIntire and Jodi Kantor, "The Gun Lobby’s Hidden Hand in the 2nd Amendment Battle", published in The New York Times this June 18. The "hidden hand" in the title is a Georgetown University political economist named William English.
McIntire and Kantor reported that Dr. English, whose name has shown up in scores of lawsuits targeting gun control restrictions, receives money from gun rights organizations but refuses to disclose who funds his research. English is not, however, a recognized scholar in the field. His only two gun-related papers are unpublished and poorly received by experts. Nevertheless, the work has enjoyed some credibility in legal settings, including Supreme Court cases such as the 2022 Bruen decision.
McIntire and Kantor nicely summarized some methodological limitations in one of English's papers. I'll expand on their critique, then discuss more serious flaws in the other paper. I will also share contextual details, not discussed in their article, that shed more light on the extent of English's deceptiveness.
You'll see a gradual progression here from grumbling about methodological choices to broad accusations of dishonesty. Regular readers may be surprised by the heat – I'm usually pretty impartial – but English's deceptions are staggering. As I'll explain at the end, the Wall Street Journal article that appeared today is one of the most mendacious pieces of writing I've seen from a scholar in quite some time.
The National Firearms Survey
On July 16, 2021, Dr. English posted a paper, "The 2021 National Firearms Survey", to the Social Science Research Network (SSRN), a large database of scholarly papers in press, under review, or simply in preparation.
For this survey, English hired a company called Centiment to gather data on U.S. residents 18 and up who own firearms. Of the 16,708 owners identified, 15,450 completed the survey.
31.1% of the survey respondents claimed to have used their firearms in self-defense at some point in their lives. Based on the number of gun owners in the U.S. and guesses about length of gun ownership, English estimated that Americans use firearms to defend themselves about 1.67 million times per year. This statistic has held sway in courtrooms and is often cited by gun rights advocates, in spite of the fact that most experts estimate a much lower number (typically under 100,000).
1.67 million times per year works out to roughly once every 19 seconds, around the clock, on average. If English is right, gun ownership makes people a lot safer.
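For readers who want to check the arithmetic themselves, here's a minimal sketch; the only input is the 1.67 million figure from English's paper:

```python
# Sanity-check the claimed rate of defensive gun uses.
claimed_uses_per_year = 1_670_000       # English's extrapolated estimate
seconds_per_year = 365 * 24 * 60 * 60   # 31,536,000

interval = seconds_per_year / claimed_uses_per_year
print(f"One defensive use every {interval:.0f} seconds")  # every 19 seconds
```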
The survey also yielded higher-than-usual estimates concerning relatively lethal technologies. 30% of respondents reported owning an AR-15 type rifle, and 48% reported owning magazines containing 10 or more rounds. No wonder the National Firearms Survey has been so popular among the gun lobby.
Here are three reasons not to trust these data:
1. Biased questions.
English described the survey question about self-defense as follows:
"Have you ever defended yourself or your property with a firearm, even if it was not fired or displayed...?"
Here's the original question, which McIntire and Kantor retrieved from the Harvard Dataverse:
"Many policymakers recognize that a large number of people participate in shooting sports but question how often guns are used in self defense. Have you ever defended yourself or your property with a firearm, even if it was not fired or displayed...?"
English also reported that respondents were asked "Have you ever owned an AR-15 or similarly-styled rifle?", but here again he failed to include the full question:
"Some have argued that few gun owners actually want or use guns that are commonly classified as 'assault weapons'. Have you ever owned an AR-15 or similarly-styled rifle?...Answering this will help us establish how popular these types of firearms are."
English's paper does not contain the full versions, nor is it noted that the reader can find them on the Harvard Dataverse website.
These are not the first gun-related questions on the survey. One by one, these questions make it increasingly obvious that the survey is a tool for pushing back against gun control advocacy. The problem with the two questions quoted here is that the additional framing, which isn't needed, subtly invites respondents to provide the desirable response.
This is a minor issue, but it's the first of many examples of English's deceptiveness. As reported in his paper, the questions are perfectly suitable. The full versions, which he conceals from the reader, reflect, at minimum, bad survey methodology.
2. Semantic ambiguity.
Consider again that question "Have you ever defended yourself or your property with a firearm, even if it was not fired or displayed...?" Roughly a third of people who said yes indicated that they "verbally told someone" they had a gun, leading McIntire and Kantor to suggest that an overly broad definition of self-defense was assumed. I agree, but I also see a deeper issue with the semantics of the question.
Some people would say that owning a gun is a way to defend themselves. Even if they've never used the gun or even mentioned it to someone else, keeping it handy is, in their minds, a form of self-defense. The NRA and other gun rights organizations routinely refer to being armed as a type of defense, which is to say that "defense" takes on the meaning of the word "protection".
This is not a misuse of the term, since "defense" can refer to an action as well as the capability or means for taking such an action. A nation can be said to have military defenses, just as the body can be said to have immunological defenses, even if neither one is ever attacked. In the same way, the mere ownership of a gun can be construed as a form of self-defense. Thus, on English's survey, a gun owner might think: For the last sixteen years I've been defending my property with the gun under my pillow, so yes, I have defended my property with a firearm.
I can't claim to know what survey respondents were thinking. My point is simply that the question is ambiguous, a flaw that's exacerbated by the biased framing I mentioned earlier. Both flaws would tend to amplify the percentages English reported and inflate the extrapolated values. (A better survey would delete the framing – and clarify that merely owning a firearm does not constitute "self-defense".)
3. Deceptive presentation.
In this paper and elsewhere, English refers to extrapolated statistics as if they were actual findings or facts. For instance, the section that introduces defensive use data opens with a bullet point list of findings. Here are the first three entries:
• 31.1% of gun owners, or approximately 25.3 million adult Americans, have used a gun in self-defense.
• In most cases (81.9%) the gun is not fired.
• There are approximately 1.67 million defensive uses of firearms per year.
These are neither findings nor facts. They're extrapolations from English's sample of just over 15,000 – roughly 0.02% of gun owners in the U.S. (They're not even very trustworthy extrapolations, because English presents almost no information on the representativeness of his sample. He says it's representative, but he doesn't show that. Simply including data from all 50 states, plus a few demographic details, is not even close to being sufficient.)
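To put the sample size in perspective, here's a quick calculation using the 81.4 million gun-owner figure English himself cites (quoted below from the Washington Examiner interview):

```python
# How big is English's sample relative to the population he extrapolates to?
completed_surveys = 15_450        # respondents who finished the survey
us_gun_owners = 81_400_000        # owner count English cites elsewhere

share = completed_surveys / us_gun_owners
print(f"Sample covers {share:.3%} of U.S. gun owners")  # 0.019%

# The headline extrapolation: 31.1% of 81.4 million owners
print(f"{0.311 * us_gun_owners:,.0f} owners claimed to have used a gun in self-defense")
```

The second line reproduces the "approximately 25.3 million adult Americans" bullet point, which shows just how much weight that tiny sample is being asked to carry.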
The problem with deceptive presentations is that some people will be deceived. A quick Google search reveals that the 1.67 million estimate has morphed into an empirical fact in some quarters. In English’s 2022 interview with the Washington Examiner, for instance, we read that
"English confirmed that 81.4 million [Americans] own guns, [and] a third of them have used a weapon to defend themselves or their property in 1.6 million incidents per year..."
It's unclear whether to blame English or the journalist here, but elsewhere English himself helps perpetuate the mistake. For instance, roughly 5 minutes into a podcast for The Reload, we hear the following:
Interviewer: "But your survey found, um, that there were more than 1.67 million defensive gun uses per year in the United States. Is that right?"
English: "Yeah, that's right."
(Again, more credible estimates suggest an annual average of less than 100,000.)
The right-to-carry study
On July 16, 2021, the same day the National Firearms Survey appeared, English posted a separate paper to SSRN looking at the impact of right-to-carry laws on crime rates. I'll refer to this as the RTC paper.
("Right-to-carry" refers to the legal right to carry a loaded, concealed gun in public, either with or without a permit, so long as all licensing requirements are met. State-level rules as of 2024 are briefly summarized here.)
Although not as influential as his survey, the RTC paper has been cited by the NRA, the Firearms Policy Coalition, The Reload, and other conservative groups, and it's the one Samuel Alito quoted from in his concurring opinion for the Bruen decision.
In essence, the RTC paper is a response to a 2019 study led by Dr. John Donohue at Stanford. Donohue and colleagues showed that the passage of RTC laws leads to an increase in violent crime. For instance, among 33 states that adopted RTC laws after 1977, violent crime rates were 13 to 15% higher overall 10 years after the laws were passed than would be expected had they not been adopted.
Using different methods, English found no association between RTC laws and violent crime. This is the finding that Samuel Alito quoted, and it's happy news for the gun lobby, because it contradicts the notion that more guns in public spur more crime. However, English's RTC paper is even more deeply flawed than his National Firearms Survey.
Earlier this week I reached out to English, Donohue, and three other sources about this paper. Everyone but English replied, and I'm grateful that Donohue in particular confirmed some of my reservations about the data while calling my attention to other issues.
A key challenge for all researchers in this area is that it's impossible to know exactly how many guns are being carried in public at any one time. Some sort of approach to estimation is needed. What English did was to count the number of carry permits issued by each state per year, then look at associations between permit counts and crime rates.
You could quibble that having a permit doesn't tell us how often a person actually brings their gun along when they go out, but still, it seems reasonable to assume that more permits tend to mean more public carry. Even so, there are at least two reasons not merely to mistrust the data but to dismiss it altogether:
1. Inaccurate permit counts.
(a) English began counting permits whenever a state passed an RTC law. This means, in effect, that a state would be coded as issuing zero permits prior to passage of the law. The problem with this approach, as Donohue remarked in an email to me, is that "many states have large numbers of permits in place before RTC adoption."
This alone undermines the credibility of the findings. Recall that English found no association between permits and crime. The obvious explanation is measurement error: Any states where permits were granted before the passage of RTC laws would create the kind of noise in the data that masks significant associations.
2. Underestimation of gun presence.
Counting permits is less than ideal, even if counted accurately, because some states have passed laws allowing permitless carry. No surprise that when they do, fewer people apply for permits. In these states, a sharp decline in permits doesn't mean that fewer guns are being brought to public places. Rather, public carry increases, because it's now more convenient. English did nothing to address this source of error, but it's a critical one, because it biases the relationship between permits and crime toward non-significance.
To illustrate, consider Alaska, which passed a law in 2003 allowing anyone 21 or older to carry a concealed firearm in public without a permit. As Donohue noted in his email, "In Alaska violent crime skyrocketed with permitless carry – while the number of no longer needed permits declined sharply." In other words, when public carry in the state increased, so did violent crime. More guns in public, more crime.
For states like Alaska, English's data might seem to imply that fewer guns in public leads to more violent crime. What his data would actually show is that fewer permits are associated with the increase. For Alaska, fewer permits after 2003 actually meant more guns in public, and more guns spurred more violent crime.
In short, combining data from states like Alaska with that from other states introduces error that works against detecting a significant relationship between permit rates and crime. (Think of it as a balancing effect. After RTC laws are passed, we have two kinds of states: Alaska-type states, where fewer permits are associated with more crime, and other states, where more permits are associated with more crime.)
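The balancing effect is easy to demonstrate with simulated data. The sketch below is purely hypothetical – the numbers are invented, not drawn from either paper. In one group of states, permit counts track public carry; in Alaska-type states, they move opposite to it. Pooling the two washes out a genuine carry-crime relationship:

```python
import random
random.seed(0)

def simulate_state(permits_track_carry: bool):
    """One hypothetical state-year in which crime genuinely rises with carry."""
    carry = random.gauss(50, 10)                    # unobserved true carry level
    crime = 100 + 0.8 * carry + random.gauss(0, 5)  # real positive effect of carry
    # In permitless-carry states, permits fall as carry rises.
    permits = carry if permits_track_carry else 100 - carry
    return permits, crime

data = [simulate_state(True) for _ in range(500)]    # ordinary RTC states
data += [simulate_state(False) for _ in range(500)]  # Alaska-type states

def corr(xs, ys):
    """Pearson correlation, computed from scratch to keep the sketch dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

permits, crime = zip(*data)
print(f"ordinary states alone: r = {corr(*zip(*data[:500])):.2f}")  # strongly positive
print(f"pooled:                r = {corr(permits, crime):.2f}")     # near zero
```

Even though every simulated state has the same underlying carry-crime effect, the pooled permit-crime correlation collapses toward zero – exactly the kind of null result English reports.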
Donohue identified many other flaws in English's methods, some highly technical, others so obvious I'd expect any reasonably alert high school student to notice them (e.g., the majority of the data had to be imputed – filled in by estimation rather than observed – because it was not available from individual states). I think I've said enough to illustrate why the RTC findings have zero credibility.
So far I've only discussed methodology. Are there signs of deception in the writing? I would say yes, beginning with the first sentence of the abstract:
"Over the last 30 years, a majority of US states adopted Right-to-Carry (RTC) laws at the same time that crime rates dramatically decreased."
This is a classic maneuver: (a) Note that two phenomena occur around the same time. (b) Imply that one caused the other. As we all know, correlation doesn't prove a causal relationship. (Nobody agrees yet anyway on why crime rates have dropped since the 1990s; University of Chicago criminologist John Roman has sussed out roughly 35 different rigorously-argued hypotheses.)
English's paper also misrepresents the literature and draws heavily on a critique of Donohue and colleagues' 2019 study, while failing to note the latter's published reply. This isn't uncommon in academia, but it looks worse in the context of the broader pattern of deception on English's part. (If you're interested, a particularly clear discussion of the impact of RTC and no-permit laws on violent crime can be found here.)
I've been attributing deceptive intent to someone I've never met, so I want to dig deeper now into the evidence, and share with you a detail that, to my knowledge, has not been aired publicly yet.
Misrepresented intentions
I've been referring to English's two studies as "unpublished", which could mean that an author is still working on the paper and hasn't yet attempted to publish it. English claims that the studies are intended for a book he's preparing, and in many contexts he refers to the studies as "reports" or "findings" without acknowledging that they're unpublished. (In his interview with The Reload, he doesn't correct the interviewer's mistaken references to the work as "published"). But there's a further detail of importance that English has never mentioned.
A source I interviewed for this newsletter told me that they had reviewed the RTC paper twice, for two different journals. In each case, the paper was rejected. According to my source, English didn't even revise the paper before the second submission.
In short, a paper cited by gun rights organizations, presented in legal settings, and quoted from by a Supreme Court justice turns out to have failed the peer review process. Not once but twice.
According to McIntire and Kantor at The New York Times, English indicated in a 2019 deposition that, without tenure, he was reluctant to go public with firearms research because it was a subject "some people find controversial." Asked whether Georgetown administrators had questioned him about it, he said, "They don't know about my interests in that."
Dr. English is still pre-tenure, according to his Georgetown web page and the Wall Street Journal article that appeared today. But he is clearly not reluctant to go public. He has been trying to publish at least one of these papers, and succeeding would certainly garner attention from the university during tenure review, not to mention journalists who cover studies on such important topics. Meanwhile, he has spoken publicly on occasion about his work.
Tellingly, English only speaks to conservative outlets – as far as I know, only The Reload, the Washington Examiner, and The Wall Street Journal. He has ignored requests for interviews from The New York Times, The Trace, and (of course) Statisfied.
The story is about to get uglier.
The amicus curiae brief
In 2022, for the landmark Bruen case, English submitted an amicus brief (the one that Samuel Alito quoted) in collaboration with the Center for Human Liberty. Such briefs are submitted by individuals who are not party to a case but have an interest in the outcome and can share something of relevance.
At the outset of this brief, English introduces himself by citing the two papers I've discussed here and then describing himself as "a scholar committed to data-driven firearms policy research." This is about as deceptive as one can be without actually lying, given that the papers are unpublished and English has no peer-reviewed publications dealing directly with firearms.
As for the substance of the brief, I'll just share two of the many problematic statements.
–English asserts without qualification that "there are, conservatively, an average of 1.67 million defensive gun use incidents per year", as if this were a fact rather than an estimate, derived from an unpublished paper, that conflicts with established findings.
–English also states without qualification that "right-to-carry laws and associated growth in carry permits have no statistically significant effect on murder rates, firearm murder rates, non-firearm murder rates, or overall violent crime rates." This too is not a fact, but rather a finding reported in English's twice-rejected, still unpublished, methodologically flawed paper.
I could go on, but the substance of the brief is more than 30 pages. Suffice it to say there is much deception, and very little supporting evidence cited other than his own unpublished work and another paper, also unpublished, by Gary Kleck, the source of a now-discredited claim from the early 1990s that Americans use guns in acts of self-defense 2.5 million times per year.
The Wall Street Journal article
Today, June 27, English responded to his critics, including Mike McIntire and Jodi Kantor at The New York Times, in an article for the Wall Street Journal entitled "Antigun Activists Ambushed Me". Here English repeatedly describes his research as "objective" while accusing his critics of trying to "cancel" him and dismissing substantive criticisms of his work as "dishonest."
Clearer examples of dishonesty can be seen in Mr. English's own writing in this article:
1. "The Times article insinuated that I hid my funding, compensation and expert-witness work. But my funding has been fully disclosed, in accordance with academic practice, in every journal article accepted for publication."
Perhaps it's true that English has disclosed funding in every article accepted for publication. But the only gun-related papers he's made public – the two I discussed here – have never been accepted for publication. He has never disclosed funding for those papers. The New York Times writers were absolutely correct in noting that English failed to reveal his funding sources.
2."The [New York Times] article took issue with the wording of my survey, but the questions were peer-reviewed before being fielded..."
This is one of the reddest of red herrings. (a) English doesn't mention in the paper that the questions were vetted by anyone, but in any case (b) the term "peer-reviewed", in this context, could simply mean that he got a neighbor to look at the questions. This is not peer review in the conventional sense (i.e., an entire article being reviewed by experts for a scholarly journal). It's hard to imagine a more misleading way of saying that someone looked at your survey before you administered it.
3. "[The New York Times] tried to cast doubt on the survey’s sampling, but it was a representative sample of 54,000 Americans conducted by a professional survey firm..."
Saying that a sample is representative doesn't make it representative. The New York Times journalists were exactly right to cast doubt. Although all 50 states were represented, and some demographic information was provided, the paper falls short of conventional, rigorous demonstrations of representativeness.
4. "The [New York Times] article dismisses as implausible my finding that Americans use guns to defend themselves 1.67 million times a year, but that point lies squarely within the range of previous findings."
That statistic only falls within the range of previous findings when an estimate of 2.5 million is included as the upper limit. As I mentioned earlier, that estimate has been thoroughly discredited owing to multiple errors. 1.67 million is indeed highly implausible given the current state of the peer-reviewed literature.
These are just a few examples of deceptiveness in the article. As with English's other writings on the topic, it seems like there are always more...
Conclusions
When I drafted this newsletter, I thought I'd be using English's work to illustrate some of the ways statistics can be misleadingly calculated and interpreted. I didn't think I would end up launching a personal attack on the man. But when I saw the Wall Street Journal piece this morning, I changed my mind, because it became clear to me that he's a mean-spirited, dishonest scholar willing to deceive readers about a topic of profound public importance.
The tipping point for me was a few of his closing remarks:
"The Times and other outlets are signaling that they will cancel academics who state inconvenient facts.... Many journalists carry water for these causes by running poorly sourced articles larded with dishonest accusations... [But] If these are the strongest criticisms that can be made of my survey after years of digging, it should make us more confident in the results."
English offers no facts, inconvenient or otherwise. What is offered instead consists of bad data in a pair of unpublished studies written by someone with no other publications on the topic. McIntire and Kantor at The New York Times showed excellent sourcing and summarized accusations that many other experts have levied, including two I interviewed for this newsletter. The journalists' methodological critique of the National Firearms Survey was cogent, and still other concerns could be raised. English's RTC paper, rejected by peer-reviewed journals at least twice, is even more flawed. Meanwhile, English continues to be profoundly deceptive about his firearms data. If you still view the man as honest and his research as credible, then allow me to sell you some 2009-era Reebok EasyTone sneakers, and you can check whether walking in them makes your butt 28% firmer.
The moral of the story? However authoritative statistics may seem, a peek behind the curtain may diminish some of that authority once you've seen how and why they were created. The practical challenge that remains is to persuade genuine authorities, like the U.S. Supreme Court, to look behind the curtain too and not give the William Englishes of the world credibility unless they've earned it.
Thanks for reading!