Baby Wipes and Vaccine Boosters
In this newsletter, I'll be writing about a brand of baby wipes and an article on vaccine boosters that made headlines this week. Why this odd pairing of topics? Because I want to illustrate a concept that I call "stasthetics".
Stasthetics refers to principles that guide how we respond to statistics. Just as aesthetics shape our experiences with art, so stasthetics shape our experiences with statistical data.
What does that mean exactly? I'll start from the beginning.
When you encounter a work of art, a variety of reactions are possible (beautiful, depressing, trite, thought-provoking, morally challenging, etc.). Aesthetics explains how these reactions are produced. Artists understand aesthetics on some level, even if they can't articulate the underlying principles, because they aim to create something you find beautiful, or thought-provoking, or whatever, and sometimes they successfully produce these effects. The principles they make use of aren't written down (except in scholarly works), and you're not necessarily conscious of them as you experience each work of art.
Analogously, when you encounter a statistic, you may find it more or less informative, persuasive, intimidating, scary, suspicious, or irrelevant. Stasthetics describes how these reactions are produced. People who cite a statistic may or may not have statistical expertise, but they understand stasthetics well enough to use stats in ways that achieve certain effects. The underlying principles aren't written out (at least not until I finish my book), and you're not necessarily conscious of them as you encounter each statistic.
Something else that aesthetics and stasthetics have in common is that the intended effect isn't always achieved. Just as an artist might try to create something beautiful and inspiring that you end up considering awful, so a person who cites a statistic to make their argument more persuasive might end up intimidating you, or causing you to question the relevance of the statistic.
To illustrate how stasthetics works, I'll talk first about WaterWipes baby wipes. (Why this product? Because I'm sitting next to my granddaughter, who's sleeping quietly now after generating more poop than a herd of rabid elephants. In short, I have time to ponder this now-empty package of wipes on the table.)
The front of the WaterWipes package includes the following statement: "99.9% Water & a drop of Fruit Extract". Consider how this statement was vs. might've been worded:
Original: "99.9% Water & a drop of Fruit Extract".
Alternative 1: "99.9% Water & 0.1% Fruit Extract".
Alternative 2: "Mostly Water & a little Fruit Extract".
WaterWipes is marketed as "the world's purest baby wipes", as noted on the front of the package. Stasthetics tells us that including the "99.9% Water" statistic is desirable, because it increases the credibility of the statement about purity. In other words, the original statement is preferable to Alternative 2, because "99.9%" is more precise and thus more persuasive than "Mostly".
Stasthetics also tells us that purity is a topic that tends to call for stats. This isn't true of every topic. If we're at a bar and you're complaining that the ice in your drink has melted, it's natural to say that your drink is "mostly water". (It would be strange, and not very believable, if you said "my drink is about 87.5% water.") On the other hand, when a company claims that something is pure, or "purest", it seems both natural and helpful to buttress that claim with a statistic, because we assume the company measured. This is part of why the original statement seems preferable to Alternative 2, where no stats are cited.
Another reason the original statement beats Alternative 2 is that the 99.9% statistic implies manufacturing so careful that the water content will be the same from package to package. Careful, consistent manufacturing is reassuring to those of us responsible for wiping precious little butts, and so the precision of that 99.9% is another reason why the original statement is preferable to Alternative 2.
As for Alternative 1, stasthetics tells us that the "0.1%" is one stat too many – it makes the statement sound too "sciency". If you need to know how much fruit extract is in the wipes, you can infer it from the 99.9% figure. (But, really, you don't need that number, because the point of providing stats in the first place was to convince you of the purity of the product, and that 99.9% stat does the necessary work.)
A separate issue is that the 0.1% stat is potentially confusing to parents who are math-wary (and sleep-deprived), since it's well known that some people struggle with percentages that aren't whole numbers. (0.1% is a tenth of one percent, not one percent.) Stasthetics tells us that stats can be intimidating or confusing to lay people, so you need to choose your stats carefully when you're trying to communicate with them (and get them to buy your products).
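The arithmetic behind that confusion is easy to check. Here's a quick Python sketch; the 3-gram fluid amount is a hypothetical figure I've invented for illustration, not anything from the packaging:

```python
# A percentage is a fraction of 100, so 0.1% is 0.1 / 100 = 0.001:
# one part per thousand -- a tenth of one percent, not one percent.
extract_pct = 0.1
extract_fraction = extract_pct / 100            # 0.001

# Hypothetical example: if one wipe held 3.0 grams of liquid...
fluid_grams = 3.0
extract_grams = fluid_grams * extract_fraction  # 0.003 g -- literally a drop

# The two stated figures account for the whole product:
water_fraction = 99.9 / 100
total = water_fraction + extract_fraction       # 1.0
```

Seeing it spelled out this way makes it clear why "a drop" communicates the same fact with none of the decimal-point hazard.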
Since the 0.1% figure seems less than ideal, the manufacturers used the term "drop" instead. "Drop" may have been chosen because it's more evocative than phrases like "a little". (This is an issue of aesthetics rather than stasthetics. "A drop" sounds gentle; "a little" sounds vague.)
Owing to the way stasthetics work, stats don't always have their intended effect. When I saw the original statement on the WaterWipes package, my first thought was that although the 99.9% stat tells me something about the product, it's not informative enough. If you want to convince me of the purity of your product, you should assure me that the ingredients are unadulterated. Tell me that the water you used was sterile. Tell me that the package is airtight and prevents bacteria from entering. (99.9% tap water in a leaky package would be kind of gross...)
In sum, the manufacturers of WaterWipes cited a statistic to clarify a statement about purity, thereby making the statement more credible. The statistic they cited implies carefulness in the manufacturing of the product, thereby making the product seem safer. At the same time, the manufacturers limited themselves to one statistic, because the additional stat they could've provided might have been off-putting or confusing to some consumers. Meanwhile, at least one consumer (me) found the statistic they did provide to be informative but insufficient.
I've said a lot about WaterWipes. You may be silently begging my granddaughter to wake up soon so that I can move on. The purpose of providing so much detail was to illustrate that what motivates people to use stats, and how we respond to these stats, can be complicated, and you can see these processes at work even in something as seemingly trivial as the verbiage used to sell baby wipes.
So far I've presented stasthetics as what underlies our experience of statistical data. But stasthetics is also at work when stats are missing. In other words, we've learned to expect certain kinds of statistical information in certain contexts.
If you have a background in research, you're familiar with technical examples of what I have in mind (failure to report or disaggregate data; missing power analyses; significance testing without effect sizes; etc.). If you're not a researcher, you'll still find yourself in situations where you expect statistics and would consider it surprising, if not problematic, to find them missing. A simple example is a pre-election poll. Even though who wins is more important than the margin of victory, you want these polls to tell you more than just who's currently in the lead. What you want to know is the actual percentage of voters who favor each candidate (along with the sample size and margin of error). You know that polls are unreliable for many reasons, so the stats matter.
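The margin of error we expect a poll to report follows directly from the sample size. A minimal sketch using the standard normal approximation, with invented poll numbers:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a polled proportion.

    p: reported proportion favoring a candidate (e.g., 0.52)
    n: sample size
    z: z-score for the confidence level (1.96 for roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 52% favor candidate A, 1,000 respondents.
moe = margin_of_error(0.52, 1000)   # about 0.031, i.e., +/- 3.1 points
```

With a margin of error around 3 points, a reported "52% to 48%" lead is statistically a toss-up, which is exactly why the raw percentages, the sample size, and the margin of error all matter more than a bare "candidate A is ahead."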
The example I want to focus on in more depth is an article about vaccine boosters published this Monday. The article made the front page of major news websites (New York Times, CNN, Fox, MSNBC, etc.) and continues to be cited in stories as recently as today, because it was published in a leading medical journal (The Lancet) by an international team of experts (including FDA and WHO scientists), and because it presents a politically controversial recommendation (COVID-19 vaccine boosters should not be used in the general population). Whether or not you agree with that recommendation, the article is unusual because statistics are not reported in places where one would strongly expect to see them.
The authors call their article a "review", but it's not a review in any ordinary sense of the term. Rather, it's a "position paper" – i.e., a set of opinions. Virtually none of the studies supporting these opinions are summarized in the article. (There is a table, in a separate appendix, that lists 93 studies, but few details are provided.)
The premise of this "review" is that the decision to administer COVID-19 boosters to the general population should be "evidence-based and consider the benefits and risks for individuals and society." That seems reasonable. Unfortunately, the authors don't summarize the evidence, though I believe they should've done so, because other experts (and the Biden administration) do favor boosters.
The logic of the "review" is essentially this: Administering COVID-19 boosters to the general population could be beneficial, but we shouldn't do it yet, because: (a) the vaccines currently used in the U.S. work so well that the benefits of boosting would be small, (b) there's no clear evidence yet that boosters have enduring effects, and (c) boosters may have adverse side effects.
I'll address these claims in reverse order.
Regarding (c), no statistics on side effects are cited; the authors merely note that such effects are possible. No doubt the risk is greater than zero, but stasthetics indicates that in this context, merely noting the possibility of risk without providing estimates is not only under-informative but arguably irresponsible. We can't evaluate risk if we don't know how much of it we face. If data on risk are lacking, that should be acknowledged. Either way, you have to say something, because the audience for these articles includes folks who make decisions about public health policy and practice.
Regarding (b), the authors refer to a study from Israel, published this Wednesday in the New England Journal of Medicine, which shows that the benefits of Pfizer boosters last for at least 12 days among people ages 60 and older. Although this is good news for booster advocates, I agree with the authors that it's still too early to know how long boosters will be effective (and whether they would help younger people). This is the only part of their argument I have no quarrel with.
Regarding (a), the authors acknowledge that the effectiveness of current vaccines wanes over time. However, they stress that effectiveness diminishes for mild disease but not for severe disease. (What is "severe" disease? The authors note only that it isn't defined the same way across studies.) Stasthetics tells us that in a paper like this, written by experts but understandable by journalists, policymakers, and the rest of the educated lay public, some statistical data should be presented. How much does effectiveness diminish over time? All the authors provide are four figures, which are informative but nonetheless completely irrelevant to their argument. The figures show that vaccines are highly effective, and that the effectiveness is slightly higher for "severe" disease than for "mild" disease. However, the figures contain no interpretable time-related information. (One figure distinguishes between vaccine efficacy measured "early" versus "later", but this distinction is defined vaguely as "more recently" versus "less recently" since vaccination.) In short, owing to the absence of relevant statistical data, we can glean nothing about declines in vaccine effectiveness over time. You can actually get better data about declining effectiveness from good journalists (see, for example, this article published last month), from the CDC, or from Pfizer's FDA briefing document that was released to the public today.
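To make concrete what kind of statistic is missing here: vaccine efficacy is conventionally computed as one minus the relative risk, comparing attack rates in vaccinated versus unvaccinated groups. A sketch with invented attack rates (these numbers are mine, not the Lancet article's):

```python
def vaccine_efficacy(attack_rate_vaccinated, attack_rate_unvaccinated):
    """Efficacy = 1 - relative risk (the conventional definition)."""
    return 1 - attack_rate_vaccinated / attack_rate_unvaccinated

# Invented illustration of the waning the authors describe only in words:
ve_early = vaccine_efficacy(0.005, 0.050)   # 0.90 -> 90% effective "early"
ve_later = vaccine_efficacy(0.015, 0.050)   # 0.70 -> 70% effective "later"
decline = ve_early - ve_later               # a 20-point drop
```

A figure reporting efficacy at defined time points (say, 1 month versus 6 months after vaccination) would let readers compute exactly this kind of decline, which is the statistic the article's argument turns on and never supplies.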
There's actually no scientific consensus yet on the extent of decline in vaccine effectiveness (or on how much of it is due to changes in the immune response of vaccinated people, versus changes in public behavior (e.g., less masking), versus the emergence of more virulent strains of COVID-19). Even so, the authors make the strong claim that vaccine effectiveness declines somewhat for mild disease, but "not substantially" for severe disease. Stasthetics says that statements like this are invalid (and perhaps irresponsible) without the relevant stats.
Here's the authors' conclusion: "Current evidence does not, therefore, appear to show a need for boosting in the general population, in which efficacy against severe disease remains high." I dislike that statement. Partly because of the stasthetic violations I've discussed. And partly because it seems politically biased. Clearly there's some need for boosting. You could argue coherently that we shouldn't do it, because the benefits would be small and short-lived, because only people with mild cases of COVID-19 would benefit, and/or because we should focus on vaccinating unvaccinated people, but these arguments don't change the fact that a third vaccination would provide at least some benefits to at least some people in the general population.
Finally, stasthetics helps delineate the boundary between what stats do versus do not tell us. For example, stats do tell us that the vaccines used in the U.S. are effective, that boosters offer additional protection, and that adverse side effects are uncommon. However, stats do not tell us yet how much the benefits of vaccines and boosters will decline over a period of years (we need to wait to find out). Stats do not tell us whether the likelihood of adverse side effects is low enough to be acceptable (that's an ethical issue). Stats also do not answer more subtle questions, such as how and when to divide available vaccines across initial vaccinations vs. boosters (this is an ethical issue that's tied to statistical models of which strategies would have the greatest benefits. It's tricky because the models are complicated, and because there are different ways of defining "benefits.")
So, should the general population have access to boosters? I'm not sure. Will we have access? Well, some of us will…maybe. The Biden administration had been planning a roll-out of Pfizer boosters starting next week, but how this looks and who will receive their third shot will depend on the FDA's authorization (they're voting tomorrow) and, possibly, on CDC recommendations due next week. Stay tuned…
Thanks for reading!