The recent release and widespread propagation of the Santa Clara County “Stanford” study regarding the prevalence of COVID-19 has me concerned about the ability of Americans to exercise critical thinking skills and properly analyze the information they consume. In this article, we’ll explore exactly why this paper should be labeled junk science and filed in the trash can—it’s complete garbage.
You can’t listen to a conversation or scroll through your social media feeds without being bombarded by (well-intentioned?) folks touting the Santa Clara coronavirus study and how it “proves” both the higher prevalence of the virus and the much lower actual case fatality rate—of course, “equivalent to the flu.”
However, I would suggest that folks are being duped, hoodwinked, and manipulated by sensationalized and misleading headlines and conclusions—fueled by biases and agendas.
I raised concerns about this study during our live stream when it was first released—noting several red flags that were raised for me.
Now, having had time to fully digest both the study and the growing volume of peer reviews… those concerns have proven well-founded.
The problems with this coronavirus antibody study fall into two categories: Methodology Issues and Researcher Issues.
Let’s examine the evidence and why I contend that this study is worthless at best and dangerous at worst—given its proven ability to negatively influence the public’s decision-making process regarding matters of life and death.
Methodology Issues with the Santa Clara Covid-19 Prevalence Paper
As I stated, there are a plethora of significant problems with the methodology employed in this study. Specifically, it utilized bad controls, questionable tests and testing parameters, and improper (non-randomized) sampling.
This resulted in a study that produced conclusions that were the product of (1) a statistical error and (2) bias (enrichment)—meaning it was not representative of the Santa Clara population, let alone that of California or the United States.
Control Issues with the Stanford Covid-19 Antibody Study
The researchers indicate that the test’s sensitivity and specificity parameters were obtained via testing on 37 positive and 30 negative control samples.
These control counts are way too small. In particular, specificity is absolutely critical when the measured prevalence is low, because even a small false-positive rate can account for most of the positives found in a sample the size of this study’s (only 3,300 individuals).
In fact, a note was buried at the very bottom of the study, stating, “If test specificity [is] less than 97.9%, our SARS-CoV-2 prevalence estimate would change from 2.8% to less than 1%, and the lower uncertainty bound of our estimate would include zero.”
That’s a massive and highly significant change—one that would render the study useless. As such, everything depends on the test’s specificity—or ability to accurately avoid false positives.
Again, the test kits must be extremely accurate (producing very few false positives or false negatives) given the small sample size of this study.
In this case, the controls were absolutely not enough to accurately confirm either of these parameters with a high degree of confidence—parameters that, in turn, directly impact the prevalence range.
Smriti Mallapaty writes in an article published on Nature:
“To ensure a test is sensitive enough to pick up only true SARS-CoV-2 infections, it needs to be evaluated on hundreds of positive cases of COVID-19 and thousands of negative ones, says Michael Busch, an infectious-diseases researcher and director of the Vitalant Research Institute in San Francisco, California… But most kits have not been thoroughly tested.”
Again, the study only tested a whopping 37 positive and 30 negative control samples!
Moreover, the paper itself noted that the manufacturer’s kit performance data reported 2 false positives out of 371 true negatives.
That is a false-positive rate of roughly 0.5% at the point estimate—but with only 371 negative controls, the 95% confidence interval on that rate stretches to nearly 2%. Applied to the 3,300 tests in this study, that means there could plausibly be 50 or more false positives.
Remember, the actual study found 50 positives within the sample! That means that based on the manufacturer’s own data (1) all of them could have been false positives, (2) the sample prevalence could have been zero, and (3) the study is nothing more than a statistical-error waste of time!
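To make that concrete, here is a minimal sketch (Python, standard library only) of what the manufacturer’s 2-in-371 figure implies. The exact Clopper–Pearson upper confidence bound on the false-positive rate is computed by bisection on the binomial CDF; the counts are the ones quoted above.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson_upper(k, n, alpha=0.05):
    """Exact (1 - alpha/2) upper confidence bound on a proportion, via
    bisection: the largest p with P(X <= k | n, p) >= alpha / 2."""
    lo, hi = k / n, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) >= alpha / 2:
            lo = mid
        else:
            hi = mid
    return lo

# Manufacturer's kit data cited in the paper: 2 false positives in 371 negatives
fp, negatives = 2, 371
tests, observed_positives = 3300, 50

point_fpr = fp / negatives                        # point estimate, ~0.5%
upper_fpr = clopper_pearson_upper(fp, negatives)  # 95% upper bound, ~1.9%

print(f"Expected false positives at the point estimate: {point_fpr * tests:.0f}")
print(f"Worst-case false positives at the 95% bound:    {upper_fpr * tests:.0f}")
```

The worst-case count exceeds the 50 positives the study actually found, which is precisely why every one of them could be a false positive.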
Test Kit Issues with the Stanford Coronavirus Study
At the time of the study, there were no FDA-approved Covid-19 antibody tests available for clinical use.
Instead, the researchers used a test kit purchased from Premier Biotech (a Minnesota firm) but manufactured by Hangzhou Biotech—a Chinese lab test vendor. And we all know how well the Chinese test kits and PPE have worked!
The problem is that researchers from hospitals and universities in Denmark had already run sensitivity and specificity tests on nine such test kits—including the one selected and used by the study. They rated this kit as the worst of all the kits tested!
In fact, they found this test kit to only have an 87% specificity—a very far cry from the 99.5% claimed in the paper!
This is critical to understand because this study becomes completely invalid and useless if the specificity is off by only a tiny fraction!
After all, the paper itself admitted that at even a 97.9% specificity, prevalence would be less than 1%.
Further complicating things is the fact that, with 1.5% of the sample testing positive, the minimum specificity that would leave any true positives at all (the underlying data has still not been released for review) is 98.5%.
At exactly that level, you would expect 3,300 tests to produce about 50 false positives (1.5%) all by themselves. That calls into question even the paper’s own fallback claim of a sub-1% prevalence at 97.9% specificity.
So, (1) the data, statistical analysis, and conclusions are dubious at best and (2) we are left with a prevalence uncertainty range of between 0% to 4.2%—which leaves us where we started before the study (aka nowhere).
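One way to see why everything hinges on specificity is the standard Rogan–Gladen correction, which converts a raw test-positive rate into an estimated true prevalence. Here is a minimal Python sketch; the 80% sensitivity figure is an illustrative assumption (not a number from the paper), while the 1.5% apparent rate is the study’s own.

```python
def rogan_gladen(apparent, sensitivity, specificity):
    """Standard Rogan-Gladen correction: converts the raw test-positive
    rate into an estimated true prevalence, clamped at zero (a negative
    result means false positives alone can explain every positive)."""
    adjusted = (apparent + specificity - 1) / (sensitivity + specificity - 1)
    return max(adjusted, 0.0)

apparent = 50 / 3300   # the study's raw positive rate, ~1.5%
sensitivity = 0.80     # illustrative assumption, NOT a figure from the paper

for specificity in (0.995, 0.985, 0.979, 0.87):
    est = rogan_gladen(apparent, sensitivity, specificity)
    print(f"specificity {specificity:.1%} -> adjusted prevalence {est:.2%}")
```

Under these assumptions the estimate collapses to zero at 98.5% specificity and stays there at the 87% figure the Danish researchers measured—the entire signal evaporates with a fraction-of-a-percent change in one parameter.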
Sampling Problems with the Santa Clara Covid-19 Prevalence Study
When it comes to sampling, there are two major problems with the study’s methodology, both centered on the means of solicitation. This is critical because a non-random sampling has the potential to be both highly skewed (enriched) and not representative of the broader population.
Solicitation Flaws—A Biased, Non-Representative Sample
First, the researchers used biased language to solicit participants—announcing that they were going to be providing free testing for Covid-19 antibodies and requesting people to come get tested.
An actual Facebook ad that ran for the study made exactly this offer.
Again, you MUST have a random sample. By using specific language that revealed exactly what was being offered and tested for, the solicitation failed to remain unbiased.
The fact that at least some people signed up simply in order to get free testing more than likely enriched the sample.
Participants likely signed up because it was nearly impossible to get tested in California at the time. Many believed they were infected but simply couldn’t get tested. This solicitation provided a FREE pathway for concerned citizens to finally get the testing they were desperately seeking to answer the question looming over them: did I have Covid-19?
As would be expected, many jumped on this as an opportunity to get tested when there was no other way for them to do so—and for free. Actual participants have stated this to be the case.
Who would want a test?
Again, those who had experienced known virus symptoms, but weren’t critical enough to be tested. This produced an underlying bias in the sample selection and artificially enriched the data—stacked it in favor of those who were infected.
Recruiting Flaws—Facebook Used as the Solicitation Channel
Second, because Facebook was used to deliver the solicitation via social media, the study was highly susceptible to recruiting. Meaning, the invitation was likely shared within Facebook communities (including private ones) filled with folks who had a reason to believe they may have had the virus (e.g., “I had the symptoms but couldn’t get tested” support groups).
In other words, private groups with a high prevalence of potentially positive individuals served as recruiting pathways. That potential recruiting again renders this study non-randomized and likely biased (enriched).
For example, if I saw the ad on Facebook and knew I had a friend who believed they had been infected but were unable to ever get tested… I’d share the solicitation with them and let them know this would be a great way to finally determine if they really had the virus or not!
What the Methodology Flaws Mean for this Coronavirus Antibody Paper
Whether it was enriched via poor sampling (e.g., recruiting and non-random solicitation), false positives, or a combination of both, the outcome is the same—it is a worthless data set for statistical analysis and fails to be a representative sample of the broader population.
Andrew Gelman, a professor of statistics and political science and director of the Applied Statistics Center at Columbia University, aptly summarizes the merit of this bunk study when he writes, “I think the authors of [this] paper owe us all an apology. We wasted time and effort discussing this paper whose main selling point was some numbers that were essentially the product of a statistical error.”
He adds, “I’m serious about the apology. Everyone makes mistakes. I don’t think the authors need to apologize just because they screwed up. I think they need to apologize because these were avoidable screw-ups. They’re the kind of screw-ups that happen if you want to leap out with an exciting finding and you don’t look too carefully at what you might have done wrong.”
In the end, the methodological failures of this study mean it has a lower prevalence boundary of zero percent (0%) and provides absolutely zero value in determining the true prevalence level of Covid-19 in Santa Clara County, the State of California, or the United States as a whole.
Researcher Issues with the Stanford Coronavirus Prevalence Study
Beyond the numerous and critical problems with the study’s methodology, there are even more concerning issues with the researchers themselves.
Before we dive into the details, please take note of a few specific authors of the paper itself.
We’ll get back to them in just a second…
Wall Street Journal Article Number One
On the day the study was released, the Wall Street Journal ran an article entitled “New Data Suggest the Coronavirus Isn’t as Deadly as We Thought.”
It was a pointed (i.e., persuasive opinion) piece that aggressively promoted this study. It was also written entirely in the third person (e.g., “a study,” “the researchers,” “a Stanford team”)—with zero indication that the author was involved with or connected to the study.
However, it was written by none other than Andrew Bogan—an author of the Santa Clara Covid-19 prevalence paper!
Again, absolutely no declaration of this important connection was made anywhere in the WSJ article—including in the bio line. This represents an undisclosed critical conflict of interest and a shameful case of editorial misconduct by the Wall Street Journal. Readers—especially of a persuasive opinion essay—deserve to be informed of that.
The article was a propaganda piece, shamelessly promoting the importance and merit of the study—namely, its foregone conclusion that actual case numbers in the US are far greater than reported and that the infection fatality (or case fatality) rate is basically the same as the flu.
That’s a political or ideological agenda… not an empirical case supported by the science.
Wall Street Journal Article Number Two
Furthermore, Bogan’s article directly refers to an earlier Wall Street Journal article (in the opening paragraph) as clear support of the study’s conclusion among the scientific “expert” community.
However, when you research this prior article, you find it was the shameful and scientifically absurd opinion piece we discussed and shredded back on March 24th!
It was an opinion piece entitled “Is the Coronavirus as Deadly as They Say?”
This junk-science opinion piece made the absurd claim that—based on the positive cases in the NBA at the time—the “fatality rate from Covid-19 [is] orders of magnitude smaller than it appears.”
The authors made this emphatic and declarative statement after admitting that the NBA players were not representative of the broader population. They then immediately and shockingly asserted that it’s “true” nonetheless, if we just overlook that pesky little detail and assume (pretend) that the sample was representative of the larger US population!
That’s the equivalent of measuring the heights of a small sample of Japanese men in Tokyo, arriving at a mean of 5.3 feet, and then arguing that the mean height of American males is 5.3 feet… all while telling the audience that you know the sample is not representative of the targeted population but, hey, just ignore that and pretend it was???
This may not be a high crime when it comes to the mundane topic of average height for men… but it most certainly is when the message you’re sending deals with a life and death matter such as the prevalence of an infectious virus in the middle of a global pandemic!
The authors of this piece went as far out on a professional-credibility limb as they possibly could have—with zero scientifically sound (empirical) data to support their position. They were ridiculed across the scientific community for the amateurish representativeness pseudo-argument. It was viewed as laughable within the profession.
Who were those authors?
Well, would you look at that… it was Eran Bendavid and Jay Bhattacharya—the lead author and the senior author of the Stanford paper!!!
Massive Credibility Gap and Biased Researchers
So, we have one author of the Stanford paper covertly promoting the study (hailing it as a scientific revelation) in the Wall Street Journal, while simultaneously supporting his assertion by calling on two other (also undisclosed) authors of the paper as “experts” who validate the study???
Shamefully unbelievable! We go from “Is the virus as deadly as they say?” to “The virus Isn’t as deadly as we thought” (funny how the titles sound like a series)—all written by authors of the very paper that is advanced as support for the underlying presupposition!
So, let’s get this straight…
A makes a ridiculous claim—predicated on an absurd argument (viz., bad representation) and a dubious presupposition (viz., Covid-19 is no worse than the flu)
B then refers to A as experts
B argues that A is correct after all because study C “proves” it
Study C was authored by A and B
Are you kidding me??? It’s all self-affirming, circular logic! (and the WSJ was none the wiser to it nor helped connect those dots for its readers?)
These researchers were clearly biased (they wanted it to be true), with every incentive to redeem their professional reputations by “confirming” the absurd and unsupported claims of prevalence they had made in the first place.
This patently-obvious bias and hidden agenda infects the entire study and renders it scientifically untrustworthy, illegitimate, and worthless—nothing but junk-science and agenda-fueled propaganda.
And, unfortunately, Stanford now has a black eye for being attached to the bogus study.
And yet, Americans everywhere are embracing this study as “proof” the Covid-19 pandemic is overblown and no worse than the flu!
Why? For the same reason—it confirms and supports their bias. That bias is predicated on a closely held belief that the coronavirus is no worse than the flu and a deep desire to simply return to life as “normal.” Neither of these is tethered to any shred of current, empirically supported data.
It is a belief system—one being manipulated and perpetuated by those with an agenda.
Sadly, folks are pricing-in risk and making personal decisions based on this junk science—decisions that can directly impact their life, the lives of their family, and the lives of others in their community.
We simply do not know what the prevalence of Covid-19 is. We absolutely need to figure it out. We all agree on that. That knowledge could prove invaluable in helping guide our long-run response to the pandemic.
Again, we know the actual infection numbers are bigger than the officially recognized cases—we have a known testing problem. But, again, we don’t know how much bigger they are.
However, fake prevalence numbers from bunk studies are useless and dangerous. We must do it with legitimate science and peer-reviewed research.
We can largely invalidate the fundamental claim of the study (viz., that the IFR is between 0.1 and 0.2 percent, or roughly the same as the flu) by doing some simple back-of-the-envelope math regarding New York City.
We know we currently have 10,657 deaths in New York City (as of 4/21/20). With an estimated population of 8.7 million people, that would already put us at an infection fatality rate (IFR) of 0.12%—if every single New York City resident was infected.
And we know that number is only going to continue to rise—the deaths aren’t going to end today.
Now, if we figure based on even half of the city being infected, that IFR would climb to 0.24%… already above the study’s projection of 0.1-0.2%… without any additional future deaths.
However, I think we could reasonably question whether every other New Yorker is infected. The reality is probably closer to 5-10% (higher than most places due to the population density of the city).
If that’s the case, then the IFR would be more around 1.2% to 2.4%—six to twelve times higher than the paper’s maximum estimate and 12-24 times higher than the flu.
However, that’s based on all infections—including fully asymptomatic folks. The fatality rate among those with actual symptoms (mild to severe) would be even higher, though we still don’t know what percentage of infected individuals are asymptomatic.
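The back-of-the-envelope arithmetic above can be reproduced in a few lines of Python, using only the figures already quoted (10,657 deaths as of 4/21/20 and an estimated 8.7 million residents):

```python
# NYC back-of-the-envelope IFR check, using the figures quoted above.
deaths = 10_657
population = 8_700_000

# Sweep the assumed share of the city already infected, from "everyone"
# down to the 5-10% range argued for in the text.
for infected_share in (1.00, 0.50, 0.10, 0.05):
    ifr = deaths / (population * infected_share)
    print(f"{infected_share:.0%} of NYC infected -> IFR {ifr:.2%}")
```

Even the impossible everyone-is-infected floor already sits at the top of the study’s claimed range, and any plausible infection share pushes the IFR well above it.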
Moreover, this entirely ignores the growing mountain of evidence indicating the potential for serious long-term consequences from the SARS-CoV-2 virus—such as permanent lung, heart, kidney, neurological, prostate, testicular, and other damage… even to those only having “mild” symptoms. All of which represents serious risks not conveyed via a fatality rate.
And, never mind that we have zero idea regarding any potential for immunity (e.g., how much, to what strains, or for how long), which inescapably opens the door for the very real possibility of additional waves (with potentially more aggressive mutations) or even an annual battle against this coronavirus—much the same as with influenza.
Folks need to stop clinging to beliefs and desires that are simply not grounded in or supported by scientific fact. We need to stop allowing ourselves to be manipulated and hoodwinked by those with agendas—on both sides of the political and ideological divides.
This is a virus—a very serious one that we know very little about. You don’t want to get it… period.
We must begin to exercise critical thinking skills and sound analysis when judging information that we encounter. And we must stop regurgitating information to others—presenting it as “fact,” when we have no idea how sound it is or how to appropriately apply it to our current situation.
After all, that information—if acted upon by others—has the potential to harm and even kill them. We must take that seriously and practice some semblance of personal responsibility when determining our own actions and when attempting to influence the actions of others—including our friends, family, and fellow citizens.
Here are links to all the sources cited above (as always) for your own study and analysis…
Nature Article Covering the Study: https://www.nature.com/articles/d41586-020-01095-0
The Santa Clara Paper: https://www.medrxiv.org/content/10.1101/2020.04.14.20062463v1
Test Kit Information & Paper Review: https://www.extremetech.com/extreme/309500-how-deadly-is-covid-19-new-stanford-study-raises-questions
Dr. Andrew Gelman’s peer-review: https://statmodeling.stat.columbia.edu/2020/04/19/fatal-flaws-in-stanford-study-of-coronavirus-prevalence/
Additionally, here is an excellent video from Dr. Chris Martenson where he reviews (shreds) the Stanford paper (starts at 5:44)—I highly recommend following him on YouTube: