# How to Calculate Covid-19 Fatality Rate: How Dangerous Is the Virus?

Social media is full of foolish memes touting that only 0.01% of Americans have died and, therefore, the fear over the coronavirus pandemic is overblown. Putting aside the fact that “fear” is a strawman argument, the negligent promotion of this misinformation is both unhelpful and a reflection of a complete lack of mathematical and statistical understanding. Let’s take a minute to examine the facts, set aside our ideological belief system, and discover what the actual current fatality rate is for Covid-19.

**How NOT to Calculate a Covid-19 Fatality Rate**

First, let’s take a look at how NOT to calculate a Covid-19 fatality rate. The source of these bogus memes (and their underlying foundation) lies in a failure to understand the difference between a **survival rate** and a *developing* **fatality rate**.

A survival rate is calculated by dividing the number of survivors by a given population—equivalently, one minus the total deaths divided by that population. However, this can only be calculated *after* the given event is **complete**.

For example, you can’t calculate the survival rate for a soldier in a war after a single battle—the war must be complete.

As the coronavirus pandemic is far from over, a mid-pandemic “survival rate” is absolutely useless and, worse, dangerous and counterproductive when used as the basis for health decisions.

Yes, if we calculate the US “survival” rate for Covid-19 as of Friday, May 29th, it would be 99.97%. However, this doesn’t mean the fatality rate of the virus is 0.03 percent! That is an absurd conclusion to draw from that math!
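To make the meme math concrete, here is a minimal Python sketch of that calculation (the population and death figures are rounded approximations for late May 2020, not official counts):

```python
# Rough "survival rate" arithmetic behind the memes (approximate figures).
population = 328_000_000  # approximate US population
deaths = 103_000          # approximate cumulative US Covid-19 deaths, late May 2020

survival_rate = 1 - deaths / population
print(f"survival rate: {survival_rate:.2%}")  # 99.97%
```

As the article argues, that number says nothing about your odds if you actually catch the virus while the pandemic is still in progress.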

Once the pandemic is over (if we ever truly eradicate the virus), then we could calculate an accurate survival rate. Furthermore, if 100% of the population becomes infected during the pandemic, then—and only then—would the complement of that survival rate (one minus it) equal the fatality rate.

Otherwise, the two terms and measures are neither synonymous nor a useful measure of risk.

**A Better Way to Calculate the Coronavirus Fatality Rate**

A better way to calculate the coronavirus *fatality rate* would be to utilize the **case fatality rate**.

This is done by dividing the total number of deaths by the total number of positive cases.

This provides us with a **best-case fatality rate**—meaning it “assumes” that all unresolved cases will result in recoveries and not deaths. As such, it provides us with a minimum fatality rate or “floor.”

When we do this for the US, we arrive at a case fatality rate of **5.8 percent**.
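That division is simple enough to sketch in Python; the totals below are rounded late-May 2020 figures used for illustration, not official counts:

```python
def case_fatality_rate(deaths, confirmed_cases):
    """Best-case (floor) rate: assumes every unresolved case ends in recovery."""
    return deaths / confirmed_cases

# Approximate US totals, late May 2020 (illustrative figures).
us_deaths = 103_000
us_confirmed = 1_780_000
print(f"CFR: {case_fatality_rate(us_deaths, us_confirmed):.1%}")  # 5.8%
```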

However, there is an inherent problem with this method—one that we must account (or adjust) for on the back end when modeling it. It inherently fails to capture and reflect *unidentified* cases, which are highly likely to exist.

We’ll explore this more later when we finalize our fatality rate calculation and model its impact.

For now, what’s important to know is that a case fatality rate is a valid method—one that provides us with a minimum. We will use a modified version of this option to frame our final fatality rate range for the virus.

**The Preferred Method for Calculating the Covid-19 Fatality Rate**

The preferred method for calculating the fatality rate for the Covid-19 virus is to use what I refer to as an **outcome-based fatality rate**.

There are only two possible outcomes from an infection: recovery or death.

By dividing total deaths by the total number of *resolved* cases (deaths + recoveries), we are calculating the actual fatality rate among cases whose outcomes are known.

When we do this for the US, we arrive at a *current* outcome fatality rate of **17.2 percent**.
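A minimal sketch of that calculation follows; the recovery figure is an assumption chosen to be consistent with the ~17.2% rate quoted above, since the article does not publish its inputs:

```python
def outcome_fatality_rate(deaths, recoveries):
    """Fatality rate among resolved cases only (deaths + recoveries)."""
    return deaths / (deaths + recoveries)

# Illustrative late-May 2020 US figures (recoveries are an assumption here).
us_deaths = 103_000
us_recovered = 496_000
print(f"OFR: {outcome_fatality_rate(us_deaths, us_recovered):.1%}")  # 17.2%
```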

However, as with the case fatality rate method, this approach is *not without its issues*. The more resolved cases you have in the data, the more accurate the rate will be. Early in the pandemic, this rate will be *artificially high*. Over time, as more cases resolve, it will **naturally decline until it reaches and stabilizes at the true fatality rate**.

So, the case fatality rate will naturally rise as the outcome-based fatality rate naturally declines. The question becomes: *Where will they meet? What will the true fatality rate be?*

**Overcoming Limitations to Arrive at an Accurate (True) Fatality Rate**

As we noted, there are some inherent limitations with the outcome fatality rate method when working with small sample sizes (early data). To overcome this, we can **utilize statistics and modeling**.

The case fatality rate will provide us with a *minimum* (floor). The outcome fatality rate will provide us with a *current rate* (ceiling). Statistics and modeling allow us to project or forecast **where the two will likely meet**.

It is important to note that we are dealing with a probabilistic system—*we can’t know or calculate the true fatality rate until the pandemic is over*. However, this will allow us to build a much tighter range—meaning it will put us not only in the ballpark but in the infield.

The more data we can use, the better the results will be. Next, we’ll look at what data we utilized to calculate the fatality rate for Covid-19.

**Data Sets We Used to Calculate the Coronavirus Fatality Rate**

To calculate our coronavirus fatality rate, we utilized **three data sets** (or samples).

First, we used a data set consisting of US states. We selected the **top 23 states**—those with a significant number of total deaths (n ≥ 885).

Second, we used a data set consisting of countries. We selected **12 countries from North America and Europe**—those with what we considered to be reliable data.

Obviously, none of the data is perfect. However, these countries have strong healthcare systems, good reporting structures, and have demonstrated a reasonable level of transparency. Clearly, the goal was to avoid countries that have distorted their numbers (e.g., China and Russia) and countries that are current hotspots but with overwhelmed health systems (e.g., Brazil and Mexico).

Third, we used a **combined data set**. Based on the notion that our states are themselves much like small countries, we removed the overall US data from the country set and added our 23 selected states.

Once we crunched the data, we found a *strong correlation* between the **outcome fatality rate** and the **percentage of recovered cases**. As noted earlier, as more cases resolve, the current fatality rate will naturally become more accurate—meaning it will decrease until it levels off at the true fatality rate.

By using a **power-law best-fit formula**, we were able to calculate a projected fatality rate at 100 percent case resolution for each sample.
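The article doesn’t publish its exact equation, but a power fit of OFR against the fraction of resolved cases can be sketched with an ordinary least-squares regression on the log-log transform. The data points below are invented for illustration—they are *not* the article’s data:

```python
import math

# Hypothetical (fraction resolved, observed OFR) points -- NOT the article's data.
points = [(0.10, 0.28), (0.25, 0.21), (0.40, 0.17), (0.60, 0.13), (0.80, 0.11)]

# Fit OFR = a * resolved**b via least squares on the log-log transform.
xs = [math.log(x) for x, _ in points]
ys = [math.log(y) for _, y in points]
n = len(points)
b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2
)
a = math.exp((sum(ys) - b * sum(xs)) / n)

# At 100 percent resolution, resolved = 1.0, so the projected OFR is simply a.
print(f"projected OFR at full resolution: {a:.1%}")
```

The negative exponent `b` captures the article’s observation that the observed OFR declines as the share of resolved cases grows.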

**Analyzing the Data to Determine a Fatality Rate Range**

At this stage, we can consolidate the numbers to start building our **probable range for the Covid-19 fatality rate**.

When we do, we can summarize the data in three buckets:

- Best Case or Case Fatality Rate (CFR)
- Outcome Fatality Rate (OFR)
- Best Fit (Using Statistically Derived Equations)

We find that the data is relatively consistent across the data sets and we apply an average to arrive at our *starting* point:

- Best Case (CFR): 7.63%
- OFR: 16.7%
- Best Fit: 6.13%

Next, we perform an adjustment to the best fit.

We know that our combined sample has an actual percentage of recovered cases of 43.54 percent, with an actual (current) OFR of 16.14 percent. When we check this against the best fit equation, it predicts an OFR of 12.15 percent—meaning we are actually trending slightly higher (+3.99 points).

We apply this adjustment to the best fit rate—arriving at **10.12 percent**.
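This back-end adjustment is just addition on the article’s own figures:

```python
# The article's adjustment, reproduced with its stated figures (in percent).
best_fit = 6.13        # averaged best-fit projection across the samples
actual_ofr = 16.14     # current OFR of the combined sample at 43.54% resolved
predicted_ofr = 12.15  # OFR the best-fit equation predicts at that point

variance = actual_ofr - predicted_ofr   # +3.99 points: trending high
adjusted_best_fit = best_fit + variance
print(f"adjusted best-fit rate: {adjusted_best_fit:.2f}%")  # 10.12%
```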

All of this allows us to now **establish our fatality rate range of potential outcomes**.

Now, while we added a roughly 4-point adjustment to arrive at the best fit rate (10.12%), we can logically anticipate that this variance will shrink as more cases resolve. Thus, it almost serves as a maximum target in our range.

As such, we project the fatality rate for the coronavirus will eventually stabilize between our minimum and this maximum best-fit target—or at around **7 to 8 percent** (the true fatality rate).

This is *70 to 80 times the fatality rate of the flu* (0.1%) and a far cry from the numbers being assumed and touted by many folks out there.

However, we now need to model this to see what the ultimate impact will be for our country—as well as make some final adjustments to that model based on the additional constraint we mentioned early in our article.

**Modeling the Impact of Covid-19 on the US Population**

If we take our case fatality rate range as is and model this from a *very conservative* position (meaning we go with the **minimum of 6%**), we can project the outcome for our nation.
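The model itself is simple multiplication. A minimal sketch, with hypothetical spread scenarios (the article does not commit to a single spread figure):

```python
def projected_deaths(population, infected_fraction, fatality_rate):
    """Deaths if a given fraction of the population is eventually infected."""
    return population * infected_fraction * fatality_rate

US_POPULATION = 328_000_000  # approximate
for spread in (0.05, 0.10, 0.20):  # hypothetical spread scenarios
    deaths = projected_deaths(US_POPULATION, spread, 0.06)  # conservative 6% floor
    print(f"{spread:.0%} spread -> {deaths:,.0f} deaths")
```

Even the smallest of these hypothetical scenarios dwarfs a typical flu season, which is the article’s point.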

You can decide for yourself how much you think the virus will spread through our population before we get a vaccine (if we ever get a vaccine & if that vaccine is highly effective).

Regardless, *these numbers should be extremely concerning to you!*

However, our modeling has a range as well, and this version will serve as our **ceiling**—*call it a best worst-case scenario*.

As we mentioned, we have to contend with another unknown variable—**current prevalence**.

While we have been aggressively testing, we clearly have not been able to conduct enough widespread testing to get a firm grasp on just how prevalent the virus is now in the population. Furthermore, our testing methodology has skewed the testing towards positive cases. This means we were more likely to have tested folks who were positive rather than a purely randomized, non-biased sample.

All of this means—as many have correctly noted—that (1) there are likely far more cases out there than we have identified and (2) these cases are likely asymptomatic (or at least very mild) and will likely resolve as recovered cases.

So, the million-dollar question is: *What is the actual prevalence?*

This is important because we must adjust our fatality rate to take this into account.

There have been a few studies done using serology (testing for antibodies). However, most have been complete garbage and junk science—especially the Stanford study that everyone loves to quote. To learn more about why this study is worthless, check out our article Bias & Agenda: Stanford Covid-19 Prevalence Study Is Absolute Junk Science?

In the end, this study really only told us that the prevalence of the virus was somewhere between zero and 4 percent—*thanks Captain Obvious!*

Furthermore, we have some initial prevalence data from New York City. However, due to the major outbreak experienced there and the city’s high population density (neither of which is representative of the vast majority of the country), this data produces an extremely skewed prevalence result (high) and is not very useful.

Most experts put the current US prevalence at around **3 to 4 percent**.

To be highly conservative, we will use **5 percent** and adjust our fatality rate to match.

Since we are being extremely conservative in our use of the current prevalence, we will use the original (best fit) target of 10.12 percent as the starting point.

When we adjust for the expected higher prevalence, we calculate an **adjusted case fatality rate of 1.06 percent**—*still 10 times that of the flu*.
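The article doesn’t show this arithmetic, but the adjustment presumably scales the 10.12 percent rate by the ratio of confirmed cases to assumed true infections. A sketch under that assumption (the confirmed-case figure is approximate):

```python
# Scale the 10.12% rate by confirmed cases / assumed true infections.
population = 328_000_000
confirmed = 1_720_000      # approximate US confirmed cases, late May 2020
prevalence = 0.05          # deliberately conservative assumed prevalence

true_infections = population * prevalence  # 16.4 million assumed infections
adjusted_rate = 0.1012 * confirmed / true_infections
print(f"adjusted fatality rate: {adjusted_rate:.2%}")  # 1.06%
```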

Furthermore, we will just round this to **1 percent**. This will provide us with a very conservative, best-case scenario (floor) for our modeling.

Again, you can decide for yourself how much the virus spreads before we can eradicate it—*if we can ever truly eradicate it*.

This exercise provides us with two key outputs (a fatality rate range and a death range) that we can personally use to both **better understand and respond to the current pandemic crisis**.

The total deaths will likely fall between these two models.

Interestingly, the best-case scenario is actually *worse* than our prediction of 700,000 to 1.2 million deaths from a little over two weeks ago.

That projection (watch video below) was based on an entirely different approach and methodology—*modeling the Spanish Flu*.

As the data has continued to come in, we are finding that this pandemic has the potential to actually be worse than the Spanish Flu—*something that ought to get your attention and activate your “Spidey” senses.*

Interesting as well is the fact that the CDC has *reduced* their projection (under increasing political pressure from the administration) to 500,000 deaths. They continue to go in the opposite direction of the data and science.

To learn more about the new and “improved” CDC scenarios and models you can read their report, as well as view their current short-term forecast and their previous short-term forecasts.

As we have frequently noted on our broadcasts, the CDC’s projections were based on absurdly low-ball estimates of both transmissibility and fatality rate—numbers that nearly all medical and research experts strongly disagree with. We would hit those numbers with less than 20 percent spread, even assuming a very conservative 1 percent fatality rate!

We fully expect to see the CDC continue to ratchet up their projections over time. Remember, these are the same models that very recently predicted we would hit 90,000 deaths around August 1st. We now find ourselves at 106,000 deaths at the end of just May!

Even now, in their current short-range forecast, they note that we are “likely to exceed 115,000 by June 20.”

First, that’s a far cry from 90,000 by August 1st. Second, thanks again, Captain Obvious. Third, they are still way low-balling their forecasts. We forecast, *on the very conservative end of our range*, exceeding 125,000 by that date… and it could be worse.

Of the third-party national forecasts (many by leading universities) that the CDC uses to generate its ensemble model/projection, the number is as high as around 140,000!

Now, we fully understand their focus on (1) preventing fear and panic and (2) attempting to assure folks it’s safe to return to normal life out of a political need to “restart” the economy.

However, on the first point, we couldn’t disagree more with not providing truthful and accurate information to the people—even if it is “scary.” We have an unalienable right to life and that requires that we have accurate, truthful and actionable information with which to make healthy personal decisions. It’s akin to the argument that folks are better off dying slowly in a simmering pot of water than realizing they just jumped into a boiling pot!

I would personally rather know I just jumped into boiling water, so I can jump right back out—shock and fear be damned! Better to suffer a slight burn but be able to recover than die a slow but unnoticeable death.

On the second point, the economy will “recover” (whatever that will ultimately look like) naturally on its own. We cannot force that—it must happen organically.

We are a consumer-driven, debt-based (live beyond your means) economy. We won’t return to “normal” until individuals feel comfortable with returning to 100% of their pre-Covid activities… something we don’t believe will ever happen. It will require consumer-driven demand, not government edicts.

**Conclusion**

As we have seen, promoting an ultra-high *survival* rate for the Covid-19 pandemic is as pointless as it is dangerous and counterproductive.

Instead, we should utilize a blend of case fatality and outcome fatality rates, combined with statistics and modeling.

While coronavirus continues to spread, these will reflect a moving target. However, they will (1) provide us with a realistic range of possible outcomes and (2) move towards the ultimate “true” fatality rate.

Based on this approach, we find that the fatality rate of the virus is **between 7 and 8 percent** (70-80 times the flu).

Furthermore, even when we adjust for the unknown variable of prevalence (very conservatively), we arrive at a fatality rate of **1 percent**—*ten times that of the flu*.

This translates into far more deaths than most Americans are psychologically prepared for—*a pandemic that could surpass even the infamous Spanish Flu!*

And none of this takes into account the potential for additional waves of increasing severity, mutations that could increase the fatality rate, or the degree to which we may or may not acquire immunity.

The third point is especially concerning. It means that many of those who encountered asymptomatic or mild infections in the first wave (spring) could be at risk of re-infections in a second wave this fall. Those re-infections could result in far worse outcomes—exposing the middle-range of the age distribution spectrum to significantly increased fatality rates.

*What’s the answer?*

We continue to advise folks to (1) take this virus seriously and (2) continue prepping!

Despite all the hopeful and well-intended declarations that this crisis is over and we’ll be back to normal in no time… the data, the virus, and the laws of nature are declaring (to those willing to listen) a very different reality.

This is not over. It is going to get worse—*potentially much worse*. As it does, civil unrest will continue to escalate, and the medical system will be strained to the breaking point.

Just because it hasn’t happened overnight… don’t succumb to short-term thinking.

We must think strategically, keeping the big picture in mind and understanding that we are dealing with a textbook case of **chaos theory** (the unpredictable interaction of an infinite number of unknown variables over a long timeframe).

As such, we can have very little clarity as to how this may ultimately play out. Declaring the pandemic all but over and constraining potential outcomes is both foolish and dangerous.

Don’t buy the misguided hype and junk science being peddled by politically motivated leaders and ignorant sheeple regurgitating lies and misinformation.

Add to this the certainty of increasing civil unrest as both the pandemic and economy continue to worsen—despite the “promises” of the ruling elites and biased experts—and things are going to get bumpy… *really bumpy*.

**Don’t take your foot off the gas… keep prepping my friend!**