New Flu Review 2: How do you measure lethality?

04.11.2013 |


Editor’s Note: You may hear about fatality rates or percentages when media report on new and dangerous flu strains, and oftentimes the reports are conflicting. In this post, Barrett Slenning, an epidemiologist at NC State, explains how these fatality rates are calculated and why the numbers may fluctuate. A previous post on H7N9 flu can be found here.

In the previous post I mentioned that H7N9 has killed around one-third of the people infected…here’s why we need to be really careful with such statements.

Early in most outbreaks we measure the Case Fatality Rate, or ‘CFR’ (i.e., [the number of fatal cases] / [total number of all cases]), and it usually turns out to be higher than the number we eventually calculate weeks to months later, after the epidemic has run its course. Part of the reason for the discrepancy is how we run surveillance programs.
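The CFR calculation itself is simple division. Here is a minimal sketch in Python; the counts below are invented for illustration and are not actual H7N9 case numbers.

```python
# Case fatality rate: fatal cases divided by all detected cases.
def case_fatality_rate(fatal_cases, total_cases):
    """Return the CFR as a fraction of all detected cases."""
    return fatal_cases / total_cases

# Hypothetical early-outbreak counts (not real H7N9 data):
cfr = case_fatality_rate(fatal_cases=9, total_cases=28)
print(f"CFR: {cfr:.0%}")  # roughly one-third
```

The catch, as the rest of this post explains, is entirely in the denominator: which cases get counted as “detected” depends on how surveillance is run.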

If, as appears to be the case in China, medical surveillance is being done strictly on hospital cases, it will probably overestimate the true CFR. This is because by going to hospitals you are automatically selecting the most ill patients to watch – the ones who were so sick they sought medical help. Not surprisingly, really sick people tend to die at a higher rate than moderately sick people do. But since the moderately sick people did not seek medical help, a hospital-based system will never know they existed.

The methodological problem is that the group being monitored is not representative of the population you want to know about. We saw this in late April/early May of 2009, when the H1N1 flu outbreak began in Mexico City. The Mexican surveillance was hospital-based, and found a high CFR. At about the same time, California was reporting cases through active surveillance that looked for people who were sick, whether they sought medical help or not. The California system reported a CFR for H1N1 that was only a fraction of what was coming out of Mexico, and that caused a huge uproar in the media.
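The two-surveillance-systems scenario can be sketched numerically. The counts below are made up purely to show the mechanism; they are not the actual 2009 H1N1 figures from Mexico or California.

```python
# Hypothetical outbreak (invented counts, not 2009 H1N1 data):
# severe cases seek hospital care; mild cases stay home.
severe = {"count": 200, "deaths": 20}
mild = {"count": 1800, "deaths": 2}

# Hospital-based surveillance sees only the severe (hospitalized) cases.
hospital_cfr = severe["deaths"] / severe["count"]

# Active surveillance finds sick people whether or not they sought care.
active_cfr = (severe["deaths"] + mild["deaths"]) / (severe["count"] + mild["count"])

print(f"Hospital-based CFR: {hospital_cfr:.1%}")       # 10.0%
print(f"Active-surveillance CFR: {active_cfr:.1%}")    # 1.1%
```

Both numbers are “correct” for the group each system actually watched; they just answer different questions, which is exactly the point made below.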

Both monitoring systems were correct; they were just looking at different underlying ‘at risk’ populations, asking different questions. People and political decision makers were really confused, however, and some of them made dumb choices as a result.

But it just comes down to being really careful in how you look at these things: The group you decide to study just tells you about that group, and only that group.  If you really want to talk about a different group, then you need to look at that other group instead.
