Below is an excerpt from an interesting article on the NIH website. The article was written in 2008 and discusses the difference between absolute risk and relative risk. It states that, when judging the usefulness of a treatment, it is the absolute risk reduction that should be considered more than the relative risk reduction. Here is the excerpt, which explains each term and describes absolute risk reduction as "the most useful way of presenting research results to help your decision making":
"How do you interpret the results of a randomised controlled trial? A common measure of a treatment is to look at the frequency of bad outcomes of a disease in the group being treated compared with those who were not treated. For instance, supposing that a well-designed randomised controlled trial in children with a particular disease found that 20 per cent of the control group developed bad outcomes, compared with only 12 per cent of those receiving treatment. Should you agree to give this treatment to your child? Without knowing more about the adverse effects of the therapy, it appears to reduce some of the bad outcomes of the disease. But is its effect meaningful?
This is where you need to consider the risk of treatment versus no treatment. In healthcare, risk refers to the probability of a bad outcome in people with the disease.
Absolute risk reduction (ARR) – also called risk difference (RD) – is the most useful way of presenting research results to help your decision-making. In this example, the ARR is 8 per cent (20 per cent - 12 per cent = 8 per cent). This means that, if 100 children were treated, 8 would be prevented from developing bad outcomes. Another way of expressing this is the number needed to treat (NNT). If 8 children out of 100 benefit from treatment, the NNT for one child to benefit is about 13 (100 ÷ 8 = 12.5)."
Here is the link:
https://www.ncbi.nlm.nih.gov/books/NBK63647/
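To make the arithmetic concrete, here is a minimal sketch in Python of how the ARR and NNT in the excerpt fall out of the two event rates (the 20 per cent and 12 per cent figures are taken straight from the quoted example):

# Event rates from the excerpt: 20% bad outcomes in the control group,
# 12% in the treated group.
control_event_rate = 0.20
treated_event_rate = 0.12

# Absolute risk reduction (risk difference): control rate minus treated rate.
arr = control_event_rate - treated_event_rate   # 0.08, i.e. 8 per cent

# Number needed to treat: how many children must be treated for one to benefit.
nnt = 1 / arr                                    # 12.5, i.e. about 13

print(f"ARR = {arr:.0%}, NNT = {nnt:.1f}")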
The reason I wound up reading this article is that someone sent me an article regarding the efficacy of the COVID vaccines, which pointed out that the 95% figure we have all heard is the relative risk reduction, not the absolute risk reduction. That article states that the relative risk reduction of the COVID shots ranges anywhere from 67% (AstraZeneca/Oxford and Johnson & Johnson) to 94-95% (Moderna and Pfizer/BioNTech). It further states that the absolute risk reduction ranges from .084% (Pfizer) to 1.2-1.3% (AstraZeneca/Oxford, Moderna, Pfizer/BioNTech).
Now, I'm not sure what to make of all of this. From what I recall, the vaccine manufacturers report the relative risk reduction figures (their calculations are all over the Internet and should not be tough to verify). It should not be hard to calculate the absolute risk reduction either - the formula is given above: the event rate in the control group minus the event rate in the treated group. If the absolute risk reduction figures above are accurate, why is the vaccine being pushed for those who are not in high-risk categories?
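For what it's worth, here is a sketch of how both figures come out of the same trial data. The case counts below are purely illustrative (roughly the order of magnitude of a large vaccine trial, not taken from any specific publication), just to show why a 95% relative reduction can coexist with an absolute reduction under 1% when the event rate in the placebo arm over the trial period is low:

# Hypothetical trial counts, for illustration only (not from any specific study).
placebo_n, placebo_cases = 20_000, 160   # ~0.8% of the placebo arm infected
vaccine_n, vaccine_cases = 20_000, 8     # ~0.04% of the vaccine arm infected

placebo_rate = placebo_cases / placebo_n
vaccine_rate = vaccine_cases / vaccine_n

# Relative risk reduction: the headline "efficacy" figure.
rrr = 1 - vaccine_rate / placebo_rate    # 0.95 -> 95%

# Absolute risk reduction: the same data, expressed as a risk difference.
arr = placebo_rate - vaccine_rate        # 0.0076 -> 0.76%

# Number needed to vaccinate to prevent one case over the trial period.
nnv = 1 / arr                            # ~132

print(f"RRR = {rrr:.0%}, ARR = {arr:.2%}, NNV = {nnv:.0f}")

If I'm reading this right, the small ARR mostly reflects how few people in the placebo arm were infected during the limited trial window, so I'd be curious whether that is the whole story.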
MiD and XU_Lou - any thoughts?
Principal