Relative risk

From HandWiki
Short description: Measure of association used in epidemiology
[Figure] Illustration of two groups, one exposed to treatment and one unexposed: the group exposed to treatment (left) has half the risk (RR = 4/8 = 0.5) of an adverse outcome (black) compared to the unexposed group (right).

The relative risk (RR) or risk ratio is the ratio of the probability of an outcome in an exposed group to the probability of an outcome in an unexposed group. Together with risk difference and odds ratio, relative risk measures the association between the exposure and the outcome.[1]

Statistical use and meaning

Relative risk is used in the statistical analysis of the data of ecological, cohort, medical and intervention studies, to estimate the strength of the association between exposures (treatments or risk factors) and outcomes.[2] Mathematically, it is the incidence rate of the outcome in the exposed group, [math]\displaystyle{ I_e }[/math], divided by the rate in the unexposed group, [math]\displaystyle{ I_u }[/math].[3] As such, it is used to compare the risk of an adverse outcome when receiving a medical treatment versus no treatment (or placebo), or for environmental risk factors. For example, in a study examining the effect of the drug apixaban on the occurrence of thromboembolism, 8.8% of placebo-treated patients experienced the disease, but only 1.7% of patients treated with the drug did, so the relative risk is 0.19 (1.7/8.8): patients receiving apixaban had 19% of the disease risk of patients receiving the placebo.[4] In this case, apixaban is a protective factor rather than a risk factor, because it reduces the risk of disease.
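The apixaban calculation above can be reproduced in a few lines of Python (a minimal sketch; the event rates 1.7% and 8.8% are the figures quoted from the cited study, not its raw data):

```python
# Relative risk of thromboembolism for apixaban vs. placebo,
# using the event rates quoted above (1.7% vs. 8.8%).
risk_treated = 0.017   # proportion of apixaban patients with the outcome
risk_placebo = 0.088   # proportion of placebo patients with the outcome

rr = risk_treated / risk_placebo
print(f"RR = {rr:.2f}")  # RR = 0.19: treated patients have 19% of the placebo risk
```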

Assuming a causal effect between the exposure and the outcome, values of relative risk can be interpreted as follows:[2]

  • RR = 1 means that exposure does not affect the outcome
  • RR < 1 means that the risk of the outcome is decreased by the exposure, which is a "protective factor"
  • RR > 1 means that the risk of the outcome is increased by the exposure, which is a "risk factor"

As always, correlation does not imply causation; the causation could be reversed, or the exposure and the outcome could both be caused by a common confounding variable. The relative risk of having cancer when in the hospital versus at home, for example, would be greater than 1, but that is because having cancer causes people to go to the hospital.

Usage in reporting

Relative risk is commonly used to present the results of randomized controlled trials.[5] This can be problematic if the relative risk is presented without absolute measures, such as absolute risk or risk difference.[6] In cases where the base rate of the outcome is low, large or small values of relative risk may not translate into large absolute effects, and the importance of the effects to public health can be overestimated. Conversely, in cases where the base rate of the outcome is high, values of the relative risk close to 1 may still correspond to a substantial absolute effect, which can be underestimated. Thus, presentation of both absolute and relative measures is recommended.[7]

Inference

Relative risk can be estimated from a 2×2 contingency table:

                    Intervention (I)   Control (C)
  Events (E)             IE                CE
  Non-events (N)         IN                CN

The point estimate of the relative risk is

[math]\displaystyle{ RR = \frac{IE/(IE + IN)}{CE/(CE + CN)} = \frac{IE(CE + CN)}{CE(IE + IN)}. }[/math]

The sampling distribution of [math]\displaystyle{ \log(RR) }[/math] is closer to normal than the distribution of RR,[8] with standard error

[math]\displaystyle{ SE(\log(RR)) = \sqrt{\frac{IN}{IE(IE + IN)} + \frac{CN}{CE(CE + CN)}}. }[/math]

The [math]\displaystyle{ 1 - \alpha }[/math] confidence interval for the [math]\displaystyle{ \log(RR) }[/math] is then

[math]\displaystyle{ CI_{1 - \alpha}(\log(RR)) = \log(RR)\pm SE(\log(RR))\times z_\alpha, }[/math]

where [math]\displaystyle{ z_\alpha }[/math] is the standard score for the chosen level of significance.[9][10] To find the confidence interval around the RR itself, the two bounds of the above confidence interval can be exponentiated.[9]
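The point estimate and confidence interval above can be computed directly from the four cell counts; a minimal Python sketch (the function name is illustrative, and the counts used in the call are those of the numerical example given later in the article):

```python
import math

def relative_risk_ci(ie, in_, ce, cn, z=1.96):
    """Point estimate and confidence interval for the relative risk
    from a 2x2 contingency table; z = 1.96 gives a 95% interval."""
    rr = (ie / (ie + in_)) / (ce / (ce + cn))
    # Standard error of log(RR), per the formula above
    se = math.sqrt(in_ / (ie * (ie + in_)) + cn / (ce * (ce + cn)))
    log_rr = math.log(rr)
    # Exponentiate the bounds of the log-scale interval
    return rr, math.exp(log_rr - z * se), math.exp(log_rr + z * se)

rr, lo, hi = relative_risk_ci(15, 135, 100, 150)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```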

In regression models, the exposure is typically included as an indicator variable along with other factors that may affect risk. The relative risk is usually reported as calculated for the mean of the sample values of the explanatory variables.[citation needed]

Comparison to the odds ratio

The relative risk is different from the odds ratio, although the odds ratio asymptotically approaches the relative risk for small probabilities of outcomes. If IE is substantially smaller than IN, then IE/(IE + IN) [math]\displaystyle{ \scriptstyle\approx }[/math] IE/IN. Similarly, if CE is much smaller than CN, then CE/(CN + CE) [math]\displaystyle{ \scriptstyle\approx }[/math] CE/CN. Thus, under the rare disease assumption

[math]\displaystyle{ RR = \frac{IE(CE + CN)}{CE(IE + IN)} \approx \frac{IE \cdot CN}{IN \cdot CE} = OR. }[/math]

In practice the odds ratio is commonly used for case-control studies, as the relative risk cannot be estimated.[1]
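The rare disease approximation can be illustrated numerically (a sketch with illustrative cell counts, not data from any cited study):

```python
# Rare outcome: events are far fewer than non-events in both groups
ie, in_ = 10, 9990    # intervention: events, non-events
ce, cn = 20, 9980     # control: events, non-events

rr = (ie / (ie + in_)) / (ce / (ce + cn))
odds_ratio = (ie / in_) / (ce / cn)
print(f"RR = {rr:.4f}, OR = {odds_ratio:.4f}")  # nearly identical
```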

In fact, the odds ratio is much more commonly used in statistics, since logistic regression, often associated with clinical trials, works with the log of the odds ratio, not the relative risk. Because the (natural) log of the odds of a record is estimated as a linear function of the explanatory variables, in a logistic regression model where the outcome depends on drug and age, the estimated odds ratio associated with the type of treatment would be the same for 70-year-olds and 60-year-olds, although the relative risk might differ significantly.[citation needed]

Since relative risk is a more intuitive measure of effectiveness, the distinction is especially important in cases of medium to high probabilities. If action A carries a risk of 99.9% and action B a risk of 99.0%, then the relative risk is just over 1, while the odds associated with action A are more than 10 times higher than the odds with B.[citation needed]
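The 99.9% vs. 99.0% comparison above can be checked numerically (a quick sketch):

```python
p_a, p_b = 0.999, 0.990            # risks under actions A and B
rr = p_a / p_b
odds_ratio = (p_a / (1 - p_a)) / (p_b / (1 - p_b))
print(f"RR = {rr:.4f}")            # just over 1 (about 1.009)
print(f"OR = {odds_ratio:.1f}")    # about 10.1: odds differ by an order of magnitude
```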

In statistical modelling, approaches like Poisson regression (for counts of events per unit exposure) have relative risk interpretations: the estimated effect of an explanatory variable is multiplicative on the rate and thus leads to a relative risk. Logistic regression (for binary outcomes, or counts of successes out of a number of trials) must be interpreted in odds-ratio terms: the effect of an explanatory variable is multiplicative on the odds and thus leads to an odds ratio.[citation needed]

Bayesian interpretation

Let disease be denoted by [math]\displaystyle{ D }[/math], absence of disease by [math]\displaystyle{ \neg D }[/math], exposure by [math]\displaystyle{ E }[/math], and no exposure by [math]\displaystyle{ \neg E }[/math]. The relative risk can then be written as

[math]\displaystyle{ RR = \frac {P(D\mid E)}{P(D\mid \neg E)} = \frac {P(E\mid D)/P(\neg E\mid D)}{P(E)/P(\neg E)}. }[/math]

In this way, the relative risk can be interpreted in Bayesian terms as the posterior ratio of the exposure (i.e., after seeing the disease) normalized by the prior ratio of the exposure.[11] If the posterior ratio of exposure is similar to the prior, the relative risk is approximately 1, indicating no association with the disease, since observing the disease did not change beliefs about the exposure. If, on the other hand, the posterior ratio of exposure is smaller or larger than the prior ratio, then the disease has changed the view of the exposure danger, and the magnitude of this change is the relative risk.
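The identity above can be verified from any joint distribution of exposure and disease; a sketch with illustrative (hypothetical) probabilities:

```python
# Joint probabilities P(exposure, disease) for a hypothetical population:
#                  disease D    no disease
p_e_d, p_e_nd   = 0.04,        0.16   # exposed
p_ne_d, p_ne_nd = 0.02,        0.78   # unexposed

# Direct definition: RR = P(D|E) / P(D|not E)
rr_direct = (p_e_d / (p_e_d + p_e_nd)) / (p_ne_d / (p_ne_d + p_ne_nd))

# Bayesian form: posterior exposure ratio / prior exposure ratio
p_d = p_e_d + p_ne_d                              # P(D)
posterior_ratio = (p_e_d / p_d) / (p_ne_d / p_d)  # P(E|D) / P(not E|D)
prior_e = p_e_d + p_e_nd                          # P(E)
prior_ratio = prior_e / (1 - prior_e)             # P(E) / P(not E)
rr_bayes = posterior_ratio / prior_ratio

assert abs(rr_direct - rr_bayes) < 1e-9  # both give the same relative risk
```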

Numerical example

  Example of risk reduction
                       Experimental group (E)   Control group (C)   Total
  Events (E)           EE = 15                  CE = 100            115
  Non-events (N)       EN = 135                 CN = 150            285
  Total subjects (S)   ES = EE + EN = 150       CS = CE + CN = 250  400
  Event rate (ER)      EER = EE / ES = 0.1, or 10%   CER = CE / CS = 0.4, or 40%

  Equation                 Variable                                   Abbr.   Value
  CER − EER                absolute risk reduction                    ARR     0.3, or 30%
  (CER − EER) / CER        relative risk reduction                    RRR     0.75, or 75%
  1 / (CER − EER)          number needed to treat                     NNT     3.33
  EER / CER                risk ratio                                 RR      0.25
  (EE / EN) / (CE / CN)    odds ratio                                 OR      0.167
  (CER − EER) / CER        preventable fraction among the unexposed   PFu     0.75
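The derived quantities in the table can be reproduced directly from the four cell counts; a minimal sketch:

```python
ee, en = 15, 135     # experimental group: events, non-events
ce, cn = 100, 150    # control group: events, non-events

eer = ee / (ee + en)                 # experimental event rate: 0.10
cer = ce / (ce + cn)                 # control event rate: 0.40

arr = cer - eer                      # absolute risk reduction: 0.30
rrr = (cer - eer) / cer              # relative risk reduction: 0.75
nnt = 1 / (cer - eer)                # number needed to treat: ~3.33
rr  = eer / cer                      # risk ratio: 0.25
odds_ratio = (ee / en) / (ce / cn)   # odds ratio: ~0.167
```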

References

  1. "Proportions, odds, and risk". Radiology 230 (1): 12–9. January 2004. doi:10.1148/radiol.2301031028. PMID 14695382.
  2. Carneiro, Ilona; Howard, Natasha (2011). Introduction to Epidemiology (2nd ed.). Maidenhead, Berkshire: Open University Press. p. 27. ISBN 978-0-335-24462-1. OCLC 773348873.
  3. Bruce, Nigel; Pope, Daniel; Stanistreet, Debbi (2017). Quantitative Methods for Health Research: A Practical Interactive Guide to Epidemiology and Statistics (2nd ed.). Hoboken, NJ. p. 199. ISBN 978-1-118-66526-8. OCLC 992438133.
  4. Motulsky, Harvey (2018). Intuitive Biostatistics: A Nonmathematical Guide to Statistical Thinking (4th ed.). New York. p. 266. ISBN 978-0-19-064356-0. OCLC 1006531983.
  5. "Reporting of attributable and relative risks, 1966–97". Lancet 351 (9110): 1179. April 1998. doi:10.1016/s0140-6736(05)79123-6. PMID 9643696.
  6. "Relative risk versus absolute risk: one cannot be interpreted without the other". Nephrology, Dialysis, Transplantation 32 (suppl_2): ii13–ii18. April 2017. doi:10.1093/ndt/gfw465. PMID 28339913.
  7. "CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials". BMJ 340: c869. March 2010. doi:10.1136/bmj.c869. PMID 20332511.
  8. "Standard errors, confidence intervals, and significance tests". StataCorp LLC. https://www.stata.com/support/faqs/stat/2deltameth.html.
  9. Szklo, Moyses; Nieto, F. Javier (2019). Epidemiology: Beyond the Basics (4th ed.). Burlington, Massachusetts: Jones & Bartlett Learning. p. 488. ISBN 9781284116595. OCLC 1019839414.
  10. Katz, D.; Baptista, J.; Azen, S. P.; Pike, M. C. (1978). "Obtaining Confidence Intervals for the Relative Risk in Cohort Studies". Biometrics 34 (3): 469–474. doi:10.2307/2530610.
  11. Armitage, P.; Berry, G.; Matthews, J. N. S., eds. (2002). Statistical Methods in Medical Research (4th ed.). Blackwell Science Ltd. p. 1168. doi:10.1002/9780470773666. ISBN 978-0-470-77366-6.
