Reducing the effect of random error

In most areas of medicine, major breakthroughs are rare and we may generally anticipate that even the best new treatments will result in only modest improvements in outcome. Such benefits may, of course, be extremely important to individuals and could, in common diseases, have considerable impact on public health. Detecting such modest benefits reliably, however, requires large numbers of patients and events. A typical two-arm trial with 200-300 events is capable only of detecting absolute benefits in excess of 15 per cent, in other words improving outcome from 50 to 65 per cent or more. To detect a 5 per cent difference would require more than 2000 events (see also Section 5.4). Unfortunately, achieving the large numbers required is not always straightforward, and many individual RCTs are too small to detect moderate but potentially worthwhile differences between treatments. Although there have been some notable successes in conducting large-scale trials in cancer (e.g. [10,11]), they remain very much in the minority, and it seems likely that for many practical and political reasons, trials involving thousands of patients will be infeasible in many circumstances.

Thus, if the underlying true effect is modest, then in any group of trials addressing similar questions, each with a few hundred events, a few trials may by chance alone have demonstrated statistically significant positive or negative results, but most will be inconclusive. However, combining the results of these trials in a meta-analysis might give sufficient statistical power to answer the questions reliably. The increased numbers of patients reduce the random error and narrow the confidence intervals around the result. This provides a more reliable and precise estimate of the size of any observed effect, and enables us to distinguish more easily between a real and a null effect. Furthermore, narrowing the confidence intervals reduces the range of plausible treatment effects and so can aid clinical decision making considerably, especially where treatment effects are modest.
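The relationship between the size of the targeted benefit and the number of events required can be sketched with Schoenfeld's standard approximation for a two-arm survival trial, d = 4(z_a/2 + z_b)^2 / (ln HR)^2. The figures below are illustrative only, assuming 90 per cent power, a 5 per cent two-sided significance level, equal allocation and proportional hazards; they are not taken from the text, though they are broadly consistent with it.

```python
from math import log
from statistics import NormalDist

def events_needed(s_control, s_new, alpha=0.05, power=0.9):
    """Approximate total events for a two-arm trial (equal allocation),
    using Schoenfeld's formula: d = 4 (z_a/2 + z_b)^2 / (ln HR)^2.
    The hazard ratio is derived from the two survival proportions
    under a proportional-hazards assumption."""
    hr = log(s_new) / log(s_control)
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return 4 * (z_alpha + z_beta) ** 2 / log(hr) ** 2

# 50 -> 65 per cent survival (15-point absolute benefit):
# on the order of a couple of hundred events
print(round(events_needed(0.50, 0.65)))
# 50 -> 55 per cent survival (5-point absolute benefit):
# on the order of 2000 events
print(round(events_needed(0.50, 0.55)))
```

Halving the target benefit roughly quadruples the events required, which is why trials powered for modest effects must be so large.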
For example, given an estimated hazard ratio of 0.85 with 95 per cent confidence interval 0.7-1.05, we would understandably be cautious about adopting the new treatment. Although the estimate suggests a 15 per cent relative benefit, the confidence interval is consistent with as much as a 30 per cent benefit, but also with a 5 per cent detriment. If, however, increased numbers in the trials narrow the interval around the same estimated hazard ratio of 0.85 to 0.8-0.9, then we might adopt the new treatment more readily, as the interval now spans a range from a 20 per cent benefit to a 10 per cent benefit. If the observed treatment effect is large, we are less likely to need such tight confidence intervals before adopting the treatment. Given a hazard ratio of 0.5 in favour of the new treatment with 95 per cent confidence interval 0.3-0.8, in most circumstances we would adopt the new treatment on this estimate, and tightening the confidence interval would not affect clinical decision making. One use of meta-analysis is therefore to narrow confidence intervals such that similar clinical decisions would be reached whether based on the hazard ratio at the lower or the upper confidence limit.
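The way pooling narrows confidence intervals can be sketched with a standard inverse-variance (fixed-effect) calculation on the log hazard ratio scale. The five trials below are entirely hypothetical: each is individually inconclusive, with a 95 per cent confidence interval crossing a hazard ratio of 1, yet the pooled interval excludes 1.

```python
from math import exp, log, sqrt

def pool_fixed_effect(trials):
    """Inverse-variance (fixed-effect) pooling of log hazard ratios.
    Each trial is a (hazard ratio, standard error of log HR) pair."""
    weights = [1 / se ** 2 for _, se in trials]
    pooled_loghr = sum(w * log(hr) for (hr, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))  # pooled SE shrinks as weights accumulate
    lo = exp(pooled_loghr - 1.96 * pooled_se)
    hi = exp(pooled_loghr + 1.96 * pooled_se)
    return exp(pooled_loghr), lo, hi

# Five hypothetical trials, each with a 95 per cent CI crossing HR = 1
trials = [(0.85, 0.12), (0.80, 0.15), (0.90, 0.10), (0.82, 0.14), (0.88, 0.11)]
hr, lo, hi = pool_fixed_effect(trials)
print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Each individual trial is consistent with anything from a substantial benefit to a small detriment, but the pooled estimate, carrying the combined weight of all five, sits in a much narrower range that supports a clearer clinical decision.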
