Is the result or conclusion falsely negative?

A trial report may have described all the essential aspects in full detail, and the result is reported as 'negative' or 'not significant.' Is that all you need to know? Almost certainly not. Firstly, the authors may simply be stating that the trial failed to find statistically significant the size of difference it was designed to detect. Was that size of difference reasonable: would a smaller difference still be clinically relevant? If so, the trial may simply have been underpowered to detect the size of difference which some would consider important, and an appropriate conclusion may be that a larger trial is needed. However, such a conclusion, like any other, needs to consider the weight of evidence. How close was the observed difference to the desired difference? As discussed previously, this kind of judgement is much easier to make if an estimate of the treatment effect is given.
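
To make this concrete, the following sketch (with entirely hypothetical values for the effect size, standard deviation and sample size) shows how a trial designed with 90 per cent power to detect one difference can be badly underpowered for a smaller, but still clinically relevant, difference. It uses the standard normal approximation for a two-arm comparison of means.

```python
# A minimal sketch, assuming hypothetical trial parameters, of how power
# falls when the true difference is smaller than the design difference.
from scipy.stats import norm

def power_two_sample(delta, sigma, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test to detect a
    true difference `delta` with common standard deviation `sigma`."""
    se = sigma * (2.0 / n_per_arm) ** 0.5   # SE of the difference in means
    z_crit = norm.ppf(1 - alpha / 2)        # two-sided critical value
    return norm.cdf(delta / se - z_crit)    # P(reject H0 | true diff = delta)

# A trial designed for a difference of 10 units (sigma = 20, n = 85 per arm):
print(power_two_sample(10, 20, 85))   # ~0.90 -- the design target
# The same trial's power for a smaller, still clinically relevant difference:
print(power_two_sample(5, 20, 85))    # ~0.37 -- badly underpowered
```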

Secondly, recall that the power of a study (1 − β) is the probability of detecting a specified difference at a specified significance level, if that difference really exists. At the commonly used power level of 90 per cent, each trial of a truly effective treatment has a one in ten chance of being falsely negative; in other words, the play of chance alone reduces the observed difference below the level required for significance. Is it possible to tell whether a single, adequately powered trial represents a false negative? Sadly no, or at least not with 100 per cent confidence, which is why it is so important to design trials with the highest possible power. Some guidance is given by considering the trial's 'internal and external validity.' For example, are the results with respect to the main outcome measure consistent with results for other outcome measures within the same trial? For a study with survival as an outcome measure, are endpoints such as recurrence or response available, and do they show the same trends? This may give an indication of internal validity. If other relevant trials are available, a comparison with them will indicate whether the result observed in a single trial is in some sense an outlier.
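
The one-in-ten figure can be illustrated by simulation. The sketch below (again with hypothetical effect size, standard deviation and sample size, chosen to give roughly 90 per cent power) repeatedly generates trials of a truly effective treatment and counts how often the result comes out 'not significant.'

```python
# A minimal simulation, assuming hypothetical trial parameters, showing
# that a trial with 90 per cent power is falsely negative in roughly
# one run in ten even though the treatment effect is real.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
delta, sigma, n = 10.0, 20.0, 86   # ~90% power for this configuration
n_trials = 10_000

false_negatives = 0
for _ in range(n_trials):
    control = rng.normal(0.0, sigma, n)
    treated = rng.normal(delta, sigma, n)
    _, p = ttest_ind(treated, control)
    if p >= 0.05:                   # 'not significant' despite a real effect
        false_negatives += 1

print(false_negatives / n_trials)   # ~0.10, i.e. about one in ten
```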

One final danger with a 'negative' trial is that it may be reported as having demonstrated that the two treatments are equivalent. Such a conclusion needs very careful investigation: a non-statistically-significant difference alone is insufficient evidence of equivalence. One needs to consider the confidence interval for the treatment effect, and determine whether it truly excludes those differences which would lead one to conclude that there is, in fact, a clinically relevant difference between treatments. If the 95 per cent CI includes such differences, then the estimated treatment difference in the trial does not differ significantly from these clinically relevant differences (at the 5 per cent level), and a conclusion of 'equivalence' would be inappropriate.
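
This check can be written down directly. The sketch below (with a hypothetical observed difference, standard error and equivalence margin) shows a trial whose difference is not statistically significant, yet whose 95 per cent CI still includes clinically relevant differences, so equivalence has not been demonstrated.

```python
# A minimal sketch, assuming hypothetical summary statistics: a
# 'non-significant' result only supports equivalence if the whole
# 95% CI for the treatment effect lies inside the equivalence margin.
from scipy.stats import norm

def ci_supports_equivalence(diff, se, margin, alpha=0.05):
    """True only if the entire (1 - alpha) CI lies within (-margin, +margin)."""
    z = norm.ppf(1 - alpha / 2)
    lo, hi = diff - z * se, diff + z * se
    print(f"95% CI: ({lo:.2f}, {hi:.2f}); equivalence margin: +/-{margin}")
    return -margin < lo and hi < margin

# Observed difference 2.0, SE 3.0, margin of clinical relevance 5.0.
# The difference is not significant (|2.0/3.0| < 1.96), yet the CI
# (-3.88, 7.88) includes differences beyond the margin, so
# 'equivalence' is not demonstrated.
print(ci_supports_equivalence(2.0, 3.0, 5.0))   # False
```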
