What size of difference between treatments is important

This is perhaps the key question to be addressed in sample size calculations but also the one for which there are fewest rules and generally accepted guidelines. It is necessary to consider two aspects:

(1) the size of difference that might reasonably be expected with the experimental treatment,

(2) the smallest difference thought clinically worthwhile.

Both are difficult to quantify; the 'expected' difference is always unknown (or you would not be doing the trial), while the 'clinically worthwhile' difference requires a complex assessment of the pros and cons of an experimental treatment to be balanced and condensed into a single figure, one which may well change during the course of a trial as experience is gained. It is, though, necessary to consider both issues. In general, one should aim to detect the smaller of (1) and (2); however, if (1) is much smaller than (2), perhaps the trial should not be done?

In the 1970s and early 1980s many cancer trials focused more on (1), but overestimated it enormously, typically looking for differences of 20 per cent or more in long-term survival rates. As yet, we are aware of no systematic review of randomized trials in solid tumours that has shown an absolute survival improvement of more than 10 per cent. While this does not mean that future treatments will not prove more successful, it does suggest that scepticism is valuable; the targeted size of difference to detect has the greatest impact of all on the required sample size, as Fig. 5.4 illustrates.

In fact, as a rule of thumb, an inverse square law applies such that as the size of difference you wish to detect halves, the number of patients required to detect it (with the same error rates) increases four-fold. Thus, in the example above, a trial aiming to detect a 20 per cent absolute difference requires only around 250 patients, while one aiming to detect a 10 per cent difference would require around 1000, with 4000 for a 5 per cent difference.
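The figures quoted above can be reproduced with the standard normal-approximation formula for comparing two proportions. The sketch below is illustrative only, not a substitute for a proper trial design; the function name `n_per_arm` is ours, and it assumes the conditions noted under Fig. 5.4 (control group survival of 50 per cent, 90 per cent power, 5 per cent two-sided significance).

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p_control, p_experimental, alpha=0.05, power=0.90):
    """Approximate patients needed per arm to detect a difference between
    two proportions (simple normal approximation; an illustrative sketch)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance level
    z_beta = z(power)            # desired power
    variance = (p_control * (1 - p_control)
                + p_experimental * (1 - p_experimental))
    diff = p_experimental - p_control
    return ceil((z_alpha + z_beta) ** 2 * variance / diff ** 2)

# Control-arm 5-year survival of 50 per cent, as assumed in Fig. 5.4:
for diff in (0.20, 0.10, 0.05):
    total = 2 * n_per_arm(0.50, 0.50 + diff)
    print(f"{diff:.0%} absolute difference: ~{total} patients in total")
```

Running this gives totals of roughly 240, 1030 and 4180 patients, consistent with the "around 250, 1000 and 4000" quoted above and with the approximate four-fold increase each time the target difference halves.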

Fig. 5.4 Number of patients required to detect different treatment effect sizes (x-axis: absolute difference in 5-year survival, %).

Note: Assumes control group survival rate of 50 per cent, 90 per cent power and 5 per cent significance level (2-sided).
