Include all randomized trials

It is widely acknowledged that RCTs with statistically significant results are more likely to be published than those with non-significant results (publication bias). This arises both from editorial policy and from investigators themselves, who often do not submit negative or inconclusive results for publication [33]. As a consequence of selective publication, the medical literature can give a skewed or biased picture of the evidence on a particular question. Thus, to obtain a balanced view, it is important that all the randomized evidence is considered, irrespective of whether or not it is published.

Some would argue that unpublished trials should not be included in a systematic review because they have not been subject to peer review and are therefore unreliable. However, for IPD reviews, this criticism can be countered by obtaining the trial protocol and by careful checking of the IPD supplied by the trialist. In fact, checking IPD can be considerably more rigorous than anything possible during peer review of a manuscript. Checking unpublished summary data provided by trialists may be more difficult and will depend on the level of detail of the information supplied. Care must therefore be taken with this type of data, which has neither the benefit of peer review nor the extensive data checking that is possible with IPD. It should also be noted that publication of a well-written paper in a high-profile journal does not necessarily guarantee the quality of the trial data.
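To illustrate what such checking can involve, the sketch below runs a few typical integrity checks on supplied IPD: duplicate patient identifiers, randomization dates outside the stated accrual period, missing treatment allocations and impossible follow-up times. The column names and accrual dates are illustrative assumptions, not a standard format.

```python
import pandas as pd

def basic_ipd_checks(ipd: pd.DataFrame) -> list[str]:
    """Run simple integrity checks on trial IPD.

    Assumes columns 'patient_id', 'rand_date', 'arm' and 'event_time';
    these names are hypothetical, chosen for illustration.
    """
    problems = []

    # Duplicate patient identifiers may indicate double data entry.
    dups = ipd["patient_id"].duplicated().sum()
    if dups:
        problems.append(f"{dups} duplicated patient IDs")

    # Randomization dates outside the accrual period stated in the protocol
    # (dates here are invented for the example).
    accrual_start, accrual_end = "1995-01-01", "1999-12-31"
    dates = pd.to_datetime(ipd["rand_date"], errors="coerce")
    bad_dates = ((dates < accrual_start) | (dates > accrual_end)).sum()
    if bad_dates:
        problems.append(f"{bad_dates} randomization dates outside accrual period")

    # Every patient should have a treatment allocation.
    missing_arm = ipd["arm"].isna().sum()
    if missing_arm:
        problems.append(f"{missing_arm} patients with no allocated arm")

    # Negative follow-up times are impossible.
    neg_fu = (ipd["event_time"] < 0).sum()
    if neg_fu:
        problems.append(f"{neg_fu} negative follow-up times")

    return problems
```

Checks like these, run against the protocol, go well beyond what a journal referee can do from a manuscript alone.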

Any meta-analysis that relies on the published literature alone will always be at risk of bias towards the positive, which will be reflected in the review and, potentially, could lead to unjustified conclusions and to inappropriate decisions about patient care, health policy and future clinical research. To deal with this, even when a researcher is unable to obtain or review information from unpublished sources, they should identify and list as many such trials as possible. This gives some indication of the possible effect that these trials might have on an analysis of published trials alone. For example, we would have more confidence in a meta-analysis of published trials that included 5000 patients and lacked 500 patients from unpublished sources than in one that included 2000 patients but was missing a further 1500 patients from unpublished trials.
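The reasoning here is simply about the proportion of the total randomized evidence that is absent. A minimal sketch, using the figures from the example above:

```python
def missing_fraction(published: int, unpublished: int) -> float:
    """Proportion of all randomized patients absent from the analysis."""
    return unpublished / (published + unpublished)

# 5000 published patients with 500 unaccounted for: ~9% of the evidence is missing.
print(f"{missing_fraction(5000, 500):.1%}")   # 9.1%

# 2000 published patients with 1500 unaccounted for: ~43% is missing,
# so this pooled result is far more vulnerable to publication bias.
print(f"{missing_fraction(2000, 1500):.1%}")  # 42.9%
```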

Conversely, an ongoing issue is what to do about trials for which IPD are not available, for whatever reason (e.g. loss or destruction of data, or unwillingness to collaborate). If unavailability is related to the trial results (for example, if trialists are keen to supply data from trials with promising results but reluctant to provide data from those that were less encouraging), then ignoring such trials could bias the results of the IPD analysis. If a large proportion of the data have been obtained, we can be relatively confident of the results, but with less information we need to be suitably circumspect in drawing conclusions. A sensitivity analysis that adds the results of any unavailable trials (as extracted from publications) to the main IPD results, and compares the combined analysis with the IPD-only analysis, can be a useful aid to interpreting the data (Section 11.5.7). As for other types of systematic review, IPD meta-analyses should clearly state which trials were not included and the reasons why.
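A minimal sketch of such a sensitivity analysis, assuming a log hazard ratio and standard error can be extracted from each unavailable trial's publication; all numbers below are invented for illustration, and the fixed-effect inverse-variance pool is just one reasonable choice of model:

```python
import math

def pool_fixed_effect(estimates, ses):
    """Inverse-variance fixed-effect pool of log hazard ratios."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    return pooled, math.sqrt(1.0 / sum(weights))

# Trial-level log hazard ratios and standard errors (illustrative values).
ipd_est, ipd_se = [-0.25, -0.10, -0.30], [0.10, 0.15, 0.12]  # trials supplying IPD
pub_est, pub_se = [-0.40], [0.20]                            # unavailable trial, from its publication

for label, (est, se) in [
    ("IPD only", pool_fixed_effect(ipd_est, ipd_se)),
    ("IPD + published estimates", pool_fixed_effect(ipd_est + pub_est, ipd_se + pub_se)),
]:
    lo, hi = est - 1.96 * se, est + 1.96 * se
    print(f"{label}: HR {math.exp(est):.2f} (95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

If the two pooled results agree, the unavailable trials are unlikely to overturn the IPD conclusions; a marked shift would warrant more cautious interpretation.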
