Sensitivity analyses

Sensitivity analyses can be done to test the robustness of the main results of the meta-analysis. They are often done by running analyses with and without the inclusion of certain trials and then comparing the results. They can be used to test assumptions made in the meta-analysis, for example decisions about the eligibility of certain trials or types of trial. If the results change substantially depending on whether or not a particular trial, or group of trials, of questionable eligibility is included in the meta-analysis, the decision regarding eligibility will need thoughtful justification, and the final results will need to be treated with suitable circumspection.

When using data extracted from publications, sensitivity analyses can be used to explore the influence of trial quality by carrying out analyses with and without trials of dubious quality. Where the results comprise one or two large trials and a number of smaller trials, sensitivity analyses can also show to what extent the result is driven by the largest trial. Likewise, they can be done to assess the potential impact of publication bias by removing the smallest trials.

Although sensitivity analyses are usually done as described above, in many cases it may be preferable and more informative to look at differences between trials in the form of subset analyses, particularly when examining issues of eligibility. It may also be informative to use the order of trials on the forest plot to explore possible patterns or trends; for example, ordering trials by intended dose may give insight into issues relating to inadequate or adequate dosing.
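To make the "with and without" idea concrete, the sketch below runs a simple leave-one-out sensitivity analysis on a fixed-effect, inverse-variance pooled log odds ratio. The trial names, effect estimates, and standard errors are purely illustrative assumptions, and the fixed-effect model is just one simple pooling choice; a real analysis would use the review's own data and chosen model.

```python
# A minimal leave-one-out sensitivity analysis sketch: pool log odds ratios
# with a fixed-effect inverse-variance model, then re-pool with each trial
# omitted in turn. All trial data below are hypothetical.
import numpy as np

# Hypothetical trials: (name, log odds ratio, standard error)
trials = [
    ("Trial A", -0.35, 0.10),   # large trial
    ("Trial B", -0.20, 0.25),
    ("Trial C", -0.60, 0.30),   # eligibility questionable
    ("Trial D", -0.05, 0.40),   # small trial
]

def pooled_estimate(data):
    """Fixed-effect inverse-variance pooled log odds ratio and its SE."""
    yi = np.array([y for _, y, _ in data])
    wi = 1.0 / np.array([se for _, _, se in data]) ** 2
    pooled = np.sum(wi * yi) / np.sum(wi)
    se = np.sqrt(1.0 / np.sum(wi))
    return pooled, se

overall, overall_se = pooled_estimate(trials)
print(f"All trials: OR = {np.exp(overall):.2f} "
      f"(95% CI {np.exp(overall - 1.96 * overall_se):.2f} to "
      f"{np.exp(overall + 1.96 * overall_se):.2f})")

# Re-run the meta-analysis with each trial removed and compare the results.
for i, (name, _, _) in enumerate(trials):
    subset = trials[:i] + trials[i + 1:]
    est, se = pooled_estimate(subset)
    print(f"Without {name}: OR = {np.exp(est):.2f} "
          f"(95% CI {np.exp(est - 1.96 * se):.2f} to "
          f"{np.exp(est + 1.96 * se):.2f})")
```

The same pattern extends naturally to the other analyses described above: omit the group of trials with questionable eligibility or dubious quality, drop the largest trial, or drop the smallest trials, and compare each pooled result with the overall estimate.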
