Background:
Several methods exist for bias adjustment of meta-analysis results, but no comprehensive comparison of these against non-adjusted methods has been undertaken. We compared 6 bias-adjustment methods with 2 non-adjusted methods to examine how these different methods perform.
Methods:
We re-analyzed a meta-analysis that included 10 randomized controlled trials. Methodological quality information was incorporated into the meta-analytical estimates using 2 data-based methods: i) Welton’s data-based approach (DB) and ii) Doi’s quality effects model (QE); and 4 opinion-informed methods: i) the opinion-based approach (OB), ii) opinion-based distributions combined statistically with data-based distributions (O+DB), iii) numerical opinions informed by data-based distributions (OID [num]), and iv) opinions obtained by selecting areas from data-based distributions (OID [select]). The results of these 6 methods were compared with those of 2 unadjusted models: i) the DerSimonian-Laird random effects model and ii) Doi’s inverse variance heterogeneity (IVhet) model.
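To make the contrast between the 2 unadjusted models concrete, the following is a minimal sketch of how each pools study effects. The effect sizes and variances below are hypothetical illustration data, not the trial data from this re-analysis; the formulas are the standard DerSimonian-Laird and IVhet (Doi et al.) estimators.

```python
import math

# Hypothetical effect sizes (e.g., log odds ratios) and within-study
# variances -- illustration only, not the 10 trials analyzed here.
y = [0.12, -0.30, 0.25, 0.05, -0.10]
v = [0.04, 0.09, 0.06, 0.02, 0.05]

def dl_tau2(y, v):
    """DerSimonian-Laird moment estimator of between-study variance tau^2."""
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    return max(0.0, (q - (len(y) - 1)) / c)  # truncated at zero

def dl_pool(y, v):
    """Random effects pooled estimate: weights 1/(v_i + tau^2)."""
    tau2 = dl_tau2(y, v)
    w = [1.0 / (vi + tau2) for vi in v]
    est = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return est, se

def ivhet_pool(y, v):
    """IVhet: fixed-effect (1/v_i) weights for the point estimate, but a
    variance that still carries tau^2, giving wider intervals under
    heterogeneity without shifting weight toward small studies."""
    tau2 = dl_tau2(y, v)
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    est = sum(wi * yi for wi, yi in zip(w, y)) / sw
    var = sum((wi / sw) ** 2 * (vi + tau2) for wi, vi in zip(w, v))
    return est, math.sqrt(var)

print("DL:   ", dl_pool(y, v))
print("IVhet:", ivhet_pool(y, v))
```

The key design difference is that IVhet keeps the inverse-variance point estimate and moves heterogeneity entirely into the variance term, whereas the random effects model redistributes the weights themselves.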
Results:
The 4 opinion-informed methods returned estimates similar to those of the random effects model, but with wider uncertainty. The DB and QE methods returned results that differed from the random effects model and aligned with those of the IVhet model, with some minor downward bias adjustment.
Conclusion:
Opinion-informed methods appear merely to add uncertainty rather than adjust for bias.