Publication bias and related types of small-study effects threaten the validity of systematic reviews. The existence of small-study effects has been demonstrated in empirical studies, and they are commonly diagnosed graphically by inspecting the funnel plot. Although observed funnel plot asymmetry cannot easily be attributed to a specific cause, a number of tests based on funnel plot asymmetry have been proposed. Beyond this wide range of asymmetry tests, several methods exist for adjusting treatment effect estimates for these biases. In this article, we consider the trim-and-fill method, the Copas selection model, and more recent regression-based approaches. The methods are illustrated using a meta-analysis from the literature, compared in a simulation study based on binary response data, and applied to a large set of meta-analyses. Some fundamental differences between the approaches are discussed. An assumption common to the trim-and-fill method and the Copas selection model is that the small-study effect is caused by selection. The trim-and-fill method corresponds to an unknown model implicitly defined by the symmetry assumption, whereas the Copas selection model is an explicit parametric statistical model; however, it requires a sensitivity analysis. Regression-based approaches are easier to implement and do not rely on a specific selection model. Both the simulations and the applications suggest that, in the presence of strong selection, the trim-and-fill method and the Copas selection model may fail to fully eliminate bias, whereas regression-based approaches appear to be a promising alternative.
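To make the regression-based idea concrete, the following is a minimal illustrative sketch, not the exact method evaluated in this article: an Egger-type weighted regression of study effect estimates (log odds ratios) on their standard errors, where the slope measures funnel plot asymmetry and the fitted value at a standard error of zero serves as a simple adjusted effect estimate. The data and variable names are hypothetical and chosen only for illustration.

```python
# Illustrative sketch (assumed setup, not the article's exact method):
# Egger-type weighted regression for funnel plot asymmetry and a
# regression-based adjusted estimate, using hypothetical study data.
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study log odds ratios and their standard errors
y = np.array([0.55, 0.40, 0.62, 0.15, 0.80, 0.05, 0.30])
se = np.array([0.10, 0.15, 0.25, 0.20, 0.35, 0.12, 0.30])

X = sm.add_constant(se)          # design matrix: intercept and standard error
w = 1.0 / se**2                  # inverse-variance weights
fit = sm.WLS(y, X, weights=w).fit()

intercept, slope = fit.params
# Slope: evidence of small-study effects (asymmetry); intercept: effect
# extrapolated to an infinitely large study (standard error -> 0).
print(f"asymmetry slope: {slope:.3f} (p = {fit.pvalues[1]:.3f})")
print(f"adjusted effect at SE -> 0: {intercept:.3f}")
```

In this parameterization, a slope far from zero indicates that smaller studies report systematically different effects, and the intercept is the regression-based adjusted estimate referred to above.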