Randomized trials of complex public health interventions generally aim to identify what works, accrediting specific intervention 'products' as effective. This approach often fails to give sufficient consideration to how intervention components interact with each other and with local context. 'Realists' argue that trials misunderstand the scientific method, offer only a 'successionist' approach to causation that brackets out the complexity of social causation, and fail to ask which interventions work, for whom and under what circumstances. We counter-argue that trials are useful in evaluating social interventions because randomized control groups take proper account of, rather than bracket out, the complexity of social causation. Nonetheless, realists are right to stress understanding of 'what works, for whom and under what circumstances' and to argue for the importance of theorizing and empirically examining underlying mechanisms. We propose that these aims can be (and sometimes already are) pursued within randomized trials. Such 'realist' trials should aim to: examine the effects of intervention components separately and in combination, for example using multi-arm studies and factorial trials; explore mechanisms of change, for example by analysing how pathway variables mediate intervention effects; use multiple trials across contexts to test how intervention effects vary with context; draw on complementary qualitative and quantitative data; and be oriented towards building and validating 'mid-level' program theories that set out how interventions interact with context to produce outcomes. This last aim resonates with recent suggestions that, in delivering truly 'complex' interventions, fidelity matters not so much in terms of precise activities but, rather, key intervention 'processes' and 'functions'.
Realist trials would additionally assess the validity of program theory, rather than only examining 'what works', to better inform policy and practice in the long term.