Multiple imputation (MI) is now well established as a flexible, general method for the analysis of data sets with missing values. Most implementations assume the missing data are 'missing at random' (MAR), that is, given the observed data, the reason for the missing data does not depend on the unseen data. However, although this is a helpful and simplifying working assumption, it is unlikely to be true in practice. Assessing the sensitivity of the analysis to the MAR assumption is therefore important, but there is very limited MI software for this. Further, the analysis of a data set with values that are not missing at random (NMAR) is complicated by the need to extend the MAR imputation model to include a model for the reason for dropout. Here, we propose a simple alternative. We first impute under MAR and obtain parameter estimates for each imputed data set. The overall NMAR parameter estimate is a weighted average of these estimates, with weights that depend on the assumed degree of departure from MAR. In some settings, this approach gives results that agree closely with joint modelling as the number of imputations increases. In others, it provides ball-park estimates of the results of full NMAR modelling, indicating the extent to which such modelling is necessary and providing a check on its results. We illustrate our approach with a small simulation study and the analysis of data from a trial of interventions to improve the quality of peer review.
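To make the weighting idea concrete, the sketch below illustrates it on a toy problem: a partially observed outcome is imputed many times under MAR, the estimate of interest (here a simple mean) is computed for each imputed data set, and the per-imputation estimates are then re-weighted according to an assumed sensitivity parameter delta. The exponential-tilting form of the weights, the normal imputation model, and all variable names are illustrative assumptions, not the exact algorithm described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one outcome with roughly 30% of values missing (treated as MAR for imputation).
y = rng.normal(loc=10.0, scale=2.0, size=200)
missing = rng.random(200) < 0.3
y_obs = y[~missing]

M = 1000       # number of MAR imputations
delta = -0.2   # assumed degree of departure from MAR (sensitivity parameter)

est = np.empty(M)     # per-imputation estimates of the parameter of interest
log_w = np.empty(M)   # log weights reflecting the assumed NMAR mechanism

for m in range(M):
    # Simple MAR imputation: draw the missing values from a normal
    # approximation to the observed-data predictive distribution.
    imputed = rng.normal(y_obs.mean(), y_obs.std(ddof=1), size=missing.sum())
    y_complete = y.copy()
    y_complete[missing] = imputed

    est[m] = y_complete.mean()
    # One simple weighting choice (an assumption for illustration): weight each
    # imputation by exp(delta * sum of its imputed values), so imputations more
    # compatible with the assumed NMAR mechanism contribute more.
    log_w[m] = delta * imputed.sum()

# Normalise the weights on the log scale for numerical stability.
w = np.exp(log_w - log_w.max())
w /= w.sum()

print("MAR estimate (equal weights):", est.mean())
print("NMAR estimate (weighted):    ", np.dot(w, est))
```

Setting delta to zero recovers the usual equally weighted MAR combination, while moving delta away from zero traces out how the estimate changes under increasing assumed departures from MAR.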