But predictive analytics are not as straightforward as they might seem. Any attempt to predict which groups might be at risk of disease, injury or other unwanted outcomes will involve making both correct and incorrect predictions about individuals. Alongside true positives and true negatives, such systems will also produce false positives and false negatives. The trick, of course, is to devise ways of reducing the numbers of false positives and false negatives, but even a fairly accurate predictor can generate a great many of both, particularly when the outcome being predicted is rare. And that raises important practical and ethical questions about how to treat ‘at risk’ groups in which a sizeable proportion of members are not actually at risk. If we could be sure of predicting all and only those children who would be abused or neglected, then we could justify strong, intrusive interventions to stop maltreatment occurring; but we cannot justify doing that to families who have been caught up simply because a predictive system is not very accurate.
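To see why even a reasonably accurate predictor can flood a system with false positives, it helps to work through the arithmetic. The sketch below uses purely illustrative figures; the prevalence, sensitivity and specificity are assumptions, not values taken from any of the studies discussed here.

```python
# Illustrative only: how many families flagged by a screening model are
# actually at risk? All figures below are hypothetical assumptions.

population = 100_000     # families screened
prevalence = 0.03        # assumed: 3% of families will actually maltreat
sensitivity = 0.80       # assumed: model flags 80% of true cases
specificity = 0.90       # assumed: model clears 90% of non-cases

true_cases = population * prevalence
non_cases = population - true_cases

true_positives = sensitivity * true_cases          # at-risk families flagged
false_positives = (1 - specificity) * non_cases    # families wrongly flagged
flagged = true_positives + false_positives

precision = true_positives / flagged               # share of flagged families genuinely at risk
print(f"Families flagged: {flagged:,.0f}")
print(f"Of those, actually at risk: {true_positives:,.0f} ({precision:.0%})")
# With these assumed figures, roughly four out of five flagged families
# are false positives, because the outcome itself is rare.
```

On these assumed numbers the model is right far more often than it is wrong overall, yet around 80 per cent of the families it flags are not in fact at risk, which is precisely the ethical problem described above.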
Studies of risk prediction models for child maltreatment indicate that some can be effective in identifying at-risk groups, although the models vary considerably in accuracy (Begle et al 2010). A recent meta-analysis (van der Put et al 2017) concludes that twenty-seven different risk assessment instruments have “a moderate predictive accuracy”. The analysis also found that the instruments were better at predicting the onset of maltreatment than its recurrence.
A New Zealand study (Vaithianathan et al, 2012) found that a predictive risk model (PRM) applied to children under the age of two had fair-to-good strength in predicting maltreatment by age five, comparable to the predictive strength of mammogram screening for detecting breast cancer in the general population. But the authors note that, were the PRM used to identify families requiring preventive treatment, twenty-seven families would need to take up the programme in order to avoid one child experiencing maltreatment. They conclude that a full ethical evaluation of the model would be necessary before implementation. They also argue that extreme caution is needed before implementing mandatory policies for high-risk families, and that it is preferable to use risk scores to engage high-risk families in voluntary rather than mandatory services.
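The ‘twenty-seven families’ figure is a number needed to treat: the reciprocal of the absolute reduction in risk that the preventive programme achieves among the families offered it. The short sketch below shows the arithmetic with made-up risk rates; it is not a reconstruction of the New Zealand data.

```python
# Number needed to treat (NNT): how many families must take up a preventive
# programme for one child to be spared maltreatment. Rates are hypothetical.

risk_without_programme = 0.120   # assumed maltreatment rate with no intervention
risk_with_programme = 0.083      # assumed rate among families in the programme

absolute_risk_reduction = risk_without_programme - risk_with_programme
nnt = 1 / absolute_risk_reduction

print(f"Absolute risk reduction: {absolute_risk_reduction:.3f}")
print(f"Number needed to treat: {nnt:.0f}")
# With these assumed rates, about 27 families must take up the programme
# for one case of maltreatment to be averted.
```

The smaller the programme’s effect on risk, the larger this number becomes.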
Similar points are made by Dr Art Caplan, head of the Division of Medical Ethics at the NYU School of Medicine. In addition, he stresses that unless there is an effective programme to help parents learn how not to be abusive, simply forecasting the likelihood of maltreatment will bring “stigma and penalty” to the children and families involved without bringing help. He argues that it does no good to have knowledge of a bad outcome unless something effective can be done to prevent it.
I am not completely against developing predictive instruments, but I am completely against developing them behind closed doors and without the analytical and ethical scrutiny required to justify their use. No predictive model should be used without a public audit of its accuracy. And no predictive model should be used without being subject to the oversight of an ethics committee.
Local authorities and their private sector contractors should not be allowed to spend large sums of public money on systems of this type without being prepared to demonstrate publicly that they do more good than harm.