We need to talk about machine learning

By Michael Sanders | 06 March 2019

For most people, depending on their age and preferences, talk of machine learning and artificial intelligence conjures up either Stanley Kubrick's classic 2001: A Space Odyssey, one of the various entries in the Terminator franchise, or I, Robot.

Machine learning exploits the huge power of modern computers to run millions of calculations quickly and to uncover patterns in data that no human-led analysis could ever find. The models it produces can be highly accurate. But because of their complexity – they can contain hundreds of variables, often interacting with each other in complicated ways – they can be a ‘black box’, meaning we cannot easily identify the source of their predictions. There is also an ever-present risk of bias in the data, or in the way the model propagates it, which can have serious negative consequences.
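To make the ‘black box’ point concrete, here is a minimal, purely illustrative Python sketch using scikit-learn and synthetic data (not any real social care model or dataset): an ensemble of hundreds of decision trees can score well on held-out data, yet no single human-readable rule explains why it flagged an individual case.

```python
# Illustrative only: synthetic data, not real social care records.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# 50 interacting features stand in for the "hundreds of variables"
# a real risk model might use.
X, y = make_classification(n_samples=2000, n_features=50,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# An ensemble of 500 trees: often accurate, but its prediction for
# any one case is the aggregate of thousands of branching rules.
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

print("Held-out accuracy:", model.score(X_test, y_test))

# Feature importances give a rough global ranking of variables, but
# not a human-readable reason for any individual prediction.
print("Top three importances:", sorted(model.feature_importances_)[-3:])
```

Post-hoc explanation tools exist, but they approximate the model's reasoning rather than expose it, which is why transparency about how such tools are built and used matters so much.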

Alongside these concerns, many in government, both local and national, see the potential of predictive analytics to improve the accuracy of risk assessment by processing complex data quickly.

A previous study by the Behavioural Insights Team, which looked at the risk of escalation in children’s social care cases that had been assessed but where no further action had been taken, found that machine learning performed better than traditional statistical techniques. So there is some basis for this belief.

Despite this, I have two main concerns. The first is transparency: the lack of openness about how these tools are being used makes it hard for the children and families potentially affected by them (as well as society at large) to know what is happening or why.

My second concern is about effectiveness. Machine learning uses a great deal of computing power to make its predictions, and can find patterns that both humans and simpler forms of statistical analysis would miss.

However, there are details that people like social workers will pick up which a machine won’t see – things like tone, inflection, or context that are hard to capture even in the richest dataset. Similarly, in many cases, more basic analysis will perform just as well.

Without transparency about how these tools are being used, how well they work, and how biased they are, the potential for harm – or at least, a lot of wasted money – is enormous. That is why at the What Works Centre for Children’s Social Care we are hoping to work with local authorities over the next year to understand the ethics and the effectiveness of these techniques and to share what we find publicly. We urgently need a debate on this, and we are keen to be part of the conversation.

Michael Sanders is the executive director of the What Works Centre for Children’s Social Care. He was previously chief scientist at the Behavioural Insights Team.
