Robots and Risk – what could possibly go wrong?

By David Forster | 12 June 2019

The 4th Industrial Revolution (4IR) is upon us. What was once science fiction is increasingly becoming science fact. Machine learning, Artificial Intelligence (AI), predictive analytics, data lakes and robotics are becoming part of the language of public service innovation.

In Southend, Pepper the robot is supporting vulnerable people with conversation and interaction. In the remote Highlands of Scotland, ‘Internet of Things’ sensors are being used to monitor sheltered housing and keep residents safe. Robot surgeons carry out complex and lifesaving procedures. The driverless car is almost a reality. Sensor-equipped city centre ‘digital streets’ already exist.

There are truly inspiring and remarkable projects in place or in planning, opportunities to reshape the provision of services and create a new landscape for engagement with the community.

But this is a different kind of innovation from traditional automation. Described as more like ‘gardening’ than ‘engineering’, the 4IR is a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.

And the very uncertainty which makes the 4IR so exciting brings huge challenges and risks.

How can you maintain good governance, accountability and assurance, when the technology has a mind of its own, and ethics and morality may not feature in its decision making? And that’s before we get to the potential impact on employment, careers and roles in society.

Take facial recognition technology. We’re becoming accustomed to it. Pass through Gatwick Airport and your face is scanned and used to match you at stages in the clearance process. Technology now exists to pay for goods and services with facial recognition rather than bank cards.

Once upon a time, we only gave our fingerprints to the police when under suspicion; now we put them into phones and tablets on a daily basis. CCTV systems can match a face as it progresses through a town centre.

Controversy rages over China’s plans to use facial recognition to monitor the entire population with a system of credits and debits. San Francisco has banned the use of facial recognition technology. Should the UK follow? How does this square with some of the principles of public life – objectivity, accountability, openness, honesty?

Ethical decision making using predictive analytics is a particular issue. A survey carried out by Zurich Municipal suggests that residents are uncomfortable with public bodies using advanced analytics to make decisions. Bias has become a significant area of concern: whether algorithmic, sampling, prejudicial or measurement bias.

The public are more comfortable with the use of AI on non-sensitive issues like traffic management (think The Italian Job), but baulk at its use in children’s and adult social services. And yet this is a key area of application.

Who understands the ‘black box technologies’ which are being used to make the decisions? Who is accountable for the decision making? Are those charged with oversight and ethical choices skilled and competent in their roles? Is human decision making ‘in the loop’, ‘on the loop’ or ‘off the loop’?

What’s the contingency plan? Some will justifiably argue that resident surveillance using advanced sensors and other technologies provides 24/7 levels of service (think Alexa), something which could not be achieved with humans alone. But what happens when the technology fails – will there still be the human backup in an era of increased austerity and budget pressures?

And data. Lakes of it. Often sensitive and personal information, frequently given by the public with great reluctance, which the public body is honour-bound to protect. Increasingly, there is a move towards cloud computing and commercial, shared web services. We saw what happened with Facebook and Cambridge Analytica. How comfortable are we with the safeguards and protections required, at the least, by the GDPR? The potential for massive data loss exists, with huge public disquiet, lives ruined and devastating consequences for the failing public body.

Recently, attention has turned to the formal management of these risk challenges. The big management consultancies, having spent the last few years extolling the virtues of these new technologies and encouraging organisations to participate, are now producing papers warning about the risks.

Too few risk managers in the public services are actively engaged in the innovative projects which are springing up all over their organisations. ‘Old school’ categories of risk are being used to define the risk exposures which the 4IR creates. This needs a rethink.

Managing risk, in the age of machines, will be the next big challenge for the public sector risk manager and heads of governance. It will profoundly change their world.

David Forster is head of risk proposition at Zurich Municipal
