
Responsible Robotics – What could possibly go wrong?

Powerful technologies are being developed that have the potential to transform society, and the need for Responsible Innovation is growing. The RoboTIPS project aims to develop a trustworthy system for social robot accident investigation, combining technologies such as an Ethical Black Box for robots with social processes of investigation.


Imagine that your elderly relative lives at home with her assistive care robot, which is tasked with helping her with day-to-day activities. Then one day you receive a call to say that your relative has been found unconscious on the floor, with the robot bumping aimlessly to and fro.

Happily, your relative is found to be fine – but what happened and how can you find out?

This is one possible scenario for the work that I was delighted to present to the Bavarian Research Institute for Digital Transformation (bidt) in September. Powerful technologies are being developed that have the potential to transform society, and hence investigators in all fields are under growing pressure to consider and reflect on the motivations, purposes and possible consequences associated with their research. This pressure comes not only from the general public, civil society and government institutions, but also, of course, from the media. Hardly a day goes by when we do not hear about the negative effects of a technological innovation on society. It is becoming impossible (and in any case undesirable) for developers and designers to ignore the societal consequences of their innovations.

Almost two decades ago, Responsible Innovation (RI) initiatives emerged across policy, academia and legislation in response to the fact that many of the problems we face are a legacy of our previous failure to consider the potential negative impacts of innovations. Moreover, the public and media are increasingly expressing concerns over the negative consequences of innovation, whilst at the same time technologies continue to become more potent.

Responsible Innovation (RI) is defined as ‘doing science and innovation with and for society’

RI is more than just a vague idea. It is a practical method that focuses on anticipatory governance, inclusion, reflection and responsiveness. Its core aim is to involve all relevant stakeholders, including the public, encouraging anticipation and reflection on the consequences of scientific and technological innovations. This is the nub of its definition as ‘doing science and innovation with and for society’. It includes society very much ‘upstream’ in the processes of research and innovation – that is at the point where the innovation is first conceived – to align outcomes with societal values.

Crucially, this is not a once-and-for-all box-ticking exercise, but an ongoing, iterative process, looking at how new technologies meld with old and how people adapt, considering also how we as researchers and innovators adapt to emerging knowledge of technology use in the world. In this way, RI is a space for creativity, for confidence and even for serendipity. It does not predefine which impacts of research are or are not the ‘right’ ones, but it provides a framework that can help us decide what those impacts might be and how we might realise them.

Of course, many challenges remain in terms of how to embed responsibility into processes of technological design and development. Furthermore, as the pace of innovation continues to accelerate, the tension between profit and responsibility also grows stronger.

Social robots form the RoboTIPS project’s focus

The case study I mentioned at the outset is part of my RoboTIPS project, which picks up some of these challenges. RoboTIPS is a collaboration between Oxford and Bristol Robotics Lab. In our investigations we are focussing on social robots – broadly speaking, robots which interact with humans on a daily basis (for example driverless cars or autonomous vehicles, companion robots, toy robots and so on).

We have defined Responsible Robotics as:

The application of Responsible Innovation in the design, manufacture, operation, repair and end-of-life recycling of robots, that seeks the most benefit to individuals and society and the least harm to the environment.

The research will draw on expertise from the Bristol Robotics Lab, examining how to make systems accountable and explainable to a range of stakeholders, particularly in the event of technology failure, so as to build trustworthy systems.

Learning from the safety system of the airline industry

Examining this idea of trustworthy systems, we looked to another industry that (until recently at least) has enjoyed a significant level of public trust: the airline industry. Commercial aircraft are so safe not just because of good design, but also because of tough safety certification processes. When things do go wrong, there are robust social processes of air accident investigation. We suggest that trust in and acceptance of air travel, despite its catastrophes, is in part bound up with aviation governance, which has cultural and symbolic importance as well as practical outcomes. A crucial aspect of that cultural role is rendering the tragedy of disaster comprehensible through the process of investigation and reconstruction.

Returning to our original example of a malfunctioning care robot: although this is as yet a fictional scenario, it could happen today. In that event, a user would currently be reliant on the robot manufacturer’s goodwill to discover what went wrong. It is also entirely possible that neither the robot nor the company is even equipped with the tools and processes to facilitate an investigation. It is startling to discover that although these social robots are already interacting with humans in unplanned-for contexts, there are currently no established processes for robot accident investigation.

Development of an Ethical Black Box (EBB) for robots

Hence in our 2017 paper, Professor Alan Winfield and I argued the case for an Ethical Black Box (EBB). Our proposition is very simple: all robots (and some AIs) should be equipped with a standard device which continuously records a time-stamped log of the system’s internal state, key decisions and sampled input or sensor data. In effect, this is the robot equivalent of an aircraft flight data recorder. Without such a device, finding out what the robot was doing, and why, in the moments leading up to an accident is more or less impossible. Accordingly, in RoboTIPS we are developing and testing a model EBB for social robots.
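To make the idea concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of rolling, time-stamped logger described above. The class and field names are assumptions made for the purpose of the example; they are not the actual RoboTIPS or EBB specification.

```python
# Hypothetical sketch of an Ethical Black Box (EBB) logger.
# Names and record fields are illustrative assumptions, not the RoboTIPS design.
import json
import time
from collections import deque


class EthicalBlackBox:
    """Keeps a rolling, time-stamped log of a robot's internal state,
    key decisions and sampled sensor data, so that the moments leading
    up to an accident can be reconstructed afterwards."""

    def __init__(self, capacity=10_000):
        # Bounded buffer: the oldest records are discarded once capacity is
        # reached, much like a flight data recorder's fixed recording window.
        self._records = deque(maxlen=capacity)

    def log(self, record_type, payload):
        """Append one time-stamped record, e.g. 'sensor', 'decision' or 'state'."""
        self._records.append({
            "timestamp": time.time(),
            "type": record_type,
            "payload": payload,
        })

    def export(self, path):
        """Write the current log to disk for use in an accident investigation."""
        with open(path, "w") as f:
            json.dump(list(self._records), f, indent=2)


# Example: a robot's control loop would call the logger at every step.
ebb = EthicalBlackBox()
ebb.log("sensor", {"lidar_min_distance_m": 0.42})
ebb.log("decision", {"action": "turn_left", "reason": "obstacle_ahead"})
ebb.export("ebb_log.json")
```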

However, air accident investigations do not rely solely on evidence from an aircraft’s flight recorder. There are social processes of reconstruction that need to be perceived as impartial and robust. In this way it may be possible to provide some form of closure, so that aviation is not enduringly tainted in the public’s consciousness. We anticipate very similar roles for investigations into robot accidents.

Robot accident investigation is essential

Significantly, it is not the black box on its own that forms the safety mechanism; it is its inclusion within a social process of accident/incident investigation. Any investigation into a robot accident will draw on EBB information and also information from human witnesses and experts to determine the reason for an accident, together with lessons to be learnt from such an incident.
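By way of illustration only, and reusing the illustrative log format from the sketch above (the real EBB format is not fixed here), investigators might begin by pulling the records from the window leading up to the incident before weighing them against witness and expert accounts:

```python
# Hypothetical sketch: extracting EBB records from the window before an incident.
# The log format is the illustrative one used above, not an agreed standard.
import json


def records_before_incident(log_path, incident_time, window_s=300):
    """Return all records logged in the window_s seconds before the incident."""
    with open(log_path) as f:
        records = json.load(f)
    return [r for r in records
            if incident_time - window_s <= r["timestamp"] <= incident_time]


# This evidence would then be combined with witness and expert testimony.
relevant = records_before_incident("ebb_log.json",
                                    incident_time=1_700_000_000.0)  # example Unix timestamp
for r in relevant:
    print(r["timestamp"], r["type"], r["payload"])
```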

Thus, we aim to develop and demonstrate both technologies and social processes (and ultimately policy recommendations) for robot accident investigation. Furthermore, the whole project will be conducted within the framework of Responsible Research and Innovation; it will, in effect, be a case study in Responsible Robotics.

The team will work with designers and business partners to co-develop requirements for the EBB, whilst at the same time discovering how these designers understand responsibility in their practices.

The potential impact of this work is extensive. The EBB could change how we develop products not only in social robotics, but potentially in other fields as well. This work could lead to new opportunities for companies to design and manufacture standard ‘black boxes’ for each class of social robot.

Transparency can increase trust in technologies

It is our fundamental contention in undertaking this work that if we can increase the transparency of how such technologies make decisions, and are seen to take users’ obligations and lived experiences seriously in the design of these tools, then we will increase trust in the technologies. The reverse, though, is also true: if a company does something societally unacceptable, it could have an adverse effect not only on that company but on the whole area of development.

Ultimately, if things do go wrong, then the responsibilities throughout the chain of creating, commissioning and deploying social robots will take centre stage, albeit retrospectively. The proposed case studies form a vehicle for understanding what these chains of responsibility will look like when a harmful incident takes place and provide an unparalleled opportunity to simulate ‘disaster’ safely so as to understand how its consequences should be managed.

The blogs published by the bidt represent the views of the authors; they do not reflect the position of the Institute as a whole.