Policy Recommendations: Autonomous Robots

Making autonomy in robotics and law

This report documents the EPINET project's investigations into assessments of ethical, legal and societal aspects of autonomous robots in the European Union. Its main objective is to assess the state of the art of such assessments in this domain, with particular attention to the ways in which they may interact with, or become integrated into, the main research and innovation networks, including the making of research and policy agendas.

These recommendations are aimed at, and relevant to, different groups and networks involved in robotics at European and national levels. At one level of policy action there are the many advisory and expert groups involved in the making of robotics agendas, such as the ELS Topic Group of euRobotics, follow-ups to ISTAG (the Information Society Technologies Advisory Group), the SPARC PPP, and advisory bodies to DG Research (Robotics unit), DG Connect and DG Health. Also relevant are the expert groups participating in shaping the Horizon 2020 ICT programme (societal challenges and LEIT), the European Institute of Innovation and Technology, and national research councils and their related advisory bodies. Our recommendations are especially relevant to the so-called cross-cutting actions in Horizon 2020. Next, our recommendations are also directed to national and EU legislators and regulators charged with adapting to and accommodating the actions of policy makers and the robotics community. Finally, our recommendations are directed to the technology assessment community, including those dedicated to Responsible Research and Innovation, integrated ELSA and impact assessment.

Background and approach taken

Recent policy agendas and research trends openly push for "smarter", more dynamic and more autonomous robotics systems (e.g. European Commission, 2008; EUROP, 2009; Robot Companions for Citizens, 2012). Future robots are expected to help address the grand societal challenges facing Europe, in particular those of an ageing population and sustainable healthcare and welfare. Such developments raise a number of questions across sectorial domains and disciplines, and among the potential and actual users of robotics systems and services. This becomes especially evident in the light of parallel efforts towards Responsible Research and Innovation, according to which different actors are expected to come together in ways conducive to more responsible, sustainable and socially robust innovation policies (von Schomberg 2011, Stilgoe et al. 2012, European Commission 2012).

EPINET chose to study the implications of more autonomous robots to be used for care and companionship. A main reason for this choice is that this policy agenda sits at the crossroads of several of the complicated issues emanating from present-day and near-future robotics. These recommendations do not deal with particular (ethical, societal) issues, although they build on an extensive mapping of many such issues. Rather, recommendations are made at the level of interactions between sectorial domains (i.e. science, law, politics); at the level of disciplinary and interdisciplinary interactions; and at the level of integration of assessments into main policy and innovation networks.

Our policy recommendations are based on the three stages of EPINET research: first, a prior mapping of the main issues relevant to the assessment of autonomous robotics from different disciplinary perspectives (law, ethics, TA and STS); next, an “embedding phase” in which these results, and the issues to which they refer, were discussed in a workshop forum with the different epistemic networks implied in the making and governance (including assessments) of autonomous robots in the EU (link workshop report); and finally, an “integration phase”, analysing and subsuming the results of the previous two phases and issuing in policy recommendations as well as more academically oriented analysis (see Rommetveit et al. 2013).

The specific goals of this research line have been to:

  1. Provide assessments of autonomous robots from the perspectives of different TA methodologies.
  2. Provide guidelines for good governance of autonomous robots in the context of EU policies.
  3. Provide recommendations for improved integration of TA methodologies in the field of autonomous robots.

Research questions with respect to autonomous robots:

  • Who is involved in the construction of autonomous robots, and why and how did they become active?
  • How do they interact as an epistemic network?
  • What role do politically engaged activist groups play in shaping the emergent robotics technology?
  • How do robotics protagonists establish a communal imagination of the future large-scale application of robotics, its use and its users?
  • How is this imaginary materialised and what actions, both in terms of product design and discursive promotion, are undertaken that expand the epistemic network?
  • How do scientists, positioned as insiders or outsiders relative to mainstream robotics, frame their work and capacity in the area?
  • How do imagined future regulatory hurdles shape research and innovation in the field(s) of robotics?
  • How do TA practitioners (or others doing similar work) engage with roboticists engaged in the construction of next generation, autonomous robotics?
  • Can new relationships be gleaned between the contexts of assessment, policy and innovation?

Vision and policies

Broad policy visions serve to open up new spaces of action for a great number of technology-centred innovator networks. However, roboticists struggle to make sense of the autonomous robots vision in ways that are also socially relevant. The breadth of the vision makes it difficult, for roboticists and regulators alike, to specify purposes. One problem voiced was that policy makers and publics have excessively high expectations of what robots can do. Members of the robotics community expressed a need to make clear to the public that there are many things robots cannot do and many problems they cannot solve. In so far as socially relevant and responsible innovation are desirable goals, therefore, the visions need to be made much more purpose-specific and tied to the social, environmental and human purposes they are intended to serve. Participants from both robotics and ethics described the deployment of autonomous robots to address the challenges of ageing and care in Europe as poorly considered.

Recommendation: There is a need for better articulations and elaborations of the guiding visions of robotics in accordance with specific societal purposes.

Public-private conglomerates of policy, industrial and academic actors increasingly coordinate robotics development across Europe. So far, a relatively small set of actors from industry, academic research and high-level EU policymaking has shaped the visions, policies and agendas set for robotics in the European Union. If robots are to occupy greater parts of living, working and public spaces, there is an urgent need to include the citizens, and the professional and non-professional communities, that will be users of or concerned parties to robotics applications. As implied in principles of public engagement as well as recent principles of Responsible Research and Innovation, there is a great need to include wider networks of actors in the making of policy agendas (such as roadmaps) and in the innovation networks that take shape to implement these agendas. There is a need to include Europe’s youth in more bottom-up modes of innovation. Standard-setting in robotics should also be counted among the main sites of policy making and public innovation, and is as such relevant for input from wider publics and user communities, as well as for ethical and societal considerations.

Recommendation: Strategies for more bottom-up innovation should be integrated into agenda setting, regulation and implementation of robotics in the European Union.

Whereas the push towards technological, scientific and industrial innovation is strong, sufficient mechanisms are not in place to deal with the societal and organisational challenges arising in and around robotics innovation networks. A much repeated theme, in different variations, was the lack of mediating institutions between the robotics community, society and publics. There is also a great need for improved communication with the legal domain. Technology assessors and people working in Responsible Research and Innovation have decisive roles to play here.

Recommendation: There is a need to promote innovation also in the public institutions and institutional domains in which robotics innovation and development take place.

Regulation and legislation – across sectors

Studies of some epistemic networks, as well as feedback from legal (and ethical) practitioners, point towards pressures on law from both science and industry. These pressures can take several forms: industry pushing too hard for certain laws to be passed; lawyers being challenged by having to rely on roboticists as their main source of knowledge about current and future research trends; and lawyers working in non-transparent conditions where abstract, enticing visions rather than practical real-world problems seem to inform decision-making. The last two of these push law towards the speculative. On the side of the sciences, we observe a strong need for researchers to go along with official demands on scientific research in order to obtain funding.

Recommendation: Institutional mechanisms must be developed to safeguard the autonomy of science and law in robotics innovation.

Ethicists and lawyers, but also roboticists, are concerned about the increasingly complex chains of different actors involved in designing, making, training and using robots. These problems will become more visible as more robots develop learning capacities and social skills and move into unstructured everyday living environments. Moreover, as robots come to move more freely, they will also cross sectorial domains, posing difficult tasks for regulators, lawyers and ethicists. There are, as of now, no sufficient solutions for how to allocate responsibility, either in terms of organisational measures or in terms of ethical and legal principles.

Recommendation: There is a need for new legal solutions, or for better application of old principles to the new cases. There is also a need to think holistically about responsibility within the communal and networked structures in which robotic systems will function.

In the Suggestion for a Green Paper on Legal Issues in Robotics by the euRobotics platform, interdisciplinary collaboration was organised around the common goal to “avoid ethical, legal and societal issues becoming barriers”. As a result, the main emphasis lies on the development of the machines, whereas the societal issues that should be the goal of robotics are instead treated as something to be removed. This contrasts with the document Regulating Emerging Robotic Technologies in Europe: Robotics facing Law and Ethics, produced by the Robolaw project, which articulated a much more human-oriented approach, based significantly on the requirements of democracy and human rights. Similar (human-centric) approaches have been argued for in the Roboethics roadmap and by ethicists.

Recommendation: There is a need to strengthen the human-centric approach to regulation, legislation and assessments in robotics innovation networks.

Cross-disciplinary assessments & collaborations

Autonomy in ethics and law generally refers to the capacity of individuals or groups for self-determination. For roboticists it refers more to the capacity of machines to operate independently of human intervention. In practice, however, the categories easily blend into each other, and there is a great need to communicate carefully. As one roboticist put it after our workshop: ”What became very clear to me pertains to the use and misuse of words, how they work across fields and barriers, and this is even more problematic when approaching the public. There is a great need to involve ethicists and lawyers. I learned a lesson: we need to be careful about how we use words”. This said, we also observed that actors are capable of communicating across disciplinary boundaries, given time, opportunity and proper facilitation. Communication across domains and disciplines is difficult, but not impossible.

Recommendation: There is a need to develop reflexive capacities regarding the different meanings and uses of the term autonomy, so that better collaborations can take place.

The concept of RRI was unknown to most roboticists, but was regarded favourably. Multi- and interdisciplinarity came out as a key point for technology assessment. One roboticist argued that a thoroughly multi-disciplinary approach is needed to achieve responsible research and innovation. Ethicists remarked that technology assessment has an important role both as a tool for law and in making developments more sustainable. TA should be given more weight through the involvement of experts from different disciplines, and various interdisciplinary and organisational measures also need to be implemented. A salient example here is ‘best practices’ in the General Data Protection Regulation. These include approaches to self-regulation made mandatory through law (such as establishing an ethical code of conduct involving engineers, philosophers and lawyers), or ethicists and technology assessors working for companies (as required for privacy by design and by default). The need was also highlighted to distinguish between technology-specific (exceptional) requirements and more generic approaches that also take into account developments relating to other technologies.

Recommendation: Responsible Research and Innovation provides a valuable entry point for negotiating ethical, legal and societal (ELS) issues in EU robotics and should be included in roadmaps, agenda setting and funding schemes.

As robots are expected to become increasingly interactive and operational in unstructured and complex environments, existing mechanisms for legal and ethical regulation are insufficient. Privacy and data protection are recurring concerns. For instance, reliance upon standard solutions (e.g. Google) and the transition to cloud computing increase concerns about privacy and data protection. In order to meet such concerns, we observe demands for privacy and data protection, but also morals and ethics, to be coded into robotic systems and applications. Whereas such measures may seem necessary, great care should be taken to foster realistic expectations about the extent to which ethics, law and societal concerns can be effectively coded into machines and communication platforms. It is important that law and engineering are not the sole actors responsible for carrying out such actions, and that autonomous systems are accompanied by proper organisational and community structures that can care for their sustainable and responsible integration. For such purposes, other forms of expertise, such as TA, RRI and public engagement, are also relevant.

Recommendation: There is a need for realistic assessments of the degree to which engineering can substitute for ethics, law and governance. There is also a great need to develop the interdisciplinary and cross-sectorial capacities required for engineering ethical, legal and societal concerns into robots.
