MEPs vote on robots’ legal status – and if a kill switch is required


MEPs are to vote on the first comprehensive set of rules for how humans will interact with artificial intelligence and robots.

The report makes it clear that it believes the world is on the cusp of a “new industrial” robot revolution.

MEPs will decide whether to give robots legal status as “electronic persons”.

Designers should make sure any robots have a kill switch, which would allow functions to be shut down if necessary, the report recommends.

Meanwhile users should be able to use robots “without risk or fear of physical or psychological harm”, it states.

[Image caption: Will future robots require their own legal status as electronic persons?]

The report suggests that robots, bots, androids and other manifestations of artificial intelligence are poised to “unleash a new industrial revolution, which is likely to leave no stratum of society untouched”.

The new age of robots has the potential for “virtually unbounded prosperity” but also raises questions about the future of work and whether member states need to introduce a basic income in the light of robots taking jobs.

Robot/human relationships raise issues around privacy, human dignity (particularly in relation to care robots) and the physical safety of humans if systems fail or are hacked.

The report acknowledges that there is a possibility that within the space of a few decades AI could surpass human intellectual capacity.

This could, if not properly prepared for, “pose a challenge to humanity’s capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny and to ensure the survival of the species”.

It turns to science fiction, drawing on rules dreamed up by writer Isaac Asimov, for how robots should act if and when they become self-aware. Since the laws cannot be converted into machine code, they will instead be directed at the designers, producers and operators of robots.

These rules state:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm
  • A robot must obey the orders given by human beings except where such orders would conflict with the first law
  • A robot must protect its own existence as long as such protection does not conflict with the first or second laws

Meanwhile robotic research should respect fundamental rights and be conducted in the interests of the wellbeing of humans, the report recommends.

Designers may be required to register their robots and provide access to their source code so that accidents and damage caused by bots can be investigated. Designers may also be required to obtain the go-ahead for new robotic designs from a research ethics committee.

The report calls for the creation of a European agency for robotics and artificial intelligence that can provide technical, ethical and regulatory expertise.

It also suggests that, in the light of numerous reports on how many jobs could be taken by AI or robots, member states consider introducing a state-provided universal basic income for citizens.

The report also considers the legal liabilities of robots and suggests that liability should be proportionate to the actual level of instructions given to the robot and its autonomy.

[Image caption: A robot holds a newspaper during a demonstration at the World Economic Forum (WEF) annual meeting in Davos, 22 January 2016.]

“The greater a robot’s learning capability or autonomy is, the lower other parties’ responsibilities should be and the longer a robot’s ‘education’ has lasted, the greater the responsibility of its ‘teacher’ should be,” it says.

Producers or owners may, in future, be required to take out insurance cover for the damage potentially caused by their robot.

If MEPs vote in favour of the legislation, it will then go to individual governments for further debate and amendments before it becomes EU law.

[Source: BBC]