A Long Way to Robot Domination: Will AI Have Human Rights?

Humankind lives in interesting times. The whole world can be searched and found on Google, blockchain technology helps build self-regulating communities and, hopefully, government-independent currencies, and robots are taking over routine tasks.

When people think about robots, the first things that probably come to mind are Star Wars or Blade Runner, where AI has a full metal body (or at least a chrome-plated metal skeleton). In reality, that is not necessarily the case: Siri, Alexa, BigDog, and even self-driving cars can be considered robots, too. So it is possible that soon enough robots will be able to perform most of our work, from driving trucks to writing bestselling novels. In that case, humankind should figure out how to define robot rights before robots ask for them themselves.

Artificial Intelligence vs. Intelligence Augmentation: Why Is This Debate Important?

Since the 1950s, scientists and enthusiasts have been divided into two camps: those who wanted to replace people with robots, and those who wanted to enhance human abilities. The former supported the concept of autonomous AI, which would be capable of acting indistinguishably from a human being (i.e. able to learn abstract concepts, solve complex problems, etc.). The latter insisted that robots, AI, and other technological tools should expand human skills and potential, an approach that became known as Intelligence Augmentation (IA). This debate is widely discussed among roboticists, AI researchers, coders, and data scientists, but it is still unclear which approach is better, if either.

Spoiler alert: there is no correct answer to this question. Both the AI and IA approaches have their pros and cons. Still, in most cases computer intelligence has a certain personality, or at least a perceived one. Because of this, people often think they are communicating with a living person when in fact they are talking to a neural network or some other AI-based agent. Moreover, AI enthusiasts believe that robots, neural networks, and similar technologies will eventually replace people in most of their jobs. With certain personality features, robots are more likely to be treated as equal members of society, which in turn might lead to the development and enforcement of robot rights.

However, there is another point of view. From the IA position, robots and other intelligent machines are more likely to be used as tools for reaching goals and accomplishing tasks. In that view, robots are treated as simple instruments like pliers, tractors, or the Google search engine, so giving such high-tech agents any rights is about as reasonable as asking a PS4 whether it approves of your gaming style.

Considering all of the above, it comes down to whether robots actually develop a personality, and whether the authorities would recognize it as such. It will also depend on economic, social, and other factors, so currently there is no way to make a reasonable prediction of what robot rights might look like, if they emerge at all.

Status of Robots Today: Do They Really Need Rights?

The controversy between AI and IA stems from another question: are robots our partners or our servants? People consider Siri or Alexa their partners because these assistants have a perceived personality, respond with a voice, and can even laugh, occasionally scaring the living daylights out of their owners. For that reason, people treat machines that display human-like traits and simulated emotions as partners and are likely to think they should have certain rights, just like people.

At the same time, robots without personal traits and other computer agents, such as self-driving cars or Google Assistant, tend to be treated more like servants. Where people cannot perceive an agent as a human individual, AI and robots are more likely to play the role of servants or tools designed to extend human abilities and improve quality of life. Still, if a self-driving car started asking you which route you prefer, you would hardly be able to treat it as just a machine.

It is worth noting that AI thinking is not limited to the servants-versus-partners debate. Some in the AI community believe that robots will eventually dominate human beings, as in The Matrix, while others think that robots and people will eventually merge into something cyber-organic, as in “We are the Borg, resistance is futile.”

Today, most robotic products are quite primitive: they understand nothing but the functions defined by their programs and are not aware of their own existence, let alone of any need for rights. It will likely take a long time for the technology to evolve to the point where robots understand that rights are possible, as well as why they would need them.

The second issue is that society is not fully ready to admit that robots made of metal and code can have rights just as humans do. All changes take time to become routine, and the process is rarely straightforward.

Robots Become Citizens

In October 2017, the fembot Sophia was made an honorary citizen of Saudi Arabia without any specific regulation on robot rights in place. By doing so, Saudi Arabia became the first country to grant citizenship to a robot. It seems pretty cool, but how can Sophia exercise her rights? And how can we grant citizenship and related rights to a robot if people can buy and sell it like a commodity? Is it even legal?

According to John Frank Weaver, a Boston-based attorney, Sophia is in a sense entitled to the rights specified in the International Covenant on Civil and Political Rights and the Universal Declaration of Human Rights. In particular, Mr. Weaver said the fembot “has all of the rights the Declaration identifies.” What does that mean?

  • According to Articles 23 (“Everyone who works has the right to just and favourable remuneration”) and 17 (“Everyone has the right to own property”), Mr. Hanson (the robot’s creator) is obliged to pay Sophia for the work she performs and to let her accumulate property. It is quite likely that is not going to happen.
  • Similarly, Article 13 states that “everyone has the right to leave any country, including his own, and to return to his country.” Again, it seems unlikely that Sophia will be able to enjoy this right and take off to the United States for a holiday: she is now tied to her citizenship and cannot freely travel to other countries without going through the respective legal procedures.

It is worth noting that Mr. Weaver believes that “no courts or commissions under international law have jurisdiction to enforce those rights, even if Sophia wanted to.”

Regulating something new and complex is always a difficult task: authorities have to be sure they get everything right and enforce fair and effective laws. Robotics laws could certainly settle many questions of liability and entitle AI and other computer agents to hold intellectual property, or perhaps even civil rights.

More specifically, a regulatory framework might stipulate who is considered the author when a robot creates a work of art, or, for instance, whether a driverless car can be held liable for an accident. Moreover, such laws would create a basis for further technological and legal research, which in turn would allow a solid regulatory framework to be built. In the long run, such a framework could prevent possible violations of robots’ rights, provided they are viewed as citizens.

Developing the Legislation

Currently, even the US and EU regulators have not formed a clear position on how robots should be regulated, although they are working in this direction. The EU is seriously considering granting robots the status of “electronic persons”, which could provide robots with civil rights and obligations, at least according to a European Parliament report.

The report itself is not a legislative proposal, though, but merely a set of recommendations that the authorities are free to ignore. The document does, however, provide a definition of a robot. According to the report, a “smart autonomous robot” has the following characteristics:

  1. It acquires autonomy through sensors and/or by exchanging data with its environment (inter-connectivity), and trades and analyses data.
  2. It is self-learning (optional criterion).
  3. It has a physical support.
  4. It adapts its behaviours and actions to the environment.

The recommendations also include a proposal to create a special register for advanced robots and to “establish criteria for the classification of robots with a view to identifying the robots that would need to be registered.” Additionally, the report proposes to draw up and enforce a “Code of Ethical Conduct for Robotics Engineers” that would invite roboticists and AI researchers to respect human dignity and the right to privacy, oppose any form of discrimination, and so forth.
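To make this more concrete, here is a minimal sketch of how the report’s characteristics and a registration check might be modeled in code. The RobotProfile structure, its field names, and the registration rule are purely illustrative assumptions made for this article, not anything the European Parliament has actually proposed.

from dataclasses import dataclass

@dataclass
class RobotProfile:
    """Hypothetical profile capturing the report's four characteristics."""
    exchanges_data_with_environment: bool  # autonomy via sensors / inter-connectivity
    is_self_learning: bool                 # optional criterion in the report
    has_physical_support: bool             # a physical body
    adapts_to_environment: bool            # adjusts its behaviours and actions

def needs_registration(profile: RobotProfile) -> bool:
    # Illustrative rule: anything meeting the mandatory characteristics would
    # go into the hypothetical register; real thresholds would be set by
    # regulators, not by this sketch.
    return (
        profile.exchanges_data_with_environment
        and profile.has_physical_support
        and profile.adapts_to_environment
    )

# Example: a delivery robot with sensors, a body, and adaptive behaviour
print(needs_registration(RobotProfile(True, False, True, True)))  # True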

Is there, however, any point in regulating something that has not evolved enough to need a regulatory framework? Robots are unlikely to have realized that they exist, and even if they were to have rights, they could not yet realize they are entitled to them.

The regulations, however, do not necessarily have to apply to robots themselves. Animal rights, which are recognized almost globally, apply more to humans than to animals: they regulate how people should treat animals, even though the animals themselves are hardly aware that they have those rights. Similarly, robot rights at this stage may involve not rights for robots but rules for people and their interactions with artificial entities. This area, however, is not yet well developed.

Conclusion

At the moment, robots and AI are still primitive. They do not grasp the concept of rights and do not know why or how to use them, unless humans program machines to have personal interests. Besides, human society is not psychologically ready to grant machines any rights in the short run.

Giving rights to robots raises more questions than answers. When a robot creates a composition, who is considered the author: the robot or its creator? If robots are treated as people, is it ethical to experiment on AI or to make it work more than 8 hours a day? Should robots be paid for their work? How can robots exercise their civil rights? Can they take a vacation? And, of course, can a human and a robot get married if they fall in love with each other?

Today, robots do not file complaints or sue people; they work only as prescribed by their software and hardly have any consciousness. Thus, people have no real reason to start regulating robots at the moment. Addressing the aforementioned issues might slow down technological advancement, but it would probably create solid grounds for further legislative initiatives.

Still, developing legislation and key principles for the regulation of robots is important in the long run. Technological advancement is accelerating, and soon enough humankind may need properly written legislation to interact with electronic persons.

