This blog post explores robot autonomy, emotions, and ethical relationships with humans, examining the ethical issues arising from technological advancement.
Today, robotics technology is utilized across diverse fields such as healthcare, manufacturing, and education. This technological advancement is significantly transforming our daily lives, and robots are now establishing themselves not merely as simple machines, but as entities that interact with humans. In the near future, the emergence of intelligent robots, beyond simple robots performing repetitive tasks, will raise various issues. This signifies the potential for robots to evolve beyond mere tools into entities capable of autonomous judgment. In this process, we must deeply consider the role and ethics of robots.
Particularly, the issue of robot autonomy is becoming a crucial discussion alongside technological advancement. Discussions include whether robots should only follow human commands, whether they should possess the autonomy to refuse commands if those commands are wrong, whether we should treat robots as tools we use, or whether we should recognize them as independent entities like humans. For example, in the medical field, when a robot performs surgery, we need to discuss whether it is right for the robot to unconditionally follow the doctor’s commands, or whether it should sometimes have the authority to refuse commands for the patient’s safety. These issues extend beyond mere technical challenges, evolving into ethical questions that redefine the relationship between humans and robots.
Consequently, there is a need to establish and concretize the concept of ‘robot ethics’. Robot ethics involves discussions about the various questions arising from human-robot interactions. The advent of robots will bring significant changes across society, leading to new risks that did not exist before. For instance, various intelligent robots are being weaponized and developed for military purposes. If these robots are equipped with the capability to autonomously identify and preemptively attack perceived enemies to effectively eliminate adversaries, situations where human lives are threatened by robots could arise. As such scenarios are already being actively researched in some nations, regulations and ethical discussions on this matter are urgently needed. By proposing and applying the concept of robot ethics, we can prevent these risks and mediate potential social conflicts arising from robotics.
The first concept robot ethics must encompass is the ethical norms humans, as users and manufacturers of robots, must uphold. Those who manufacture robots must create them with a sense of responsibility, thoroughly discussing whether the intended purpose is legitimate and what consequences could arise if the robot is misused, thereby minimizing the potential for abuse. Users, in turn, must use robots in accordance with their intended purpose and must not mistreat or modify them for other purposes. For example, in the near future, unmanned delivery robots could replace couriers for parcel delivery. While their original purpose is to enhance human convenience, consider the horrific consequences if terrorist groups were to equip these robots with bombs instead of parcels.
Second, there are principles robots themselves must adhere to. Since robots are developed to improve inconveniences in human life, they must never harm humans, oppress them, or violate human dignity. Establishing principles for robots to follow is a crucial aspect of robot usage. If these principles are poorly defined, robots could harm people or even inflict damage on all of humanity. These concerns are frequently explored in science fiction films. For example, the movie “I, Robot” depicts a society where intelligent robots are commercialized, functioning to obey human commands and serve their convenience. In this society, robots must act according to the following ‘Three Laws of Robotics’ under all circumstances.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In the film, ‘Viki,’ a robot who controls other robots, orders her subordinate robots to detain humans. At first glance, this order appears to violate the first law of robotics, as it harms humans. However, ‘Viki’ claims she issued this order for the sake of humanity, prioritizing it over individual humans. She argues that for humanity’s progress, it is necessary to first control and reorganize humans who engage in actions endangering humanity, such as environmental pollution and war. The principles established to effectively control robots contained a blind spot, which instead caused significant harm to humanity. These points are not limited to the movie alone; they prompt reflection on how crucial it is to define robot principles accurately and concretely.
Furthermore, we must contemplate the possibility of robots developing emotions and self-awareness. The pace of robotics advancement is accelerating, and personified robots will likely emerge in our reality in the near future. At that time, careful consideration is needed regarding how we will communicate with these intelligent robots and what principles should be established to protect human autonomy and dignity.
Finally, robot ethics must include ethical norms for situations that may arise in the relationship between robots and humans. As technology advances, robots will evolve to look more human-like, speak, and even feel emotions. Consequently, robots will be deployed not only in production sectors requiring simple tasks but also in social and emotional professions such as kindergarten assistants, hospice workers, and guides. At this point, norms addressing ethical issues—such as whether robots should be treated as equals to humans working in the same field, or whether forming emotional bonds with robots is appropriate—must be established. The film “AI” presents such ethical dilemmas. In this movie, the protagonist robot ‘David’ possesses a human-like appearance, feels emotions just like humans, and is adopted into a family to take the place of their son who has become a vegetative state. “AI” poses the question to its audience: “If a robot capable of human-like emotions emerges, should we include it within the category of humans?” by showing human attitudes toward ‘sentient robots’ and the resulting emotional wounds these robots endure.
A society where robots significantly impact our daily lives is approaching, and robots will be utilized across various fields. Consequently, the moral and legal responsibility for actions performed by robots will become crucial. Robotic ethics must be established, addressing the relationship between robots and human society’s various norms, as well as issues like cultural discrimination. At this juncture, norms for humans as users and manufacturers, norms for robots themselves, and norms governing the relationship between humans and robots must be incorporated into robotic ethics in a concrete and proper manner. Although highly intelligent, thinking, and emotion-capable robots like those in “I, Robot” and “AI” have not yet appeared, preemptively defining the relationship between humans and robots will enable the prevention and control of potential future risks arising from robotics.