In his most recent novel, Klara and the Sun, Kazuo Ishiguro tells the story of Klara, an “artificial friend” who carefully observes the world around her and becomes intertwined with the lives of her human companions. Her observations render the world in extensive detail and with emotional depth. Klara's story ultimately centers on what it means to love and care, and on whether we should empathize with her as an AI. A similar story plays out in Ian McEwan's Machines Like Me: Adam, a robot with emotions, desires, and consciousness, is purchased by the main character, Charlie. When Adam ultimately becomes a burden to Charlie as their relationship grows increasingly complicated, Charlie shuts him down. This raises the question: can this be considered murder?
This blurred line between technology as a tool for humans and technology as a kind of conceptual equal forces us to reflect on what exactly separates us from machines. As technology is woven further into our lives and robots come closer to passing the Turing test every day, we have to ask: at what point do we shift from using technology to make our lives more efficient to questioning whether technology has rights? By examining philosophical theories that compare humans to machines, we can attempt to gain a deeper understanding of this distinction and, more generally, of how to navigate these challenges ethically.
Tracing the distinction between humans and machines feels particularly urgent given the rise of AI, but whether our minds can be considered mechanical has long been debated. The French philosopher René Descartes was among the first to conceive of humans as dualist beings, both material and immaterial. Descartes believed that our bodily senses inform us about the objects around us, the material, but that only the mind, the immaterial, can process and interpret that information. To Descartes, this clear distinction confirmed that though we are in some part mechanical, humans are not merely machines.
A sharp departure from this dualist framework is the French philosopher Julien Offray de La Mettrie's work L'Homme Machine (Man a Machine). La Mettrie's big-picture claim is that mind and body are inseparably connected and that humans can be reduced to mechanical explanations. In L'Homme Machine, La Mettrie begins by leaning on his authority as a physician, stating that knowledge of the body is the prerequisite for speaking on the subject of the soul. He argues that his medical observations show the soul, or mental state, to depend wholly on the bodily state: when the body dies, so does the soul; when the body ages and degenerates, so does the soul; when blood circulates too fast, the soul cannot sleep. From a long list of such correlations, he concludes that human experience can be predicted in “law-like” terms, as absolute regularities. Reflexes exemplify the mechanical nature of these “laws”: a puff of air blown into your eye makes you blink, an automatic response rather than a conscious effort. That such reactions are mechanical convinced La Mettrie that humans are machines.
A contemporary theory comes from the philosopher John Haugeland. In his essay Mind Embodied and Embedded, Haugeland argues that the mind always operates in relation to the external world and that humans can be understood as mechanical components within a larger environment: the embodied cognition model. Haugeland holds that we can only understand internal intelligence by looking at it holistically, in its external context. As an example, he uses Simon's ant: imagine an ant making its way across a sandy beach littered with obstacles like dunes and pebbles. The ant weaves across the surface to avoid the obstacles and traces a complex path. Viewed atomistically, as a movement pattern alone, the path appears irregular and illogical. Viewed holistically, in the context of its surroundings, it makes perfect sense. As Haugeland quotes from Herbert Simon's The Sciences of the Artificial: “[A man] viewed as a behaving system, is quite simple. The apparent complexity of [his] behavior over time is largely a reflection of the complexity of the environment in which [he] finds [himself].” Haugeland goes on to lay out an intricate metaphysical system. Though it may seem abstract, it reminds us that humans, like machines, are in some sense information-processing systems.
But if we accept the argument that humans are machines, is there any distinction between humans and robots like Klara and Adam? Conceptually, there isn't. Klara is written as a complex emotional being; the humans who build her repeatedly warn her not to trust the promises of the humans who purchase her. The reader empathizes with her as she is neglected by her human companions. We relate to Klara and her semblance of consciousness the way we would to a human protagonist, making it easy to see shutting her off as violent or unfair, even murderous.
It is important to emphasize that conceptually, yes, we can equate humans to machines, but that is what the comparison remains: a concept. This is the conclusion that Descartes, La Mettrie, and Haugeland point us towards; we can evaluate what it means to be human by comparing ourselves to mechanical processes, but we are still first and foremost human. These philosophers show us that machines may help us interpret human behavior, but our similarities to AI and how it operates, even when embodied, likely end there. The way we navigate our relationship to technology will remain an ethically complex challenge; there is no foolproof answer to what selfhood is. But when we treat machines as our equals, we lead this ethical debate astray.
We empathize with Klara because her point of view is all we are given. Though Klara alludes to them, the complex social, political, economic, and environmental problems that AI has caused in Ishiguro's world remain a mystery to the reader. Even with that context, would our sympathy for her ever translate into treating her life the way we would a human's? There was a reason to shut her down, perhaps so that her owners could lead healthier, more fulfilling lives. This is where Haugeland's holism teaches an important lesson: we cannot consider these challenges in an isolated framework or from a single perspective. Because we lack the full explanation for that reasoning, we naturally empathize with Klara instead of her human owners. This also illustrates the peril of empathizing with technology: it can obscure what some may consider our prior, closer obligations to the other humans around us.