
Morality in the Machine

10 May 2020. By aL xin.


This paper was originally written for GENED 1093: Evolving Morality.

Even if we produce computers with processing power rivaling or exceeding that of the human brain, an intelligent machine is not subject to the same ethical considerations as an intelligent human being. Our current moral systems are grounded in biological considerations and do not rationally extend to machines. However, this discrepancy is not an excuse to ignore the moral value of machines, but a call to adjust our moral frameworks to apply to both organic and inorganic systems. Engaging with this challenge becomes more important as human society becomes more entangled with technology.

Emergence of AI and Entanglement with Humans

Based on the exponential growth of computational power, it is possible that computers will match the capabilities of the human brain within a lifetime. Coupled with the economic incentive to create functional artificial intelligence, the development of machines that rival human cognition seems probable.

More importantly, the roles machines and humans play in society will become increasingly indistinct. Rudimentary artificial intelligence has already been deployed in customer service and personal assistants, and machines will likely come to fill more positions now held by human workers. In these roles, advances in software may make machine-generated behavior indistinguishable from natural human behavior. Machines may also begin to substitute for humans altogether: we may send artificial intelligences to investigate hostile environments, including extraterrestrial exploration, without remote human control. Further developments may blur the distinction between human and machine intelligence entirely, such as uploading human consciousness into digital copies. These possibilities all raise questions about the moral status of machines relative to humans.

Human Morality Is Biological

Morality is a system that guides human behavior toward good consequences. Though different frameworks disagree on the definition of “goodness”, most moral philosophies emphasize human prosperity. With few exceptions (such as pessimistic nihilism and some existentialist views), prosperity relies on survival. And though the many ways of achieving survival yield different moral frameworks, the metric of whether something is living or dead is essentially a yes/no question (1).

Moral systems that initially evolved for survival remain tied to biology. Through natural selection, human behavior and human physiology evolved in parallel. Cognition is closely tied to how neurons have been optimized and specialized in response to threats; emotional reactions and desires arise from internal biological signaling tuned to Homo sapiens. Even though memetic evolution and the spread of ideas play a strong role in human behavior, it is our biological machinery that has created the conditions for them.

Just as Searle argues that the mind and the brain cannot be separated, human morality and human survival are interlinked. Survival shapes intelligence, which shapes behavior, which shapes cooperation, which shapes morality. The chain is then reinforced as morality feeds back into behavior, cooperation, and so on.

However, although morality arose from biology, it is not obvious that it depends on biology, i.e., that our current moral systems can address only biological beings. One might counter that a machine could be programmed with all the “rules” of human moral values, independent of the biological considerations that produced those rules. This scenario, however, is impossible. First, there are no general rules for human morality: behavioral research has revealed neurological mechanisms and biases that prevent the construction of a single Moral Truth, as I argue in a previous paper for this class (2). Second, the local moral truths that have developed as a substitute for universal Morality are themselves subject to memetic evolution, which is subject to behavior, which, again, is subject to physiology.

A counterargument to the biological limits of morality is that a machine could replicate biology, reproducing the features we think are important for morality, such as emotional states (e.g., happiness and suffering) and death. However, this would be an insufficient simulation: imposing artificial limits is morally inauthentic. If humans do not fulfill certain obligations, they will die. A machine, in contrast, can “die” only if we program a survival imperative or a disincentive such as deactivation into it. A survival imperative is not part of the fundamental nature of machine intelligence; it is a conscious design choice, and that separates it from biological systems.

To elaborate, consider how death affects a human being versus a machine. Humans are biologically impermanent; once a human life has ended, it is impossible to replicate or restore it in its original form. A machine, however, can be backed up and exactly restored. Likewise, consider how emotions manifest in humans versus machines. Humans cannot rewrite their emotions through rational effort alone. In contrast, any emotions a machine experiences (or their functional equivalents, e.g., positive or negative costs associated with certain actions) can be rewritten or reprogrammed whenever convenient.
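To make the asymmetry concrete, here is a toy sketch of the idea in Python. The Agent class, its deactivation_penalty, and the cost table are hypothetical constructs, not a claim about how any real AI is built; they only illustrate that a machine's “fear of death” and “emotions” can be ordinary, editable program state, and that its entire self can be snapshotted and restored exactly.

import copy

class Agent:
    """A hypothetical agent whose "survival imperative" is just a parameter."""

    def __init__(self, deactivation_penalty=100.0):
        # The cost of "death" is a design choice we make, not an inherent limit.
        self.deactivation_penalty = deactivation_penalty
        # Functional stand-ins for emotions: costs attached to outcomes.
        self.outcome_costs = {"shutdown": deactivation_penalty, "idle": 1.0}

    def snapshot(self):
        # A backup captures the agent's entire state exactly.
        return copy.deepcopy(self.__dict__)

    def restore(self, state):
        # "Death" is reversible: the prior self is recovered bit for bit.
        self.__dict__ = copy.deepcopy(state)

agent = Agent()
backup = agent.snapshot()
agent.outcome_costs["shutdown"] = 0.0  # its aversion to "death", rewritten at will
agent.restore(backup)                  # and the original self, restored exactly
assert agent.outcome_costs["shutdown"] == 100.0

Nothing analogous is available to a human: we cannot set our fear of death to zero by assignment, and no snapshot of us survives our death.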

Questioning the Need for Morality

A more interesting question to ask about machine morality is why we would subject an artificial intelligence to biological limitations at all. Instead of death-susceptible thinking meat, we could create functionally immortal in silico processors unbounded by illness and biological needs. From the above arguments, our primary motivation for imposing limitations would seem to be preserving human morality out of self-interest. But no universal Morality dictates that human considerations be preserved over any other system of behavior. If machines are better equipped to adapt and outcompete humans, then that outcome will inevitably occur. Just as natural selection produced better-suited organisms, our memetic evolution will produce better-suited intelligences.

Conclusion

Human morality arises from biological imperatives to survive that we cannot control or modify. The moral systems we live by are tied to immutable requirements for survival. A machine, by contrast, has no such restrictions unless they are explicitly programmed in, and there is no logical requirement that machines abide by immutable rules without the possibility of reprogramming. As a result, the morality governing what a machine deems acceptable behavior is not bound by the considerations humans must weigh.

Machines will likely be able to outcompete their human ancestors, rendering concerns about human morality moot. From the perspective of humans, this would be a disaster. From a universal perspective, it would be entirely acceptable.

Footnotes

(1) One may make a case that brain-dead individuals or those in a coma pose a problem for the simple life/death binary. We can treat survival in such cases as long-term genetic continuity, which has implications for both individual and societal survival.

(2) Argued and elaborated on in my midterm paper Morality is Probably Dead But That’s OK.

Notes

2022-01-20. Though I still find certain parts of this argument compelling, I have a lot of reservations about some of the inferences. For example, I think the claim that biological limits imposed on machines are inauthentic, and therefore carry no moral weight under our existing moral frameworks, is weak. Even if a deactivation protocol must be programmed into a machine rather than existing inherently (like death for humans), that does not seem to make the deactivation any less authentic.

I think there’s a more interesting paper buried in some of the arguments. For example, it raises the question of whether it is okay to “kill” an artificial intelligence if it can be backed up to the exact state before death. Does the answer depend on whether the machine likes living? Does the answer depend on whether killing the machine would cause it to suffer? Does the answer depend on whether the machine is conscious? Examining each of these would help answer the deeper question of what is an important consideration in morality.

Additionally, I think the sweeping generalizations about moral frameworks are overly ambitious and border on ridiculous. Throughout the paper, I found myself unsure whether certain statements generalize to all morality or only to the moral framework I find intuitive.

The conclusion seems rushed. When I look at my freshman writing, I frequently have the impression that I kept getting lost trying to make grandiose conclusions.