This paper was originally written for GENED 1093: Evolving Morality.
Through research on human behavior and psychology, we can reasonably conclude that a universal morality, or Moral Truth, does not exist. However, by breaking away from a single Platonic ideal, we allow for flexibility in the development of local morality, or moral truth, an arrangement better suited to humanity.
A Moral Truth entails a clearly defined ideal of good behavior that would be acceptable and applicable to all. Even if we accept the assumption that Moral Truth cannot be directly observed by humans (as Kant suggests), so that we could never definitively disprove its existence, we can still conclude that humans are incapable of accessing Morality; in effect, this is the same as disproving Moral Truth. Alternatively, if we assume that morality can be defined by humans, it cannot satisfy the second requirement, universality.
Attacking Moral Truth from either stance relies on similar research. We have gathered increasing amounts of empirical evidence for Hume’s position that reason cannot be used to derive ethics and that rationality is subservient to passion. The brain is capable of creating fluent post hoc rationalizations for spontaneous events, as illustrated in the split-brain studies of the 1960s described by Wolman. Patients whose brain hemispheres were surgically separated to treat epilepsy behaved as if sensory input from each side of their body went to a separate individual. Their left sides (controlled by the right hemisphere) could perform tasks based on instructions invisible to their right sides (left hemisphere). When asked for an explanation, the left hemisphere, which contains the language-processing center, would effortlessly construct rationalizations that were believable but empirically incorrect (Wolman). Spontaneous post hoc explanation likely also applies to intuitions about moral dilemmas.
This alone would not kill Morality if intuition were consistently correct. Unfortunately, research by Tversky and Kahneman on the framing effect shows otherwise. Given a choice between two vaccines with differing success rates and reliability but the same expected value, respondents choose differently depending on whether the scenario is framed in terms of losses or in terms of gains. If our intuition were based on a consistent Morality, reframing the question would produce no change in the answers. More broadly, we would not have any disagreement at all, as respondents would all be able to see the right choice.
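To make the equal-expected-value structure concrete, here is a minimal sketch of such a framing problem; the specific numbers (a population of 600, one certain option, one one-in-three gamble) are illustrative stand-ins of my own choosing rather than figures quoted above.

```python
# Illustrative numbers (my own, not from the original study): two vaccine
# programs with the same expected outcome, described once as a "gain" and
# once as a "loss".

population = 600

# Gain frame: lives saved.
saved_certain = 200                       # Vaccine A: 200 of 600 are saved for sure
saved_risky = (1/3) * 600 + (2/3) * 0     # Vaccine B: 1/3 chance all are saved, else none

# Loss frame: lives lost (the same two programs, reworded).
lost_certain = 400                        # Vaccine A: 400 of 600 die for sure
lost_risky = (1/3) * 0 + (2/3) * 600      # Vaccine B: 1/3 chance nobody dies, else all die

# Within each frame the two options have identical expected values,
# and the two frames describe identical outcomes (saved + lost = 600).
assert saved_certain == saved_risky == population - lost_certain == population - lost_risky
print(saved_certain, saved_risky, lost_certain, lost_risky)  # 200 200.0 400 400.0
```

Because the two frames are arithmetically equivalent, any systematic difference in responses must come from the framing itself rather than from the underlying outcomes.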
If we can convincingly rationalize incorrect intuitions, what hope do we have of finding Morality, or even of recognizing it? One could argue that we’re gradually working toward a Moral asymptote. Firstly, justifying this conclusion would require knowing what Moral Truth looks like, a shaky assumption for the reasons above. Secondly, for any given value, there appears to be sufficient evidence to counter the idea of universal improvement. For one, major global powers have largely renounced the forceful spreading of ideas, so ideological uniformity is unlikely. Additionally, for most values, we can reasonably suggest counters. For example, suppose we claim nonviolence is Moral. While we have seen a decrease in massive military deployment, we can argue that this occurred only because of the threat of mutually assured destruction, which is hardly a nonviolent ideal.
Another objection may be that rationalization can override automatic responses. In an example from lecture, subjects who were subliminally shown an image of the face of someone from a different racial or ethnic background showed spikes in amygdala activity. Given more time, brain imaging suggested that the subjects could consciously process the image and then tamp down the initial “immoral” fear response. But why did the subjects decide that their modified response was correct? We end up reasoning in a circle, as we still cannot show that their rational impulse was distinctively Moral.
Given that we cannot adhere to Morality, there is a case for allowing multiple moral truths under a couple of assumptions. First, morality should be decided by memetic competition. Second, moral truths that emerge from such a system “deserve” to win.
As a slightly weaker counterpart to the traits of Morality, we claim that those with a successful morality are inclined to succeed and to have their morality spread. This takes care of the first assumption, since it is essentially the definition of memetic evolution. The second assumption nearly follows from the first, but with some complications. In biological evolution, for example, invasive species outcompete native species, yet we feel uncomfortable saying that they “should.” Because we have no claim to a gold standard of Morality, however, we cannot find a metric by which to reject this assumption (even if it has implications that we would dislike). Though this may appear to fall into the naturalistic fallacy, our assumption of how a successful morality should behave is so closely tied to its definition that “is” and “ought” become entangled. These two assumptions also imply more free will than Morality does. Evolution cannot occur without variation and mutation, which correspond to choice and spontaneity. With a crystallized Morality, we are bound to a single right course of action. Even if someone were to hypothetically choose to be “wrong,” it would confer no benefit, and the universality of Morality would force them against that choice.
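As a loose illustration of the dynamic these assumptions describe, here is a toy simulation of my own construction (the memes, transmission rates, and mutation rate are all invented for illustration, not drawn from the essay or from lecture): moral “memes” are copied in proportion to how readily they spread, while a small mutation rate supplies the variation that selection requires.

```python
import random

# Toy model (my construction, not the essay's): moral "memes" spread by
# imitation in proportion to a hypothetical transmission rate, while a small
# mutation rate supplies variation.

random.seed(0)

MEMES = {"A": 1.2, "B": 1.0, "C": 0.8}             # made-up transmission rates
MUTATION_RATE = 0.01
population = ["A"] * 30 + ["B"] * 40 + ["C"] * 30  # initial mix of moralities

for generation in range(200):
    weights = [MEMES[m] for m in population]
    new_population = []
    for _ in range(len(population)):
        if random.random() < MUTATION_RATE:
            # Variation: occasionally an individual adopts a meme at random.
            new_population.append(random.choice(list(MEMES)))
        else:
            # Selection: otherwise copy an existing meme, weighted by how well it spreads.
            new_population.append(random.choices(population, weights=weights)[0])
    population = new_population

# The most transmissible meme tends to dominate, but mutation keeps the others alive.
print({m: population.count(m) for m in MEMES})
```

In this sketch the most successful morality tends to dominate, yet mutation keeps alternatives in circulation, which is the sense in which the ecosystem never crystallizes into a single fixed Morality.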
Based on the arguments used to call Morality into question, human behavior may only be compatible with a flexible moral ecosystem. Note that these assumptions are not a proposal for a Moral Truth: they are guiding observations and can never decide what is good or bad (much as gravity, which is impartial and which we all obey universally, is not Morality). Also, unlike with Morality, we can use scientific observation to interact with the system. If the application of scientific knowledge generates a new moral meme, then that meme has as much claim to compete in the moral ecosystem as any other idea.