I recently came across a Discover Magazine blog article entitled, “Is resurrecting Neanderthals unethical?” The question was pretty simple: if we developed the ability to use preserved DNA and cloning technology to raise living Neanderthals, would there be anything wrong with doing that?
Most of the people replying made arguments that followed a similar pattern. It would be immoral to create conscious beings just for the purpose of experimenting on them. It would be immoral to create beings just for the military. It would be immoral to create something that would be an outcast in society and would suffer miserably.
What I found interesting about these answers is that they were all based on assumed outcomes of the cloning of Neanderthals. If we cloned a Neanderthal, it’s possible that he would be created just for experimentation, but that isn’t guaranteed, and it wasn’t stated as part of the question. Similarly, it’s possible the military might be interested in such a project, but that’s not an unavoidable outcome, either. Finally, we might figure that such beings would have trouble integrating into our society and might be unhappy, but that is nonetheless an assumption about what would happen after our hypothetical cloning project had taken place.
Nobody seemed to have any argument against the cloning of Neanderthals that did not somehow involve a negative assumption about how they would be treated later on.
So here is my question: How much does the morality of doing something depend on the morality of other things that might happen later, but that are not necessarily outcomes of your original act?
In some ways, this is a very old moral question. The year is 1944, and you have the power to build an atomic bomb. You know that it is possible for this to be terribly misused, but you also know that it won’t necessarily be misused. Does the fact that the atomic bomb could be used for unethical actions make the creation of the bomb itself an unethical act?
I think the answer has to be “no.”
The thing is, even after the original decision is made (the “original action in question,” you might say), there are other decisions involved that lead to the negative consequence. Although you built the bomb, it was someone else’s conscious and avoidable decision to use that bomb for terrorism. Although you cloned the Neanderthal, it was someone else’s conscious and avoidable decision to use that Neanderthal for medical experiments. And so on. The responsibility for the immoral consequence has to attach to the last conscious choice that was made in the chain, not the first: the responsibility for the torture of the Neanderthal goes to the person doing the experiments, not the person who cloned the creature to begin with.
Otherwise, you have basically allowed yourself to be taken in by the Bad Guy Dilemma. In the Bad Guy Dilemma, the Evil Movie Villain says: “I have captured The Fair Maiden. She will die if you do not give me what I demand. It is your decision! If she dies, then her blood is on YOUR hands!”
But it’s obviously a bad argument. Everyone hearing it knows that if the Fair Maiden dies, it is because the Evil Movie Villain killed her. He is trying to bully the Hero into giving in to his demands, using the “dilemma” argument to make him feel guilty. But the fact remains: although any Good Hero will naturally want to save the Maiden… everyone in the audience knows the Villain’s logic is not sound!
Similarly, saying that it is immoral to clone a Neanderthal (or to build a nuclear bomb) just because someone else might do something immoral with it is also a bad argument. It assigns the responsibility for the immoral act to the wrong person in the chain.