Before reading this post, make sure to read yesterday’s. So, my post on Slashdot turned the thread into a little philosophy forum. Some really great comments came back, and I want to try to summarize them here.
My favorite rebuttal was Jim Callahan’s post, which I’ll reproduce below:
Actually, it’s just the potential moral value = actual moral value argument that’s invalid. The “all organisms with complete human genomes have souls (usually, one soul per genome, thus excluding dead skin cells, etc., separated from the largest mass possessing the unique genome)” + “things with souls have moral value” => “Embryos have moral value” is entirely valid, since embryos are organisms with a complete human genome. It’s perfectly rational.
The simple “embryos have no inherent moral value” is not itself a rational statement, but an assertion devoid of logic. To demonstrate rationality, you have to demonstrate a chain of causality from base assertions to a nontrivial solution. In this case the extent of the logic is “non-conscious things have no moral value” + “embryos aren’t conscious” => “embryos have no moral value”. The rest of the grandparent is a series of strawmen, which are fine for making points but don’t actually support the main point in any way.
When it all comes down to it, the two assertions in question are equally valid. They are both one step removed from the base assertions, and the base assertions both consist of an arbitrary statement of an ill-defined term (consciousness and soul) and an arbitrary, unsupportable assertion as to the moral value of said term (soul = good, consciousness = good). Careful definition can swing science into the favor of the consciousness decision, but careful definition can do the same for the soul argument. Even then, science cannot by its nature make moral commands, so whether the people involved are scientific or not is irrelevant.
So, in conclusion, your point on the ‘scientificness’ of the debaters involved is irrelevant, and both of your examples exhibit roughly equivalent rationality. Rebuttal complete.
Although I think Jim was very careful to lay out the logic behind my argument and the logic behind the “other side’s,” I think he stops short when he says that the two are essentially logically equivalent. The thing about the souls argument is that its proponents refuse to provide any reason why an embryo should have more or less of a soul than, say, a chair or a rock. Jim says that the fact that embryos have a complete human genome is the deciding factor. But a chair with the “entire human genome” injected into it (i.e., with human DNA “bonded” into the chair) seems like a pretty easy refutation of this.
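Jim’s point about validity is actually easy to make concrete: both chains of reasoning have exactly the same logical shape, and a proof assistant will accept either one once you grant its premises. Here’s a minimal sketch in Lean (my own formalization, not anything from either of our posts; the predicate names are just illustrative):

```lean
-- Both syllogisms are formally valid: each conclusion follows from its
-- premises by pure logic. The disagreement is entirely over which
-- premises to grant, not over the inferences themselves.
variable (Thing : Type)
variable (hasHumanGenome hasSoul isConscious isEmbryo hasMoralValue : Thing → Prop)

-- Jim's reconstruction of the souls argument.
example
    (p1 : ∀ x, hasHumanGenome x → hasSoul x)     -- organisms with a human genome have souls
    (p2 : ∀ x, hasSoul x → hasMoralValue x)      -- things with souls have moral value
    (p3 : ∀ x, isEmbryo x → hasHumanGenome x)    -- embryos have a complete human genome
    : ∀ x, isEmbryo x → hasMoralValue x :=
  fun x h => p2 x (p1 x (p3 x h))

-- My consciousness argument, in the same form.
example
    (q1 : ∀ x, ¬isConscious x → ¬hasMoralValue x)  -- non-conscious things have no moral value
    (q2 : ∀ x, isEmbryo x → ¬isConscious x)        -- embryos aren't conscious
    : ∀ x, isEmbryo x → ¬hasMoralValue x :=
  fun x h => q1 x (q2 x h)
```

The lesson is that validity is cheap; it measures only the form of an argument. My chair example attacks premise p1, not the inference, and that is exactly where I think the two arguments come apart.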
My argument does arbitrarily say that “consciousness is good,” but consciousness isn’t just some cooked-up concept like souls (it isn’t as metaphysical as my opponents make it out to be, in other words). Consciousness is a concept that encompasses the ability to “lead a life” in the sense we understand it: to have hopes and aspirations, to establish relationships, to create art, and to adapt flexibly to our environment, all those wonderful qualities of human beings. And neuroscientists, more and more, are finding that consciousness has a real basis in the physicality of the brain. Nowadays they describe consciousness as a series of information “loops” in the brain, with “feedforward” information as well as “feedback,” that ultimately results in “awareness” and “perception,” and finally in “sentience” or “consciousness.” And consciousness makes sense as a moral requirement because it essentially says, “all those things which lead lives should not be harmed.” This nicely excludes inanimate objects from having moral value when deciding whether they can be harmed, and it nicely includes animals, who to a great degree do lead lives (albeit less complex ones than ours) and can be deprived of leading that life.
I also don’t think my arguments were just straw men. 😉
There were some other good arguments, too. One interesting one on AI:
Ever worry about that “gray period” sometime in the (probably far) future which we will experience when AI systems start to approach the point where almost everyone will consider them as having consciousness? By your argument, after that point, we will have to start treating them as people (something which I generally agree with).
and another, on the consciousness of people who are sleeping…
“The crux of the matter is, the rock or chair isn’t conscious, and that’s why they have no moral value.”
So a human who is sleeping, and thus not conscious, would have no moral value?
To respond to both of these, I’ll post my actual Slashdot response.
“So a human who is sleeping, and thus not conscious, would have no moral value?”
Sorry, again, here I was assuming some background reading about what “consciousness” is. Unfortunately, in Philosophy (this is a flaw of the subject), terms often start off quite vague, and Philosophers make a habit of trying to really define a term. When debating with people who haven’t studied it, I forget that consciousness takes on a different meaning in regular discussion. “Consciousness” as I’m using it has nothing to do with “being awake” or “being asleep.” Whether you are awake or asleep, you are conscious. You are not “unconscious” when asleep, merely in a state with the potential to wake; your brain doesn’t “shut off” when you’re asleep. It simply doesn’t provide you with the constant stream of sense-input you associate with a waking state.
Comas are definitely a gray area. I really don’t know enough about the brain states of humans in comas to make any judgment about whether they are still “conscious,” but I’d say they probably aren’t, especially if it’s a coma from which that person will never recover. If it is a coma which one can recover from (and, after which, be conscious), I can only assume that the brain was either a) in a conscious state the whole time or b) “broken” into an unconscious state (i.e., it no longer functioned) but then “healed” and went into a conscious state again. This (b) possibility makes comas very much a gray area. However, as I like to say to friends: gray areas don’t mean you have the wrong principle, as long as your principle works in the clear-cut cases. For example, the moral principle that “killing is wrong” has lots of gray areas: what if the person you are killing killed your entire family? What if you fire a gun at a target on a wall and slip and shoot your friend instead? But that’s not to say the moral principle, “killing is wrong,” is bad just because one can find gray-area cases in which killing may not be wrong. It just means that things like time and causation can be confused, and things like intent or the potential to avoid an accident or negligent action are hard to measure.
Even some concepts we have that seem very clear-cut have gray areas. Take your concept of a “table.” What is a table? Think of modern furniture designers who fused the concepts of “table” and “chair” to produce something that seems to be a hybrid of the two. Okay, so maybe you define a table functionally: something onto which one can place objects. But now imagine a “table” whose surface spins around at high speed, so that nothing can be placed on it. Is it still a table? Okay, so maybe you define it physically, as a surface atop some number of “legs.” But now imagine a table that hangs from the ceiling by steel wire. Etc., etc. I know this seems rather nit-picky, but that’s really what gray areas are, and that’s why I think they’re fun to think about; ultimately, though, one should evaluate a moral principle by how it performs in the general case, and then make sure it doesn’t do “insane” things in the gray areas.
What my argument above tried to do is show that a) since embryos are clearly not conscious beings (nor were they ever conscious beings), they don’t demand any special moral protection, and b) moral protection has only been granted to them because embryos have the potential to become conscious beings, the so-called potentiality principle, which has other unacceptable implications.
I really think some great points were raised, however.
For example, one problem with my consciousness argument is what another poster raised: that “strong AI,” should it ever come about (and thinkers like Jeff Hawkins in “On Intelligence” make me believe it just may, some day), would make us responsible for granting these new machines moral value. I don’t know that there’s anything wrong with that; it may just seem unnatural because AI machines are so different from us, but then again, so was the alien life form in the example I gave.
What I think is funny is that we are all thinking about this way more than the people who really have the burden of thinking about it: anti-abortion activists.