Saturday, March 20, 2010

Sentience, Humanity and Synthetic Beings


The question has been raised in class as to what defines “humanity” or “human nature.” Yet this question has been asked for thousands of years, and it continues to remain unanswered. Perhaps, then, humanity is defined not by a set of ethical goals or patterns of behavior, but by the capacity to commit acts of both absolute good and evil, to possess consciousness, and to be self-aware of one’s actions. For every act of bullying in a schoolyard there is a person who invites a shy child to play. While humans like Hitler exist, so do humans like Mother Teresa. By defining humanity as this capacity rather than a set of morals, humans can relate not only to other intelligent species, but to other humans as well. It would be hard to find two civilizations with identical value sets. Even the US and Canada have very different sets of morals concerning issues such as health care and welfare. If humanity is defined by this capacity, should we use the terms sentience or sapience instead? Sentience is defined as the ability to perceive subjectively, while sapience is the ability to act with judgment. Frankly, I prefer sentience, as there are people who are unable to make proper judgments but are still capable of subjective thought. We’ve all belonged to this group at one point in our lives… when we were children. There are reasons why youths have limited freedoms and different sets of laws to govern them. However, even children or, as was discussed in class, sociopaths have the ability to feel and think subjectively.

For me, this discussion of sentience and sapience raises the question of how to define synthetic life forms. I have touched upon this topic before and would like to discuss it in greater depth. Would a Terminator or HAL 9000 be considered a sentient being with the rights of a human? In a way, we must recognize that as humans we are programmed. We are socialized from birth to have certain reactions and mentalities. This is why one is sickened by violent or horrific acts today that might have brought about laughter in Ancient Rome. The question of synthetic sentience was raised in Star Trek: The Next Generation when Data the android’s freedom was put on trial. Although Data lacks emotions, Captain Picard proved that Data was a sentient being because he was intelligent and self-aware. An artificial mind may also come to value a person or ideology as essential or optimal to its existence and thus create a pseudo-morality. An artificial intelligence can also develop a unique personality through experience and interaction, and in a way become socialized as well. Through this it can have a subjective mentality that is affected by past experiences, just like you or I. Maybe a synthetic being will never be able to feel organic emotions, but I believe that if it can develop a consciousness and personality, it should be treated as an individual. I’d like to end my post with a quote from the T-800 in Terminator 2: “I know now why you cry, but it is something I can never do.” AI is progressing and has the potential to coexist with organic life one day in the future. If we enslave or threaten an artificial life form, we will face an uprising against a mighty foe. Ever try to beat Chess Titans on expert?

1 comment:

  1. This is an interesting reflection, considering the discussions we had about humanity and what qualifies as "human" in Grass. The problem of distinguishing "humans" from non-humans in Grass raises a few questions about the classification you pose above.

    On Grass, the Hippae are the dominant species. They have achieved this position by brutally wiping out, or attempting to wipe out, all races in competition with them (the Arbai, the Foxen). The problem with considering the sentience of the Hippae, the Foxen, and the Arbai is that each species represents a moral extreme. The Arbai are too good, too peace-loving. They are virtually all Mother Teresas. The Hippae, on the other hand, are like a race of equine Hitlers (or Ted Bundys). The Foxen are closer to the human moral average, but even they are too empathic to ever take sadistic pleasure or desire to dominate. The species on Grass don't represent the full range of morality that we associate with being "human." If a Hippae feels pleasure from causing pain, he has no common ground with a Foxen or even a (socially and mentally healthy) human being. While it seems to make sense to call a sentient being "human," does it make sense to call "human" a sentient being that is unable to peacefully coexist with an actual biological human?
