Friday, September 25, 2009

Can humans transcend being robots?

Logged on the other day and was confronted with the enticing/intriguing headline, "Can Robots Make Ethical Decisions?"

I get the drift here, but I think articles of this nature conceive the problem exactly backwards. (And for those of you who've been with us for a while now and who know what's coming, yes, I realize I'm repeating myself, so you don't need to remind me of why my theories on predetermination are all wet. Unless you have to, that is.) The question isn't whether robots can be taught to be "as moral as" humans. The real question, to my mind, is whether, in the first place, humans
(being, essentially, robots with lots of skin and hair and pretty teeth* and the like) can exercise any conscious "ethical" control over acts they were predetermined to commit anyway. After all, if we have to do everything we do, where do the "ethics" come into play, in the standard sense of the term? And for those of you who argue something like "God gave us the free will to act in a moral manner," well, if you think about it, a highly ethical internal compass is another factor that argues against free will. If you have a strong conscience and an overpowering innate commitment to doing the right thing...then you can't do the wrong thing. Right? You simply can't choose to misbehave, any more than Mother Teresa could've chosen to just walk out of Calcutta, trade those weathered vestments for some nice new Armani threads, hop the next jet to LAX, move in next to Britney or Paris and begin living the sybaritic life....

* except in the U.K.


RevRon's Rants said...

"... you don't need to remind me of why my theories on predetermination are all wet. Unless you have to, that is."

Given that you've attempted to preclude any challenge to your basic assumption in this post, you've rather limited the scope of discussion to responses that are consistent with your own core perspective, Steve. I'm just saying...

David Brennan said...

You gotta love this one:

"Footbridge version: Ian is on the footbridge over the trolley track. He is next to a heavy object, which he can shove onto the track in the path of the trolley to stop it, thereby preventing it from killing the five people. The heavy object is a man, standing next to Ian with his back turned. Ian can shove the man onto the track, resulting in death; or he can refrain from doing this, letting the five die.

Is it morally permissible for Ian to shove the man?"

If Ian thought that the needs of the many outweighed the needs of the few...then why wouldn't he just jump himself? He'd have a much better chance of hitting his target than if he pushed somebody else, who would be taken by surprise and fight back.

Regarding the bigger thesis in the SHAM column: (1) I don't think it's been remotely established that free will is a fraud. That seems impossible because, if it were, then the native peoples of India, Hong Kong, and New Zealand would have been wholly unaffected by the British Empire's attempts to influence their cultures. To the contrary, every one of those places has been profoundly changed, demonstrating that the people chose Western ways of life over the old ones (and thank God they did: without the Easterners, we useless American men wouldn't have anything engineered anymore!)

(2) Calling humans machines is trite reductionism at best, and simplification to the point of worthlessness at worst. I won't go into exotic detail here because I don't think anybody would care, but the old theory of "vitalism" (the belief that living organisms cannot be completely described or understood through physical or chemical properties alone) has more and more merit as we look at the genetic code and find that the simplest amoebas have mechanisms far more complex than computer software (which is why we can't create life wholesale).

(3) And the machine vs. vitalism argument, as it relates to humans, is also much more complicated than the reductionists claim. For example, mathematician and sci-fi author Vernor Vinge has recently suggested that current estimates of the brain's memory and processing capacity are significantly too low (an estimate was wrong: what a shocker!) and that the projected date when computers will surpass humans in processing ability is therefore far too soon.

The point I'm making, I guess, is that we should let the great Indian and Indonesian computer programmers attempt to quantify ethics free from stupid proclamations that tell them we already totally understand human nature and free will. We don't.

(Anyway, cynical American sh-- isn't gonna stop the great Easterners from creating and building. So maybe it's pointless to even try and stand up for them.)

Tyro said...

@Steve - I'm too new here to know your views on determinism, but didn't that go out the door when Heisenberg came in?

@David - I don't understand what the British Empire has to do with free will. Organic robots can respond to new situations, so it seems like your example neither helps your case nor addresses Steve's.

When we walk past a stray dog at night when no one can see us, is it really even odds whether we break its legs or give it a pat on the head? We each have an internal character and this greatly restricts the choices we make.

Cosmic Connie said...

I'm kind of rushed today so I don't have time to jump into this in depth, but I did want to mention an interesting book I'm reading now that sheds light on the matter (from a biology-is-destiny point of view): "Evil Genes: Why Rome Fell, Hitler Rose, Enron Failed, and My Sister Stole My Mother's Boyfriend," by Barbara Oakley, Ph.D. (Prometheus Books, 2007). According to this book, genes and such aren't everything, but they go a long way towards 'splainin' a lot of puzzling things about humans -- everything from personality quirks and disorders to much of what we consider to be "evil" behavior.

Steve Salerno said...

Connie: Funny you should mention that. In a private email to me some weeks back, someone suggested that I "must have evil genes," then put a smiley after it--and I thought it was just a generic thought (albeit a rather pointed one, especially with the smiley) until just last week when someone else mentioned that same book. And now comes your comment. Synchronicity, huh?

Anonymous said...

Not exactly to your point, Steve, but check out this brief piece that talks about willpower and its limitations (as applied to a very specific issue, but having broader implications as well).

Anonymous said...

Humans are animals first and last. If we could transcend that state, then your speculations about robots and determinism might bear fruit, but I ain't holding my breath.

diego said...

Well, if you could affect a predetermined situation, you couldn't call it predetermined, could you?

A typical paradox of the type: "everything I say is false."

Steve Salerno said...

Diego: It's not really a paradox, because the "effect" you're having on a situation is also predetermined. You just don't know it.