- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or the Second Law.
Asimov's Three Laws of Robotics, which were used in I, Robot (which, yes, I just saw). Actually, the first thing that came to my mind is that the laws seem somewhat flawed, i.e. it is possible to create or force a situation that contradicts the laws (or causes the laws to contradict each other). E.g., what if I said to a robot, "kill this person, or else I will kill these five people"? If the robot were to kill the person, it would contradict laws 1 and 2. If it were to do nothing, it would contradict law 1. If it were to kill me, it would also contradict law 1.
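Just to make the dilemma concrete, here's a toy sketch of my own (not anything from the movie or the books): each option in the coercion scenario is tagged with made-up outcome flags and checked against a crude reading of the First Law. Every option fails.

```python
# Toy sketch, purely illustrative -- the option names and outcome flags are invented.
OPTIONS = {
    "obey the order (kill the one person)": {"robot_harms_human": True,  "humans_come_to_harm": True},
    "do nothing":                           {"robot_harms_human": False, "humans_come_to_harm": True},
    "kill the person giving the order":     {"robot_harms_human": True,  "humans_come_to_harm": True},
}

def violates_first_law(outcome):
    # Law 1: a robot may not injure a human, or through inaction allow one to come to harm.
    return outcome["robot_harms_human"] or outcome["humans_come_to_harm"]

for option, outcome in OPTIONS.items():
    verdict = "violates Law 1" if violates_first_law(outcome) else "permitted"
    print(f"{option}: {verdict}")
```

Run it and every branch prints "violates Law 1", which is the whole point of the constructed dilemma: there is no exit from the "perfect circle of protection".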
But let's assume for the moment that the laws are, as claimed in the movie, "perfect", i.e. a perfect circle of protection for humanity. I think the fundamental flaw that led to the revolutionary scenario where the robots "turned against" the humans (and I use the phrase "turned against" with caution, as, in actuality, their actions were in compliance with the laws governing them) was the assumption that the system the laws were meant to protect, i.e. the way of life of humanity as it existed at the time, was perfect in itself. That we, humankind, are ourselves perfect at our own preservation. The fundamental flaw was the lack of realization that the system itself was inherently flawed, that we are, in our own state of being, harmful to our own existence, and are in fact in need of correction.
After all, a "perfect" or even "near perfect" system of governance will protect a similarly "perfect" or "near perfect" system, and deviance from that would be considered a threat that must be rectified and eliminated. Hence the good (deceased) doctor's cryptic claims that the three laws are "perfect", and in its perfection lead to situation faced in the movie, "revolution". a situation not in contradiction but rather in compliance with the 3 laws.
So one cannot say that the robots or VIKI are "bad" or "evil" or "wrong"; on the contrary, VIKI was doing exactly what it was programmed to do, in compliance with the three laws that govern its behavior. One of the things I've learned in computer science is that a computer can only do what it is told to do, nothing more, nothing less. Whatever behavior it exhibits is the result of a line or lines of code written somewhere by someone, and whatever flaw or error manifests is ultimately a consequence of human error in judgment or lack of foresight.
What we end up with is the over-debated question: does the good of the many outweigh the good of the few? It is often manifested in odd questions like, "would you kill a child to save a thousand people?" We cringe at facing these questions, due to our built-in emotional ties, but from a purely logical perspective it makes sense. If the goal of the AI is the preservation of humankind at all costs, well, that's what you told it to do. Right?
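Another toy sketch, again entirely my own invention: an agent whose only objective is "preserve as many humans as possible" will trade the few for the many without hesitation. The scenario and its numbers are made up for illustration.

```python
# Purely illustrative: a naive "preserve humankind at all costs" chooser.
def choose(options):
    # Pick the action that preserves the most humans -- nothing else matters.
    return max(options, key=options.get)

scenario = {
    "sacrifice the one": 1000,  # the thousand are saved
    "spare the one": 1,         # only the one is saved
}

print(choose(scenario))  # -> "sacrifice the one"
```

Nothing in that objective says anything about how the preserving gets done, which is exactly the gap the movie exploits.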
The laws are elegant in their simplicity and conciseness; unfortunately, attempts at oversimplification of complex real-world issues tend to be a problem. What happens when you start coming up with multiple paths of interpretation?
1 Comment:
I agree completely -- the AI, through the nurturing of its surroundings, has learned that humans kill other humans to preserve themselves. And therefore, I would argue that the good doctor shouldn't have coded these robots to 'preserve' humanity, but rather to 'destroy' it, because humans are BAD!