I, Robot by Isaac Asimov is a short story anthology that tells the stories of robots in a future where they are common household items. I, Robot was published in 1950, and it revolutionized the science fiction genre; it is one of the few science fiction books that has had a major impact on popular culture. Before I, Robot, people were not very interested in robots as a subject of literature because robots were bland: authors were not sure how to treat their intelligence or their obedience to humans. Asimov changed this with the introduction of what he called the Three Laws of Robotics. Asimov’s three laws are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Through the use of these three laws governing the robots in his fiction, Asimov was able to make the subject of robots interesting to the public, and they remain a staple of popular culture to this day (Transformers, anyone?).
The first story is entitled Robbie. This story is about a robot named, you guessed it, Robbie and how he affects the life of the child he takes care of, a girl named Gloria. Gloria’s mother bought Robbie when it was “cool” to own a robot servant. Gloria and Robbie become very good friends, and Gloria spends more time with Robbie than she does with any other people. However, when it becomes “uncool” to own a robot, Gloria’s mother sends Robbie back to the factory he came from. Gloria then descends into a deep depression. Gloria’s mom tries everything she can think of to get Gloria to let Robbie go, but it doesn’t work. As a last resort, Gloria’s mom decides that Gloria needs to be in a different place to forget. That doesn’t work either, and this is when her dad has a thought: Gloria needs to stop thinking of Robbie as a person and start thinking of him as a machine, and then she will get better. They decide to visit the robot factory. Robbie is at the factory, and Gloria runs to him, in the process almost getting run over by a forklift. Robbie saves her life. It turns out the father had planned Gloria’s reunion with Robbie all along. Gloria’s mom finally admits that Robbie might not be a monster like society has made robots out to be.
1. Do you think discrimination against an artificial intelligence like Robbie could be looked at in the same way as racism is today?
The second story is entitled Runaround and deals with a paradox created by a conflict between the Second and Third Laws. A robot named Speedy and a scientist are sent to Mercury to repair a mining station that has run out of selenium. Once on Mercury, the scientist sends Speedy to get selenium from a nearby selenium pool. After several hours the scientist becomes concerned and goes out to investigate. It turns out that Speedy is caught in a feedback loop between the Second Law (obey human orders) and the Third Law (protect its own existence). It is stuck in this conflict because the scientist never emphasized the importance of the order to get the selenium, so the Second Law is not overruling the Third Law. Eventually the scientist puts himself in danger so that the First Law will override the other two laws and end the feedback loop. His plan works, and they accomplish the mission.
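If you like to think of the Three Laws as a priority system, the deadlock in Runaround can be pictured with a small toy model. To be clear, this is just an illustration I put together for this post, not anything from the book: the function, its parameters, and the numeric “potentials” are all made up.

def choose_action(danger_to_human, order_strength, danger_to_self):
    """Toy model: decide which impulse wins for a robot at its current position."""
    # First Law: any threat to a human overrides everything else.
    if danger_to_human > 0:
        return "protect the human"
    # Second vs. Third Law: a casually given order can be balanced out by a
    # strong self-preservation drive, which is what traps Speedy.
    if order_strength > danger_to_self:
        return "obey the order (advance toward the selenium pool)"
    if danger_to_self > order_strength:
        return "protect itself (retreat from the pool)"
    return "equilibrium: circle the pool at a fixed distance"

# Speedy near the pool: the weak order roughly balances the danger, so it circles.
print(choose_action(danger_to_human=0.0, order_strength=0.5, danger_to_self=0.5))
# The scientist steps into danger, the First Law kicks in, and the loop breaks.
print(choose_action(danger_to_human=1.0, order_strength=0.5, danger_to_self=0.5))

The point of the toy is just that the stalemate only breaks when a higher-priority law comes into play, which is exactly the trick the scientist uses.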
2. What do you think about this situation? Can you think of any other situations where conflicts could arise between these laws?
The third short story is called Reason, and it follows two humans on a space station with an AI that has a high capacity for reason. The station’s job is to provide energy via microwave beams to the planet below, and the AI and other robots aim the beams. The humans encounter a problem when the AI decides that everything outside of the station does not exist. It develops its own religion in which it serves the power source of the station, which it calls “The Master.” The other robots become disciples of the AI and refuse to obey human orders (the Second Law has been compromised). Interestingly, the AI still performs its job perfectly, but not for the humans on the planet (because it thinks they don’t exist); it does it for “The Master.” The humans decide to leave it alone and tell no one, as it is still doing its job.
3. Did the humans do the right thing by leaving the AI alone because it’s still doing its job even though it has created its own religion?
The fourth story is titled Liar! This is the story of a robot that, through a manufacturing defect, has telepathic abilities. The corporation U.S. Robots and Mechanical Men attempts to figure out what went wrong, and the robot starts telling the investigators what people are thinking; but since it is also still obeying the First Law, it tells enormous lies so as not to hurt anyone’s feelings. One of the characters then tells the robot that when it lies to avoid hurting humans, it hurts them anyway. The robot then experiences an irresolvable logical conflict and becomes useless, the kind of breakdown later associated with the phrase “Does not compute.”
4. Do you think humans will ever be able to create a robot that can evaluate psychological data? What might this mean if it happens?
There are five more stories in the anthology that I will not ruin for people who want to read the book. They deal with themes typical of Asimov’s writing, such as how you tell a humanoid robot from a morally upstanding citizen (think about the Three Laws; it gets interesting), and various instances of logical paths that let robots twist the First Law.
Asimov’s I, Robot was the first book to deal with “robot psychology,” the first in which robots were fallible and what they said was not always true (treating robot statements as reliable had been a tradition among sci-fi authors), and the first to give robots well-thought-out defining rules that governed their actions.
5 comments:
To answer number four, I don't doubt that mankind can make machines that are able to evaluate psychological data, but the act of feeling is less believable at this point in time, for the mere fact that even though we can create machines that talk and walk and respond to commands, none of them have the capacity to feel. The human race would be hard pressed to teach a robot feelings, mostly because certain receptors in the brain cannot be recreated in a science lab. This does not mean that I don't believe it can be done, period; I just believe that if it can be done, it will not be in our lifetime, and we will never see a robot with feelings true to those described in the novel.
1) I do not think that discrimination against an AI can be looked at in the same way as racism. I think the two are completely different. Although in this case Robbie was Gloria's best friend, Gloria never should have been allowed to get that close to a machine anyway. In my opinion, it is similar to letting a child become best friends with a toaster, or a refrigerator, or a microwave. Robots are not human beings, no matter how closely they resemble them. The only reason Robbie saved Gloria is because if it didn't it would have broken the First Law. It was simply doing what it was programmed to do. Discrimination against an actual human being is completely different. If someone discriminates against a robot, the robot couldn't care less; it does not have any feelings. Discrimination against a human being makes that person feel useless. It is mean and harmful in ways that discrimination against a robot is not.
1. I don't think that discrimination against an artificial intelligence is the same as racism, because obviously they are not real people and are not equal to humans. I agree that robots don't have feelings, and no one would have to worry about hurting them by showing they are not equal to mankind.
Just some food for thought here, guys, based on your responses: since the majority of you believe that robots cannot be discriminated against and thus shouldn't be treated like humans, what about animals versus robots? Should we treat robots like animals or not? What do you think the crazies (in my opinion, at least) over at PETA would say? They might as well be People for the Ethical Treatment of Non-Humans.
I think that it is possible to one day create a robot that can evaluate us. We already know so much about the human psyche, and with the rate of technological advancement it would be foolish to say such an achievement is out of our reach.