In 1942, science fiction writer Isaac Asimov attempted to lay out a philosophical and moral framework for ensuring robots serve humanity, and guarding against their becoming destructive overlords. This effort resulted in what became known as Asimov’s Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Today, more than 70 years after Asimov’s first attempt, we have much more experience with robots, including having them drive us around, at least under good conditions. As robots become integrated into society more widely, we need to be sure they’ll behave well among us. We are approaching the time when robots in our daily lives will be making decisions about how to act. Are Asimov’s Three Laws good enough to guide robot behavior in our society, or should we find ways to improve on them?

Asimov knew they weren’t perfect

Asimov’s “I, Robot” stories explore a number of unintended consequences and downright failures of the Three Laws.