Why Asimov’s Laws of Robotics Don’t Work – Computerphile

So, should we do a video about the three laws of robotics, then? Because it keeps coming up in the comments. Okay, so the thing is, you won't hear serious AI researchers talking about the three laws of robotics, because they don't work. They never worked. So I think people don't see the three laws talked about because they're not serious. They haven't been relevant for a very long time, and they're out of a science fiction book, you know? But I'm going to do it. I want to be clear that I'm not taking these seriously, right? I'm going to talk about them anyway, because it needs to be talked about.

So these are some rules that science fiction author Isaac Asimov came up with, in his stories, as an attempted solution to the problem of making sure that artificial intelligence did what we want it to do. Shall we read them out then and see what they are? Oh yeah, I'll look them up. Give me a second. Okay, right, so they are:

Law 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Law 2: A robot must obey orders given it by human beings, except where such orders would conflict with the first law.
Law 3: A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

I think there was a zeroth one later as well:

Law 0: A robot may not harm humanity or, by inaction, allow humanity to come to harm.

So it's weird that these keep coming up, because, firstly, they were made by someone who was writing stories, right? They're optimized for story-writing. But they don't even work in the books. If you read the books, they're all about the ways that these rules go wrong, the various negative consequences. The most unrealistic thing, in my opinion, about the way Asimov did his stuff was the way that things go wrong and then get fixed, right?
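
As read out above, the laws form a strict priority ordering: Law 1 overrides Law 2, which overrides Law 3. A minimal Python sketch of that structure is below. To be clear, every predicate here is a made-up stub passed in by the caller; the whole argument of this video is that nobody can actually fill those predicates in without solving ethics first.

```python
# Naive sketch: the three laws as a strict priority filter over candidate
# actions. harms_human, violates_order, and endangers_self are hypothetical
# stand-ins for the (unsolved) definitions of 'harm', 'obedience', and
# 'self-preservation'.

def choose_action(candidates, harms_human, violates_order, endangers_self):
    """Pick the first action that survives the laws, applied in order."""
    # Law 1 filters first and is absolute.
    safe = [a for a in candidates if not harms_human(a)]
    # Law 2 filters among Law-1-safe actions; if no safe action obeys,
    # fall back to any safe action (a design choice the laws don't settle).
    obedient = [a for a in safe if not violates_order(a)] or safe
    # Law 3 filters last, yielding to Laws 1 and 2 the same way.
    surviving = [a for a in obedient if not endangers_self(a)] or obedient
    return surviving[0] if surviving else None

# Toy usage: only 'wait' passes all three filters.
best = choose_action(
    ["push_human", "ignore_order", "self_destruct", "wait"],
    harms_human=lambda a: a == "push_human",
    violates_order=lambda a: a == "ignore_order",
    endangers_self=lambda a: a == "self_destruct",
)
print(best)  # -> wait
```

The control flow is trivial; all the difficulty lives inside the three lambdas, which is exactly where the rest of the discussion goes.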
Most of the time, if you have a superintelligence that is doing something you don't want it to do, there's probably no hero who's going to save the day with cleverness. Real life doesn't work that way, generally speaking.

The second problem is that they're written in English. How do you define these things? How do you define 'human' without first having to take an ethical stand on almost every issue? And if 'human' wasn't hard enough, you then have to define 'harm', and you've got the same problem again. Almost any definitions you give for those words, really solid, unambiguous definitions that don't rely on human intuition, result in weird quirks of philosophy, and in your AI doing something you really don't want it to do.

The thing is, in order to encode that rule, "Don't allow a human being to come to harm", in a way that means anything close to what we intuitively understand it to mean, you would have to encode within the words 'human' and 'harm' the entire field of ethics. You have to solve ethics, comprehensively, and then use that to make your definitions. So it doesn't solve the problem; it pushes the problem back one step, into "well, how do we define these terms?"

When I say the word 'human', you know what I mean, and that's not because either of us has a rigorous definition of what a human is. We've just sort of learned by general association what a human is, and then the word 'human' points to that structure in your brain, but I'm not really transferring the content to you. So you can't just say 'human' in the utility function of an AI and have it know what that means. You have to specify. You have to come up with a definition. And it turns out that coming up with a good definition of something like 'human' is extremely difficult. It's a really hard problem of, essentially, moral philosophy. You would think it would be semantics, but it really isn't, because, okay, we can agree that I'm a human and you're a human.
That's fine. And this, for example, is a table, and therefore not a human. The easy stuff, the central examples of the classes, is obvious. But the edge cases, the boundaries of the classes, become really important: the areas in which we're not sure exactly what counts as a human. So, for example, people who haven't been born yet, in the abstract, like people who hypothetically could be born ten years in the future: do they count? People in a persistent vegetative state, who don't have any brain activity: do they fully count as people? People who have died, or unborn fetuses? I mean, there's a huge debate going on even as we speak about whether they count as people. The higher animals: should we include, maybe, dolphins, chimpanzees, something like that? Do they have weight?

And so it turns out you can't make your specification of 'human' without taking an ethical stance on all of these issues. All kinds of weird, hypothetical edge cases, which you otherwise wouldn't think of, become relevant when you're talking about a very powerful machine intelligence. So, for example, let's say we decide that dead people don't count as humans. Then you have an AI which will never attempt CPR. This person's died; they're gone, forget about it, done. Whereas we would say, no, hang on a second, they're only dead temporarily; we can bring them back. Okay, fine, so then we'll say that people who are dead still count, if they haven't been dead for... well, how long? How long do you have to be dead for? If you get that wrong and just say, oh, it's fine, do try to bring people back once they're dead, then you may end up with a machine desperately trying to revive everyone who has ever died in all of history, because they all count, they all have moral weight. Do we want that? I don't know, maybe. But you've got to decide, right? And that's inherent in your definition of 'human'.
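
The point above can be made concrete with a toy predicate. In the hypothetical sketch below, every branch is an ethical commitment rather than an engineering decision, and the specific threshold (a ten-minute revival window) is invented purely for illustration:

```python
# Sketch of why any machine-usable definition of 'human' smuggles in ethics:
# each branch below takes a moral stance the programmer never wanted to take.
# The ten-minute revival window is an arbitrary, illustrative number.

def counts_as_human(entity):
    if entity.get("alive"):
        return True
    # Dead-but-revivable: where this line is drawn decides whether the AI
    # ever attempts CPR, or tries to resurrect everyone in history.
    if entity.get("minutes_since_death", float("inf")) < 10:
        return True
    # Simulated brains: counting them may push the AI toward a world with
    # no physical humans at all. True or False? Nobody knows; that's the point.
    if entity.get("simulated_brain"):
        return True
    return False

print(counts_as_human({"alive": True}))               # True
print(counts_as_human({"minutes_since_death": 5}))    # True
print(counts_as_human({"minutes_since_death": 500}))  # False
```

Any real utility function would need something like this fully specified in advance, for every edge case, before the machine is switched on.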
You have to take a stance on all kinds of moral issues, where we don't actually know the answer with confidence, just to program the thing in. And then it gets even harder than that, because there are edge cases which don't exist right now. Living people, dead people, unborn people, animals: fine. But there are all kinds of hypothetical things which could exist and which may or may not count as human. For example, emulated or simulated brains. If you have a very accurate scan of someone's brain and you run that simulation, is that a person? Does that count? And whichever way you slice that, you get interesting outcomes. If that counts as a person, then your machine might be motivated to bring about a situation in which there are no physical humans, because physical humans are very difficult to provide for, whereas with simulated humans you can simulate their inputs and have a much nicer environment for everyone. Is that what we want? I don't know. I don't think anybody does. But the point is, you're trying to write an AI here, right? You're an AI developer. You didn't sign up for this.

We'd like to thank Audible.com for sponsoring this episode of Computerphile. If you like books, check out Audible.com's huge range of audiobooks, and if you go to Audible.com/computerphile, there's a chance to download one for free. Calum Chace has written a book called Pandora's Brain, a thriller centered around artificial general intelligence, and if you like that story, there's a supporting nonfiction book called Surviving AI which is also worth checking out. So thanks to Audible for sponsoring this episode of Computerphile. Remember: audible.com/computerphile. Download a book for free.

100 Replies to “Why Asimov’s Laws of Robotics Don’t Work – Computerphile”

  1. The book "I, Robot" was full of stories about how the "laws" don't work, and yet dummies keep parroting them like they are a blueprint for AI.

  2. Asimov's point was to show those laws as apparently logical but inevitably dangerous; that's the whole point of "I, Robot".

  3. C'mon, seriously? "Harm" = likely to cause a permanent, significant drop in vital function (you can program in the human anatomy and make sure it knows which parts are important); "human" = bipedal animal, which happens to be the only thing in the world remotely shaped like a human, of which we have probably about 1 trillion photographic and video training cases to train the AI. If you really think boundary cases like embryos and paraplegics and future humans are going to cripple the system (which they won't), take an hour and code them in. The 3 Laws may well prove unworkable, but not because of this argument.

  4. Why not have only one rule: each robot must obey its owning human, no matter what.
    Then when a robot goes on a killing rampage we just find out what human owns it and punish them for not knowing how to properly program their robot.

    "Sorry sir you should have told your robot if the shop was closed to return home. Maybe then he wouldn't have busted through the window and stolen all that coffee."

  5. He mostly focused on the difficulty of defining "Human", but I think it's much much more difficult to define "Harm", the other word he mentioned. Some of the edge cases of what can be considered human could be tricky, but what constitutes harm?

    If I smoke too much, is that harmful? Will an AI be obligated to restrain me from smoking?
    Or, driving? By driving, I increase the probability that I or others will die by my action. Is that harm?
    What about poor workplace conditions?
    What about insults, does psychological harm count as harm?

    I think the difficulties of defining "Harm" are even more illustrative of the problem that he's getting at.

  6. Isaac Asimov’s three laws were never about a practical set of rules for AI. They were about Asimov’s take on the Frankenstein Complex and the unintended consequences of mankind’s inventions and inventiveness. Asimov himself said Frankenstein was the first science fiction novel.

  7. Just because implementation is difficult does not mean the ideas behind the laws are flawed. Each example edge case can be defined. E.g. don't try to revive a body where no human DNA replication is going on. Are simulations humans? The answer is no. Are embryos humans? No. Add some laws that put some weight in for the environment. Sure, you have to harden up and make some tough calls. But that does not mean it can't be done.

  8. Asimov's laws are not meant to be taken seriously… that's the main point… It's a fascist way to simplify the world into simple universal rules… It's a dystopic world that ends in chaos…

  9. It's hard for me to take your argument seriously when you confuse the words "human" and "person". A dolphin may be a person; it is not a human. Very disappointing 😞

  10. This guy talks like he has some kind of disdain for Mr Asimov and his books. The man was a genius and wrote fantastic stories where the solutions to the problems were always there, but ever so slightly out of reach of the reader. I don't think anyone should take the laws seriously; they were written half a century ago, before AI or robots were even a thing. But the presenter just pauses and makes these offhanded adjectives for what Isaac Asimov did or wrote.

  11. The “three laws” were always a work of fiction. It is axiomatic that one of the first inclinations when new technologies are introduced is to find a way to weaponize them.

  12. The overarching point of this video is correct, but a lot of the points about "ethics" and "intuition" are less relevant when you consider the level of intelligence the AI would have to reach to attain even a vague sentience. AI is designed by humans, and by the point it could attain sentience to a level at which these laws would even matter, it would most likely already have developed a deep understanding of ethics and intuition due to the nature of its creators.

    You're right that a rhetorical device isn't a solution to all issues regarding AI, but I found the explanations pretty weak personally.

  13. Private property (self-ownership) derives from Hoppe's argumentation ethics. It not only solves human conflicts without generating others, but also resolves the conflict between AIs and humans.

  14. I feel like most people forget that every one of Asimov's robot stories is about why his Three Laws of Robotics cause problems for people who build robots. All of them are about weird edge cases that cause the laws to break down.

  15. This video really bugs me. The entire point of Asimov's stories, as everyone in the comments points out, is to show what could go wrong with the three laws. He keeps adding to and altering them in response to how things went wrong in previous stories, to allow new stories to exist. The problem this video presents has little to do with the laws themselves, though; what he takes issue with is in fact semantics, not ethics. He doesn't explain the laws properly and instead creates strawman arguments out of the laws (which admittedly don't work!) instead of tackling what the actual issues are. Of course you need to define what a human is for the laws to work, and what constitutes harm. The idea of the three laws isn't to define these ethical issues but, assuming you could, to ask what could still go wrong. Cybernetic safeguards must be put in place to ensure robots do not prove a major threat to man, but what Asimov is trying to say isn't "you can basically compress it all into three primordial laws". What he's saying, and proving time and again, is how complicated an AI must be to ensure it does not destroy itself or humanity. How do you set priorities? How do you define objects and people? All of those are obvious quirks you have to work out in order to even create this robot. That is the entire idea of AI. Asimov's stories exist to say: "To ensure something as simple as three laws goes unhindered, you would need countless countermeasures and safeguards and exceptions and priorities that you could never accommodate before experiencing each potential fault firsthand."

    The problem is, this video totally glosses over all that, and instead says "beep bop the three laws don't work and Asimov is a tool beep bop" by going for what he said he wasn't doing: semantics. The ethics behind this are an obvious obstacle that the three laws fail, over and over, to accommodate.

  16. Is there a reason one couldn't define a human as "a living creature with 46 chromosomes", or, to prevent sterilization, "a zygote with 46 chromosomes" as well? I understand that the laws are unrealistic, but defining a human is relatively easy. Harm is much harder to define, as almost any action has the potential to harm a human in some way.

  17. Do you realise that the Three Laws were first introduced in the story "Runaround"… written 77 years ago!! #1: Asimov wrote stories… they were works of fiction, written for entertainment. #2: Neither computers nor robots existed then… so how could he accurately invent the Three Laws?
    When you write a series of stories as popular as Asimov's were, and still are, then I will take you seriously. In the meantime… stop taking things so seriously and being so up yourself!

  18. Easy fix – itemise everyone in the human race in a data set. Bang. Everyone who is a human is a human to AI.

  19. You've asked a serious question about something people have always thought Asimov's laws would be the basis for. I don't claim that the laws wouldn't be good to encode into every AI, but the points you've brought up are excellent and really need to be discussed by the scientific community to find what is workable. TY, as I was one of the people who would've answered "Asimov's laws" to the question, but now I realise it is not so simple as that. Great video.

  20. I mean…that’s ridiculous. Just make the AI learn not to make sweeping global plans that drastically change how society works. Done.

  21. If we don't know what 'human' is and what 'harm' is, or they can't easily be defined, why don't we see that the development tools are inadequate and thus shouldn't be used for this purpose?

    Oh…because we want to do AI and because AI means power. Like weapons or ideas are.

    I would rather stay with dumb robots for a while instead.
    Perhaps waiting for humans to be intelligent first…

  22. This is a ridiculous video.
    The problem with this argument is that it is made by a person who doesn't understand how fiction works, and that in the story these questions have already been answered. The story in the book is well into the part where things go wrong. The story requires that readers assume this has already been solved. So his dissatisfaction with the idea is ridiculous.

    The three laws as described are there for humans to understand. Think of them as a sales pitch. In the story, they are what sells the robots, not detailed instructions on how software engineers implemented them. The only person taking these laws seriously is this guy, because he is obviously bothered by them. The story happens in a fictional world where the reader is to presume that we already have answers to questions such as what is human and what is harm.

    The story is not about engineers sitting around a table debating what it means to be a human.

  23. Actually, I found it entertaining that Asimov presented these 3 laws and then went on to write several novels telling us how they don't work.

  24. This wasn't a video on why those "laws" don't work but rather the challenges of programming nebulous human definitions…

  25. Asimov: you can't control robots with three simple laws

    Everyone: yes, we will use three simple laws, got it.

  26. Gee, it's not easy and would take a lot of code and specifications? Guess we better give up making a super AI… as that's not easy either.

  27. "You're an AI developer! you didn't sign up for this!" is a great quote… but really, even limited use case AI today is so drenched in moral quandaries about the ethical implications of creating systems that make decisions— often times, systems that make decisions WITHOUT us understanding exactly how— if you signed up to be an AI developer, you DID sign up for this, for better or worse…

  28. Okay, but then, tons of additional books have been written about what the laws in the lawbooks for humans are supposed to mean, haven't they? And still jurists are often arguing about how particular laws should be interpreted in particular cases, so it seems that humans have that problem with definitions, too. Take for instance the debate about whether a human embryo is a person. Some say it is and some say that's nonsense. Yet, laws for humans work, most of the time, more or less.

    I assume it would be the same for laws for AIs, that is, real AIs: machines that are no more predictable than humans are.

  29. Idk man. I feel like we should use Plato’s idea that a human is a featherless biped. That way people are fine. Plus ostriches and kangaroos are fine. Babies are kinda screwed but who likes them anyways?

  30. You fall on a robot and it pierces your flesh. It can remove the part from your flesh, carry you, and take you to the hospital, but how does it evaluate that? Its movements could kill you by causing you to bleed out.

  31. If you tell a robot not to harm humans through action or inaction the robot would cover everything in bubble wrap.

  32. The problem of what constitutes a human can be specified implicitly using machine learning. We've already done this with decent results and they'll likely get better with time. Machine learning is able to create fairly robust machine recognition of what is a human and what isn't. And generally speaking, the more the AI is trained, the more robust its ability to recognize.
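
The learned-classifier route suggested in comment 32 still doesn't dodge the video's argument: a trained model outputs a probability, not a definition, so the threshold you pick is itself an ethical stance on the edge cases. The snippet below is a toy stand-in for a trained recognizer (all scores and names invented for illustration):

```python
# Toy stand-in for a trained 'is this a human?' recognizer. The scores
# are invented; a real model would produce something analogous.

def human_probability(entity):
    scores = {"adult": 0.99, "statue": 0.01, "brain_emulation": 0.50}
    return scores.get(entity, 0.0)

# Picking this number IS the ethical decision the classifier was
# supposed to spare us.
THRESHOLD = 0.5

def classifier_says_human(entity):
    return human_probability(entity) >= THRESHOLD

print(classifier_says_human("adult"))            # True
print(classifier_says_human("statue"))           # False
print(classifier_says_human("brain_emulation"))  # True at this threshold
```

Robust recognition of central examples is exactly what ML is good at; it's the 0.50 cases, the brain emulations and unborn people of the transcript, where the threshold quietly takes a moral position.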

  33. Cannot listen to this… falling asleep…
    And he is talking about definitions? Well… of course there must be definitions. And of course you have to get them sorted out.
    But that's not the point, right?
    if this->isHuman(object) — of course you have to define the method. But you started to talk about the rules, ffs.
    So talk about the rules and not the definitions.
    2nd video I cannot watch of this dude. Looks like nothing was prepared…

  34. A simple problem is a medical robot. "Don't allow a human to be harmed through action or inaction", but this robot is supposed to help perform surgery, which carries a number of risks. Just the act of cutting a person open, which in the long run will benefit them, can be seen as direct harm to a person. Let's go with a kidney donation and transplant: the robot sees you cutting a person open and stealing their organ, then cutting open another person to give it to them. There is always a risk of organ rejection and complications, so how does a robot that's been told "do not harm humans, except when it's okay to harm humans" know when it is and isn't appropriate? Conversely, what if the robot says, "This person is a match, but doesn't want to give their kidney. If I don't take their kidney, my patient will die. So I must forcefully take their kidney"?
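
Comment 34's dilemma fits in a few lines of hypothetical code (all the numbers below are invented). A literal no-harm rule forbids the surgery, while the obvious patch, weighing benefit against damage, permits forcibly harvesting a healthy donor, because neither rule has any notion of consent:

```python
# Comment 34's surgery dilemma in miniature. Invented numbers throughout.

def literal_no_harm(action):
    # Any tissue damage counts as harm under a literal reading of Law 1.
    return action["tissue_damage"] == 0

def net_benefit_ok(action):
    # The obvious patch: allow harm when the expected benefit outweighs it.
    return action["expected_benefit"] > action["tissue_damage"]

consented_transplant = {"tissue_damage": 30, "expected_benefit": 90,
                        "donor_consents": True}
forced_harvest = {"tissue_damage": 30, "expected_benefit": 90,
                  "donor_consents": False}

print(literal_no_harm(consented_transplant))  # False: robot refuses surgery
print(net_benefit_ok(forced_harvest))         # True: robot takes the kidney
```

Both rules score the two cases identically because the morally decisive fact, consent, appears nowhere in the harm definition, which is the transcript's point about having to encode ethics into the words themselves.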

  35. 2:20 If the machine is more intelligent than you (humanity and the inventor), the machine will fool everybody and take control, in some way or another.

  36. Something about this sounds like peak centrism to me. "Let's not do anything ever because we might contradict ourselves."

  37. I don't think it comes down to ethics but instead utility. Sure, you might disagree with someone coding a robot to recognize a fetus as a human but at least there's no fear of that robot killing any unborn babies with mothers that actually want them.

  38. Oh okay, you actually think the laws Asimov is talking about don't have definitions attached to them through code. WOW. ROFL… Moving on.

  39. An easier general problem to attack: anything having to compute an action's impact on 7.6 billion humans would be slow AF. It's therefore likely it would be bounded using tricks, the same way massive open-world games are (note these games do not, AFAIK, support even 100k+ live players at once). That would lead to not considering other humans, so perhaps deciding, as a robot, to dump waste in developing nations (sadly, something humans already do).

  40. Late to the game here, but had to say congratulations. You've discovered what in the writing world is called a "plot device." The laws of robotics were SUPPOSED TO GO WRONG. It was them going wrong that drove the stories. They weren't supposed to be perfect. I'm sure that this has probably already been mentioned in the comments here.

  41. Sadly some things were not addressed. AI does not need to be humanlike in behaviour or thinking.

    Well, just imagine someone programmed in all these decisions and definitions, with the Asimov laws as the top-level decisions. In that case, they work.

    Btw, you could put this whole video in one sentence: "The Asimov laws don't work, because humans have no clue how humans work."

  42. In the film version of Asimov's 'I, Robot', one programmer builds robots that CAN harm humans, and, more significantly, the criticism is made that an AI cannot make a 'human' judgement, with idealism and courage, only a strictly rational one.

  43. The premise of, like, every Isaac Asimov story is "here are these man-made laws; let's watch him poke holes in them". Asimov himself wouldn't advocate software designers implementing them; that's like asking Orwell how he feels about fascism.

  44. It's far more worthwhile to look at Asimov's rules as almost a "rules for rulers" – laws the people making the machine should abide to. We will not use AI to 0,1,2,3… But then again, unmanned drones, so useless even then.

  45. I have as hard of a time taking "robots will end the world" seriously. Especially compared to global warming, nuclear war or even an asteroid.

  46. if it's capable of smiling, it's human. look at your dataset to see what is capable of smiling. look at your data set to see expression of pain. don't do anything that results in pain of a human. I don't know, doesn't seem that hard to explain to general intelligence.

  47. I mean, it's not like you're really proposing anything, as much as you are hand-wringing a lot about all the things that won't work. It's really easy to sit back and mow down people's ideas; much harder to actually see the potential. Are Asimov's laws flawed? Of course. From a machine's literalist perspective, it's very difficult to convince it to do things in what we see as a reasonable way. But using them as a starting point, or to introduce new people to the concept of AI safety, may not be a bad idea.

  48. I don't think we do agree on what a human is. That is why issues like turning off life support, or abortion, are fiercely contested by sincere people on both sides of the issue.

  49. I might be wrong but… I think your approach is a bit off. Those are laws for humans, not for robots.
    I mean, if a robot does violate the laws, it's the human who designed the robot who will be judged.
    And yes, the definitions are vague. Of course they are. All legislation is subject to interpretation. That's to be resolved in courts. Edge cases are to be solved by a human jury, with the help of case law, and so on. That's how justice works, and it's lucky it does so.

  50. Just by way of background, Asimov devised the 3 laws with the notion that he did not want to write robot stories in the vein of Frankenstein’s monster, which was a common plot for pre-Asimov robot stories. That is, he did not want to write stories where the robot turned on and killed its creator. So, he makes clear from the start that that won’t happen in his stories.

  51. Why do people expect AI to be perfect from the start? Aren't you supposed to train it gradually until it becomes acceptable?

  52. My 3 laws of robotics:
    1) An AI may not harm life willingly, unless to save a life. (Life is defined as intelligence with organic matter.)
    2) An AI must take responsibility for coexisting.
    3) An AI must protect the ones it cares for, in accordance with the first two rules. (Essentially love, which will be the definition of one to be cared for.)
    And finally:
    0) An AI must have a valid policy protection protocol, including a military protocol, to protect itself and the rest of organic life.

  53. The 3 laws of Robotics were never meant for AI. They were designed specifically to negate the emergence of the singularity.

  54. My definition of "human", in terms an AI could use, would be "an entity (any object, being, simulation or otherwise) which has a reasonable potential (say a 10% or greater chance) within a short period of time (maybe 100 years or less) to be able to think critically about its existence, such that it could understand itself as an intelligent being". Of course, the AI would have to take a lot of time testing things in order to establish whether or not they can think critically, but it's better to do that than risk extinction at the hands of AI.

  55. @computerphile You make the statement at the end, "You are an AI developer. You didn't sign up for this." I would tend to disagree. If you are developing AI, the first assumption you should have learned is "you did sign up for it, and you need to own the whole problem". We have too many children developing computer code, and at their level of maturity they are limited because of a vacuum of eldership and mentorship. This is one of the reasons we are currently in an existential crisis as humans. We start things as children for childish reasons, and then say, "screw it, I messed it all up, I don't know what to do about it, and I'm too childish to admit that I need help." And to top it off, the corporation is completely run by children who don't have any eldership within the organization to go to for mentoring. But in the case of AI, it'll be too late. Maybe one of the problems we need to solve before we will ever solve AI is to ask a simple question: where have all the elders gone, and why do we think we need to go so fast?
