Laws for Robotics

December 3, 2018

Recently I finished reading the late Stephen Hawking’s book “Brief Answers to the Big Questions”. One of the chapters covers Artificial Intelligence, and a point he makes there is the need for a framework for how AI should be used: a set of guidelines for AI to follow, or, more simply put, some rules for robotics. Naturally the very first thought turns to Isaac Asimov’s laws of robotics, which for the longest time I have felt were bulletproof. As a quick reminder, here they are, along with a rough sketch of how their precedence might look in code after the list.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
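
What these laws really encode is a strict precedence: a lower law only applies when every law above it is satisfied. As a very rough sketch of that ordering (entirely my own toy illustration, with hypothetical names and fields, not anything from Asimov or the book), it could be written like this:

```python
# Toy sketch only: models each law as a boolean question about a proposed
# action and checks them strictly in order, so a lower law can never
# override a higher one.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would the action injure a human?
    allows_harm: bool = False       # would inaction let a human come to harm?
    ordered_by_human: bool = False  # was the action ordered by a human?
    threatens_self: bool = False    # would the action endanger the robot itself?

def permitted(action: Action) -> bool:
    # First Law outranks everything: never harm, by action or inaction.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey a human order, but only once the First Law is satisfied.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation applies only when the first two laws are silent.
    return not action.threatens_self

# An order to harm a human is refused; a harmless order is obeyed.
print(permitted(Action(ordered_by_human=True, harms_human=True)))  # False
print(permitted(Action(ordered_by_human=True)))                    # True
```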

This got me thinking: if these rules are so perfect, they should work for humans as well, right?

But simply replacing the word robot with human results in sentences we can instantly disregard as whimsical.

  1. A human may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A human must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A human must protect its own existence as long as such protection does not conflict with the First or Second Law.

It’s pretty clear that the rules we expect robots to follow would fail if they applied to humans. While the rules would work perfectly for robots, they rely on the absence of free will: essentially, we expect the robot to follow them unconditionally, at any cost. Human beings, naturally, prefer to bend and break rules as it suits them. This brings me to my first question.

Why do we need these rules?

The rules exist because at some point in the near future we expect robots to be able to think for themselves and, more importantly, to act on those thoughts. We need these rules because, at the current rate, AI will soon be in charge of some critical aspects of our lives, and we need to know we can trust it. So maybe the problem is that we are moving too fast? It’s like giving a six-year-old a phone. Sure, someday we know the child will learn how to use it and our lives will be easier for it, but right now the child doesn’t understand what it’s doing, and there is a 50:50 chance you will see some ridiculous charges on your monthly credit card statement. But for argument’s sake, let’s assume slowing down isn’t an option. We are worried because AI can think for itself, so what if we simply remove the ability to think, in other words remove sentience or free will? Would it still technically be AI? So in the end we are left only with the ability to control its actions, hence the rules.

Who should make the rules?

Naturally we all assume we should be the ones creating the rules; after all, the intention is to make them favorable to us. But if that is the case, how would a self-learning robot interpret these rules? Maybe if we left AI to its own devices it would come up with rules far better than anything we could. To be honest, as human beings we have set the bar really low for getting humanity on the same page and striving towards a common goal, so why should the “AI constitution” be any different? So it seems we can’t be trusted to make the rules, and we don’t trust the AI to be fair either. An interesting parallel I recently came across was the negotiation of homeowners’ association rules between the homeowners and the managing committee; I leave it to you to figure out who the AI and the humans are in this case. I guess in the end it needs to be collaborative.

My question is: if you were a robot that could reason, would you be happy following somebody else’s rules, even to your own detriment?

Should the rules apply only to AI, or also to the people creating it?

Any technology can be used for good as well as for harm. We recently heard the hue and cry about genetically modified babies, and this has some very interesting parallels with AI: a technology with tremendous potential to do good as well as harm, and in this case too we are largely unprepared. If the world could punish the creators of the atom bomb, would we do it? After all, the quest for scientific breakthroughs has put humanity in peril like it has never seen before. Never before could one madman destroy the entire planet.

My point is: is it enough to rely on mutually assured destruction to keep humanity safe, or do we learn from past mistakes and formulate rules that, if violated, carry consequences not just for the AI but for human beings as well?

This brings me to the next important question.

Who is in charge of enforcing the rules once we create them?

This question probably has the most nuance to it. It’s fairly safe to say that once AI becomes ubiquitous it will be a core part of our daily lives, much like Facebook. If AI broke the rules, would we really go ahead and switch it off to teach it a lesson? Would AI even be capable of understanding that it’s being punished and learn anything from our actions? What if it resists when we try to enforce a penalty for doing something it was not supposed to do? Like Ultron in the Avengers, a rogue AI can’t be nailed down to one device or network; our AI is more like Skynet. For all practical purposes, it is impossible to punish such a robot. If so, what is the point of having rules, especially when breaking them has no consequences? At least none that will affect the AI more than the human.

Where does the AI begin and the human end?

Another field that’s rapidly closing in on us is enhanced biological capability: prosthetic arms that can lift heavy weights or feel, and neural implants that can visualize our thoughts. An AI is a machine and a human is obviously a human, but is a cyborg more human or more AI? And what percentage dictates whether the rules apply or not?

If they have rules, shouldn’t they have rights too?

To this question I have no real answer, because it depends purely on how you see AI-based robots: as mechanical slaves to human wishes, or as assistance animals.

At this point I feel I have raised more questions than I have answered, and the fact is that with our current understanding of AI and our eagerness to adopt the latest cutting-edge technologies, we aren’t really pausing to ask the important questions. We missed the boat with atomic bombs, internet privacy, the Industrial Revolution and climate change. The only example I can give where we have done our due diligence is space, and even that might change very soon.