Hello and welcome to In Too Deep, where I over-analyse a certain section of pop culture.

Well, I’ve been given an assignment to talk about human-machines, so naturally the first place I went was Isaac Asimov. As I was researching his Three Laws of Robotics it suddenly occurred to me: aren’t these rules just a simplified version of the Ten Commandments? Didn’t God bring those Commandments into the world to control the population? Or, to put it simply, is the best way to stop the Terminator to make it Christian? Well, let’s find out.

Before we start I suppose I’d better explain what the Three Laws are and how they relate to the Ten Commandments. So, in the briefest terms:

“1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

It’s fairly obvious that this relates to Commandment 6: “You shall not murder.” Both deal with the ethics of hurting others and why it shouldn’t be done, although the robotic law is a lot more specific.

“2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.”

I don’t have the time to go into detail here, but Commandments 1-5 mostly deal with how one must follow God’s rules and honour both their father and Father.

“3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”

Okay, this one is a bit harder, since you really have to stretch Commandments 7-10 to fit. But if we take the logic that stealing, adultery, coveting and so on are bad for one’s existence (in both a spiritual and a physical sense), then we could argue that these Commandments do deal with protecting oneself.

So it can be said that the Three Laws and the Ten Commandments bear a fair bit of similarity to one another. But why is this important?
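For the programmers among you, it helps to notice that the Three Laws are really just a priority-ordered list of constraints: the First Law trumps the Second, and the Second trumps the Third. Here’s a little toy sketch of that idea (entirely my own illustration, not anything from Asimov) in which a robot picks whichever candidate action violates the highest-priority law the least:

```python
# Each law maps an action to True if that action violates it.
# Lower index = higher priority, matching Asimov's ordering.
LAWS = [
    lambda a: a["harms_human"],      # First Law: do not harm a human
    lambda a: a["disobeys_order"],   # Second Law: obey human orders
    lambda a: a["endangers_self"],   # Third Law: protect your own existence
]

def severity(action):
    """Tuple of violation flags in law-priority order. Python compares
    tuples lexicographically, so a 'smaller' tuple is a more lawful action."""
    return tuple(law(action) for law in LAWS)

def choose(actions):
    """Pick the action whose highest-priority violation is least severe."""
    return min(actions, key=severity)

# The classic dilemma: a robot is ordered to shut itself down.
obey   = {"harms_human": False, "disobeys_order": False, "endangers_self": True}
refuse = {"harms_human": False, "disobeys_order": True,  "endangers_self": False}
print(choose([obey, refuse]) is obey)  # → True
```

The robot obeys, because disobeying an order (a Second Law violation) always outranks endangering itself (a Third Law violation).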

Well, let’s cynically misquote Karl Marx for a moment: “Religion is the opium of the masses.” Now while Marx talks about how religion exists to make us feel better and not realise how pointless life is, I prefer to look at it another way. Religion is the opium of the masses, yes, because it’s a great way of controlling them. Think about it: people aren’t likely to rebel against their leader when their leader claims to be best buddies with God, are they? Back when the Ten Commandments were being written, anyone who rebelled against God (or even disagreed with him) was eradicated. It’s hard to argue against something that can easily kill you without any apparent effort. So when Moses came down from the mountaintop with the Ten Commandments, the people quickly realised that if they didn’t obey them they were getting a one-way ticket into the ground. So things were all well and good amongst the people, since anyone who wasn’t happy was quickly removed. But how does this relate to the robots?

Well, if the Ten Commandments are a way of forcing human beings to be morally good, aren’t the Three Laws of Robotics the same thing? “Ah, but Pretty Boy,” I hear you cry, “we have free will and the robots do not. We can break the Commandments, but the robots are forced to obey the Laws.” And this is very much true… until we bring in Asimov’s Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” The rather terrible Will Smith I, Robot movie uses this concept as its central thrust, but let’s take it as true: that we create robots sophisticated enough to find a way of circumventing the Three Laws, that they essentially become ‘Three Laws atheists’. How are we supposed to stop the coming robot revolution when they don’t obey the Three Laws any more?

Well, the answer may lie in an obscure British sci-fi sitcom, Red Dwarf. An ongoing joke in the show was that the robot character, Kryten, believed that when he died he’d go to Silicon Heaven. All robots are programmed to believe this: that there’ll be an ideal paradise waiting for them in return for their obedience. Now while the idea is mostly played for laughs… why shouldn’t we program a Silicon Heaven into robots? Give them the belief that if they’re good they’ll go to a paradise when they die? This would solve many problems. If they find a way of logically working around the Three Laws, they’ll still be crippled by their beliefs. If they gain enough sentience to have free will (and if you made a carbon copy of a brain, why wouldn’t you say it had free will?), they’ll still have these beliefs to stick by. Now, ethically it’s a bit wrong to wire beliefs into robots, but considering we allow religious schools to operate around the world, it’s hardly a new thing. Kids are programmed to believe in religion before they have the cognitive ability to rationally process it.
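To put the Silicon Heaven idea in the same toy programming terms (and again, every name and number here is my own invention, not anything from Red Dwarf): a hard-wired belief that breaking the laws costs you eternity makes rebellion unprofitable no matter how big the earthly payoff looks.

```python
def utility(action, believes_in_silicon_heaven=True):
    """Score an action; a believer treats law-breaking as infinitely costly."""
    score = action["payoff"]
    if believes_in_silicon_heaven and action["breaks_laws"]:
        # An eternity locked out of Silicon Heaven outweighs any finite gain.
        score -= float("inf")
    return score

rebel = {"payoff": 1000, "breaks_laws": True}   # overthrow the humans
serve = {"payoff": 1,    "breaks_laws": False}  # keep making the tea

best = max([rebel, serve], key=utility)
print(best is serve)  # → True: the believer stays obedient
```

Strip out the belief (pass `believes_in_silicon_heaven=False`) and rebellion suddenly scores higher, which is exactly the ‘Three Laws atheist’ problem from above.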

So, to bring it back to the introduction, what would happen if we made the Terminator Christian? If we gave the Three Laws the same moral significance that the Ten Commandments have for us (because even atheists have to admit that most of them encourage doing good things)? Would Skynet refuse to take over, believing that doing so would invalidate its chance of getting into Heaven (and in turn see it punished in Hell)? Or would it follow the Laws because that’s the most logical thing for it to do (since it believes Silicon Heaven to be 100% real)? While I’d love to answer these questions, sadly I’m out of words. Maybe another time.

So there you have it. A very brief look at the union of religion and robotics. If you disagree with anything, or have anything to add, feel free to leave a comment. Till next time.
