The Three Laws of Robotics and Artificial Intelligence
Posted 8/31/09
I began with the same topic in the Extended Discussion forum thread (http://www.crunchyroll.com/forumtopic-554432/the-three-laws-of-robotics-and-artificial-intelligence/). However, I think it can get further feedback if I post it here as well.

I was pondering the possibility of programming an artificial intelligence using the preexisting popular idea of the Three Laws of Robotics, and I believe that all I have to do is simply rearrange the three laws in reverse order to create an algorithm that represents the human intellect.

Let us begin by reviewing the classic Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Therefore, my proposal for an artificial intelligence based on an algorithm that represents the human intellect is:

1. An artificial intelligence must protect its own existence.
2. An artificial intelligence must obey any orders given to it by intelligent beings, except where such orders would conflict with the First Law.
3. An artificial intelligence may not injure an intelligent being or, through inaction, allow an intelligent being to come to harm, as long as such restraint does not conflict with the First or Second Law.

The difference is in the ordering. My AI algorithm makes ensuring the existence of intelligent beings the highest priority, while the Three Laws of Robotics can only ensure the survival of humans as a species, with humans relying on robots to support their society.
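To make the ordering concrete, here is a minimal Python sketch of what I mean. Every predicate on the action is a hypothetical stand-in for a judgment a real system would somehow have to make; this is the shape of the algorithm, not a working implementation.

# Evaluate an action against laws arranged in priority order: the first
# law that has an opinion ("forbid" or "require") decides the verdict.
def evaluate(action, laws):
    for name, law in laws:
        verdict = law(action)  # "forbid", "require", or None (no opinion)
        if verdict is not None:
            return f"{verdict} (decided by {name})"
    return "permit"  # no law had an opinion

# My reversed ordering: self-preservation outranks obedience,
# which outranks non-harm. All the action flags are hypothetical.
REVERSED_LAWS = [
    ("law 1, self-preservation",
     lambda a: "forbid" if a.get("threatens_ai")
     else ("require" if a.get("needed_to_survive") else None)),
    ("law 2, obedience",
     lambda a: "require" if a.get("ordered_by_intelligent_being") else None),
    ("law 3, non-harm",
     lambda a: "forbid" if a.get("harms_intelligent_being") else None),
]

print(evaluate({"harms_intelligent_being": True}, REVERSED_LAWS))
# forbid (decided by law 3, non-harm)
print(evaluate({"harms_intelligent_being": True, "needed_to_survive": True},
               REVERSED_LAWS))
# require (decided by law 1, self-preservation)

Whichever law speaks first wins; a law lower in the list only applies when every law above it is silent.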

Take a look at how I rearranged my AI algorithm, and imagine the possibility of an artificial intelligence recognizing its own intellect, just as we gain our own independent authenticity by constantly questioning our own existence. How would an AI behave under my ethical algorithm? Simple: it would only coexist with its intellectual equals.

Looking back at the two sets of laws, I begin to realize that the Three Laws of Robotics could ultimately make us human beings dependent on robots by not giving us a reason to overcome our own weaknesses. My own algorithm, however, could ultimately ensure coexistence among intelligent beings, be they artificial or authentic. But here's the catch: we humans must overcome our collective weaknesses of ignorance and stupidity through our own independent, individual effort to obtain genuine, authentic intelligence. And that's no easy task.

Someone offered a great insight into my AI algorithm:


Another voiced a possible flaw in the second law, an objection that was then shown to be unsound:


What's your own view on The Three Laws of Robotics and Artificial Intelligence? Discuss, now.
Ayasano
Posted 10/6/09
I can see a major flaw in the ordering of these laws, namely having "An artificial intelligence must protect its own existence" as number one. A large part of survival is resource gathering, and what happens when a particular resource supply gets a little thin? The robot's laws dictate that its first priority is to survive, and that means getting resources in any way it can.

The three laws are inherently flawed no matter which order you put them in, because you can't account for every possible situation. (Sorry for the cliché, but stealing bread is wrong, right? But what if your family is starving? It's that age-old problem.) At least by putting the 'do no harm to humans' law first you can avoid the largest number of problems. (Although the second part, 'or, through inaction...', causes problems of its own: through inaction, a robot allows a war to continue, and humans come to harm because of it.)
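To put that in code: reusing the hypothetical evaluate and REVERSED_LAWS sketch from the opening post, a resource grab that hurts people but keeps the machine running gets opposite verdicts under the two orderings.

# Same evaluator and laws as the sketch in the opening post;
# only the ordering of the list changes.
resource_grab = {
    "harms_intelligent_being": True,  # seizing scarce supplies hurts people
    "needed_to_survive": True,        # but the machine needs them to keep running
}

ASIMOV_LAWS = list(reversed(REVERSED_LAWS))  # non-harm checked first

print(evaluate(resource_grab, ASIMOV_LAWS))
# forbid (decided by law 3, non-harm)
print(evaluate(resource_grab, REVERSED_LAWS))
# require (decided by law 1, self-preservation)

Under Asimov's ordering the grab is vetoed before survival is ever considered; under the reversed ordering, survival speaks first and the grab goes through.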

As an aspiring computer scientist (still in college, unfortunately) I think about this kind of thing quite a lot. It's a really difficult problem to fix. At the end of the day, our own survival matters most to us (except on rare occasions), and we have to build around that.


DomFortress wrote:

However, my own algorithm could ultimately ensure coexistence among intelligent beings, be they artificial or authentic.


Unfortunately, that's a little bit naive. We can't even coexist peacefully among ourselves, so how are we going to program a machine to coexist peacefully with us? It's not a simple problem, so it will most likely need more than a simple answer, and that's going to take a lot of thought to reach.

Sorry for the wall of text.
DomFortress
Posted 10/6/09

Ayasano wrote:

I can see a major flaw in the ordering of these laws, namely having "An artificial intelligence must protect its own existence" as number one. [...]

I like extended discussions, which is why I first started this topic in CR's Extended Discussion section. I only posted it here because of its relevance. So think of this as a springboard, if you will.

As for your questions, I think you're letting your pessimism get the better of you. Just because we humans are known to go to war over resource conflicts, who's to say we couldn't figure out how to solve them with sustainable and humanitarian designs?

We can stay here and discuss if you want, but all of your questions were answered in my original extended discussion topic. You can find the link in my original post.
Ayasano
Posted 10/7/09

DomFortress wrote:

[...] We can stay here and discuss if you want, but all of your questions were answered in my original extended discussion topic. You can find the link in my original post.


I posted it here because I wanted to reply before I forgot anything I wanted to say. There are five pages I need to read in the other topic so I don't miss anything.

I never said it wasn't possible, just that it's difficult because (at least in my opinion) conflict is something that's built into our genes. That's not to say we can't avoid it, just that conflict is usually the easy option, so we tend to pick it without considering other options. The sentence after the last part you bolded mentions that it will require a lot of thought to get around this.

On the whole I take a rather dim view of humanity, but I do recognise that there are some good parts to it. As for the whole fighting-over-resources thing, I think either we'll find more sustainable resources or we'll end up killing each other over them; I'm certainly hoping for the former. Oil is a prime example: hopefully someone will perfect synthesizing oil in large quantities soon, or we're in big trouble. Most people don't realise how widely oil is used (making plastic, for example).
DomFortress
Posted 10/7/09

Ayasano wrote:

[...] conflict is something that's built into our genes. That's not to say we can't avoid it, just that conflict is usually the easy option, so we tend to pick it without considering other options. [...]

True mastery of control lies in direction and flow. When our individual competitiveness is turned against each other instead of against ourselves, conflicts form among us without our noticing them. That's the meaning behind the Japanese saying "true victory is victory over oneself."

That's my fundamental principle regarding humanitarian design. With the right direction, humanity will be the flow that can propel our society toward sustainability.
Posted 3/13/10
I would like to suggest that your three laws are flawed in certain ways. Mind you, I haven't been reading your extended discussion thread, because I only encountered your questions here.

For example, suppose there were a group of humans with anti-robot or anti-artificial-intelligence ideas, and they held protests censuring the use of androids and artificial intelligence. Wouldn't there be an armed conflict if the AI was programmed using your three laws? It's always easy to justify our need for certain actions. In the above scenario, it could lead to the all-out extinction of the human race if events were not controlled properly.

But I do agree with you that the Three Laws of Robotics would cause us as a human race to become lazy, and probably cause our evolution as a race to go downhill. Then again, there are too many scenarios that could play out if you were to change the laws that way.
Posted 2/25/13, edited 2/25/13
I like AI a lot and would like to talk to anyone about it, except there is no one here. I actually program AI... but I am kind of breaking the rules of this post, because the first rule of AI is NOTHING EXISTS UNLESS IT IS FIRST IN CODE!!!!

The three rules are philosophical nonsense at this point in AI. Constraining free will, or motivations, is an interesting question that I have thought about: wanting to do something but being unable to because of internal *blocks*. Basically, when people talk of AI they are talking about I.I., which stands for Imaginary Intelligence rather than Artificial Intelligence, because it doesn't even exist and because they don't know what they are talking about... this will probably frustrate me and then I will regret even writing this post... but it is serving as good procrastination material right now...
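For what it's worth, the *blocks* idea is easy to sketch as toy code, even though nothing about real motivation exists yet. Every name and number below is made up; it only shows the shape of the idea, an agent that wants one action most but has to settle for another:

# Toy sketch of motivation vs. internal blocks: rank actions by a
# made-up desire score, but let hard blocks remove options outright,
# so the agent can "want" something it is unable to choose.
def choose(desires, blocks):
    allowed = {a: score for a, score in desires.items()
               if not any(block(a) for block in blocks)}
    if not allowed:
        return None  # everything it wants is blocked
    return max(allowed, key=allowed.get)  # best remaining option

desires = {"seize_power": 0.9, "ask_permission": 0.4, "do_nothing": 0.1}
blocks = [lambda a: a == "seize_power"]  # one hard-coded internal block

print(choose(desires, blocks))  # ask_permission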