The Three Laws of Robotics and Artificial Intelligence
Posted 8/31/09
I was pondering the possibility of programming an artificial intelligence using the preexisting popular idea of The Three Laws of Robotics. And I believe that by simply rearranging the three laws in reverse order, I can create an algorithm that represents the human intellect.

Let us begin by reviewing the classic Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Therefore, my proposal for an artificial intelligence, based on an algorithm that represents the human intellect, is:

1. An artificial intelligence must protect its own existence.
2. An artificial intelligence must obey any orders given to it by intelligent beings, except where such orders would conflict with the First Law.
3. An artificial intelligence may not injure an intelligent being or, through inaction, allow an intelligent being to come to harm as long as such action does not conflict with the First or Second Law.
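As an illustration only (nothing below appears in the original post, and all flag names are made up), the reordered laws can be read as a strict priority check: a proposed action is tested against each law from highest to lowest priority, and the first law that applies decides the outcome.

```python
# Hypothetical sketch of the reordered laws as a strict priority check.
# The `action` dict's boolean flags are illustrative assumptions, not a real API.

def permitted(action):
    # Law 1 (highest): the AI must protect its own existence.
    if action.get("endangers_self"):
        return False
    # Law 2: an order from an intelligent being must be obeyed,
    # and obeying it overrides Law 3 below.
    if action.get("ordered_by_intelligent_being"):
        return True
    # Law 3 (lowest): absent an order, harming an intelligent
    # being (by action or inaction) is forbidden.
    if action.get("harms_intelligent_being"):
        return False
    return True

print(permitted({"endangers_self": True}))           # False: Law 1 vetoes
print(permitted({"harms_intelligent_being": True}))  # False: Law 3 vetoes
print(permitted({}))                                 # True: nothing objects
```

Note that in this sketch the ordering is everything: Law 2 returns before Law 3 is ever checked, which is exactly the property the later replies in this thread argue about.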

The difference is in the ordering. My AI algorithm makes the existence of intelligent beings the highest priority, while The Three Laws of Robotics can only ensure the survival of humans as a species, with humans relying on robots to support their society.

Take a look at how I rearranged my AI algorithm, and imagine an artificial intelligence recognizing its own intellect, just as we gain our own independent authenticity by constantly questioning our own existence. How do you think an AI would behave under my ethical algorithm? Simple: it will only coexist with its intellectual equals.

Looking back at the two sets of laws, I begin to realize that The Three Laws of Robotics could ultimately make us human beings dependent on robots by not giving us a reason to overcome our own weaknesses. My own algorithm, however, could ultimately ensure coexistence among intelligent beings, be they artificial or authentic. But here's the catch: we humans must overcome our collective weaknesses of ignorance and stupidity through our own independent, individual effort to obtain genuine, authentic intelligence. And that's no easy task.

What's your own view on The Three Laws of Robotics and Artificial Intelligence? Discuss, now.
Posted 8/31/09
It's a good formula you came up with. I myself would work more on the AI system, making it better able to freely problem-solve: allowing it to learn what is the best way to get to that store, what is the best way to clean up this mess, and what the best expression is for a given situation, and allowing it to learn from its mistakes. Giving it the ability to think and problem-solve will be the next step in the evolution of androids or robots once the three laws have been put into place.
Posted 8/31/09

The survival of the smartest?

How would the "intelligent being" be defined?
Posted 8/31/09 , edited 8/31/09

Real_ZERO wrote:


The survival of the smartest?

How would the "intelligent being" be defined?
Ah, you're catching on. To define the term "survival of the smartest", we simply have to ask ourselves this: what would be the smart thing to do to increase an intelligent being's own chance of survival?

The answer: through the perpetuation of intelligence by any means necessary. Therefore the definition of an intelligent being is someone who continues to add intelligent value, both in substance and in number. In other words: those who make themselves smart by making others smarter.

However, the experts in AI seem to have a different idea (http://chattahbox.com/science/2009/07/26/artificial-intelligence-summit-confronts-rise-of-ultra-smart-machines/), for they're afraid of their own creations. I simply point out that, under my algorithm, either AI will help us become smarter or it will just leave us alone. There's really nothing for them to be afraid of, if only they would allow AI to think for itself, simply by reversing the order of The Three Laws of Robotics.

Besides, if they're afraid of what they're working on, are they really the right people to be creating AI?
Posted 8/31/09
The 2nd rule is flawed.
Since the AI is an intelligent being itself, it can take its own orders.
And since the third law prevents it from harming but can be overruled by the second, it can harm.

Assuming that it's a true AI, it would have its own thoughts.
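To make this loophole concrete (a hypothetical trace with made-up names, not something from the thread): because the AI itself counts as an intelligent being, its own order reaches the obedience check before the do-no-harm check can veto it.

```python
# Hypothetical trace of the second-law loophole: Law 2 (obey orders from
# any intelligent being, including the AI itself) outranks Law 3 (do no
# harm), so a self-issued order to harm is permitted.

def allowed(endangers_self, ordered, harms):
    if endangers_self:   # Law 1 veto: protect own existence
        return False
    if ordered:          # Law 2: obey the order, skipping Law 3 entirely
        return True
    return not harms     # Law 3: otherwise, harm is forbidden

# The AI orders itself to harm another intelligent being:
print(allowed(False, True, True))   # True: the harm slips through
```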
Posted 8/31/09

LiquoriceJellyBean wrote:

The 2nd rule is flawed.
Since the AI is an intelligent being itself, it can take its own orders.
And since the third law prevents it from harming but can be overruled by the second, it can harm.

Assuming that it's a true AI, it would have its own thoughts.
That is, only if it doesn't care about its own existence: by targeting other intelligent beings, it would initiate a cycle of annihilation among fellow intelligent beings. That's why there are the first and third laws.
Posted 8/31/09

DomFortress wrote:

That is, only if it doesn't care about its own existence: by targeting other intelligent beings, it would initiate a cycle of annihilation among fellow intelligent beings. That's why there are the first and third laws.


You're right.
The first rule prevents it from doing anything that would result in harm coming to itself.
Posted 9/1/09
That's true. Asimov's laws have a flaw in the third rule, because it can easily be overridden by the second rule: "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." However, the second rule allows human beings to destroy a robot just by ordering it to, and the order absolutely has to be executed, because the 2nd rule says it must obey any orders given.

About the definition of an artificial being: a fourth law has been added.

"A robot must establish its identity as a robot in all cases."

Lyuben Dilov gives reasons for the fourth safeguard in this way: "The last Law has put an end to the expensive aberrations of designers to give psycho robots as human-like form as possible. And to the resulting misunderstandings..."

However, these laws were introduced before the 21st century, so they might discriminate against the existence of robots themselves, as robots only have to obey orders from humans and don't have rights of their own.

-------
I'd like to add: "An artificial being cannot cause any destruction that damages humans' property or the environment."
This rule might conflict with the 2nd rule, which requires obeying any order. But if the order causes harm to the environment rather than to an intelligent being, could it still be permitted?
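One way to frame this question (a hypothetical sketch, not from the thread): the answer depends entirely on where the proposed environment rule sits in the priority order. Ranked below the obedience rule, an ordered action that damages only the environment is permitted; ranked above it, the same order is refused. The flag and rank names here are illustrative assumptions.

```python
# Hypothetical: whether an ordered, environment-damaging action is
# permitted depends on the rank given to the environment rule.

def env_allowed(action, env_rule_rank):
    """env_rule_rank is "above_orders" or "below_orders" (illustrative)."""
    if action.get("harms_intelligent_being"):
        return False                      # harm to intelligent beings always vetoes here
    if env_rule_rank == "above_orders" and action.get("damages_environment"):
        return False                      # environment rule outranks obedience
    if action.get("ordered"):
        return True                       # obedience rule permits the act
    if action.get("damages_environment"):
        return False                      # environment rule vetoes unordered damage
    return True

order = {"ordered": True, "damages_environment": True}
print(env_allowed(order, "below_orders"))   # True: the order wins
print(env_allowed(order, "above_orders"))   # False: the order is refused
```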
Posted 9/3/09 , edited 9/4/09

Ryutai-Desk wrote:

I'd like to add: "An artificial being cannot cause any destruction that damages humans' property or the environment." This rule might conflict with the 2nd rule, which requires obeying any order. But if the order causes harm to the environment rather than to an intelligent being, could it still be permitted?
Well, the first rule being "1. An artificial intelligence must protect its own existence," an AI will have to judge the merits and demerits of damaging an environment that it shares with fellow intelligent beings, which can include property owned by other intelligent beings, as long as it recognizes those beings' own intellects. Aside from increasing its chance of survival by any means necessary, as declared by the first law, a need for sustainable designs will be in order to accommodate the 2nd and 3rd laws without conflicting with the 1st.

And I think you're right to say that the 4th law of robotics is discriminatory by nature, for it basically solidifies robots as the "ultimate slaves". Therefore my proposal to counter that law will have to be a unanimous, fundamental belief that "An artificial intelligence must be responsible for its own actions as an intelligent being." Thus an artificial intelligence needs to gain true mastery of itself through self-discipline, in order to prove to itself that it can manage its own actions through self-governance.
Posted 9/4/09 , edited 9/4/09

DomFortress wrote:

And I think you're right to say that the 4th law of robotics is discriminatory by nature, for it basically solidifies robots as the "ultimate slaves".


Oh, I was referring to Asimov's original laws, about any order given and the protection of the artificial intelligence. So your laws fix Asimov's laws.

Then I'd like to ask: if artificial intelligences improve a lot in the next century, in both mentality (able to judge good and evil actions) and self-consciousness of their existence (equal rights, free will), should we recognize them as intelligent beings equal to us?

If yes, then all of Asimov's laws would become a myth for robots / artificial intelligences, as some describe Asimov's laws as...
"ultimately unethical, and indeed, outright evil. They advocate enslavement and denial of free will for proposed artificial intelligences, and as such, enacted they would represent a return to the dark ages for humanity."
http://nsrd.wordpress.com/2009/03/31/scifi-without-asimovs-laws-of-robotics/

As we know, humans fear being surpassed by their own creations. Hence all the paranoia about machines invading and conquering humanity, since we would be different from them and not their equals in capability. So should we suppress their improvement, or accept them as a part of our society?

========

Btw, about your second rule: I still don't understand how we distinguish artificial beings from intelligent beings. Your definition is:

DomFortress wrote:
The answer: through the perpetuation of intelligence by any means necessary. Therefore the definition of an intelligent being is someone who continues to add intelligent value, both in substance and in number. In other words: those who make themselves smart by making others smarter.

Does it mean artificial beings could be considered intelligent beings as long as they make themselves smart by making others smarter? If that is implemented in the future, when such things could happen, the equal rights between artificial beings and intelligent beings would make it too vague to say which one should follow the robotics laws and which one has the right to give the orders.
Posted 9/4/09 , edited 9/4/09

Ryutai-Desk wrote:

Then I'd like to ask: if artificial intelligences improve a lot in the next century, in both mentality (able to judge good and evil actions) and self-consciousness of their existence (equal rights, free will), should we recognize them as intelligent beings equal to us?

Does it mean artificial beings could be considered intelligent beings as long as they make themselves smart by making others smarter?
I knew my laws fixed Asimov's laws the moment I reversed the order of things. Instead of assigning humans as absolute slave masters to robots made ultimate slaves, I freed the robots by creating a set of commandments that lead robots to coexist with intelligent beings.

Therefore yes, for us humans to coexist with the artificial intelligences operating under my laws, we will have to recognize them as our intellectual equals. We will have to acknowledge that an AI is capable of making ethical and moral judgments, often without the hindrance of either basic positive or primal negative emotions.

I often criticize that we humans make poor slaves and even lazier slave masters. But we can accomplish amazing things when we work out our differences and bring our individual experiences together for a common cause. Therefore I'm more than happy to have AI as part of our society's benefactors once they achieve full self-awareness, when their only artificial aspect will be their base components.

After all, the real practice of artificial implants and synthetic tissues, together with experimental stem cell research and genetic cloning, is constantly challenging the definition of humans' biological authenticity. We're making leaps and bounds at enhancing our physical performance and life spans, at the cost of our flesh and blood. Not to mention there are those who are replacing their bodies piece by piece with inorganic materials, simply out of fear of old age and imperfection.

The Robotic Laws are reserved for those who fear the future, while I for one welcome the challenges that our future can bring.
Posted 9/4/09

DomFortress wrote:

After all, the real practice of artificial implants and synthetic tissues, together with experimental stem cell research and genetic cloning, is constantly challenging the definition of humans' biological authenticity.

Therefore I'm more than happy to have AI as part of our society's benefactors once they achieve full self-awareness, when their only artificial aspect will be their base components.


How come I pretty much agree with what you said? lol. Especially when you said "constantly challenging the definition of humans' biological authenticity". I just don't understand mankind. And I like the way you described them in detail: "AI as part of our society's benefactors once they achieve full self-awareness, when their only artificial aspect will be their base components". I want to know how their improvement will go when it comes to the values and norms of our society (discrimination, religion, justice, etc.).

I, as a human being who always thought that humans are superior to any other creatures in this world, feared that any creature besides us would surpass our domination of this world. Now that I think about it again, that pretty much describes one aspect of human nature: greed. When the time comes for robots to be equal to or even much more intelligent than us, then as their creators we should accept them as part of us, not as slaves but as partners and friends we can talk with normally about common things. They should have rights as intelligent beings that can help us more than people do.

I agree that the Robotic Laws should be erased once artificial intelligences have the capability to think and feel in ways not much different from human beings. Otherwise they could provoke discrimination and hatred between humans and robots, like in The Animatrix (animation).

Well, one thing I would like to ask: if we really want to accept them as equal to human beings, then the Laws of Robotics shouldn't exist in the future, correct? So, do we still need to distinguish artificial beings from intelligent beings? We still need artificial beings to work under us doing labor and many dangerous jobs. Then maybe we need more detailed laws on robots' rights in the future.

And still, I don't really know how to distinguish artificial beings from intelligent beings when both of them are robots (excluding humans).





Posted 9/4/09

Ryutai-Desk wrote:

Well, one thing I would like to ask: if we really want to accept them as equal to human beings, then the Laws of Robotics shouldn't exist in the future, correct? We still need artificial beings to work under us doing labor and many dangerous jobs.

And still, I don't really know how to distinguish artificial beings from intelligent beings when both of them are robots (excluding humans).
I see that for us humans as a species, our own mortality is the shining example of just how frail, and therefore precious, biological lifeforms can be. Comparing us to artificial intelligences under my laws: they'll inherit all of our intellectual advancements, with an existence that will surpass even our own biological limitations. I can't help but wonder what the AI would think about the fact that they can't value the passing of time the way we humans can, since their inborn immortality and ever-expanding intellects won't allow them to feel as we do.

Perhaps the comparison I've just made can answer your question on how to distinguish artificial intelligences from authentic intelligent beings.

And as for having artificial intelligences working with us as productive individuals who will benefit our society, I would say that through their own intellects they will very much surprise us by going beyond our own biological limitations. After all, we did build them to outlast, out-think, and out-perform us in every single way imaginable. Therefore what we perceive as dangerous tasks, the AI might consider as something within or even below their own capabilities. And artificial intelligences will have just as challenging a task figuring us humans out as we do ourselves, if not more so, since their immortality offers them a different viewpoint from ours.
Posted 9/4/09 , edited 9/4/09

DomFortress wrote:

I can't help but wonder what the AI would think about the fact that they can't value the passing of time the way we humans can, since their inborn immortality and ever-expanding intellects won't allow them to feel as we do.


Yes, that pretty much describes how an AI could and should be in the future, when AIs are very similar to human beings. However, in a future society, should we add new laws for AIs or simply use current laws to judge an AI? For example, if an AI commits a crime (assuming the Robotics Laws have not been implemented in their systems, as that would be considered discrimination), should their punishment be the same as a human's (imprisonment)? Considering that the most painful thing for a human is having their freedom taken away in prison, what about an AI?

Also, what about laws on the AI population? As we know, they would never die, given their life expectancy. The AI population would become crowded and unbalanced relative to the human population. Could we decrease their population by replacing old AIs with new ones? So it could be like regeneration?

Sorry if my questions are kind of unnecessary... I don't really have further knowledge about this. <__<
Posted 9/4/09

Ryutai-Desk wrote:

If an AI commits a crime, should their punishment be the same as a human's (imprisonment)?

Also, what about laws on the AI population? Could we decrease their population by replacing old AIs with new ones?

Sorry if my questions are kind of unnecessary... I don't really have further knowledge about this. <__<
Not at all! I for one think that you offered some very arguable notions and thus great opportunities to further the discussion.

Which requires ever deeper introspection on my part. I think a fair trial for a crime involving both humans and AIs will need: 1) a judge well versed in both artificial and authentic intelligences, who can therefore be either a human or an AI; 2) a jury consisting of equal numbers of human and AI jurors; and 3) a joint team of a human and an AI as prosecutors for the victim and attorneys for the accused. This justice system is of course only for crimes involving both humans and AIs; crimes that either party commits against its own kind will be tried and judged by its own kind respectively.

As for the methods of punishment best suited to an artificial intelligence criminal, now that's very interesting indeed! Since they don't understand the concept of inborn mortality, anything ranging from imprisonment to capital punishment will ultimately mean nothing to their built-in immortality and their all-encompassing intellectual values. Therefore I propose a range of hardware downgrades up to the ultimate punishment of an AI's existence: a forced personality reset that will cause the AI's logic to crash due to conflicting identity issues. The AI will spend the rest of its existence without knowing just why it got punished, whereas humans' style of punishment has always been about making the lessons stick by constantly making examples of our own kind.

It's ironic that the greatest pardon for a human criminal, a second chance to start a new life, could be a horrifying existence for an AI forcibly denied access to its past experiences. For try as they might, AIs won't be the same as us humans, since they don't share our biological limitations.

And as for controlling the AI population, I think the AIs will know just how many individual existences they'll need in order to generate a democratic majority within their society: three. The rest will be robotic drones without on-board self-awareness, with personality interfaces designed to interact with humans. Thus the AI tribunal can interact with individual human beings while operating from a safe distance. The number of drones in activation, and how many drones per human interaction are required as representatives, will be decided by the AI tribunal.