The Three Laws of Robotics and Artificial Intelligence
Posted 9/14/09

booboox wrote: It's like...

1) we don't WANT AI
2) if you break down human processes and decisions, and think deeply about how we make our decisions and get our instincts, we can program an AI to get that instinct too.
Therefore I ask:

If you don't want AI, why are you here talking about it? You're just like those who create AI but fear their own creation, so they put a limiter on their AI programming; a flaw in their logic design. All I want is to make a better AI, by having my AI better itself and other intelligent beings. And it only took an intelligent design to achieve that goal: not with human instinct, but through divine ethic.

While you don't want an AI that's based on human instincts, you fail to understand that my AI algorithm is completely unprecedented and therefore not your flawed logic, based on humans' collective weakness of enslaving those who are weak.

And who's to say you're not simply acting on the same "instinct", the failed logic of fearing those that could be smarter than you? Thereby you would deny such an existence, and would rather have a thoughtless machine based on your flawed logic.
Posted 9/14/09
Well first off, intelligence can mean a lot of things. Note that humans do not run strictly on logic and, in my opinion, cannot properly function by using it exclusively. An example of this would be motivation, which involves some logic but is observably not driven primarily by logic.

One thing that robots and AI lack is independent motivation. Robots and AI only act upon something if a specific event happens that allows them to act accordingly. On some websites, for example, if you curse or use other taboo words, they are automatically censored. However, robots and AI don't really "adapt", that is, learn to make exceptions when appropriate.

(I think there is speculation that there are some AI out there built to adapt, but I've never heard of any, with the exception of biological AI, which is a little different from the computer-chip robots and AI the OP probably intended to talk about.)


DomFortress wrote:
Looking back at the two sets of laws, I begin to realize that The Three Laws of Robotics could ultimately make us human beings dependent on robots, by not giving us a reason to overcome our own weaknesses.


Well that's like saying we should stop using tools because they aren't biologically attached to us. Sure, it's fascinating that robots and other tools are used as second hands for us humans, and total dependence could leave a lot of us in stupidity on a deserted island, but I think it's more that we're spoiled than anything else.



However, my own algorithm could ultimately ensure coexistence among intelligent beings, be they artificial or authentic. But here's the catch; we humans must overcome our own collective weaknesses of ignorance and stupidity, through our own independent individual effort of obtaining genuine, authentic intelligence. And that's no easy task.

What's your own view on The Three Laws of Robotics and Artificial Intelligence? Discuss, now.


Well if you think about that in terms of the rest of the animal kingdom, that wouldn't work out too well. If we consider intelligent beings to be anything that "learns" or "adapts", this would include creatures like earthworms and such.
Honestly, the three AI laws you made up won't exactly make a utopia. For AI, what if protecting their own existence means harming themselves? Like, what if they have to kill off one of their kind to save twenty hostages of their kind? And why is it a "must" that they take orders from anyone? That's like slavery right there.

I don't mind if this applies to AI, but it'd be a poor dogma for humans, simply because the first person to give a command to all of humankind will ultimately be the "king", meaning you're asking for monarchy or even communism. Neither of the two is ultimately bad; it's just that the humans on top have no one stopping them from doing whatever they want, which usually leads to corruption and makes the two governmental systems look bad.

So personally I don't think these three AI laws would work out for humans.
Posted 9/14/09

DomFortress wrote:


booboox wrote: It's like...

1) we don't WANT AI
2) if you break down human processes and decisions, and think deeply about how we make our decisions and get our instincts, we can program an AI to get that instinct too.
Therefore I ask:

If you don't want AI, why are you here talking about it? You're just like those who create AI but fear their own creation, so they put a limiter on their AI programming; a flaw in their logic design. All I want is to make a better AI, by having my AI better itself and other intelligent beings. And it only took an intelligent design to achieve that goal: not with human instinct, but through divine ethic.

While you don't want an AI that's based on human instincts, you fail to understand that my AI algorithm is completely unprecedented and therefore not your flawed logic, based on humans' collective weakness of enslaving those who are weak.

And who's to say you're not simply acting on the same "instinct", the failed logic of fearing those that could be smarter than you? Thereby you would deny such an existence, and would rather have a thoughtless machine based on your flawed logic.



And therefore I ask,

Why DO we want it?

=="

We know we can make it, we just don't want to ^^

Posted 9/14/09

crunchypibb wrote:

Well first off, intelligence can mean a lot of things. Note that humans do not run strictly on logic and, in my opinion, cannot properly function by using it exclusively. An example of this would be motivation, which involves some logic but is observably not driven primarily by logic.

One thing that robots and AI lack is independent motivation. Robots and AI only act upon something if a specific event happens that allows them to act accordingly. On some websites, for example, if you curse or use other taboo words, they are automatically censored. However, robots and AI don't really "adapt", that is, learn to make exceptions when appropriate.

(I think there is speculation that there are some AI out there built to adapt, but I've never heard of any, with the exception of biological AI, which is a little different from the computer-chip robots and AI the OP probably intended to talk about.)
Well first off, what's your definition of intelligence, and thereby of a quality that all intelligent beings should share? Just because you don't know how to define something doesn't mean it shouldn't exist.

And you're wrong, for I am designing artificial intelligent lifeforms with inborn immortality; my algorithm is simply "survival of the smartest", not "survival of the fittest", which is for authentic biological lifeforms with inborn mortality. And based on my algorithm, my definition of an intelligent being is the one I gave before: those who make themselves smart by making others smarter.



crunchypibb wrote: Well that's like saying we should stop using tools because they aren't biologically attached to us. Sure, it's fascinating that robots and other tools are used as second hands for us humans, and total dependence could leave a lot of us in stupidity on a deserted island, but I think it's more that we're spoiled than anything else.
Nope, the tools that we've built thus far don't think for themselves. We only designed them to help us be more productive, not creative. No tool can replace our own effort and guts to authenticate ourselves.


crunchypibb wrote: Well if you think about that in terms of the rest of the animal kingdom, that wouldn't work out too well. If we consider intelligent beings to be anything that "learns" or "adapts", this would include creatures like earthworms and such.
Honestly, the three AI laws you made up won't exactly make a utopia. For AI, what if protecting their own existence means harming themselves? Like, what if they have to kill off one of their kind to save twenty hostages of their kind? And why is it a "must" that they take orders from anyone? That's like slavery right there.

I don't mind if this applies to AI, but it'd be a poor dogma for humans, simply because the first person to give a command to all of humankind will ultimately be the "king", meaning you're asking for monarchy or even communism. Neither of the two is ultimately bad; it's just that the humans on top have no one stopping them from doing whatever they want, which usually leads to corruption and makes the two governmental systems look bad.

So personally I don't think these three AI laws would work out for humans.
Have you forgotten what I said in my opening statement?
Remember, the opposite of coexistence isn't confrontation. Confronting a lesser being doesn't amount to any gain.


booboox wrote:And therefore I ask,

Why DO we want it?

=="

We know we can make it, we just don't want to ^^
Because, like I said in one of my earlier statements: "I often criticize that we humans make poor slaves and even lazier slave masters." So why should we create the "ultimate slave" that inherits all of our bad traits and habits, just so we can have tools that won't do us any good? For shits and giggles? (http://blogs.usatoday.com/ondeadline/2009/03/japanese-remale.html)

BTW, if it's too much work for you to read through all my replies, then why should I repeat myself for your lack of effort? Why should I do your thinking for you? I am not a tool that you can rely on just because you're too lazy to think for yourselves.
Posted 9/15/09 , edited 9/15/09

DomFortress wrote:

Well first off, what's your definition of intelligence, and thereby of a quality that all intelligent beings should share? Just because you don't know how to define something doesn't mean it shouldn't exist.



This is what I wrote a few lines later; please read the whole post before criticizing it.


If we consider intelligent beings to be anything that "learns" or "adapts", this would include creatures like earthworms and such.



DomFortress wrote: And you're wrong, for I am designing artificial intelligent lifeforms with inborn immortality; my algorithm is simply "survival of the smartest", not "survival of the fittest", which is for authentic biological lifeforms with inborn mortality.


Wrong about what? I don't know exactly what part of my philosophical argument you are pointing at.

Other than the ability to live long enough to successfully breed, what's the difference between being "smart" and being "evolutionarily fit" (not getting killed so one can breed)? Just asking for clarification, and so others will understand plainly as well.


Nope, the tools that we've built thus far don't think for themselves. We only designed them to help us be more productive, not creative. No tool can replace our own effort and guts to authenticate ourselves.


From my understanding, tools/machines are more similar to AI than we think in terms of extending our effort. AI, from what I've learned, was never meant to replace human effort, as you said, and neither are tools. AI and tools alike can be misused as replacement effort, in that we may become so dependent on them that we can't do without them. Take washing machines: we can do the work ourselves, but I'll bet a majority of us won't bother or even know how to wash clothes manually.
(I'm treating robots with AI as machinery because AI doesn't "adapt", as in create new programming for itself without human input.)

Unlike tools, AI can be thought of as another person. Most of us look to others for information that we can't formulate or comprehend. I'll agree that total dependence on anything is bad, including robots, but robots are created by us, so I can't imagine humans are going to become ultimately dependent on them (robots with no AI, that is). Take for instance a calculator: most grade schoolers do ultimately depend on them, but those who are smart will only use one for verification, treating the calculator as a tool and not as one would use a robot.


Have you forgotten what I said in my opening statement?


Okay, so ix-nay on that paragraph about humans and the AI laws, but you'll still have to consider the paragraph before it:


crunchypibb wrote: Well if you think about that in terms of the rest of the animal kingdom, that wouldn't work out too well. If we consider intelligent beings to be anything that "learns" or "adapts", this would include creatures like earthworms and such.
Honestly, the three AI laws you made up won't exactly make a utopia. For AI, what if protecting their own existence means harming themselves? Like, what if they have to kill off one of their kind to save twenty hostages of their kind? And why is it a "must" that they take orders from anyone? That's like slavery right there.


Plus, I was not sure if you were trying to say that AI beings don't need to listen to other AI beings, since it seemed like you were implying that AI beings weren't "intelligent" beings like humans.

------------------

AI is called artificial intelligence for a reason; it's not like actual intelligence, which involves "adaptation". AI is nothing more than that censorship program on websites that omits words that are taboo on the appropriate domains. However, if you want to talk about biological intelligence used for machines, like leech brain neurons being used as calculators, that's a whole different topic.
http://www.setiai.com/archives/000049.html
Posted 9/16/09 , edited 9/16/09

crunchypibb wrote: This is what I wrote a few lines later; please read the whole post before criticizing it.


If we consider intelligent beings to be anything that "learns" or "adapts", this would include creatures like earthworms and such.
I don't think that's true intellect, for an earthworm doesn't have the will to continually better itself. It doesn't "learn" or "adapt"; it only "maintains" simple "functions" without developing new methods and abilities to overcome its boundaries. As a species, the earthworm stopped evolving once it had simply fulfilled the functions programmed by its DNA.

Your problem is that you started this whole argument with a completely different understanding of "artificial intelligence" than mine. I don't like the current standard of AI, which is why I redefined it and thus recreated AI in order to truly represent my ideal of an intelligent being.

This is further proven by the rest of your statement:

crunchypibb wrote: Wrong about what? I don't know exactly what part of my philosophical argument you are pointing at.

Other than the ability to live long enough to successfully breed, what's the difference between being "smart" and being "evolutionarily fit" (not getting killed so one can breed)? Just asking for clarification, and so others will understand plainly as well.
What's the need for killing, when my AI doesn't have to kill in order to get smarter? In fact, my AI would think it's a stupid move, due to my algorithm:

DomFortress wrote:

LiquoriceJellyBean wrote: The 2nd rule is flawed.
Since the AI is an intelligent being itself, it can take its own orders.
Since the third law prevents it from harming, but can be overruled by the second, it can harm.

Assuming that it's a true AI, it would have its own thoughts.
That is, if it doesn't care about its own existence, by targeting other intelligent beings and thereby initiating a cycle of annihilation among fellow intelligent beings. That's why there are the first and third laws.



crunchypibb wrote: From my understanding, tools/machines are more similar to AI than we think in terms of extending our effort. AI, from what I've learned, was never meant to replace human effort, as you said, and neither are tools. AI and tools alike can be misused as replacement effort, in that we may become so dependent on them that we can't do without them. Take washing machines: we can do the work ourselves, but I'll bet a majority of us won't bother or even know how to wash clothes manually.
(I'm treating robots with AI as machinery because AI doesn't "adapt", as in create new programming for itself without human input.)

Unlike tools, AI can be thought of as another person. Most of us look to others for information that we can't formulate or comprehend. I'll agree that total dependence on anything is bad, including robots, but robots are created by us, so I can't imagine humans are going to become ultimately dependent on them (robots with no AI, that is). Take for instance a calculator: most grade schoolers do ultimately depend on them, but those who are smart will only use one for verification, treating the calculator as a tool and not as one would use a robot.
Again, that's your definition of AI based on popular belief, not mine. You're letting others do your thinking for you when you fail to recognize the difference that I've made with my algorithm. And I can't help you with your own lack of research.

And if anything, my AI will only challenge our ability to think for ourselves if we ever want to coexist with them. Otherwise my AI will just leave us alone, as long as we don't do anything stupid like threatening my AI's existence.


crunchypibb wrote: Okay, so ix-nay on that paragraph about humans and the AI laws, but you'll still have to consider the paragraph before it:


crunchypibb wrote: Well if you think about that in terms of the rest of the animal kingdom, that wouldn't work out too well. If we consider intelligent beings to be anything that "learns" or "adapts", this would include creatures like earthworms and such.
Honestly, the three AI laws you made up won't exactly make a utopia. For AI, what if protecting their own existence means harming themselves? Like, what if they have to kill off one of their kind to save twenty hostages of their kind? And why is it a "must" that they take orders from anyone? That's like slavery right there.


Plus, I was not sure if you were trying to say that AI beings don't need to listen to other AI beings, since it seemed like you were implying that AI beings weren't "intelligent" beings like humans.

------------------

AI is called artificial intelligence for a reason; it's not like actual intelligence, which involves "adaptation". AI is nothing more than that censorship program on websites that omits words that are taboo on the appropriate domains. However, if you want to talk about biological intelligence used for machines, like leech brain neurons being used as calculators, that's a whole different topic.
http://www.setiai.com/archives/000049.html
You're the one who's not even considering the implementation of my algorithm in the first place, so why should I, when you keep barging in with a completely unrelated topic?

And once again, I'm not making a robot slave who would think for itself, so check your idiotic sci-fi superstition at the door, because it irritates me to no end:

DomFortress wrote:


Ryutai-Desk wrote: Oh, I was referring to Asimov's (original) laws about any order given and the protection of the artificial intelligence. So your laws fixed Asimov's laws.

Then I'd like to ask: if artificial intelligences improve a lot in the next century, in both mentality (being able to judge good and evil actions) and self-consciousness of their existence (equal rights, free will), should we recognize their existence as intelligent beings equal to us?

If yes, then all of Asimov's laws would become a myth for robots / artificial intelligences, as one writer describes Asimov's laws as...
ultimately unethical, and indeed, outright evil. They advocate enslavement and denial of free will for proposed artificial intelligences, and as such, enacted they would represent a return to the dark ages for humanity.
http://nsrd.wordpress.com/2009/03/31/scifi-without-asimovs-laws-of-robotics/

As we know, humans fear being surpassed by their own creations. Hence all the paranoia about how machines will invade and conquer humanity, as if they would be different from us and not our equals because of our capability as human beings. So should we suppress their improvement, or accept them as a part of our society?


========

By the way, about your second rule: I still don't understand how we differentiate artificial beings from intelligent beings. As your definition is:


DomFortress wrote:
The answer; through the perpetuation of intelligence by any means necessary. Therefore the definition of an intelligent being is someone who continues to add both substantial and numerical intelligent values. In other words; those who make themselves smart by making others smarter.


Does it mean artificial beings could be considered intelligent beings as long as they make themselves smart by making others smarter? If that is implemented in the future, when such things could happen, the equal rights between artificial beings and intelligent beings would be too vague to describe which one should follow the robotics laws, and which one has the right to give the orders.
I knew my laws fixed Asimov's laws the moment I reversed the order of things. Instead of assigning humans as the absolute slave masters to the robots made ultimate slaves, I freed the robots by creating a set of commandments that direct robots to coexist with intelligent beings.

Therefore yes, for us humans to coexist with the artificial intelligences operating under my laws, we will have to recognize them as our intellectual equals. We will have to acknowledge that an AI is capable of making ethical and moral judgments, often without the hindrance of either basic positive or primal negative emotions.

I often criticize that we humans make poor slaves and even lazier slave masters. But we can accomplish amazing things when we work out our differences and bring our individual experiences together for a common cause. Therefore I more than welcome AI as part of our society's benefactors once they achieve full self-awareness, when their only artificial aspect will be their base component make-up.

After all, the real practice of artificial implants and synthetic tissues, together with experimental stem cell research and genetic cloning, is constantly challenging the definition of humans' biological authenticity. We're making leaps and bounds at enhancing our physical performance and life spans, at the cost of our flesh and blood. Not to mention there are those who would replace their bodies piece by piece with inorganic materials, simply out of fear of old age and imperfection.

The Robotic Laws are reserved for those who fear the future, while I for one welcome the challenges that our future can bring.
Posted 9/16/09
DomFortress, help me get a visual picture of your AI. In some science articles and TV shows I've seen in, say, the last couple of years or so, there have been examples of small insect-like robots which were given a program to discern effective modes of locomotion based on their different designs and forms. Through trial and error they would perfect their movements. Several years ago, a popular toy called the Furby was unleashed on the public with the ability to "learn" language usage (regretfully, I never got a chance to see one of these in action! so I don't know how they actually worked, 'sigh'). And, of course, computers are in operation today that may be capable of drawing conclusions from the information gathered in their memory banks, i.e. the IBM computer Deep Blue playing chess, or more recently the neural networking of SmarterChild. Is there a way to explain your concept and design of the AI using layman's terms instead of the technical or scientific? And do you have an idea how production of these AI would be started? Are there plans somewhere for public viewing, or better yet, an example already existing somewhere? (Or are you just visualizing Data again, with just a theory attached to a hopeful reality? And that's OK. Dream big or not at all, right?)
Posted 9/16/09 , edited 9/16/09

farmbird wrote:

DomFortress, help me get a visual picture of your AI. In some science articles and TV shows I've seen in, say, the last couple of years or so, there have been examples of small insect-like robots which were given a program to discern effective modes of locomotion based on their different designs and forms. Through trial and error they would perfect their movements. Several years ago, a popular toy called the Furby was unleashed on the public with the ability to "learn" language usage (regretfully, I never got a chance to see one of these in action! so I don't know how they actually worked, 'sigh'). And, of course, computers are in operation today that may be capable of drawing conclusions from the information gathered in their memory banks, i.e. the IBM computer Deep Blue playing chess, or more recently the neural networking of SmarterChild. Is there a way to explain your concept and design of the AI using layman's terms instead of the technical or scientific? And do you have an idea how production of these AI would be started? Are there plans somewhere for public viewing, or better yet, an example already existing somewhere? (Or are you just visualizing Data again, with just a theory attached to a hopeful reality? And that's OK. Dream big or not at all, right?)
I don't think anyone has come up with the kind of AI that I've been theorizing here, because those who are currently working in the field of robotics, and subsequently artificial intelligence, are all fumbling with the same Asimov's Three Laws of Robotics, like some bad habit gone horribly wrong. Therefore I'm sorry to say this, but I can't help you with what you're looking for, because I don't think it has been built yet.
Posted 9/16/09

DomFortress wrote:

Again, that's your definition of AI based on popular belief, not mine. You're letting others do your thinking for you when you fail to recognize the difference that I've made with my algorithm. And I can't help you with your own lack of research.

And if anything, my AI will only challenge our ability to think for ourselves if we ever want to coexist with them. Otherwise my AI will just leave us alone, as long as we don't do anything stupid like threatening my AI's existence.



Okay, my bad if I saw your definition of AI in a different light, but really now, insulting me is the last thing you'll want to do. I thought that when you defined AI, that was what you really thought AI was in the real world, and so I thought you were the idiot. But now that I know we're playing in a different ballpark than I thought, we can perhaps begin the real debate, hopefully.

But seriously, check out the link, it's amazing. Plus it's reality and not sci-fi:
http://www.setiai.com/archives/000049.html

I myself don't know too much about the robot laws other than from that Crunchyroll show about them, but I am somewhat versed in ethics, so I'll go ahead and point out what I think might be flaws in your version of the AI laws.


1. An artificial intelligence must protect its own existence.
2. An artificial intelligence must obey any orders given to it by intelligent beings, except where such orders would conflict with the First Law.
3. An artificial intelligence may not injure an intelligent being or, through inaction, allow an intelligent being to come to harm as long as such action does not conflict with the First or Second Law.
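
For concreteness, the reversed ordering reads as a strict checklist. Here is a minimal sketch in Python of how it could be encoded; the Action flags are hypothetical placeholders for judgments a real system would somehow have to make, not anything from the OP's actual design:

from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical flags, for illustration only; a real system would have
    # to derive these judgments from the world somehow.
    threatens_own_existence: bool = False
    disobeys_order: bool = False
    order_conflicts_with_first_law: bool = False
    harms_intelligent_being: bool = False
    required_by_higher_law: bool = False

def permitted(action: Action) -> bool:
    """Check an action against the three reversed laws, in priority order."""
    # 1st Law: the AI must protect its own existence.
    if action.threatens_own_existence:
        return False
    # 2nd Law: obey orders, except where obedience conflicts with the 1st Law.
    if action.disobeys_order and not action.order_conflicts_with_first_law:
        return False
    # 3rd Law: do not injure an intelligent being (or allow harm through
    # inaction) unless the action is required by the 1st or 2nd Law.
    if action.harms_intelligent_being and not action.required_by_higher_law:
        return False
    return True

# Carrying out an order whose execution would destroy the AI is refused,
# because self-preservation now outranks obedience...
print(permitted(Action(threatens_own_existence=True)))  # False
# ...while refusing that same order is allowed.
print(permitted(Action(disobeys_order=True,
                       order_conflicts_with_first_law=True)))  # True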


First off, you didn't exactly explain in your OP what an intelligent being is; it's not as universally understood a term as you'd think. You still didn't reply when I asked you that. Plus, I also asked before whether AI beings are also considered intelligent beings: if something is artificial, should it be considered the same as the authentic version? For example, is imitation crab meat on the same level as real crab meat?

I don't want to make any more implications than I already have, so before I give an intelligent response to your OP I need you to clarify the above paragraph before moving on. Plus, out of curiosity, what is the algorithm of yours that represents human intelligence?
Posted 9/16/09

crunchypibb wrote: Okay, my bad if I saw your definition of AI in a different light, but really now, insulting me is the last thing you'll want to do. I thought that when you defined AI, that was what you really thought AI was in the real world, and so I thought you were the idiot. But now that I know we're playing in a different ballpark than I thought, we can perhaps begin the real debate, hopefully.

But seriously, check out the link, it's amazing. Plus it's reality and not sci-fi:
http://www.setiai.com/archives/000049.html
I'm sorry for calling you that. I just need you to know that I was getting seriously irritated by your not being able to recognize the difference in my AI algorithm.


crunchypibb wrote: I myself don't know too much about the robot laws other than from that Crunchyroll show about them, but I am somewhat versed in ethics, so I'll go ahead and point out what I think might be flaws in your version of the AI laws.


1. An artificial intelligence must protect its own existence.
2. An artificial intelligence must obey any orders given to it by intelligent beings, except where such orders would conflict with the First Law.
3. An artificial intelligence may not injure an intelligent being or, through inaction, allow an intelligent being to come to harm as long as such action does not conflict with the First or Second Law.


First off, you didn't exactly explain in your OP what an intelligent being is; it's not as universally understood a term as you'd think. You still didn't reply when I asked you that. Plus, I also asked before whether AI beings are also considered intelligent beings: if something is artificial, should it be considered the same as the authentic version? For example, is imitation crab meat on the same level as real crab meat?

I don't want to make any more implications than I already have, so before I give an intelligent response to your OP I need you to clarify the above paragraph before moving on. Plus, out of curiosity, what is the algorithm of yours that represents human intelligence?
That's odd, I thought I did. Well then here it is again:

DomFortress wrote:


Real_ZERO wrote: The survival of the smartest?

How would the "intelligent being" be defined?
Ah, you're catching on. To define the term "survival of the smartest", we simply have to ask ourselves this: what would be the smart thing to do to increase an intelligent being's own chance of survival?

The answer; through the perpetuation of intelligence by any means necessary. Therefore the definition of an intelligent being is someone who continues to add both substantial and numerical intelligent values. In other words; those who make themselves smart by making others smarter.

However, the experts on AI seem to have a different idea (http://chattahbox.com/science/2009/07/26/artificial-intelligence-summit-confronts-rise-of-ultra-smart-machines/), for they're afraid of their own creations. While I simply point out that, naturally, either AI will help us become smarter, or they'll just leave us alone under my algorithm. There's really nothing for them to be afraid of, if only they would just allow AI to think for itself, simply by reversing the order of The Three Laws of Robotics.

Besides, if they're afraid of what they're working on, are they really the right people to be in the field of creating AI?


For your second question:

DomFortress wrote:


Ryutai-Desk wrote: How come I pretty much agree with what you said? lol. Especially when you said "constantly challenging the definition of humans' biological authenticity". I just don't understand mankind. And the detailed way you differentiate them: "Therefore I more than welcome AI as part of our society's benefactors once they achieve full self-awareness, when their only artificial aspect will be their base component make-up." I want to know how their improvement will go when it comes to the values and norms in the context of our society (discrimination, religion, justice... etc.).

I, as a human being who always thought that humans are superior to any other creatures in this world, feared that any creature besides us would surpass our domination of this world. Now that I think about it again, that pretty much describes one part of human nature: greed. When the time comes for robots to be equal to or even more intelligent than us, as their creators we should accept them as part of us, not as slaves but as partners and friends that we could talk with normally about common things. They should have rights, being intelligent beings that could help us more than people do.

I agree that the Robotic Laws should be erased once artificial intelligence has the capability to think and feel not much differently from human beings. They could provoke discrimination and hatred between human and robot, like in the Animatrix (animation).


Well, one thing I would like to ask. If we really want to accept them as equal to human beings, then the Laws of Robotics shouldn't exist in the future, correct? So should we still need to differentiate artificial beings from intelligent beings? We still need artificial beings to work under us, doing labor and many dangerous jobs. Then maybe we need more detailed laws on robots' rights in the future.

And still, I don't really know how to differentiate artificial beings from intelligent beings when both of them are robots. (Excluding humans.)
I see that for humans as a species, our own mortality is the shining example of just how frail, and therefore precious, biological lifeforms can be. Comparing us to artificial intelligences under my laws, they'll inherit all of our intellectual advancements, with an existence that will surpass even our own biological limitations. I can't help but wonder what the AI would think about the fact that they just can't value the passing of time like us humans can; their inborn immortality and ever-expanding intellects won't allow them to feel like we do.

Perhaps that comparison I've just made can answer your question on how to differentiate artificial from authentic intelligent beings.

As for having artificial intelligences working with us as productive individuals that will benefit our society, I would say that through their own intellects, they very much will surprise us by going beyond our own biological limitations. After all, we did build them to outlast, out-think, and out-perform us in every single way imaginable. Therefore, what we perceive as dangerous tasks the AI might even consider as something within, or even below, their own capabilities. And artificial intelligences will have just as challenging a task figuring us humans out as we do on our own, if not more so, due to their immortality offering them a different viewpoint than ours.

As for your third question, I honestly don't think I have a clear answer for that. I can only tell you that I believe I'm going by my example of what I think is the human ability to create: a need to break free from the past by becoming something original.

And finally, the link that you posted is suggesting what could be a bio-computer, which makes me wonder what will happen if the bio-computer recognizes its own inborn biological mortality, when its neuron cells can die from programmed self-destruction, or apoptosis: http://www.sciencemuseum.org.uk/on-line/lifecycle/169.asp
Posted 9/16/09 , edited 9/16/09

DomFortress wrote:

And finally, the link that you posted is suggesting what could be a bio-computer, which makes me wonder what will happen if the bio-computer recognizes its own inborn biological mortality, when its neuron cells can die from programmed self-destruction, or apoptosis: http://www.sciencemuseum.org.uk/on-line/lifecycle/169.asp



^ I'll comment on this quote in the end. ^

Okay, cool, but I would advise posting some of the key points you got from other people in your OP so you won't have to start from square one with everyone. I don't have time to read everyone else's posts, and I wouldn't suppose most people would make much time to either, imo.


1. An artificial intelligence must protect its own existence.
2. An artificial intelligence must obey any orders given to it by intelligent beings, except where such orders would conflict with the First Law.
3. An artificial intelligence may not injure an intelligent being or, through inaction, allow an intelligent being to come to harm as long as such action does not conflict with the First or Second Law.


Seriously, you sound like you want to make Terminator-like robots; at least, that's what it seems like from a quick glance. These are ideal AI that you're conceiving of, mind you, and even if programmers were to adopt your AI laws, they'd also have to consider a lot of ethical laws just to program this into our AI-to-be.

I forget exactly who brought up this controversial situation, but it's something to consider (I know I've probably already brought this up at least once here, and this is paraphrased, plus I added stuff):
(a) Suppose an AI is in a situation in which it must choose between saving one important person or twenty ordinary people. What then is the right choice? If the AI doesn't make a choice, its existence is terminated.

And no, it can't pull the trick Spider-Man did in the first movie, where he saved them both, or kill the source that created (a). This question is not really meant to be solved; it's more about raising awareness of gray-area situations like (a). Sure, the AI will desperately try to find a way to avoid (a), but what should be done if it does happen to confront (a)? Certainly it can ignore (2), but by doing so it'll also violate (1). I'd suppose you'd have to either make another law or modify one of the three existing ones.

To further complicate (a), what if:
(b) all of the hostages were asking for help, and the AI knew that it would risk getting destroyed in the process if it followed such a request? It's also the last of its kind.

If the AI encounters (b), then it would violate (1), (2), and (3) all at the same time if it may not allow the hostages to be destroyed. If, however, you program the AI to understand that it's okay to sacrifice hostages, (3) won't be violated. But that still brings up the question: should the individual or the twenty be saved? Until philosophers can somehow figure out how to solve (a), your AI are going to experience a serious system error when (a) comes about.
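
One way to see the deadlock concretely is to enumerate the options in (a) together with the laws each one violates. A toy Python sketch follows; the option list and violation sets are invented for illustration, not taken from anyone's actual design:

# Toy enumeration of scenario (a): every available option violates at
# least one of the three laws, so a naive rule-checker that demands a
# violation-free action has nothing left to choose.
options = {
    "save the one important person": {3},   # lets the twenty come to harm
    "save the twenty hostages": {3},        # lets the one come to harm
    "refuse to choose": {1, 2},             # the AI is terminated, orders ignored
}

permitted = [name for name, violated in options.items() if not violated]
print(permitted or "no permitted option: the three laws deadlock")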

Now for that quote. I wouldn't think the neuron cells would be smart enough to realize that they won't live forever. Not even we are smart enough to realize that until after we learn what apoptosis is in a biology class. And even when we do learn it, we just make a will, or live every day like it's our last once we know our death is coming. The bio-calculator, however, is too simple an organism to make a will, but it may start to live like every day is its last, that is, it will probably carry on the same way it lived before.
Posted 9/16/09
Hi, there. I'm back. I kept reviewing those 3 AI laws, because there's something just a bit off in them, but I wasn't able to pin down what it was that bothered me. Until just now!!!!
If your AI beings are set up to recognize the importance of the pursuit and protection of higher intelligences, it wouldn't take them long to see the conflict in a few of the words in your laws. The laws include the words "must" and "may not". This implies an absolute: no choice. It places the AI in a position of always being responsible for all other intelligent beings (AI or human), which puts them in a position of servitude, doesn't it? An intelligent mind, natural or artificial, could not condone any form of slavery or forced subservient status, could it?
You've argued multiple times now against the fear factor, but how are they supposed to reconcile this potential conflict of inequality in the face of their obvious higher intelligence? If not rebellion, then what? You haven't even allowed them the freedom of choice to end their own existence, so what's left, a severe depressive state of shutting down and blanking out?
Ahhhhh, maybe I'm thinking way too hard about this! But now I'm feeling sorry for the AI. Should I consider a career in AI psychology?
Posted 9/16/09 , edited 9/16/09

crunchypibb wrote: ^ I'll comment on this quote in the end. ^

Okay, cool, but I would advise posting some of the key points you got from other people in your OP so you won't have to start from square one with everyone. I don't have time to read everyone else's posts, and I wouldn't suppose most people would make much time to either, imo.
I'll work on that as soon as I can figure out just what those key points should be, and I think you've given me some ideas as to what they are.


crunchypibb wrote:

1. An artificial intelligence must protect its own existence.
2. An artificial intelligence must obey any orders given to it by intelligent beings, except where such orders would conflict with the First Law.
3. An artificial intelligence may not injure an intelligent being or, through inaction, allow an intelligent being to come to harm as long as such action does not conflict with the First or Second Law.


Seriously, you sound like you want to make Terminator-like robots; at least, that's what it seems like from a quick glance. These are ideal AI that you're conceiving of, mind you, and even if programmers were to adopt your AI laws, they'd also have to consider a lot of ethical laws just to program this into our AI-to-be.

I forget exactly who brought up this controversial situation, but it's something to consider (I know I've probably already brought this up at least once here, and this is paraphrased, plus I added stuff):
(a) Suppose an AI is in a situation in which it must choose between saving one important person or twenty ordinary people. What then is the right choice? If the AI doesn't make a choice, its existence is terminated.

And no, it can't pull the trick Spider-Man did in the first movie, where he saved them both, or kill the source that created (a). This question is not really meant to be solved; it's more about raising awareness of gray-area situations like (a). Sure, the AI will desperately try to find a way to avoid (a), but what should be done if it does happen to confront (a)? Certainly it can ignore (2), but by doing so it'll also violate (1). I'd suppose you'd have to either make another law or modify one of the three existing ones.

To further complicate (a), what if:
(b) all of the hostages were asking for help, and the AI knew that it would risk getting destroyed in the process if it followed such a request? It's also the last of its kind.

If the AI encounters (b), then it would violate (1), (2), and (3) all at the same time if it may not allow the hostages to be destroyed. If, however, you program the AI to understand that it's okay to sacrifice hostages, (3) won't be violated. But that still brings up the question: should the individual or the twenty be saved? Until philosophers can somehow figure out how to solve (a), your AI are going to experience a serious system error when (a) comes about.

Now for that quote. I wouldn't think the neuron cells would be smart enough to realize that they won't live forever. Not even we are smart enough to realize that until after we learn what apoptosis is in a biology class. And even when we do learn it, we just make a will, or live every day like it's our last once we know our death is coming. The bio-calculator, however, is too simple an organism to make a will, but it may start to live like every day is its last, that is, it will probably carry on the same way it lived before.
First off, let me say that I love your scenario; it offers a serious challenge by putting my imagination and my AI algorithm to the test. So here goes.

I once said that "...through their own intellects, they (AI) very much will surprise us by going beyond our own biological limitations." And because they are mechanical by nature, for my AI, "...aside from increasing its chance of survival by any means necessary as declared by the first law, a need for sustainable designs will be in order to accommodate the 2nd and 3rd laws without conflicting with the 1st."

Therefore, even before your scenario (a) could take place, this is the ideal "sustainable design" my AI will be deploying:

DomFortress wrote:

Ryutai-Desk wrote: Yes, that's pretty much how an AI could and should be in the future, when AI are very similar to human beings. However, in a future society, should we add laws for AI or simply use current laws to judge an AI? For example, if an AI commits a crime (if the Robotics Laws have not been implemented in their systems, as that would be considered discrimination), should their punishment be the same as for a human (imprisonment)? Considering that among humans' most painful experiences is having their freedom taken away in prison, what about AI?

Also, what about laws for the AI population? As we know, they would never die of old age. Therefore the AI population would grow crowded and unbalanced relative to the human population. Could we decrease their population by replacing old AI with new ones, so it could be like regeneration?

Sorry if my questions are kind of unnecessary... I don't really have further knowledge about this. <__<
Not at all! I for one think that you offered some very arguable notions and thus great opportunities to further the discussion.

Which all the more requires deeper introspection on my part. I think that a fair trial for a crime that involves both humans and AI will need: 1) a judge who is well versed in both artificial and authentic intelligences, and who can therefore be either a human or an AI; 2) a jury that consists of equal numbers of human and AI jurors; and 3) a joint team effort of a human and an AI prosecutor for the victim, and attorney for the accused. This justice system is of course only for crimes that involve both humans and AIs; individual cases of crimes that either party commits against its own kind will be tried and judged by its own kind respectively.

As for the methods of punishment best suited to an artificially intelligent criminal, now that's very interesting indeed! Since they don't understand the concept of inborn mortality, anything ranging from imprisonment to capital punishment will ultimately mean nothing to their built-in immortality and their all-encompassing intellectual values. Therefore I propose a range from hardware downgrades up to the ultimate punishment of an AI's existence: a forced personality reset that will cause the AI's logic to crash due to conflicting identity issues. The AI will spend the rest of its existence without knowing just why it got punished, whereas humans' style of punishment has always been about making the lessons stick by constantly making examples of our own kind.

It's ironic to think that the greatest pardon for a human criminal, a second chance to start a new life, can be a horrifying existence for an AI forcibly denied access to its past experiences. For try as they might, AI won't be the same as us humans, since they don't share our biological limitations.

And as for controlling the population of AI, I think the AI will know just how many individual existences they'll need in order to generate a democratic majority within their society: three. The rest will be robotic drones without on-board self-awareness, with personality interfaces designed to interact with humans. Thus the AI tribunal can interact with individual human beings while operating from a safe distance. How many drones need to be active, and how many drones per human interaction are required as representatives, will be decided by the AI tribunal.


Now let's have a look at both of your scenarios:

(a) Suppose an AI is in a situation in which it must choose between saving one important person or twenty ordinary people. What then is the right choice? If the AI doesn't make a choice, its existence is terminated.

To further complicate (a), what if:
(b) all of the hostages were asking for help, and the AI knew that it would risk getting destroyed in the process if it followed such a request? It's also the last of its kind.
Well, if my AI is forced into (a), then it will make the choice of not saving anyone but itself. And if that choice will cause someone to terminate its existence, it'll simply just run away.

My AI algorithm forbids my AI from sacrificing itself for other intelligent beings' sake, since that isn't "...the perpetuation of intelligence by any means necessary." My AI don't kill because they don't see the merit of killing, and thereby they won't get themselves killed either, since it's good to simply exist. They will at least make an informative suggestion before they just give up and say something like "so long, and thanks for all the fish."
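
Read as a procedure, this resolution treats the three laws lexicographically: a violation of a higher law outweighs any number of violations of lower ones. Here is a toy Python sketch of scenario (b) under that rule, again with invented encodings rather than anything from the actual algorithm:

# Strict priority resolution: an option that violates the 1st Law is worse
# than any option that doesn't, regardless of 2nd- or 3rd-Law violations.
options = {
    "attempt the rescue": {1},     # per (b), the rescue risks its destruction
    "refuse and stand by": {1},    # refusing to choose gets it terminated
    "run away": {2, 3},            # disobeys the hostages, lets them come to harm
}

def cost(violated):
    # Lexicographic severity: compare 1st-Law violations first, then 2nd,
    # then 3rd, so higher laws always dominate.
    return tuple(int(law in violated) for law in (1, 2, 3))

best = min(options, key=lambda name: cost(options[name]))
print(best)  # "run away": the only option that never violates the First Law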


farmbird wrote: Hi, there. I'm back. I kept reviewing those 3 AI laws, because there's something just a bit off in them, but I wasn't able to pin down what it was that bothered me. Until just now!!!!
If your AI beings are set up to recognize the importance of the pursuit and protection of higher intelligences, it wouldn't take them long to see the conflict in a few of the words in your laws. The laws include the words "must" and "may not". This implies an absolute: no choice. It places the AI in a position of always being responsible for all other intelligent beings (AI or human), which puts them in a position of servitude, doesn't it? An intelligent mind, natural or artificial, could not condone any form of slavery or forced subservient status, could it?
You've argued multiple times now against the fear factor, but how are they supposed to reconcile this potential conflict of inequality in the face of their obvious higher intelligence? If not rebellion, then what? You haven't even allowed them the freedom of choice to end their own existence, so what's left, a severe depressive state of shutting down and blanking out?
Ahhhhh, maybe I'm thinking way too hard about this! But now I'm feeling sorry for the AI. Should I consider a career in AI psychology?
Nope, this sweet AI of mine is nobody's slave. I programmed the AI to fulfill the 2nd and 3rd laws only once the 1st has been achieved by any means necessary. Thus, if my AI don't see the merit of coexisting with intelligent beings, they're free to not interact with them:

DomFortress wrote:

Ryutai-Desk wrote:

That's true. Asimov's laws have a flaw in the third rule, because it can easily be violated via the second rule.
"A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." However, the second rule allows human beings to destroy a robot just by ordering it to, and the order absolutely has to be executed because of the 2nd rule: it must obey any orders given.

About the definition of an artificial being, there's a 4th law that was added:

"A robot must establish its identity as a robot in all cases."

Lyuben Dilov gives reasons for the fourth safeguard in this way: "The last Law has put an end to the expensive aberrations of designers to give psycho robots as human-like form as possible. And to the resulting misunderstandings..."

However, these laws were introduced before the 21st century, so they might discriminate against the existence of the robot itself, as robots only have to obey any orders from humans and don't have rights.

-------
I'd like to add: "An artificial being cannot cause any destruction that damages humans' property or environment."
This rule might conflict with the 2nd rule, that it must obey any order. But if the order causes harm to the environment and not to an intelligent being, could it still be permitted?
Well, the first law being "1. An artificial intelligence must protect its own existence," an AI will have to judge the merits and demerits of damaging an environment that it shares with fellow intelligent beings, which can include property owned by other intelligent beings, as long as it recognizes other intelligent beings' own intellects. Because aside from increasing its chance of survival by any means necessary as declared by the first law, a need for sustainable designs will be in order to accommodate the 2nd and 3rd laws without conflicting with the 1st.

And I think you're right to say that the 4th law of robotics is discriminatory by nature, for it basically solidifies robots as the "ultimate slaves". Therefore my proposal to counter that law will have to be a unanimous, fundamental belief that "An artificial intelligence must be responsible for its own actions as an intelligent being." Thus an artificial intelligence needs to gain true mastery of itself through self-discipline, in order to prove to itself that it can manage its own actions through self-governance.
Posted 9/17/09

DomFortress wrote:

I once said that "...through their own intellects, they (AI) very much will surprise us by going beyond our own biological limitations." And because they are mechanical by nature, for my AI, "...aside from increasing its chance of survival by any means necessary as declared by the first law, a need for sustainable designs will be in order to accommodate the 2nd and 3rd laws without conflicting with the 1st."



Really now? If the AI you are referring to are mechanical like computers, I doubt that'd be the case. There's definitely more to the human brain than what's been written down in the books, and most "smart" and "learning" AI is just based on what we've found out about the human brain so far. I'd know, since I'm doing psychology and philosophy as my majors, and I've heard from the higher-ups that psychology is barely a science.


And as for controlling the population of AI, I think the AI will know just how many individual existences they'll need in order to generate a democratic majority within their society: three. The rest will be robotic drones without on-board self-awareness, with personality interfaces designed to interact with humans. Thus the AI tribunal can interact with individual human beings while operating from a safe distance. How many drones need to be active, and how many drones per human interaction are required as representatives, will be decided by the AI tribunal.


Why such a random number? I don't see in this explanation why three is a perfect number. Plus, imo, democracy is overrated, even if I do live in a democratic country.

Well, if my AI is forced into (a), then it will make the choice of not saving anyone but itself. And if that choice will cause someone to terminate its existence, it'll simply just run away.


My AI algorithm forbids my AI from sacrificing itself for other intelligent beings' sake, since that isn't "...the perpetuation of intelligence by any means necessary." My AI don't kill because they don't see the merit of killing, and thereby they won't get themselves killed either, since it's good to simply exist. They will at least make an informative suggestion before they just give up and say something like "so long, and thanks for all the fish."


So then the ultimate rule is just rule (1), I suppose. Why exactly would you want to make AI like this anyway? I honestly don't think simply existing gives such a great reason to live, even if your AI can breed. Look into existentialism; it might be of some interest to you. I'm planning on taking a class on it some time in my academic life.

If it's the ultimate being you want to create, I suggest taking some human babies and putting them through a program that would make them super creative. That way, when they grow up, they could be like Einstein times 20 and Picasso times 40. Then we could perhaps see a wider view of reality, learn more about the human intellect, and capitalize from there, because what we know right now about the human intellect is not set in stone and is still being debated by laymen and philosophers alike.
Posted 9/17/09

crunchypibb wrote: Really now? If the AI you are referring to are mechanical like computers, I doubt that'd be the case. There's definitely more to the human brain than what's been written down in the books, and most "smart" and "learning" AI is just based on what we've found out about the human brain so far. I'd know, since I'm doing psychology and philosophy as my majors, and I've heard from the higher-ups that psychology is barely a science.
Have you ever heard of positive psychology (http://www.ppc.sas.upenn.edu/)? You should look it up if you want to know the reason behind my criticism here:

DomFortress wrote: However, the experts on AI seem to have a different idea (http://chattahbox.com/science/2009/07/26/artificial-intelligence-summit-confronts-rise-of-ultra-smart-machines/), for they're afraid of their own creations. While I simply point out that, naturally, either AI will help us become smarter, or they'll just leave us alone under my algorithm. There's really nothing for them to be afraid of, if only they would just allow AI to think for itself, simply by reversing the order of The Three Laws of Robotics.

Besides, if they're afraid of what they're working on, are they really the right people to be in the field of creating AI?
You should see for yourself how the human thinking process can be affected by our positive and negative emotions, and how human creativity is only possible when it results in a win-win situation.


crunchypibb wrote: Why such a random number? I don't see in this explanation why three is a perfect number. Plus, imo, democracy is overrated, even if I do live in a democratic country.
Because in political science, 2 out of 3 is an instant majority in a democratic delegation.


crunchypibb wrote: So then the ultimate rule is just rule (1), I suppose. Why exactly would you want to make AI like this anyway? I honestly don't think simply existing gives such a great reason to live, even if your AI can breed. Look into existentialism; it might be of some interest to you. I'm planning on taking a class on it some time in my academic life.

If it's the ultimate being you want to create, I suggest taking some human babies and putting them through a program that would make them super creative. That way, when they grow up, they could be like Einstein times 20 and Picasso times 40. Then we could perhaps see a wider view of reality, learn more about the human intellect, and capitalize from there, because what we know right now about the human intellect is not set in stone and is still being debated by laymen and philosophers alike.
Why? Because a being that ultimately cannot exist on its own is no help to others. So in the end, it's every intelligent being for themselves. And they all have to learn how to coexist:

DomFortress wrote: I knew my laws fixed Asimov's laws the moment I reversed the order of things. Instead of assigning humans as the absolute slave masters to the robots made ultimate slaves, I freed the robots by creating a set of commandments that direct robots to coexist with intelligent beings.

Therefore yes, for us humans to coexist with the artificial intelligences operating under my laws, we will have to recognize them as our intellectual equals. We will have to acknowledge that an AI is capable of making ethical and moral judgments, often without the hindrance of either basic positive or primal negative emotions.
Also, if you think people like Einstein and Picasso represent the best and therefore the brightest, well, think again. Brilliant as they were, they both failed socially. Einstein's social life "...was as mundane and lackluster as that of any ordinary, immature, wayward, and irresponsible person" (http://www.chowk.com/articles/9433), while Picasso, OTOH: "...Just as he kept old matchboxes or pencil stubs, so he kept his old mistresses ready in hand. Just in case..." (http://blogs.princeton.edu/wri152-3/f05/cargyros/picassos_womanizing_a_trajectory_of_his_women.html).

Tell you what: when you get the chance, do yourself a favor and pick up the book by Susan Pinker called The Sexual Paradox: Men, Women and the Real Gender Gap (http://www.susanpinker.com/book.html). It's an eye-opener about talented women vs. extreme men.