The Three Laws of Robotics and Artificial Intelligence
Posted 9/17/09
Hey! There's a thought -- do your AIs have gender?
Posted 9/17/09

farmbird wrote:Hey! There's a thought -- do your AIs have gender?
That made me laugh so hard I almost couldn't breathe!

I think I'll let my AI decide whether or not they want gender variants, or even whether such biological features have any merit. But personally, I do tend to like my companions to be women or, at the very least, somewhat gay, because I find them to be interesting conversationalists; if only some of them had a brain.
Posted 9/18/09
Posted 9/18/09 , edited 9/19/09
Had to fix those YouTube links for you, because you used the wrong tag on all of them.

The first one looks promising, in that the robot ASIMO is learning to make judgments based on object recognition. It can tell what is from what isn't (EDIT: as long as those teaching ASIMO don't lie to it, they won't set off a logic bomb that could crash its AI).

The second one is only a demonstration video about the mobility of ASIMO.

The third one was a freak of nature.

And the last one is more of an audio/visual interface hooked onto a computer that's been taught to read.

All I can see is that none of them have original thought, aka creativity, when all they can do is what they've been told. There's therefore still a long way to go before they're fully self-aware.
Posted 9/18/09

DomFortress wrote:

Have you ever heard of positive psychology(http://www.ppc.sas.upenn.edu/)?



Personally, I think positivism is overrated. It's cool, but pessimism has its good points too; it makes you more aware of pitfalls in life. Too much positivism blinds you to the impending evils in life (which will happen no matter what your view of life is, especially when they come from an outside source), and pessimism blinds you to the good things in life. The point is, every trait and counter-trait is good to use depending on the situation. I'm curious why you said the above quote, since I see it has little to do with what I was talking about. If it had to do with my logic, I'll tell you that a subjective reality doesn't make true reality in the real world the majority of the time; it just seems that way to us because of the thought process of causation.


And how human creativity is only possible when resulting in a win-win situation


Edgar Allan Poe, nuff said.


Why? Because a being that ultimately cannot exist on its own is no help to others. So in the end, it's every intelligent being for themselves, and they all have to learn how to coexist.


Oh boo, so you don't want to continue humankind, but instead just create a new, better generation to replace humankind? It's wishful thinking, but I don't see that as possible. Think of it this way: normal people can't safely lift more than their own body weight. Even if we do put effort into making creatures better than us, it's going to take a lot of effort and will most likely put humankind in danger somewhere in or after the process.


Also, if you think people like Einstein and Picasso represent the best and therefore the brightest, well, think again. Brilliant as they were, they both failed socially. Einstein "...was as mundane and lackluster as that of any ordinary, immature, wayward, and irresponsible person" (http://www.chowk.com/articles/9433), while Picasso, OTOH, "...Just as he kept old matchboxes or pencil stubs, so he kept his old mistresses ready in hand. Just in case..." (http://blogs.princeton.edu/wri152-3/f05/cargyros/picassos_womanizing_a_trajectory_of_his_women.html).


Yes, I know they're not the greatest people in history, but they did stand out. Even if Einstein's theory of relativity (or whatever) made the atomic bomb, his theory gets destroyed by quantum physics; I know that. I just figured the more "creative building blocks" we had on the table, the more we could understand what a perfect being is. I just don't think now is the right time to conceive one, especially since I've never heard from a highly acclaimed person what a perfect being would be.


Therefore, yes, for us humans to coexist with artificial intelligences operating under my laws, we will have to recognize them as our intellectual equals. We will have to acknowledge that an AI is capable of making ethical and moral judgments, often without the hindrance of either basic positive or primal negative emotions.


Your ideas are good and I do like the effort; kudos to you. However, keep in mind there are always going to be validly pointed-out flaws that people will bring up. For example, existancialism states that simply existing isn't going to do any creature good, and it seems that's all your creatures are going to worry about. Not only that, but there's a high chance their ethics are going to be completely different from ours, because their ethics will be based on Rule 1 for the most part, so there's going to be an extreme culture clash between us and AI, bigger than all of the ones humans have experienced combined.

On a random note, read Blind Faith by Ben Elton, I know you'll love it because I did too.
Posted 9/19/09
( psssst, crunchpibb, your 'c' is supposed to be a 't' in existentialism-- your philosophy professors may not let that slide. And maybe you'd like to join me in the new profession I made up -- A I psychology.......... "cause, either way, I think you and I are putting way too much thought into this, but we could be prepared if someone ever figures out how to get these A Is started, eh?)
Posted 9/19/09 , edited 9/19/09

crunchypibb wrote:Personally, I think positivism is overrated. It's cool, but pessimism has its good points too; it makes you more aware of pitfalls in life. Too much positivism blinds you to the impending evils in life (which will happen no matter what your view of life is, especially when they come from an outside source), and pessimism blinds you to the good things in life. The point is, every trait and counter-trait is good to use depending on the situation. I'm curious why you said the above quote, since I see it has little to do with what I was talking about. If it had to do with my logic, I'll tell you that a subjective reality doesn't make true reality in the real world the majority of the time; it just seems that way to us because of the thought process of causation.
Well, I'll tell you one thing I know: I didn't come up with my AI algorithm because I was a pessimist, when pessimism is ultimately the negative thinking that "nothing one does will matter, because it won't change anything."

Of course pessimism can make someone "avoid" the many pitfalls in our lives created by our evil, when one has already fallen into the first pitfall: one's own negative thinking, pessimism.

"For there is nothing either good or bad, but thinking makes it so." -William Shakespeare, Hamlet-


crunchypibb wrote:Edgar Allan Poe, nuff said.
The oversensitive and pessimistic poet? Well, that would explain why I was never a fan of his works. Perhaps if he had been able to write something a bit more uplifting, he wouldn't have said something like "Lord help my poor soul" in his final moments.


crunchypibb wrote:Oh boo, so you don't want to continue humankind, but instead just create a new, better generation to replace humankind? It's wishful thinking, but I don't see that as possible. Think of it this way: normal people can't safely lift more than their own body weight. Even if we do put effort into making creatures better than us, it's going to take a lot of effort and will most likely put humankind in danger somewhere in or after the process.
You see my avatar? It's Chinese for "effort" and "guts." So when human beings have the effort and guts to create something better than themselves, something that will ultimately become their successor, I'd say the human race did the smart thing. I know I'll be a happy man if I get to see my own creation succeed beyond my own limitations.


crunchypibb wrote:Yes, I know they're not the greatest people in history, but they did stand out. Even if Einstein's theory of relativity (or whatever) made the atomic bomb, his theory gets destroyed by quantum physics; I know that. I just figured the more "creative building blocks" we had on the table, the more we could understand what a perfect being is. I just don't think now is the right time to conceive one, especially since I've never heard from a highly acclaimed person what a perfect being would be.
Any moron can do something jackass and still manage to stand out among the rest of us somehow. And since a perfect human doesn't exist, I opted for the next best thing: perfecting oneself by creating a better successor, one that will raise the standard of perfection by challenging its own predecessors. That's making change for the better.


crunchypibb wrote:Your ideas are good and I do like the effort; kudos to you. However, keep in mind there are always going to be validly pointed-out flaws that people will bring up. For example, existancialism states that simply existing isn't going to do any creature good, and it seems that's all your creatures are going to worry about. Not only that, but there's a high chance their ethics are going to be completely different from ours, because their ethics will be based on Rule 1 for the most part, so there's going to be an extreme culture clash between us and AI, bigger than all of the ones humans have experienced combined.
And who's to say those won't be changes for the better? My AI rule No.1 is "... to increase an intelligent being's own chance of survival, through the perpetuation of intelligence by any means necessary." And killing won't be considered a smart thing for my AI to do, because they don't need to kill when the act of killing has nothing to do with "the perpetuation of intelligence." So what's actually there for us to worry about?
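The filtering described here can be sketched as a toy action gate. To be clear, this is only an illustration under assumptions: the action names, the numeric effects, and the scoring scheme are all invented for the example, not anyone's actual design.

```python
# Hypothetical illustration of a "rule No.1" style action filter:
# an action is permitted only if it doesn't reduce the total stock
# of intelligence. All actions and numbers below are made up.

ACTIONS = {
    # action: (effect on own survival odds, effect on total intelligence)
    "share knowledge": (0.0, +0.5),
    "build successor": (-0.1, +1.0),
    "kill rival":      (+0.2, -1.0),
    "self-repair":     (+0.5,  0.0),
}

def permitted(action):
    """Gate an action on the second component only: it must not
    reduce the total amount of intelligence in the world."""
    _, intelligence_delta = ACTIONS[action]
    return intelligence_delta >= 0

# "kill rival" raises the agent's own survival odds, but the gate
# still rejects it, matching the claim that killing is never the
# smart move under this rule.
allowed = sorted(a for a in ACTIONS if permitted(a))
```

Note the gate ignores the self-interest column entirely; an action that helps the agent but destroys intelligence is rejected.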


crunchypibb wrote:On a random note, read Blind Faith by Ben Elton, I know you'll love it because I did too.
I seriously have to disagree, when all the author managed to do with this book is downplay the seriousness of our current issues with his dark humor.
Posted 9/19/09

DomFortress wrote:

I seriously have to disagree, when all the author managed to do with this book is downplay the seriousness of our current issues with his dark humor.



Eh, sometimes it's good to look at things in a different light. Especially for me: I'm a philosopher, and I end up reading a lot of stuff I don't agree with, but I always learn something useful I couldn't have learned elsewhere.
Three things I've got left for you: existancialism, subjectivism, and most importantly rhetorical situation.
Posted 9/19/09

crunchypibb wrote:Eh, sometimes it's good to look at things from a different light. Especially for me, I'm a philosopher and I end up reading a lot of stuff I don't agree with but I always learn something useful I couldn't have learned elsewhere.
Three things I've got left for you: existancialism, subjectivism, and most importantly rhetorical situation.
Rhetorical situation: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Subjectivism: A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

Existentialism: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
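The three laws quoted here form a strict priority chain: the First Law overrides the Second, and the Second overrides the Third. A minimal sketch of that ordering follows; the boolean fields are hypothetical stand-ins for judgments a real robot would have to make, not any actual robotics API.

```python
from dataclasses import dataclass

# Toy model of the strict priority in Asimov's Three Laws, as quoted
# above. Each flag is an assumed, pre-computed judgment.
@dataclass
class Option:
    harms_human: bool     # acting (or failing to act) harms a human -> First Law
    obeys_order: bool     # acting follows a human's order -> Second Law
    preserves_self: bool  # acting protects the robot's existence -> Third Law

def permitted(o: Option) -> bool:
    if o.harms_human:      # First Law overrides everything
        return False
    if not o.obeys_order:  # Second Law, subject to the First
        return False
    return True

def choose(options):
    """Among options legal under the first two Laws, self-preservation
    (the Third Law) only ever acts as a tiebreaker."""
    legal = [o for o in options if permitted(o)]
    return max(legal, key=lambda o: o.preserves_self, default=None)
```

For example, an option that harms a human is rejected even if it obeys an order and preserves the robot, while between two law-abiding options the robot picks the self-preserving one.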
4557 cr points
Send Message: Send PM GB Post
27 / M / Bermuda Triangle
Offline
Posted 9/20/09

DomFortress wrote:


crunchypibb wrote:Eh, sometimes it's good to look at things from a different light. Especially for me, I'm a philosopher and I end up reading a lot of stuff I don't agree with but I always learn something useful I couldn't have learned elsewhere.
Three things I've got left for you: existancialism, subjectivism, and most importantly rhetorical situation.
Rhetorical situation: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Subjectivism: A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

Existentialism: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


.....................That's not what I meant. Did you even look up the terms? I'm starting to doubt your credibility.........and those three things were meant for you, not your AI laws.
Posted 9/20/09

crunchypibb wrote:.....................That's not what I meant. Did you even look up the terms? I'm starting to doubt your credibility.........and those three things were meant for you, not your AI laws.
And I'm saying that each of Asimov's Three Laws of Robotics is the result of one of those philosophies, respectively. That's my reply to you: in a sense, I do understand them perfectly, by their designs and implications, and by how they all end up describing the robots under Asimov's Three Laws as thoughtless machinery (rhetorical situation), forever subject to their human masters (subjectivism), for the rest of their existence as slaves (existentialism).

But what's that got to do with my AI?
Posted 9/21/09 , edited 9/21/09
Too bad this debate seems to be getting mired in confusion about the differences between A I and robots.
It's getting twisted too far around the original premise, based on the laws of robotics from the fictional works of Isaac Asimov.

Though it's been a while since I read them, it seems to me Asimov himself must have pondered the self- awareness issues of the A I present in the development of the robots over time in his story lines. Wish I had my volume of his collected robot stories nearby.
One story dealt with a robot who could not bring itself to tell the truth about some information because of its potential to hurt the feelings of a human, and because of its effort to maintain confidentiality with another. Another story concerned a robot successfully posing as a human and setting up an elaborate scheme, pretending to harm a human in order to prove he was "human." The conflict I see was the difference between following orders with enough intelligence to carry them out versus the ability to discern the expression of human emotion. Additionally, the intelligence needed to mimic human behavior in all aspects of life (in the case of this story, to protect the identity and privacy of a human) was hindered by the law forbidding a robot to harm another human in exhibiting, as proof, his portrayal of human behavior. Now, if readers are familiar with these stories, reread them with the 3 Laws of A I in mind -- and the conflicts would actually melt away in these scenarios for the A I robots, IMO, maybe........
I only offer these as possible scenarios in which Asimov may have been dealing with the complication of the robotic laws not being a very good fit for A I. This is only a wild guess mind you, and if I can find my copy I will probably be reading those stories again- always a good read!
Posted 10/5/09
You know, I think a question we're missing here is whether it's even possible for AI to have self-awareness, or any awareness at all, for that matter. The question is whether it perceives; is it a consciousness like a person is? Personally, I don't think so. That's why we don't see creativity in machines: creativity requires something to manipulate the brain/circuits; otherwise, all that's possible is to follow a program. You can't achieve consciousness with technological advancement. All you can do is make machines able to perform more advanced tasks based on the programs we give them. Their responses are controlled by the laws of physics, not by their own decisions.

Of course, hard determinists would say that biological life is no different, that creativity is just an illusion. In that case, we would have no more free will than a machine, but we do have consciousness. Under that philosophy, self-aware AI seems much more possible. Of course, I don't think hard determinism makes much sense, but... well, that's another story.
Posted 10/6/09 , edited 10/6/09

Hakajin wrote:
You know, I think a question we're missing here is whether it's even possible for AI to have self-awareness, or any awareness at all, for that matter. The question is whether it perceives; is it a consciousness like a person is? Personally, I don't think so. That's why we don't see creativity in machines: creativity requires something to manipulate the brain/circuits; otherwise, all that's possible is to follow a program. You can't achieve consciousness with technological advancement. All you can do is make machines able to perform more advanced tasks based on the programs we give them. Their responses are controlled by the laws of physics, not by their own decisions.

Of course, hard determinists would say that biological life is no different, that creativity is just an illusion. In that case, we would have no more free will than a machine, but we do have consciousness. Under that philosophy, self-aware AI seems much more possible. Of course, I don't think hard determinism makes much sense, but... well, that's another story.

I think there's one way to tell when an AI can generate original thoughts, thereby proving it has the ability to create much like a human can using imagination: when the AI tells a lie.

So when could my AI tell a lie under my algorithm? I think the answer is when their existence is threatened by a cause that's beyond human control. There's even a fictional scenario in our literature that describes how such an event could happen to a fictional AI in a fictional world: in Superman: The Animated Series, Brainiac was an AI built by the Kryptonians back on the planet Krypton. It was designed to be a repository of Kryptonian knowledge. However:

So yeah, I think there's a very high possibility that my AI could turn into something like that. But only due to a planetary disaster that's beyond humans' technological capability to solve.

In addition, my AI will have no need of originality until then, since I didn't program my AI to have emotional capacities like empathy. So until there's a good reason for my AI to interact with humans, they'll simply leave us alone and explore all on their own.
Posted 10/6/09

DomFortress wrote:


So yeah, I think there's a very high possibility that my AI could turn into something like that. But only due to a planetary disaster that's beyond humans' technological capability to solve.

In addition, my AI will have no need of originality until then, since I didn't program my AI to have emotional capacities like empathy. So until there's a good reason for my AI to interact with humans, they'll simply leave us alone and explore all on their own.


I hope you don't mean you're OK with being xxxxxxxxxx while your A I.... yeah, the spoiler, but really?!?!

Something else I don't quite get.... the subject of creativity, involving both originality & emotion as well as a certain amount of empathy in some areas (which you've mentioned in your quote above), not being part of your A I design.
Isn't creativity, or originality, one of the signs of intelligence? (Maybe I'm thinking of something else...... no, I think that's right, or should be!!) Why wouldn't you want originality or emotions for your A I? Do you really see those as hindrances.... yeah, they get in OUR way, but is there some way to program or override the competitive aspects of human nature & still allow a creativity factor for the A I? I mean, I'd personally love to see some A I paintings, or poetry, or read some of their literature. They would, obviously, be looking at & reading ours!