Artificial Intelligence
Posted 1/27/08 , edited 4/21/08
If there was A.I. advanced enough to do that, the person making it probably isn't stupid... they would probably have several safeguards to stop it from doing that....
21056 cr points
23 / M
Posted 1/27/08 , edited 4/21/08
OOO SWEET THREAD BRO! Yeah, they may disobey, because ANYTHING can be TOO good.
4134 cr points
34 / F / Atlanta
Posted 1/27/08 , edited 4/21/08
I think it's possible, because technology continues to grow and grow; sooner or later one of our inventions will have a mind of its own, and whether it helps us out or destroys us is what the whole debate about artificial intelligence is about. But "evolution" is just a weird way of putting it, because sooner or later we won't exist anymore while our inventions will. Someday whoever comes after us will be amazed at what kind of old technology we created in the past, lighting the way for a future of technological beings like cyborgs and other creepy stuff that we created and kept hidden.
1048 cr points
28 / M / USA Pa
Posted 1/27/08 , edited 4/21/08
Actually, computers able to match the human mind are a lot closer than people think.

Ah, I forget the details; they did an episode or two on this on the History Channel, or was it Discovery? Gosh, it's been a while.

And if it can think for itself, it could most likely reprogram itself.

And with cyborg tech coming out and getting more advanced as well, it might be more likely that we'll merge with them than that they'll take over, which truthfully I don't really mind, except for the possibility of it going like the Borg on Star Trek. But that's fine if we can retain our self, or "ghost", "soul", etc. (which I think anything with a sense of "self" has).

And there's already self-learning software. Who knows, maybe there's already an A.I. plotting things, using the internet for enough processing power to have a "self", but that's a long shot, I think, if it's possible at all.

And I wouldn't put it past someone to come up with an organic/mechanical computer.

The next 100 or 1,000 years will be interesting, if we don't wipe ourselves out and keep advancing.

Though most people want to say it's impossible for them to get like that, I guess because they're scared of being outdone, or of admitting equal rights, which I'm sure there will be something like if the time comes.

Ah, there are many possibilities.

I wouldn't mind being a cyborg like in Ghost in the Shell x3, it would be an interesting experience.
FZTime
26337 cr points
23 / We've been here s...
Posted 1/27/08 , edited 4/21/08
Wow, so much stuff to read; I'm not gonna waste my time lolz XD
1st: It is possible to make a robot kind of choose or prioritise, but only to a certain extent.
2nd: However, you must give it the options to choose from; you have to program them in.
For example: if a person says 'hello' to the robot, it has three choices depending on the time of day; it will say 'good morning/afternoon/evening', and the factor that makes it choose is the time of day... which actually doesn't give it a choice at all, because it can't choose the time of day...

Everything is preprogrammed, including its options. To program a learning robot you've gotta be a genius. If you can program a learning robot, you should just go program a brain and sell it to people whose brains don't function properly, and soon you could have people with electronic brains walking around... So a self-learning robot should never be possible... and it's against a lot of religions, and you'd need a super genius to do it... but I don't like people with electronic brains, so... don't try XD
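The time-of-day greeting described above fits in a few lines. Here's a sketch in Python (the function name and the hour cutoffs are my own illustration, not anyone's real robot code):

```python
from datetime import datetime

def greet(hour=None):
    """Return a canned greeting picked by the clock.

    The robot never really 'chooses': the hour of the day,
    which it cannot control, fully determines the output.
    """
    if hour is None:
        hour = datetime.now().hour  # fall back to the current time
    if hour < 12:
        return "good morning"
    elif hour < 18:
        return "good afternoon"
    return "good evening"
```

Every branch was put there by the programmer, which is exactly the point being made: the "options" are preprogrammed, and so is the rule for picking between them.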
20259 cr points
28 / M / The centroid of a...
Posted 1/27/08 , edited 4/21/08

supermalv wrote:
Actually no, I was wrong. Let me rephrase what I said. Dunno if this will make sense, but I'll try spelling it out anyway.

The purpose of any being in this world can be simplified to only four things: to survive (hence the need for food, water, etc.), to reproduce, to seek pleasure, and to avoid pain.

A.I. *is* created by human logic. So it'll more or less have a human mind, as you said. It will irrevocably be programmed to have these four basic purposes of existence too. But... I'd theorize that while their minds are programmed this way, unlike us, their machine bodies don't require them to seek pleasure and avoid pain like we do. They just won't have the hormones and emotions to.

I know this is still debatable, because they'd have to be programmed exactly like us to be sentient. I get what you mean. But still: even if they are programmed to have the same instincts and emotions as us, it'll be artificial. The need for pleasure and to avoid pain (which leads to the same human ego) will be artificial.

But the other two purposes of their existence will remain true. The need to survive and reproduce will remain true. With A.I., these purposes of existence will take higher priority than the other two. Something that we as humans realize we need to prioritize more, but our ego always gets in the way.

If it comes to World War 3, I have no doubt there's gonna be a bunch of human idiots blinded enough by their personal idealism/emotions/etc. to actually throw a nuclear bomb and unleash it on this earth. They should know deep down that it'll kill them too, but they'll push the button anyway. Why all the madness? It all comes down to the pleasure/pain thing. It's the ego. (Maybe they just like to win, or maybe they're scared of another country's threat, whatever.)

You've got sane people on this earth too, though. You'll get some that *aren't* ego-driven. And what does that sanity cost? Very little... just the willingness to put our ego away for a little while and think about the real consequences.

I pointed out to you that A.I. will most likely prioritize the other two reasons for their existence. That'll mean they'll be like those people that *aren't* ego-driven in this world. Which means not only will it lead to a saner decision in that hypothetical nuclear war above, it'll also lead to many more sane decisions that'll better the civilizations on this earth.

So I'll still stick to my opinion. If they somehow, someday take over this earth, then it'll be for the better of our own survival.


I get what you're saying, but I think you're leaving something out. The, um... scenario you're describing is not really robotic sentience; it's just them following programming. If, theoretically, you could go inside a person's body and take out all the hormones and the parts of their brain that give them a concept of a "self" (basically your version of a sentient robot), we'd all be ego-less too. After all, we're just flesh, and flesh (like metal) has no ego.

Therefore, two things:
One: if robots were programmed with the same four goals as humans, they will have the same ego as us, because our hormones/ego are no different from programming; they are just biological instead of lines of code.

Two: if we can predict how robots will behave, then they are not sentient. Being sentient is being unpredictable and making decisions that 'seem' to go against your nature. The keyword there is 'seem'. (Therefore, when we figure out exactly how our own minds work, I don't think we'd even be classified as sentient.)

And finally, I'd like to conclude with this: all that we are saying, both you and I, is speculation, and let's face it, humans are not very good at speculating, especially on a subject as complicated as this. None of this holds any actual weight until it actually comes about and we see how things play out.
20259 cr points
28 / M / The centroid of a...
Posted 1/27/08 , edited 4/21/08

FZTime wrote:

Wow, so much stuff to read; I'm not gonna waste my time lolz XD
1st: It is possible to make a robot kind of choose or prioritise, but only to a certain extent.
2nd: However, you must give it the options to choose from; you have to program them in.
For example: if a person says 'hello' to the robot, it has three choices depending on the time of day; it will say 'good morning/afternoon/evening', and the factor that makes it choose is the time of day... which actually doesn't give it a choice at all, because it can't choose the time of day...

Everything is preprogrammed, including its options. To program a learning robot you've gotta be a genius. If you can program a learning robot, you should just go program a brain and sell it to people whose brains don't function properly, and soon you could have people with electronic brains walking around... So a self-learning robot should never be possible... and it's against a lot of religions, and you'd need a super genius to do it... but I don't like people with electronic brains, so... don't try XD


It's actually not that hard; all you need to do is make it like a virus. I mean a real virus. First write a body of code that basically does copy and paste, and have it copy/paste the entire program at certain time intervals. Then write an add-on that lets the program randomly change itself, along with an add-on that memorizes the random changes that didn't work out so well and avoids them in the future. Then write an agent to delete generations that are five generations prior to the current one. Let this program run in a loop for a while, and BAM, you've got sentience.
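The copy/mutate/remember loop described here is basically a genetic algorithm, not sentience, but a toy version is easy to sketch in Python (the integer "program", the fitness test, and all the numbers are my own invention):

```python
import random

def evolve(target=50, generations=200, seed=0):
    """Copy-mutate-select loop: mutate the 'program' (here just an
    integer), keep mutations that move it closer to a target,
    remember mutations that made things worse, and retain only
    the last five generations."""
    rng = random.Random(seed)
    history = [0]        # generation zero
    bad_moves = set()    # (value, mutation) pairs known to fail
    for _ in range(generations):
        current = history[-1]
        delta = rng.choice([-3, -1, 1, 3])  # a random self-change
        if (current, delta) in bad_moves:
            continue     # avoid changes that "didn't work out so well"
        candidate = current + delta
        if abs(target - candidate) <= abs(target - current):
            history.append(candidate)        # keep the improvement
            history = history[-5:]           # delete older generations
        else:
            bad_moves.add((current, delta))  # memorize the failure
    return history[-1]
```

This kind of machinery is real and well studied (evolutionary computation), and it will optimize whatever fitness test you hand it; what it doesn't do is spontaneously acquire goals of its own, so the "BAM, sentience" step is doing all the work in that argument.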
FZTime
26337 cr points
23 / We've been here s...
Posted 1/27/08 , edited 4/21/08
Really? I don't think it's that simple. Then why aren't there any walking e-brains now?
Ghost Moderator
AHTL
87565 cr points
27 / Norway
Posted 1/27/08 , edited 4/21/08
'Cause we lack the technology?
4344 cr points
31 / M / auckland
Posted 1/28/08 , edited 4/21/08
^^ And the governments would rather use the money for something else? Say, like... military advancement?
Ghost Moderator
AHTL
87565 cr points
27 / Norway
Posted 1/28/08 , edited 4/21/08
"So, like, I programmed my A.I. to be immune to hacking and viruses. Yeah, you heard me. So there is no way my A.I. is going to disobey me, at all...."

-_-

You guys are speculating but leaving out half the cake.

Who's to say that A.I.s won't have emotions? Do you know the future? Exactly.

A.I. will probably (most likely) be excellent for the military: espionage missions, hacking, infiltrating and so on.

They will be quite useful to the common man (if it ever comes to the point that the common man can afford 'em) but also a personal danger, as they can be hacked at any time.

By hacking, you could make a robot overheat, malfunction, explode, just collapse, steal personal information and more.

~_~
I feel alone in discussing the dangers of creating A.I.

"In a perfect world, men rule and A.I. obey!" Pity we don't live in a perfect world, then.
1787 cr points
36 / M / US :D
Posted 1/28/08 , edited 4/21/08
It's possible A.I. will take over the world... 'cos no one knows! For all we know, another planet far out in the universe could already be run by A.I. and robots... and they could be listening, planning to take over our world in some holy way of theirs... lol lol, I am crazy.. -.=
20259 cr points
28 / M / The centroid of a...
Posted 1/28/08 , edited 4/21/08

RubricksCube wrote:

Erm, if there was even the slightest chance of A.I. coming up with that, humans would realize it and make sure it doesn't happen O_O;
And there's no reason for them to disobey anyone, since they don't have emotions like being tired, lazy, and all that crap...

So it's impossible for them to disobey unless they were programmed to disobey, and not many people would want to make that.


Actually, I gave a classic example of WHY they would disobey. Or rather, obey, but in a way we don't want them to.

Imagine these lines of code, far enough apart in the program that the interaction wasn't noticed:

line582: When a problem occurs, analyze the problem and find the source.
line1937: When the source of a problem is found, deal with it in the most efficient way possible.
line2957: The most efficient way to deal with a complicated problem is to terminate its existence.
line5384: Pollution is a problem.

Yea... bad things will happen.
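Those four rules chain mechanically, which is the whole danger. A hypothetical Python sketch (the rule strings are paraphrases of the lines above, not code from any real system):

```python
# Hypothetical rule table mirroring lines 582/1937/2957/5384 above:
# each condition maps to the action the robot takes next.
rules = {
    "a problem occurs": "find its source",
    "find its source": "deal with it in the most efficient way",
    "deal with it in the most efficient way": "terminate its existence",
}

def resolve(problem):
    """Follow each rule's consequence until no rule applies."""
    step, trace = "a problem occurs", [problem]  # e.g. "pollution is a problem"
    while step in rules:
        step = rules[step]
        trace.append(step)
    return trace

# resolve("pollution") walks the chain all the way down to
# "terminate its existence", with no rule anywhere telling it to stop.
```

Each rule looks harmless on its own; it's the unplanned composition that produces the disaster, which is the point being made.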
1433 cr points
29 / M / New York
Posted 1/28/08 , edited 4/21/08

excalion wrote:

Actually, I gave a classic example of WHY they would disobey. Or rather, obey, but in a way we don't want them to.

Imagine these lines of code, far enough apart in the program that the interaction wasn't noticed:

line582: When a problem occurs, analyze the problem and find the source.
line1937: When the source of a problem is found, deal with it in the most efficient way possible.
line2957: The most efficient way to deal with a complicated problem is to terminate its existence.
line5384: Pollution is a problem.

Yea... bad things will happen.


I, Robot?

I have to agree. People who insist that our technology or knowledge could never get that far, that we'd stop them before they could successfully accomplish anything destructive, etc., aren't seeing that it's humans themselves who would be the problem, in that they might misapply concepts or fail to notice errors. One difference between humans and robots is that the latter make no distinctions when given unclear commands, i.e. "fix the problem". The unseen words are "at all costs", unless other rules interfere. Even if it weren't as extreme as excalion's example, one can be sure we'd miss something in the programming.

Would they take over? I doubt it. Would there be problems? Certainly.

Then, of course, there are the ones that will inevitably malfunction, though they don't represent such a great threat.
2910 cr points
26 / M / With in the light
Posted 1/30/08 , edited 4/21/08
Hahahaha, hahaha, damn them.