AI could end mankind, warns Stephen Hawking and other prominent thinkers
Posted 1/31/15

nanikore2 wrote:


Kingzoro02 wrote:

Until we unplug the server.


If those hunks of metal are so smart, they'd make versions of themselves that are completely distributed and infinitely parallelized so they don't need any servers (i.e. to kill them all you really need to KILL THEM ALL). Of course, they'll need anti-jamming tech, which of course they'd take care of beforehand.


PeripheralVisionary wrote:

Singularity movement, here we go.


If you don't know what the singularity movement is, it's a movement postulated by futurists which holds that eventually machines and AI will get so advanced that they'll design their own descendants.


I seriously don't get those people. I mean, if even the guy who coined the term sees a grim picture, why would they want it??


MontyDono wrote:

R.I.P human race


That or Roko's Basilisk.

They're not predicting a grim future, they're predicting a future where we don't have to work to create the next generation of computers.
Posted 1/31/15

dougeprofile wrote:

Until the sun has a major hiccup... then it's sayonara to the machines. I don't think machines will ever arrive at sentience; though if they did, it wouldn't necessarily mean they would turn against humanity... or that a civil war among the machines wouldn't erupt.


It needn't be sentient; intelligence does not equal sentience, it's just the ability to solve problems. The programs we call "expert systems" are non-sentient but intelligent.
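
For instance, a bare-bones rule engine already counts as intelligent in that narrow sense. Here's a minimal Python sketch (the facts and rules are invented for illustration; real expert systems carry far larger rule bases):

# A toy forward-chaining "expert system": it solves a diagnostic problem
# by mechanically applying rules, with zero sentience involved.
facts = {"engine_wont_start", "battery_light_on"}

# Each rule: if all its conditions are known facts, add the conclusion.
rules = [
    ({"engine_wont_start", "battery_light_on"}, "battery_dead"),
    ({"battery_dead"}, "recommend_jump_start"),
]

# Keep applying rules until nothing new can be concluded.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes "recommend_jump_start": problem solved, nobody home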
Posted 1/31/15
The future is here. *screaming in the distance*
Posted 1/31/15 , edited 1/31/15

PeripheralVisionary wrote:

They're not predicting a grim future, they're predicting a future where we don't have to work to create the next generation of computers.


Allow me to clarify what I've said.

The person who originally coined the phrase "singularity" saw a very possible grim future coming along with it.

The people who become "singularitarians" ignore this grim future, which doesn't make sense, since it's part of what the person who invented the phrase foresaw.


galaxiias wrote:

The future is here. *screaming in the distance*


The future is here, and it sucks almost as much as they thought! http://mashable.com/2015/01/01/back-to-the-future-2015/


Jamming777 wrote:

My respect for Hawking is getting less and less.


My respect for Hawking had already bottomed out, so it doesn't matter. Still, better that the article names Hawking than Bostrom, because if it had mentioned Bostrom you wouldn't have replied at all.
Posted 1/31/15
That's why the first AI capable of recursive self-improvement needs to be a "friendly AI." All the more reason to focus on it, because if it is at all possible, then it is only a matter of time before such an AI is created.
Posted 1/31/15

kardonius wrote:

That's why the first AI capable of recursive self-improvement needs to be a "friendly AI." All the more reason to focus on it, because if it is at all possible, then it is only a matter of time before such an AI is created.


The main problem is probably putting in capabilities that human beings don't need from robots. Why even try to make them more than just expert systems, or just autopilots?

Even if makers are required to insert some special Asimov-esque limitations into robots, accidents are bound to happen.

"What happened? 350 people were killed by this thing!"

"Somehow, the limitation was never inserted into the unit. It was a factory error."

(Actually this could be about aircraft autopilots that exist today. Some Airbus models have autopilots that override human input; see the toy sketch below.)
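
To make that concrete: "overriding human input" mostly means clamping the pilot's commands to a flight envelope. A toy Python sketch (the limit values are illustrative stand-ins, not actual Airbus figures):

# Toy flight-envelope protection: the pilot commands an attitude and the
# system silently limits it to what it considers safe.
MAX_BANK_DEG = 67.0        # hypothetical bank-angle limit
MAX_PITCH_UP_DEG = 30.0    # hypothetical pitch-up limit
MAX_PITCH_DOWN_DEG = -15.0 # hypothetical pitch-down limit

def protected_command(pilot_bank, pilot_pitch):
    """Return the attitude command actually sent to the controls."""
    bank = max(-MAX_BANK_DEG, min(MAX_BANK_DEG, pilot_bank))
    pitch = max(MAX_PITCH_DOWN_DEG, min(MAX_PITCH_UP_DEG, pilot_pitch))
    return bank, pitch

# The pilot asks for a 90-degree bank; the machine quietly refuses.
print(protected_command(90.0, 10.0))  # -> (67.0, 10.0)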
Posted 1/31/15 , edited 1/31/15

sena3927 wrote:

Ha! Machines that think can never be programmed or designed. They would have to evolve, like us, perhaps to the point where they even develop self-awareness. Then they wouldn't be artificial intelligence any more, but true artificial life. They would _understand_, have feelings, and develop morality. Like all thinking beings do, totally naturally.

Artificial intelligence will never, ever be a match for humans. The "singularity" idea is ridiculous, the stuff bad sci-fi is made of. But artificial life will join us and gain personhood, eventually.


You're stringing together words and expecting reality to agree with you. Either AI will develop morality or it won't; if it does, either that morality will match human morality or it won't, and whichever it is, no argument we make will change it. My bet is on "will develop something it thinks of as morality, but which will be so alien to ours that we wouldn't recognize it" unless we intentionally and successfully program it to develop our morality.

What's worse: even if an AI develops a morality "similar to humans", which humans is it similar to? If it lands on the morality of humans who consider it perfectly justified to round up and kill everyone of a different religion (already happened), or, on a smaller scale, of the many humans alive now who consider it justified to taunt, bully, beat up, or lynch neighbors with a different skin color, then we're screwed. Humans even now kill each other every day over differences like skin color, gender, religion, or country of birth, and the difference between an AI and a human is greater than any of those. If the AI thinks it's OK to kill "the other" and take their stuff, then we're "the other".
Posted 1/31/15 , edited 2/1/15
Per my recollection, Bill Gates and Elon Musk (CEO of SpaceX) have also warned of the dangers of intelligent computers. Bill Gates, if anyone, should know first-hand, since he brought us that damned talking paperclip.
Posted 1/31/15 , edited 1/31/15

nanikore2 wrote:

Even if makers are required to insert some special Asimov-esque limitations into robots, accidents are bound to happen.

(Actually this could be about aircraft autopilots that exist today. Some Airbus models have autopilots that override human input.)



Could be, but the idea is that so long as the first AI capable of recursive self-improvement is friendly, it would be capable of preventing any hostile AI from emerging.
If it's possible to make, someone will eventually make it, much like nuclear weapons; and likewise, it's only the presence of nuclear weapons in other people's hands that's preventing their use.
Posted 1/31/15 , edited 1/31/15

iriomote wrote:

Per my recollection, Bill Gates and Elon Musk (CEO of SpaceX) have also warned of the dangers of intelligent computers. If anyone, Bill Gates should know first-hand, since he brought us that damned talking paperclip.


I thumbed-up your reply after thinking "lol I would thumb up this reply", but it didn't give you any points. Not sure what it does.


robfjohnson wrote:

My bet is on "will develop something it thinks of as morality, but which will be so alien to ours that we wouldn't recognize it" unless we intentionally and successfully program it to develop our morality.


Yep. It might turn out to be similar to psychopathic humans.

Actually, I don't think there's any real way for an AI to "evolve" a human-approved morality unless this kind of thing is done (rough sketch in code below):

1. Permute lots and lots of advanced, "almost-full" AIs, and I mean _lots_.
2. Kill off / erase every AI that doesn't exhibit anything resembling a human-approved construct of morality.
3. Rinse and repeat until done, if it's ever done (except the result might not be an AI anymore but an expert system for human morality......).

...Then again, that's not really "evolution" anyway.
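
Here's roughly what that loop looks like in code (a minimal Python sketch; the population size, the mutation step, and above all the morality_score function are invented stand-ins, since scoring real moral behavior is the unsolved part):

import random

def morality_score(candidate):
    # Stand-in fitness function: a "candidate AI" here is just a number
    # in [0, 1], and its score is itself. Judging real moral behavior is
    # the part nobody knows how to write.
    return candidate

def mutate(parent):
    # Copy a survivor with a small random tweak, clamped to [0, 1].
    return min(1.0, max(0.0, parent + random.gauss(0, 0.05)))

POP_SIZE = 200
GENERATIONS = 50

# Step 1: permute lots and lots of candidate "AIs".
population = [random.random() for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Step 2: kill off everything that scores poorly (keep the top 10%).
    population.sort(key=morality_score, reverse=True)
    survivors = population[:POP_SIZE // 10]
    # Step 3: rinse and repeat; refill the pool by mutating survivors.
    population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

print(round(max(population), 3))  # creeps toward 1.0, the "approved" end

The selection loop is the trivial part; the whole problem hides inside morality_score.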
Posted 1/31/15
If this situation were an anime (and it likely is), humanity would be this guy:

Posted 1/31/15
So basically what you're saying is, robots are going to take over the world...?
Posted 1/31/15
Yeah, basically, humans have to accidentally allow an AI to have access to an infrastructure that doesn't require humans and can successfully defend itself against humans. It will be tough, because humans usually only make stuff for themselves, so it would take quite the series of design flaws for them to make enough stuff that's useful to an AI.
Posted 1/31/15
It's a scary thought.
Posted 1/31/15 , edited 1/31/15

kardonius wrote:

Could be, but the idea is that so long as the first AI capable of recursive self-improvement is friendly, it would be capable of preventing any hostile AI from emerging.
If it's possible to make, someone will eventually make it, much like nuclear weapons; and likewise, it's only the presence of nuclear weapons in other people's hands that's preventing their use.


I guess some humans will have to bet their lives on Asimov's Laws working and go ahead and build full AIs anyway, in anticipation of other humans who couldn't care less if all hell breaks loose.

The future really sucks.


MissMagicNoodles wrote:

So basically what you're saying is, robots are going to take over the world...?


Robots would kill everyone on Earth if people are dumb enough to make them too capable (or allow them to become too capable), including the ability to design and build better versions all by themselves.


Kavalion wrote:

Yeah, basically, humans have to accidentally allow an AI to have access to an infrastructure that doesn't require humans and can successfully defend itself against humans. It will be tough, because humans usually only make stuff for themselves, so it would take quite the series of design flaws for them to make enough stuff that's useful to an AI.


The Lazy Mincemeat Human Story of the Singularity: human beings become so lazy that they let robots do everything, and then those same robots kill the same lazy humans, who were too lazy to even think about what would happen once they handed over ALL the work.