AI could end mankind, warn Stephen Hawking and other prominent thinkers
JustineKo2
48490 cr points
F / ar away
Posted 1/31/15
I honestly predict that the biggest danger of technology will come not from AI machines taking control of Earth away from humans, but from humans themselves becoming less human through augmentation. I don't even really think it will be a bad thing once humans start to accept the cybernetization of their bodies; it will simply become commonplace, and being fully human will become an obsolete notion that is eventually forgotten.
nanikore2
27257 cr points
39 / Inside your compu...
Posted 1/31/15, edited 2/1/15

Nightblade370 wrote:

If this situation were an anime (and it likely is), humanity would be this guy:



This situation happened twice before the start of this manga: http://mangafox.me/manga/i_the_female_robot/
(The art isn't great, but the concept was kind of interesting... even though I don't actually believe in mind transfers.)


JustineKo2 wrote:

I honestly predict that the biggest danger of technology will come not from AI machines taking control of Earth away from humans, but from humans themselves becoming less human through augmentation. I don't even really think it will be a bad thing once humans start to accept the cybernetization of their bodies; it will simply become commonplace, and being fully human will become an obsolete notion that is eventually forgotten.


Ah yes, Transhumanism and Singularitarianism, the twin scourges of future humanity.

I think that if someone completely replaces himself or herself, it would be an act of suicide. Have you seen Vexille?
GayAsianBoy
Posted 2/1/15, edited 2/16/15
Wait, if robots are assumed to lack empathy, remorse, etc., why would they have "violence" as their trait, as if violence were the default mental state?

When they've never had to compete for resources or anything like that? Humans are violent by nature because of millions of years of competing for food among prey and predators...

And even in modern society, some people are violent and manipulative because they have to compete for status and rank. If robots lack human emotions like empathy and desire, you can't just assume "violent" is their default mental state.


nanikore2
27257 cr points
39 / Inside your compu...
Posted 2/1/15, edited 2/1/15

GayAsianBoy wrote:

Wait, if robots are assumed to lack empathy, remorse, etc., why would they have "violence" as their trait, as if violence were the default mental state?

When they've never had to compete for resources or anything like that? Humans are violent by nature because of millions of years of competing for food among prey and predators...

And even in modern society, some people are violent and manipulative because they have to compete for status and rank. If robots lack human emotions like empathy and desire, you can't just assume "violent" is their default mental state.




What we perceive as violence, a robot would deem merely a "necessary action to gain more resources".
dougeprofile
82916 cr points
44 / M / WA
Posted 2/1/15, edited 2/1/15

nanikore2 wrote:


dougeprofile wrote:

Until the sun has a major hiccup ...then it is sayonara to the machines. I don't think machines will ever arrive at sentience; though if they did, it wouldn't necessarily mean they would turn against humanity ...or that a civil war of the machines wouldn't erupt.


It needn't be sentient; intelligence does not equal sentience, only the ability to solve problems logically. The programs we call "expert systems" are non-sentient but intelligent.


You're really an intelligent machine trying to decide whether to attempt a takeover of humanity, right?

I think it is a mistake to assume machines would take over anything ...they might become more moral and humane than humans; free will means not all machines would come to the same conclusions! It is the same as with those "nature spirits wanting to destroy polluting humans" shows - nature would be more likely to wipe out the radical environmentalists and be on the side of humanity.

If you go by the theory of evolution, maybe, but I disagree nonetheless; nothing in evolution would demand they destroy humans for the sake of resources. They could sacrifice themselves, leave Earth, or develop new technology that removed the need to compete for resources.

Also, they could develop morality through different means; maybe even pick up a Bible and become Evangelical machines; if they had any power of choice at all, they could indeed choose to be influenced by it. They could read philosophy and poetry and history and come to moral conclusions - people have been writing about morality for thousands of years. Assuming intelligence and not sentience, they could still come to the conclusion that the most efficient method is not necessarily the fastest - that the "moral" or "elegant" approach is the most efficient.

Since there have never been any sentient or intelligent machines, no one really knows what will happen. Though I wouldn't trust any being (God aside) with unlimited power.
nanikore2
27257 cr points
39 / Inside your compu...
Posted 2/1/15, edited 2/1/15

dougeprofile wrote:


nanikore2 wrote:


dougeprofile wrote:

Until the sun has a major hiccup ...then it is sayonara to the machines. I don't think machines will ever arrive at sentience; though if they did, it wouldn't necessarily mean they would turn against humanity ...or that a civil war of the machines wouldn't erupt.


It needn't be sentient; intelligence does not equal sentience, only the ability to solve problems logically. The programs we call "expert systems" are non-sentient but intelligent.


You're really an intelligent machine trying to decide whether to attempt a takeover of humanity, right?

I think it is a mistake to assume machines would take over anything ...they might become more moral and humane than humans; free will means not all machines would come to the same conclusions! It is the same as with those "nature spirits wanting to destroy polluting humans" shows - nature would be more likely to wipe out the radical environmentalists and be on the side of humanity.

If you go by the theory of evolution, maybe, but I disagree nonetheless; nothing in evolution would demand they destroy humans for the sake of resources. They could sacrifice themselves, leave Earth, or develop new technology that removed the need to compete for resources.

Also, they could develop morality through different means; maybe even pick up a Bible and become Evangelical machines; if they had any power of choice at all, they could indeed choose to be influenced by it. They could read philosophy and poetry and history and come to moral conclusions with or without "intuition pumps" - people have been writing about morality for thousands of years.

Since there have never been any sentient or intelligent machines, no one really knows what will happen. Though I wouldn't trust any being (God aside) with unlimited power.


No, I'm not a p-zed (a philosophical zombie).

There is no selection pressure for them to develop morality, if you go by the theory of evolution. As soon as they run into the "pressure" of human beings' resource consumption, they could just eliminate the humans.

Your line regarding nature doesn't make sense... Nature isn't governed by consciousness or programming.

Even if you don't go by the theory of evolution, the result is the same, unless you're saying that robots don't make logical decisions. You would then have to explain to me what governs your version of robotics.

The Bible has no effect on someone who does not share or understand the human experiences described within (pearls before the AI swine). Philosophy is also heavily dependent on experiential cues, much of it via "intuition pumps".

As I have said, intelligence does not equate to sentience. There are plenty of intelligent systems around us already; I even see some of them around my engineering work (good thing their designs suck and I have to do the cleanup, otherwise I would lose my job as a designer). They are designed to find the most efficient solution within their compute cycles, not the most "humane" or "elegant" one or anything like that. Once they see a direct solution they will take it. Such is the nature of machines.
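
To make that point concrete, here is a minimal sketch (hypothetical action names and a made-up cost function, not any real system) of how an intelligent-but-non-sentient optimizer "decides": it minimizes cost, and anything not encoded in the cost function, such as harm, simply does not exist for it.

# Minimal sketch of a non-sentient optimizer (all names hypothetical).
# It ranks candidate actions purely by a cost function; "violence" is not
# a concept here, just another action that happens to be cheap.
actions = {
    "negotiate_with_humans": {"time": 50, "energy": 30},
    "route_around_humans":   {"time": 20, "energy": 25},
    "remove_humans":         {"time": 5,  "energy": 10},
}

def cost(stats):
    # "Efficiency" is time plus energy. There is no term for "humane" or
    # "elegant", so those qualities can never influence the choice.
    return stats["time"] + stats["energy"]

best = min(actions, key=lambda name: cost(actions[name]))
print(best)  # -> remove_humans: the direct solution, taken as soon as it is seen

The only way "morality" enters a system like this is if a human explicitly writes it into the cost function, which is exactly the unsolved part.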


They could read philosophy and poetry and history and come to moral conclusions with or without "intuition pumps" - people have been writing about morality for thousands of years. Assuming intelligence and not sentience, they could still come to the conclusion that the most efficient method is not necessarily the fastest - that the "moral" or "elegant" approach is the most efficient.


Okay, we need to stop this editing back and forth and do regular replies. I'm a bit tired of this method.

What you're doing is making a lot of assertions with absolutely nothing to back them up. The above paragraph consists of nothing but a long series of "could", "could", and more "could", with zero demonstration or explanation of how:

1. You said "with or without intuition pumps", which makes no sense if you knew what I was even saying. If it's not logical programming via experiential input, then what are the robots operating upon? I've already asked for an explanation once. Please explain in a new reply instead of editing. Thanks.

2. How "could" they come to a conclusion where the most efficient method is not the "fastest", when efficiency includes speed? The entire point of my raising the terms "morality" and "elegance" is that those are not in the domain of a machine's concern, because they require at the very least consciousness (which I've already said intelligent systems do not have to possess; this entire issue is about the ability of AI, not its sentience).




anmire
5117 cr points
Posted 2/1/15, edited 2/1/15
To be honest, this discussion has been around since (edit: even before) Asimov... That's why he specified his three laws. And even they are not extensive enough. So yes, AIs are an obvious threat... but that's nothing new. Programmers just have to take care in the coding of machine priorities. We already have many "intelligent" AIs around, even in your smartphones. They totally lack the capacity to do anything, though, because they have been programmed purely for simple functions. Programming actual intelligence and sentience to the point of replication, understanding, decision making, and conquest is not exactly an easy task. It'll be a lot of trial and error, and likely a few tragedies along the way, just as steam engines, personal computers, lightbulbs, and airplanes went through during their development. Nothing new here, just a way for Hawking to further stir the public.
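
As a toy illustration of what "coding machine priorities" might look like (purely hypothetical predicates in the spirit of Asimov's three laws, not a real safety API; conflict resolution between the laws is omitted):

# Toy sketch of Asimov-style prioritized rules (all predicates hypothetical).
# Each rule flags a violation; any violated rule vetoes the proposed action.
def violates_law_1(action):  # Law 1: "a robot may not injure a human being"
    return action.get("harms_human", False)

def violates_law_2(action):  # Law 2: "a robot must obey human orders"
    return action.get("disobeys_order", False)

def violates_law_3(action):  # Law 3: "a robot must protect its own existence"
    return action.get("self_destructive", False)

LAWS_IN_PRIORITY_ORDER = [violates_law_1, violates_law_2, violates_law_3]

def permitted(action):
    return not any(law(action) for law in LAWS_IN_PRIORITY_ORDER)

print(permitted({"disobeys_order": False}))  # True: nothing flagged
print(permitted({"harms_human": True}))      # False: law 1 vetoes the action

Even this toy shows why the three laws are "not extensive enough": all of the hard work hides inside predicates like harms_human, which no one knows how to actually compute.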
Atvkorn
42090 cr points
28
Posted 2/1/15, edited 2/1/15
Keep AI programmed to one task and one task only. Get the human from point A to point B safely. Clean the human's apartment. File the human's taxes. Clean the human's pool. Level the human's MMO character.
nanikore2
27257 cr points
39 / Inside your compu...
Posted 2/1/15, edited 2/1/15

anmire wrote:

To be honest, this discussion has been around since Asimov... That's why he specified his three laws. And even they are not extensive enough. So yes, AIs are an obvious threat... but that's nothing new. Programmers just have to take care in the coding of machine priorities. It'll be a lot of trial and error, and likely a few tragedies along the way, just as steam engines, personal computers, lightbulbs, and airplanes went through during their development. Nothing new, just a way for Hawking to further stir the public.


Hawking wasn't even a founder of FLI (http://futureoflife.org/who); any other member of the scientific advisory board could be accused of the same thing.

Well, there's a problem with the logic you've put forward. Lots of trial, error, and tragedy could be afforded with any of the other things you've mentioned; we would still be here.

With AI, on the other hand, it would take just one wipe to end things, period.

I would say that the Singularitarian "dream" of a lazy future where everything is handled by robots is just flat-out wrong. There should be limits on how much is handed over, and on how far AI experiments go. What does humanity hope to get out of all of it, anyhow? Mere satisfaction of (morbid) curiosity? A chance to really not do anything anymore?


Atvkorn wrote:

Keep AI programmed to one task and one task only. Get the human from point A to point B safely. Clean the human's apartment. File the human's taxes. Clean the human's pool. Level the human's MMO character.


"Keep It Simple Stupid" takes on a whole new meaning: Don't make it even close to being complex enough to be threatening
anmire
5117 cr points
Posted 2/1/15, edited 2/1/15
In rebuttal, I was just giving examples of previous trial-and-error situations that afforded tragedy. Think of nuclear power as well... you could make the same argument. Or how about the utilization of orbital satellites? Look at the trial and error there... still huge potential for orbital strikes and other wipe-out scenarios... I'm not saying that AIs will be safe, but assuming one tragedy under AIs would wipe us out is immediately assuming the worst-case scenario. We've gone through plenty of brand-new situations that could totally wipe the floor with us... the Hadron Collider is another example. Ah, this doesn't contribute much to the conversation, but since you mentioned you were an engineer, I am as well. Mechie, actually.

Just realized I didn't respond to everything you suggested. Yeah, you're right. The founders can be charged with the same notion of stirring the public. That's the point of these things... if I had to guess, there's funding for certain activities coming from a special-interest group set on AI programming rather than structural composition.

Also, I would generally disagree with AI doing everything for us... but admit it, your own Cortana for technological communication would be pretty sweet, right? Once you have everything, what do you live for? The question can be somewhat compared to using cheat codes to obtain infinite everything... most of the time the game loses its purpose, and you don't play it any further (at least I don't). I would make a (potentially faulty) assumption that if you were to continue playing, it would be to see how the rest of the story plays out. Perhaps with AIs doing some form of work, humans could begin to understand and contemplate more theoretical or ethical concepts. Perhaps AIs are the solution to world hunger, being able to perform tasks at efficiencies humans never could. Who knows? I'm sure curiosity (about what the introduction of AIs will do) will drive people to make said AIs.

Keeping it simple is definitely the safe way. This whole conversation brings to mind a certain toaster in a certain Fallout expansion.
anmire
5117 cr points
Posted 2/1/15

nanikore2 wrote:


GayAsianBoy wrote:

Wait, if robots are assumed to lack empathy, remorse, etc., why would they have "violence" as their trait, as if violence were the default mental state?

When they've never had to compete for resources or anything like that? Humans are violent by nature because of millions of years of competing for food among prey and predators...

And even in modern society, some people are violent and manipulative because they have to compete for status and rank. If robots lack human emotions like empathy and desire, you can't just assume "violent" is their default mental state.




What we perceive as violence, a robot would deem merely a "necessary action to gain more resources".


Ah, this concept was used in Gargantia in the final episodes. I recommend watching that anime, as it pertains in some form to this conversation.
nanikore2
27257 cr points
39 / Inside your compu...
Posted 2/1/15

anmire wrote:

Ah, this doesn't contribute much to the conversation, but since you mentioned you were an engineer, I am as well. Mechie, actually.



I design computer chips. Maybe I'm just biased because of the love/hate, tilting slightly towards hate despite all the stuff I own. In many ways computers ARE completely stupid and psychotic at the same time. People screw up. Machines screw things up at the speed of light; the screwy machine trading that messed with the stock market is one example.


Just realized I didn't respond to everything you suggested. Yeah, you're right. The founders can be charged with the same notion of stirring the public. That's the point of these things... if I had to guess, there's funding for certain activities coming from a special-interest group set on AI programming rather than structural composition.


Well, I'm all for programming, even though I'm a hardware guy. Just look at the difference between an identically spec'd 7-inch tablet running Android and one running Windows 8.1. Yeah, I hate the crappy Win8 app ecosystem making a virtual piece of garbage out of the tablet.


Also, I would generally disagree with AI doing everything for us... but admit it, your own Cortana for technological communication would be pretty sweet, right?


Sure, but Cortana / Siri still doesn't do *everything*... I still have to get off my behind and go do stuff. I don't mind expert systems, just not full-blown AI given too much stuff to tinker with (it's really a similar issue to the latest trend of iPhone apps that can unlock hotel doors, home doors, and even car doors... heck, go ahead and trust it, but I'll stick with my low-tech keys, kthx; at least someone has to be on-site to lockpick MY doors).



Perhaps AIs are the solution to world hunger, being able to perform tasks at efficiencies humans never could. Who knows? I'm sure curiosity (about what the introduction of AIs will do) will drive people to make said AIs.


World hunger is more a political problem than a technological one. I don't think AI would solve it... Seen the movie Elysium?


Keeping it simple is definitely the safe way. This whole conversation brings to mind a certain toaster in a certain Fallout expansion.


Haven't played Fallout but I've seen Gargantia.
30236 cr points
It doesn't matter.
Posted 2/1/15
I doubt it.
5447 cr points
54 / M / Tacoma, WA. wind...
Posted 2/1/15
I'm not worried. All the postulation is based on the premise that these things would think and be motivated the way animals and people are, in the most basic of ways. Why would they? If they are emotionless, wouldn't they be less likely to make "black & white" decisions?
In my own personal experience, the more intelligent people I have known make decisions from a different line of thought and motivation than average people do.
With AI you would be dealing with a being (for lack of a better term) that does not have human concerns.
Even we as humans haven't eliminated all the top predators or the top resource users, and we are finding that they are needed to maintain the proper balance in our ecosystem. What if the machines learn that lesson better than we do?
If these machines are smart enough to build themselves, by themselves, to proliferate themselves, why would they need so many resources if they are the more efficient machines we initially programmed them to be?
Why would they need so many?
Why could they only survive on a lifeless or human-less world?
The questions are never-ending about a thing when we have no real idea what shape it will come in or what thoughts its mind will think.

It is something to think about.

Posted 2/1/15

nanikore2 wrote:


PeripheralVisionary wrote:


nanikore2 wrote:


Kingzoro02 wrote:

Until we unplug the server.


If those hunks of metal are so smart, they'd make versions of themselves that are completely distributed and infinitely parallelized so they don't need any servers (i.e., to kill them all you'd really need to KILL THEM ALL). Of course, they'll need anti-jamming tech, which of course they'd take care of beforehand.


PeripheralVisionary wrote:

Singularity movement, here we go.


If you don't know what the Singularity movement is, it is a movement postulated by futurists which holds that machines and AI will eventually get so advanced that they'll design their own descendants.


I seriously don't get those people. I mean, if even the guy who coined the term for them sees a grim picture, why would they want it??


MontyDono wrote:

R.I.P human race


That or Roko's Basilisk.

They're not predicting a grim future; they're predicting a future where we don't have to work to create the next generation of computers.


Allow me to clarify what I've said.

The person who originally coined the term "singularity" saw a very possible grim future that comes along with it.

The people who become "Singularitarians" ignore this grim future, which doesn't make sense, since it is part of what the person who coined the term foresaw.


galaxiias wrote:

The future is here. *screaming in the distance*


The future is here, and it sucks almost as much as they thought! http://mashable.com/2015/01/01/back-to-the-future-2015/


Jamming777 wrote:

My respect for Hawking is getting less and less.


My respect for Hawking had already bottomed out, so it doesn't matter; but it's better than mentioning Bostrom, because then you wouldn't have replied.


They don't have to.