AI could end mankind, warns Stephen Hawking and other prominent thinkers
Posted 2/1/15 , edited 2/1/15
They just want to be the first to say it. You could also say he's making fun of the majority of humans, the same way Einstein was quoted: "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." He has a wicked sense of humor, as they say.


I just quickly read up on the branches of AI and how the research is done:

A long way to go before AI can take matters into its own hands.
Posted 2/1/15 , edited 2/1/15

nanikore2

Okay. We need to stop this editing back and forth and do regular replies. I'm a bit tired of this method.

What you're doing is making a lot of assertions with absolutely nothing to back them up. The above paragraph consists of nothing but a long series of "could" "could" and more "could" with zero demonstration or explanation of how:

1. You said "with or without intuition pump", which made no sense if you knew what I was even saying. If it's not logical programming via experiential input, then what are the robots operating on? I've already asked for an explanation once. Please explain in a new reply instead of editing. Thanks.

2. How "could" they come to a conclusion where the most efficient method is not the "fastest", when efficiency includes speed? The entire point of my raising the terms "morality" and "elegance" is that those are not in the domain of a machine's concern, because they require at the very least consciousness (which I've already said intelligent systems do not have to possess; this entire issue surrounds the ability of AI, not its sentience).


What do YOU have to back up your assertions? Hey, I'm just speculating; it's not like there is a lot of material on these machines... except in sci-fi. I am not so sure your distinction between intelligent and sentient is valid or accurate (in the real world). Also, I disagree that consciousness would be required to operate in a moral framework any more than intelligence; if it can do one, it can do the other. A merely intelligent machine could not "choose" to destroy the world; only a "sentient" one could, and I don't believe a sentient machine is possible. Rather than attempting to destroy humans for resources, it could decide it is more efficient to fund technological innovation.

You are right about the "intuition pump"; I just looked it up, and I still disagree with you. These machines may not necessarily operate on logic; on what basis do you assume they would? How do you know what they would view as the most efficient method? If you send out 10,000 letters to the wrong people and I send out 5 letters to the right people, I am the more efficient of the two. Speed may or may not be part of efficiency.
Posted 2/1/15 , edited 2/1/15
Hail Ultron!

Posted 2/1/15
Too bad you just wasted your effort discussing all that. No one will give a fuck significantly enough to stop the advancement of tech.
If people did, CS- and ECE-heavy colleges like the renowned MIT or CMU wouldn't exist. ECE major here, and going all the way.
Posted 2/1/15 , edited 2/1/15
"Prominent thinkers"



And yes, AI could end mankind. AiYumega, that is.


Posted 2/1/15

sena3927 wrote:

Ha! Machines that think can never be programmed or designed. They would have to evolve, like us, perhaps to the point where they even develop self-awareness. Then they wouldn't be artificial intelligence any more, but true artificial life. They would _understand_, have feelings, and develop morality. Like all thinking beings do, totally naturally.

Artificial intelligence will never, ever be a match for humans. The "singularity" idea is ridiculous, the stuff bad sci-fi is made of. But artificial life will join us and gain personhood, eventually.

Check out evolvable hardware. Nobody knows the underlying reason it works, but it works... Take away "unused" parts, and it doesn't. Ask it to learn how to walk, and it does so in the most unexpected ways. Basically, it works by testing various configurations and algorithms till it finds ones that work... and then improves upon them... or, in the case of hardware, itself...

fascinating, fascinating stuff
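The "test configurations, keep what works, improve" loop described above can be sketched as a tiny (1+1) evolutionary strategy. This is a toy illustration, not real evolvable hardware; the bitstring "circuit" and its fitness function are invented for the example:

```python
import random

def fitness(bits):
    # Hypothetical stand-in for "does the circuit work": count of 1-bits.
    return sum(bits)

def evolve(n_bits=16, generations=200, seed=0):
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(generations):
        # Flip one random bit; keep the mutant only if it is no worse.
        child = parent[:]
        i = rng.randrange(n_bits)
        child[i] ^= 1
        if fitness(child) >= fitness(parent):
            parent = child
    return parent

best = evolve()
```

Real evolvable-hardware experiments do essentially this against a physical FPGA, scoring each mutant by measuring the actual circuit rather than a made-up fitness function.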


nanikore2 wrote:

It needn't be sentient; Intelligence does not equal sentience, but the ability to solve problems logically. The programs we call "expert systems" are non-sentient but intelligent.


Wrong... kinda.

Problem solving, and doing it better than humans, is easy for machines. Ridiculously easy. But having motivation, or understanding the nuance of language and intonation, or simply moving without instruction... those are hard as fuck for a machine. Unless you go the evolvable hardware route... and then that has its own can of worms, as it really makes one question a lot about the whole mind/body problem, and those who believe "the mind is nothing more than a program" or "the mind is nothing more than the body" will not be getting the answers they suspect. (Emergence, functionalism: there are a few of the more non-reductionist theories that have a lot more sway, in my opinion, towards all of this.)
Posted 2/1/15

nanikore2 wrote:
I design computer chips. Maybe I'm just biased because of the love/hate, slightly tilting towards hate despite all the stuff I own. In many ways computers ARE completely stupid and psychotic at the same time. People screw up. Machines screw things up at the speed of light, the screwy machine trading messing with the stock market being an example.


I'm glad I'm not the only one that truly despises how much we have plugged computers into our daily lives...

Though I still assert that if we go the evolvable hardware route, the chances are lower that it's going to turn out to be our destroyer...

It's when we try to go the route of actually programming bottom-up, truly artificial "intelligence" that we're going to fuck it up, ascribe it too much responsibility or control, and only later realize it was a shell without a real ghost.
Posted 2/1/15
We make computers as smart as we want them to be, and we will never make them smart enough to overpower humanity. I doubt computers will ever become sentient, or that they'll overpower or outsmart humans.
Posted 2/1/15
I think human level artificial intelligence will be akin to the development of nuclear power. We haven't destroyed ourselves yet, but this new technology will make it easier to do so.
Posted 2/1/15 , edited 2/1/15

nanikore2 wrote:


JustineKo2 wrote:

I honestly predict that the biggest danger of technology will not be in the form of AI machines taking control of Earth away from humans but more in the form of humans themselves becoming less human through the use of augmentation. I don't even really think it will be a bad thing once humans start to accept the cybernetization of their bodies; it will just become something that's commonplace and being fully human will just become some obsolete thing that's forgotten about.


Ah yes, Transhumanism and Singularitarianism, the dual scourge of future humanity.

I think if someone completely replaces him or herself it would be an act of suicide. Have you seen Vexille?
Yeah, it was pretty interesting and exciting, although it ended up being just another movie.
In contrast, I think the conversion will happen gradually; more like a frog in a pot of water kind of thing. Perhaps this will end in a war between the humans who still consider themselves human and a new race that has no humanity left but feels it is a superior being, and Earth has no room for both.
Posted 2/1/15 , edited 2/2/15

PeripheralVisionary wrote:


nanikore2 wrote:

Allow me to clarify what I've said.

The person who originally coined the term "singularity" saw a very possible grim future that comes along with it.

The people who become "singularitarians" ignore this grim future, which doesn't make sense, since it has to do with what the person who coined the term foresaw.



They don't have to.


Of course they don't "have to". It's just really stupid for them to ignore it.


serifsansserif wrote:


nanikore2 wrote:
I design computer chips. Maybe I'm just biased because of the love/hate, slightly tilting towards hate despite all the stuff I own. In many ways computers ARE completely stupid and psychotic at the same time. People screw up. Machines screw things up at the speed of light, the screwy machine trading messing with the stock market being an example.


I'm glad I'm not the only one that truly despises how much we have plugged computers into our daily lives...

Though I still assert that if we go the evolvable hardware route, the chances are lower that it's going to turn out to be our destroyer...

It's when we try to go the route of actually programming bottom-up, truly artificial "intelligence" that we're going to fuck it up, ascribe it too much responsibility or control, and only later realize it was a shell without a real ghost.


This faux-"evolution" is another method of design. Explanation below.

From what I've seen (admittedly some of it from PBS), evolving AI ends up being bottom-up AI. There is no magic to the process: the scientists let the emulations spit out lots of permutations, and then manually input conditions to weed them out at each iteration. (In the case of hardware, the man-made "selection" is done by manually changing physical parts.) It's not really "evolution", just another method of design. In my opinion, the effort that goes into this is like trying to "make" an animal or an insect, at which point there's no real practical use for these things until you attempt to restrain and train them. That training is more artificial selection of their behavior. None of them "evolve" to be "smarter"... more like all of them get molded more and more into things capable of performing their expected behaviors (e.g. "swim", even if using a really weird tail, or "see", even when using a novel sort of "eye" that none of the scientists expected).



Problem solving, and doing it better than humans, is easy for machines. Ridiculously easy. But having motivation, or understanding the nuance of language and intonation, or simply moving without instruction... those are hard as fuck for a machine. Unless you go the evolvable hardware route... and then that has its own can of worms, as it really makes one question a lot about the whole mind/body problem, and those who believe "the mind is nothing more than a program" or "the mind is nothing more than the body" will not be getting the answers they suspect. (Emergence, functionalism: there are a few of the more non-reductionist theories that have a lot more sway, in my opinion, towards all of this.)


A machine doesn't need "motivation" to do anything. "Motivation" is something people need. You might be over-framing the whole issue of the danger of self-sustaining AI. They don't need "motivation". All that needs to happen is a resource shortage, and that has already been covered in my original post:

- A resource shortage occurs
- The robots see large resource usage by certain "items" (which just happen to be humans)
- The robots find the "most efficient way" to "greatly reduce or eliminate resource usage of items"

There's nothing that deviates from their regular "routine". No such thing as "motivation" involved.
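The three steps above are what's usually called a misspecified objective: an optimizer told only to minimize resource usage, with no term for harm, "correctly" selects the catastrophic option. A toy sketch with invented action names and numbers:

```python
# Each action: (name, resource_usage_after, harm). The harm column exists,
# but the naive objective below never looks at it -- that's the whole problem.
actions = [
    ("do nothing",          100, 0),
    ("ration supplies",      60, 1),
    ("eliminate the items",   0, 1000),  # the catastrophic option
]

def naive_choice(actions):
    # Objective: minimize resource usage, nothing else.
    return min(actions, key=lambda a: a[1])[0]

def constrained_choice(actions, max_harm=10):
    # Same objective, but actions above the harm threshold are off the table.
    allowed = [a for a in actions if a[2] <= max_harm]
    return min(allowed, key=lambda a: a[1])[0]
```

`naive_choice` picks "eliminate the items" while `constrained_choice` picks "ration supplies": no motivation anywhere, just an objective with or without a constraint.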


AiYumega wrote:

"Prominent thinkers"



They're sure as heck more accomplished in the realm of academia than either of us is ever going to be. I assume you beg to differ via the quotation marks?


GodGreatestEver wrote:

Too bad you just wasted your effort discussing all that. No one will give a fuck significantly enough to stop the advancement of tech.
If people did, CS- and ECE-heavy colleges like the renowned MIT or CMU wouldn't exist. ECE major here, and going all the way.


Uh, CR forums ARE for wasting time discussing stuff. Have you seen topics such as "how many ants are in your yard", "evil laughs", "blondes vs. brunettes", "favorite cereal", and "people who dye their dogs"? Oh don't worry about my time, worry about your own.

Besides, the professors wanted to change developmental priorities, not have all of their students drop out. "Change priorities in development" doesn't equal "nobody should develop anymore", and there are plenty of students / PhD candidates who are members of FLI anyway: http://futureoflife.org/who Ah, what does it matter; I'm not TELLING anyone to participate in ANYTHING, if you didn't notice.


severticas wrote:

They just want to be the first to say it. You could also say he's making fun of the majority of humans, the same way Einstein was quoted: "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." He has a wicked sense of humor, as they say.


Nah, unfortunately he is being as serious as all the others in FLI, because otherwise feeding Roko's Basilisk would be far more fun than being on some committee.
Posted 2/1/15 , edited 2/1/15
It could, but then again, if it's a conscious intelligence, there is not much that's off the table.

As for evolvable hardware, it looks like a natural application for procedural generation and various machine learning algorithms. The things genetic algorithms alone can do are pretty impressive; they excel at optimizing.

For example, the Wikipedia page on them has an image of an antenna that NASA used a genetic algorithm to design for the best radiation pattern.
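A minimal genetic algorithm in the spirit of that antenna example: selection, crossover, mutation. The "radiation pattern" here is a made-up scalar score, purely for illustration; a real application would plug in a physics simulation instead:

```python
import random

def score(genome):
    # Invented stand-in for antenna performance: closer to target is better.
    target = [0.25, 0.5, 0.75, 1.0]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def genetic_algorithm(pop_size=30, genome_len=4, generations=100, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[: pop_size // 2]            # selection: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)      # single-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(genome_len)           # mutation: nudge one gene
            child[i] += rng.gauss(0, 0.05)
            children.append(child)
        pop = survivors + children
    return max(pop, key=score)

best = genetic_algorithm()
```

After a hundred generations the best genome sits close to the target vector, which is exactly the "they excel at optimizing" point.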
Posted 2/2/15
I for one welcome our robot overlords.
Posted 2/2/15 , edited 2/2/15

dougeprofile wrote:


nanikore2

Okay. We need to stop this editing back and forth and do regular replies. I'm a bit tired of this method.

What you're doing is making a lot of assertions with absolutely nothing to back them up. The above paragraph consists of nothing but a long series of "could" "could" and more "could" with zero demonstration or explanation of how:

1. You said "with or without intuition pump", which made no sense if you knew what I was even saying. If it's not logical programming via experiential input, then what are the robots operating on? I've already asked for an explanation once. Please explain in a new reply instead of editing. Thanks.

2. How "could" they come to a conclusion where the most efficient method is not the "fastest", when efficiency includes speed? The entire point of my raising the terms "morality" and "elegance" is that those are not in the domain of a machine's concern, because they require at the very least consciousness (which I've already said intelligent systems do not have to possess; this entire issue surrounds the ability of AI, not its sentience).


What do YOU have to back up your assertions? Hey, I'm just speculating; it's not like there is a lot of material on these machines... except in sci-fi. I am not so sure your distinction between intelligent and sentient is valid or accurate (in the real world). Also, I disagree that consciousness would be required to operate in a moral framework any more than intelligence; if it can do one, it can do the other. A merely intelligent machine could not "choose" to destroy the world; only a "sentient" one could, and I don't believe a sentient machine is possible. Rather than attempting to destroy humans for resources, it could decide it is more efficient to fund technological innovation.

You are right about the "intuition pump"; I just looked it up, and I still disagree with you. These machines may not necessarily operate on logic; on what basis do you assume they would? How do you know what they would view as the most efficient method? If you send out 10,000 letters to the wrong people and I send out 5 letters to the right people, I am the more efficient of the two. Speed may or may not be part of efficiency.


Okay. I'm going to teach you how to back up an assertion; in other words, how to actually make a real point instead of saying something like "Hey, there could be an invisible flying spaghetti monster up in the sky too!" (...which is sophistry, but I'm not going to go OT into that right now)

I'm going to use what I've already said in a previous reply to you as an example.

Assertion:

Intelligence does not equal sentience.


Why? How did I back this up? I could have thrown a dictionary, but instead I gave an example:

I even see some of it around my engineering work (Good thing their designs suck and I have to do clean up, otherwise I would lose my job as a designer). They are designed to find the most efficient solution within their compute cycles, not the most "humane" or "elegant" or anything like that. Once they see a direct solution they will take it. Such is the nature of machines.


The above example shows HOW or WHY intelligence is not sentience: it gives an example of something that has intelligence but lacks the attributes that require sentience, such as being humane ("humane") and having an aesthetic sense ("elegant"). I'm not even sure you understood that that's what I was doing, because you were just too busy afterwards trying to turn away every bit of every sentence I say.
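The intelligent-but-not-sentient category can be made concrete with the kind of rule engine classic expert systems used. A toy forward-chaining sketch; the rules and fact names are invented for illustration. It derives conclusions mechanically, with no awareness anywhere:

```python
# Rules: if every premise is a known fact, the conclusion becomes a fact.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                      # keep firing rules until nothing new fires
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Given `{"fever", "cough", "short_of_breath"}` it chains through both rules and "solves" the problem, which is intelligence in the problem-solving sense with nothing resembling sentience involved.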

Do you know how this goes now? I can give more examples if you need them.

You need to support your points, and if you're "just speculating", then too bad, because I'm not "just" speculating; I am reasoning, using what I know to support it.
Posted 2/2/15 , edited 2/2/15
I didn't read through all the posts, but I do believe that if human-level AI were created, society would fall.

Purely because I believe society, in its current state and with its bleak-looking future, will not be equipped to fight it off.

EMPs are the only way, I think.

Take a look at all the hipsters and smartphone users: they liked Flappy Bird. We're screwed.