Do Artificially Intelligent robots deserve inalienable rights?
852 cr points
25 / M
Posted 12/26/13 , edited 12/26/13


But this thread is about what happens if we do give robots "humanoid brains" (AI) and they do have emotions.
36990 cr points
31 / M / Bellingham WA, USA
Posted 12/26/13
I highly doubt I'll see true AI of the level implied in this thread in my lifetime, so I'm not too worried about philosophical questions like these :P

Talking apes and reverse engineered dinosaur chickens will be pretty sweet to live through though. It'll totally happen.
17181 cr points
(´◔౪◔)✂❤
Posted 12/26/13 , edited 12/26/13
I also wanted to mention that Isaac Asimov doesn't seem to understand that there are droids killing people as we speak. Sad to say, there are AI robots programmed specifically to kill, and that's going to happen as long as wars exist. So don't those laws go out the window then?

log10 wrote:
Anything is possible. All possibilities are happening right now. We already have advanced AI super computers that are as smart as humans.

Also, you answered this thread's question wrong.
No, not anything is possible. Just like it's not possible for a man with a beard to create a red little guy with two spikes on his head.

Stop being stupid and read my post again.
3910 cr points
26 / M / Pandemonium
Posted 12/26/13 , edited 12/26/13

log10
Anything is possible. All possibilities are happening right now. We already have advanced AI super computers that are as smart as humans.

Also, you answered this thread's question wrong.



Will you please stop saying stupid shit?
Look, I know you're trolling, but some poor ignorant sucker here might actually read what you write and believe it.
Lying like this is irresponsible.
3910 cr points
26 / M / Pandemonium
Posted 12/26/13 , edited 12/26/13

Balzack wrote:

I highly doubt I'll see true AI of the level implied in this thread in my lifetime, so I'm not too worried about philosophical questions like these :P

Talking apes and reverse engineered dinosaur chickens will be pretty sweet to live through though. It'll totally happen.


Maybe... But then again, maybe (and hopefully) our lifetimes will extend much further than they have for any other humans previously in history...
http://www.iflscience.com/health-and-medicine/anti-aging-formula-slated-begin-human-trials
852 cr points
25 / M
Posted 12/26/13


Maybe... But then again, maybe (and hopefully) our lifetimes will extend much further than they have for any other humans previously in history...
http://www.iflscience.com/health-and-medicine/anti-aging-formula-slated-begin-human-trials


They're gonna start testing that stuff out on humans next year!? That's crazy!!! I wonder how it'll turn out...
3910 cr points
26 / M / Pandemonium
Posted 12/26/13 , edited 12/26/13

spinningtoehold0 wrote:



Maybe... But then again, maybe (and hopefully) our lifetimes will extend much further than they have for any other humans previously in history...
http://www.iflscience.com/health-and-medicine/anti-aging-formula-slated-begin-human-trials


They're gonna start testing that stuff out on humans next year!? That's crazy!!! I wonder how it'll turn out...


Me too. I hope it turns out successful, and that we head toward a future of far longer (maybe even indefinite) lifespans. That'd be awesome.
Of course, this is only one of many lines of research into prolonging the human lifespan. So even if this one doesn't end up granting us indefinite lifespans (or doesn't work at all), all hope is still not lost.

Hoping for the best, though.

Just the thought of it makes me really grateful that I was born as late in history as I was.
Even though I often bitch and whine that I wish I was born half a century later (at LEAST), I am still extremely grateful that I wasn't born any earlier, and that I was born at a point so late in human history that immortality, as crazy as it sounds, might actually be within the grasp of us who are alive today...
5033 cr points
32 / M / New Orleans
Posted 1/1/14
This is very simple. Machines are things. They are not alive. No matter how elaborate a machine is, it will always be a thing. You don't give things rights, and really there is no need to, unless for some retarded reason we program them to feel pain or discomfort or sadness. The machines will be happy.
852 cr points
25 / M
Posted 1/1/14 , edited 1/1/14
If they become as intelligent and human as this one



then I'd say hell yeah they deserve rights!
3910 cr points
26 / M / Pandemonium
Posted 1/3/14 , edited 1/3/14

spinningtoehold0 wrote:

If they become as intelligent and human as this one



then I'd say hell yeah they deserve rights!


Nice art-style.
Sauce?
852 cr points
25 / M
Posted 1/3/14 , edited 1/3/14

Syndicaidramon wrote:

Nice art-style.
Sauce?

The Big O.
27263 cr points
39 / Inside your compu...
Posted 1/4/14
I'm speaking from the point of view of an engineer.

There is one distinction I don't see being put forward so far: an emulated intelligence that has been inserted into a simulacrum, versus an intelligence that has not been inserted and is not the product of simulation.

Let me put this in simpler terms.

Let's say that I'm turning a door knob with my hand. The door knob may turn, but it is not the one that's doing the turning. I am the one that's causing it to turn.

Then I program an electronic door knob that only turns under certain conditions. The door knob, while turning "by itself" and perhaps even appearing to have "a mind of its own", is still following my instructions.

An artificial intelligence that I program would have the goal of emulating intelligence. It would perform tasks the way I designed the program to perform them. Even when left to itself, the genesis of its actions is still its programming, which came from me, the programmer.

In other words, this apparent "thinking" being performed by the automated system is still, at its base, my thinking. I have determined its behavior and the evolution of its behavior (in the case of machine learning).

All that I have done is not create life, but merely an extension of myself.
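The argument above can be made concrete with a toy sketch (a hypothetical example of my own, not code from any real AI system): a "learning" agent whose every rule, the exploration policy, the update rule, and what counts as reward, was written by the programmer. Whatever behavior emerges traces back to those authored rules.

```python
import random

def make_learner(num_actions, epsilon=0.1):
    """A tiny 'learning' agent. Every rule below was fixed by the programmer."""
    values = [0.0] * num_actions   # estimated value of each action
    counts = [0] * num_actions     # how often each action was tried

    def act():
        # Exploration policy: chosen by the programmer, not the agent.
        if random.random() < epsilon:
            return random.randrange(num_actions)
        # Greedy choice: also the programmer's decision.
        return max(range(num_actions), key=lambda a: values[a])

    def learn(action, reward):
        # Update rule (incremental mean): authored by the programmer too.
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]

    return act, learn, values
```

Run against any reward source, the agent will eventually "discover" the best action, but only because the programmer decided in advance what discovering and best mean.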
852 cr points
25 / M
Posted 1/5/14 , edited 1/5/14

nanikore2 wrote:

I'm speaking from the point of view of an engineer.

There is one distinction I don't see being put forward so far: an emulated intelligence that has been inserted into a simulacrum, versus an intelligence that has not been inserted and is not the product of simulation.

Let me put this in simpler terms.

Let's say that I'm turning a door knob with my hand. The door knob may turn, but it is not the one that's doing the turning. I am the one that's causing it to turn.

Then I program an electronic door knob that only turns under certain conditions. The door knob, while turning "by itself" and perhaps even appearing to have "a mind of its own", is still following my instructions.

An artificial intelligence that I program would have the goal of emulating intelligence. It would perform tasks the way I designed the program to perform them. Even when left to itself, the genesis of its actions is still its programming, which came from me, the programmer.

In other words, this apparent "thinking" being performed by the automated system is still, at its base, my thinking. I have determined its behavior and the evolution of its behavior (in the case of machine learning).

All that I have done is not create life, but merely an extension of myself.





But if you program it to learn from an uncontrolled environment, and it learns new things that you never programmed into it from the start, things even you don't know, can you still call that "your thinking", or are those its own thoughts?
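That question can be illustrated with a minimal sketch (again my own hypothetical example): the programmer writes only the learning mechanism; everything the model ends up "knowing" comes from data the programmer may never have seen.

```python
from collections import Counter, defaultdict

def train(corpus_words):
    """The programmer wrote only this update rule; the learned table's
    contents come entirely from whatever data the agent observes."""
    model = defaultdict(Counter)
    for prev, nxt in zip(corpus_words, corpus_words[1:]):
        model[prev][nxt] += 1   # count observed word transitions
    return model

def predict(model, word):
    # Most frequently observed successor, or None if the word was never seen.
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None
```

Feed it two different corpora and `predict` gives different answers: the programmer authored the mechanism, but not the learned content, which is roughly the distinction the two posts are arguing over.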
10361 cr points
23 / M / California
Posted 1/5/14
Only if you don't want shit like this.
46359 cr points
40 / M / End of Nowhere
Posted 1/5/14

nanikore2 wrote:

An artificial intelligence that I program, would have a goal of emulating intelligence. It would perform tasks the way I designed the program to perform. Even when left to itself, the genesis of its actions are still its programming- Which came from me, the programmer.

In other words, this apparent "thinking" that is being performed by the automated system is still at its base my thinking. I have determined its behavior and the evolution of its behavior (in case of machine learning).

All that I have done is not create life, but merely an extension of myself.


A child does not learn to do much instinctively. Children learn language skills and cognitive skills largely by emulating adults. This is why children tend to think like their parents. As they get older that thinking may change, of course, but the basics of their thinking are learned from parents and others around them at an early age. They are simply emulating others.

So are children not sentient? They are "programmed" by others, and their evolution toward eventually thinking for themselves is still based on their early experiences.

If an artificial intelligence can grow beyond its original programming in new and different ways on its own, based upon what it feels is a correct path, then how is that different from how a child grows to adulthood?

We give rights to many living creatures that are not necessarily sentient. Animal cruelty laws exist to help prevent people from abusing animals. We try to protect many species from unnecessary hunting. We do a lot to help others beyond just humans. I fail to see why an artificial intelligence should be treated as inferior to a dog or a dolphin. It is one thing to use current computers as we do; they do not think on even a rudimentary level, and are really just giant processors that react to electricity flowing or not flowing. But future computers could be much more, so it is something to give some thought to.

Either we would need to create a level of AI that likes being subservient to humanity, or else we would need to give them similar rights if they are able to function at a similar level to humans. Otherwise, they will likely end up taking those rights for themselves in a messy sort of way. I am thinking Dune-esque, really.
