Do Artificially Intelligent robots deserve inalienable rights?
Posted 1/8/14
AI?

Well, giving them rights is a tricky as fuck subject. I mean, getting to the point where they are sentient and they comprehend the meaning of life is also a tricky as fuck subject. In the end, it's the question of "Do we give something that's human like human rights?"

If they really are sentient, and they have emotions and such, then sure, why the hell not? Otherwise, if they can't comprehend such things, it'd be like trying to fight for equal rights for pigs. They don't understand the concept, and in human terms, quite frankly do not give two shits.
Posted 1/8/14

nanikore2 wrote:

As if doing it with a doll isn't bad enough.
If you're thinking of nanotech which may emulate "growth" (it is another imitation after all) then you should watch the anime Vexille to comprehend the absolute horror and sickening evil of the concept.

We should begin educating people regarding the true implications of any "pretend life" or any "pretend intelligence" starting today if we are to avert any related future disasters.


Lol. Doing it with a doll might not be so bad. I mean some people already do it with fake... parts.

I watched Vexille and it was pretty interesting. I doubt we would create nanomachines like that, at least I hope not. I think the machines and the man who created them were both pretty evil if you ask me.
By the way, you should try playing Binary Domain for the PS3 someday. It's really good and kinda involves all of these things.




TheNameWithNoNumbers wrote:

AI?

Well, giving them rights is a tricky as fuck subject. I mean, getting to the point where they are sentient and they comprehend the meaning of life is also a tricky as fuck subject. In the end, it's the question of "Do we give something that's human like human rights?"


If they really are sentient, and they have emotions and such, then sure, why the hell not? Otherwise, if they can't comprehend such things, it'd be like trying to fight for equal rights for pigs. They don't understand the concept, and in human terms, quite frankly do not give two shits.


That makes sense. How would we even tell if they're sentient or not? And if we decided that they were, then the only reasons we wouldn't give them rights would be prejudice and/or fear. Maybe fear because of the Frankenstein complex.
Posted 1/9/14

spinningtoehold0 wrote:

Lol. Doing it with a doll might not be so bad. I mean some people already do it with fake... parts.


At least they don't treat those things as living beings, but as the tools that they rightfully are.

To others: Look back at my comments in this thread for explanations of why AI are NEVER living entities. You would not understand unless you've tried your hand at programming and/or are familiar with certain topics in Philosophy of Mind (which I've already read into for a number of years... I am not making stuff up as I go along). I'm not being "stuck up" here; I'm speaking from my engineering experience and knowledge in certain philosophical topics. If after looking at my thread comments you still have questions, I would be glad to address them. I'm here to help clear up questions people may have.

Please continue to enjoy science fiction by utilizing suspension of disbelief, as I do. However, when "chitz get real", it's another matter.
Posted 1/9/14 , edited 1/9/14

nanikore2 wrote:

At least they don't treat those things as living beings, but as the tools that they rightfully are.



Then the question would be "how do we treat the tools that we program and design to emulate humans, their intelligence, and sentience to an almost indistinguishable level?"

We would either have to treat them like humans and give them rights (including the right to even exist), or try to destroy them all and never create them again.

I for one, would find it interesting seeing these things exist.
Posted 1/9/14 , edited 1/9/14

spinningtoehold0 wrote:


nanikore2 wrote:

At least they don't treat those things as living beings, but as the tools that they rightfully are.



Then the question would be "how do we treat the tools that we program and design to emulate humans, their intelligence, and sentience to an almost indistinguishable level?"

We would either have to treat them like humans and give them rights (including the right to even exist), or try to destroy them all and never create them again.

I for one, would find it interesting seeing these things exist.


They have zero rights. Emulated sentience is not sentience. See the concept of P-zombies. Even if those programs on the Internet could spit back perfect responses on the screen when you type stuff on the webpage (...which they don't. They seem awfully gun-shy about talking about how it feels to be themselves, which they couldn't... and even if they CAN.........), they're still just canned responses to stuff.
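For what it's worth, "canned responses to stuff" can literally mean a lookup table. Here's a toy sketch (purely illustrative — the prompts and replies are made up, and no real chatbot is this simple) of a program that "answers" questions about its own inner life without having one:

```python
# A "chatbot" whose every reply is a canned string pulled from a table.
# It produces the right words with zero comprehension behind them.
CANNED = {
    "how are you?": "I'm fine, thanks!",
    "are you conscious?": "Of course I am!",
}

def reply(prompt: str) -> str:
    # Normalize the prompt and look it up; anything unrecognized
    # gets a stock deflection instead of an answer.
    return CANNED.get(prompt.strip().lower(), "Interesting. Tell me more.")

print(reply("Are you conscious?"))          # -> Of course I am!
print(reply("What is it like to be you?"))  # -> Interesting. Tell me more.
```

The table could be a million entries instead of two and the situation wouldn't change: the output is selected, not meant.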

I see no point in creating stuff that goes beyond the Uncanny Valley. I do not find the concept interesting but instead revolting.

They shouldn't be created in the first place.

First, I see no such need. Things staying short of, or just on the brink of, the other side of the Uncanny Valley can already serve all kinds of functions adequately. We have no need to perform self-deception or others-deception of the worst order by creating tools that go entirely beyond the Uncanny Valley. If people want to see that level of AI, their curiosity is already satisfied by that point; no need to completely duplicate outward appearance if the functionality is already there.

Second, I have no desire to encounter P-zombies. As far as I'm concerned they're nightmare fuel.

Third, such social / psychological experimentation on a general populace would be dangerous and irresponsible (unless those things are really going to be canned to a facility with no means of escape whatsoever- Then again, why bother? We can simply have live people serving as placebos, and tell the test subjects that they're dealing with robots when they're in fact seeing live people! Do that if you must do the psych experiment so badly for some reason)
Posted 1/10/14

nanikore2 wrote:

I see no point in creating stuff that goes beyond the Uncanny Valley. I do not find the concept interesting but instead revolting.

They shouldn't be created in the first place.

First, I see no such need. Things staying short of, or just on the brink of, the other side of the Uncanny Valley can already serve all kinds of functions adequately. We have no need to perform self-deception or others-deception of the worst order by creating tools that go entirely beyond the Uncanny Valley. If people want to see that level of AI, their curiosity is already satisfied by that point; no need to completely duplicate outward appearance if the functionality is already there.

Second, I have no desire to encounter P-zombies. As far as I'm concerned they're nightmare fuel.

Third, such social / psychological experimentation on a general populace would be dangerous and irresponsible (unless those things are really going to be canned to a facility with no means of escape whatsoever- Then again, why bother? We can simply have live people serving as placebos, and tell the test subjects that they're dealing with robots when they're in fact seeing live people! Do that if you must do the psych experiment so badly for some reason)


I think it would be kinda cool.

I guess the only reasons would be for aesthetic appeal or to actually trick some people, maybe as a spy or weapon.

How do we know we aren't the p-zombies?

They should be tested in a facility first rather than just unleashing them into the general unknowing public. That does sound pretty funny though.
Posted 1/11/14 , edited 1/11/14

spinningtoehold0 wrote:

How do we know we aren't the p-zombies?


Take a look at the definition. What does it say?

"For aesthetic reasons and to trick people"? Sorry, to go through all the required effort to build such a thing you've got to cough up a better reason than basically "for chitz and giggles".

Posted 1/11/14

nanikore2 wrote:

"For aesthetic reasons and to trick people"? Sorry, to go through all the required effort to build such a thing you've got to cough up a better reason than basically "for chitz and giggles".




That's okay. I'm done already. See you.
Posted 1/11/14 , edited 1/11/14
In sum:

An AI does absolutely nothing "on its own":
http://www.crunchyroll.com/forumtopic-809661/do-artificially-intelligent-robots-deserve-inalienable-rights?fpid=45416791

An AI is incapable of "individual thoughts":
http://www.crunchyroll.com/forumtopic-809661/do-artificially-intelligent-robots-deserve-inalienable-rights?fpid=45425203

An AI that can perfectly imitate human beings on the outside (practically a p-zombie) has no good reason to even be made:
http://www.crunchyroll.com/forumtopic-809661/do-artificially-intelligent-robots-deserve-inalienable-rights?fpid=45459195

An AI robot shouldn't have any rights, because it possesses absolutely nothing that demands it. You might as well demand rights for a stuffed talking bear toy.

Q.E.D.

Human augmentation is a far more interesting and practical idea than human imitation. Further reading: http://www.technologyreview.com/view/511421/the-brain-is-not-computable/
Posted 1/14/14 , edited 1/14/14
No, we created them, so it's up to us to give them inalienable rights. They don't inherit them. This is, of course, under the assumption that we created an AI that's that good.
Posted 1/17/14
No, should cows have inalienable rights?
Posted 2/9/14

tehstud wrote:

No, should cows have inalienable rights?
All them vegans, lemme hear you holla YEA
Posted 2/9/14 , edited 2/9/14
No, because artificial intelligence is merely what has been programmed into the robot, and its response is merely mechanical. In effect, it has the appearance of humanity without being any more human than an automobile is.
Posted 2/26/14
I don't think we "give" rights in general. Those who feel oppressed demand rights, and if the oppressed + supporters have enough power, the rights will be acknowledged. So, some people will sympathize with robots, and some will probably want the robots' favor for business/political reasons. I use all of these terms broadly; "power" for example might be money, special knowledge, or simply the ability to disrupt things.

Also, I'm not sure if we could ever really tell how a robot feels. Even for fellow humans, I don't think we really know; we just assume the next person is similar enough that his "sad" is like my "sad", and so on. That's probably a fair assumption for humans; for robots I'm not so sure. Of course, if the robots start going "screw you guys, we've had enough of this oppression," we'd have a pretty good idea how they felt...

Finally, I was quick to think of 0s and 1s when "artificial intelligence" was the topic, but science's efforts also extend to manipulating bio and genetic processes. So, perhaps the artificial intelligence we're speculating about is some hybrid between a fancy computer, a genetically modified organism, and who knows what else. I don't know what that would be, or if it changes the analysis.
Posted 2/27/14

Quaternion wrote:

I don't think we "give" rights in general. Those who feel oppressed demand rights, and if the oppressed + supporters have enough power, the rights will be acknowledged. So, some people will sympathize with robots, and some will probably want the robots' favor for business/political reasons. I use all of these terms broadly; "power" for example might be money, special knowledge, or simply the ability to disrupt things.

Also, I'm not sure if we could ever really tell how a robot feels. Even for fellow humans, I don't think we really know; we just assume the next person is similar enough that his "sad" is like my "sad", and so on. That's probably a fair assumption for humans; for robots I'm not so sure. Of course, if the robots start going "screw you guys, we've had enough of this oppression," we'd have a pretty good idea how they felt...

Finally, I was quick to think of 0s and 1s when "artificial intelligence" was the topic, but science's efforts also extend to manipulating bio and genetic processes. So, perhaps the artificial intelligence we're speculating about is some hybrid between a fancy computer, a genetically modified organism, and who knows what else. I don't know what that would be, or if it changes the analysis.


I'd have to agree, though in a different way. Rights only get passed if the people with a lot of money care about it, imo.