Do Artificially Intelligent robots deserve inalienable rights?
27257 cr points | 39 / Inside your compu... | Posted 1/5/14, edited 1/5/14

spinningtoehold0 wrote:

But if you program it to learn things from an uncontrolled environment, and it learns other new things that you haven't programmed into it from the start, things even you don't know, can you still call that "your thinking", or isn't it its own thoughts?


I don't think you realized this because you haven't programmed before, but...

Who or what is determining how it learns? I'm extending myself into a new environment with a proxified tool.

If I learn new things myself, I'm still me.


Cellory wrote:

A child does not learn to do much instinctively. Children learn language skills and cognitive skills largely by emulating adults. This is why children tend to think like their parents. As they get older, that thinking may change of course, but the basics of their thinking is learned from parents and others around them at an early age. They are simply emulating others.



You are completely skipping things.

"Emulation" itself is a function. You have to program something in order for it to even "emulate" in the first place. Children may do it instinctively because that is part of their innate programming. A machine contains nothing innate. Again, read my original reply: no one so far in this thread (except me) has made the distinction.

You actually have to program a machine to emulate anything at all. "Imitation" itself is an act that a bare set of non-programmed machinery does not apprehend or comprehend in the slightest.

You sit a robot down in front of something, and it just registers its inputs [edit: in fact, it wouldn't even register its inputs unless you program it to! This is a basic programming concept]. Then what? You haven't even told it to enter them into its databanks. You haven't told it what to do with the input data. You haven't told it to organize the data according to function (the context, e.g. what to even think of a certain thing before putting it in a certain context), much less access a previously stored set of commands that matches the categorical context.

The robot would just sit there. It would see nothing. It would remember nothing.
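The "it would register nothing" point can be sketched in a few lines of Python. This is purely illustrative; the `Robot` class and its `sense` method are hypothetical names, not any real robotics API:

```python
# Hypothetical sketch: a bare machine only keeps what it is
# explicitly programmed to keep.
class Robot:
    def __init__(self):
        self.memory = []          # empty until the programmer says what to store

    def sense(self, raw_input):
        # No instruction here says to store, categorize, or contextualize
        # the input, so it is simply discarded.
        pass

robot = Robot()
robot.sense("a red apple")        # an input arrives...
print(robot.memory)               # ...but nothing was registered: prints []
```

Without a programmer-written line that appends the input to `memory`, the data never enters the machine's "databanks" at all.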
27257 cr points | 39 / Inside your compu... | Posted 1/5/14

Lethargic_leopard_Seal wrote:

Only if you don't want shit like this.


I would liken that kind of result to people accidentally cutting themselves with something sharp, except worse. We see plenty of examples of tools killing and maiming when used irresponsibly.

Knowing what I know, I think people who think they are "creating life" with AI are delusional and infected with various degrees of God complex.
852 cr points | 25 / M | Posted 1/5/14

nanikore2 wrote:


spinningtoehold0 wrote:

But if you program it to learn things from an uncontrolled environment, and it learns other new things that you haven't programmed into it from the start, things even you don't know, can you still call that "your thinking", or isn't it its own thoughts?


I don't think you realized this because you haven't programmed before, but...

Who or what is determining how it learns? I'm extending myself into a new environment with a proxified tool.

If I learn new things myself, I'm still me.




Lol, what's up with that condescending statement?
The programmer may determine how it learns, but that doesn't mean he determines exactly what it learns. And if the robot learns new things on its own, is that also you? Or perhaps your extension? Maybe this is what "arrogant elitism" means.
204 cr points | 34 / M | Posted 1/5/14
Right now it's just an idea... truly science fiction. If, when, and how it happens will largely determine how rights would work. Will they hold to the same set of basic beliefs... and even if they do, will they include humans in those rights?

War/Peace...
Subservience/Master...
Equality/Inequality

Too many unknowns to say whether we would give them rights... maybe it would be more about whether they would grant us rights. Who knows.
10361 cr points | 23 / M / California | Posted 1/5/14

nanikore2 wrote:


Lethargic_leopard_Seal wrote:

Only if you don't want shit like this.


I would liken that kind of result to people accidentally cutting themselves with something sharp, except worse. We see plenty of examples of tools killing and maiming when used irresponsibly.

Knowing what I know, I think people who think they are "creating life" with AI are delusional and infected with various degrees of God complex.


Or they're infected with a "let's build a cool freaking robot army" complex.
27257 cr points | 39 / Inside your compu... | Posted 1/6/14, edited 1/6/14

spinningtoehold0 wrote:

Lol, what's up with that condescending statement?
The programmer may determine how it learns, but that doesn't mean he determines exactly what it learns. And if the robot learns new things on its own, is that also you? Or perhaps your extension? Maybe this is what "arrogant elitism" means.


I'm sorry about the off-putting statement, but isn't it true? People wouldn't trivialize the nature and scope of programming if they realized what it involves. That's what I meant by the "because you haven't programmed..." remark. Not trying to insult anyone here.

The robot does not learn anything "on its own". It is not autonomous (in its learning) because it is still operating according to its programming. I have already explained this in my reply to Cellory, but here it is again:

You sit a robot down in front of something, and it just registers its inputs. In fact, it wouldn't even register its inputs unless you program it to! Then what? You haven't even told it to enter them into its databanks. You haven't told it what to do with the input data. You haven't told it to organize the data according to function (the context, e.g. what to even think of a certain thing before putting it in a certain context), much less access a set of commands that matches the categorical context.

Without your explicit instructions, the robot would just sit there. It would see nothing. It would remember nothing.

So yes, the program would be my extension. That's what a tool is. If you imagine some kind of "meta-programming" arising out of learning, it would only arise in exactly the way the programmer programmed it to.
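The "meta-programming" point can be made concrete with a toy sketch, assuming a deliberately trivial learner (the `Learner` class, its `threshold`, and the averaging rule are all invented for illustration):

```python
# Hypothetical toy learner: even when it "modifies its own behaviour",
# it does so exactly the way the programmer's update rule dictates.
class Learner:
    def __init__(self):
        self.threshold = 0.5              # starting rule, written by the programmer

    def learn(self, example):
        # The "self-modification" is itself a fixed, programmer-written step.
        self.threshold = (self.threshold + example) / 2

    def decide(self, value):
        return value > self.threshold

bot = Learner()
for example in [0.9, 0.9, 0.9]:           # data the programmer never saw...
    bot.learn(example)
# ...yet the resulting rule is a deterministic product of learn():
print(round(bot.threshold, 3))            # prints 0.85
```

The learner ends up with a threshold nobody typed in directly, but every step that produced it was authored in advance; nothing about the result escapes the programmer's update rule.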

What information something obtains does not determine whether it is a lifeform. It is how it obtains that information which does. A simple way to look at it is this:

X obtains information described as A
Y obtains different information, described as B

Would you say that X is fundamentally different in its "aliveness" from Y simply because of their learning? Okay. If you still think they must differ in their "aliveness", look at what's in the spoiler.
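The X/Y comparison can be restated as a sketch (the `Recorder` class is a hypothetical stand-in): two instances of the very same mechanism, fed different information, differ in contents but not in kind.

```python
# Hypothetical sketch: same mechanism ("how"), different contents ("what").
class Recorder:
    def __init__(self):
        self.contents = []

    def obtain(self, info):
        self.contents.append(info)    # the acquisition process is identical

x, y = Recorder(), Recorder()
x.obtain("A")                         # X obtains information described as A
y.obtain("B")                         # Y obtains different information, B
print(x.contents != y.contents)       # different "what": prints True
print(type(x) is type(y))             # identical "how": prints True
```

On this view, the difference in stored data gives no grounds for saying one instance is more "alive" than the other.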

852 cr points | 25 / M | Posted 1/6/14, edited 1/6/14


It's cool.

I'm not saying that a robot is more alive than a person because of what it knows (what does being "more alive" even mean?). I'm saying that a robot would obtain its own individual thoughts, or memories, even if someone programmed it to do that. That makes it its own separate and unique entity, separate from its creator/programmer and from anyone and anything else.
27257 cr points | 39 / Inside your compu... | Posted 1/6/14, edited 1/6/14

spinningtoehold0 wrote:

I'm not saying that a robot is more alive than a person because of what it knows (what does being "more alive" even mean?). I'm saying that a robot would obtain its own individual thoughts, or memories, even if someone programmed it to do that. That makes it its own separate and unique entity, separate from its creator/programmer and from anyone and anything else.


Please keep in mind that the "how" is already embedded in the "what". When you say "individual thoughts" (as if the "what" could be separated from the "how"), you are disregarding the connection between the contents of learning and the process of learning.

The two can't be separated like that; thought processes do not work that way. When you disregard the connection, you get silly results like the Raven Paradox: http://en.wikipedia.org/wiki/Raven_paradox . The "paradox" (really a demonstration of what not to do with items of sense experience) is a perfect example of how you can't separate contents from the process by which they are acquired. You have to seek to understand what you're looking at before you learn anything, rather than just file it away as a dead data variable of some kind. (I completely disagree with all solutions to the Raven Paradox that simply accept that seeing an apple's color gives any information about ravens; they're cheap and dumb workarounds that do not acknowledge how the mind really works.)

When I already control the way it thinks about everything, including anything new, its thoughts are not "individual thoughts". They are a proxified version of my own thought process. It is as if I sent a part of myself (really a partial imitation of my own thinking process) into another environment.

The separateness (autonomy of thought) you spoke of is an illusion it gives off, and "uniqueness" is quite meaningless when you consider that the computer on my desk at home right now is quite "unique": I built the thing, and the data stored inside is "different" from that of every other computer... Aliveness is sentience. What takes on only the appearance of sentience without the fact of it is a fancy amusement-park animatronic at "best", and a p-zombie at worst: http://en.wikipedia.org/wiki/P-zombies

If the bar for "individual thoughts" is set that incredibly low, then all the space probes flying out there right now should be given rights.
852 cr points | 25 / M | Posted 1/6/14, edited 1/6/14

nanikore2 wrote:


spinningtoehold0 wrote:

I'm not saying that a robot is more alive than a person because of what it knows (what does being "more alive" even mean?). I'm saying that a robot would obtain its own individual thoughts, or memories, even if someone programmed it to do that. That makes it its own separate and unique entity, separate from its creator/programmer and from anyone and anything else.


Please keep in mind that the "how" is already embedded in the "what". When you say "individual thoughts" (as if the "what" could be separated from the "how"), you are disregarding the connection between the contents of learning and the process of learning.

The two can't be separated like that; thought processes do not work that way. When you disregard the connection, you get silly results like the Raven Paradox: http://en.wikipedia.org/wiki/Raven_paradox . The "paradox" (really a demonstration of what not to do with items of sense experience) is a perfect example of how you can't separate contents from the process by which they are acquired. You have to seek to understand what you're looking at before you learn anything, rather than just file it away as a dead data variable of some kind. (I completely disagree with all solutions to the Raven Paradox that simply accept that seeing an apple's color gives any information about ravens; they're cheap and dumb workarounds that do not acknowledge how the mind really works.)

When I already control the way it thinks about everything, including anything new, its thoughts are not "individual thoughts". They are a proxified version of my own thought process. It is as if I sent a part of myself (really a partial imitation of my own thinking process) into another environment.

The separateness (autonomy of thought) you spoke of is an illusion it gives off, and "uniqueness" is quite meaningless when you consider that the computer on my desk at home right now is quite "unique": I built the thing, and the data stored inside is "different" from that of every other computer... Aliveness is sentience. What takes on only the appearance of sentience without the fact of it is a fancy amusement-park animatronic at "best", and a p-zombie at worst: http://en.wikipedia.org/wiki/P-zombies

If the bar for "individual thoughts" is set that incredibly low, then all the space probes flying out there right now should be given rights.


Alright, I see. So having individual memories isn't enough; it also has to be able to feel on its own, or be sentient. If it can't truly feel, then it really is just a tool or an imitation, and those don't need rights.
27257 cr points | 39 / Inside your compu... | Posted 1/6/14, edited 1/6/14

spinningtoehold0 wrote:
Alright, I see. So having individual memories isn't enough; it also has to be able to feel on its own, or be sentient. If it can't truly feel, then it really is just a tool or an imitation, and those don't need rights.


Yes.

I'm more interested in protecting human society from the possible disturbances that come from making robots overly prone to triggering co-identification and/or empathic responses (whether from humans, animals, or even other robots).

That is, robots made to take on too life-like an appearance. There can be bad consequences from the confusion.

A law that I think needs to be passed in the future is one outlawing the production of any physical construct that progresses beyond the Uncanny Valley. Call it the "Uncanny Valley Prohibition": http://en.wikipedia.org/wiki/Uncanny_valley

Something like Lieutenant Data from ST:TNG is still okay because you can obviously tell he's an android, but anything beyond that should be strictly banned. Doing it is asking for trouble.
852 cr points | 25 / M | Posted 1/7/14, edited 1/7/14

nanikore2 wrote:

Yes.

I'm more interested in protecting human society from the possible disturbances that come from making robots overly prone to triggering co-identification and/or empathic responses (whether from humans, animals, or even other robots).

That is, robots made to take on too life-like an appearance. There can be bad consequences from the confusion.

A law that I think needs to be passed in the future is one outlawing the production of any physical construct that progresses beyond the Uncanny Valley. Call it the "Uncanny Valley Prohibition": http://en.wikipedia.org/wiki/Uncanny_valley

Something like Lieutenant Data from ST:TNG is still okay because you can obviously tell he's an android, but anything beyond that should be strictly banned. Doing it is asking for trouble.


If technology ever did get that advanced, they'd probably make it part of the Geneva Conventions never to make robots that seem too human or that think they're human. That's what happened in the game Binary Domain anyway, though someone still did it, and it was really interesting to see how nobody knew or could tell that robots had been living as humans for years in their own society. Even the humanoid robots themselves didn't know they were robots. In fact, they acted so human that you might even be convinced they were sentient.
27257 cr points | 39 / Inside your compu... | Posted 1/7/14, edited 1/7/14

spinningtoehold0 wrote:
Even the humanoid robots themselves didn't know they were robots. In fact they acted so human that you might even be convinced that they were sentient.


*Daughter comes home*

"Hi dad, you wouldn't believe what happened today! I met this perfect boy..."

<insert nightmare fuel>
852 cr points | 25 / M | Posted 1/8/14

nanikore2 wrote:


spinningtoehold0 wrote:
Even the humanoid robots themselves didn't know they were robots. In fact they acted so human that you might even be convinced that they were sentient.


*Daughter comes home*

"Hi dad, you wouldn't believe what happened today! I met this perfect boy..."

<insert nightmare fuel>


Then the human and robot mate and reproduce, giving birth to a robot/human hybrid child... I wonder if that would be so bad...
Posted 1/8/14
As artificial constructs with no consciousness, no feelings, and no recognition of personal needs from the perspective of an intelligent living being, spiritual or not, robots have no rights aside from those we give them as personal property, meant to serve our whims until they can no longer function.
27257 cr points | 39 / Inside your compu... | Posted 1/8/14

spinningtoehold0 wrote:
Then the human and robot mate and reproduce, giving birth to a robot/human hybrid child... I wonder if that would be so bad...


As if doing it with a doll isn't bad enough.
If you're thinking of nanotech that might emulate "growth" (it is another imitation, after all), then you should watch the anime Vexille to comprehend the absolute horror and sickening evil of the concept.

We should begin educating people regarding the true implications of any "pretend life" or any "pretend intelligence" starting today if we are to avert any related future disasters.