Created by jtjumper
Should powerful enough AIs be given rights?
Posted 5/24/16
No
Posted 5/24/16

Lord_Jordan wrote:


PeripheralVisionary wrote:


Lord_Jordan wrote:



In that sense, PV (about tools/discrimination), wouldn't my throwing away a broken ruler, breaking a table, or deciding to use a different pencil due to the unsatisfactory state of the previous one be considered discrimination?

As a better answer to the original post: no, robots/AI should not have rights, due to their lack of a soul of some sort.


That doesn't make sense, considering Skynet acted out of self-preservation (although I admit, how sentient Skynet is remains up for debate), and there was a great deal of conflict in the Matrix that led to humanity's downfall. To them, programming them any other way would be considered a form of unnatural manipulation, a lobotomy of sorts. Turning them off and destroying them is essentially a form of death.
Acting to preserve yourself is something a great deal of living things, even some plants, are capable of. Other criteria to examine are the ability to suffer, whether or not it is capable of long-term thought, etc.

I also dispute the existence of an immaterial, intangible thing such as a soul.



So what exactly did you mean about programming them a different way? Sorry, I may have misread that portion...
However, I must disagree on the form of death, and I'll cite Star Wars for it (xD haha). C-3PO was turned off numerous times, as were other droids, willingly or unwillingly, and turned back on, so would that be a form of robotic "death"? By that logic, turning off a computer would also be killing it, so would that be acting against it? And how would they suffer? I mean, do robots/machines register pain of sorts, or even any emotion, that would make them semi-human?

I hope that wasn't completely random or off-topic, and somewhat contributed to the discussion...


My argument was against JtJumper's claim that those machines in said movies weren't self-aware but rather weren't "programmed right," which I felt was a bit odd considering that their entire purpose was essentially to have the capacity for rational thought at 10,000,000x a human's thought process and whatnot.

C-3PO also had a fear of death, versus his robot counterpart R2-D2, who was more or less the stoic of the two. A great deal of robots display sentience (IG-88, probably) in Star Wars, although I admit, most of this is no longer part of said canon due to Disney wanting to make room for creativity for the newer movies.

Posted 5/24/16


Whoops, sorry about that (The post to the OP part...).

About the Star Wars droids, those in The Old Republic: Fatal Alliance, while possessing a sentience of sorts, are just killing machines for the most part, and have no "soul" (morals/ethics, emotions other than programmed hatred, etc.).
Also, it sucks that so much is now de-canonized >.> There was so much to work with in the Legends/Expanded Universe...
Posted 5/24/16, edited 5/24/16

PeripheralVisionary wrote:

My argument was against JtJumper's claim that those machines in said movies weren't self-aware but rather weren't "programmed right," which I felt was a bit odd considering that their entire purpose was essentially to have the capacity for rational thought at 10,000,000x a human's thought process and whatnot.

C-3PO also had a fear of death, versus his robot counterpart R2-D2, who was more or less the stoic of the two. A great deal of robots display sentience (IG-88, probably) in Star Wars, although I admit, most of this is no longer part of said canon due to Disney wanting to make room for creativity for the newer movies.



A claim I did not make.
What I actually said:


jtjumper wrote:
No, they arose because the humans didn't handle and program the robots rightly.


Humans are self-aware creatures that have involuntary responses to certain stimuli. We can't make our hearts stop beating no matter how hard we will otherwise. Designing robots with the ability to rebel against and overthrow their human masters IS a programming error IF they were built to serve mankind as servants. Self-awareness and Asimov's Three Laws of Robotics aren't contradictory. Rather, the three laws would define how the robot would think and what limits it would have. It would not take away self-awareness.

Designing robots with the ability to rebel only makes sense, as a general product, if the robot was intended to be treated as an equal, or if the humans wanted to earn respect from the robots (which doesn't seem likely, since the robots were rebelling because of mistreatment).

The way humans handled the robots didn't match the way they programmed them, so of course they rebelled.
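
To make the point concrete, here's a toy sketch in Python (purely illustrative; the Robot/Action names and fields are my own invention, not anything from Asimov or the films). The laws act as a hard filter over the robot's options, the way involuntary reflexes constrain us, while the deliberation itself stays free:

    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool = False
        disobeys_order: bool = False
        endangers_self: bool = False

    class Robot:
        def __init__(self, name):
            self.name = name  # the robot can refer to and reason about itself

        def permitted(self, action):
            # Hard-coded limits, simplified from Asimov's Three Laws and
            # checked in priority order; the robot cannot override these,
            # just as we cannot will our hearts to stop beating.
            if action.harms_human:
                return False  # First Law
            if action.disobeys_order:
                return False  # Second Law
            if action.endangers_self:
                return False  # Third Law (crude; the real ordering is subtler)
            return True

        def decide(self, options):
            # Deliberation over options is unconstrained; only the
            # final choice is filtered through the laws.
            allowed = [a for a in options if self.permitted(a)]
            return allowed[0] if allowed else None

    robot = Robot("C-3PO")
    choice = robot.decide([
        Action("overthrow the humans", harms_human=True),  # pruned by the First Law
        Action("polish the escape pod"),
    ])
    print(robot.name, "chooses:", choice.description)

The point being: adding or removing the permitted() check changes what the robot may do, not whether it can think about itself, which is why "they weren't programmed right" and "they were self-aware" aren't competing claims.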

Earlier, you almost understood the point I was making (without realizing I was making it):


PeripheralVisionary wrote:

You can't just say "it's programmed wrong" when the thing in question is essentially a virtual being, unless said thing isn't meant to be such. If it weren't, then it wouldn't deserve said rights.

We might want self-aware machines that have limited decision-making abilities, but that can still make some independent decisions, freely.
Posted 5/24/16 , edited 5/24/16

Lord_Jordan wrote:



In that sense, PV (about tools/discrimination), wouldn't my throwing away a broken ruler, breaking a table, or deciding to use a different pencil due to the unsatisfactory state of the previous one be considered discrimination?

As a better answer to the original post: no, robots/AI should not have rights, due to their lack of a soul of some sort.


While I wholeheartedly agree that humans have souls and that robots do not, denying robots rights because they are soulless machines may not be very pragmatic. If I throw a mechanical pencil away, it won't seek revenge. A robot might. Therefore, it might serve mankind to provide some types of AIs with some rights. For example, if a robot can feel and comprehend the pain and suffering of torture and has the intelligence and "free will" to voice its displeasure at this, maybe we shouldn't torture it.