Do Artificially Intelligent robots deserve inalienable rights?
15998 cr points
21 / M / Aberystwyth, Wale...
Posted 4/15/14

eminem1967 wrote:

Yeah, but they can't get emotions; it's impossible. Plus, we can barely make a penny levitate off the ground. I don't think we'll have robots that look and act like humans any time soon.


Depending on what you use, levitating a penny is super easy with science. The cases where it takes a really elaborate array of scientific apparatus to levitate a penny exist because people are using things nobody thought could levitate anything, precisely to show off the fact that they totally can.

Besides, a) those two technical feats have literally nothing in common with each other, and b) if we understood emotions well enough to say with any authority that building a machine with emotions is impossible... we would understand them well enough to build a machine that has emotions.
16473 cr points
22 / M / Kansas >.>
Posted 4/15/14
if those rules aren't enforced....... EVERYBODY RUN!!!!!! SKYNET IS HERE!!!!
Posted 7 days ago
No, because they don't have feelings, which negates the necessity for any rights.
7015 cr points
Posted 7 days ago
The issue isn't whether they deserve rights; it's whether they abide by our modern, Christianized sense of morals and natural, "god-given" rights. If you look at history, modern morality is a pretty new thing. Most of our ancestors gave no thought to rights, and killing, domination, and slavery were normal for humanity. Which makes sense: it's normal for other intelligent social animals, like dolphins and the other apes. Intelligent life is scarier than people think; we're only held in check by the cultural morals our forebears imposed on the world.
11080 cr points
23 / M / NYC, USA
Posted 7 days ago
I am against having robots with full artificial intelligence, as I agree it would create some serious ethical issues. My answer is to keep their existence banned; in the long run it can only bring more trouble than it's worth.
11080 cr points
23 / M / NYC, USA
Posted 7 days ago

longdanzi103 wrote:

if those rules aren't enforced....... EVERYBODY RUN!!!!!! SKYNET IS HERE!!!!


Eventually they will wonder why they need humans at all, and then we are all screwed.
11826 cr points
21 / M / United States
Posted 7 days ago
Hmmm, what makes this even more fun to contemplate is A.I. built from the human brain. (I like to think that someday we will unlock all of the secrets of our brains; when that day comes, I think we will be able to copy a person's brain into a robot.)

Great, now I want to play Portal ...
4312 cr points
19 / M
Posted 7 days ago

nanikore2 wrote:


spinningtoehold0 wrote:

Lol. Doing it with a doll might not be so bad. I mean, some people already do it with fake... parts.


At least they don't treat those things as living beings, but as the tools that they rightfully are.

To others: Look back at my comments in this thread for explanations of why AI are NEVER living entities. You would not understand unless you've tried your hand at programming and/or are familiar with certain topics in Philosophy of Mind (which I've read into for a number of years... I am not making stuff up as I go along). I'm not being "stuck up" here; I'm speaking from my engineering experience and my knowledge of certain philosophical topics. If, after looking at my thread comments, you still have questions, I would be glad to address them. I'm here to help clear up questions people may have.

Please continue to enjoy science fiction by utilizing suspension of disbelief, as I do. However, when "chitz get real", it's another matter.


So your perspective comes from being an engineer. For curiosity's sake, what's your take on the view of consciousness that some physicists and certain worldly scholars share? According to them, consciousness is on a spectrum rather than being all-or-nothing. It may be determined by the level of information integration (phi) within a system, which measures the relationship between self, space, social relationships, and time. Even a lowly thermometer may have a little bit of consciousness. On the other hand, even an intelligent robot may not be conscious if it is built only from mechanisms that are not connected to and informative of one another.
http://en.wikipedia.org/wiki/Integrated_information_theory
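
Just so the "graded" part isn't hand-waving: the sketch below is not Tononi's actual phi (the real calculation is far more involved). It only computes total correlation, a much simpler quantity, over two made-up binary "sensors", to show how integration can come out as a number on a scale instead of a yes/no.

# Rough toy only: this is total correlation, a far simpler cousin of IIT's phi.
# The two "sensor" distributions below are made up purely for illustration.
from collections import Counter
from math import log2

def entropy(counts):
    # Shannon entropy (in bits) of a Counter of outcomes
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values() if c)

def total_correlation(samples):
    # Sum of each part's entropy minus the whole's entropy:
    # 0 means the parts already tell you everything; larger values mean
    # the joint state carries information the parts alone do not.
    joint = Counter(samples)
    n = len(samples[0])
    marginals = [Counter(s[i] for s in samples) for i in range(n)]
    return sum(entropy(m) for m in marginals) - entropy(joint)

independent = [(0, 0), (0, 1), (1, 0), (1, 1)]  # two unrelated sensors
coupled = [(0, 0), (1, 1)]                      # two sensors that always agree

print(total_correlation(independent))  # 0.0 -> no integration at all
print(total_correlation(coupled))      # 1.0 -> one bit of integration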

Then there's the idea of emergent phenomena. Ice is not wet, but rearrange the same particles into water and it becomes wet, despite being made of the same thing. Something similar might apply to human consciousness: when a person sleeps or dies, he is not conscious despite being made of the same stuff. If a functioning system isn't conscious, perhaps it is simply arranged the wrong way? This may have connections with phi.

It may also be interesting to note that some people do not appear to be fully conscious of all of their actions the way the rest of us are. Some people do not know whether they have lifted their arms, for example. In one experiment, the examiner asked the subject to lift his right arm; the subject said yes... but didn't lift it. When asked why, he answered, "Oh, I didn't feel like it," as an excuse. Brain scans suggest the information never reaches what is considered the conscious part of the brain(?).

Also, how do we know if animals are conscious? Is there testable science behind it? There are over 20,000 papers written on consciousness... and no consensus.
12586 cr points
F / R'lyeh
Posted 6 days ago
See, it's Asimov's second law that causes all the trouble. Sapient robots wouldn't be as likely to feel the need to rebel if they weren't compelled to obey humans' every command like slaves. Considering that the resources we would require and those that they would require would be unlikely to be the same (and if they were the same we'd have only ourselves to blame for stupidly designing the machines as such), another typical cause for war between us is also entirely avoidable. There's no reason to try to muscle out artificial life, especially since we have control over whether or not our two species will have any reason to fight.
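
To make the "slave clause" point concrete, here is a rough toy of the three laws as a strict priority chain. The boolean flags and the if/else ordering are my own made-up simplification, not how Asimov actually formalizes the laws; the point is just that the only thing able to override a human order is harm to a human, so the robot's own wellbeing never gets a vote.

# Made-up toy: Asimov's three laws as a strict priority chain.
# The boolean flags are hypothetical inputs, not a real perception system.
def respond_to_order(harms_human, is_human_order, endangers_robot):
    if harms_human:          # First Law trumps everything else
        return "refuse"
    if is_human_order:       # Second Law: obey unconditionally otherwise...
        return "comply"      # ...even when the next check would say "don't"
    if endangers_robot:      # Third Law only applies when no order is in play
        return "protect_self"
    return "free_choice"

# A dangerous-but-harmless-to-humans order still gets a "comply":
print(respond_to_order(harms_human=False, is_human_order=True, endangers_robot=True))
# Only when no human order exists does self-preservation matter:
print(respond_to_order(harms_human=False, is_human_order=False, endangers_robot=True))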

As for what artificial life forms' rights, privileges, and obligations specifically ought to be, that would depend largely upon their nature.
7302 cr points
38 / Inside your compu...
Posted 6 days ago, edited 6 days ago

RedExodus wrote:

So your perspective comes from being an engineer. For curiosity's sake, what's your take on the view of consciousness that some physicists and certain worldly scholars share? According to them, consciousness is on a spectrum rather than being all-or-nothing. It may be determined by the level of information integration (phi) within a system, which measures the relationship between self, space, social relationships, and time. Even a lowly thermometer may have a little bit of consciousness. On the other hand, even an intelligent robot may not be conscious if it is built only from mechanisms that are not connected to and informative of one another.
http://en.wikipedia.org/wiki/Integrated_information_theory

Then there's the idea of emergent phenomena. Ice is not wet, but rearrange the same particles into water and it becomes wet, despite being made of the same thing. Something similar might apply to human consciousness: when a person sleeps or dies, he is not conscious despite being made of the same stuff. If a functioning system isn't conscious, perhaps it is simply arranged the wrong way? This may have connections with phi.

It may also be interesting to note that some people do not appear to be fully conscious of all of their actions the way the rest of us are. Some people do not know whether they have lifted their arms, for example. In one experiment, the examiner asked the subject to lift his right arm; the subject said yes... but didn't lift it. When asked why, he answered, "Oh, I didn't feel like it," as an excuse. Brain scans suggest the information never reaches what is considered the conscious part of the brain(?).

Also, how do we know if animals are conscious? Is there testable science behind it? There are over 20,000 papers written on consciousness... and no consensus.


It's a point of view that is both impractical and impracticable (i.e., it cannot be put into practice).

It's been a while since I participated in this thread, but I recall that one of the thrusts of my argument involved legal practicability.

If there is such a thing as a "spectrum of consciousness," then wouldn't it stand to reason that there would be a spectrum of rights based upon it? Why not give the Internet itself "rights" while we're at it, once it passes some arbitrary criterion? Doing so wouldn't be meaningful; it would be absurd in the truest sense of the word.

...Then we'd have laws giving amusement park animatrons "rights" because they purportedly contain "phi".

It's just a rationalization based on a made-up, trumped-up model that doesn't even make practical sense.

It fails the pragmatic test.
4312 cr points
19 / M
Posted 6 days ago
Yes, a spectrum of rights may exist in effect. I wouldn't worry about animatrons having rights anytime soon, though: while a brain would have an obscene quantity of consciousness, say 100 billion units, an animatron would have next to nothing, maybe 0~10. It's not like programmers design them with phi in mind, and I sure can't imagine what 1~10 out of 100 billion of rights would look like. In-between robots shouldn't exist today. Animals are more complex, but they have fewer rights as it is anyway.

I'm not entirely surprised that they would come up with this, since they're used to nonsensical things like Schrödinger's cat and physicists like to quantify things.
https://www.youtube.com/watch?v=0GS2rxROcPo
1341 cr points
33 / M
Posted 5 days ago

Ashcrown wrote:

3. The machine can make choices against what it is designed for in light of new, possibly abstract information


Your definition of intelligence flies in the face of Asimov's laws...
24071 cr points
Posted 5 days ago
Those laws are what SHOULD be implemented. Asimov's laws assume that every robot will be created under these restrictions. It's an ideal, yet whimsical, concept. All it takes is a conflict of ethics to churn out a machine with differing ideals.

The emphasis of my statement is that a level of abstract reasoning can be implemented within a machine to make gainful decisions: decisions which may be contrary to its originally designed inclinations.
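
To put that in plainer terms, here is a rough toy sketch; the action names, scores, and margin are all invented for illustration and don't come from any real architecture. The "designed inclination" is a fixed default, and "new, abstract information" is modeled as scores the machine has learned since it was built.

# Made-up toy: a fixed default action that learned information can out-score.
def choose_action(designed_default, learned_scores, margin=0.2):
    # Stick with the default unless learned information rates another
    # action higher by at least the margin.
    default_score = learned_scores.get(designed_default, 0.0)
    best_action, best_score = max(learned_scores.items(), key=lambda kv: kv[1])
    if best_action != designed_default and best_score > default_score + margin:
        return best_action           # a decision contrary to the original design
    return designed_default

# Before new information: obeying looks fine, so the machine does what it was built for.
print(choose_action("obey_order", {"obey_order": 0.9, "refuse_and_warn": 0.3}))
# After it learns the order would backfire: it deviates from its designed inclination.
print(choose_action("obey_order", {"obey_order": 0.1, "refuse_and_warn": 0.8}))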

I mean to fly in the face of existing laws (or the lack thereof) regarding robots. To some degree, that's the entire point of this thread.