Free Will: Animals, Humans, and Self-Modifying AI
5545 cr points
23 / M
Posted 11/10/17 , edited 11/10/17
Animals generally don't have executive functions or theory of mind as advanced as ours. Most animals can't think "I have a bad habit," let alone exercise the executive control to address it intentionally. But they still make decisions based on a limited understanding of the possible results (including immediate results, like the action itself being pleasant). They still do things for meaningful reasons; their decisions just aren't meta.

A self-modifying AI, on the other hand, might be able to look at any part of its mind. It could locate the problem and directly remove it, with what would feel to it like just a thought. Its will would have very little blocking it.
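To make that idea concrete, here is a toy sketch (purely hypothetical; the habit names, scores, and threshold are invented for the example, and it isn't a claim about any real AI design) of a program that inspects its own behaviour table and deletes an entry it judges harmful in a single step, the kind of direct self-edit a human can't perform:

# Toy illustration in Python: an agent that inspects and edits its own "habits".
# All names and values here are made up for the example.
habits = {
    "check_results": 2,            # a useful behaviour, kept
    "repeat_failed_action": -3,    # the "bad habit" the agent wants gone
}

def self_modify(habits, threshold=0):
    # Remove any habit the agent itself scores below the threshold.
    for name, score in list(habits.items()):
        if score < threshold:
            del habits[name]       # the "just a thought" edit described above
    return habits

self_modify(habits)
print(habits)   # {'check_results': 2}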

I don't see determinism as a problem. The real problem is not doing or trying what you want, and not being able to reshape what you want, whether because of randomness or anything else.
37920 cr points
Posted 11/10/17 , edited 11/11/17
It's a little early to be speculating about AI, which doesn't exist and may never exist as you describe, or about how humans and animals make decisions and perceive consciousness, which science knows very little about.

Recent research has suggested that our "consciousness" is theoretically rooted in our entire physiology, not just the brain. So it's unlikely a free-floating AI would be anything like us. Again, that's assuming such a thing can even be created, which is a moonshot idea.
7941 cr points
31 / M / Modesto, CA
Posted 11/10/17 , edited 11/11/17
No such thing as truly autonomous free will.
2044 cr points
19 / M / Valhalla
Posted 11/10/17 , edited 11/11/17
I thought this was a fred thread for a second.
206 cr points
28 / M
Posted 11/11/17 , edited 11/11/17

Jean104 wrote:
A self-modifying AI, on the other hand, might be able to look at any part of its mind. It could locate the problem and directly remove it, with what would feel to it like just a thought. Its will would have very little blocking it.


What you are talking about is a bit problematic, because the language we use to talk about human minds does not transfer to machines or to parts of humans (such as the brain). You should look into the philosophy of language for this; Peter Hacker had quite a debate about a similar topic with Dennett and Searle a couple of years ago.
To put it briefly, we don't really know what it would mean for an AI to have a mind, to have a self, to have a problem, or to have a thought. Likewise, we don't know what it would mean for an AI to have a will, or for that will to be free.
It is not that it is wrong to say 'an AI thought x'; such a sentence can be neither true nor false. It simply doesn't make sense, similar to saying that a stick is sick. It can be neither right nor wrong that a stick is sick, because that sentence doesn't make sense at all.
So the answer would be that machines or AI cannot have free will, because they cannot have a will at all: saying that a machine or an AI has a will is not a meaningful sentence.

This is not to say that you can't define what it would mean for an AI to have those attitudes or properties. You can say that if an AI, or something non-human in general, does x, that means it has y. However, it seems impossible to say in what respect those x and y would correspond to anything in humans, or whether the words you use would still have the same meaning.
5545 cr points
23 / M
Posted 11/11/17 , edited 11/11/17
While I can see a strong AI working on principles so different that these words don't really apply, I don't see any reason why they couldn't apply to an AI that was made to be that way, just as we can say a tree is sick. On the other hand, you could claim not only that these words don't apply to chimps, but that they don't even apply to other humans.

At any rate, what are your thoughts on there being degrees of free will?
5545 cr points
23 / M
Posted 11/11/17 , edited 11/11/17

nonspecificscientific wrote:

No such thing as truly autonomous free will.


Why?
206 cr points
28 / M
Posted 11/11/17 , edited 11/11/17


You can say that these words do (and actually have to) apply to other humans, because that is why we have those words. Whether you can know if they apply to somebody you see on the street is a different question. The whole debate is part of Wittgenstein's philosophy of the private language argument and the language game. It is quite difficult to wrap your head around, because you have to question what language use is, something we generally just do without question. In any case, it should be taken into account that we do not really have a clear answer to the question of what it would be for an AI to think, reason, have a will, etc.

About the other things you have said: many believe, as you said, that animals cannot have metacognition (thinking about their thinking, at least declaratively). Bermúdez made a long argument about it in "Thinking Without Words", if I am not mistaken. How well animals can imagine the future is also quite difficult to say.
More specifically, on the "degrees of free will": there is so much written about free will that it is ridiculous; it is one of the REALLY BIG topics in philosophy. Before I give you my own thoughts, which might be helpful, here is a pretty good summary:
https://plato.stanford.edu/entries/freewill/
The website is accepted as a very good standard among current philosophers. If you are really interested in the topic, have a look at compatibilism and incompatibilism as well.

First, in my opinion you should distinguish between freedom of will and freedom of action. That alone will give you more than two possibilities and comes close to your "degrees" of freedom.
Second, many, many people try to define things in a yes/no manner, i.e. either you have it or you don't. I think some things should instead be defined according to "prototype theory", with degrees. Obviously in some cases you will be more free than in others: if I am drunk and asleep, I would say I am not as free as when I am sober, awake, and concentrated, e.g. I can willingly control when I pee, which I might not be able to do while drunk and asleep. As a 3-year-old I will probably not be as free as I will be as a 30-year-old.
Now of course you can introduce some fixed aspect, characteristic, or condition in order to define when someone has free will and when he does not. If you do this, then it will probably be a yes/no question, i.e. either a 3-year-old will have free will or he will not, and there won't be such a thing as degrees.
In the end I think it will boil down to whether you can find, or will introduce, fixed conditions which have to be met fully in order to say that someone does or does not have free will. If you allow looser conditions, you will have your degrees; if you insist on strict ones, you will probably have to say yes or no.
7941 cr points
31 / M / Modesto, CA
Posted 11/11/17 , edited 11/11/17

Jean104 wrote:


nonspecificscientific wrote:

No such thing as truly autonomous free will.


Why?


Because my worldview makes the most sense that way.

EDIT: I believe this because man can only do that which is according to his nature. You can "will" yourself to fly, yet you're not suddenly going to sprout wings and fly away. This proves that man's will is not sovereign.
5545 cr points
23 / M
Posted 11/11/17 , edited 11/11/17
Here's what I mean about it possibly not applying to other humans. We are all doing something outwardly similar enough for the word to have some usefulness, but we might all have something slightly different actually in mind. Certainly there are some physical differences, especially with neurologically atypical people. This close-enough bubble would presumably include at least some hypothetical AIs.

As for animal metacognition, I'm not aware of any clear examples of animals acting that way. But it would catch my interest if that did turn out to be the case for some.

I definitely fall on the compatibilism side. There are a few places for indeterminism to come in, and I don't see how they help. If your original state was undetermined, to me this doesn't help, because you still didn't choose it.

If you knock over a vase because someone bumped into you, the only advice that could be given is to watch out for clumsy people. Whether people should focus more on not being the pusher or the tripper, and which we should focus on discouraging, is a value judgement. Most people would lean towards telling the pusher to watch out, and in any case wouldn't assign as much blame as they would to someone who did it on purpose, which is a deeper criticism.

Mental illness is a defense for some things, because the person might want to do better but have no mechanism for doing so. It's not that they weigh your criticism low, or don't even count you as a valid critic; the functions in their brain that would fix it just don't work. It's like criticizing someone with broken legs for not running.

Meanwhile, someone basically born evil could act better if they wanted to, but they don't. At some point it's not useful to treat the problem as a fault attached to some otherwise okay person. If you would basically have to rearrange their parts into someone else, they're really, really at fault.
