Artificial consciousness is impossible
Posted 12/11/16 , edited 1/10/17
Hello! New poster to the thread here! I read through all 16 pages of this debate because I'm a student of Computer Science & Computational Neuroscience as well as a sucker for anything to do with science fiction. I won't attempt to provide a counter-argument to the OP's logical assertions about the plausibility (or possibility) of artificial consciousness but instead wish to ask some questions about the assumptions of their statements as well as satisfy my curiosity about their point of view. Forgive my ignorance if these questions have already been asked/proven/refuted in this or another thread related to the debate:

1. Does the OP assume that consciousness is an emergent (i.e. a complex, unique system derived from the interactions of behaviorally simpler components) property of the brain and/or the peripheral nervous system, and of how said systems interact with an external environment?

2a. If so, what argument may be given that biological neurons are uniquely qualified for carrying out the function of learning that is hypothesized to underlie higher-level brain behavior (e.g. consciousness, cognition, dreaming, etc.)?

2b. If not, what definition do they give for consciousness that doesn't rely on (1) the existence of an ethereal soul or (2) observation of both intention and qualia by an observer external to the assumedly conscious individual?

3. Would the OP agree that, regardless of whether artificial consciousness is indeed possible, endeavoring to comprehend the underpinnings of consciousness and human intelligence through the scientific method is a worthwhile effort?

4. Does the OP agree that computer models derived from theories about our current level of understanding of the human brain (e.g. artificial neural networks, support-vector machines) can have utility in further investigations of its unsolved mysteries?

*Unsubstantiated Opinion Time!* I think that there may be a degree of faith involved in whichever side of this debate one falls on. If one is opposed to the idea that consciousness can be properly implemented with a man-made device and supports such an argument with logical reasoning, I think that we need to fully understand the assumptions behind that reasoning before evaluating its validity. In this case, those assumptions would involve the inherent nature of consciousness and any logical statements about it. That debate could probably go on ad infinitum and may very well require the intercession of a hyper-intelligent being (or a forum moderator) to conclude. On the flip side, proponents of the argument that computers can and probably will achieve consciousness one day likewise face difficulties about the nature of symbols and the basic maths underlying their theoretical models of learning and intelligence. I am not a mathematician, but I see a degree of arbitrariness in number theory, built on convenient assumptions made by various investigators when deriving proofs. Has the OP ever read "Gödel, Escher, Bach" by Hofstadter? It's a bit dated, but some of the assertions it makes about symbolic math and isomorphisms between abstract and intelligent systems are still relevant.

I'm not trying to convince the OP one way or another, but I wanted to provide some perspective on my own view: that philosophical questions regarding the nature of consciousness and intelligence are good in theory but don't have too much practical utility in an infinite universe. Granted, we can ruminate about it until the heat death of said universe, but why bother when there are potential applications for work in embodied cognition/intelligent systems everywhere?

I think it was Francis Crick who was a huge public skeptic of perceptrons and Rosenblatt neural nets when they first made a splash in the '50s and '60s. He said something like: we cannot assume that perceptrons are capable of anything close to resembling human intelligence, because there's no way to scientifically prove that biological neural systems behave exactly like those models predict. He was correct, in a way - the initial hype about how perceptrons could potentially learn anything that humans could died off, and neural net popularity waned as more logic-based artificial intelligence gained prominence. Only relatively recently has there been a kind of Renaissance for neural nets, due to increasing computational power and accessibility. Also only relatively recently have we started using computers for in-depth analysis of neurophysiological data.

There are TONS of ways we can smash computers and brains together to see what happens, some of which might actually help people live better lives. Related to consciousness, we could try to use computer models to simulate grand mal seizures or investigate how involved the claustrum actually is in the persistence of conscious perception. I believe that that is something worth doing, rather than getting caught up in endless discourse. Please do not mistake my personal opinions for an assertion of moral (or whatever) superiority - they're just opinions after all.
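As a side note, the perceptron learning rule mentioned above is simple enough to sketch in a few lines. This is a generic illustration only - the data (the logical AND function), the learning rate, and the epoch count are arbitrary choices, not a reconstruction of any historical experiment:

```python
# Minimal Rosenblatt-style perceptron learning the logical AND function.
# Everything here (data, learning rate, epoch count) is an arbitrary
# illustration, not a reconstruction of any historical experiment.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias (threshold offset)
    for _ in range(epochs):
        for x, target in samples:
            # Threshold unit: fire (1) if the weighted sum exceeds zero.
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # Perceptron rule: nudge weights toward the correct answer.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in AND])
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this rule eventually classifies all four cases correctly; the famous limitation is that no single such unit can learn XOR.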
Posted 12/11/16
Two of the arguments for this that I've heard of are the Chinese room and the Chinese brain thought experiments.

The Chinese brain argument says that if every Chinese person acted as a neuron, the resulting system would clearly not have experiences, in contradiction to materialism. But the claim that the Chinese brain lacks experiences is unsupported intuition.

The Chinese room has a convincing program that takes in Chinese and gives sensible Chinese replies. This program is converted into an English manual that is followed by a human using pencil and paper. It is then argued that since the man understands neither the inputs nor the outputs, neither would a computer.

However, strong A.I. proponents wouldn't claim that the computer, considered separately from the program, understands it either. If you ran a truly conscious A.I. on the internet, that wouldn't make the internet conscious. If you had two conscious programs and had the Chinese A.I. simply emulated by the English A.I., the English A.I. would just be parenting the Chinese one.
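For anyone who hasn't seen the thought experiment: the Chinese room's "program" can be pictured as nothing more than a rule table followed mechanically. The rules below are invented placeholders (with romanized tokens standing in for actual Chinese text), just to make the symbol-shuffling concrete:

```python
# Toy "Chinese room": the entire "program" is a rule table that is
# followed mechanically. Whoever (or whatever) executes lookup() only
# matches symbols to symbols; no understanding appears anywhere in it.
# The rules are invented placeholders; romanized tokens stand in for
# actual Chinese text.

RULE_BOOK = {
    "ni hao": "ni hao, ni hao ma?",          # a greeting maps to a greeting
    "wo hen hao": "hen gao xing ting dao.",  # "I'm fine" maps to "glad to hear it"
}

def lookup(symbols: str) -> str:
    # Pure symbol manipulation: return whatever the table dictates.
    return RULE_BOOK.get(symbols, "dui bu qi, wo bu dong.")  # canned fallback

print(lookup("ni hao"))
```

Searle's point is that executing `lookup` by hand conveys no understanding of the tokens; the reply above is that understanding, if any, would be a property of the whole system, not of the executor.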
Posted 12/11/16 , edited 12/11/16

BlancheJacquestopus wrote:


nanikore2 wrote:


Let's handle this one thing at a time. Programming is the insertion of coded instruction into a system after its physical introduction into the world. Our intelligence is innate and exists prior to birth. Machines are dependent on low level code even for the allocation of memory, even before any sort of data manipulation. That is, they need instructions on even how to store a piece of data and have no innate ability to do so, much less any innate ability to categorize any data. Such is the difference between innate intelligence and artificial intelligence.

Your second question is laden with inbuilt assumptions. First, that an exhaustive functional account can exist. It can't, because of underdetermination. Second, it ignores group functions, which may remain invisible under individual examination and, conversely, under individual design; group functions are also underdetermined. All functional arguments thus fail. https://plato.stanford.edu/entries/scientific-underdetermination/


Does the fact that the brain is pre-equipped to store information and such matter? That's still coded by the DNA that led the brain to develop that way isn't it? At the end of the day they're both coming from some set of instructions or another. The only difference I see innateness creating is that we don't have to make our own.

I don't have enough of a scientific background to argue whether we really can't figure out enough about the brain to selectively control consciousness, so I guess I'll have to leave that alone. I think we may have a greater capacity to understand it than you think even just by continuing what we're doing now, but oh well, perhaps that's a little unrealistic. What I'm pretty sure of though is that neural function lends itself pretty well to programming because individual neurons have a pretty limited number of jobs and that, assuming we magically knew exactly how to make a brain, we could make a virtual one.

So here's a hypothetical: let's say we develop the not too unrealistic technology to scan a brain and encode the function, patterns, structure, etc. of each and every neuron. That way, without actually having to know where and how consciousness arises, we could still make an exact virtual replica of someone's brain as a huge neural network and run it. We could outfit it with a means of communication and sensory input too so it can be observed. Is there any reason to think that wouldn't be conscious?


Of course it matters. DNA doesn't control development in the way you think it does. Fruit flies are dramatically different from humans not in their number of genes, but in the number of protein interactions in their bodies. Your supposition is out-of-category.

https://www.sciencedaily.com/releases/2008/05/080512172904.htm

Paragraphs two and three still appeal to the same functional argument that you made earlier. I don't see what's different about their suppositions (which also come with a lot of hand-waving, but I'm just going to let those go) that allows them to escape underdetermination.
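To make the point of disagreement concrete: the "scan a brain and run the copy" hypothetical quoted above amounts to something like the following caricature, where the scanned "connectome" is just a table of thresholds and connections, and running the copy is mechanical propagation over that table with no theory of what it computes. The three units and their thresholds are invented toy data, vastly simpler than real neurons - which is exactly where the underdetermination objection bites:

```python
# Caricature of the "scan a brain, run the copy" hypothetical: a scanned
# "connectome" is just a table of thresholds and connections, and running
# the copy is mechanical propagation over that table. The units and
# thresholds below are invented toy data, vastly simpler than real neurons.

SCANNED = {  # unit -> (firing threshold, downstream units)
    "A": (1, ["B", "C"]),
    "B": (1, ["C"]),
    "C": (2, []),
}

def step(active):
    """Advance the copied network one tick, with no theory of what it computes."""
    inputs = {}
    for unit in active:
        for target in SCANNED[unit][1]:
            inputs[target] = inputs.get(target, 0) + 1
    # Fire every unit whose input count reaches its scanned threshold.
    return {u for u, n in inputs.items() if n >= SCANNED[u][0]}

print(step({"A"}))       # A alone drives B but not C
print(step({"A", "B"}))  # A and B together reach C's threshold
```

The whole dispute is over whether such a table could ever be complete: the copy only behaves like the original if everything causally relevant made it into `SCANNED`, which is precisely what the underdetermination argument denies.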
Posted 12/11/16

Mister_Vulcan wrote:

Hello! New poster to the thread here! I read through all 16 pages of this debate because I'm a student of Computer Science & Computational Neuroscience as well as a sucker for anything to do with science fiction. I won't attempt to provide a counter-argument to the OP's logical assertions about the plausibility (or possibility) of artificial consciousness but instead wish to ask some questions about the assumptions of their statements as well as satisfy my curiosity about their point of view. Forgive my ignorance if these questions have already been asked/proven/refuted in this or another thread related to the debate:

1. Does the OP assume that consciousness is an emergent (i.e. a complex, unique system derived from the interactions of behaviorally simpler components) property of the brain and/or the peripheral nervous system, and of how said systems interact with an external environment?

2a. If so, what argument may be given that biological neurons are uniquely qualified for carrying out the function of learning that is hypothesized to underlie higher-level brain behavior (e.g. consciousness, cognition, dreaming, etc.)?

2b. If not, what definition do they give for consciousness that doesn't rely on (1) the existence of an ethereal soul or (2) observation of both intention and qualia by an observer external to the assumedly conscious individual?

3. Would the OP agree that, regardless of whether artificial consciousness is indeed possible, endeavoring to comprehend the underpinnings of consciousness and human intelligence through the scientific method is a worthwhile effort?

4. Does the OP agree that computer models derived from theories about our current level of understanding of the human brain (e.g. artificial neural networks, support-vector machines) can have utility in further investigations of its unsolved mysteries?



Ok, very brief first-go answer before I'm off to lunch, will handle this more in depth later if you're still interested.

1. I don't. I'm not sure what it is. Is it an "afterimage", or something else like a shadow on a wall? I don't know. What I do know is that I experience it fully. I don't think epiphenomenalism is necessarily true, and that has consequences and produces more questions than answers.

2b. (2) It could only be defined with qualia and intentionality, because consciousness is locked to internal observation. There is no external access. You can't know precisely "what it is like" for me to experience X any more than I can know anyone else's experience of X.

3. Of course.

4. Hm. Well, the models are highly contrived. It seems to me that the general exercise is to fling all sorts of models at the wall, test them all, and see what sticks (as in which matches "actual" behaviors best). Seriously, this top-down approach isn't going to have as much luck as perhaps clean-slate bottom-up ones would. To me, imitation and emulation are not the way to produce some kind of artificial consciousness, if it is possible... Which it isn't.

As far as practical considerations of this entire affair, there is one: legal and moral. Do we allow p-zombies to marry human beings? Do we give rights to something with no consciousness (i.e. something with even less consciousness than animals, which do have rights precisely because they are conscious)?
Posted 12/11/16

nanikore2 wrote:


Of course it matters. DNA doesn't control development in the way you think it does. Fruit flies are dramatically different from humans not in their number of genes, but in the number of protein interactions in their bodies. Your supposition is out-of-category.

https://www.sciencedaily.com/releases/2008/05/080512172904.htm

Paragraphs two and three still appeal to the same functional argument that you made earlier. I don't see what's different about their suppositions (which also come with a lot of hand-waving, but I'm just going to let those go) that allows them to escape underdetermination.


But are these protein interactions any different either? Considering that they add up to the same features in humans nearly 100% of the time (barring mutations I guess), they seem to be pretty systematic and reliable. What makes them so inherently different from the program that tells a machine to store data?

And, if I'm understanding underdeterminism right, it means that we can't figure out what's necessary for consciousness because there are so many unobservable things going on in the brain that we can't work backwards and find the specific neurons or combination. If so, the hypothetical would ostensibly get around that. If we could literally just copy a brain, whatever's necessary would already be there without having to isolate it. It's essentially the same as making an exact replica of a bike. Even if the creator didn't know how a bike works, it would still ride.

To use the fruit example, you gave up trying to figure out how many apples and oranges were bought; you just went to the store and said "I'll have what he's having".

Also, this is admittedly an uneducated opinion, but I still question whether underdeterminism even applies. What's the evidence that we can never localize consciousness or figure out the brain's methods? It's slow going, but we've at least managed the former with a few things like language. And every breakthrough in technology for studying the brain has gotten us a clearer picture. It seems to be jumping the gun to say that the evidence needed to reverse engineer the brain doesn't exist. The evidence for a lot of what the brain does didn't exist until fMRIs and such.
Posted 12/11/16

BlancheJacquestopus wrote:


nanikore2 wrote:


Of course it matters. DNA doesn't control development in the way you think it does. Fruit flies are dramatically different from humans not in their number of genes, but in the number of protein interactions in their bodies. Your supposition is out-of-category.

https://www.sciencedaily.com/releases/2008/05/080512172904.htm

Paragraphs two and three still appeal to the same functional argument that you made earlier. I don't see what's different about their suppositions (which also come with a lot of hand-waving, but I'm just going to let those go) that allows them to escape underdetermination.


But are these protein interactions any different either? Considering that they add up to the same features in humans nearly 100% of the time (barring mutations I guess), they seem to be pretty systematic and reliable. What makes them so inherently different from the program that tells a machine to store data?

And, if I'm understanding underdeterminism right, it means that we can't figure out what's necessary for consciousness because there are so many unobservable things going on in the brain that we can't work backwards and find the specific neurons or combination. If so, the hypothetical would ostensibly get around that. If we could literally just copy a brain, whatever's necessary would already be there without having to isolate it. It's essentially the same as making an exact replica of a bike. Even if the creator didn't know how a bike works, it would still ride.

To use the fruit example, you gave up trying to figure out how many apples and oranges were bought; you just went to the store and said "I'll have what he's having".

Also, this is admittedly an uneducated opinion, but I still question whether underdeterminism even applies. What's the evidence that we can never localize consciousness or figure out the brain's methods? It's slow going, but we've at least managed the former with a few things like language. And every breakthrough in technology for studying the brain has gotten us a clearer picture. It seems to be jumping the gun to say that the evidence needed to reverse engineer the brain doesn't exist. The evidence for a lot of what the brain does didn't exist until fMRIs and such.


"Considering that they add up to the same features in humans nearly 100% of the time (barring mutations I guess)"

Please back up this assertion.

"If we could literally just copy a brain"

That's a huge hand-wave. How do you copy something that isn't entirely observable?

"I'll have what he's having"

You don't know "what he's having". More hand-waving.
Posted 12/11/16 , edited 12/11/16

nanikore2 wrote:

"Considering that they add up to the same features in humans nearly 100% of the time (barring mutations I guess)"

Please back up this assertion.

"If we could literally just copy a brain"

That's a huge hand-wave. How do you copy something that isn't entirely observable?

"I'll have what he's having"

You don't know "what he's having". More hand-waving.


1. Unless I drastically misunderstand what you're saying, a combination of DNA and those protein interactions is responsible for humans developing the way that they do, right? So if I have to point to evidence for that, how about identical twins? The same genetic structure creates a mostly identical person on a physical level. I'm guessing I just worded it badly ('same' may not have been the best word choice), but I'm fairly sure this is pretty widely accepted.

The point is that the brain, and by extension, intelligence, don't come out of nowhere. They're only created from instructions from DNA, protein interactions, and whatever else we do or don't know about right now. The fact that those instructions are innate (because part of those instructions says to pass a set on to offspring) doesn't change that the concept is the same as programming. Besides, from a computer's perspective its own programming is innate too.

2. Neurons are observable. How the things they do end up causing thought and consciousness isn't, but neurons themselves are physical things that can be individually studied even now with the right equipment. The technology I'm proposing just does that for every neuron, converts it into an identical virtual copy with the same functions and connections, and combines them into a neural network that works exactly the same as the brain that was scanned.

3. Yes, that's why it's phrased that way. The one who sold them would ostensibly know, whether you do or not. The entire point of the metaphor is that you don't have to know. Use the bike analogy if you don't like this one.

...And you seem to have skipped over the last part. "Maybe we'll develop the technology later" may not be a very powerful argument, but you still have to deal with it to prove anything is "impossible". And you'd need to prove that underdeterminism even applies before you use it as evidence.
Posted 12/11/16

BlancheJacquestopus wrote:


nanikore2 wrote:

"Considering that they add up to the same features in humans nearly 100% of the time (barring mutations I guess)"

Please back up this assertion.

"If we could literally just copy a brain"

That's a huge hand-wave. How do you copy something that isn't entirely observable?

"I'll have what he's having"

You don't know "what he's having". More hand-waving.


1. Unless I drastically misunderstand what you're saying, a combination of DNA and those protein interactions is responsible for humans developing the way that they do, right? So if I have to point to evidence for that, how about identical twins? The same genetic structure creates a mostly identical person on a physical level. I'm guessing I just worded it badly ('same' may not have been the best word choice), but I'm fairly sure this is pretty widely accepted.

The point is that the particular combination of genes, DNA, protein interactions, and whatever else we do or don't know about determine how the brain is formed in a consistent and deterministic, if opaque, way. They're essentially just a set of instructions for how to create/run everything, including the brain. Being "innate" doesn't change the fact that this is no different in concept from programming. Besides, from a computer's perspective, its own programming is innate too.

2. Neurons are observable. How the things they do end up causing thought and consciousness isn't, but neurons themselves are physical things that can be individually studied even now with the right equipment. The technology I'm proposing just does that for every neuron, converts it into an identical virtual copy with the same functions and connections, and combines them into a neural network that works exactly the same as the brain that was scanned.

3. Yes, that's why it's phrased that way. The one who sold them would ostensibly know, whether you do or not. The entire point of the metaphor is that you don't have to know. Use the bike analogy if you don't like this one.

...And you seem to have skipped over the last part. "Maybe we'll develop the technology later" may not be a very powerful argument, but you still have to deal with it to prove anything is "impossible". And you'd need to prove that underdeterminism even applies before you use it as evidence.


1. You need to slow down before responding. That's not what the article indicates. Identical twins develop different personalities based not on genes but on gene expression. Gene expression is underdetermined.

2. Observations point to the underdetermination. Scientists have found that repeated stimulation of the same neuron group in a fly's brain leads to non-repeatable results ("random").

3. Oh yes you do. How do you copy a black box? Please stop the hand-waving.

Technological advances do not overcome underdetermination. They also don't overcome logical contradiction.
Posted 12/11/16

nanikore2 wrote:

Ok, very brief first-go answer before I'm off to lunch, will handle this more in depth later if you're still interested.

1. I don't. I'm not sure what it is. Is it an "afterimage", or something else like a shadow on a wall? I don't know. What I do know is that I experience it fully. I don't think epiphenomenalism is necessarily true, and that has consequences and produces more questions than answers.

2b. (2) It could only be defined with qualia and intentionality, because consciousness is locked to internal observation. There is no external access. You can't know precisely "what it is like" for me to experience X any more than I can know anyone else's experience of X.

3. Of course.

4. Hm. Well, the models are highly contrived. It seems to me that the general exercise is to fling all sorts of models at the wall, test them all, and see what sticks (as in which matches "actual" behaviors best). Seriously, this top-down approach isn't going to have as much luck as perhaps clean-slate bottom-up ones would. To me, imitation and emulation are not the way to produce some kind of artificial consciousness, if it is possible... Which it isn't.

As far as practical considerations of this entire affair, there is one: legal and moral. Do we allow p-zombies to marry human beings? Do we give rights to something with no consciousness (i.e. something with even less consciousness than animals, which do have rights precisely because they are conscious)?


Hmm, I guess we'll have to agree to disagree on that one. I try to apply the scientific method whenever possible, and if consciousness is indeed something that an outside observer cannot verify through experimentation, then there is probably no method to scientifically verify or debunk its existence. You can put me down as a functionalist or a believer in epiphenomenalism.

With regard to the moral implications, I think that once enough humans start treating p-zombies as if they were conscious beings (e.g. with affection, altruism, love, hate, etc.) there will be adequate political impetus to begin seriously considering their rights. Heck, military personnel already feel attached to quadrupedal robot carriers like they're bomb-sniffing dogs, and those are waayyy to the left of the Uncanny Valley. Just another example of how fundamentally irrational humans are, I suppose.

Just a note on the arbitrariness of neural nets: yeah, many of them are by no means biologically plausible, but others like spiking neural nets or ones that use point neurons come pretty darn close. The main issue is that the more complicated these neural representations become, the more expensive they are to represent computationally. That imposes limits on the scale of simulations that can be run on von Neumann machines, but that's why researchers are working on neuromorphic architectures. If we ever get to the point where computers have enough power to simulate an 80+ billion-unit neural network in realtime (~0.01 ms resolution), and we can use genetic algorithms to "grow" a brain over thousands or millions of generations with those computers, I think we'll get asymptotically close to consciousness. At that point, it probably won't even matter whether true artificial consciousness is possible or not; humans will feel cognitive dissonance if they treat such artificial entities differently. More than likely, we'll destroy our planet before technology advances enough to accomplish this, however.
This was fun; maybe I'll get back to you in a few decades if anything big happens.
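For readers unfamiliar with the "point neuron" models mentioned above (the building blocks of spiking neural nets), a leaky integrate-and-fire unit can be sketched in a few lines. All of the constants below (time step, membrane time constant, threshold, input current) are illustrative placeholders, not fitted to any real neuron:

```python
# Minimal leaky integrate-and-fire (LIF) point neuron, the kind of
# simplified unit spiking neural nets are built from. All constants
# are illustrative; none are fitted to biological data.

def simulate_lif(current, dt=0.1, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Integrate an input-current trace over time; return spike times (ms)."""
    v = 0.0          # membrane potential, starting at rest
    spikes = []
    for i, i_in in enumerate(current):
        # Euler step: the potential leaks toward rest and integrates input.
        v += dt * (-v / tau + i_in)
        if v >= v_thresh:        # threshold crossing: emit a spike...
            spikes.append(i * dt)
            v = v_reset          # ...and reset the potential
    return spikes

# A constant drive produces a regular spike train.
spike_times = simulate_lif([0.5] * 200)
print(len(spike_times), spike_times[:3])
```

This is the sense in which point neurons are cheap: one state variable per unit, which is what makes billions of them even conceivable on conventional or neuromorphic hardware.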

Posted 12/11/16 , edited 12/11/16

nanikore2 wrote:



1. You need to slow down before responding. That's not what the article indicates. Identical twins develop different personalities based not on genes but on gene expression. Gene expression is underdetermined.

2. Observations point to the underdetermination. Scientists have found that repeated stimulation of the same neuron group in a fly's brain leads to non-repeatable results ("random").

3. Oh yes you do. How do you copy a black box? Please stop the hand-waving.

Technological advances do not overcome underdetermination. They also don't overcome logical contradiction.


1. I had a boring morning today and a couple hours to kill last night; that's why I waded into this discussion in the first place. Anyway, I'm not sure where the article says any of that. Gene expression and protein interactions aren't the same concept as far as I can tell. I'm not sure why it's relevant anyway; the instructions may be malleable to an extent, but there are also programs that can overwrite themselves. And again, you can't immediately call something "underdetermined" just because we don't yet understand it.

2. Do you have any human examples to point to? We have the means and willingness to do it to humans, but nothing I can find says there's any randomness to the results. Fly brains are pretty far removed from ours, so it's hardly a relevant example. Not to mention, the mere fact that electrical stimulation doesn't produce repeatable results doesn't mean it's underdetermined. More than likely there's just another factor that contributes. The fact that zapping alone isn't enough to reveal it isn't proof.

3. You copy a black box with brute force. Refer to the bike again: it's entirely possible to make a functional bike without knowing that the pedals move the chain which moves the wheels. The same is true of a car, a space shuttle, whatever. You just look at each and every piece, make/get your own, and put them together in the same way. We can observe neurons, what makes them fire, where they connect to, etc. Mindlessly copying each and every one and then putting them together in the same way should create consciousness even if we have no idea how.

They don't, no. But lacking the technology to find out certainly makes it easier to mistake something as being underdetermined. Ask someone 2,000 years ago how hot the sun is and (assuming you didn't get a theological answer) he'd probably tell you it can't ever be known. It's not like you could just go there and check, right? As for logical contradiction, that's what I'm still unconvinced of.
Posted 12/11/16 , edited 12/11/16

BlancheJacquestopus wrote:


nanikore2 wrote:



1. You need to slow down before responding. That's not what the article indicates. Identical twins develop different personalities based not on genes but on gene expression. Gene expression is underdetermined.

2. Observations point to the underdetermination. Scientists have found that repeated stimulation of the same neuron group in a fly's brain leads to non-repeatable results ("random").

3. Oh yes you do. How do you copy a black box? Please stop the hand-waving.

Technological advances do not overcome underdetermination. They also don't overcome logical contradiction.


1. I had a boring morning today and a couple hours to kill last night; that's why I waded into this discussion in the first place. Anyway, I'm not sure where the article says any of that. Gene expression and protein interactions aren't the same concept as far as I can tell. I'm not sure why it's relevant anyway; the instructions may be malleable to an extent, but there are also programs that can overwrite themselves. And again, you can't immediately call something "underdetermined" just because we don't yet understand it.

2. Do you have any human examples to point to? We have the means and willingness to do it to humans, but nothing I can find says there's any randomness to the results. Fly brains are pretty far removed from ours, so it's hardly a relevant example. Not to mention, the mere fact that electrical stimulation doesn't produce repeatable results doesn't mean it's underdetermined. More than likely there's just another factor that contributes. The fact that zapping alone isn't enough to reveal it isn't proof.

3. You copy a black box with brute force. Refer to the bike again: it's entirely possible to make a functional bike without knowing that the pedals move the chain which moves the wheels. The same is true of a car, a space shuttle, whatever. You just look at each and every piece, make/get your own, and put them together in the same way. We can observe neurons, what makes them fire, where they connect to, etc. Mindlessly copying each and every one and then putting them together in the same way should create consciousness even if we have no idea how.

They don't, no. But lacking the technology to find out certainly makes it easier to mistake something as being underdetermined. Ask someone 2,000 years ago how hot the sun is and (assuming you didn't get a theological answer) he'd probably tell you it can't ever be known. It's not like you could just go there and check, right? As for logical contradiction, that's what I'm still unconvinced of.


1. Any sufficiently complex system that has to be ascertained with measurement is underdetermined. At this point I doubt you understand the scope of underdetermination. If A correlates with B, then A may cause B, B may cause A, A and B may be caused by a common variable C, or the correlation may be a statistical fluke and not “real”.

2. "Fly brains are pretty far removed from ours" Please back up that assertion.

3. "it's entirely possible to make a functional bike without knowing that the pedals move the chain which moves the wheels." Sigh. Do I need to explain what a black box is at a layman's level? A bike isn't a black box: you know there is a pedal, you know there is a chain, you know there is a wheel. With a black box, you don't know what parts are there. There are underdetermined parts, because all of the parts you see are the results of measurements.
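The correlation point in #1 can be illustrated with a small simulation (the setup here is a hypothetical assumption for illustration): A and B never influence each other, yet a hidden common cause C makes them strongly correlated, so measurements of A and B alone underdetermine the causal story:

```python
import random

random.seed(0)

def sample(n=10_000):
    """Draw paired measurements of A and B, both driven only by hidden C."""
    a_vals, b_vals = [], []
    for _ in range(n):
        c = random.gauss(0, 1)        # hidden common cause, never measured
        a = c + random.gauss(0, 0.3)  # A depends only on C, not on B
        b = c + random.gauss(0, 0.3)  # B depends only on C, not on A
        a_vals.append(a)
        b_vals.append(b)
    return a_vals, b_vals

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

a, b = sample()
print(round(correlation(a, b), 2))  # strong correlation despite no A<->B link
```

An observer who only measures A and B sees a strong correlation and cannot tell, from that data alone, which of the four causal stories is true.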
32714 cr points
39 / Inside your compu...
Posted 12/11/16 , edited 12/11/16

Mister_Vulcan wrote:


nanikore2 wrote:

Ok, very brief first-go answer before I'm off to lunch, will handle this more in depth later if you're still interested.

1. I don't. I'm not sure what it is. Is it an "afterimage," or something else like a shadow on a wall? I don't know. What I do know is that I experience it fully. I don't think epiphenomenalism is necessarily true, and that has consequences and produces more questions than answers.

2b. (2) It could only be defined with qualia and intentionality because consciousness is locked to internal observation. There is no external access. You can't know precisely "what it is like" for me to experience X any more than I do of anyone else's experience of X.

3. Of course.

4. Hm. Well, the models are highly contrived. It seems to me that the general exercise is to fling all sorts of models at the wall, test them all, and see what sticks (as in which matches "actual" behaviors best). Seriously, this top-down approach isn't going to have as much luck as perhaps a clean-slate bottom-up one. To me, imitation and emulation are not the way to produce some kind of artificial consciousness, if it is possible... Which it isn't.

As far as practical considerations of this entire affair go, there is one: legal and moral. Do we allow p-zombies to marry human beings? Do we give rights to something with no consciousness (i.e. something with even less consciousness than animals, which do have rights precisely because they are conscious)?


Hmm, I guess we'll have to agree to disagree on that one. I try to apply the scientific method whenever possible, and if consciousness is indeed something that an outside observer cannot verify through experimentation, then there is probably no method to scientifically verify or debunk its existence. You can put me down as a functionalist or a believer in epiphenomenalism.

With regard to the moral implications, I think that once enough humans start treating p-zombies as if they were conscious beings (e.g. with affection, altruism, love, hate, etc.), there will be adequate political impetus to begin seriously considering their rights. Heck, military personnel already get attached to quadrupedal robot carriers as if they were bomb-sniffing dogs, and those are waaay to the left of the Uncanny Valley. Just another example of how fundamentally irrational humans are, I suppose.

Just a note on the arbitrariness of neural nets: yeah, many of them are by no means biologically plausible, but others, like spiking neural nets or ones that use point neurons, come pretty darn close. The main issue is that the more complicated these neural representations become, the more expensive they are to represent computationally. That imposes limits on the scale of simulations that can be run on von Neumann machines, which is why researchers are working on neuromorphic architectures. If we ever get to the point where computers have enough power to simulate an 80+ billion-unit neural network in real time (~0.01 ms resolution), and we can use genetic algorithms to "grow" a brain over thousands or millions of generations with those computers, I think we'll get asymptotically close to consciousness. At that point, it probably won't even matter whether true artificial consciousness is possible or not; humans will feel cognitive dissonance if they treat such artificial entities differently. More than likely, we'll destroy our planet before technology advances that far, however.
This was fun; maybe I'll get back to you in a few decades if anything big happens.
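For what it's worth, the "point neuron" models mentioned above really are simple at the single-unit level. Here's a minimal sketch of a leaky integrate-and-fire point neuron (the parameters are illustrative assumptions, not fitted to biology):

```python
def simulate_lif(current, steps=200, dt=0.01, tau=0.1, v_thresh=1.0, v_reset=0.0):
    """Euler-integrate dV/dt = (-V + I)/tau; record a spike when V crosses threshold."""
    v = 0.0
    spike_times = []
    for t in range(steps):
        v += dt * (-v + current) / tau  # leak toward 0, driven by input current
        if v >= v_thresh:
            spike_times.append(t)       # spike, then reset membrane potential
            v = v_reset
    return spike_times

# Constant drive above threshold produces regular spiking;
# sub-threshold drive decays toward a fixed point and never spikes.
print(len(simulate_lif(1.5)) > 0)   # True
print(len(simulate_lif(0.5)))       # 0
```

Scaling something this cheap per-unit to 80+ billion units with realistic connectivity is exactly where the computational cost explodes, which is the trade-off the post describes.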



There are certain parts of that response I consider somewhat strange.

" once enough humans start treating p-zombies as if they were conscious beings"
But you, for one, are calling them p-zombies. Do you know what I mean here? Once someone calls something a p-zombie, one is already treating it as not conscious.

"Just another example of how fundamentally irrational humans are, I suppose."
Here again. We won't be the only ones labeling such a situation "irrational". There will be groups of people such as us, calling p-zombies... p-zombies, and saying that irrational treatments are... irrational.

"I think we'll get asymptotically close to consciousness." No... you will be close to another p-zombie brain simulation. It may not act like a human brain, but it's still a philosophical zombie brain. Before you put all of those simulations in, what is that piece of hardware? Fundamentally inert, just like any other piece of hardware. There is nothing innate happening inside.

Re: spending all that computation power and incurring costs
Why even bother? It's faster to clone a human brain and call it a day. Or clone an animal brain and enhance it with cybernetics. All that extraneous effort in computation makes zero sense when there are other ways to "get our brain". Why? Just for chits n' giggles? Just to prove something? ...Because to me it still proves nothing.

32 cr points
22 / M
Posted 12/12/16 , edited 12/17/16

nanikore2 wrote:


1. Any sufficiently complex system that has to be ascertained with measurement is underdetermined. At this point I doubt you understand the scope of underdetermination. If A correlates with B, then A may cause B, B may cause A, A and B may be caused by a common variable C, or the correlation may be a statistical fluke and not “real”.

2. "Fly brains are pretty far removed from ours" Please back up that assertion.

3. "it's entirely possible to make a functional bike without knowing that the pedals move the chain which moves the wheels." Sigh. Do I need to explain what a black box is at a layman's level? A bike isn't a black box: you know there is a pedal, you know there is a chain, you know there is a wheel. With a black box, you don't know what parts are there. There are underdetermined parts, because all of the parts you see are the results of measurements.


1. Fair enough, it's a broader term than I thought. That said, if that's the definition we're using, then I'm not sure you could use underdetermination to prove something impossible to understand. Yes, at this point it would be, but just because we only have a correlation now and correlation doesn't show causation doesn't mean there is no causation or that we lack the capacity to find it. For example, living in dirty conditions is correlated with poor health. Before the microscope, that's all we could really say. It was underdetermined. But now we can say with quite a bit of certainty that germs, bacteria, infection, and the like are the cause. Thus it isn't anymore. And again, you're trying to prove that it's "impossible", which means you'd have to prove that, no matter how we advance, it still can't happen.

2. Well to use those fallacies you like so much, fly brains are a non-sequitur if you can't prove they were relevant in the first place. You're the one using it to prove a point, and we're not talking about fly brains here. But if I had to give something off the top of my head, how about the fact that, as far as classifications go, they're in a different phylum? Meaning we share a pretty distant link to them evolutionarily, one step away from being as distant as one can get.

3. ...But we do know what the parts are. They're called neurons. We have a whole branch of science devoted to studying them. We can look at them, and we know what they do on an individual basis. We know that neuron A will have an action potential under B circumstances, and communicates with neurons C, D, and E, and so on. The black box comes in at the point at which those action potentials become things like thought. But if the whole physical system was copied anyway, then the mechanisms responsible got copied too.
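The copy-the-parts argument in #3 can be sketched concretely (with a hypothetical three-node toy network, nothing like a real connectome): duplicate each node's local threshold and wiring, and the copy behaves like the original, without anyone modeling what the whole network computes:

```python
def copy_network(network):
    """Duplicate every node's local parameters and outgoing connections only."""
    return {
        node: {"threshold": props["threshold"], "targets": list(props["targets"])}
        for node, props in network.items()
    }

def step(network, active):
    """One synchronous update: a node fires if enough active nodes target it."""
    incoming = {node: 0 for node in network}
    for node in active:
        for target in network[node]["targets"]:
            incoming[target] += 1
    return {n for n, count in incoming.items() if count >= network[n]["threshold"]}

# A hypothetical "black box" network: only per-node facts are recorded.
original = {
    "A": {"threshold": 1, "targets": ["B", "C"]},
    "B": {"threshold": 1, "targets": ["C"]},
    "C": {"threshold": 2, "targets": ["A"]},
}
clone = copy_network(original)
# The copy behaves identically, piece by piece, with no global model.
print(step(original, {"A"}) == step(clone, {"A"}))  # True
```

Whether this piece-by-piece fidelity is achievable for real neurons is exactly the point under dispute; the sketch only shows what "copying without understanding" means.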

But while you're mentioning laymen, you probably should be explaining things that way, yes. This isn't academia or anything close to it, it's a forum on a site that streams anime. You can probably articulate things more precisely the way you type, but a pretty small portion of the people here (i.e. the ones who took a college philosophy class and actually paid attention) will understand a word of it. And if no one understands, that precision is meaningless. Maybe dial back the condescension too while you're at it.
32714 cr points
39 / Inside your compu...
Posted 12/12/16 , edited 12/12/16

BlancheJacquestopus wrote:


nanikore2 wrote:


1. Any sufficiently complex system that has to be ascertained with measurement is underdetermined. At this point I doubt you understand the scope of underdetermination. If A correlates with B, then A may cause B, B may cause A, A and B may be caused by a common variable C, or the correlation may be a statistical fluke and not “real”.

2. "Fly brains are pretty far removed from ours" Please back up that assertion.

3. "it's entirely possible to make a functional bike without knowing that the pedals move the chain which moves the wheels." Sigh. Do I need to explain what a black box is at a layman's level? A bike isn't a black box: you know there is a pedal, you know there is a chain, you know there is a wheel. With a black box, you don't know what parts are there. There are underdetermined parts, because all of the parts you see are the results of measurements.


1. Fair enough, it's a broader term than I thought. That said, if that's the definition we're using, then I'm not sure you could use underdetermination to prove something impossible to understand. Yes, at this point it would be, but just because we only have a correlation now and correlation doesn't show causation doesn't mean there is no causation or that we lack the capacity to find it. For example, living in dirty conditions is correlated with poor health. Before the microscope, that's all we could really say. It was underdetermined. But now we can say with quite a bit of certainty that germs, bacteria, infection, and the like are the cause. Thus it isn't anymore. And again, you're trying to prove that it's "impossible", which means you'd have to prove that, no matter how we advance, it still can't happen.

2. Well to use those fallacies you like so much, fly brains are a non-sequitur if you can't prove they were relevant in the first place. You're the one using it to prove a point, and we're not talking about fly brains here. But if I had to give something off the top of my head, how about the fact that, as far as classifications go, they're in a different phylum? Meaning we share a pretty distant link to them evolutionarily, one step away from being as distant as one can get.

3. ...But we do know what the parts are. They're called neurons. We have a whole branch of science devoted to studying them. We can look at them, and we know what they do on an individual basis. We know that neuron A will have an action potential under B circumstances, and communicates with neurons C, D, and E, and so on. The black box comes in at the point at which those action potentials become things like thought. But if the whole physical system was copied anyway, then the mechanisms responsible got copied too.

But while you're mentioning laymen, you probably should be explaining things that way, yes. This isn't academia or anything close to it, it's a forum on a site that streams anime. You can probably articulate things more precisely the way you type, but a pretty small portion of the people here (i.e. the ones who took a college philosophy class and actually paid attention) will understand a word of it. And if no one understands, that precision is meaningless. Maybe dial back the condescension too while you're at it.


1. It means an exhaustive model is impossible.

2. If we can't have an exhaustive model of a fly neuron, what makes you think that we can have an exhaustive model of a human's?

3. See #2. There is no exhaustive modeling. I had already told you: experimental results on a particular neuron group are random, demonstrating underdetermination.

How about demonstrating an understanding of the points instead of making me more or less repeat the same thing over and over? That is very frustrating. It's not condescension, it's me feeling like I'm talking to a wall.
32 cr points
22 / M
Posted 12/12/16

nanikore2 wrote:


1. It means an exhaustive model is impossible.

2. If we can't have an exhaustive model of a fly neuron, what makes you think that we can have an exhaustive model of a human's?

3. See #2. There is no exhaustive modeling. I had already told you: experimental results on a particular neuron group are random, demonstrating underdetermination.

How about demonstrating an understanding of the points instead of making me more or less repeat the same thing over and over? That is very frustrating. It's not condescension, it's me feeling like I'm talking to a wall.


1. Like I said, given the information we have it may be impossible. But many things were underdetermined in the past that no longer are. Underdetermination is caused by the available evidence, right? So it could change as the evidence did. Exhaustive models being impossible didn't respond to any of that. But you know what, I'm willing to leave this one. It's deviated far enough from genetics being analogous to programming as it is.

2. So many things wrong with that:

A) That's entirely different from what you brought it up to prove
B) It hinges on the idea that a fly brain is just a less complex version of ours, which is an enormous assumption. Its structures and functions could well be wildly different.
C) I doubt we've spent nearly as much time studying them. The only thing you mentioned them trying is zapping it.
D) It would be irrelevant anyway. Being unable to understand one doesn't imply that the other is impossible.
E) You have, for the second time, completely ignored the need to prove that fly brains are relevant in the first place.
F) You're appealing to our current technological capacity. Need I remind you that you used the word "impossible"? Even ignoring A - E up there, the fact that we can't do it now doesn't mean anything. If we were talking about what we're capable of right this second, there wouldn't be a discussion.

3. You had, and I explicitly told you that evidence is still meaningless if you can't demonstrate a connection. See #2 for more detail on that.

Believe me, the feeling's mutual on all accounts. I've done plenty of repeating myself because you seem to either ignore or misinterpret everything I'm saying. That said, literally typing the word "sigh" is pretty condescending in any context.

Anyway, let me try to simplify this for the both of us. Just try, in the simplest terms you know, to describe how underdetermination makes it impossible for us to make a virtual replica of a given neuron using only knowledge of its individual properties, with no knowledge whatsoever about its overall cognitive function.