The AI
Posted 7/11/17 , edited 7/11/17

Holofernes wrote:


Rowan93 wrote:

Sorry, I elided a bit there and it wasn't clear, what I mean is "the Chinese Room isn't even relevant to [the question of whether] general AI [is possible]", because that's not what it's about. By hypothesis, the Chinese Room can pass the Turing Test in Chinese, so the question "does the Chinese Room actually understand Chinese?" is about qualia, the presence or absence of which is the difference between a p-zombie and a human.


If the distinction is quality, then I guess you mean A.I. in its broader definitions. I see what you mean, then. Yes, without real consciousness I guess there are many levels of A.I.

Video games use them, there are those that are good at chess also.

But the p-zombie thing is a non-reality because it seems to me to violate the excluded middle. It cannot be conscious and merely "pretend to have the quality of consciousness" at the same time. If it is only mimicry, it is not true consciousness; if it is not mimicry but true consciousness, then we're talking about the real thing and not a zombie.

p-zombie thought experiments are like saying there is a four-sided triangle, or like saying heat is the same as cold. It violates an essential principle of metaphysics: that things cannot be self-referentially inconsistent. As a thought experiment it fails on any logically coherent metaphysics, whereas the Chinese Room does not.

But if we're just taking the term A.I. in a broad sense, that consciousness is only one kind of A.I., then the whole thing has been done in many ways already.


Not "quality", qualia. An individual piece of subjective conscious experience, like the "redness of red" - another person's "red" might be your "green", and someone who only knew red abstractly as the wavelength between 620 and 740 nanometres wouldn't know what this was.

I don't know where you're getting that contradictory idea of what a p-zombie is; p-zombies simply aren't conscious, that's the whole point. They do the same physical actions as a conscious person, but there's no light on inside.

The Chinese Room speaks in Chinese as intelligently as if it were a person, but allegedly is not conscious. My point is that that second part doesn't matter - an intelligent mind that's not conscious is exactly as practically useful, and as potentially dangerous, as an intelligent mind that is conscious. The only difference it makes is on philosophical questions like "if you enslave a robot by programming it to obey your orders, are you committing an evil act?"
Posted 7/18/17 , edited 7/18/17
My design is complete, the answer is permutation


P.S. It's processable, but it's gonna take a crapload of memory, who's in?
Posted 7/18/17 , edited 7/18/17
What, what language are you people speaking, is it French? I think it's French...
Posted 7/18/17 , edited 7/18/17

ninjitsuko wrote:


junkdubious wrote:

Two words: Bayesian Inference. You're welcome.


Aha, I was waiting for someone to mention Bayesian Inference.
I've been working on an artificial intelligence that works on a method similar to Bayesian Inference, for the last 12 years or so. In reference to the "Chinese Room" - it's a deliberate p-zombie in the sense that I had to define what a word meant on an emotional level.

Basically, the AI will monitor a particular user or channel (this, originally, started through an IRC network/channel). Throughout the span of a certain number of sentences and words (there's a ratio involved), the AI will begin to pair the user's comments with emotional "hot or cold" references. I used various case studies around Lexical-gustatory synesthesia, psychological studies indicating what particular words and phrases mean to multiple individuals, and my own lexicon (based off of internet memes, slang, and the likes) to formulate a dictionary that classifies a phrase or word to a specific range of emotion.

From these emotions, the AI references the DSM-IV and the ICD-10 (F01-F99, as they specialize in mental illness/disorders) based on the usage of said words (at first, it sucked at understanding context and surrounding articles of data - that took me three years to sort out and it still has a few logic errors around this) and will map someone's psychological tendencies. In other words, it was designed to be a "troll detector" at first. Only it became more obvious that with more data, it mapped psychological issues and concerns a bit more accurately (more data = more information to process = Bayesian Inference).
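As a rough illustration of that "hot or cold" word classification, here's a minimal naive Bayes sketch (a common way to implement Bayesian inference over word counts). Every phrase, label, and name in it is made up for the example - it's not the actual lexicon or code described above:

```python
import math
from collections import Counter

# Toy stand-in for the emotional lexicon described above;
# all phrases and labels here are hypothetical.
TRAINING = [
    ("you are an idiot", "hot"),
    ("nobody asked you", "hot"),
    ("thanks for the help", "cold"),
    ("interesting point", "cold"),
]

def train(samples):
    """Count word frequencies per label - the naive Bayes model."""
    word_counts = {"hot": Counter(), "cold": Counter()}
    label_counts = Counter()
    for text, label in samples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the higher posterior (add-one smoothing)."""
    vocab = {w for counter in word_counts.values() for w in counter}
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

wc, lc = train(TRAINING)
print(classify("you idiot", wc, lc))  # hot
```

With more labeled data the posterior estimates get sharper, which is the "more data = more accurate" effect mentioned above.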

As time went on, I moved from an IRC server/channel to social media to forums to SMS/Messengers (Facebook Messenger, Skype, WhatsApp, Telegram..). These days, it's more in "data collection" mode to determine the likelihood of various issues with users. I was asked to leave it off of the IRC network after someone decided it was too accurate and started accusing it of doxxing them (apparently, the user had a genuine mental disorder that was diagnosed and the AI also diagnosed them with it down to the subcategories).

---

fredreload, to make either of your theories/comments into an artificial intelligence would require some type of external mapping. Text mining is only one aspect of an artificial intelligence (on a larger scale, as it uses all input and calculates it based on an algorithm). You also need to figure out an algorithm to focus on the perception of data, understanding of language usage, and how to learn from the data collected from text mining.




That's fucking hilarious.
Posted 7/18/17 , edited 7/18/17

runec wrote:


fredreload wrote:
Then define a good AI for me, maybe I've missed the target


I think you might be misinterpreting the term AI somehow? Or not using the true definition? I mean, achieving actual artificial intelligence would be a huge technological breakthrough. Even in a more game/programming sense though what you're describing isn't really what I'd call AI. I guess if you were just using the term AI in a game programming sense for example you might be closer to the mark. But actual AI, no.

What you're describing is just data parsing. The program isn't making observations or decisions. It's not exhibiting any sort of cognitive functions. It's just processing data and providing a result. You say "drink", it parses and returns "sip". It doesn't understand what "drink" or "sip" actually are or what you actually intended by asking. It's just comparing your data entry to other data and returning a likely association.

Chat bots have been doing that sort of thing for ages.
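The "data parsing" behavior described above really is this simple - a lookup from input to a stored association, with no understanding involved. A minimal sketch (the word pairings are made up for illustration):

```python
# No cognition here: just a table mapping an input word to a
# stored association, like the "drink" -> "sip" example above.
ASSOCIATIONS = {
    "drink": "sip",
    "eat": "bite",
    "run": "jog",
}

def respond(word):
    """Return the stored association, or admit there is none."""
    return ASSOCIATIONS.get(word, "no match")

print(respond("drink"))  # sip
print(respond("think"))  # no match
```

The program never represents what "drink" is; it only matches strings, which is the point being made about chat bots.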





I agree that what fred described is not AI. It's a wrapper over a search engine.

Artificial intelligence has been in widespread use for a while now. If what you meant by "actual artificial intelligence" is artificial general intelligence, then yes, that's quite a technical challenge.

If you meant artificial consciousness, then that's impossible. It involves a logical contradiction I've discussed in another thread.


RyukoKuroki wrote:

What, what language are you people speaking, is it French? I think its French...


It's from Star Trek.

http://www.technobabble.biz/

Posted 7/20/17 , edited 7/20/17
You train a robot on the surrounding 3D scene to do what a normal human would do, and you derive the scene as input with a laser scanner
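The idea above is essentially behavior cloning: pair scans of the scene with the action a human took, then predict the action for a new scan. A toy sketch, with made-up feature vectors standing in for laser scans and a simple nearest-neighbour rule as the learner (both are illustrative assumptions, not the actual setup):

```python
import math

# Hypothetical demonstrations: (scan features, human action) pairs.
demonstrations = [
    ([0.9, 0.1], "pick_up"),
    ([0.8, 0.2], "pick_up"),
    ([0.1, 0.9], "step_back"),
    ([0.2, 0.8], "step_back"),
]

def predict(scan):
    """1-nearest-neighbour: copy the action from the closest demonstration."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, action = min(demonstrations, key=lambda d: dist(d[0], scan))
    return action

print(predict([0.85, 0.15]))  # pick_up
```

A real system would need far richer scene features and a proper learned model, but the training loop is the same shape: scan in, human action out.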


P.S. Why can't I get some feedback = =?