The AI
runec 
Posted 7/8/17 , edited 7/9/17

Rowan93 wrote:
Predictive text doesn't become artificially intelligent just by feeding it enough data; if it did, wouldn't Google rule the world by now?


....Doesn't it?

>.>
Posted 7/8/17 , edited 7/9/17

runec wrote:


Rowan93 wrote:
Predictive text doesn't become artificially intelligent just by feeding it enough data; if it did, wouldn't Google rule the world by now?


....Doesn't it?

>.>


It's a collection of ideas, lol. Kind of hard to explain. Google's search engine does not think, but it gets you what you need.

What I would do: I would not use a Wikipedia dump, because each term appears only once there, and this is a frequency count, so the more repetition the better.

If Google's search engine has a text dump, I would use that; then run the n-gram extraction, store the counts in a database, and enjoy.


P.S. Spider-Man is a good movie, fellow crows.
Posted 7/8/17 , edited 7/9/17

For instance, say I type in the words "water" and "thirsty". If the sentence is parsed correctly, then based on word frequency counts I would probably get "drink" 98%, "sip" 2%, etc. And that, my friend, is your AI.


Word2Vec is basically just that. https://en.wikipedia.org/wiki/Word2vec
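The "water" + "thirsty" → "drink" idea can be sketched with plain co-occurrence counts; the tiny corpus below is invented purely for illustration:

```python
from collections import Counter

corpus = [
    "i was thirsty so i drink water",
    "drink water when you are thirsty",
    "thirsty people sip water slowly",
    "he will drink the water",
]

def predict(cues, corpus):
    """Score every word by how often it co-occurs with all of the cue words,
    then normalize the scores into rounded percentages."""
    scores = Counter()
    for sentence in corpus:
        words = set(sentence.split())
        if all(c in words for c in cues):      # sentence mentions every cue word
            scores.update(words - set(cues))   # credit the remaining words
    total = sum(scores.values())
    return {w: round(100 * f / total) for w, f in scores.most_common()}

ranked = predict({"water", "thirsty"}, corpus)
```

With enough repeating data, the split should sharpen toward "drink", which is the point about frequency counts above.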
Posted 7/8/17 , edited 7/9/17

Rowan93 wrote:

Predictive text doesn't become artificially intelligent just by feeding it enough data; if it did, wouldn't Google rule the world by now?


Good question.


Rowan93 wrote:

The Chinese Room thought experiment isn't even relevant to general AI; it's specifically about whether AI would be conscious. By hypothesis, the Chinese Room is a Turing-Test-passing artificial general intelligence; the only question is whether it, and implicitly also any other AI we might build, is a p-zombie or not.


I'm confused here. Why is the Chinese Room not relevant? Does attaching the words "thought experiment" make it not relevant? I'm not understanding, but that sounds like a claim that doesn't support its conclusion.

Numerous people have problems with zombie theory because it wants to ascribe a quality to an apparent intelligence that has no referential reality. Something cannot be "the same" and "not the same" at the same time; that is a violation of the law of the excluded middle. Either it is not the same in some part or kind, in which case it is only similar, or it is exactly the same.

If a zombie is merely similar, then that distinction makes it impossible for it to be identical to a conscious mind.
Posted 7/8/17 , edited 7/9/17
Two words: Bayesian inference. You're welcome.
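For anyone wondering how this connects to the thread: applied to prediction, Bayes' rule updates a candidate word's probability as evidence (e.g. the cue word "thirsty") arrives. A toy sketch, with all numbers invented:

```python
def bayes_update(prior, likelihood):
    """Posterior P(H|E) over hypotheses H, given prior P(H) and likelihood P(E|H)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Hypotheses: which word the user means next. All numbers are made up.
prior = {"drink": 0.5, "sip": 0.3, "swim": 0.2}            # P(word)
p_thirsty_given = {"drink": 0.8, "sip": 0.6, "swim": 0.1}  # P("thirsty" | word)

posterior = bayes_update(prior, p_thirsty_given)
```

Each new piece of evidence feeds the posterior back in as the next prior, which is what makes "more data = better estimates" work.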
Posted 7/9/17 , edited 7/9/17
Alright, here is the improved algorithm. You run n-grams on a sentence, then you check for those n-grams against the rest of the sentence. Then we need a large text dump, one that has repeating data and covers the entire net the way Google does; let me know if you can find such a dump file.

For instance:
I like to eat barbeque.

1-gram 1-gram
I like
I to
I eat
I barbeque

Like I
Like to
Like eat
Like barbeque

(I only ran for two instances)
...

1-gram 2-gram
I like to
I to eat
I eat barbeque

Like I
Like to eat
Like eat barbeque

...

1-gram 3-gram
I like to eat
I to eat barbeque

Like I
Like to eat barbeque

...

1-gram 4-gram
...

2-gram 1-gram
I like to
I like eat
I like barbeque

Like to I
Like to eat
Like to barbeque

...

To eat I

2-gram 2-gram
I like to eat
I like eat barbeque

Like to I
Like to eat barbeque

2-gram 3-gram
I like to eat barbeque

...
...

5-gram 0-gram


Conclusion:
Run this until you reach 5-gram 0-gram, and now you have an AI for one sentence. Repeat for all the other sentences.


P.S. A Wikipedia dump might not be a good source, because we need repeated occurrences for validation.
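The enumeration above can be generated mechanically. Here is one reading of the scheme (the names and the disjoint-span rule are my interpretation): pair every contiguous m-gram with every non-overlapping contiguous n-gram of the sentence.

```python
from itertools import product

def gram_pairs(words, m, n):
    """All (m-gram, n-gram) pairs where each gram is a contiguous run of
    words and the two runs do not overlap."""
    spans = lambda k: [(i, i + k) for i in range(len(words) - k + 1)]
    pairs = []
    for (a, b), (c, d) in product(spans(m), spans(n)):
        if b <= c or d <= a:  # keep only disjoint spans
            pairs.append((" ".join(words[a:b]), " ".join(words[c:d])))
    return pairs

words = "I like to eat barbeque".split()
p11 = gram_pairs(words, 1, 1)  # the "1-gram 1-gram" stage above
p23 = gram_pairs(words, 2, 3)  # the "2-gram 3-gram" stage above
```

Looping m and n from 1 up to the sentence length reproduces the "run this until you reach 5-gram 0-gram" sweep for a five-word sentence.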
Posted 7/9/17 , edited 7/9/17

junkdubious wrote:

Two words: Bayesian inference. You're welcome.


Aha, I was waiting for someone to mention Bayesian Inference.
I've been working on an artificial intelligence that works along similar lines to Bayesian inference for the last 12 years or so. In reference to the "Chinese Room": it's a deliberate p-zombie, in the sense that I had to define what a word meant on an emotional level.

Basically, the AI will monitor a particular user or channel (this, originally, started through an IRC network/channel). Throughout the span of a certain number of sentences and words (there's a ratio involved), the AI will begin to pair the user's comments with emotional "hot or cold" references. I used various case studies around Lexical-gustatory synesthesia, psychological studies indicating what particular words and phrases mean to multiple individuals, and my own lexicon (based off of internet memes, slang, and the likes) to formulate a dictionary that classifies a phrase or word to a specific range of emotion.

From these emotions, the AI references the DSM-IV and the ICD-10 (F01-F99, as they specialize in mental illnesses/disorders) based on the usage of said words (at first, it sucked at understanding context and surrounding articles of data; that took me three years to sort out, and it still has a few logic errors around this) and maps someone's psychological tendencies. In other words, it was designed to be a "troll detector" at first. It just became more obvious that, with more data, it mapped psychological issues and concerns a bit more accurately (more data = more information to process = Bayesian inference).

As time went on, I moved from an IRC server/channel to social media to forums to SMS/Messengers (Facebook Messenger, Skype, WhatsApp, Telegram..). These days, it's more in "data collection" mode to determine the likelihood of various issues with users. I was asked to leave it off of the IRC network after someone decided it was too accurate and started accusing it of doxxing them (apparently, the user had a genuine mental disorder that was diagnosed and the AI also diagnosed them with it down to the subcategories).

---

fredreload to make either of your theories/comments into an artificial intelligence, it would require some type of external mapping. Text mining is only one aspect of an artificial intelligence (on a larger scale, as it uses all input and calculates it based off of an algorithm). You also need to figure out an algorithm to focus on the perception of data, understanding of language usage, and how to learn from the data collected from text mining.
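The word-to-emotion lexicon described above is, in spirit, a naive Bayes text classifier. A minimal sketch with an invented toy lexicon and labels (a real system would train on far more labeled data, as described):

```python
import math
from collections import defaultdict

# Invented toy training data: (message, emotion label).
training = [
    ("i hate you troll", "hostile"),
    ("you are awful and stupid", "hostile"),
    ("thanks friend that was kind", "friendly"),
    ("great post i love it", "friendly"),
]

def train(samples):
    """Build per-class word counts and class priors for naive Bayes."""
    word_counts = defaultdict(lambda: defaultdict(int))
    class_counts = defaultdict(int)
    for text, label in samples:
        class_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    """Pick the class maximizing log P(class) + sum of log P(word|class),
    with add-one (Laplace) smoothing for unseen words."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(class_counts.values())
    best, best_score = None, -math.inf
    for label, prior in class_counts.items():
        n_words = sum(word_counts[label].values())
        score = math.log(prior / total)
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (n_words + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

wc, cc = train(training)
label = classify("you stupid troll", wc, cc)
```

The "more data = more accurate" observation falls out naturally here: every extra labeled message tightens the per-class word frequencies the scores are built from.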
Posted 7/9/17 , edited 7/9/17

ninjitsuko wrote:


junkdubious wrote:

Two words: Bayesian inference. You're welcome.


Aha, I was waiting for someone to mention Bayesian Inference.
I've been working on an artificial intelligence that works along similar lines to Bayesian inference for the last 12 years or so. In reference to the "Chinese Room": it's a deliberate p-zombie, in the sense that I had to define what a word meant on an emotional level.

Basically, the AI will monitor a particular user or channel (this, originally, started through an IRC network/channel). Throughout the span of a certain number of sentences and words (there's a ratio involved), the AI will begin to pair the user's comments with emotional "hot or cold" references. I used various case studies around Lexical-gustatory synesthesia, psychological studies indicating what particular words and phrases mean to multiple individuals, and my own lexicon (based off of internet memes, slang, and the likes) to formulate a dictionary that classifies a phrase or word to a specific range of emotion.

From these emotions, the AI references the DSM-IV and the ICD-10 (F01-F99, as they specialize in mental illnesses/disorders) based on the usage of said words (at first, it sucked at understanding context and surrounding articles of data; that took me three years to sort out, and it still has a few logic errors around this) and maps someone's psychological tendencies. In other words, it was designed to be a "troll detector" at first. It just became more obvious that, with more data, it mapped psychological issues and concerns a bit more accurately (more data = more information to process = Bayesian inference).

As time went on, I moved from an IRC server/channel to social media to forums to SMS/Messengers (Facebook Messenger, Skype, WhatsApp, Telegram..). These days, it's more in "data collection" mode to determine the likelihood of various issues with users. I was asked to leave it off of the IRC network after someone decided it was too accurate and started accusing it of doxxing them (apparently, the user had a genuine mental disorder that was diagnosed and the AI also diagnosed them with it down to the subcategories).

---

fredreload to make either of your theories/comments into an artificial intelligence, it would require some type of external mapping. Text mining is only one aspect of an artificial intelligence (on a larger scale, as it uses all input and calculates it based off of an algorithm). You also need to figure out an algorithm to focus on the perception of data, understanding of language usage, and how to learn from the data collected from text mining.


Check out the explanation below:

Now, this does not work for multi-word search terms. For instance, suppose I search for "black", "vicious", "animal", "black vicious", and "vicious animal".

If "black vicious animal" is not in the search list, then I would need to combine the frequency searches to find the closest match.

Thanks for the comment, sir, future boss.

P.S.
As for the analysis you've mentioned, it's frequency analysis. I've worked on it before in Python, but this is an expanded idea using n-grams.
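The combine-the-frequencies fallback is essentially back-off. A sketch, assuming the counts live in a plain dict (all numbers invented):

```python
# Invented n-gram counts standing in for the database.
counts = {
    "black vicious": 4,
    "vicious animal": 9,
    "black animal": 2,
    "black": 40,
    "vicious": 12,
    "animal": 30,
}

def phrase_score(phrase, counts):
    """Back off: use the full phrase count if present; otherwise average the
    counts of every adjacent word pair inside the phrase."""
    if phrase in counts:
        return counts[phrase]
    words = phrase.split()
    pairs = [" ".join(words[i:i + 2]) for i in range(len(words) - 1)]
    return sum(counts.get(p, 0) for p in pairs) / len(pairs)

score = phrase_score("black vicious animal", counts)  # averages "black vicious" and "vicious animal"
```

Ranking candidate phrases by this score is one way to pick the "closest match" when the exact trigram never appeared in the dump.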
Posted 7/9/17 , edited 7/9/17

Holofernes wrote:


Rowan93 wrote:

The Chinese Room thought experiment isn't even relevant to general AI; it's specifically about whether AI would be conscious. By hypothesis, the Chinese Room is a Turing-Test-passing artificial general intelligence; the only question is whether it, and implicitly also any other AI we might build, is a p-zombie or not.


I'm confused here. Why is the Chinese Room not relevant? Does attaching the words "thought experiment" make it not relevant? I'm not understanding, but that sounds like a claim that doesn't support its conclusion.

Numerous people have problems with zombie theory because it wants to ascribe a quality to an apparent intelligence that has no referential reality. Something cannot be "the same" and "not the same" at the same time; that is a violation of the law of the excluded middle. Either it is not the same in some part or kind, in which case it is only similar, or it is exactly the same.

If a zombie is merely similar, then that distinction makes it impossible for it to be identical to a conscious mind.


Sorry, I elided a bit there and it wasn't clear. What I meant is "the Chinese Room isn't even relevant to [the question of whether] general AI [is possible]", because that's not what it's about. By hypothesis, the Chinese Room can pass the Turing Test in Chinese, so the question "does the Chinese Room actually understand Chinese?" is about qualia, the presence or absence of which is the difference between a p-zombie and a human.
Posted 7/9/17 , edited 7/9/17

Rowan93 wrote:

Sorry, I elided a bit there and it wasn't clear. What I meant is "the Chinese Room isn't even relevant to [the question of whether] general AI [is possible]", because that's not what it's about. By hypothesis, the Chinese Room can pass the Turing Test in Chinese, so the question "does the Chinese Room actually understand Chinese?" is about qualia, the presence or absence of which is the difference between a p-zombie and a human.


If the distinction is qualia, then I guess you mean A.I. in its broader definitions. I see what you mean, then. Yes, without real consciousness, I guess there are many levels of A.I.

Video games use them, and there are those that are good at chess as well.

But the p-zombie thing is a non-reality, because it seems to me to violate the excluded middle. It cannot be conscious and merely "pretend to have the quality of consciousness" at the same time. If it is only mimicry, it is not true consciousness; if it is not mimicry but true consciousness, then we're talking about the real thing and not a zombie.

P-zombie thought experiments are like saying there is a four-sided triangle, or that heat is the same as cold. It is a violation of an essential factor of metaphysics, which is that things cannot be self-referentially inconsistent. As a thought experiment it fails on a logically coherent metaphysics, whereas the Chinese Room does not.

But if we're just taking the term A.I. in a broad sense, where consciousness is only one kind of A.I., then the whole thing has been done in many ways already.
Posted 7/9/17 , edited 7/9/17

ninjitsuko wrote:

I've been working on an artificial intelligence that works along similar lines to Bayesian inference for the last 12 years or so. In reference to the "Chinese Room": it's a deliberate p-zombie, in the sense that I had to define what a word meant on an emotional level.

Basically, the AI will monitor a particular user or channel (this, originally, started through an IRC network/channel). Throughout the span of a certain number of sentences and words (there's a ratio involved), the AI will begin to pair the user's comments with emotional "hot or cold" references. I used various case studies around Lexical-gustatory synesthesia, psychological studies indicating what particular words and phrases mean to multiple individuals, and my own lexicon (based off of internet memes, slang, and the likes) to formulate a dictionary that classifies a phrase or word to a specific range of emotion.

From these emotions, the AI references the DSM-IV and the ICD-10 (F01-F99, as they specialize in mental illnesses/disorders) based on the usage of said words (at first, it sucked at understanding context and surrounding articles of data; that took me three years to sort out, and it still has a few logic errors around this) and maps someone's psychological tendencies. In other words, it was designed to be a "troll detector" at first. It just became more obvious that, with more data, it mapped psychological issues and concerns a bit more accurately (more data = more information to process = Bayesian inference).

As time went on, I moved from an IRC server/channel to social media to forums to SMS/Messengers (Facebook Messenger, Skype, WhatsApp, Telegram..). These days, it's more in "data collection" mode to determine the likelihood of various issues with users. I was asked to leave it off of the IRC network after someone decided it was too accurate and started accusing it of doxxing them (apparently, the user had a genuine mental disorder that was diagnosed and the AI also diagnosed them with it down to the subcategories).


You do fun things.
Posted 7/9/17 , edited 7/9/17

fredreload wrote:


LingLingJuju wrote:

We don't need another Siri.


This is more like a MacGyver type of thing; for instance, if you type in "snake" "bitten", it would give you a solution.


Exactly, a Siri clone. It has no idea if I'm talking about the animal snake or another type of snake. True AI should be able to differentiate based on my tone.
Posted 7/9/17 , edited 7/9/17

auroraloose wrote:
You do fun things.


I got bored, and artificial intelligence has always interested me, to a degree. Not in the "let's take over the world" kind of way, more in the "how can we make life a bit more interesting" kind of way. I've had a few people tell me that my faux-AI would be useful if I could work the kinks out of it. The issue is that every time I come up with something new to improve it, something else doesn't fall within the parameters of expectation (or doesn't work as accurately as designed).

Needless to say, it's quite fun letting it roam free analyzing data sets from forums (cough, cough) or Twitter (it has its own account that follows other bots that are always "first responders" to celebrities and politicians... which makes it more entertaining). It's hosted on Amazon RDS for the time being, but I want to move it to a blockchain database when I get the time, energy, and motivation. For now, it's just a hobby project; for all I know, it'll always stay a hobby project. It's fun to tinker with and fun to analyze data.

Oh hell, I'm a bigger nerd than I originally thought.


LingLingJuju wrote:

Exactly, a Siri clone. It has no idea if I'm talking about the animal snake, or another type of snake. True AI should be able to differentiate from my tone.


I agree... to an extent. Language analysis is difficult, especially when it comes to tone and context. With the various accents and tones in people's ways of speaking, it would take a helluva lot of machine learning to differentiate with a high probability of success. Not that it won't happen eventually. I'm working on language analysis of text, and it takes a metric ton of data (okay, closer to a 1.7TB database filled with various phrases linking to various emotions and so forth) to even get to about an 82% success rate.
Posted 7/9/17 , edited 7/10/17
The funny thing is, Bayes' own work had nothing to do with AI.
Posted 7/9/17 , edited 7/10/17

ninjitsuko wrote:


auroraloose wrote:
You do fun things.


I got bored, and artificial intelligence has always interested me, to a degree. Not in the "let's take over the world" kind of way, more in the "how can we make life a bit more interesting" kind of way. I've had a few people tell me that my faux-AI would be useful if I could work the kinks out of it. The issue is that every time I come up with something new to improve it, something else doesn't fall within the parameters of expectation (or doesn't work as accurately as designed).

Needless to say, it's quite fun letting it roam free analyzing data sets from forums (cough, cough) or Twitter (it has its own account that follows other bots that are always "first responders" to celebrities and politicians... which makes it more entertaining). It's hosted on Amazon RDS for the time being, but I want to move it to a blockchain database when I get the time, energy, and motivation. For now, it's just a hobby project; for all I know, it'll always stay a hobby project. It's fun to tinker with and fun to analyze data.

Oh hell, I'm a bigger nerd than I originally thought.


LingLingJuju wrote:

Exactly, a Siri clone. It has no idea if I'm talking about the animal snake or another type of snake. True AI should be able to differentiate based on my tone.


I agree... to an extent. Language analysis is difficult, especially when it comes to tone and context. With the various accents and tones in people's ways of speaking, it would take a helluva lot of machine learning to differentiate with a high probability of success. Not that it won't happen eventually. I'm working on language analysis of text, and it takes a metric ton of data (okay, closer to a 1.7TB database filled with various phrases linking to various emotions and so forth) to even get to about an 82% success rate.


Hey boss, where can I get a text dump of all the searches, like Wikipedia but with repeating articles? I am more of a science guy, so ideally lots of repeating science articles.