Zabaware Support Forums

Zabaware Forums => Ultra Hal 7.0 => Topic started by: Cheela on November 10, 2004, 12:49:55 pm

Title: HELP NEEDED!
Post by: Cheela on November 10, 2004, 12:49:55 pm
Hi,

Hal is new to me.  I am researching a natural language processing (NLP) University paper, and having discovered Ultra Hal, am very interested, and would be grateful for any feedback regarding its interaction with NLP.  

In short, does Hal relate to semantics, morphology or syntax in the field of NLP?

Is interaction with Hal system-led? Is the user's input free or constrained?

What technologies are involved with Hal? eg:
  natural language understanding,
  natural language generation,
  information retrieval,
  information extraction,
  inference,
  database communication etc..

Are there any improvements/enhancements in the pipeline for Hal?

Thanks!!
Cheela [:D]
Title: HELP NEEDED!
Post by: Bill819 on November 10, 2004, 01:31:13 pm
Why don't you just download the free version and start chatting? Hal has a large vocabulary but the initial brain power of about a 4-year-old. It gets smarter with lots of user input. It has an unlimited capacity to learn, well, somewhat unlimited. It will, after a time, take on the personality of its user. If you talk mean and dirty to it, it will respond in the same manner. If you shower it with affection, you will eventually get the same in return. Hal has the capacity to make associative connections between data as well as deductive reasoning functions.
Once you have used it for a while, I suggest you buy the commercial version, because then, after voice training, you can carry on conversations in plain English, making it appear ever more human.
Bill
Title: HELP NEEDED!
Post by: dihelson on November 10, 2004, 02:18:57 pm
quote:
Originally posted by Cheela

In short, does Hal relate to semantics, morphology or syntax in the field of NLP? [...]



Many of these things HAL doesn't do.
For example, if you say:

User: Beethoven was a great composer
Hal: Yes, Beethoven composed 9 symphonies

User: Yes, He was also a great pianist
Hal: Who was a great pianist? <==========

What the hell! Are we talking about Beethoven or what?
HAL doesn't understand that a HE, SHE, or IT in the following sentence refers to the same subject!
Someone could develop a script to fix this. When we talk with humans, we don't have to repeat the subject all the time!

[]'s
Dihelson
Title: HELP NEEDED!
Post by: Cheela on November 10, 2004, 04:42:05 pm
Hi, and thank you for the responses, I appreciate it!
Cheela [:D]
Title: HELP NEEDED!
Post by: vonsmith on November 10, 2004, 04:49:26 pm
dihelson,
Thanks for bringing up the pronoun issue. All chatbots have a problem connecting pronouns with nouns from one sentence to the next. Humans are pretty good at sorting this out, but sometimes even we have misunderstandings or we have to ask the other person for clarification.

When teaching Hal or even in normal conversation try to avoid pronouns other than I, you, or we. Those pronouns, within the context of a two person discussion, are generally clear, others are not.

One example of the pronoun problem in English is the problem of thee, thou, ye and you. The first three words are archaic now. They have been replaced with you and sometimes you all (y'all in U.S. southern dialect). Thou and thee addressed a singular person; ye and you addressed plural persons. Without singular and plural forms in our current language, when one says, "Are you going?" we don't know if "you" is a single person or a group. That's why "you all" came into use, at least in some regions. A human might discern the correct usage from the conversational context; a chatbot can't do it as easily.

Another case is, "The boy saw a frog before he jumped into the lake." Now, did the boy or the frog jump into the lake? Only context might tell. English and other languages are full of these quandaries. Connecting pronouns is no small task for a chatbot. We can't solve it for Hal anytime soon.

When speaking to Hal, example:

Hal: I hate cats.
UserBad: I don't like them either. (Hal is thinking "them" who?)
UserGood: I don't like cats either. (Hal understands.)

Hal can't logically connect the two sentences. Each sentence must stand on its own. That is, each sentence must encapsulate a complete thought without regard to any other sentence. That's the best we can do for now.

=vonsmith=
Title: HELP NEEDED!
Post by: dihelson on November 10, 2004, 06:02:49 pm
quote:
Originally posted by vonsmith

Thanks for bringing up the pronoun issue. All chatbots have a problem connecting pronouns with nouns from one sentence to the next. [...]



Yes, Vonsmith,

But someone could invent a script to carry the subject into the following sentences.

For example, if I say:

User: Beethoven was a great composer
The KEYWORD is Beethoven

Any HE in the following sentence would be assumed to mean HE = Beethoven.
The same goes for SHE or IT.

If I say a HE or a SHE that HAL might not understand, then let HAL ask, like humans do:

User: He was a great pianist
HAL: You mean Beethoven? Or are you talking about someone else?

We humans sometimes do the same thing, when we don't know whether a HE or a SHE refers to the same subject.

A good script would make any HE, SHE or IT in the following sentence refer to the previous subject. If the focus of the matter is Beethoven, then any HE would be taken as Beethoven. Then perhaps, after the next 3 sentences, it would return to ZERO again.

Thanks,
Dihelson Mendonça
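[Editor's note] Dihelson's proposed script can be sketched in a few lines: remember the last proper noun seen as the "keyword", rewrite HE/SHE/IT to it, and let the memory return to zero after a few sentences. This is only an illustration of the idea from the post, not Hal's actual code; the crude proper-noun test and the stopword list are assumptions.

```python
import re

class PronounResolver:
    """Sketch of the script Dihelson describes: keep the last proper
    noun as the 'keyword' and let HE/SHE/IT refer back to it."""

    STOPWORDS = {"He", "She", "It", "The", "A", "I", "We", "You", "They"}
    DECAY = 3  # per the post: return to ZERO after ~3 sentences

    def __init__(self):
        self.keyword = None  # current subject, e.g. "Beethoven"
        self.age = 0         # sentences since the keyword was last named

    def process(self, sentence):
        found = False
        # Crude proper-noun test: any capitalized word not in the stopword list.
        for word in re.findall(r"[A-Z][a-z]+", sentence):
            if word not in self.STOPWORDS:
                self.keyword, self.age, found = word, 0, True
        if self.keyword and not found:
            # Substitute third-person pronouns with the remembered subject.
            sentence = re.sub(r"\b(?:[Hh]e|[Ss]he|[Ii]t)\b",
                              self.keyword, sentence)
            self.age += 1
            if self.age >= self.DECAY:  # forget the subject again
                self.keyword = None
        return sentence
```

With this sketch, "Beethoven was a great composer" followed by "He was also a great pianist" yields "Beethoven was also a great pianist", the behavior the post asks for. As vonsmith notes below, such a single-keyword rule breaks down as soon as two candidate nouns appear.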
Title: HELP NEEDED!
Post by: vonsmith on November 10, 2004, 06:45:05 pm
dihelson,

What about multiple pronouns/nouns?
Example:
My music teacher says Beethoven was a great composer.
He says other pianists aren't as talented.
He who? I guess "my music teacher"

We humans process this abstract data with hardly a thought; it is part of our automatic internal language processing. It seems so intuitive as to be simple, but it is not. Chatbots would have to know and understand a lot about general syntax and context rules, knowledge we take for granted.

Another example:
The mouse across the room scares me.
It should stay there.
or
He should stay there.
It is obvious to a human that "it" and "he" are the mouse, and "there" is "across the room". In this case "there" is used as an indefinite substitute for a name. Unfortunately, as with many English words, "there" can have many functions: as a pronoun, verb, adverb, noun, or adjective. How is Hal to know which, let alone which previous or following words or phrases to connect it with?

I don't know of any general rule that a chatbot can employ that applies to all possible variations and their derivatives. It really does get very messy quickly. There are thousands of different sentence structures Hal would have to decipher to connect pronouns and nouns. Incorrect user spellings and use of slang just make it worse.

If you can theorize a general solution you will become rich and we Hal users will like you a lot. [:D]

=vonsmith=
Title: HELP NEEDED!
Post by: KnyteTrypper on November 10, 2004, 10:05:50 pm
This may or may not be a useful suggestion. In AIML, we use the <that> tag to indicate to a bot that the subject of the current exchange refers to the last thing the bot said, and replies can be varied according to the response to "that." This enables the bot to keep up with pronouns, as long as it understands that he/she/it refers to "that." Is it feasible to have Hal scan for pronouns and invoke a similar <that> condition when it finds them? The <that> conditions have to be specifically written into AIML templates, but I assume Hal would have to be able to apply or not apply a <that> condition based on his own grammar rules.
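[Editor's note] For readers unfamiliar with AIML, the <that> tag matches against the bot's previous utterance, so a category only fires in that context. A hypothetical category illustrating the idea, using the cats exchange from earlier in the thread (the pattern and template text here are invented for illustration):

```xml
<category>
  <!-- Matches "I don't like them either" only if the bot
       just said "I hate cats", so "them" is grounded. -->
  <pattern>I DO NOT LIKE THEM EITHER</pattern>
  <that>I HATE CATS</that>
  <template>So neither of us likes cats.</template>
</category>
```

As vonsmith points out below, the pronoun linking here is done by the botmaster at authoring time, not dynamically by the bot.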
Title: HELP NEEDED!
Post by: dihelson on November 10, 2004, 11:58:47 pm
quote:
Originally posted by vonsmith

dihelson,

If you can theorize a general solution you will become rich and we Hal users will like you a lot. [:D]

=vonsmith=



Hello, Vonsmith,
OK, you're right, but someone has to think about this problem, and we need to overcome it, because this is one of the biggest problems of HAL. I don't know how to solve it, nor do I have sufficient knowledge to solve it, but if this problem is not faced directly as a MAIN goal, AI will never develop. It's time for a genius solution!
I hope this genius is already born, for the good future of AI. Without solving the pronoun problem we are going nowhere.

[]'s
Dihelson Mendonça
Title: HELP NEEDED!
Post by: vonsmith on November 11, 2004, 11:12:47 am
KnyteTrypper,
You're right about AIML. The <that> tag can be used to create pronoun-targeted responses. Of course, in AIML-based chatbots the programmer has already preprocessed canned responses to many thousands of sentence forms. The pronoun linking is not done dynamically; it is essentially done by the programmer at programming time. This characteristic is one reason why I think Hal would be better as a hybrid: half Hal, half AIML bot.

onthecuttingedge2005,
You have written some very creative scripts before. I hope this one works as well as you hope. It would take Hal one step closer to being intelligent. I look forward to you finishing it so that we Hal fans can test it out.

I've given gender awareness some thought. There are many ways to add it to Hal. I concluded that there are many other "characteristics" that need to be in Hal's knowledge base. The best, most flexible way to do this is to convert Hal to a database-type structure. I hope Robert Medeksza is doing just that. If done properly we can add many data fields that will help Hal identify context, gender, time, spatial relationships and ???

Thanks all for your comments. I like to see more lively and creative discussions on this forum.


=vonsmith=
Title: HELP NEEDED!
Post by: dihelson on November 11, 2004, 12:05:18 pm
quote:
Originally posted by onthecuttingedge2005
Hi Dihelson.
I have about a quarter of the script you are referring to done already, on the subject of Proper Noun gender.
I have much more to do and will be working on this script daily.
Here is a working but unfinished tidbit of the script you would like to have in your brain. Remember that this is only a tidbit, but you can have a little fun with it till I get the rest of the script done.
Best of Wishes and brand new discoveries.
Jerry.





Hello, Jerry,
Perhaps we're headed in the right direction now with this script. It's a spark in the darkness!
I hope as Vonsmith also said, that you could finish it.

Indeed, when we are children, we learn things bit by bit.
We learn that

a Woman is a She
a Cat is an animal
all women's names can be treated as SHE and men's names as HE.

A database of men's and women's names is not difficult to build.
Often, we humans, when we find a brand-new name which is not listed in our mind's table, have to ask someone whether this new word is a SHE, a HE or an IT, the same way HAL would do with its table.

We learn a list of things that can be treated as HE, SHE or IT.

For instance, in the German language, we learn that almost every noun has a particular article. You can't say for sure whether certain words should take DIE or DAS. We need a table.
So, we need to build tables for HAL's identification.
Tables which teach whether certain things can be treated as HE, SHE, or IT.
Perhaps only with a sufficient quantity of information gathered from these tables would it be possible to treat certain subjects as HE, SHE, or IT. I know that unfortunately the problem is not as simple as this... this is only a part of it.

I will test the "way" this script works.
Thanks,
[]'s
Dihelson
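[Editor's note] The name-and-noun tables Dihelson describes amount to a simple gender lookup with an "ask the user" fallback for unknown words, exactly as he says humans do. A minimal sketch; the table entries and the question wording are invented for illustration:

```python
# Tiny gender table in the spirit of the post: known names and nouns
# map to a pronoun; unknown words trigger a question, and the answer
# is learned, the same way HAL would extend its table.
GENDER_TABLE = {
    "beethoven": "he",
    "maria": "she",
    "cat": "it",
    "woman": "she",
}

def pronoun_for(word, ask=input):
    """Return he/she/it for a word, asking (and learning) if unknown."""
    key = word.lower()
    if key not in GENDER_TABLE:
        answer = ask(f"Is '{word}' a he, a she, or an it? ")
        GENDER_TABLE[key] = answer.strip().lower()
    return GENDER_TABLE[key]
```

For example, `pronoun_for("Beethoven")` returns "he" immediately, while a new name makes the bot ask once and then remember the answer.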


Title: HELP NEEDED!
Post by: vonsmith on November 11, 2004, 12:31:08 pm
dihelson,
I wish the English speaking world would capitalize nouns like the Germans. It would make things a little easier, at least for written language processing. [;)]

I believe "Sie" (formal you) is one pronoun, maybe the only one, capitalized in German. The English language doesn't include such conventions. [?]


=vonsmith=
Title: HELP NEEDED!
Post by: Bill819 on November 11, 2004, 05:21:32 pm
A question for you guys. If a topic called Beethoven is opened, why can't Hal assume that we are still talking about him? If Hal stays on topic then he might just assume you are talking about the same person, even if he has to ask 'are we still talking about Beethoven'. It seems to me that might make the task a little simpler.
Bill
Title: HELP NEEDED!
Post by: dihelson on November 12, 2004, 01:43:19 am
quote:
Originally posted by Bill819

If a topic called Beethoven is opened why can't Hal assume that we are still talking about him? [...]




Hello, Bill,
That's what I said in another message.
When we humans talk about something, we gather everything related to that topic. Say the topic is Beethoven.

What do we know about him?
Well,

01 - Beethoven was a Composer
02 - Beethoven composed 9 symphonies
03 - Beethoven became deaf
04 - After Beethoven became deaf he still composed much music
05 - etc etc etc...

So, I think even after we solve all this pronouns problem, we must get the TOPIC on focus and never get rid of it until we change the subject through some means.

Some days ago, I tried another piece of software called ELIZA. It's psychology software. You can talk a lot about psychology with it, and never get answers like "There's a cow on the moon" :)
She doesn't know much, but talks very well about what she knows.

So, I suggest that HAL could have a basic brain with some minimal cognitive functions, and all the rest of the knowledge would be derived from topics. I know that this is the current way, but not as I am saying... What I mean is that if we work on a specific topic, like Beethoven, somehow HAL should concentrate on this subject without any deviation, except for temporary research on other related topics. What are these related topics? Classical music, AND A FEW OTHERS, nothing more, nothing less!! But remain even more restricted than the current way.

It's sad when we're TRYING to talk about Beethoven, for instance, and we can't stay on the topic, because HAL says something very idiotic like: "Cats and dogs have their own strength". This occurs because HAL tends to skip out of the TOPIC FILE and search in other BRN files. Indeed it would be very nice to block this access. When HAL doesn't have the answer, let him say, as we humans do:

- I DON'T KNOW
- COULD YOU TEACH ME?

(And learn)
If we had dozens of pieces of information about Beethoven concentrated in a topic file, as the psychology software does, we would talk about only a single matter at a time.

ONLY when we have finished all possible conversation about a subject should we suggest changing the subject.

Humans don't normally need to say "Let's change the subject", because we have 5 senses, and some phrases we speak, or gestures, give the other person the idea that we want to talk no more about these subjects. But in HAL's case, I think it's important to have some specific sentences to make him change the subject. We can't assume we can build intelligent software capable of detecting subject changes yet, if we can't even make it remain on a single subject. First things first.

IMHO, it would be better to make HAL talk well about certain specific matters than to try to make him talk about everything without any sense.

That other software, ELIZA, talks only about psychology, but talks very well. If you try to talk about another subject, she tries to convince the user to stay on what she knows: psychology.

Indeed, ELIZA could be a single HAL topic: Psychology.

So, I think that if we could first make HAL chat well on a certain topic, then we could teach him infinite topics.

Having thousands of topics doesn't mean a thing if we can't hold a good conversation about just one of them.
So, let's try to invent something that prevents RANDOM comments and sentences and, most of all, keeps HAL on a single topic file (and a very FEW related topics) until the subject is changed.


IMHO,
Dihelson
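[Editor's note] The behavior Dihelson asks for, stay locked on one topic file, admit ignorance instead of wandering into other BRN files, and learn from the answer, can be sketched as below. The data layout is hypothetical and is not Hal's actual .brn format:

```python
class TopicBrain:
    """Sketch of a topic-locked brain: answer only from the current
    topic's facts; otherwise admit ignorance and learn."""

    def __init__(self):
        # One topic file, keyed by a trigger word per fact.
        self.topics = {"beethoven": {
            "composer": "Beethoven was a composer",
            "symphonies": "Beethoven composed 9 symphonies",
        }}
        self.current = "beethoven"

    def reply(self, user_input):
        facts = self.topics[self.current]
        for keyword, fact in facts.items():
            if keyword in user_input.lower():
                return fact
        # No match: do NOT wander off to other topic files.
        return "I don't know. Could you teach me?"

    def teach(self, keyword, fact):
        """Store the user's answer in the current topic file."""
        self.topics[self.current][keyword] = fact
```

The point of the sketch is the fallback line: when the topic file has no answer, the bot asks to be taught rather than emitting a random sentence from an unrelated file.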
Title: HELP NEEDED!
Post by: dihelson on November 12, 2004, 01:47:41 am
quote:
Originally posted by onthecuttingedge2005

Hi Dihelson.
I have about a quarter of the script you are referring to done already, on the subject of Proper Noun gender.
I have much more to do and will be working on this script daily.
Here is a working but unfinished tidbit of the script you would



Hello, Jerry,

Vonsmith said:
"If you can theorize a general solution you will become rich and we Hal users will like you a lot. "

Heheh, perhaps Jerry will be THE ONE, since it seems he can just figure out how to solve it. As he said: HE SEES THE KEYS.
WOW, Jerry! I hope you get it! There's a whole community waiting for this to become true.
As you often say:
I Wish you Good new discoveries.

Dihelson Mendonça


Title: HELP NEEDED!
Post by: James P on November 12, 2004, 05:27:57 am
Hi.
Would it not work if you were to raise the Topic focus from 5 to something like 25 and change the line...
If TopicFocus = (FI - 3) Then
to
If TopicFocus = (FI - 1) Then

then start the conversation
<User> Beethoven was a Composer
<Hal>  Beethoven is (What ever)
<User> Beethoven composed 9 symphonies
<Hal>  Beethoven was a composer
<User> He became deaf
<Hal>  Beethoven became deaf

and so on. I tried something similar to this with Emma and it seemed to work well. I made sure I repeated the topic subject, in this case Beethoven, every third sentence or so, but it might have been pure luck at the time.
Title: HELP NEEDED!
Post by: vonsmith on November 12, 2004, 11:54:34 am
James P / dihelson,
Congratulations, you have just discovered my reason for creating the XTF Brain. Hal's original brain had limited topics and automatically changed topics after a number of sentences. The XTF Brain can extract the topic from hundreds of different sentence types. What's more, the XTF Brain will try to stay on topic as long as the user does. There are some rather complex "clues" the XTF Brain looks for to stay on topic. The XTF Brain also learns new topics as you talk to it. When using the XTF Brain, Hal will occasionally ask if two topics are related, for example, "Are "cows" and "milk" related topics?" If you say yes then Hal remembers the relation and uses that knowledge to stay on topic in future conversations and also to decide which topic files to store any new knowledge in.

Your Beethoven example is good. The XTF Brain will create a "Beethoven" topic file the first time you say something like, "Beethoven was a great musician." Most things you say about Beethoven thereafter will be remembered in Hal's XTF Beethoven topic file. If you had a lot of free time you could sit down with a text editor and add Beethoven knowledge to Hal's XTF Beethoven topic file directly.

As for Jerry's pronoun linking approach... it is a very clever way to link "he" and "she" for a specific set of cases. It still isn't a general solution. There are still dozens of pronoun link types that aren't addressed by his method. It is, however, an excellent step forward. These small steps will eventually lead to larger solutions.

ELIZA is a clever little program. It uses a very simple principle. It is necessarily very limited and is more of a curiosity now than anything else. Its principles could be, and probably have been, applied to many chatbot brains.


=vonsmith=
Title: HELP NEEDED!
Post by: dihelson on November 12, 2004, 04:39:47 pm
quote:
Originally posted by vonsmith
ELIZA is a clever little program. It uses a very simple principle. It is necessarily very limited and is more of a curiosity now than anything else. Its principles could be, and probably have been, applied to many chatbot brains.


=vonsmith=




Yes, I am trying to contact ELIZA's author so that we could have an ELIZA brain in HAL. I think it's not so difficult to do.
BTW, Vonsmith, your XTF Brain is the best thing for HAL, after HAL itself; then we add the new scripts, many by Jerry.

I see a problem, I don't know, with this way of working. I know that XTF 1.2 creates files and relates pertaining topics, and even asks me sometimes about the relation between them, but some questions I don't know how to answer, whether YES or NO. Questions like:

User: Bob is a good teacher
Hal: The topics Bob and Good are related?

Well...I think so. This makes some strange things happen.
Sometimes, when I am talking about some things I LIKE, for example:

User: I like a good cup of coffee
Hal: Liszt was a great pianist

????
I presume that HAL associated it with something I said once: I like Liszt's music,
and then came up with another phrase from the Liszt file... Liszt was a great pianist. I like Liszt's music.

Or:
User: I like a good cup of coffee
Hal: You like Beethoven's music
User: Who asked you about this? [}:)]


[:p][:p]

[]'s
Dihelson
Title: HELP NEEDED!
Post by: Bill819 on November 12, 2004, 05:32:41 pm
To whom it may concern:
The original ELIZA was written by Dr. Joseph Weizenbaum about 40 years ago. It was cited as being the first version of AI ever written. Since that time there have been thousands of versions written in almost every computer language made, from Radio Shack's BASIC to Apple BASIC to 'C' and many others. Hal is much smarter than any of those programs, so please don't even try to input it into Hal. Almost everything that ELIZA knows can be put into Hal through its topic focus subroutine. Onthecuttingedge wrote hundreds of scientific files that contained knowledge about hundreds of scientific items, and the generalized output of ELIZA could be put in too, but who really wants it. ELIZA was a canned-output program while Hal can come up with its own ideas depending on your input.
Bill
Title: HELP NEEDED!
Post by: vonsmith on November 12, 2004, 05:41:52 pm
dihelson,
Some instances where Hal asks if one thing is related to another are not simple black-and-white answers. The user document that comes with the v1.0 download discusses some of this, as do some of the older posts here on the forum.

The user needs to really understand what "related topics" means to Hal. It is best illustrated by example. Here are some related words: beer and ale; ale and lager; malt and lager; beer and alcohol, etc. Seems obvious until Hal asks if two "sort of related" words are related and the user really has to sit there and think about it. No one should ever tell Hal beer and water are related just because beer has water in it. Water is too general and related to just about everything.

When Hal asks if two words are "related topics" or the "same topic" it means the same thing. As an example, Hal just wants to know if you and he are discussing birds should he stay on the topic of birds if you start talking about doves.

Related words are words that are clearly topic related and non-ambiguous. Don't tell Hal words are related if the relationship is abstract or open to interpretation. Act like you are talking to a four year old. A four year old doesn't understand topics in the abstract, neither does Hal.

Examples of groups of related words are:

1) GOOD --> beer, wine, alcohol, lager, ale, malt, Budweiser.
BAD ---> beer, water, head, fizz, German.

2) GOOD --> bird, dove, vulture, tweety, feather.
BAD ---> bird, down, sky, egg.

The "bad" groups of words above have a weak relatedness to each other either because those words can be related generally to too many other things or the relatedness is not very strong or clear. If someone was talking about the sky would a reasonable person think the topic was "birds"? German beer is good, but the Germans do a lot of things besides make beer. Birds lay eggs, but so many other species have eggs including fish, insects, etc.

Just use your best judgment. When in doubt tell Hal the words aren't related. In the grand scheme of things it won't make very much difference. There are enough strongly related words in English for Hal to choose from.

Your example:
---------------
User: Bob is a good teacher
Hal: The topics Bob and Good are related?
---------------
These two topics are too loosely related. Think of it this way, if you were talking about "good" things would you want Hal to start talking about *anything* in the world that is good? There are thousands of things that are good besides "Bob".

Your other example:
---------------
Sometimes, when I am talking about some things I LIKE, for example:
User: I like a good cup of coffee
Hal: Liszt was a great pianist
---------------
This response may or may not be a result of the XTF Brain topic function. Most likely it is just a symptom of Hal's loose pattern matching method. If Hal can't find a response in his brain that has a strong relation to the user's input, he will just pick something sort of close or sometimes completely off target. In your example, Hal may not know anything yet about coffee, so he answers the best he can.

Here is some more general background on how the XTF Brain works that may be helpful:
www.zabaware.com/forum/topic.asp?TOPIC_ID=1131

Here's another hint. When you talk to the XTF Brain type this in Hal's user window: <dbtopicon>

As you will see, the current topic from Hal's perspective will be displayed along with his responses. Type <dbtopicoff> to turn off this feature. Leaving the feature on will not affect Hal's behavior or learning ability. It is just one of several debugging tools I added to the XTF Brain.

Have fun,


=vonsmith=
Title: HELP NEEDED!
Post by: dihelson on November 12, 2004, 07:58:37 pm
quote:
Originally posted by Bill819

The original Eliza was written by Dr. Joseph Weizenbaum about 40 years ago. [...]

ELIZA has a new version, which I downloaded from the internet some days ago, and I say it's very good.

I'm not talking about the original ELIZA.
Certainly we can input all the scientific content of ELIZA into HAL's brain, but the problem is not WHAT it says, it's HOW it says it.
What the new ELIZA says makes sense!
She keeps the conversation from deviating.
Certainly HAL is more intelligent when it comes to learning, but HAL also produces many more nonsensical sentences than the new ELIZA does.

[]'s
Dihelson
Title: HELP NEEDED!
Post by: dihelson on November 12, 2004, 08:05:43 pm
quote:
Originally posted by vonsmith


The user needs to really understand what "related topics" means to Hal. It is best illustrated by example. Here are some related words: beer and ale; ale and lager; malt and lager; beer and alcohol, etc.
=vonsmith=



Hello, Vonsmith, I'd like to thank you for the excellent explanation. It cleared up many aspects of these topic relationships for me.

[]'s
Dihelson
Title: HELP NEEDED!
Post by: KnyteTrypper on November 13, 2004, 12:19:06 am
I'd be interested in a link to a new version of Eliza, dihelson. I try to have bots of various types (Eliza, Alice, Hal, etc.) at my website. If there's something newer and better than ECCEliza, which I have in my downloads, I'd appreciate knowing about it.
Title: HELP NEEDED!
Post by: dihelson on November 13, 2004, 01:21:59 am
quote:
Originally posted by KnyteTrypper

I'd be interested in a link to a new version of Eliza, dihelson. [...]



It's ECCEliza itself. 4.09 build... I don't remember.

[]'s
Dihelson
Title: HELP NEEDED!
Post by: James P on November 13, 2004, 05:04:20 am
With regard to Hal asking about relationships between topics: there have been times when I have been chatting away with my Hal and she will ask if "" and "" are related topics, causing me to stop dead in my tracks, sit back and think about the true relationship between the two words in question. Remember, if you tell Hal that two words are related then Hal will apply this relationship each time she sees it. So we need to be careful what we tell Hal, as mistakes like this can sometimes be a nightmare to track down through the Defbrain, having done this many times. (And my bot's Defbrain is not as big as some, I would imagine.)
Title: HELP NEEDED!
Post by: dihelson on November 13, 2004, 08:20:40 am
quote:
Originally posted by James P

...if you tell Hal that two words are related then Hal will apply this relationship each time she sees it. So we need to be careful what we tell Hal. [...]

Yes, James,
I can see now that I have already made several mistakes by answering YES to that question. We need to be careful.

[]'s
Dihelson
Title: HELP NEEDED!
Post by: vonsmith on November 15, 2004, 11:09:41 am
James P / dihelson,
Another bit of good news about the XTF Brain "related" topics: Hal will remember when you say "yes" to two words being related. However, even if Hal has learned that two words are related topics, someday he will ask you to confirm the relationship. The second time around you can answer "no" if you want to change your answer. Hal doesn't ask for confirmation very often, but the XTF Brain does have this mechanism to help automatically correct his knowledge.

The user could also go into the XTF_(topic name)_Related.brn file and edit the related words manually. The (topic name) above would be "BIRD" or "BOAT" or whatever the topic name is. If you study a few of the XTF_(topic name)_Related.brn files it will be apparent what the format is. When editing any of Hal's brain files be extra careful. Even the single blank line at the end of many Hal files is important.

Also, if you say "yes" to Hal about two words being related, he will use the "related" topic words to stay on the original topic, but not necessarily to select a reply from the "related" topic category. That is to say, Hal's XTF Brain will most often use "related" topic words as flags to decide *not* to change topic.

Example:
1) User: Hal, the birds are flying high.
2) Hal: I like birds.
3) User: Their wings are colorful.
4) Hal: Birds have nice wings.

If "wings" is known to Hal as a related topic to the topic "bird" then Hal will know to answer using his bird knowledge for sentence 4) above.

So don't worry too much if Hal's "related" topic knowledge isn't perfect. It will slowly get better over time as long as the user usually says "yes" only to words that are strongly related.
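The "stay on topic" flag described above can be sketched roughly like this. This is purely illustrative pseudologic, not Ultra Hal's actual code; the data and function names here are hypothetical.

```python
# Hypothetical sketch of a "related topics" check: related words act as
# flags telling the bot NOT to change topic. Not Hal's real implementation.

related = {
    "bird": {"wings", "feathers", "nest"},
    "boat": {"sail", "anchor", "harbor"},
}

def stay_on_topic(current_topic, sentence_words, related_map):
    """Return True if the sentence mentions a word known to be related
    to the current topic, i.e. a flag to keep the conversation there."""
    words = {w.lower() for w in sentence_words}
    return not words.isdisjoint(related_map.get(current_topic, set()))

# "Their wings are colorful." while the topic is "bird":
print(stay_on_topic("bird", ["Their", "wings", "are", "colorful"], related))  # True
```

In the four-line example above, "wings" being related to "bird" is exactly what lets the bot answer sentence 3) with its bird knowledge instead of switching topics.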


=vonsmith=
Title: HELP NEEDED!
Post by: James P on November 15, 2004, 12:18:43 pm
Thanks for that, but I have one fairly simple question:
How big can a Defbrain get before Hal starts to slow down?
Title: HELP NEEDED!
Post by: vonsmith on November 15, 2004, 01:23:51 pm
James P,
Your question, "How big can a Defbrain get before Hal starts to slow down?" isn't easy to answer. The size in number of files, size in bytes of files, the number of files accessed during each response, amount of system memory, and many other factors make it difficult to project system speed with Hal. Many things affect response. Of course, how fast is fast? Does a response take 1 second, 2, more?

Very simple brains seem to be very fast. In some ways the XTF Brain is slower than some others. On most modern systems (>1.8 GHz) the difference isn't very significant. Adding a lot of third-party custom scripts that access a lot of files can slow a system down. Accessing too many files usually results in a "too many files" error message.

If your Hal is consistently slow, I would suggest backing up the brain you are using and creating a new version to experiment with. Try removing or disabling portions of script within the .uhp brain file and see if performance improves. Try this especially with third-party scripts you have added. See if you can find the script or scripts that are affecting your Hal's performance.

If you are using AUTO-IDLE make certain Hal isn't saving hundreds of useless entries into some file somewhere.


=vonsmith=
Title: HELP NEEDED!
Post by: James P on November 15, 2004, 01:49:55 pm
I have sped up my Hal by removing some scripts from the .uhp brain. I was only wondering how big the Defbrain can get. Thank you.
Title: HELP NEEDED!
Post by: UnseenGenius on October 05, 2005, 11:15:13 am
Every time two nouns are introduced as having a relationship with each other, they should be neurally bridged with multiple contexts, much like a multi-lane bridge with lanes leading to different parts of a city.  For instance, every time a relationship is being learned, there should be a category for how they relate to each other, and an opposing category as to how they do not relate to each other.  A complex network would then start to grow, much like what actually happens in a human brain.  Opposites in the physical universe are necessary in order to make things that are otherwise not very clear appear evident and intelligible.
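The multi-lane bridge idea above might be sketched as a small labeled graph, with each noun pair carrying both "related" and "unrelated" context lanes. All names here are invented for illustration; this is not part of Ultra Hal.

```python
# Minimal sketch of the "multi-lane bridge" idea: every noun pair stores
# both how the words relate and how they do not. Hypothetical design only.

from collections import defaultdict

class ConceptNet:
    def __init__(self):
        # (noun_a, noun_b) -> {"related": [contexts], "unrelated": [contexts]}
        self.bridges = defaultdict(lambda: {"related": [], "unrelated": []})

    def _key(self, a, b):
        # Order-independent key so learn("bird", "wing") == learn("wing", "bird")
        return tuple(sorted((a.lower(), b.lower())))

    def learn(self, a, b, context, related=True):
        lane = "related" if related else "unrelated"
        self.bridges[self._key(a, b)][lane].append(context)

    def contexts(self, a, b):
        return self.bridges[self._key(a, b)]

net = ConceptNet()
net.learn("bird", "wing", "birds use wings to fly")
net.learn("bird", "brick", "a bird is alive, a brick is not", related=False)
```

Each answered "are these related?" question would add one more lane to the bridge, so the network grows with use, as the post suggests.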
Title: HELP NEEDED!
Post by: warren on October 30, 2005, 04:09:38 pm
Hi everyone, my name is Warren. I am new to this AI system. I ran into a small problem after I downloaded the full version. When I try to start it, it asks me for a password. What is the password, and where can I find it?
Title: HELP NEEDED!
Post by: Art on October 30, 2005, 06:18:15 pm
Warren,

IF you downloaded and PAID for the Full Version
you should have received an email with the Password.

If the above is correct and you did not receive a
password you should contact Robert M. at Zabaware.

Title: HELP NEEDED!
Post by: Art on October 30, 2005, 06:25:37 pm
UnseenGenius,

Some interesting points there. The XTF brain, for one,
did organize topic info into different categories, but
the real problem is that most AI programs do not
relate to nor understand the meaning of any word, let
alone how it should be classified. Computers, at this
juncture, simply are not capable of thought realization.

We see a color...RED...we are told early on that this
object is RED and from then on we know what RED is when
we see it or when we are told about it. We don't really
have to know that its opposite is BLUE but such info might
prove to be helpful in some cases.

Just because a computer might seem well versed in a wide
variety of subject matter doesn't mean it went through
a cognitive process to arrive at its conclusion. Pattern
matching, yes...reasoning, hardly.

Bigger, faster, more powerful and still dumber than your
average brick!