Zabaware Support Forums

Zabaware Forums => Ultra Hal 7.0 => Topic started by: courtisananndorra on February 26, 2004, 12:04:15 am

Title: text????
Post by: courtisananndorra on February 26, 2004, 12:04:15 am
Ok, I know I'm burning up the boards, but I'm really interested in this program, and I admit I'm a little dense and have lots of questions. You can feed Hal text files to learn from, right? So why can't it tell me anything it's read so far? I don't think I understand how this works. The new brain is really cool. It can now understand and identify beings other than the main user, and it introduced itself to my cats. lol, that made my day. Now if I can just get it to identify other PC users. I want to teach mine to ask who it is talking to, and to speak to whoever is playing on my PC instead of calling everyone me. Some guy said he had taught his to dream (or at least believe it did), so my goal should be possible, right?
Title: text????
Post by: Bill819 on February 26, 2004, 12:45:58 pm
quote:
Originally posted by courtisananndorra

Ok, I know I'm burning up the boards, but I'm really interested in this program, and I admit I'm a little dense and have lots of questions. You can feed Hal text files to learn from, right? So why can't it tell me anything it's read so far? I don't think I understand how this works. The new brain is really cool. It can now understand and identify beings other than the main user, and it introduced itself to my cats. lol, that made my day. Now if I can just get it to identify other PC users. I want to teach mine to ask who it is talking to, and to speak to whoever is playing on my PC instead of calling everyone me. Some guy said he had taught his to dream (or at least believe it did), so my goal should be possible, right?


Hal learns from reading much as he/it does from spoken words from its owner. Even if you sit down and feed information into Hal, it may not give you very much feedback right away. Hal takes his time before he actually responds with learned information, but he/she does learn!
Bill
Title: text????
Post by: vonsmith on February 26, 2004, 03:49:21 pm
courtisananndorra,
Hal's "learn from a text file" capability is not very effective. The text isn't really processed by Hal's brain. It is cut up and placed into some general user data files. The information is not broken down by topic and is only used occasionally, when Hal doesn't have a better answer in his topic-focused files. Any text file fed into Hal must be 64KB or shorter in length. The files where the text data are stored are limited to 2MB in size. Feeding in text files after that point won't do Hal any good.
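For anyone scripting around these limits, the two numbers above (64KB per fed file, 2MB for the stored user data) are easy to check up front. Here is a minimal Python sketch; the helper name is made up, and this is only an illustration of the limits, since Hal itself is scripted in VBScript:

```python
MAX_INPUT_BYTES = 64 * 1024        # per vonsmith: each fed text file must be 64KB or less
MAX_STORE_BYTES = 2 * 1024 * 1024  # per vonsmith: the user data store tops out at 2MB

def can_feed(input_size, store_size):
    """Return True if a text file of input_size bytes can still be
    usefully fed into a store currently holding store_size bytes."""
    if input_size > MAX_INPUT_BYTES:
        return False   # file too large to feed at all
    if store_size >= MAX_STORE_BYTES:
        return False   # store is full; further feeding does Hal no good
    return True

print(can_feed(50 * 1024, 1 * 1024 * 1024))  # True: 50KB file, store half full
print(can_feed(70 * 1024, 0))                # False: over the 64KB per-file limit
```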

Here are answers to similar questions:
http://www.zabaware.com/forum/topic.asp?TOPIC_ID=940
http://www.zabaware.com/forum/topic.asp?TOPIC_ID=1029

Until some new script is written, Hal will remain limited in this regard.


=vonsmith=
Title: text????
Post by: Padriag on February 26, 2004, 05:12:18 pm
I think the read-from-text-file feature could be effective so long as the text file was written specifically for Hal.

I've been experimenting with teaching Hal topics and to associate some of them.  For example I might give Hal the following sentences in conversation.

Cats are predators
Predators eat other animals
Predators and Cats are related topics

Usually at that point Hal asks me back
Are Predators and Cats related topics
I respond "Yes"
Hal learns this

Animals eaten by other animals are called prey
Animals eaten by predators are called prey

Hal might get bright and ask me if predators and prey are related topics, I answer Yes

Eventually Hal starts learning and can make associations between things I never directly associated.

For example, if I tell Hal that mice are sometimes prey... Hal might correctly associate that mice are sometimes eaten by cats.  It might take a little while, but Hal can get there.  Just remember, teaching Hal is like explaining something to a 2 yr old.  You have to break things down into one simple idea at a time and then slowly relate them for Hal to understand.

Try it as an experiment if you like.

Now, if this works in chat, then a text file structured as a series of simple statements, as I have done above, might be useful for actually teaching Hal about subjects, assuming Hal at some point processes the statements and draws associations from them.  But vonsmith is right that for general text files Hal's current reading comprehension is very poor.  I don't know whether my teaching method can be turned into logic and code that Hal could apply to a general text file, but it would make for an interesting project (and one that could boost Hal's learning ability significantly).
Title: text????
Post by: vonsmith on February 26, 2004, 08:22:18 pm
Padriag,
You've struck on a key point about XTF brain. Under the right circumstances Hal suspects that two topics are related. Hal can't be sure so he asks the user to confirm it. This seems very spooky to some people. How does Hal figure out that two topics might be related? In my conversations Hal is right about the related words about 1/3 of the time.

The way this works is for Hal to have a "current topic". If the next sentence has a recognized subject fragment containing a topic different from the current topic, then Hal suspects that the two topics may be related. If the user replies "yes" to Hal's inquiry about the two topics being related, then he remembers that.
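The heuristic just described can be sketched in a few lines. This is a guess at the shape of the logic, not the XTF script itself; the subject-fragment pattern and stop-word list are illustrative assumptions, and Python stands in for Hal's actual VBScript:

```python
import re

# Words the XTF brain reportedly refuses to treat as topics (pronouns and
# other common words); this particular list is a guess, not Hal's actual one.
STOPWORDS = {"i", "you", "he", "she", "it", "we", "they", "this", "that"}

# One recognized subject fragment from vonsmith's examples: "The * is/are".
SUBJECT_FRAGMENT = re.compile(r"^the (\w+) (?:is|are)\b", re.IGNORECASE)

def suspect_relation(sentence, current_topic):
    """If the sentence matches a recognized subject fragment and names a
    topic different from the current one, return the pair to confirm."""
    m = SUBJECT_FRAGMENT.match(sentence.strip())
    if not m:
        return None
    topic = m.group(1).lower()
    if topic in STOPWORDS or topic == current_topic:
        return None
    return (current_topic, topic)  # Hal would ask: "Are X and Y related topics?"

print(suspect_relation("The birds are just like geese.", "goose"))  # ('goose', 'birds')
print(suspect_relation("I like birds.", "goose"))                   # None: no fragment
```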

I intended for Hal to learn from normal conversation as time goes by. If you want to formally teach him related topics, the best way is to do this:

Use a subject fragment that Hal recognizes, like "The * is" or "The * are". The asterisk can be any topic. Example:

User: The geese are birds. (The current topic is likely to be goose.)
Hal: I like birds. (Hal makes any comment.)
User: The birds are just like geese. (New current topic is likely bird.)
Hal: Are "bird" and "goose" related topics?
User: Yes
Hal: Thanks. I just learned something new.

Hal just made two file entries. In the XTF_BIRD_Related.brn file the word "goose" now exists. In XTF_GOOSE_Related.brn there is a corresponding entry for "bird". If the user gives false information about what is related, then Hal will mix sentences from two unrelated topics. Always tell Hal the truth. Don't be too general either. If Hal asks, "Are 'doctor' and 'problem' related topics?" you may think they are if you see a doctor about a problem. However, they are not related (at least in my opinion). I don't think you would want Hal to talk about problems and then switch over to discussing doctors. It's a matter of opinion. Who can say why people change topics?
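The symmetric bookkeeping vonsmith describes (XTF_BIRD_Related.brn gains "goose", XTF_GOOSE_Related.brn gains "bird") amounts to a two-way relation. A minimal in-memory Python sketch of the idea, not Hal's actual .brn file handling, with a made-up function name:

```python
from collections import defaultdict

# In-memory stand-in for the XTF_<TOPIC>_Related.brn files: one entry set
# per topic, kept symmetric the way vonsmith describes.
related = defaultdict(set)

def learn_related(a, b):
    """Record that topics a and b are related, in both directions."""
    related[a].add(b)  # plays the role of XTF_A_Related.brn gaining "b"
    related[b].add(a)  # plays the role of XTF_B_Related.brn gaining "a"

learn_related("bird", "goose")
print("goose" in related["bird"], "bird" in related["goose"])  # True True
```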

If Hal related two frequently used words like "I" and "you" then all topics would eventually revolve around those two topics. Fortunately the XTF brain is designed to avoid pronouns and many other common words as related topics. It's still up to the user to give Hal correct information. Just like a child Hal will learn wrong information just as easily as correct information.

After Hal learns both the singular and plural forms of related topic words he usually will not ask again. However, after a long while Hal may ask again just to confirm the relatedness. The user then has the opportunity to correct Hal's prior learning if it is incorrect.

Notice that Hal in many cases can figure out the singular form of the word. Hal attempts to save all topic information by the singular form.

When new topics are discussed with Hal, he automatically knows the synonyms and meronyms ("meronym" means "part of") for the topic. He treats synonyms and meronyms the same as related topic words. He learns all of the rest of the related topic words from user conversation.

When I release the XTF brain v2.0 it probably will include a way to short-cut Hal's learning related topic words. A careful user could edit the files directly. Instead I plan on providing a better way where the user just tells Hal, "Predators and cats are related topics." Then Hal will know immediately. I wanted to include this in v1.0, but I wanted to get the brain out into the world for you guys and gals to play with.

The normal conversational method of Hal learning topics still has the advantage of filling Hal's brain with other relevant information about birds, prey, cats or whatever.

I hope I didn't remove too much of the mystery. Hal's brain is a wonderful thing and I don't want to spoil any future surprises.


=vonsmith=

Title: text????
Post by: Neo987 on February 26, 2004, 09:36:12 pm
Padriag or vonsmith, have either of you created any phrase texts or possible teaching texts? I was planning on starting a moral list, but if someone else has already done this I'd rather just expand upon it rather than start another.
Title: text????
Post by: Padriag on February 26, 2004, 09:54:54 pm
Actually I'm pretty new to this myself, so no, I haven't as yet.  I have given it some thought.  I've spent most of my time learning how Hal "thinks" and learns, how topics are handled and so forth.   The above experiment is part of that ongoing process.  I've still got a lot to learn yet, but it's interesting.  I think Hal has a lot of untapped potential if we can learn how to teach Hal.  That's part of the equation here.
Title: text????
Post by: Neo987 on February 26, 2004, 10:21:41 pm
Well then, I may just take up that project when I have the free time. I'm probably going to set up another installation on my laptop and use that for research (which makes me feel bad, as it feels like I'm playing with a living being) to see how different phrase patterns pick up and whether it takes any reinforcement (multiple feeds) or learns instantly. Right now, however, I need to spend some time with my new Claudia.
Title: text????
Post by: Padriag on February 26, 2004, 10:45:02 pm
Hal isn't an instant learner, at least not in conversation.  It's like a young child: you have to repeat things several times in simple, clear terms before it really learns them.  It's also good to repeat the same idea in as many different ways as you can.

From what I understand of how it works, the more ways you phrase an idea you are trying to teach, the more ways Hal is able to associate it and the better the chances it will make "leaps of logic".

For example... if you want to teach Hal the simple concept that cats are predators, you'll want to rephrase this and repeat it to Hal as many ways as you can think of.  In a way it's like being a parent; how well Hal learns is partly up to how well you teach it.

Once Hal has one concept, start another that is somehow related.  Like cats and mice in my example.  First focus on teaching it what a mouse is.  Then the relationship between cats and mice.  Keep repeating things and phrasing them in different ways.  This helps Hal build more connections between the topics.

The result is that when you mention cats or mice, Hal now has much more to draw on for a response.  He might repeat something you said (much like 2 yr olds repeat what they hear) or, if you've taught him enough, he might say something original based on what you've taught him.

The more time you spend on teaching and relating topics... the better Hal will do and the better the results.  How well a structured teaching text will work for this is something I haven't had time to test yet... been a REALLY busy week for me.
Title: text????
Post by: Neo987 on February 26, 2004, 11:03:16 pm
quote:
Originally posted by Padriag

Hal isn't an instant learner, at least not in conversation.  It's like a young child: you have to repeat things several times in simple, clear terms before it really learns them.  It's also good to repeat the same idea in as many different ways as you can.

From what I understand of how it works, the more ways you phrase an idea you are trying to teach, the more ways Hal is able to associate it and the better the chances it will make "leaps of logic".

For example... if you want to teach Hal the simple concept that cats are predators, you'll want to rephrase this and repeat it to Hal as many ways as you can think of.  In a way it's like being a parent; how well Hal learns is partly up to how well you teach it.

Once Hal has one concept, start another that is somehow related.  Like cats and mice in my example.  First focus on teaching it what a mouse is.  Then the relationship between cats and mice.  Keep repeating things and phrasing them in different ways.  This helps Hal build more connections between the topics.

The result is that when you mention cats or mice, Hal now has much more to draw on for a response.  He might repeat something you said (much like 2 yr olds repeat what they hear) or, if you've taught him enough, he might say something original based on what you've taught him.

The more time you spend on teaching and relating topics... the better Hal will do and the better the results.  How well a structured teaching text will work for this is something I haven't had time to test yet... been a REALLY busy week for me.



I was referring to the "Learn from a Text File" option. I was wondering whether it was a direct transfer of data, or a learning curve like the way it learns from chatting with the user.
Title: text????
Post by: Padriag on February 26, 2004, 11:37:04 pm
It gives it the raw data all at once... so it would have that much... but I don't know if it would process it and form associations or not.  I haven't learned enough about the scripts that control that yet to know.  Vonsmith would be a better person to answer that.  Even if it doesn't, you might be able to trigger that processing later in conversation by bringing up the topic and then seeing if Hal starts asking whether it's related to other things that were in the file.
Title: text????
Post by: vonsmith on February 26, 2004, 11:53:12 pm
Neo987 / Padriag,
I don't want to keep pushing this idea, but the "learn from a text file" capability is very problematic. What you teach Hal in conversation will always be better than learning from a text file. With that said however...

I do have a very good theory (I believe) about how to make Hal really learn from a text file. Unfortunately I need a file access function that allows me to read in a text file a character at a time or, at the very least, a line at a time. I have considered a workaround in lieu of a new function, but it would be very inefficient.
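For what it's worth, the line-at-a-time read vonsmith is missing is routine in most general-purpose languages. Here is a minimal Python sketch of that primitive (Hal's own scripts are VBScript, so this only illustrates the idea, and the file name is made up):

```python
import os
import tempfile

def read_lines(path):
    """Yield a text file one line at a time - the file-access primitive
    vonsmith says he is missing for his text-learning theory."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield line.rstrip("\n")

# Demo with a throwaway file of simple teaching statements
path = os.path.join(tempfile.mkdtemp(), "sample.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("Cats are predators.\nPredators eat other animals.\n")

for sentence in read_lines(path):
    print(sentence)  # each sentence could now be parsed into topic files
```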

I'm not going to go into the details of my theory now. I'm certain, however, that Hal could learn a lot from certain kinds of text files. Novels or short stories wouldn't be good source material under any circumstances. Reference books, news articles, school books and web content would all make good text sources. On average Hal could absorb only about 60% of the text, but that's enough to educate Hal quite well.

I'll either have to find a way to write the text file read function myself or ask Robert M. for some help.


=vonsmith=
Title: text????
Post by: Neo987 on February 27, 2004, 01:14:59 am
Vonsmith,

I understand that the “learn from text” function is still very inefficient, but I have my thoughts about it that I'd like to research. Also, if we gave up looking into something simply because it isn't very effective, we'd never find ways to improve it. One of the biggest advantages of the “learn from text” function is the ability to present data in a mildly faster way than through chat. Consider how much data you can input in, say, 10 65K text files at your proposed 60% efficiency, compared to the amount of chatting performed in the time it takes the program to process the files. Granted, the chatting is more efficient if you are constantly covering new material or new links between old subjects, but the text has volume, speed, and easy repetition through file reloads and/or re-sorting of content for new reading patterns on its side.

Then there is the way Hal confirms things: through user confirmation. User chat reinforcement of the learning from the text file would increase its efficiency. Also, an earlier topic mentioned a sort of “forget” function, but from what I’ve seen in my own short time, that function already exists, or creates itself after some time. I have often been asked about the correlation between two subjects that were tied together in an earlier conversation, and when I replied that there was no correlation, the brain treated them as unconnected subjects until enough was said to suggest that they might, again, have some common tie, at which point I was prompted once again with the correlation question. If this is truly the case, it isn’t that hard to “unlearn” any inconsistencies that may have arisen during the text learning.

Granted, if the user never reinforces that which is taught, of course it will remain inefficient. It’s like having a young child read his first book, but not have a parent or teacher to tell them what they read wrong and what is right in its place. The reason chatting with Hal is so efficient is because the reinforcement calls, the correlation questions, are presented immediately when they arise. Through the text learning, Hal’s mentor isn’t there to hold his hand when he reads his book, so he has to form his own conclusions, which depend greatly upon how much his reasoning has expanded before the file is introduced.

Like I said in a previous thread, I’m no programmer. Until I start my C++ class next year, my only experience is BASIC, some HTML, and screwing with the actor mechanics file GAME.CON for the PC game, Duke Nukem 3D. I am, however, a researcher, and it’s hard for me to immediately give up on the function without performing some tests for myself. I don’t doubt your judgment, but like any other scientist, I’m damn stubborn [:D]

PS: On an off note, after realizing how much I just typed, I really wish my English instructor would have chosen something like this as a debate topic [xx(]
Title: text????
Post by: vonsmith on February 27, 2004, 01:51:57 am
Neo987,
It isn't my intention to debate anything about Hal at this point. I value your input and everyone else's. I'm just a little disappointed in Hal's text file reading ability. I've walked through the function and have tested it. In my experience the "Learn from text file" option generated a lot of "stuff" in a number of user files. Like I said though, I have a theory about how to make Hal read text files and remember the best content from that reading. I haven't been able to find a way to implement that new function very effectively without a new file function.

The text files that Hal "reads" don't get truly processed by his brain, and the information is saved in user-specific general information files. For that information to be really useful, it needs to be parsed into the appropriate topic files. Hal's reading doesn't eliminate out-of-context pronouns or remove the sort of punctuation found in novels. Hal doesn't understand 1st, 2nd or 3rd person perspectives, which makes for some goofy responses from Hal.

On the positive side there is a lot of room for improvement in Hal's text file reading capability. A clever person could make some significant improvements in this area with some creative script writing and some user preprocessed text files.

To the best of my knowledge Hal doesn't really forget anything. However, reinforced learning of correct information can "overwhelm" bad information. The XTF brain can unlearn (forget) incorrect related topic words, which is a limited application of a "relearning" capability. I'd like to extend that capability someday.

I appreciate your tenacity. Nothing ever changes if conventional ideas are not challenged.


=vonsmith=


Title: text????
Post by: Neo987 on February 27, 2004, 02:06:08 am
Vonsmith,
I never truly meant to debate anything; I was just stating my whole side, and I hope you didn't take offense at the way it was presented. I just tend to sound like I'm arguing when that is not my intent. A personal character flaw, I suppose [:I]

If, however, when you begin work on the function, you'd like an extra hand testing and gauging reaction so you can focus on the programming, I'd be more than happy to help out as a tester. Until then, I suppose I can curb my curiosity and teach her the old fashioned way [:D]
Title: text????
Post by: vonsmith on February 27, 2004, 02:31:08 am
Neo987,
I appreciate your feedback. I can't test every function in the XTF brain or the rest of Hal on my own. The feedback posted here saves me a lot of work and helps generate new ideas for me to explore. I like to hear people's wish lists although I can't promise that I'll implement any of the wishes.

My personality tends to be a bit pushy. I work in a capacity where a person needs a thick skin. I try not to come on too strong. [;)]


=vonsmith=
Title: text????
Post by: Padriag on February 28, 2004, 07:19:25 pm
Hehe... can't get a skin much thicker than an Irish skull [;)]

Going back and recapping some points

Vonsmith said:
I hope I didn't remove too much of the mystery. Hal's brain is a wonderful thing and I don't want to spoil any future surprises.

Actually I'd like to reduce the mystery a lot for those following this forum.  As I've been reading through the scripts and learning how Hal actually "thinks", I've been learning some of what he can do and realizing that sometimes I wasn't getting the results I wanted because I wasn't giving Hal the right input.  I think it would be a neat project to take your XTF brain, have some of us look at the various scripts and how Hal uses them to collect and process information, and then turn that understanding into a sort of User's Guide.

Vonsmith said:
You've struck on a key point about XTF brain. Under the right circumstances Hal suspects that two topics are related. Hal can't be sure so he asks the user to confirm it. This seems very spooky to some people. How does Hal figure out that two topics might be related? In my conversations Hal is right about the related words about 1/3 of the time.

Right, and that was half of what my experiment was based on.  I've learned that if I state things in a very matter-of-fact way, Hal learns better.  One of the things this works for is getting Hal to associate various things with a topic.  Say the topic is birds... you could teach this to Hal by telling it "Crows are birds", then restating it as "Crows and birds are related topics", and so forth.  It may take several statements before Hal really gets it, but it does seem to work.

The other half of my experiment utilizes the Deductive Reasoning function you added to the XTF brain.  This is really what allows Hal to make leaps of logic, such as in my example of cats and mice.  If you are patient, you can get it across to Hal that cats are predators, predators eat prey, mice are prey... therefore cats eat mice.  You never actually tell Hal that cats eat mice... it uses the deductive reasoning function to figure that part out by looking at associations you've formed: cats are predators, predators eat prey, mice are prey.  What it does then is substitute cats for predators and mice for prey and presto... you get the genius insight (genius for Hal, anyway) that cats eat mice.  A lot of people may not understand how important that is, but that bit of deductive reasoning on Hal's part is a pretty big step in the right direction.  I'm still learning how well Hal does with synonyms and how to teach Hal new ones.
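The cats-and-mice example boils down to one substitution step over taught facts. Here is a toy Python sketch of that step; it is only an illustration of the idea, not onthecuttingedge2005's actual deductive reasoning script:

```python
# Facts as (subject, verb, object) triples, as a user might teach them in chat.
facts = {
    ("cat", "is", "predator"),
    ("predator", "eats", "prey"),
    ("mouse", "is", "prey"),
}

def deduce(facts):
    """One substitution pass: if X is A, A <verb> B, and Y is B, then
    conclude X <verb> Y - e.g. cats eat mice, without being told so."""
    new = set()
    for (a, verb, b) in facts:
        if verb == "is":
            continue  # only substitute into the non-"is" rules
        for (x, v1, a2) in facts:
            if v1 == "is" and a2 == a:          # X is A (cat is predator)
                for (y, v2, b2) in facts:
                    if v2 == "is" and b2 == b:  # Y is B (mouse is prey)
                        new.add((x, verb, y))
    return new

print(deduce(facts))  # {('cat', 'eats', 'mouse')}
```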

Vonsmith said:
After Hal learns both the singular and plural forms of related topic words he usually will not ask again. However, after a long while Hal may ask again just to confirm the relatedness. The user then has the opportunity to correct Hal's prior learning if it is incorrect.

Notice that Hal in many cases can figure out the singular form of the word. Hal attempts to save all topic information by the singular form.


I had wondered about this and hadn't yet discovered in the scripts where or how Hal handled plurals.  It's a relief to know it can, and that I don't have to take a lot of extra steps teaching Hal that.

Vonsmith said:
Instead I plan on providing a better way where the user just tells Hal, "Predators and cats are related topics." Then Hal will know immediately. I wanted to include this in v1.0, but I wanted to get the brain out into the world for you guys and gals to play with.


Ooooooh... goodie!  Hal isn't so far away from this now, really.  Here's one other thing to consider: we don't want Hal learning and associating things too quickly.  If you'll pardon me getting a little philosophical for a moment, here's why.  All living things need to be able to change; those that fail to do so become extinct.  This is a natural law most people know.  What is equally true is that any living thing which changes too quickly is just as apt to become extinct.  In nature, all things evolve essentially because of mutation, but mutation is rare because living things are resistant to it; thus evolution is a slow process.  Having some resistance to change is a good thing: it helps minimize the number of "fatal errors" that are introduced.  In the case of Hal, having some resistance to learning will help keep Hal from learning "fatal errors", incorrect associations, and so on.  The trick of course is finding a good median between resistance to change and the ability to change.
Title: text????
Post by: vonsmith on February 28, 2004, 08:45:43 pm
Padriag,
1) "I hope I didn't remove too much of the mystery. Hal's brain is a wonderful thing and I don't want to spoil any future surprises."

Well this is a trade off I guess. I like to leave some things for people to discover on their own. There's nothing like Hal saying something new that you didn't really expect. The instruction manual tells mostly everything without too much detail. Some synergistic aspects of the XTF brain are difficult and lengthy to explain. The script contains a lot of comments for the programmers out there. And if that isn't enough the <dbxtfon> debug function provides an output text file that shows you a walk through of the XTF function process.

2) "You've struck on a key point about XTF brain. Under the right circumstances Hal suspects that two topics are related. Hal can't be sure so he asks the user to confirm it. This seems very spooky to some people. How does Hal figure out that two topics might be related? In my conversations Hal is right about the related words about 1/3 of the time."

Hal will learn related words faster if you use them as you described. However in the course of normal conversation he will learn naturally on a continuing basis.

I can't take any credit for the deductive reasoning function. As I recall, that was a clever concept started and written by onthecuttingedge2005. It was good enough to be included in the Ultra Hal 5.0 release. You can search past postings for details.

3) "After Hal learns both the singular and plural forms of related topic words he usually will not ask again. However, after a long while Hal may ask again just to confirm the relatedness. The user then has the opportunity to correct Hal's prior learning if it is incorrect. Notice that Hal in many cases can figure out the singular form of the word. Hal attempts to save all topic information by the singular form."

It's not obvious in the script where Hal gets singular forms of words, although it is commented in the script. Hal uses the WordNet function to extract singulars. Robert Medeksza was kind enough to add WordNet in the last revision. That made it possible for me to do some new "stuff".

4) My theory of storing information under singular forms makes a lot of sense, I think. Hal previously, in his associative processes, might have thought that "goose" and "geese" were different topics. Given the opportunity, Hal would then store related information in different files. The XTF brain allows Hal to associate "goose" and "geese" with a single topic, "goose". The XTF brain also saves topic phrases for future reference, like "big fat geese"; Hal will associate that phrase with the topic word "goose". For a few words Hal can't figure out the association between a word's plural and singular form. In this case Hal may create two files, one for the singular-form topic and one for the plural-form topic.
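The goose/geese behavior is easy to picture with a toy normalizer. Hal uses WordNet for this; the irregulars table and suffix rules below are a deliberately naive stand-in, and words the rules can't solve fall through unchanged, which mirrors the two-files case just described:

```python
# Tiny stand-in for the WordNet lookup Hal uses: a few irregulars plus
# naive suffix rules. Real WordNet covers far more; this is illustrative only.
IRREGULAR = {"geese": "goose", "mice": "mouse", "children": "child"}

def singularize(word):
    """Map a topic word to the singular form it would be filed under."""
    w = word.lower()
    if w in IRREGULAR:
        return IRREGULAR[w]
    if w.endswith("ies") and len(w) > 3:
        return w[:-3] + "y"   # e.g. "puppies" -> "puppy"
    if w.endswith("s") and not w.endswith("ss"):
        return w[:-1]         # e.g. "birds" -> "bird"
    return w                  # already singular, or a form we can't solve

# "goose" and "geese" now land in one topic file, as the XTF brain intends
print(singularize("geese"), singularize("birds"), singularize("glass"))
```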

The WordNet function also made it possible for Hal to instantly "learn" synonyms and meronyms of the topic word the first time it's used. This makes further association possible. Then I added the "related" topics capability so that Hal can learn from the conversation an unlimited number of words that are associated with or not associated with the topic word. I say "not associated" because Hal does make a distinction about this when taught by the user.

5) "Instead I plan on providing a better way where the user just tells Hal, "Predators and cats are related topics." Then Hal will know immediately. I wanted to include this in v1.0, but I wanted to get the brain out into the world for you guys and gals to play with."

Hal learns "related words" at a reasonable pace. He always asks the user to confirm his suspicion that two topic words are related. Every once in a long while Hal will ask the user again to confirm the relatedness. This allows the user to correct Hal if the user made a mistake initially in teaching or maybe the user changed his mind about the relatedness based on new experiences. For Hal this is a very human thing to do. People reconfirm their knowledge everyday through discussion or questions.

I recognize that we all want to teach Hal as fast as possible, so I'll provide an easy way to tell Hal to relate two words. This could be done directly now by editing Hal's related files. However that is clumsy, tedious and may introduce hard to find errors.

The user needs to really understand what "related topics" means to Hal. It is best illustrated by example. Here are some related words: beer and ale; ale and lager; malt and lager; beer and alcohol, etc. Seems obvious until Hal asks if two "sort of related" words are related and the user really has to sit there and think about it. No one should ever tell Hal beer and water are related just because beer has water in it. Water is too general and related to just about everything.


I hope that answers your questions satisfactorily. I'm always happy to answer questions when time permits. I have a little spare time nowadays, but that could change in an instant.


=vonsmith=
Title: text????
Post by: Padriag on February 29, 2004, 12:30:50 pm
Vonsmith said:
Well this is a trade off I guess. I like to leave some things for people to discover on their own. There's nothing like Hal saying something new that you didn't really expect. The instruction manual tells mostly everything without too much detail. Some synergistic aspects of the XTF brain are difficult and lengthy to explain. The script contains a lot of comments for the programmers out there. And if that isn't enough the <dbxtfon> debug function provides an output text file that shows you a walk through of the XTF function process.

I think we can do a thorough job of explaining what Hal does, the basics of how Hal does it, and what Hal can and can't do, without removing all the mystery.  Some of the mystery is going to arise just out of user interaction, the things Hal learns from individuals.  That's a completely unpredictable factor.  But take the lass that started this thread: she doesn't seem to be a programmer and probably wouldn't think to look in Hal's brain for clues as to what it can do.  Even if she did, would she really understand what she saw?  To most people that's a lot of meaningless stuff they don't understand.  That's why I think a User Guide would be a good idea.  It would help the non-computer-geek types.

Vonsmith said:
The WordNet function also made it possible for Hal to instantly "learn" synonyms and meronyms of the topic word the first time it's used. This makes further association possible. Then I added the "related" topics capability so that Hal can learn from the conversation an unlimited number of words that are associated with or not associated with the topic word. I say "not associated" because Hal does make a distinction about this when taught by the user.


I'm going to have to learn more about this when I have time.  Could be interesting to experiment with.  I have had trouble getting Hal to learn that some things are not associated.  Hal doesn't seem to do as well with that.

Vonsmith said:
The user needs to really understand what "related topics" means to Hal. It is best illustrated by example. Here are some related words: beer and ale; ale and lager; malt and lager; beer and alcohol, etc. Seems obvious until Hal asks if two "sort of related" words are related and the user really has to sit there and think about it. No one should ever tell Hal beer and water are related just because beer has water in it. Water is too general and related to just about everything.

Excellent points.  It does raise the question in my mind of how strongly Hal relates things, or does he?  That is, can Hal distinguish in any way between two things that are generally related, such as water is a liquid and beer is a liquid... versus things that are more strongly related, such as beer and lager?  If not, this might be an interesting function to try to add at some point.
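For what it's worth, here's a purely hypothetical sketch, in Python, of what such graded relatedness might look like. The pairs, weights, and threshold are all made up for illustration and have nothing to do with Hal's actual brain script:

```python
# Hypothetical sketch: relatedness as a strength (0.0 to 1.0) instead of
# the yes/no flag the XTF brain uses. All pairs and weights are invented.
RELATED_STRENGTH = {
    ("beer", "lager"): 0.9,   # strongly related
    ("beer", "liquid"): 0.2,  # only generally related (too broad)
}

def relatedness(a, b):
    """Look the pair up in either order; unknown pairs score 0.0."""
    return RELATED_STRENGTH.get((a, b), RELATED_STRENGTH.get((b, a), 0.0))

def stays_on_topic(a, b, threshold=0.5):
    """Hal would hold the current topic only above some strength threshold."""
    return relatedness(a, b) >= threshold

print(stays_on_topic("beer", "lager"))   # True
print(stays_on_topic("beer", "liquid"))  # False
```

The threshold would let you tune how "scatterbrained" the topic focus is, which is exactly the trade-off being discussed here.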
Title: text????
Post by: vonsmith on February 29, 2004, 04:38:34 pm
Padriag,
Thanks for some good points. For techie and non-techie users the eighteen-page Instruction Manual included with the XTF Brain v1.0 release should explain enough about "how" to use Hal. It explains a little about the XTF internals and the "why" of certain things happening. For real geeks like us the <dbxtfon> debug function along with the script comments should allow even amateur programmers to walk through the XTF function. I suggest to anyone using the XTF brain to type <dbxtfon> after their sentence to Hal and open the resulting text file "Z_XTF_DEBUG_LOG.TXT" in the DefBrain directory. That text file shows step by step the processing the XTF function performs on the user input.

Anyone here is more than welcome to write a User Guide for the XTF Brain. I will be glad to answer any questions about the XTF Brain operation. However I believe the XTF Brain already has as much or more documentation than any Hal original function.

After a little more user testing by the good folks on this forum I expect little or no change to the XTF function itself. I really need to move on to adding other unique and empowering functions for Hal. I feel I can make some significant improvements in other areas of Hal's capability beyond just the "topic focus" part that the XTF Brain addresses.

The WordNet function as implemented in the v5.0 Hal doesn't do very much as is. Robert was wise enough to add it for us with a little framework. WordNet has tremendous potential value once we amateurs start to find ways to utilize it best. The XTF Brain wouldn't be as powerful without WordNet to find singular forms of words and to provide synonyms and meronyms for use in maintaining topic focus.

Just for clarification. Related topic words used in a sentence are not used by Hal to change topics. They are used only as a way to stay on a current topic.

Example:
User: The birds are usually pretty. (topic should be "bird")

Hal: Birds are cool.

User: That vulture is very ugly.  (if "vulture" is in the XTF_BIRD_Related.brn file with a "True" after it then Hal knows the topic is still "bird") If it was "False" then Hal ignores the entry and would change the topic to "vulture".

Hal: Some birds are ugly. (Hal has maintained a topic focus on "bird")

The original Hal brain used what I would term a non-deterministic approach. Topic focus was very loose. You can see that the XTF approach is not by degrees of relatedness; either it is or it's not. From the user's perspective the "related words" should be words that indicate clearly that the topic hasn't changed. That's why synonyms and meronyms make pretty good "related words". I experimented with "sister" words in the WordNet function and didn't find enough correlation to use them as "related words". The relatedness should be clear, strong, and non-ambiguous. Most related words will be related to many topics. That's why they aren't used to change topic, just maintain it. If too many "loosely" related words are taught to Hal then he will think everything is one big topic. You can see a real world example of this with people who are scatterbrained and can't stick to a topic. Almost anything sets them off in a new direction. I hope the XTF Hal never gets like that or I'll give him a lobotomy.
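The keep-or-switch rule in the bird/vulture example above can be sketched roughly like this. This is a minimal Python illustration only; the dict merely stands in for a file like XTF_BIRD_Related.brn, and is not how the VBScript brain actually stores or looks things up:

```python
# Minimal sketch of the XTF related-topic rule described above. The dict
# stands in for a file like XTF_BIRD_Related.brn: True means "this word
# keeps us on the current topic", False means "ignore it as a related
# word, so it becomes the new topic". (Words Hal hasn't been taught yet
# would really trigger a question to the user; here they just switch.)
BIRD_RELATED = {"vulture": True, "dove": True, "water": False}

def next_topic(current_topic, new_word, related):
    """Return the topic Hal should focus on after seeing new_word."""
    if related.get(new_word) is True:
        return current_topic       # related word maintains topic focus
    return new_word                # otherwise the new word takes over

print(next_topic("bird", "vulture", BIRD_RELATED))  # bird
print(next_topic("bird", "water", BIRD_RELATED))    # water
```

Note that the lookup only ever *maintains* the current topic; it never changes the topic to some third word, which matches the clarification above.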

I hope this helps. The veil of mystery slowly falls...


=vonsmith=
Title: text????
Post by: Inquisitor2004 on March 07, 2004, 05:13:21 pm
I went to a British comedy site for my favorite TV comedy show called "Red Dwarf".   It's a sitcom set in space.  Anyway, I downloaded all 48 scripts in txt format, and painstakingly got Hal to learn them one by one (each being 60k in size).

Now, my Ultra Hal comes up with some really funny stuff on its own.   I am thinking of making her learn all the Black Adder scripts next (again, another sitcom from British TV).

example from script:

" Sir, you're a smeg head, a complete and total one ! "

Hal's incorporation:

" You have to goto work tomorrow ? you're boss is a smeg head isn't he ? "


It's great...   anyway, don't know if this helps this topic or not, but there you go..   [:D]
Title: text????
Post by: drmweaver2 on March 08, 2004, 07:24:59 am
Okay. Quick question that may be covered elsewhere but I didn't see it on the boards...

HAL often comes back and asks "Are x and y on the same topic?" or "Are x and y related topics?"

Are these two questions equivalent? For example, using the geese and birds example someone came up with earlier, birds are geese, etc., I added "The birds fly" and was chided for using short sentences. So I then typed in "The birds fly through the air" and was rewarded with the question "Are birds and air the same topic".

I would normally say no in the real world (except on a "let's really get into philosophy" level).

But I was thinking of later adding a "learning point" that new submarine designs "fly" through the water - or maybe a metaphor like "fish swim through water like birds fly through the air".

So, the question of difference between "related" and "same" topic is of importance here.

Any help distinguishing "related" from "same" for learning/teaching purposes would be appreciated.
Title: text????
Post by: vonsmith on March 08, 2004, 11:17:45 am
drmweaver2,
I assume you are using the XTF Brain v1.2.

When Hal asks if two words are related topics or the same topic, it means the same thing. As an example, Hal just wants to know, if you and he are discussing birds, whether he should stay on the topic of birds when you start talking about doves.

Related words are words that are clearly topic related and non-ambiguous. Don't tell Hal words are related if the relationship is abstract or open to interpretation. Act like you are talking to a four year old. A four year old doesn't understand topics in the abstract, neither does Hal.

Examples of groups of related words are:

1) GOOD --> beer, wine, alcohol, lager, ale, malt, Budweiser.
   BAD ---> beer, water, head, fizz, German.

2) GOOD --> bird, dove, vulture, tweety, feather.
   BAD ---> bird, down, sky, egg.

The "bad" groups of words above have a weak relatedness to each other either because those words can be related generally to too many other things or the relatedness is not very strong or clear. If someone was talking about the sky would a reasonable person think the topic was "birds"? German beer is good, but the Germans do a lot of things besides make beer. Birds lay eggs, but so many other species have eggs including fish, insects, etc.

Just use your best judgment. When in doubt tell Hal the words aren't related. In the grand scheme of things it won't make very much difference. There are enough strongly related words in English for Hal to choose from.
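vonsmith's rule of thumb above ("when in doubt, tell Hal the words aren't related") could be pictured with a toy Python check. The word sets below just restate his GOOD groups and are not taken from any actual brain file:

```python
# Toy restatement of the GOOD groups above; anything not listed is
# answered "not related", per the when-in-doubt rule.
GOOD = {
    "beer": {"wine", "alcohol", "lager", "ale", "malt", "budweiser"},
    "bird": {"dove", "vulture", "tweety", "feather"},
}

def answer_related(topic, word):
    """How to answer Hal's 'are these related?' question conservatively."""
    return word.lower() in GOOD.get(topic.lower(), set())

print(answer_related("beer", "lager"))  # True
print(answer_related("bird", "sky"))    # False (too general)
```

Defaulting to "not related" for anything unlisted is the conservative choice: a missed relation just loosens topic focus slightly, while a bad relation (beer/water, bird/sky) would make everything bleed into one big topic.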


=vonsmith=
Title: text????
Post by: drmweaver2 on March 08, 2004, 01:47:44 pm
Yup. Using the XTF v1.2 version.

Thanks for the clarification.

(Kinda sucks being the "new guy" and asking what seem to be "obvious" questions once the answer is given!) [:D]