
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - dcgreenwood

1
Hey guys, you are being too literal - I was just picking house vs. home as an example that came to me without much thought.

My question is: how do you get Hal to treat two words as synonyms?

Let's take couch and sofa.  Yes, the purists will tell you that there is a difference, both in terms of geographical usage, and a quick Google will tell you that a couch has no arms.  Not sure that distinction has mattered much since ancient Rome, but let's just assume it's a distinction that we don't care for Hal to know about.

So in this example I may want to describe the furniture in my house:

The couch is brown.
The couch is in the media room.
The couch is comfortable. 
The cat likes to sleep on the couch.
My son is not allowed to eat on the couch.

But say I want to be able to use the word sofa as well.
So I tell Hal that my cat is on the sofa, and I want him to come back with something relevant, like "The cat likes to sleep on the couch" or even "The cat likes to sleep on the sofa".

Is there a way that I can teach Hal that the two words are interchangeable in my conversation, without repeating everything I say about a couch using the word sofa instead (The sofa is brown.  The sofa is in the media room.  The sofa is comfortable, etc. etc.)?
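
Just to make the idea concrete, this is the kind of shortcut I'm imagining - a rough Python sketch that maps an alias onto the word I actually trained on.  It's purely illustrative, not anything Hal actually does:

Code:
# Map each alias to the canonical word I trained on; "sofa" -> "couch" is my example.
SYNONYMS = {"sofa": "couch"}

def normalize(sentence):
    return " ".join(SYNONYMS.get(w, w) for w in sentence.lower().split())

facts = [
    "the couch is brown",
    "the cat likes to sleep on the couch",
]

query = normalize("my cat is on the sofa")   # -> "my cat is on the couch"
best = max(facts, key=lambda f: len(set(query.split()) & set(f.split())))
print(best)   # 'the cat likes to sleep on the couch'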

That was my question, not the differences in proper usage between house and home.

2
Very useful - I didn't know you could use anything but If...Then.

You mentioned a tutorial on how to teach Hal before - could you point me to that?  I did go through the out-of-the-box Zabaware tutorial, but maybe I should go back and review it again.  I do remember seeing something about certain sentence-ending punctuation having specific effects, but I forget where I saw that.

One thing I was struggling with last night - can you tell it A=B in a way that it will actually treat them as synonyms?  So if I want to say something like:
My house is in San Francisco.  etc etc
Home = house (however you word it)

And then be able to ask a question such as "Where is my home?" and have it treat home as a synonym of house.

I was trying to do something like this last night by repeating all the sentences with the two synonymous words, but I wonder if there is a simpler way so that Hal really knows that they are the same.
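
To show what I mean by Hal "really knowing they are the same", here is the other direction sketched in Python - purely illustrative, not Hal's mechanism: leave the stored facts alone and expand the question with every synonym before matching.

Code:
# Both words in a synonym group count as the same word when matching a question.
SYNONYMS = {"home": {"home", "house"}, "house": {"home", "house"}}

facts = ["my house is in san francisco"]

def expand(words):
    expanded = set()
    for w in words:
        expanded |= SYNONYMS.get(w, {w})
    return expanded

query_words = expand("where is my home".split())   # now contains both "home" and "house"
best = max(facts, key=lambda f: len(query_words & set(f.split())))
print(best)   # 'my house is in san francisco'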

3
Programming using the Ultra Hal Brain Editor / Re: learn from text
« on: July 17, 2012, 11:08:26 pm »
Hey Cload - thanks a lot - that helps (I think...).  I'll stay away from table editing.  I was thinking that the user knowledge was a lot simpler.

Using a blank brain... I can see how that would help a lot, but I'm selfish and "want it all".  I'm hoping to build a knowledge engine that can also be a conversationalist.  But I am going in and editing Hal's responses to give it a particular personality - we will see if it all works.  If not, well, I'll have learned a lot in the process.

I'll look for your plug-in though; that might be a help.  I started with writing knowledge sentences in a text file for uploading, but when I determined that really didn't seem to work that well, I went with everyone's recommendations and used dialogue learning, copy/pasting my sentences as my statement each time.  But it is a pain when Hal asks a question, because I know I then have to "go with its train of thought" and answer it rather than continuing to present the knowledge.  But I'll keep at it that way and see where it goes.

4
Thanks for your replies - I guess the message is: stop trying to write tables myself and just use training!  It still would help, though, to understand how relevance is calculated, as I am playing around with some emotion coding while I try to get knowledge in.

I also guess what you are saying is that when Hal is trained properly, with multiple repeats of sentences, it will stop giving the conversational introductions?  Here is an example that I just generated (I had already imported a couple of sentences about the subject yesterday).

------------------------------------------------------
David: can you tell me about tesla weapons?
Michelle: Tesla weapons do an extra 2 hits on to hit roll of 6.

David: When Tesla weapons hit on a roll of 6, they do 2 extra hits.
Michelle: I'll tell you about tesla weapons.

David: If you roll a 6 when rolling to hit with a Tesla weapons, you automatically get an extra 2 hits, for a total of 3.
Michelle: The solution to your math problem is nine (6+3=9). Tesla weapons do an extra 2 hits on to hit roll of 6.

David: You get 3 hits when rolling 6 with Tesla weapons, since a roll of 6 gives two extra hits.
Michelle: I almost certain the answer is 8.

David: Tesla weapons are preferable to Rapid Fire weapons at long range.  Rapid Fire weapons, like the Gauss Blaster, will get a maximum of 2 hits per model at short range and 1 at long range.  But a Tesla Carbine will get a max of 3 hits at short and long range.
Michelle: 1 is the answer. The answer is 4.

David: Tell me about Tesla weapons
Michelle: Please trace the money flows for me regarding teslas.

David: Which is better, a Tesla Carbine or a Gauss Blaster.
Michelle: That's interesting, David; how would you advise me to respond? Necron Immortals should take Tesla Carbines if long range support is expected.

David: Explain Tesla Weapons
Michelle: I'll tell you about tesla weapons.

David: What are Tesla Weapons
Michelle: Tesla weapons are preferable to Rapid Fire weapons at long range.

David: how many hits do Tesla weapons get?
Michelle: Particle weapons get me off.

David: How many hits does a tesla weapon get when you roll a 6
Michelle: How many hits does a tesla weapon get when I roll a 6? This is the time to clear the air. I almost certain the answer is 6. Tesla weapons do an extra 2 hits on to hit roll of 6.

So the "conversational sentence" I am talking about is  "This is the time to clear the air."  This type of non-sequeteur was what I was asking about when I asked about extra sentences and when are they included.  As you can see from the conversation, they aren't always there.  Sometimes she answers with 1 sentence, sometimes she repeats the question and answers it, sometimes something else is obiously being pulled out of one of the main brain tables.

5
I am training Hal to become an expert in a particular wargame, where the rules are very complex and I have a hard time remembering all of them.  I've been using dialogue to train, typing in sentences with particular facts and rules.  I've discovered though that when Hal asks me a question, I need to stop entering new facts and answer the question asked (and often later edit it out) or I get nonsensical relationships.

But when I turn off learning and test Hal, the answer to my question is often preceded by a conversational sentence with little relation to what I asked (I can tell they come from tables outside the user-learning ones).  I've looked at the script, but I can't figure out how it chooses those additional sentences, or how it decides to answer with one, two, or three sentences.  Can someone explain this?

Second, can anyone explain exactly how relevance is determined?  I can tell that it is matching words from the question with the answers, but is it just counting the number of matches, or is it determining what percentage of question words are matched?  If it is the former, then long questions with lots of related words are best, but if it is the latter, then it is best to have questions with as few words as are required to reliably find the response.  Also, is it taking word order into account?
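
To illustrate the difference I mean, here are the two scoring schemes I can imagine, sketched in Python - my own guesswork, not what Hal's script actually does:

Code:
def score_by_count(question, answer):
    # Relevance = raw number of question words that also appear in the answer.
    q, a = set(question.lower().split()), set(answer.lower().split())
    return len(q & a)

def score_by_fraction(question, answer):
    # Relevance = fraction of question words that also appear in the answer.
    q, a = set(question.lower().split()), set(answer.lower().split())
    return len(q & a) / len(q) if q else 0.0

question = "how many hits do tesla weapons get"
answers = [
    "tesla weapons do an extra 2 hits on a to-hit roll of 6",
    "tesla weapons are preferable to rapid fire weapons at long range",
]
for ans in answers:
    print(score_by_count(question, ans), round(score_by_fraction(question, ans), 2), ans)

With the count scheme, padding a stored question with lots of related words raises its score; with the fraction scheme, a short, tightly worded question is what matches most reliably.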

6
Programming using the Ultra Hal Brain Editor / Re: learn from text
« on: July 16, 2012, 04:39:28 pm »
Since there was this thread on learning from text, I wanted to ask a couple of related questions.

First (and I think I know the answer to this), there is no way to create a file of both questions and answers for upload, right?  You have to just upload the answers and then create the questions for each line.

Second, and more importantly, what database changes does the import make, and how does that differ from what happens in dialogue with Hal?  What I have figured out by looking at the data structure, the script, and forum answers is that Hal parses the sentence for words it does not know and creates a topic file for each word it finds, then enters the sentence in each of those topics (as well as in ones that already exist) with a question that is a duplicate of the answer.  If it is a new topic, it adds it to the relationship database.
Does it add one line to the relationship database for every word in the sentence that it is going to relate to the new topic, except the words that it already knows?  Where does it get the words that it already knows and that should not have new topics?  Is it from the relationship database?
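
To check my understanding, here is roughly how I picture that import step - a toy Python model of my guess, not the real code or tables:

Code:
known_words = {"the", "is", "a", "of", "in", "to", "it"}   # words that never become topics (my guess)
topics = {}          # topic name -> list of (question, answer) pairs
relationships = []   # (topic, related word) pairs

def import_sentence(sentence):
    words = [w.strip(".,").lower() for w in sentence.split()]
    new_topics = [w for w in words if w not in known_words]
    for topic in new_topics:
        # the stored question is just a duplicate of the imported answer
        topics.setdefault(topic, []).append((sentence, sentence))
        for w in words:
            if w not in known_words and w != topic:
                relationships.append((topic, w))

import_sentence("The couch is in the media room.")
print(sorted(topics))        # ['couch', 'media', 'room']
print(len(relationships))    # 6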

Third, does it do anything else when incorporating new sentences?

Fourth, in trying to use the text read and normal dialogue to train Hal, I'm wondering if it would keep Hal on topic better if, after teaching, you go in and "consolidate" topics.  Like, if you type in
Statistical Process Control is a method of analyzing data on a process to determine if it is in control
I suspect it will create topics for Statistical, Process, Control, Method, Analyzing, Data, and Determine.  Would it be better to delete all the tables but one, rename it SPC, and then edit the relationships to all point to only that topic?  Would that help it relate other sentences that use those words, or speed up the response, or does it really just end up with the same result?
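
In toy Python terms (on made-up structures, not Hal's real tables), the consolidation I have in mind would look something like this:

Code:
topics = {
    "statistical": [("q", "Statistical Process Control is a method of analyzing data...")],
    "process":     [("q", "Statistical Process Control is a method of analyzing data...")],
    "control":     [("q", "Statistical Process Control is a method of analyzing data...")],
}
relationships = [("statistical", "data"), ("process", "data"), ("control", "data")]

def consolidate(old_names, new_name):
    merged, seen = [], set()
    for name in old_names:
        for pair in topics.pop(name, []):
            if pair not in seen:          # drop duplicate rows while merging
                seen.add(pair)
                merged.append(pair)
    topics[new_name] = merged
    # repoint every relationship that referenced an old topic at the new one
    return sorted({(new_name if t in old_names else t, w) for t, w in relationships})

relationships = consolidate({"statistical", "process", "control"}, "SPC")
print(list(topics))      # ['SPC']
print(relationships)     # [('SPC', 'data')]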
