Show Posts



Topics - dcgreenwood

I am training Hal to become an expert in a particular wargame whose rules are very complex and hard for me to remember in full.  I've been using dialogue to train it, typing in sentences with particular facts and rules.  I've discovered, though, that when Hal asks me a question, I need to stop entering new facts and answer the question asked (and often edit it out later), or I get nonsensical relationships.

But when I turn off learning and test Hal, the answer to my question is often preceded by a conversational sentence with little relation to what I asked (I can tell these come from tables outside the user-learning ones).  I've looked at the script, but I can't figure out how it chooses those additional sentences, or how it decides to answer with one, two, or three sentences.  Can someone explain this?
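To make the question concrete: I imagine the script doing something like the sketch below, picking the top-scoring candidate sentences above some relevance cutoff.  This is purely hypothetical Python (the function name, threshold, and three-sentence cap are my guesses, not Hal's actual script), just to show the kind of mechanism I'm asking about:

```python
def assemble_reply(candidates, threshold=0.5, max_sentences=3):
    """Hypothetical reply assembler, NOT Hal's real code.

    candidates: list of (sentence, relevance_score) pairs drawn from
    the various response tables.  Sentences scoring at or above the
    threshold are kept, best first, up to max_sentences of them --
    which would explain replies of one, two, or three sentences.
    """
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    chosen = [s for s, score in ranked if score >= threshold]
    return " ".join(chosen[:max_sentences])
```

With a mechanism like this, a loosely related conversational sentence would appear whenever some off-topic table entry still scores above the cutoff.  Is that roughly what the script is doing?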

Second, can anyone explain exactly how relevance is determined?  I can tell that it is matching words from the question against the answers, but is it just counting the number of matches, or is it computing what percentage of the question's words are matched?  If it is the former, then long questions with lots of related words are best; if the latter, then it is best to use questions with as few words as are needed to reliably find the response.  Also, is it taking word order into account?
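Here is a sketch of the two scoring schemes I mean, in plain Python (not Hal's actual script); both treat the words as unordered sets, so neither takes word order into account:

```python
def count_score(question, answer):
    """Raw match count: number of question words that appear in the answer."""
    q = set(question.lower().split())
    a = set(answer.lower().split())
    return len(q & a)

def percent_score(question, answer):
    """Percentage scheme: fraction of the question's words that are matched."""
    q = set(question.lower().split())
    a = set(answer.lower().split())
    return len(q & a) / len(q)
```

The two schemes favor opposite training styles.  Against the stored answer "heavy infantry movement allowance is halved in mud", the long question "what is the movement allowance of heavy infantry in mud" wins under count_score (7 shared words vs. 3), while the terse question "heavy infantry mud" wins under percent_score (every one of its words is matched, for a perfect 1.0).  That is why I'd like to know which one Hal actually uses.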
