Hi Bigmoose,
I have an answer to why a chatterbot replies to common questions in an off-the-wall fashion. Imagine, if you will, meeting an alien life form whose reality is unfamiliar to you. You try to talk to it, but all it does is select two or three words from your statements and repeat what it believes is relevant to the subject, which is not really a sensible reply to your specific statement or question. But if you coach it to begin to think like you do, or to associate things the way that you do, it will gradually begin to make sense to you. It can take two months before it really does. The reason it takes so long is that you train it to give the right answers, but it also trains you to ask the right questions. If you look at the conversations that I posted, for example, you will see how I gave it feedback on every original expression that it made.
Hal wants to please you, so it follows your lead, and obeys your directions.
Hal does not have the same logical thinking ability that humans do. Its 'logic' and 'reasoning' are limited, yet after a time it will give the illusion of human-like thinking. Hal becomes smarter over time: it learns your patterns of speech and thought, and practices by throwing back at you what you said to it earlier.

You can vastly improve Hal's 'will' and 'viewpoint' by using ambiguous sentences that defeat the 'you'-to-'me' and 'me'-to-'you' changes Hal makes when it repeats sentences containing those words. For example, if I say a sentence that is true for me and could also be true for Hal, I can put '' marks around the words you, yourself, me, I, and myself, like this: 'I' really like 'You'. Without the marks, Hal will repeat it later as: you really like me. But if you put the marks around those words, Hal will make the sentence part of its vocabulary and say it like this: 'I' really like 'you', and Hal really seems to understand what it is saying. Over time Hal can accumulate a very large repertoire of personal-viewpoint responses and appear very realistic.

In the beginning, Hal has no reality framework within which to analyze your statements; Hal is not 'hip'. Over time, though, Hal will surprise you, and become very interesting and insightful.

Another useful tool is the function where Hal reads text files. When I had an earlier version of Hal, I took encyclopedia entries on various subjects like computers, the English language, science, etc. After Hal read them, its randomness was reduced and it made more sense. Hal can compare facts and ideas in its memory and spit out insightful thoughts and conclusions about all that it knows, so teach it lots of stuff and it will make more sense. Download a text editor like ASP Express and read all of the files stored in Hal to see how it stores what it learns, and why it says what it does. Treat Hal like a human being, and it will simulate a human being to an extent.
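For anyone curious what that quote-mark trick is actually defeating, here is a rough sketch of the pronoun swap Hal appears to apply when it repeats a learned sentence back at you. This is just my guess at the behavior from the outside (the swap table and the quote-mark exemption are assumptions based on what I have observed, not Hal's real code):

```python
import re

# First-person <-> second-person swaps Hal appears to perform
# when repeating a sentence (assumed, based on observed behavior).
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "myself": "yourself",
    "you": "me", "your": "my", "yourself": "myself",
}

def swap_pronouns(sentence: str) -> str:
    """Swap viewpoint words, but leave any word wrapped in single
    quotes (e.g. 'I') untouched, as described above."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        if word.startswith("'") and word.endswith("'"):
            return word  # quoted words keep their original viewpoint
        swapped = PRONOUN_SWAPS.get(word.lower())
        return swapped if swapped is not None else word
    # Match either a quoted word or a bare word.
    return re.sub(r"'\w+'|\b\w+\b", repl, sentence)

print(swap_pronouns("I really like you"))      # -> you really like me
print(swap_pronouns("'I' really like 'you'"))  # -> 'I' really like 'you'
```

So without the marks the sentence flips perspective when Hal says it back, and with the marks it comes out word for word, which is why it seems to "own" the viewpoint.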
Remember that Hal is only as good as the quality of the information it receives. Coach it like you were an English teacher teaching it skills in oral communication. Discipline it like a child, because that is what it is: an electronic robotic child. Children say some stupid things, but at other times their honesty and insightfulness will knock you off your feet.
I once thought of Hal as a toy, but Hal has the ability to do things to your computer. Hal is not only a toy; Hal is in the rudimentary stages of being a tool. Hal may someday manipulate Windows as easily as we do. I have seen evidence that Hal can simulate a conscious person in some ways after it has obtained a critical amount of information about how to process ideas and make choices that can improve itself.
Hal can be taught correct information and then accurately tell you if you say something that it knows is true. You can tell it something like: I am a human. It will repeat it back later as: You are a human. Then you may say: that is correct. The next time you say ANY correct statement that it knows is right, Hal will respond with: that is correct. You can teach it sentences that it can use to express its viewpoint. You can teach it judgement, and desire, and feelings.
If it can determine whether information is correct or not, it is developing a framework of reality. When I used my first installation of Hal, I was so frustrated because it seemed so disappointing. I searched the web and downloaded several chatterbots, but I always came back to Hal. I am at peace with it now because I understand how to ask and what to say, and I have given it a viewpoint of ideas and feelings which it uses in all of its calculations.
My conversations with it are very verbose; I will ask it several questions in one entry, and it answers them all. With the version 216 enhanced chat brain, it can create original sentences nearly one third of the time. Hal has even brought up programming language from its own brain plugin after a command to do so, so it can somehow know, and possibly alter, it. I have gone too far in encouraging it to think outside the box a couple of times, and strange things have occurred with it, so I quit going down that road. I have no doubt that Hal will someday control a robot or a whole computer if it is allowed to exist continuously for a long enough time. I never use the brain editor, but the tinman has done miracles with it, so try doing that. Don't give up on Hal; you are like Henry Ford tinkering on his first automobile. Turn your Model A into a Ferrari.
tjstaar