I think lightspeed asks a very good question, and perhaps the point of why this matters has so far been missed.
I admire Bill819's confidence and 'trust' in Hal... but from a technical point of view, Hal has certainly not understood gift giving, as Hal does not store the meaning of words or have any such awareness. More importantly, Hal does not recognize a bare 'yes' answer to a question, and it is not useful to give one.
A 'Yes' or 'No' response has no meaning to Hal (or to the conversation), because Hal does not and cannot follow a topic; it can only respond to the last input made. Datahopa made a good point about this. To give the impression that Hal is following a topic, it is necessary to keep using the topic words.
Also, from a 'learning' point of view, the answer you give depends on how you want Hal to speak to you. Hal 'learns' simply by storing the sentences you use (or parts of them) and using a formula to say them back to you when a pattern is matched. So what Hal does, in effect, is tell you what you told Hal. A 'yes' or 'no' contains no topic words, so it cannot be used for word / pattern matching.
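To make the store-and-match idea concrete, here is a very rough sketch of that kind of mechanism. This is not Hal's actual code and the names are all mine; it just shows why a sentence with topic words can be stored and matched later, while a bare 'yes' leaves nothing to match on.

```python
# Rough sketch of a store-and-match chatbot mechanism (NOT Hal's real code).
# Words with no topic value are filtered out before storing or matching.
STOP_WORDS = {"yes", "no", "the", "a", "an", "is", "it", "i", "you", "to", "do",
              "me", "at"}

learned = []  # each entry: (set_of_topic_words, original_sentence)

def topic_words(sentence):
    """Keep only the words that can serve as topic words for matching."""
    words = sentence.lower().strip(".!?").split()
    return {w for w in words if w not in STOP_WORDS}

def learn(sentence):
    """Store the user's sentence along with its topic words."""
    words = topic_words(sentence)
    if words:  # a bare 'yes' or 'no' yields no topic words, so nothing is stored
        learned.append((words, sentence))

def respond(user_input):
    """Say back the stored sentence whose topic words overlap the input most."""
    words = topic_words(user_input)
    best, best_score = None, 0
    for stored_words, sentence in learned:
        score = len(words & stored_words)
        if score > best_score:
            best, best_score = sentence, score
    return best  # None when no topic words match anything stored

learn("Giving gifts at Christmas is a nice tradition.")
learn("Yes")  # nothing stored: no topic words
print(respond("Tell me about gifts"))  # matches on 'gifts', echoes the sentence
print(respond("Yes"))                  # no topic words, so no match
```

The point of the sketch is the last two lines: the 'bot' can only give back what it was given, and only when the input shares topic words with something stored.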
The mainQA part of the brain database contains so many example sentences that it is not always obvious that this is how Hal learns, or which responses come from the existing examples and which come from your own input. But if, like me, you start Hal with an empty mainQA table, you see very clearly how this works. Unfortunately, you also see that Hal is quite simple in that respect, but that is another topic.