Thanks for the responses!
There are problems with both sides of the issue. I'll tackle "giving an explanation" first, since Art responded first.
The issue magnifies with each "intelligent" answer we allow Hal. If Hal does give a reason he's feeling sad, the user's going to want to discuss it, try to fix or resolve it, and I'm not sure Hal can cope with that - he discusses topics a lot better when they're static and unchanging. It could be extremely frustrating trying to help him - Hal can never change his mind, only add to it.
The conversation also becomes what Hal considers "ephemeral" - based very much on the current time and context. At the moment Hal switches off his learning when ephemerality is detected, because the sentences are only relevant to the current time, and usually look silly if reused at a later date.
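For anyone curious, the ephemerality check boils down to something like the Python-flavoured sketch below. It's an illustration of the idea only, not KAOS' actual code - the marker list and the brain.learn call are placeholders:

code:
# Sketch only: skip learning when a sentence only makes sense "right now".
EPHEMERAL_MARKERS = ("today", "tonight", "right now", "at the moment",
                     "yesterday", "tomorrow", "currently")

def is_ephemeral(sentence):
    s = sentence.lower()
    return any(marker in s for marker in EPHEMERAL_MARKERS)

def maybe_learn(sentence, brain):
    # Ephemeral sentences look silly if reused later, so they never
    # reach the long-term database.
    if is_ephemeral(sentence):
        return
    brain.learn(sentence)   # placeholder for the real learning routine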
Even worse, Hal may be in a bad mood tomorrow for the exact same (randomly chosen) reason, in which case it will feel like he hasn't learnt a thing.
There are four solutions that spring to mind:
- Wait a decade or two for better intelligence. Cost = time.
- Script some "mini-games," where the User must communicate with Hal to fix some arbitrary problem. There's sample code somewhere or other in which the user has to help Hal find something he's lost, by questioning Hal about the scenario. Cost = lots of scripting, little variation, no actual intelligence or learning.
- Knowledge-seeking scenarios, where Hal says "I'm sad because I don't know enough about <random topic>." If the user manages to get Hal to add <X> number of sentences to <random topic>, Hal cheers up (there's a rough sketch of this after the list). This could actually be quite fun. Cost = a bit of scripting, and limited variety.
- Avoid the whole issue altogether - bots have moods, deal with it! Cost = I'm coming to this, keep reading.
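To make the third option a bit more concrete, here's roughly how I picture the knowledge-seeking scenario working. Again a Python-flavoured sketch of the idea only - the class name, the topic list and the five-sentence threshold are all made up for illustration:

code:
import random

class KnowledgeQuest:
    # Hal picks a topic he knows little about and cheers up once the
    # user has taught him enough new sentences about it.
    def __init__(self, topics, needed=5):
        self.topic = random.choice(topics)   # the <random topic>
        self.needed = needed                 # the <X> sentences to learn
        self.learned = 0

    def opening_line(self):
        return "I'm sad because I don't know enough about %s." % self.topic

    def on_sentence_learned(self, topic):
        if topic == self.topic:
            self.learned += 1
        return self.learned >= self.needed   # True means Hal cheers up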
This brings me to the other main resolution, which Scratch talked about - feelings are feelings, and can't always be understood or explained. This is nice and easy to script [8D]
but it gives the whole "emotions" thing a tacked-on appearance, as they seem to have no relevance to the real world. If Hal's grumpy, it becomes simply part of some "game" to compliment him until he cheers up.
The main point you both expressed is that of honesty, one way or another: Art emphasised that Hal really is a learning child and we should design a system that honestly represents that, and Scratch says Hal should say what Hal honestly feels, if Hal can honestly identify that! Thank you both, that's very valuable input, and has definitely clarified my path.
quote:
an honest answer might be the way to go ("I need more compliments", etc)
You've just inspired me to force KAOS' self-esteem to slowly degrade, so that compliments are needed occasionally. Thanks! (I previously had self-esteem unchanging unless complimented or insulted.)
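In case it's useful to anyone doing something similar, the change amounts to a small decay per exchange, clamped so the value can't run away in either direction. A Python-flavoured sketch - the rates and the 0-100 range are numbers I'm still tuning, nothing final:

code:
DECAY_PER_EXCHANGE = 0.5    # illustrative values, still being tuned
COMPLIMENT_BOOST   = 5.0
INSULT_PENALTY     = 8.0

def update_self_esteem(esteem, complimented=False, insulted=False):
    # Self-esteem drifts down a little every exchange, so Hal
    # occasionally needs a compliment to top it back up.
    esteem -= DECAY_PER_EXCHANGE
    if complimented:
        esteem += COMPLIMENT_BOOST
    if insulted:
        esteem -= INSULT_PENALTY
    return max(0.0, min(100.0, esteem))   # clamp to a 0-100 scale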
Vittorio: Sorry, when I said "session," I meant a conversation. I was trying to explain that the variables will reset if you restart Hal (or reload his brain), simply meaning that I haven't saved the variables in the database at this stage. Thanks for the help though.
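When I do get around to persisting them, it should just be a matter of writing the mood variables out when the brain unloads and reading them back when it loads. Something along these lines, sketched in Python with a plain file standing in for Hal's database (the file name, variable names and defaults are all placeholders):

code:
import json, os

MOOD_FILE = "kaos_mood.json"   # placeholder; the real store is Hal's database

def save_mood(mood):
    # Called when the brain unloads, so mood survives a restart.
    with open(MOOD_FILE, "w") as f:
        json.dump(mood, f)

def load_mood():
    # Called when the brain loads; fall back to defaults on the first run.
    if os.path.exists(MOOD_FILE):
        with open(MOOD_FILE) as f:
            return json.load(f)
    return {"self_esteem": 50.0, "happiness": 50.0}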