Actually, KARI doesn't use actual Poser 3D capability... it's using rendered images in a fashion very similar to an MSAgent. i.e. I render images from Poser and then compile them in a KARI program called the Scene Editor that tells KARI how to use the images. Each 'scene' has images that tell it what it looks like when it blinks, what the different mouth shapes are when it talks, etc.
At any rate ... the visual element of being able to talk to characters that look exactly like what I want is more important to me than the irritations I experience from Kari's limited responses - at least until I find another solution. At the moment I'm exploring VERBOT4... which looks like it might do everything I want ... or can at least be scripted to do so.
Although, I'd be perfectly happy with KARI if there were just MORE 'question' responses and if the output could be less mechanical ...
i.e. something that works like :
USER : My $var1 is $var2.
OUTPUT : Why is your $var1 $var2?
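For what it's worth, the kind of $var capture-and-reuse rule described above is easy to sketch. This is purely illustrative (KARI doesn't expose template scripting like this as far as I know) — just a minimal ELIZA-style rule table in Python, where the rule names and fallback text are my own invention:

```python
import re

# Hypothetical rule table: (input pattern, response template).
# Named groups play the role of $var1 / $var2 in the example above.
RULES = [
    (re.compile(r"my (?P<var1>\w+) is (?P<var2>\w+)\.?", re.IGNORECASE),
     "Why is your {var1} {var2}?"),
]

def respond(user_input: str) -> str:
    """Return a templated response if a rule matches, else a fallback."""
    for pattern, template in RULES:
        m = pattern.fullmatch(user_input.strip())
        if m:
            # Substitute the captured variables back into the template.
            return template.format(**m.groupdict())
    return "Tell me more."

print(respond("My dog is hungry."))  # -> Why is your dog hungry?
```

A handful of rules like this is all it takes to get responses that vary with the input instead of parroting stored sentences back verbatim.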
But given the current learning method and brain editor, I really don't see that as an option for KARI ... which is to bad, because I think its got an awsome user interface and very advanced pattern matching... she is just limited in that every sentence she says to me is simply something I said to her that she's parroting back without any variance or alteration ... (and yes, all 'learning' chatbots function this way to an extent ... but at least with HAL there is a semblance of paraphrasing.)