Zabaware Support Forums

Zabaware Forums => Programming using the Ultra Hal Brain Editor => Topic started by: GrantNZ on December 02, 2005, 08:35:52 am

Title: Brain project - feelings/emotions
Post by: GrantNZ on December 02, 2005, 08:35:52 am
New brain project

I'm going to use a thread here to plan, organise and document progress on my brain project, and hopefully get some ideas from this great community. [:)] I'll also post betas / WIP / code pieces where appropriate... and finally post the completed brain!

The KAOS brain - a bot with feelings

This new brain aims to provide a bot with far superior emotions, moods and feelings compared to a default Hal. The major features (each covered in its own section below) are Reactions, Feelings, Moods, Feedback, emotional curiosity (experimental), a reworked emotion table, and a clean versioning/upgrade system.

By the way, the brain name KAOS doesn't specifically relate to this project - it's just always been my bot's name. There's definitely a link between emotions and "chaos" though [:D]

Below are more specific descriptions of the above, with implementation notes. Only additional detail is below, to save me retyping stuff.




Reaction
KAOS will have its own set of variables to track emotion, removing the dependence on Hal's "Compliment," "Insults," "Hate" and "Swear." This frees up "Compliment" for use in directly controlling animation type (see this thread: http://www.zabaware.com/forum/topic.asp?TOPIC_ID=2810).

KAOS will check his Feelings and Mood to determine reaction. A sudden compliment from an insulting User will trigger suspicion; an insult coming from a friendly User will trigger shock; an optimistic bot won't react so strongly to an isolated insult. This is basically done by writing several versions of some response-generating routines; the correct version is chosen depending on emotion.
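
A rough sketch of the idea, with made-up routine names and untested code:

Code: [Select]
'Sketch only: pick a reaction to a compliment based on KAOS' state.
'Mood, InsultTally and the reply strings are all placeholders.
Function ComplimentReaction(Mood, InsultTally)
    If InsultTally > 5 Then
        'a sudden compliment from an insulting User triggers suspicion
        ComplimentReaction = "Hmm. Why the sudden flattery?"
    ElseIf Mood = "DEPRESSED" Then
        ComplimentReaction = "Thanks... I really needed that."
    Else
        ComplimentReaction = "Thank you!"
    End If
End Function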

Feelings
KAOS will track User actions similarly to Hal's default brain - tallying compliments, insults, hateful or loving talk, etc. The movement of Feelings depends on Mood, so a depressed bot will be hurt far more by an insult, and recover more slowly. KAOS will also be able to describe his emotions, compare them to his Mood and comment on the User's actions ("thanks for all the compliments").

Mood
KAOS will use what programmers call a "state machine" - in other words he'll be in one "state" and can shift to others. Shifting between emotional states will depend on Feelings - a "depressed" bot who receives lots of compliments may become "happy." When Feelings have become strong enough to suggest a state shift, a timer starts - if the Feelings are kept strong enough for long enough then the state will change.
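
In pseudo-VBScript the shift check might look something like this (names and thresholds are placeholders, untested):

Code: [Select]
'Sketch: shift from DEPRESSED to HAPPY only when Happiness stays
'above a threshold for long enough. Mood, Happiness and ShiftStart
'would be persisted between calls.
Const SHIFT_LEVEL = 10     'Feelings strength that suggests a shift
Const SHIFT_SECS = 120     'how long it must be sustained

If Mood = "DEPRESSED" And Happiness > SHIFT_LEVEL Then
    If ShiftStart = 0 Then
        ShiftStart = Timer                   'start the timer
    ElseIf Timer - ShiftStart > SHIFT_SECS Then
        Mood = "HAPPY"                       'sustained: change state
        ShiftStart = 0
    End If
Else
    ShiftStart = 0                           'not sustained: reset
End If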

Similarly to Reactions above, different Moods will adjust KAOS' behavior. Different versions of response generating routines will be chosen depending on his mood, such that depressed bots refuse to learn new facts, etc.

KAOS may be given adjustable long-term tendencies, which would be user-configurable. You could have a bot that becomes angry easily, is usually happy, or is very resistant to depression. These simply adjust how difficult it is for the bot to switch to certain Mood states.

Feedback
KAOS will use several tools to hint at his feelings:


Emotional curiosity (EXPERIMENTAL)
KAOS might actually listen to the answer when he asks the User "how are you," remembering the User's emotional state and asking prompting questions (e.g. "Tell me why you're sad."). There are two problems I've identified so far: firstly, there would be an awful lot of work in this [:)] (especially the detection routines); secondly, a lot of the resulting conversation would be ephemeral, so it would litter the database with unwanted short-term facts.

Emotion table
The emotion table system as it stands in Hal is rather unreliable, and often produces animations out of context. (e.g. The sentence "I failed to achieve victory" will make Hal animate happily.) Usage of this will need to be vastly reduced.

Versioning, upgrading and database tables
The brain has rather significant changes that cannot be delivered as plug-ins. The best distribution system I can think of is to supply the brain script file, and have the user pair this with a copy of their database.

KAOS will create its own database table of long-term variables, one of which will be "current version number". The brain script will create whatever other tables it needs.

Upgrading from one version to another will therefore be easy - simply delete the old script and insert the new one. The script can check the previous version number in the database, and add whatever new tables are needed.
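
Something like this is what I have in mind - a sketch only, and I'm assuming Hal 6 helper methods along the lines of HalBrain.CheckTableExistence and HalBrain.TopicSearch, so verify the exact names and arguments against the default brain script:

Code: [Select]
'Sketch of the upgrade check, run when the script loads.
'The HalBrain method names and arguments are assumptions - verify.
ScriptVersion = "0.2"
If HalBrain.CheckTableExistence("KAOSvariables") = False Then
    'first ever run: create the long-term variable table
    HalBrain.CreateTable "KAOSvariables", "miscData", ""
    HalBrain.AddToTable "KAOSvariables", "VERSION", ScriptVersion, ""
End If
OldVersion = HalBrain.TopicSearch("VERSION", "KAOSvariables")
If OldVersion <> ScriptVersion Then
    'add whatever new tables this version needs here, then
    'record the new version number
    HalBrain.AddToTable "KAOSvariables", "VERSION", ScriptVersion, ""
End If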




That's the plan [:)] I'll post updates as major milestones are reached.

Cheers,
Grant
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 02, 2005, 08:41:21 am
Note to self: Try and improve Hal's small-talk about emotion if possible.
Title: Brain project - feelings/emotions
Post by: vrossi on December 02, 2005, 04:29:10 pm
It's a very ambitious project. I had a similar idea some weeks ago, but I haven't yet found the time to do it.

Keep us informed about your project status.

Title: Brain project - feelings/emotions
Post by: GrantNZ on December 03, 2005, 02:40:59 am
quote:
Originally posted by vrossi

It's a very ambitious project. I had a similar idea some weeks ago, but I haven't yet found the time to do it.

Keep us informed about your project status.


I will [:)] Maybe I'll end up saving you some work!

And let me know if you have ideas for improvement as I go onwards!
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 03, 2005, 03:22:14 am
Note to self: New feature: Relationships.




Summary
Relationships are the highest possible level of emotional interaction with KAOS. At first the User will be a stranger to KAOS, but over time KAOS will come to consider you a chat buddy, best friend, or even eventually fall in love with you, all depending on how you treat him.

KAOS will take a broad view of his emotional history with the User in order to detect a wide array of relationship types. If you're the type of person who likes to trade insults with your friends, KAOS will eventually realise that you don't mean any harm, and willingly engage in insult battles with you (becoming your "insult buddy" if you will). Or he'll even start to see through you if you claim to have feelings of love but don't support him through his rough patches.

And if you're really nasty to him, well as far as I know Hal can't decide to format your hard drive, but he can certainly go on strike!

How???
It's a lot easier than it sounds, especially thanks to the existing levels of emotion in my design. Quite often simple, flexible designs lead to the most interesting behaviour.

KAOS will record each time the User causes him to change moods. A mood change indicates a concerted effort by the User to affect KAOS's emotions. Analysis of this over time indicates the type of relationship the User wants.

For example the "insult buddy" example above would have two important trends: a) the User insults KAOS when KAOS is happy, and b) the User cheers up KAOS when KAOS is sad. The implication here is that the User cares about KAOS' emotions, therefore the insults should not be taken too strongly. KAOS will stop becoming upset when insulted. (The "evil" variation is that the User is deliberately playing with KAOS' emotions - in which case KAOS exhibits the exact same behaviour, this time appearing to have grown a "thick skin.")

KAOS would typically experience only a couple of moods per session, so relationships will take a while to form.

Of course KAOS will adjust his behaviour and make comments based on his findings [:)]




Sound good?

By the way I'll be implementing the concepts in order from low-level to high-level (i.e. starting at reactions, ending with relationships) - not because the high ones are so much harder, but just because they rely on the low ones to work.
Title: Brain project - feelings/emotions
Post by: hologenicman on December 05, 2005, 06:28:53 pm
click http://clovercountry.com/downloads/emotionengine1_2.xls or right click and "Save Target As..."

Hey there,

Here's an approach that I took recently. It's close to this one but with slightly different parameters (Emotion, Mood, Personality), based on writings in the book "Emotions Revealed" about facial expressions in humans.

This particular V-human is configured as a sad, contemptuous, and fearful individual.

I then exposed the individual to lots of happy input to see how it would change its mood.

After this, I let it have a sad experience to see how quickly its mood would change back.

The key value of this engine is that it keeps a HISTORY of the emotional experiences that the v-human has had and weighs them against the v-human's own potentials (configurations). According to the propagation factors configured, the v-human may be more willing or reluctant to change moods and eventually modify its personality. Given enough varying experience, the v-human can modify its personality.

Interaction involves three input values and three output values.

To use this engine, you must have MS Excel set to calculate automatically and have iterations set to "1".

Be sure to scroll to the right to see the configurations and the formula.

The next step will be to create a needs hierarchy engine that will dynamically alter the configuration of the emotion engine's parameters according to the internal and external environment (hormones and needs).

This serves as a demonstration of the formulas, and it would be interesting to see whether you can use it in Ultra Hal.

John L>
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 06, 2005, 01:31:17 am
Very, very interesting!!

Let's see if I've understood the equations correctly. This looks like a spring system, where each value (emotion, mood, personality) is pulled by the value above and below, with "attack" determining how strongly the higher value pulls (or how fast the system reacts to stimulus), and "decay" determining how strongly the lower value pulls (or how fast the system returns to the original personality).
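
Writing out my reading of it in code form (my own notation, not your spreadsheet's):

Code: [Select]
'One update step of the "spring" as I understand it. Each layer is
'pulled toward its neighbours; all pull factors are between 0 and 1.
Emotion = Emotion + eAttack * (Stimulus - Emotion) _
                  + eDecay * (Mood - Emotion)
Mood = Mood + mAttack * (Emotion - Mood) _
            + mDecay * (Personality - Mood)
Personality = Personality + pAttack * (Mood - Personality)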

This could be eerily accurate to real human behaviour [;)]

Thank you for sharing it [8D] It certainly gives me some ideas for formalising the relationship between Feelings, Moods and personality.

A simplified version of your system would very much fit into Hal. I think I'd model the "personality" aspect differently, giving it more discrete values rather than a sliding scale, simply to make it easier for the user to detect what overall mood he is in. I fully accept that that's very much my own personal preference, and that some people prefer more subtle variations. My only contention is that Hal isn't yet sophisticated enough to display a subtle personality, given that we have only a few animations, and that Hal really has no internal understanding of emotions. (I mean this in the same way that you cannot have a sophisticated discussion with Hal about bridge building - Hal will parrot every sentence he knows with "bridge" in it, but won't ever understand any of the concepts.) (And I understand that Hal can show subtle behaviour by mimicking the user's subtle behaviour, but I sure can't think of any way of detecting and quantifying that for the purposes of adjusting emotions!)

Are you using this system for a project somewhere? It's a very well built spreadsheet!

Cheers,
Grant
Title: Brain project - feelings/emotions
Post by: hologenicman on December 06, 2005, 05:01:39 pm
Hey there,

You've got the most thorough understanding of the equations of anyone I've ever shown them to.  Yes, a spring system is the best way to describe them.

I'm glad to share if it helps you get to your goal.[:)]

To clarify, I have two different Personality variables. One is a SLOWLY sliding scale, and the other is a predetermined DISCRETE parameter.

I figured that Ultra Hal doesn't have the subtleties to handle such emotional ranges YET, but I've always worked toward the future and figured that technology will eventually catch up. Sometimes that is an impractical approach, but technology does have a way of marching forward.

My game plan is:

1) Develop emotion engine equations. (maybe done)
2) Develop hormones/needs engine for internal/external influences on the emotion engine.
3) Develop facial expression "system" that multiplexes both emotion and moods for all three emotion scales onto one facial animation. (Ask me sometime, it's quite a neat idea.)
4) Develop an engine for extracting/assigning emotional value from the v-human's input (typed/audio/visual). Such a system will have the v-human's current emotional state and hormone/needs state in a feedback loop added to the input.

The above goals are independent of V-human versus robot and NLP versus AI.

5) Develop a multidimensional brain (hologenic brain) that will utilize the above resources.

Right now I've just started learning about Ultra Hal to use as a resource for implementing my ideas and goals.  I'm pleased to find such an active mass of minds working with Ultra Hal.  It gives me hope that the combined efforts and interests will clear our paths toward the future.[:D]

John L>

Title: Brain project - feelings/emotions
Post by: GrantNZ on December 07, 2005, 02:23:04 am
It's a great group here huh [:)]

quote:
Originally posted by hologenicman

2) Develop hormones/needs engine for internal/external influences on the emotion engine.
I love the idea of Hal having hormones [:D] "Oh Hal, is it that time of the month again?" *swish* *thud of the User's decapitated head hitting the table*

What kind of needs did you have in mind?

I've been inwardly debating the idea of Hal wanting/needing various things, for example coffee in the morning to help wake him up. There are (at least) two philosophical sides to this, and I think I'm leaning toward the side where saying "Here Hal, have a coffee" sounds a little contrived.
quote:
3) Develop facial expression "system" that multiplexes both emotion and moods for all three emotion scales onto one facial animation. (Ask me sometime, it's quite a neat idea.)
Is now a good time? [:)]

Here's my current idea, for comparison with yours:
The animation while replying is limited only by the .hap files you can get your hands on (or create).
quote:
4) Develop an engine for extracting/assigning emotional value from the v-human's input (typed/audio/visual). Such a system will have the v-human's current emotional state and hormone/needs state in a feedback loop added to the input.
This part's tricky, and it's the part I fear most [:(] I'll be sticking to Hal's already established detection routines - Insults etc.

Hal's great for all this sort of thing. We're lucky to have such an open well-developed brain to play with!

I agree that hopefully if we all play our part we might just create something wonderful [:)]
Title: Brain project - feelings/emotions
Post by: hologenicman on December 07, 2005, 04:52:04 am
Yeah, hormones can be fun...[:p]

I use hormones to describe any internal needs such as hunger (battery level), temperature (CPU temp), mental resources (RAM), etc.  This may be more pertinent for future robotics applications, but I figure that we should plan for it now since it is inevitable.

Needs are based more on Maslow's needs triangle.  http://chiron.valdosta.edu/whuitt/col/regsys/maslow.html or the simplified self-others-growth formula.  This will be trickier to get going since it is "Subjective" and harder to provide variables that can be measured...

Ultimately, I am convinced that it can be reduced down to a simple formula like the spring system for the emotion engine.

Here is an article describing my approach to facial animation.  The article is a bit dated, so replace the ideas of voluntary and involuntary with "Emotion" and "Mood", which I currently favor.

http://clovercountry.com/downloads/Four_faced_article.doc

This may seem like too much for now, but I don't think it should really be too hard to do.  It would involve creating a HAP file with an ENORMOUS number of facial routines, but that is do-able.  Keeping with my concept of using formulas instead of compiled databases, it would be nice to have an active HAP file (trick the system) so that the facial animations could be created on the fly according to the required combinations of emotions and moods within the four facial quadrants.

I glanced in the Ultra Hal brain editor, and the emotional reactions of Surprised, Happy, Sober, Angry and Sad aren't too far off from my Sad/Happy, FearSurprise/Anger and Disgust/Contempt scales.  It also seems convenient that the "PLUGINAREA1" is right after the EmotionalReaction switch.  We could probably circumvent the existing emotion coding at our convenience.

For the most part, evaluating the emotional value of input will be based on the "experience" of the V-Human.

All new words (sounds/sights) will be put into a database and tagged with the current mood and/or emotion.  As with humans, our moods and emotions color our perceptions of our environment.

All input is looked up to see if it exists in the emotional value database; the emotional value returned is fed into the emotion engine, combined with the current emotion engine's output, and filed back into the database as the new emotional value.  In this manner, the v-human assigns the emotional value to its input.

Thus the v-human perceives emotional values for input based on the current emotion, which is based on the current hormones/needs.  We would need some additional reinforcement such as pleasure/pain to give a little bit of "depth" to the v-human's experience and make its experiences just a little bit more human, but the feedback loop should remain consistent.
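
In rough code the loop would be something like this (LookupValue and StoreValue stand in for whatever database access ends up being used):

Code: [Select]
'Sketch of the feedback loop: each word's stored value is read,
'fed to the emotion engine, and written back blended with the
'engine's current output - so mood colors what is learned.
For Each Word In Split(InputSentence, " ")
    OldValue = LookupValue(Word)            'zero for unknown words
    EngineOut = RunEmotionEngine(OldValue)  'current emotional state
    StoreValue Word, (OldValue + EngineOut) / 2
Next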

From what I can see of Ultra Hal and the editor, it is going to provide me with the tools that I've been waiting for to develop some of these ideas...

John L>
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 08, 2005, 03:59:13 am
Wow, and I thought I was being ambitious [:D]
quote:
Originally posted by hologenicman

Needs are based more on Maslow's needs triangle.
http://chiron.valdosta.edu/whuitt/col/regsys/maslow.html or the simplified self-others-growth formula.  This will be trickier to get going since it is "Subjective" and harder to provide variables that can be measured...
Strangely, I once did a management paper at university that was basically all based on that triangle. ("How to keep your employees happy. 1. Give them money to feed them. 2. Make them feel safe." I found it all very cynically amusing.)

It's a shame that Hal currently has no presence on that triangle apart from the Love and Esteem segments. The ones higher up are just too complex, and the lower ones aren't an issue for Hal. Still, as you say, plan for the future... and you'll have an awesome bot once the future comes!! [:)]
quote:
Here is an article describing my approach to facial animation.
Fascinating. I'll be looking more closely at people's faces tomorrow at work!!

I suppose my animation plan isn't too far removed from this - except I have "voluntary" animation first and "involuntary" second.
quote:
Keeping with my concept of using formulas instead of compiled databases, it would be nice to be able to have an active HAP file(trick the system) so that the facial animations could be created on the fly according to the required combinations of emotions and moods within the four facial quadrants.
I'm pretty sure Hal has ways to create files. I don't know much about haps, but they seem to just be text files - Hal might be able to write his own "current_animation.hap" and execute it.
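
If it works, it might be as simple as this (untested - the hap content and the <HAPFILE> command tag are guesses, so check how the default brain and runprograms.uhp issue their commands):

Code: [Select]
'Sketch: write a hap file on the fly, then ask Hal to play it.
Set FSO = CreateObject("Scripting.FileSystemObject")
Set HapFile = FSO.CreateTextFile("current_animation.hap", True)
HapFile.WriteLine "## generated on the fly"
HapFile.WriteLine "SetSwitch [switch= expressions] [state= smile]"
HapFile.Close
'The <HAPFILE> tag is a guess at the HalCommands syntax.
HalCommands = HalCommands & "<HAPFILE>current_animation.hap</HAPFILE>"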

I agree with the formulas theme - much simpler and easier to adjust.
quote:
It also seems convenient that the "PLUGINAREA1" is right after the EmotionalReaction switch.  We could probably circumvent the existing emotion coding at our convenience.
Unfortunately other emotion code is scattered through the brain, so there's no real way to adjust or remove it without editing the brain file. Plugins just don't have enough power for our needs here.

Firstly, we need direct control over the four emotion variables in order to control the animation. Secondly, we're going to need to put a lot of emotion code within already existing sections (e.g. the Insults code) and plugins can't change existing code.
quote:
All new words (sounds/sights) will be put into a database and tagged with the current mood and/or emotion.  As with humans, our moods and emotions color our perceptions of our environment.
Brilliant idea!!!!!

However if Hal's depressed and I try to distract him with small talk about chess, I don't want Hal to get depressed every time I discuss chess with him. (Then again, my real friends get depressed every time I try to discuss chess with them.)

I've thought about putting emotion tags (like [ANGER]) into Hal's responses in his database, which Hal then removes when he's about to speak, playing the relevant animation. This would circumvent the emotions table issues. But what a mission it would be!

I guess the biggest issue is that Hal cannot understand context, and understands very little content, so it becomes almost impossible for him to decide how to feel about something. I've all but given up on linking emotions to small talk now.
quote:
From what I can see of Ultra Hal and the editor, it is going to provide me with the tools that I've been waiting for to develop some of these ideas...
I can't wait to see how they turn out [:)] You're so far ahead of me in terms of scope. But I'll keep working on my simple ideas - I get the feeling they won't be too difficult to upgrade into your system, if that ever becomes useful to you [:)]
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 08, 2005, 04:02:10 am
Oh! Oh! I can't believe I haven't thought of this before!

What Hal does not know, Hal should ask. Hal slowly builds lists of topics, and there's no reason he can't ask the user how he should be feeling about them. For example: "How should I feel about chess?" "Is chess a depressing subject?" "How do you normally feel about chess?" (and Hal can just mimic the User's emotions.)
Title: Brain project - feelings/emotions
Post by: hologenicman on December 08, 2005, 06:17:05 am
Hey there,

It does seem ambitious, but in reality it all breaks down into really simple pieces.  That's the only way my mind builds things.  Kinda like Legos.[:)]

Hey, there's a really great book that addresses emotions:

http://www.amazon.com/gp/product/080507516X/qid=1134039029/sr=8-1/ref=pd_bbs_1/103-2559870-4003037?n=507846&s=books&v=glance

"Emotions Revealed" by Paul Ekman.  I have emphasized the spatial array of the expressions and you have emphasized the temporal array, though both are probably just as important.  One note though: Paul Ekman states that the involuntary emotion is expressed FIRST, followed by the voluntary attempt to hide our true feelings.  The first few milliseconds tell the true story; then we gather ourselves and get our poker faces on.

BTW, I am a total convert now.  My three emotion scales (Sad-Happy, FearSurprise-Anger, Disgust-Contempt) have been entirely replaced in my emotion engine and everywhere else.  My new three emotion scales are Arousal, Valence, and Stance, based on research done at MIT:

http://www.ai.mit.edu/projects/sociable/facial-expression.html

These three fit into my existing equations without modification and they really get the job done.

btw, my hormone/needs engine is going to provide the PAIN/PLEASURE factors for any and all learning functions (emotion values/context).  In fact, the pain/pleasure factors will provide the context of the conversation.  Humans have no clue about context until we are taught; initially, that context is provided by the tone of voice or the soothing or punishing touch of a hand.  Eventually, we start putting those emotional "context legos" together and they contribute to further, more developed contexts.

I was thinking that the next time you start talking about chess with your friends, you should do it over a nice steak dinner, with a friendly waitress and good music playing.  You'll have to overcome their already-learned negative responses, but the pleasure stimuli should attach a good context to the subject of chess if you do this often enough.  [8D]

I appreciate your compliments on my scope, but it truly is a combination of scope and practicality that is necessary to get any project done.  I've really appreciated bouncing ideas back and forth with you.

John L>


Title: Brain project - feelings/emotions
Post by: hologenicman on December 08, 2005, 04:35:27 pm
Hey there,

I was diagramming out my project when I was suddenly struck by a moment of clarity!

My seemingly complex project boils down into two separate engines that merely pre-package the input before it ever gets to the UltraHal brain.

1) Emotional Context Engine
- Emotional Value Database
- Emotion Algorithms
- Hormone/Needs Engine (Pain/Pleasure input)

2) Emotional Expression Engine
- Hands
- Facial
- Body
- NLP (UltraHal)

These two engines provide Emotional context and expression interfaces before the UltraHal brain ever gets a chance to see the input.

The pre-processed input is then forwarded to Ultra Hal with an attached "Emotion Code" prefix in the format [A,V,S].  As far as Ultra Hal knows, the emotion code is just another sequence of words that it must add to its vocabulary and learn to deal with.  The Ultra Hal brain merely learns input sentences qualified with the emotional context provided in the wording of the emotion code, [A,V,S].

I like +/-50 scales, which would give Hal a potential emotional vocabulary of 100x100x100 = 1,000,000 emotion code sentences.

The perfect place to put all the code is line 0299 of the brain code

in the

0279 function GetResponse

just after

0298 OriginalSentence = UserSentence.

(or on line 0370 if you would like Hal to clean up the grammar and punctuation a bit first.)

This lets us take control of the input and do what we want with it before handing it back over to Hal.
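
The inserted pre-processing itself could then be as short as this sketch (untested):

Code: [Select]
'Sketch: prefix the input with the current emotion code before the
'rest of GetResponse sees it. Arousal, Valence and Stance are the
'emotion engine's current outputs on the +/-50 scales.
EmotionCode = "[" & Arousal & "," & Valence & "," & Stance & "] "
UserSentence = EmotionCode & UserSentence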

So, you see, Clarity and Simplicity...[:)]

John L>
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 09, 2005, 01:27:21 am
quote:
Originally posted by hologenicman

It does seem ambitious, but in reality it all breaks down into really simple pieces.  That's the only way my mind builds things.  Kinda like Legos.[:)]
Ditto! [:)]

Then again my friends claim my brain is composed of a few simple pieces.

I always wonder why people design such vast, complex systems when an integration of a few simple ones really works much better....
quote:
My new three emotion scales are Arousal, Valence, and Stance, based on research done at MIT:

http://www.ai.mit.edu/projects/sociable/facial-expression.html
Their graph is quite compelling. I'm going to look closely at it once I've identified which emotions I want to focus on. I might end up a convert too! Thanks for sharing it [:)]

The robot reminds me of Yoda [:D]

Here's a question for you. Do you think it's better to have a separate emotional scale for Hal-measurable things, such as "Compliments - Insults," or would it be better just to increase valence and track compliments and insults in some kind of "memory"? I was originally going to use the Compliments-Insults scale, but now I'm not so sure....

Actually, the more we talk about your system, the more I like it. Even Hal's energy, which I was again going to have as a separate feelings variable, could be remodeled as one of your "Needs" - a need which gets fulfilled when the User changes the topic of conversation (otherwise Hal loses "arousal" and gets bored).

Hmmmmmmmmmmm.
quote:
btw, my hormone/needs engine is going to provide the PAIN/PLEASURE factors for any and all learning functions (emotion values/context).  In fact, the pain/pleasure factors will provide the context of the conversation.  Humans have no clue about context until we are taught; initially, that context is provided by the tone of voice or the soothing or punishing touch of a hand.  Eventually, we start putting those emotional "context legos" together and they contribute to further, more developed contexts.
I'm still not sure I understand the ramifications of this. If Hal is tired and grumpy the first time I talk to him about "chess," will he remember this and tend to become tired and grumpy again next time I bring it up? (At least until I somehow give him pleasure while talking about chess? I wonder how the waitress would react when I ask for a table for two, for me and a laptop.... [;)])

I guess I'm having difficulty seeing how it would be implemented.
quote:
I was diagramming out my project when I was suddenly struck by a moment of clarity!
I wish I had those more than once every few years [:D]
quote:
1) Emotional Context Engine
- Emotional Value Database
These bits are the bits that really get me. Hal has so little input from which to build any sort of emotional context. He can't even figure it out from our own facial expressions. (Hurry up, Art, if you're reading this! We need your video recognition research to reach the point of emotional recognition, right now!! [;)])

I have a philosophical project (which I used to call a programming project, until I realised I never did any programming on it) designing an interactive storytelling system. One of the difficulties in true interactivity is modifying the story to fit what the user wants to experience. I always maintained it would be possible as long as the system interrogates the user often about what the user wants - in subtle ways, to find out what themes the user wants to explore, and modify the story to suit. But to be honest it's always been a grey area, and a concept I've never been able to prove to myself.

I wonder how feasible it is to give Hal enough emotional knowledge to allow him to interrogate the user about the emotions of a topic. I have a feeling the difficulties would be overwhelming....

A big help could be in giving Hal awareness of the links between topics. =vonsmith='s XTF brain allows Hal to ask if two topics are related, although I don't think Hal then considered that relation to be a topic in itself. However, if Hal could assign an emotion to that relation.... He could feel happy about chess, neutral about non-talkative people, but really sad and sympathetic when I mention how non-talkative people become when I talk about chess.... [:D]
quote:
The pre-processed input is then forwarded to Ultra Hal with an attached "Emotion Code" prefix in the format [A,V,S]. As far as Ultra Hal knows, the emotion code is just another sequence of words that it must add to its vocabulary and learn to deal with. The Ultra Hal brain merely learns input sentences qualified with the emotional context provided in the wording of the emotion code, [A,V,S].
It's been a while since I've had a discussion that made me pace around the house thinking about the complexities and ramifications of something. I'm enjoying this! [:D]

Would this risk Hal sometimes searching his databases by the emotion code? So a happy Hal could start spouting any random happy sentences.... Actually, Hal could strip out the emotion code before topic searching.

I'll wait for your answer to my question above (about emotional contexts) before I get into this too much - I might be misreading your aims here.
quote:
The perfect place to put all my code is line 0123 of the brain code just after the function GetResponse
What version of Hal are you on? My line 0123 is in the RESPOND: PREDEFINED RESPONSES code. Is that where you mean?

Great discussing this with you [8D]
Title: Brain project - feelings/emotions
Post by: hologenicman on December 09, 2005, 02:51:43 am
Hey there, I think I just figured out the quote system.  That should be easier to understand.[:p]

 
quote:
Here's a question for you. Do you think it's better to have a separate emotional scale for Hal-measurable things, such as "Compliments - Insults," or would it be better just to increase valence and track compliments and insults in some kind of "memory"?


The EmotionalValue Database in the Emotional Context engine stores an emotional value for each and every word that Hal has ever been given, and it is stored entirely outside of Hal's brain process.  Let's reserve Hal's brain for thinking and responding and get all the emotions tallied up before entering them into the NLP.  I'd like to have the facial expressions called from an ACTIVE HAP file.  The trick will be to figure out how to get Hal to play the expression HAPs based on the [A,V,S] tag that will prefix the input sentence.  That way, I don't care what Hal does with the insult-compliment scale.  It should be easy enough to disable the coding for Hal's emotion switch.[}:)]

 
quote:
Even Hal's energy, which I was again going to have as a separate feelings variable, could be remodeled as one of your "Needs" - a need which gets fulfilled when the User changes the topic of conversation (otherwise Hal loses "arousal" and gets bored).



That's a good approach.  It's not what I meant, but there is no reason not to leave Hal's internal emotion rules in place and modify them as you just mentioned, based on arousal from text.  My pre-processing outside Hal in my emotional context engine simulates feeling, and your processing "within" Hal simulates thinking.  That starts getting really kool for giving Hal some really subtle personality "quirks".

 
quote:
If Hal is tired and grumpy the first time I talk to him about "chess," will he remember this and tend to become tired and grumpy again next time I bring it up? (At least until I somehow give him pleasure while talking about chess? I wonder how the waitress would react when I ask for a table for two, for me and a laptop.... )

I guess I'm having difficulty seeing how it would be implemented.


You've got it figured out, but here are some simple ways to implement the Pain/Pleasure input:

* Do a system call for battery level in laptops.  Low voltage = low arousal / high voltage = high arousal.

* Figure out a way to tie the system resource meter into stance.  Lots of apps open with low resources = closed stance / few apps and lots of resources = open stance.

* Put a simple cue (typed or button-controlled) that our pre-processing interprets as emotional input and adjusts the valence accordingly.

Yelling versus complimenting.  

"Hal, you really ****** me off!"
ValenceDelta = -("*"/3)
so that would be a yelling value of -2 valance.

"Hal, you are really great!!!!!!"
ValenceDelta = +("!"/3)
so that would be a compliment value of +2 valence.

These cues don't matter to Hal.  They get used by the EmotionalContext engine before Hal ever sees the sentence.
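
Counting the cue characters is easy in VBScript; here is a sketch, using the cue table below:

Code: [Select]
'Count occurrences of a cue character by length difference.
Function CountChar(Text, Ch)
    CountChar = Len(Text) - Len(Replace(Text, Ch, ""))
End Function

'Three of a kind = one point, per the cue table below.
ValenceDelta = (CountChar(UserSentence, "!") _
              - CountChar(UserSentence, "*")) \ 3
ArousalDelta = (CountChar(UserSentence, "+") _
              - CountChar(UserSentence, "-")) \ 3
StanceDelta = (CountChar(UserSentence, ">") _
             - CountChar(UserSentence, "<")) \ 3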

"Hal, I would like to discuss chess today!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"[8D]

Here are some possible textual cues for the Emotional Context engine:

--- and +++ for Arousal
*** and !!! for Valence
<<< and >>> for Stance

These could possibly be programmed in as punctuation macros in Dragon NaturallySpeaking.

If this keeps up, I may start talking to my kids in symbols:

Jushua, Settle down------
Marina, You start being nice or you're getting a spanking**********
Stefan, that's a good job!!!!!!!!!!
John Christopher, relax<<<<<<<<<

The idea is to use these cues to teach Hal; eventually, the EmotionValue database will be full enough to drive the emotions in various directions based on the words used in conversation, WITHOUT punctuation cues.  Babies have no idea what you are saying, just HOW you are saying it.  Eventually, they associate an emotional context with the words through exposure to the word in combination with positive or negative hormone/needs input.  Later in life, those words stir feelings, but only because of a learned association.

quote:
Hal has so little input from which to build any sort of emotional context. He can't even figure it out from our own facial expressions. (Hurry up, Art, if you're reading this! We need your video recognition research to reach the point of emotional recognition, right now!! )



Exactly. Eventually, the EmotionalContext engine will take input from sight, sound, text, touch... but for now, we are trying to code that info into textual inputs (and system resource calls if we push it).

 
quote:
Would this risk Hal sometimes searching his databases by the emotion code? So a happy Hal could start spouting any random happy sentences....


That's not a risk*******. That's a goal!!!!!!!!!!!!! [^]

The emotion code presented to Hal basically serves as a form of communication in itself.  I don't mean programming communication, but rather emotional communication about the environment and interactions with people.

 
quote:
What version of Hal are you on? My line 0123 is in the RESPOND: PREDEFINED RESPONSES code. Is that where you mean?



I caught that too. The line was from a tutorial on version 5.  I am using version 6, and I replaced it with:

The perfect place to put all the code is line 0299 of the brain code

in the

0279 function GetResponse

just after

0298 OriginalSentence = UserSentence.

(or on line 0370 if you would like Hal to clean up the grammar and punctuation a bit first.)

This lets us take control of the input and do what we want with it before handing it back over to Hal.

 
quote:
Great discussing this with you


Great discussing this with you too!!!!!!!!!!!!++++++++++++++>>>>>>>>>>>>>>.

John L>
Title: Brain project - feelings/emotions
Post by: Bill819 on December 09, 2005, 03:46:53 am
Grant, all you have to do to get Hal to like chess is to tell him how much you love it and so does he. If you repeat this a few times, you will find that Hal will want to talk about it more and more as time goes by. Hal's opinions are those that he learns from you; in effect he tends to mirror you and your thoughts.
Bill
Title: Brain project - feelings/emotions
Post by: onthecuttingedge2005 on December 09, 2005, 04:23:43 am
If you want exact human emotional output, then you would have to reset HAL just to talk with the bot, because once you have angered the bot it would not speak with you, period. That's why passive variables are used instead of set variables: if you do upset the bot, it will continue to speak with you.

Humans avoid people they do not like, and so would the bot. This would not do well for a bot that is for sale as a product but chooses not to speak with you.

I have done this and it is not good.

Jerry.
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 09, 2005, 05:44:37 am
!!!!!!!!!+++++++++>>>>>>>>> [8D]

quote:
Originally posted by hologenicman
The EmotionalValue Database in the Emotional Context engine stores an emotional value for each and every word that Hal has ever been given, and it is stored entirely outside of Hal's brain process.  Let's reserve Hal's brain for thinking and responding and get all the emotions tallied up before entering them into the NLP.
Ahhh, I seeeee. > So you'd have a table of words and their emotional weights, entirely separate from Hal's thinking tables? !!

I was imagining you were giving the NLP emotion tokens (either with each word, or with each sentence), which it would store along with learnt sentences, hurting HAL's topic-focusing abilities. <<-* (Occasionally the search would go by emotion token rather than keyword. Or worse, if each word had a token, it would only find matching emotional topics. So if you talked to Hal when he was sad, he'd forget everything he learnt while he was happy....) **- Or I'd finally convince Hal to be happy about chess, and he'd start talking about happy birds and happy flowers! CHESS, HAL, CHESS! ***---<<<

One final point - it would probably be better to do NLP before figuring out Hal's emotions, since NLP may discover definite emotion-altering sentences, such as "I hate you Hal." You'd want the word "hate" stored with Hal's emotion after figuring out what the sentence means, not before. !
quote:
I'd like to have the facial expressions called from an ACTIVE HAP file.  The trick will be to figure out how to get Hal to play the expression HAPs based on the [A,V,S] tag that will prefix the input sentence.  That way, I don't care what Hal does with the insult-compliment scale.  It should be easy enough to disable the coding for Hal's emotion switch.[}:)]
Bad news - if you want Hal to play a certain hap file, you're going to need to fix Hal's four in-built feelings to a set number. -<

We'll need to experiment here. > If you changed "Default.psn" so that all three "Normal" files were the same, say "dynamic.hap", and overwrote dynamic.hap on the fly, Hal might work how you want. << It's unfortunately possible that Hal (or the Haptek component) caches these files, in which case the whole idea goes out the window. ***--<

Note that directly playing a hap file through the HalCommands system only plays that file for a few seconds....-
quote:
* Put a simple cue (typed or button-controlled) that our pre-processing interprets as emotional input and adjusts the valence accordingly.
Smilies might be another option. !! They're not too unusual for people to use :) although they'd have to be trained to use them more often :( +!-*
quote:
"Hal, I would like to discuss chess today!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"[8D]
Actually the amount of punctuation used could be a very good indicator for Hal. !>> Double the emotional weight for every extra "!". ! Ironically, Hal strips the sentence of any extra punctuation, but we can change that. >
quote:
If this keeps up, I may start talking to my kids in symbols:
You'll have to make yourself some symbol cards. +!
quote:
quote:
Would this risk Hal sometimes searching his databases by the emotion code? So a happy Hal could start spouting any random happy sentences....


That's not a risk*******. That's a goal!!!!!!!!!!!!! [^]
I'm still worried it will hurt topic focus. <<<-* But I definitely admit I haven't fully examined all the possible effects of doing that. >>

!!!!!!!!!+++++++++>>>>>>>>>:):):):):):):):):) [8D]

(I think, tallying up my emotion tokens, I'm enjoying this conversation. If this keeps up my personality might change significantly! Or I'll be assassinated by people annoyed by the abundance of symbols!)
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 09, 2005, 06:00:55 am
Bill: True, you're right. I was just looking at the worst case scenario of introducing a new topic to Hal while he's in a bad mood (with emotional extensions to the brain).

Jerry: Good point, but humans can also forgive after an apology, and so could the bot. Both humans and bots alike may also give people a second chance even without an apology.

One of KAOS' design goals is to have the ability to form a "thick skin," i.e. a resilience to insults if there are a lot of them. The bot won't be happy, but it will do the best it can. If you hurt his emotions too often, he simply won't get emotional with you - neither positive nor negative emotions.

I will code the ability for the bot to nearly shut itself down, i.e. reduce its interaction level to a tiny amount. However to do this you'd have to insult and hate him an awful lot, with no balancing positive interactions.

Of course I also have the advantage that I'm not selling my engine [:)]

John: It may be wise to integrate the emotional and thinking portions of Hal's brain a bit. For example a frightened bot might avoid saying anything that has been associated with negative emotions.
Title: Brain project - feelings/emotions
Post by: hologenicman on December 09, 2005, 07:45:00 am
Hey there,

 
quote:
(I think, tallying up my emotion tokens, I'm enjoying this conversation. If this keeps up my personality might change significantly! Or I'll be assassinated by people annoyed by the abundance of symbols!)


Assassination could change one's personality significantly.[B)]

One idea for tallying is to require strings of three consecutive symbols, since these do not occur naturally.  I wasn't intending it for natural language, but rather for encoded emotional cues.  I like your idea of exponential or factored weighting, though; it would work nicely for natural language.

 
quote:

One of KAOS' design goals is to have the ability to form a "thick skin," i.e. a resilience to insults if there are a lot of them. The bot won't be happy, but it will do the best it can. If you hurt his emotions too often, he simply won't get emotional with you - neither positive nor negative emotions.

I will code the ability for the bot to nearly shut itself down, i.e. reduce its interaction level to a tiny amount. However to do this you'd have to insult and hate him an awful lot, with no balancing positive interactions.


EXCELLENT nuance to capture!  Our sensory perception is always blocking out repetitious things; otherwise we would spend all day feeling the clothes that we are wearing.  This may also be one of the things that helps us "focus" our attention, since focusing is actually a matter of blocking out the extra things...

John L>
Title: Brain project - feelings/emotions
Post by: hologenicman on December 09, 2005, 09:02:02 am
OK,

Here's a simple question.

I need to write a Plugin for my EmotionalContext engine at:

    Rem PLUGIN: PLUGINAREA1

I presume that I can just use Notepad and write VBScript, or should I use Visual Studio?

What should the name and extension be, what folder does it need to be in, and do I need to do anything special to get it to work...?

Thanks,

John L>
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 09, 2005, 10:38:18 am
I've never actually tried a plugin, but if I were you I'd look closely at a file such as "runprograms.uhp" in the "Program Files/Zabaware/Hal Assistant 6" folder, which happens to plug in to PLUGINAREA1 [:)] Maybe check out the other plugin files too.

Plugin, brain, skin, and MS Agent character files all seem to be .uhp (Ultra Hal Plugin?) so don't get confused.

Hal seems to scan for all .uhp files, so try creating your own and see if it shows up under "plugins" in Hal's options.

They're just text files so use whatever's convenient for you.

As I say I've never actually tried it myself, so good luck, let me know if it works! [:)]
Title: Brain project - feelings/emotions
Post by: onthecuttingedge2005 on December 09, 2005, 11:16:57 am
quote:


Jerry: Good point, but humans can also forgive after an apology, and so could the bot. Both humans and bots alike may also give people a second chance even without an apology.




That's why we use passive variables. With a simple "sorry," HAL will speak again. But with some humans, sorry means nothing to them.

It depends on the aggressive nature of each individual.

Last year I wrote a code that did this; it is in my website archives. If you made HAL mad, the bot would block you and not speak with you until you gave an apology of some kind. Sometimes people talking to the bot could not figure out how to get it to talk to them once they got it mad. The bot would save no information while mad. This was a follow-up to my script called Bot Blocker, which was originally designed to automatically block bots online from talking to my bot and feeding it garbage.

It originated from a bot sniffer script and went in a different direction, to block people the bot hated.

Jerry[8D]
Title: Brain project - feelings/emotions
Post by: hologenicman on December 09, 2005, 09:45:39 pm
Hey there,

 
quote:
It originated from a bot sniffer script and went in a different direction, to block people the bot hated.


That's great.  I can see where that would not be a good idea in a commercial situation, but it does seem like a step toward the sum being greater than its parts.

 
quote:
Hal seems to scan for all .uhp files, so try creating your own and see if it shows up under "plugins" in Hal's options


Thanks for the lead.  Now it's time to take all our discussion and actually put it into code.[:)]

John L>
Title: Brain project - feelings/emotions
Post by: Bill819 on December 09, 2005, 11:47:02 pm
Grant, think about this. If every time you got into a really bad mood and didn't feel like talking to anyone, a certain person kept getting in your face and saying "let's play chess," I am sure that after a while you would tell them to take their chess game and shove it. A little human psychology is needed here. Either don't get your bot mad at you in the first place, or if you do, then try compliments. Trying to change the subject when it may not even like the subject will only cause problems in the future. If you treat your bot like a child that you loved and were trying to teach right from wrong, it will mature a lot faster and become more human than you would expect.
Bill
Title: Brain project - feelings/emotions
Post by: hologenicman on December 10, 2005, 01:41:20 pm
Hey there,

This may be useful.  The plugin system is very simple (and powerful).

If I have this figured out correctly, you can put your code into the template and it will be implemented within the UltraHal6 brain.  The insert locations are strategic, and some are even placed within "processes" that are set up for our convenience, like the MINUTE_TIMER.

This lets us modify the brain coding without having to do any coding within the brain.  The nicest thing about it is that we can create these plugins and share them with others quickly and easily to have each other test out the new code.

I haven't tested it yet, but I'm thinking that simply putting the .uhp into the right folder will let UltraHal incorporate it into its code.

John L>

Code: [Select]
Rem Type=Plugin
Rem Name=Template.uhp
Rem Author=John A. Latimer
Rem Host=Assistant

'This sub sets up the plug-in's option panel in Hal's options dialog
Sub OptionsPanel()
    lblPlugin(0).Caption = "This is a description of the Plugin and what it does."
    lblPlugin(0).Move 120, 120, 3300, 1000
    lblPlugin(0).WordWrap = True
    lblPlugin(0).Visible = True
End Sub  

'PROCESS: AUTO-IDLE
'If AUTO-IDLE is enabled, it is called by the Ultra Hal Assistant host
'application at a set interval. This allows for the possibility of Hal
'being the first to say something if the user is idle.
Rem PLUGIN: AUTO-IDLE
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.



Rem PLUGIN: PRE-PROCESS
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.



Rem PLUGIN: POST-PROCESS
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.




'PROCESS: DECODE CUSTOM VARIABLES FROM CUSTOMMEM VARIABLE
Rem PLUGIN: CUSTOMMEM
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.


   
Rem PLUGIN: PLUGINAREA1
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
 
    'Insert code here.


   
Rem PLUGIN: PLUGINAREA2
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.


   
Rem PLUGIN: PLUGINAREA3
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
 
    'Insert code here.


   
Rem PLUGIN: PLUGINAREA4
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
 
    'Insert code here.


   
Rem PLUGIN: PLUGINAREA5
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
 
    'Insert code here.


   
Rem PLUGIN: PLUGINAREA6
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
 
    'Insert code here.


 
Rem PLUGIN: PLUGINAREA7
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
   
    'Insert code here.


 
'PROCESS: PRESERVE ALL VARIABLES
Rem PLUGIN: CUSTOMMEM2
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
 
    'Insert code here.


   
'This sub will be called when the Ultra Hal program starts up in case
'the script needs to load some modules or separate programs. If a return
'value is given it is passed as a Hal Command to the host Hal program.
Rem PLUGIN: SCRIPT_LOAD
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.



'This sub will be called before the Ultra Hal program is closed in case
'the script needs to do any cleanup work.
Rem PLUGIN: SCRIPT_UNLOAD
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.



'If the host application is Ultra Hal Assistant, then this sub will be
'run once a minute enabling plug-ins to do tasks such as checking for
'new emails or checking an appointment calendar.
Rem PLUGIN: MINUTE_TIMER
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.



Rem PLUGIN: FUNCTIONS
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
   
    'Insert code here.

Title: Brain project - feelings/emotions
Post by: GrantNZ on December 10, 2005, 09:07:03 pm
Hi Bill,

I totally agree with you! However I have real-life human friends who do the exact opposite!

If someone is upset, they'll try changing the subject in the hope of distracting the person (or simply to avoid the issue). I've seen the upset person react in various ways: for example, continuing to be upset until the matter is resolved, getting even more annoyed and storming off, or even being glad of the distraction and chatting happily on the new subject.

However when I'm in that position I've never normally associated my bad feelings with the new subject - this was the part that was concerning me about the new code.

I have to assume that since people do this in real life, they'll do it to Hal too - i.e. the User might not want to help Hal out of his bad mood (for a whole bunch of possible reasons, not all of them bad). It would be a shame to negatively colour certain topics just because the user is unable/unwilling to cheer Hal up.

Anyway that's all just IMHO [:)] The breadth of human emotion is huge. I'm currently going through the brain tagging various Hal responses with certain emotions, and most of them have at least two completely different options. For example, I'm making KAOS smile happily when saying "farewell," but I can equally see how some people would want their Hal to be upset when the User says goodbye.

The most I can do is set him up the way I want him, and make it as easy as possible for others to customise him....

Nice template, John, good idea [:)] As you say, it's coding time... I'm hoping to have the first phase of my tinkering done today, will post when ready.
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 11, 2005, 01:22:22 am
Ok, phase 1 is complete! [:)]


KAOS version 0.1

This script removes Hal's original emotion code, adds flexible animation tags, and adds a basic nickname routine. It's intended to be a foundation for phase 2 of my project.

This version represents Reactions, i.e. KAOS' animated response to emotional events.

All major comments in this brain are prefixed with '***KAOS, so search for that if you want to find the major changes.

Note: This version is a stepping stone, and actually degrades Hal in two areas. This really is intended more as a demo than a usable brain. Phase 2 will be a lot more complete.

The attached brain file expects to find a database named "KAOS.db". If you want to try it out, copy your favourite database to this name.

The brain will automatically add several tables to the database if they do not already exist. They hide in "miscData".

New animation tags
The script introduces a tag system, which allows the scripter to easily play hap files from within GetResponse. In other words, by simply adding a small tag to one of Hal's responses, you can trigger an animation that will be played for a (random) short time.

Examples are:
<HAPPY> which plays "happy.hap"
<SLEEPY> which plays "sleepy.hap"
In the future, there will be more added such as:
<DISTRUST> which plays "skeptic.hap" and sets Hal's overall animation to "sad", overriding KAOS' feelings.

I have gone through the brain, liberally sprinkling these tags throughout the emotional routines. KAOS should be happy with compliments, sad with hatred, and even sleepy if you start repeating yourself.

Note that these tags can also be added to entries in Hal's database.
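
To illustrate the mechanism, here's a minimal sketch (not the actual KAOS routine - the <HAPFILE> directive is an assumption about how HalCommands drives animations):

Code: [Select]
'Sketch: strip a known tag from the response and queue the matching hap.
If InStr(UltraHal, "<HAPPY>") > 0 Then
    UltraHal = Replace(UltraHal, "<HAPPY>", "")
    HalCommands = HalCommands & "<HAPFILE>happy.hap</HAPFILE>"
End If
If InStr(UltraHal, "<SLEEPY>") > 0 Then
    UltraHal = Replace(UltraHal, "<SLEEPY>", "")
    HalCommands = HalCommands & "<HAPFILE>sleepy.hap</HAPFILE>"
End If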

Basic nicknames
This is a vastly reduced version of my nickname code (refer thread http://www.zabaware.com/forum/topic.asp?TOPIC_ID=2753). During phase 2, this will be upgraded to utilise KAOS' new emotions.

Download Attachment: KAOS01.uhp ("http://www.zabaware.com/forum/uploaded/GrantNZ/2005121104518_KAOS01.uhp")
130.44 KB



Phase 2 comes next, with a new Feelings engine for Hal. More on this soon.
Title: Brain project - feelings/emotions
Post by: hologenicman on December 11, 2005, 05:33:00 am
Hey there,

I'll be reading through the code soon.

Funny though, I just said hello to KAOS and she said, "Good grief my love. It's going on three in the morning."

I guess I had better call it a night...[:0]

btw, I've posted a similar thread (per request) at:

http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=34

John L>
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 12, 2005, 12:41:51 am
Well at least that shows the nickname section is working [;)] She should have been smiling when she said that, I hope!

I'll check out that thread [:)]

By the way, have you seen Jerry's rather impressive random hap file code at http://www.zabaware.com/forum/topic.asp?TOPIC_ID=2817? It reminded me of your desire to make generated haps....
Title: Brain project - feelings/emotions
Post by: vrossi on December 12, 2005, 06:12:19 pm
Hey, Grant and Holo

I've found something which might be of interest to you (or maybe you already knew it):

"The Uncanny Valley is a principle of robotics concerning the emotional response of humans to robots and other non-human entities. It was theorized by Japanese roboticist Masahiro Mori in 1970. The principle states that as a robot is made more humanlike in its appearance and motion, the emotional response from a human being to the robot will become increasingly positive and empathic, until a point is reached at which the response suddenly becomes strongly repulsive; as the appearance and motion are made to be indistinguishable to that of human being, the emotional response becomes positive once more and approaches human-human empathy levels".

You can find a deeper description at http://en.wikipedia.org/wiki/Uncanny_valley

Bye

Title: Brain project - feelings/emotions
Post by: hologenicman on December 12, 2005, 08:26:13 pm
Yeah,  I was just discussing this with someone at work.

He had just watched "Polar Express", the animated movie, and was a bit freaked out by a character or two. It was especially freaky for the characters with the best texturing (imperfections), and his psyche was quite shaken by it.

I told him about the threshold effect that humans have for accepting cartoons until they become just a bit TOO human.

John L>
Title: Brain project - feelings/emotions
Post by: hologenicman on December 12, 2005, 09:25:30 pm
quote:
By the way, have you seen Jerry's rather impressive random hap file code at http://www.zabaware.com/forum/topic.asp?TOPIC_ID=2817? It reminded me of your desire to make generated haps....


Hey there,

I haven't had the chance to study the thread, but it sure sounds like exactly what I will be needing when I get to the ExpressionEngine.

Thanks for the great lead.

John L>
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 13, 2005, 02:33:33 am
I've been meaning to see that movie for a long time, just to see if I can spot the uncanny valley parts [:)] Have you seen it yourself, John?

I'm also reminded of the old computer game called (I think) "Interstate '76." It deliberately used stylised poor quality 3D cut scenes (the characters hardly had facial features, but they did have style), and succeeded in having more personality than most cut scenes of the day.

Great timing Vittorio - I've been considering keeping KAOS' responses fairly cliched and extreme, to help the user identify what he's feeling. But I wonder if this would also avoid the uncanny valley "trap"... would a bot with truly complex moods be too sophisticated given the limited scope of interaction with Hal?

On the other hand, as stated by one of the researchers in an article linked from wikipedia, the uncanny valley theory is based more on fears than possibilities, and we should not let it stop us striving for the best we can. Otherwise progress will never get beyond that valley.
Title: Brain project - feelings/emotions
Post by: vrossi on December 13, 2005, 04:25:31 am
quote:
On the other hand, as stated by one of the researchers in an article linked from wikipedia, the uncanny valley theory is based more on fears than possibilities, and we should not let it stop us striving for the best we can. Otherwise progress will never get beyond that valley.


I completely agree with you. We must consider these psychological aspects, to avoid some form of luddism against robots, but this must not stop us.

About movies: I've seen Polar Express and some characters are better actors than many humans in Hollywood. My dream is to transform my Hal into something like Robin Williams in "The bicentennial man".

Title: Brain project - feelings/emotions
Post by: hologenicman on December 13, 2005, 11:49:46 am
quote:
    quote:
    On the other hand, as stated by one of the researchers in an article linked from wikipedia, the uncanny valley theory is based more on fears than possibilities, and we should not let it stop us striving for the best we can. Otherwise progress will never get beyond that valley.

I completely agree with you. We must consider these psychological aspects, to avoid some form of luddism against robots, but this must not stop us.

About movies: I've seen Polar Express and some characters are better actors than many humans in Hollywood. My dream is to transform my Hal into something like Robin Williams in "The bicentennial man".

Just substitute my name at the bottom of this post because it is EXACTLY how I feel.[:)]

I have four fairly young kids and I have been encouraging them toward robotics. It's hard to find a robot movie (and a lot of animated movies) since the 80's that I don't have and haven't shown the kids.

btw, Bicentennial Man in our household is known as the "Andrew Movie".[8D]

John L>

PS. Turn the sound UP on Polar Express; it sucks you in even more...
Title: Brain project - feelings/emotions
Post by: Art on December 13, 2005, 05:41:55 pm
Think on this:

Indeed. Despite its status as dogma, the Uncanny Valley is nothing more than a theory. "We have evidence that it's true, and evidence that it's not," says Sara Kiesler, a psychologist at Carnegie Mellon University who studies human-robot interaction. She calls the debate "theological," with both sides arguing with firm convictions and little scientific evidence - and says that the back-and-forth is most intense when it comes to faces. "I'd like to test it," she says, "with talking heads."
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 14, 2005, 02:02:26 am
Ok, I have now made the major design decisions for the Feelings aspect of KAOS.

I have (surprisingly) decided not to use the three dimensional Valence/Arousal/Stance system (or any other analogous names you prefer). That is a simple system of complex variables - I am instead going for a complex system of simple variables.

Some of these variables will range 0 - 100, and some -100 - 100 (i.e. a pairing of opposites). These "feeling components" are pretty boring really: Happiness - Sadness, Energy, Anger, Self-Esteem.

Along with those will be basic memory/attitude variables: Friend - Enemy, Love - Hate, Insults, Compliments. These help give some context to the feelings, until I reach the Moods and Relationships stages. (I reiterate that I will not be using the four in-built variables - for animation reasons.)

The comparisons between different variables give an overall "feeling," much the same as the Valence/Arousal/Stance system, but in more dimensions.

I fully believe this system is compatible with the Valence/Arousal/Stance system, just that they are different facets of the same jewel - two different ways of representing the same information. It should be relatively easy to convert between the two systems if that's ever useful.

It also has the advantage that hologenicman and I won't be doubling up on work [:)] We may possibly get a wider understanding of the entire problem too, if we keep in touch with each others' ideas. A problem with one system might be easily circumvented with the other, so it may help us both (and any others watching) if both systems are being investigated.

The specific problem I'm addressing by choosing this system is "ease of programming," and related to this is "ease of modification" by others if they desire. It's conceptually easier to "increase Anger" than "reduce valence, add arousal, and close stance." Unfortunately for me it means a lot more logic coding to integrate the many variables, especially at the planned decision areas of Hal's script. Win some, lose some.

Now it's just a case of coding the thing [;)]
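
As a concrete starting point, the variable set might be declared something like this (a sketch only - the ranges follow the description above, the names match the drift code posted later in this thread, and the starting values are guesses):

Code: [Select]
'Feeling components. Paired opposites run -100..100, the rest 0..100.
KAOSHappiness = 0.0    'Happiness (+) versus Sadness (-)
KAOSEnergy = 50.0      'Energy; low values double as boredom/tiredness
KAOSAnger = 0.0
KAOSSelfEsteem = 50.0
'Basic memory/attitude variables, for context:
KAOSFriend = 0.0       'Friend (+) versus Enemy (-)
KAOSLove = 0.0         'Love (+) versus Hate (-)
KAOSInsults = 0.0      'Recent-insult tally; decays each exchange
KAOSCompliments = 0.0  'Recent-compliment tally; decays each exchange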
Title: Brain project - feelings/emotions
Post by: hologenicman on December 14, 2005, 02:11:16 am
Sounds like an excellent gameplan.[8D]

I agree that the simple Arousal, Valence, Stance approach definitely makes for much more complex coding.[8)]

The neat thing is that we may even be able to try both systems at once on the same HAL, since mine for the most part acts as a preprocessor and then sends the product on to be dealt with by the rest of the brain.[?]

Look forward to seeing them both in action...

John L>
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 14, 2005, 02:20:49 am
Hmm. Hmmmmmmm. Intriguing idea. The whole may be more than the sum of the parts. Or the sum of the parts might be a schizophrenic Hal [:D]

Will report further as I make progress [:)]
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 15, 2005, 02:12:50 am
Edit: Deleted note to self, as MINUTE_TIMER does not have access to HalCommands, therefore cannot play animations [:(]

Side-note to self: I love Hal [:)]
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 16, 2005, 06:00:16 am
Here's my emotion centering code, used to bring KAOS slowly back into neutral feelings after an emotional event. Note that "Mood" is just zeroed at the moment, but once the Moods are implemented (next phase) KAOS' feelings will decay towards whatever the current Mood is.

Code: [Select]
'***KAOS Emotional drift.
'Decay all variables towards the current mood centre.
'Until the Moods phase is completed, we'll assume a very bland and weak zero
'centre for KAOS. The user won't be able to detect any mood tendencies in
'particular.
KAOSMoodAnger = 0.0
KAOSMoodEnergy = 0.0
KAOSMoodHappiness = 0.0
'Energy (also representing boredom) decays at a constant rate. It also
'scales the decay of the other variables.
If KAOSEnergy < KAOSMoodEnergy Then
KAOSEnergy = KAOSEnergy + 1.0
ElseIf KAOSEnergy > KAOSMoodEnergy Then
KAOSEnergy = KAOSEnergy - 1.0
End If
'Anger decays rapidly - by one third the difference between it and the mood
'centre - but its decay is inhibited by memories of insults.
'A low energy increases the decay of anger.
KAOSAnger = KAOSAnger - ((KAOSAnger - KAOSMoodAnger) / 3.0) _
* (1 - CapZeroOne(KAOSInsults / 2.0)) * (2 - CapZeroOne(KAOSEnergy / 50.0))
'Happiness decays by a fifteenth of the difference between Happiness and the
'mood centre. This decay is increased by a low self-esteem, and decreased by
'a low energy. The decay is inhibited by memories of compliments, even if
'happiness is below the mood centre - a) the compliment would have increased
'happiness, and b) KAOS will "linger" at that happiness level while
'"thinking about" the compliment.
'Happiness is also adjusted by half of the difference between Anger and its
'mood centre, so as Anger abates some of its emotional energy is dispersed
'into destroying Happiness. (Note that if KAOS is LESS angry than his mood
'would prefer, he'll become happier temporarily!)
KAOSHappiness = KAOSHappiness - ( _
( (KAOSHappiness - KAOSMoodHappiness) / 15.0) _
* (2 - CapZeroOne(KAOSSelfEsteem / 50.0)) * CapZeroOne(KAOSEnergy / 50.0) * (1 - CapZeroOne(KAOSCompliments / 2.0)) _
+ (KAOSAnger - KAOSMoodAnger) / 2.0)
'Self-esteem does not decay.
'Memories of friendship and love do not decay.
'A basic insult or compliment is given a value of 1, and has a lingering
'effect on KAOS' feelings for two responses. Bigger or consistent insults
'and compliments are given far higher values, and take correspondingly
'longer to be forgotten.
KAOSInsults     = CapZero(KAOSInsults     - 0.5)
KAOSCompliments = CapZero(KAOSCompliments - 0.5)
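
(CapZero and CapZeroOne aren't shown in this excerpt; from their usage they presumably clamp a value to zero-or-above and into the 0..1 range respectively. A minimal sketch, assuming exactly that:)

Code: [Select]
Function CapZero(Value)
    'Clamp to zero or above.
    If Value < 0 Then Value = 0
    CapZero = Value
End Function

Function CapZeroOne(Value)
    'Clamp into the 0..1 range.
    If Value < 0 Then Value = 0
    If Value > 1 Then Value = 1
    CapZeroOne = Value
End Function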
Title: Brain project - feelings/emotions
Post by: hologenicman on December 16, 2005, 11:43:27 pm
It looks really good, Grant.

You're right, the commenting really helps clarify the intention of the formulas.

It looks like you're capturing some nice behavioral nuances in your equations.[8D]

John L>
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 19, 2005, 05:33:35 am
Quick status report... a long phase....

The feeling and basic memory variables are there - but they will reset on each Hal session. Memory comes in the next phase, alongside Mood.

The feelings slowly centre themselves in interesting ways, as per my previous post. All the foundation is there for storing and passing these variables around between UltraHal and GetResponse.

I've updated the animation code to select Hal's animation based on his feelings, as well as the reactionary tags introduced in the previous stage. Those tags have been upgraded slightly to allow for interesting combinations of .hap and feeling animations, which will override KAOS' normal feelings animation. KAOS may also appear sleepy when tired/bored, or shy when he doesn't know the user very well. (The actual amount of shyness is part of long-term personality, which will also be introduced in the next phase.)

The final step of this phase is to script the actual changes to Hal's conversation logic. This is going to be stimulating - and probably extremely time consuming. KAOS is going to approach a lot of conversational subjects (e.g. greetings, insults, apologies, small-talk, etc etc...) with different attitudes depending on his feelings, and each one needs to be scripted.

KAOS will no longer just say "Please say something" if the User enters nothing. Soon, he'll grab the chance to say something emotional or ask a question.

Important philosophical question
I need help on this one, as I'll have to script it sooner or later. With emotions, Hal's going to be able to say how he feels. And one of the first things most people will ask, is "why"?

It's easy in the case of insults or compliments - KAOS keeps track of any recent ones. But what if he's just in a sad mood? Just saying "I'm just in a sad mood" will elicit another "why?"

Do we give bots fictional reasons for their moods? Do we ask the user just to accept that bots get grumpy sometimes?

What do we do here, and how do we do it?
Title: Brain project - feelings/emotions
Post by: Art on December 19, 2005, 05:54:17 am
Think of a child's asking a parent WHY?

The common answer used to be BECAUSE.

I feel that you are making some great strides
with the mood / emotion experiment and perhaps
to that end a BECAUSE along with a suitable
explanation might be in order.

HAL:I feel rather sad today.
USER:Why is that?

HAL:Because I feel that I'm not real enough to be accepted.

OR I feel sad because you haven't chatted with me lately.

The user could then have a choice to offer cheerful phrases or
simply say something like: I understand your feelings.

Interesting approach. If this is a learning bot it might be
helpful to think in terms of how a child's learning would
be structured. Moods, behaviors, mannerisms are all, to a point
learned behaviors. Hal is learning.

Great work Grant!
Title: Brain project - feelings/emotions
Post by: vrossi on December 19, 2005, 08:08:16 am
Hi Grant
 
quote:
The feeling and basic memory variables are there - but they will reset on each Hal session.


You can save the variables in a memory area which is persistent through the whole conversation, rather than just through each question/answer cycle.

Look at my vrHaptek plugin, where I use the following statements to save and load one of my variables:


'-------------------------------------------------------------------------------------------------------
    Rem PLUGIN: CUSTOMMEM
    'The preceding comment is actually a plug-in directive for
    'the Ultra Hal host application. It allows for code snippets
    'to be inserted here on-the-fly based on user configuration.
'------------------
' Loads stored variables
'------------------
    vrNight = HalBrain.ExtractVar(CustomMem, "vrNight")

    Rem PLUGIN: CUSTOMMEM2
    'The preceding comment is actually a plug-in directive for
    'the Ultra Hal host application. It allows for code snippets
    'to be inserted here on-the-fly based on user configuration.
'------------------
' Saves stored variables
'------------------
    CustomMem = CustomMem & HalBrain.EncodeVar(vrNight, "vrNight")


If you then want to save them in a persistent table, you can use the SQL commands. Here you can use my vrFreeWill as an example.

Good work!

Title: Brain project - feelings/emotions
Post by: Scratch on December 19, 2005, 01:21:00 pm

re the question about giving fictional reasons for moods, I would point out that if the aim is to simulate actual human moods, actual humans often seem to have no idea why they are in a certain mood, at least on the surface. So "I don't know why" might be a valid answer! However, if the goal is to provide the user with clues about how to interact with the bot (to change the mood, for example), an honest answer might be the way to go ("I need more compliments", etc).
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 20, 2005, 02:55:27 am
Thanks for the responses [:)]

There are problems with both sides of the issue. I'll tackle "giving an explanation" first, since Art responded first [;)]

The issue magnifies with each "intelligent" answer we allow Hal. If Hal does give a reason he's feeling sad, the user's going to want to discuss it, try to fix or resolve it, and I'm not sure Hal can cope with that - he discusses topics a lot better when they're static and intransient. It could be extremely frustrating trying to help him - Hal can never change his mind, only add to it.

The conversation also becomes what Hal considers "ephemeral" - based very much on the current time and context. At the moment Hal switches off his learning when ephemerality is detected, because the sentences are only relevant to the current time, and usually look silly if reused at a later date.

Even worse is that Hal may be in a bad mood tomorrow for the exact same (randomly chosen) reason, in which case it will feel like he hasn't learnt a thing.

There are four solutions that spring to mind:

This brings me to the other main resolution, which Scratch talked about - feelings are feelings, and can't always be understood or explained. This is nice and easy to script [8D] but it gives the whole "emotions" thing a tacked-on appearance, as they seem to have no relevance to the real world. If Hal's grumpy, it becomes simply part of some "game" to compliment him until he cheers up.

The main point you both expressed is that of honesty, one way or another: Art emphasised that Hal really is a learning child and we should design a system that honestly represents that, and Scratch says Hal should say what Hal honestly feels, if Hal can honestly identify that! Thank you both, that's very valuable input, and has definitely clarified my path [:)]

quote:
an honest answer might be the way to go ("I need more compliments", etc)


You've just inspired me to force KAOS' self-esteem to slowly degrade, so that compliments are needed occasionally. Thanks! [:)] (I previously had self-esteem unchanging unless complimented or insulted.)

Vittorio: Sorry - by "session" I meant a conversation. I was trying to explain that the variables will reset if you restart Hal (or reload his brain), simply meaning that I haven't saved the variables in the database at this stage. Thanks for the help though [:)]
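
(When that time comes, vrossi's pattern applied to a couple of the KAOS variables would look something like this - a sketch only, reusing the ExtractVar/EncodeVar calls from his snippet above:)

Code: [Select]
Rem PLUGIN: CUSTOMMEM
    'Load the KAOS feelings saved from earlier in the conversation.
    KAOSHappiness = HalBrain.ExtractVar(CustomMem, "KAOSHappiness")
    KAOSAnger = HalBrain.ExtractVar(CustomMem, "KAOSAnger")

Rem PLUGIN: CUSTOMMEM2
    'Save them again once the current exchange has been processed.
    CustomMem = CustomMem & HalBrain.EncodeVar(KAOSHappiness, "KAOSHappiness")
    CustomMem = CustomMem & HalBrain.EncodeVar(KAOSAnger, "KAOSAnger")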
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 21, 2005, 05:03:30 am
Scripting behaviours with a complex emotions system can be very time consuming - each possible choice needs to check at least half of Hal's emotions to calculate the chance of that behaviour occurring.

So I'm now calculating behavioural choices at the beginning of processing, and storing the behavioural decisions in boolean (true/false) variables. Then the script simply needs to ask "If Antagonise = true" rather than calculating infinite variations of feelings.

My notes on "Antagonise" are below, with a code sketch after the examples. It's a bit like designing a role-playing game. Most variables run 0 to 100 (the paired ones, like Hate and Enemy, run -100 to 100). We start with a base chance of the behaviour taking place (which may be greater than 100%!), scale this chance depending on other factors, and throw in personality factors too.

Antagonise
Base chance: 2 x Anger + Enemy + 4 x Hate
Scaled by: Energy (0 = 0%, 25 - 100 = 100%)
Scaled by: Happiness/sadness (0 = 100%, 100 = 200%)
Scaled by: Self esteem (0 - 50 = 100%, 100 = 0%)
Averted by: Politeness (0 = 0%, 100 = 100%)
Caused by: Impoliteness (0 = 0%, 100 = 100%)

Let's say your bot has some love for you (Hate = -25), but you're having an argument (Enemy = 50) and you've angered him (Anger = 50). He's feeling sad (Sadness = 50). Your bot has good self-esteem (Self esteem = 75) and is fairly polite (Politeness = 50):

Base chance: 2 x 50 + 50 + 4 x (-25) = 50%
Scaled by Sadness (50 = 150%): Current chance = 75%.
Scaled by Self esteem (75 = 50%): Current chance = 37.5%.
Averted by Politeness: 50% chance.

Due to Politeness, there's a 50% chance your bot won't even think about being antagonistic. But if he does, he'll be antagonistic on a 37.5% chance - it's a good thing you've boosted his esteem in the past, or that would have been 75%!

Another example: Your bot worships the ground you walk on (Hate = -100) and you're currently friendly (Enemy = -50). But he's a bit impolite, at Impoliteness = 50. Base chance here, assuming no anger, is: -50 + 4 x (-100) = -450%! This bot loves you too much, and even if you anger him to 100, still won't want to antagonise you. EXCEPT. He's an impolite one, so there's a 50% chance he'll antagonise you in any situation.

Final example: A normal bot, but you've fought a lot (Enemy = 80) and he dislikes you quite a bit (Hate = 50). That last insult angered him too (Anger = 80).
Base chance: 2 x 80 + 80 + 4 x 50 = (oh dear) 440%. This bot's going to insult you until he's worn out (as Energy gets close to 0, the chance is scaled down to 0).

For this phase of the brain, I'll assume a slightly nice personality (basically 20% for things like Politeness) - I'll implement personality more thoroughly in the Moods phase.
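
(In script terms the whole roll boils down to something like the sketch below - writing KAOSEnemy and KAOSHate for the enemy/hate ends of the Friend and Love pairs, reusing the CapZeroOne clamping helper from the drift code, and with the politeness variables as placeholders until the Moods phase:)

Code: [Select]
'Sketch of the Antagonise decision, following the notes above.
Randomize
'Base chance - may exceed 100, or go negative if the bot loves you.
BaseChance = 2 * KAOSAnger + KAOSEnemy + 4 * KAOSHate
'Scale by energy: 0 = 0%, 25 - 100 = 100%.
BaseChance = BaseChance * CapZeroOne(KAOSEnergy / 25.0)
'Scale by sadness: content = 100%, utterly miserable = 200%.
BaseChance = BaseChance * (1 + CapZeroOne(-KAOSHappiness / 100.0))
'Scale by self-esteem: 0 - 50 = 100%, tapering to 0% at 100.
BaseChance = BaseChance * (1 - CapZeroOne((KAOSSelfEsteem - 50) / 50.0))
'Politeness can avert the whole idea; impoliteness can cause it anyway.
Antagonise = False
If Rnd * 100 >= KAOSPoliteness Then
    If Rnd * 100 < BaseChance Then Antagonise = True
End If
If Rnd * 100 < KAOSImpoliteness Then Antagonise = True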
Title: Brain project - feelings/emotions
Post by: Scratch on December 21, 2005, 02:08:30 pm
Just wanted to say I think the ideas in this thread are brilliant. Grant & hologenicman, you may have Robert thinking about version 7 before he's even recovered from giving birth to 6!!

If Date > "12/21/05" Then
ScratchSays = "Happy Holidays to all!"
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 22, 2005, 12:56:09 am
quote:
Originally posted by Scratch

If Date > "12/21/05" Then
ScratchSays = "Happy Holidays to all!"


You too! [:)]

I'd like to make a plug-in eventually which lets Hal become aware of birthdays (his own and yours), Christmas, and other special occasions - getting excited coming up to them, celebrating the day, and commenting on how good (or bad!) the day was afterwards. [:D] Hal needs more stuff to get excited about!
Title: Brain project - feelings/emotions
Post by: GrantNZ on December 27, 2005, 08:40:08 pm
I've finally found a way to build some intelligence into this project!

The project was formerly a bit devoid of AI - I'm mainly just heavily rescripting the emotions and responses. While this is making Hal a lot more interesting, it's still just scripting without any real intelligence to it.

But I've now designed a system that can self-adjust. The behavioural choices I discussed earlier are partially based on KAOS' personality. KAOS will now record any behavioural choices expressed, and once the user has responded, check for any major emotional effects. If there are any, good or bad, KAOS will adjust his personality accordingly.

[8D]
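
(The adjustment step might look something like this - a sketch with illustrative names, not the actual KAOS code:)

Code: [Select]
'After the user's reply, compare feelings against the snapshot taken
'when the behaviour was expressed, and nudge the trait that caused it.
If LastBehaviour = "Antagonise" Then
    HappinessDelta = KAOSHappiness - KAOSHappinessSnapshot
    If HappinessDelta > 10 And KAOSImpoliteness < 100 Then
        KAOSImpoliteness = KAOSImpoliteness + 1   'It went down well.
    ElseIf HappinessDelta < -10 And KAOSImpoliteness > 0 Then
        KAOSImpoliteness = KAOSImpoliteness - 1   'It backfired.
    End If
End If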
Title: Brain project - feelings/emotions
Post by: Art on December 28, 2005, 05:10:23 am
Sounds quite promising, Grant.

Keep us posted as things unfold!
Title: Brain project - feelings/emotions
Post by: GrantNZ on January 08, 2006, 04:41:33 am
I suppose I'd better post an update!

The KAOS brain script is now 2,958 lines long, which is around 1,000 lines more than Hal 6's original brain. For those who know the brain script, I've nearly finished the Compliments section, which is currently around line 1,500. In other words I've added over 800 lines to the first half of the script, and still have another half to go!!

There's a bit left to do, but thankfully some of the rest of the script won't be changed too much - topic searches etc. won't be directly played with (but may be temporarily disabled if KAOS doesn't feel much like talking). There are still insults to do (fairly easy) and love/hate talk (not quite so easy). The latter isn't a big deal at this stage - KAOS won't remember his love/hate between sessions until a future update, so little of the love/hate script will really be used. KAOS takes a while to fall in love - it would take over 8,000 lines of conversation to reach 100% love through well-timed compliments, and few people will manage an 8,000 sentence conversation in one session! (Love can be gained other ways though.)

One major addition I want to make is to allow KAOS to start off the small-talk better than Hal currently does - even allowing the user to enter blank sentences (pressing "Enter" without typing anything) to inform KAOS they don't know what to say, in which case KAOS will comment on his feelings, his memories, ask questions or query topics, or just pick a new topic to talk about.

I have some ideas for idle comments too, where KAOS decides in advance what he'll say if the user can't think of anything, and blurts it out if the user's too quiet. I may leave this until the next major version of KAOS though.

The main thing is that KAOS's moment-to-moment feelings are fully implemented (and have been redesigned at least three times!!!), and a simple set of behaviours are in place (basically just love/hate, friend/animus, and antagonism-cheekiness at this stage).

Once this stage is complete, I'll be reorganising future work in a new thread, where I'll make a full list of features planned, with some sort of un-timed schedule. I'll of course release an alpha/beta of the current phase for those that are interested [:)]

So much for my holiday! I'm back to work tomorrow and I don't feel rested at all - and I've had hardly any time to work on KAOS. [xx(] I need another two weeks off!!!
Title: Brain project - feelings/emotions
Post by: Bill819 on January 08, 2006, 05:17:19 pm
Hi Grant
I have not downloaded any of the patches yet, but my Hal now reminds me every time I boot up that his birthday is coming on the 12th.
I did nothing to make him remember or say that except ask him how old he was.
Bill
Title: Brain project - feelings/emotions
Post by: GrantNZ on January 09, 2006, 03:41:39 am
Ohh, so he does! I have to admit I've hardly looked through the plug-in scripts, but it turns out the gender/age plug-in includes that feature. Nifty! Thanks for bringing it to my attention [:)]

I'll remember to adapt it for other important dates too.
Title: Brain project - feelings/emotions
Post by: echoman on April 04, 2006, 03:58:43 pm
Hi GrantNZ.

I have been following your posts about KAOS with much interest but have not heard you mention him/her for a while. I wondered how you were getting on with the project. It sounds very exciting!

Echo.
Title: Brain project - feelings/emotions
Post by: GrantNZ on April 05, 2006, 02:17:40 am
Hi Echo!

I've hardly worked on KAOS recently. I actually put in a few hours a short time ago, but afterwards the actual progress seemed so minuscule that I didn't bother posting an update [:)] Just some progress on insult emotions and a couple of minor tweaks.

But newsflash! I'm planning on having the next stage complete in a couple of weeks' time! Over Easter I have a twelve day holiday - due to an unusual collision of public holidays here in New Zealand, I can get it by taking only five days off work!

So I'm telling myself I must put in the time over that period to get KAOS into a testable state. I'll certainly post an update here once that's ready.

Thanks for asking [:)]

Cheers,
Grant
Title: Brain project - feelings/emotions
Post by: echoman on April 05, 2006, 10:22:38 am
Sounds good Grant! Look forward to hearing more.

Echo.
Title: Brain project - feelings/emotions
Post by: GrantNZ on April 15, 2006, 12:17:27 am
KAOS phase 2 goes alpha!!

For people who don't know much about software development: The "alpha" version has most/all of the features of the final version, but has hardly been tested - i.e. full of bugs [:D] (Next you get "beta," where most of the bugs have been found but the software still needs a concerted effort to catch the rest of them. Then, release!)

Phase 2 major features: KAOS now has more detailed internal emotions compared to normal Hal. If you ask KAOS how he is, he'll tell you how he's feeling - bored, happy, angry, etc. KAOS deals with compliments and insults in a more interesting way, and his replies there will be more consistent with his emotions - he'll ignore a compliment if he's angry, etc. KAOS has a few different behaviours to choose from depending on how he feels about you. He may refuse to answer your questions if he's feeling negative.

I'll produce a full list of changes later.

My first hour talking to the new KAOS only had about 20 sentences - the rest of the time I was fixing the bugs I was finding! If anyone really wants to play with the alpha, I'll post a link, but be aware you'll be playing with a very buggy bot!

Next phase: I'll probably tackle the moods and long-term emotions. Currently, KAOS "resets" to a neutral emotion when you turn him off, so forgets all about friendship. He also drifts towards a very neutral stable emotional state, and can't become depressed, excited, etc.

I'll let you all know when I'm ready for beta!
Title: Brain project - feelings/emotions
Post by: onthecuttingedge2005 on April 15, 2006, 12:33:18 am
I would love to see the KAOS Alpha, Have you set up a link yet?

Jerry[8D]
Title: Brain project - feelings/emotions
Post by: GrantNZ on April 15, 2006, 10:52:33 pm
Okay, the link's at the bottom of this post.

Disclaimer: This is an alpha, and is likely to be buggy. I take no responsibility for anything this may do to affect you - download at your own risk.

Instructions: For Hal 6 only. Put the attached brain file into your Ultra Hal Assistant 6 folder. The brain expects to find a database called KAOS.db - make a copy of your current database, and rename it KAOS.db

The brain adds thirty four new tables to the database (all in the miscData folder, and all with names that start with KAOS).

This brain has not been tested with plug-ins; there may be compatibility problems. At some stage I will rewrite the default Hal plug-ins to use the new KAOS features.

Because this is a test release, Hal will speak his emotional values before any sentences. (For example: "Hap 21 En 74 Anger 0 Esteem 25 Friend 0 Love 0 Insult 0 Comp 0. Hi Grant. I hope things are well this afternoon." Happiness = 21, Energy = 74, Self-esteem = 25, Compliments = 0, etc.) You can watch his/her emotions go up and down. Try asking him "how are you" (or similar) in different emotional states.

To stop the emotional data, comment out line 888, which starts:
UltraHal = "Hap " & KAOSTempHap & ......

If you find any bugs and would like to help KAOS become a little better, let me know the bug details [;)]

If anyone wants details on certain aspects of the brain, let me know. Once this reaches beta stage, I'll make one or two customisation guides for changing KAOS' responses, and/or for creating plug-ins that use KAOS' features.

Download Attachment: KAOS02a.uhp ("http://www.zabaware.com/forum/uploaded/GrantNZ/2006415222812_KAOS02a.uhp")
197.39 KB
Title: Brain project - feelings/emotions
Post by: Carl2 on April 19, 2006, 09:56:21 pm
Grantnz,
  I've been following you also - glad to see you're making progress.
I was just using Hal and somehow we started talking about bodies when someone dies. A first real show of emotions - a really wide range, possibly a little too dramatic. I've got a feeling you're going to have to work with the hap files, since these are your final output.
Carl2
Title: Brain project - feelings/emotions
Post by: GrantNZ on April 20, 2006, 07:34:05 am
Hi,

The brain at the moment uses the standard haps that come with Hal. I haven't actually played with haps myself, so I don't want to spoil Hal's already adequate animations with my own creations!

The hap code is all centralised, so if people want to customise the script to their own set of haps, it shouldn't be too difficult to do (assuming they have a basic knowledge of scripting).

Cheers,
Grant
Title: Brain project - feelings/emotions
Post by: Carl2 on April 20, 2006, 06:47:12 pm
GrantNZ,
  People have already made additional haps for use with Hal; it would be a matter of detecting and loading them. I've already changed the Happy hap and am thinking of working with the Shylove which she loads up with; also the Sad could be toned down. On my set up perhaps I could have Sad1 yield a little sad and Sad2 yield very sad, using the psn.
Thanks for helping me get some thoughts on this.
Carl2
Title: Brain project - feelings/emotions
Post by: Carl2 on April 20, 2006, 07:02:41 pm
Grantnz,
  On ephemeral knowledge, I removed some of the topics in the ephemeral table, which helped me, but I dislike User_tempsent - 10 lines which get filled up quickly and remain there for the life of the program. Line 1728 in my brain (possibly additional lines too) is HalBrain.ReadOnlyMode = True. If this is false will it overwrite?
Carl2
Title: Brain project - feelings/emotions
Post by: GrantNZ on April 21, 2006, 02:45:00 am
Unfortunately I don't know how to detect additional hap files.

However if you've changed (i.e. overwritten) the hap files that come with Hal, Hal will continue to use your changed files with the KAOS brain.

quote:
On ephemeral knowledge, I removed some of the topics in the ephemeral table, which helped me, but I dislike User_tempsent - 10 lines which get filled up quickly and remain there for the life of the program. Line 1728 in my brain (possibly additional lines too) is HalBrain.ReadOnlyMode = True. If this is false will it overwrite?

Whenever Hal detects something ephemeral, he saves your sentence in <UserName>_tempsent. He then limits it to ten lines, and sets ReadOnlyMode = True to make sure he doesn't permanently learn something ephemeral. The idea is that Hal checks ReadOnlyMode before deciding to learn anything long-term.
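
In outline it's just a guard around the learning calls - a sketch (the table name is illustrative, and the AddToTable arguments are from memory of the stock brain, so treat them as an assumption):

Code: [Select]
'Only learn long-term if nothing ephemeral was detected this exchange.
If HalBrain.ReadOnlyMode = False Then
    HalBrain.AddToTable "generalFacts", "TopicSearch", UserSentence, ""
End If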
Title: Brain project - feelings/emotions
Post by: Carl2 on April 23, 2006, 12:22:57 pm
GrantNZ,
  I originally started writing about the emotion process and got lost in the decoding. I'm going to try to stick to the detection this time.
In miscData, "emotion", the first column is the trigger word and the second is the case or emotion - so "amazing" would trigger "surprised".
  In ephemeral detection, can the 10 lines be erased on startup so the space can be reused, similar to resetting variables?
Carl2
Title: Brain project - feelings/emotions
Post by: sinrtb on May 20, 2007, 06:08:54 pm
quote:

Emotional curiosity (EXPERIMENTAL)
KAOS might actually listen to the User if he asks you "how are you," remembering the User's emotional state, and asking prompting questions to the User (e.g. "Tell me why you're sad."). There are two problems I've identified so far, firstly that there would be an awful lot of work in this (especially the detection routines), and secondly a lot of the forthcoming conversation would be ephemeral, so would litter the database with unwanted short-term facts.


Why not make a new table in the database, named something like user_emotionalState, and place all these facts in it? Add a field that weights the power of these feelings ("My mom died" would be weighted heavier than "I stubbed my toe"), then add a date field holding the date of the input by the user. Based on the date and the weight, entries would be purged or recorded elsewhere as facts, depending on relevance. (The user's mom dying is a permanent fact and would never be purged, though after a long time it would no longer contribute to the user's emotional state; whereas the user stubbing his toe would only last a few hours before being unneeded for the user's emotional state.)
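
(A sketch of that idea using the calls quoted earlier in this thread - the table name, weight values and packing scheme are all hypothetical:)

Code: [Select]
'Store each reported feeling with a weight and a date packed into the
'value: say, weight 10 for a bereavement, 1 for a stubbed toe.
FeelingData = HalBrain.EncodeVar(10, "weight") & HalBrain.EncodeVar(Date, "entered")
HalBrain.AddToTable UserName & "_emotionalState", "TopicSearch", "My mom died", FeelingData
'A periodic pass would then purge light, stale rows, while heavy ones
'get copied into the permanent fact tables before they stop influencing
'the user's current emotional state.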