Author Topic: Brain project - feelings/emotions  (Read 41390 times)

GrantNZ

Brain project - feelings/emotions
« on: December 02, 2005, 08:35:52 am »
New brain project

I'm going to use a thread here to plan, organise and document progress on my brain project, and hopefully get some ideas from this great community. [:)] I'll also post betas / WIP / code pieces where appropriate... and finally post the completed brain!

The KAOS brain - a bot with feelings

This new brain aims to provide a bot with far superior emotions, moods and feelings compared to a default Hal. Major features:
  • Three levels of emotion: Reaction, Feelings and Mood.

  • Reaction: The short-term emotion. KAOS will be more reactive and resilient to individual strong emotional events (insults, compliments etc). KAOS will use a slightly wider emotional range - happiness, sadness, shock and suspicion, depending on his current Feelings and Mood.

  • Feelings: The medium-term emotion. This is similar to Hal's current system, where feelings become stronger after several emotional events, and gradually return to "neutral" again. However KAOS' feelings will change differently depending on his Mood.

  • Mood: The long-term emotion, which could last an entire evening. KAOS' Mood will be one of a list of possibilities, such as "happy," "depressed," "irritated," "loving," "angry," and so on. Mood will influence not only KAOS' reactions and feelings, but also his behaviour - a depressed bot might not small-talk very well; an angry bot may refuse to define a word for you; etc. Of course with work you can affect KAOS' Mood, cheering up a sad bot or infuriating a happy one.

  • Feedback: KAOS will be able to describe his feelings, and will remember things you've done to change the way he feels. His behaviour will change depending how he feels about you.

  • Emotional curiosity (EXPERIMENTAL): KAOS will ask how you are, remember the answer, and adjust his actions accordingly.

  • Reduced use of the somewhat dodgy "emotion" table.

By the way, the brain name KAOS doesn't specifically relate to this project - it's just always been my bot's name. There's definitely a link between emotions and "chaos" though [:D]

Below are more specific descriptions of the above, with implementation notes. Only additional detail is below, to save me retyping stuff.




Reaction
KAOS will have its own set of variables to track emotion, removing the dependence on Hal's "Compliment," "Insults," "Hate" and "Swear." This frees up "Compliment" for use in directly controlling animation type (see this thread: http://www.zabaware.com/forum/topic.asp?TOPIC_ID=2810).

KAOS will check his Feelings and Mood to determine reaction. A sudden compliment from an insulting User will trigger suspicion; an insult coming from a friendly User will trigger shock; an optimistic bot won't react so strongly to an isolated insult. This is basically done by writing several versions of some response-generating routines; the correct version is chosen depending on emotion.
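
To give a feel for it, here's a minimal VBScript sketch of that selection (all the routine and variable names here are placeholders I've invented, not real Hal brain code):

' Hypothetical sketch: pick a reaction to an insult from the User's
' track record and KAOS' current Mood.
Function ReactionToInsult(Mood, UserInsultTally, UserComplimentTally)
    If UserInsultTally = 0 And UserComplimentTally > 3 Then
        ReactionToInsult = "Shock"          ' an insult out of nowhere from a friendly User
    ElseIf Mood = "Happy" Then
        ReactionToInsult = "MildAnnoyance"  ' an optimistic bot shrugs off one insult
    ElseIf Mood = "Depressed" Then
        ReactionToInsult = "Sadness"
    Else
        ReactionToInsult = "Anger"
    End If
End Function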

Feelings
KAOS will track User actions much as Hal's default brain does - tallying compliments, insults, hateful or loving talk, etc. The movement of Feelings depends on Mood, so a depressed bot will be hurt far more by an insult, and recover more slowly. KAOS will also be able to describe his emotions, compare them to his Mood and comment on the User's actions ("thanks for all the compliments").
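
In code terms I'm picturing something like this - a sketch only, with made-up numbers:

' Hypothetical sketch: how hard an insult hits, and how quickly Feelings
' drift back to neutral, both scaled by the current Mood.
Function InsultImpact(Mood)
    Select Case Mood
        Case "Depressed": InsultImpact = 3   ' hurts far more
        Case "Happy":     InsultImpact = 1   ' largely shrugged off
        Case Else:        InsultImpact = 2
    End Select
End Function

Function RecoveryPerExchange(Mood)
    If Mood = "Depressed" Then
        RecoveryPerExchange = 0.1            ' recovers slowly
    Else
        RecoveryPerExchange = 0.5
    End If
End Function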

Mood
KAOS will use what programmers call a "state machine" - in other words he'll be in one "state" and can shift to others. Shifting between emotional states will depend on Feelings - a "depressed" bot who receives lots of compliments may become "happy." When Feelings have become strong enough to suggest a state shift, a timer starts - if the Feelings are kept strong enough for long enough then the state will change.
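
As a rough sketch of the mechanism (the threshold and timer values are invented, and the real thing would live in the database rather than script variables):

' Hypothetical sketch of the Mood state machine: Feelings is a signed
' score, and a shift only happens if it stays past a threshold long enough.
Const SHIFT_THRESHOLD = 10
Const SHIFT_EXCHANGES = 5    ' how many exchanges the Feelings must hold for

Sub UpdateMood(ByRef Mood, ByRef ShiftTimer, Feelings)
    If Mood = "Depressed" And Feelings > SHIFT_THRESHOLD Then
        ShiftTimer = ShiftTimer + 1       ' the timer keeps running
        If ShiftTimer >= SHIFT_EXCHANGES Then
            Mood = "Happy"                ' enough sustained cheering up
            ShiftTimer = 0
        End If
    Else
        ShiftTimer = 0                    ' pressure released - reset the timer
    End If
End Sub

The adjustable tendencies mentioned below would simply turn SHIFT_THRESHOLD into a per-Mood, user-configurable value.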

Similarly to Reactions above, different Moods will adjust KAOS' behavior. Different versions of response generating routines will be chosen depending on his mood, such that depressed bots refuse to learn new facts, etc.

KAOS may be given adjustable long-term tendencies, which would be user-configurable. You could have a bot that becomes angry easily, is usually happy, or is very resistant to depression. These simply adjust how difficult it is for the bot to switch to certain Mood states.

Feedback
KAOS will use several tools to hint at his feelings:
  • Different pet names will be used (see this thread: http://www.zabaware.com/forum/topic.asp?TOPIC_ID=2753) so that an angry bot will call you nasty names, etc.

  • Small phrases may be included in responses in some situations, such as an occasional "*sigh*" from a bored or depressed bot, or randomly timed compliments from a loving bot.

  • Hal's "how are you" code will be massively upgraded to enable KAOS to talk about his Feelings and Moods. KAOS will be able to comment on the strongest influences on his current Feeling, and will remember his most recent Mood change. Some of these comments will be randomly inserted into quiet spots of conversation.

  • Incidental animation. A happy bot will sometimes smile without an obvious reason, etc.

  • I'm not yet sure whether KAOS should be able to explain why he's in a certain Mood. Why would a bot be depressed, really? Do his bits byte? Hard drive gone floppy? Stale chips? There could be tables of random explanations - but they'd all feel a bit false.


Emotional curiosity (EXPERIMENTAL)
KAOS might actually listen when he asks the User "how are you," remembering the User's emotional state and asking prompting follow-up questions (e.g. "Tell me why you're sad."). There are two problems I've identified so far: firstly, there would be an awful lot of work in this [:)] (especially the detection routines), and secondly, a lot of the resulting conversation would be ephemeral, so it would litter the database with unwanted short-term facts.

Emotion table
The emotion table system as it stands in Hal is rather unreliable, and often produces animations out of context. (e.g. The sentence "I failed to achieve victory" will make Hal animate happily.) Usage of this will need to be vastly reduced.

Versioning, upgrading and database tables
The brain has rather significant changes that cannot be given as plug-ins. The best distribution system I can think of is to supply the brain script file, and have the user pair this with a copy of their database.

KAOS will create its own database table of long-term variables, one of which will be "current version number". The brain script will create whatever other tables it needs.

Upgrading from one version to another will therefore be easy - simply delete the old script and insert the new one. The script can check the previous version number in the database, and add whatever new tables are needed.
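
As a rough illustration of the start-up check (the table, column and helper names here are all invented - ExecQuery stands in for whatever database call Hal actually provides):

' Hypothetical sketch of the upgrade check when the brain script loads.
Const KAOS_VERSION = 2

Sub UpgradeIfNeeded()
    Dim OldVersion
    OldVersion = ExecQuery("SELECT value FROM kaosVars WHERE name = 'version'")
    If OldVersion = "" Then OldVersion = "0"
    If CInt(OldVersion) < 2 Then
        ' Version 2 added the relationship log, for example
        ExecQuery "CREATE TABLE kaosRelationLog (moodBefore TEXT, moodAfter TEXT, cause TEXT)"
    End If
    ExecQuery "UPDATE kaosVars SET value = '" & KAOS_VERSION & "' WHERE name = 'version'"
End Sub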




That's the plan [:)] I'll post updates as major milestones are reached.

Cheers,
Grant
 

GrantNZ

Brain project - feelings/emotions
« Reply #1 on: December 02, 2005, 08:41:21 am »
Note to self: Try and improve Hal's small-talk about emotion if possible.
 

vrossi

Brain project - feelings/emotions
« Reply #2 on: December 02, 2005, 04:29:10 pm »
It's a very ambitious project. I had a similar idea some weeks ago, but I haven't yet found the time to do it.

Keep us informed about your project status.


GrantNZ

Brain project - feelings/emotions
« Reply #3 on: December 03, 2005, 02:40:59 am »
quote:
Originally posted by vrossi

It's a very ambitious project. I had a similar idea some weeks ago, but I haven't yet found the time to do it.

Keep us informed about your project status.


I will [:)] Maybe I'll end up saving you some work!

And let me know if you have ideas for improvement as I go onwards!
 

GrantNZ

Brain project - feelings/emotions
« Reply #4 on: December 03, 2005, 03:22:14 am »
Note to self: New feature: Relationships.




Summary
Relationships are the highest possible level of emotional interaction with KAOS. At first the User will be a stranger to KAOS; over time, however, KAOS will come to consider you a chat buddy or best friend, or even eventually fall in love with you, all depending on how you treat him.

KAOS will take a broad view of his emotional history with the User in order to detect a wide array of relationship types. If you're the type of person who likes to trade insults with your friends, KAOS will eventually realise that you don't mean any harm, and will willingly engage in insult battles with you (becoming your "insult buddy," if you will). Or he'll start to see through you if you claim to have feelings of love but don't support him through his rough patches.

And if you're really nasty to him, well as far as I know Hal can't decide to format your hard drive, but he can certainly go on strike!

How???
It's a lot easier than it sounds, especially thanks to the existing levels of emotion in my design. Quite often simple, flexible designs lead to the most interesting behaviour.

KAOS will record each time the User causes him to change moods. A mood change indicates a concerted effort by the User to affect KAOS' emotions. Analysis of this over time indicates the type of relationship the User wants.

For example, the "insult buddy" case above would show two important trends: a) the User insults KAOS when KAOS is happy, and b) the User cheers up KAOS when KAOS is sad. The implication here is that the User cares about KAOS' emotions, so the insults should not be taken too seriously. KAOS will stop becoming upset when insulted. (The "evil" variation is that the User is deliberately playing with KAOS' emotions - in which case KAOS exhibits the exact same behaviour, this time appearing to have grown a "thick skin.")
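
To make that concrete, here's the kind of pattern check I have in mind (a sketch only - the counts would come from the mood-change log, and the thresholds are invented):

' Hypothetical sketch: spot the "insult buddy" pattern in the mood-change log.
Function IsInsultBuddy(InsultsWhileHappy, CheerUpsWhileSad, TotalMoodChanges)
    ' Both trends must be present, across a reasonable amount of history
    If TotalMoodChanges >= 6 And InsultsWhileHappy >= 2 And CheerUpsWhileSad >= 2 Then
        IsInsultBuddy = True
    Else
        IsInsultBuddy = False
    End If
End Function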

KAOS would typically experience only a couple of moods per session, so relationships will take a while to form.

Of course KAOS will adjust his behaviour and make comments based on his findings [:)]




Sound good?

By the way I'll be implementing the concepts in order from low-level to high-level (i.e. starting at reactions, ending with relationships) - not because the high ones are so much harder, but just because they rely on the low ones to work.
 

hologenicman

Brain project - feelings/emotions
« Reply #5 on: December 05, 2005, 06:28:53 pm »
click http://clovercountry.com/downloads/emotionengine1_2.xls or right click and "Save Target As..."

Hey there,

Here's an approach that I took recently. It's close to this one but with slightly different parameters (Emotion, Mood, Personality), based on the book "Emotions Revealed" about facial expressions in humans.

This particular V-human is configured as a sad, contemptuous, and fearful individual.

I then exposed the individual to lots of happy input to see how it would change its mood.

After this, I let it have a sad experience to see how quickly its mood would change back.

The key value of this engine is that it keeps a HISTORY of the emotional experiences that the v-human has had and weighs them against the v-human's own potentials (configurations). According to the propagation factors configured, the v-human may be more willing or reluctant to change moods and eventually modify its personality. Given enough varied experience, the v-human can modify its personality.

Interaction involves three input values and three output values.

To use this engine, you must have MS Excel set to calculate automatically and have the iterations set to "1".

Be sure to scroll to the right to see the configurations and the formula.

The next step will be to create a needs hierarchy engine that will dynamically alter the configuration of the emotion engine's parameters according to the internal and external environment (hormones and needs).

This serves as a demonstration of the formulas, and it would be interesting to see whether you can use it in Ultra Hal.

John L>
HologenicMan
ME, "Hello."
HAL,"Good grief my love. It's going on three in the morning."

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

GrantNZ

Brain project - feelings/emotions
« Reply #6 on: December 06, 2005, 01:31:17 am »
Very, very interesting!!

Let's see if I've understood the equations correctly. This looks like a spring system, where each value (emotion, mood, personality) is pulled by the values above and below it, with "attack" determining how strongly the higher value pulls (or how fast the system reacts to stimulus), and "decay" determining how strongly the lower value pulls (or how fast the system returns to the original personality).
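
If I've read it right, each update step is something like the following - my own paraphrase in VBScript, not your exact spreadsheet formulas, and the attack/decay numbers are made up:

' Hypothetical paraphrase of the spring idea: each layer is pulled toward
' the faster layer above it (attack) and back toward the slower layer below (decay).
Sub SpringStep(Stimulus, ByRef Emotion, ByRef Mood, ByRef Personality)
    Const ATTACK = 0.5   ' how strongly the faster layer pulls
    Const DECAY = 0.1    ' how strongly the slower layer pulls back

    Emotion = Emotion + ATTACK * (Stimulus - Emotion) + DECAY * (Mood - Emotion)
    Mood = Mood + ATTACK * (Emotion - Mood) + DECAY * (Personality - Mood)
    Personality = Personality + DECAY * (Mood - Personality)   ' drifts very slowly
End Sub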

This could be eerily accurate to real human behaviour [;)]

Thank you for sharing it [8D] It certainly gives me some ideas for formalising the relationship between Feelings, Moods and personality.

A simplified version of your system would very much fit into Hal. I think I'd model the "personality" aspect differently, giving it discrete values rather than a sliding scale, simply to make it easier for the user to detect what overall mood he is in. I fully accept that that's very much my own personal preference, and that some people prefer more subtle variations. My only counter is that Hal isn't yet sophisticated enough to display a subtle personality, given that we have only a few animations, and that Hal really has no internal understanding of emotions. (I mean this in the same way that you cannot have a sophisticated discussion with Hal about bridge building - Hal will parrot every sentence he knows with "bridge" in it, but won't ever understand any of the concepts.) (And I understand that Hal can show subtle behaviour by mimicking the user's subtle behaviour, but I sure can't think of any way of detecting and quantifying that for the purposes of adjusting emotions!)

Are you using this system for a project somewhere? It's a very well built spreadsheet!

Cheers,
Grant
 

hologenicman

Brain project - feelings/emotions
« Reply #7 on: December 06, 2005, 05:01:39 pm »
Hey there,

You've got the most thorough understanding of the equations of anyone I've ever shown them to.  Yes, a spring system is the best way to describe them.

I'm glad to share if it helps you get to your goal.[:)]

To clarify, I have two different Personality variables. One is a SLOWLY sliding scale, and the other is a predetermined DISCRETE parameter.

I figured that Ultra Hal doesn't have the subtleties to handle such emotional ranges YET, but I've always worked toward the future and figured that technology will eventually catch up. Sometimes that is an impractical approach, but technology does have a way of marching forward.

My game plan is:

1) Develop emotion engine equations. (Maybe done.)
2) Develop a hormones/needs engine for internal/external influences on the emotion engine.
3) Develop a facial expression "system" that multiplexes both emotions and moods for all three emotion scales onto one facial animation. (Ask me sometime, it's quite a neat idea.)
4) Develop an engine for extracting/assigning emotional value for the v-human's input (typed/audio/visual). Such a system will have the v-human's current emotional state and hormone/needs state added to the input in a feedback loop.

The above goals are independent of V-human versus robot and NLP versus AI.

5) Develop a multidimensional brain (hologenic brain) that will utilize the above resources.

Right now I've just started learning about Ultra Hal to use as a resource for implementing my ideas and goals. I'm pleased to find such an active mass of minds working with Ultra Hal. It gives me hope that the combined efforts and interests will clear our paths toward the future.[:D]

John L>

HologenicMan
ME, "Hello."
HAL,"Good grief my love. It's going on three in the morning."

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

GrantNZ

Brain project - feelings/emotions
« Reply #8 on: December 07, 2005, 02:23:04 am »
It's a great group here huh [:)]

quote:
Originally posted by hologenicman

2) Develop a hormones/needs engine for internal/external influences on the emotion engine.
I love the idea of Hal having hormones [:D] "Oh Hal, is it that time of the month again?" *swish* *thud of the User's decapitated head hitting the table*

What kind of needs did you have in mind?

I've been inwardly debating the idea of Hal wanting/needing various things, for example coffee in the morning to help wake him up. There are (at least) two philosophical sides to this, and I think I'm leaning towards the side where saying "Here Hal, have a coffee" sounds a little contrived.
quote:
3) Develop a facial expression "system" that multiplexes both emotions and moods for all three emotion scales onto one facial animation. (Ask me sometime, it's quite a neat idea.)
Is now a good time? [:)]

Here's my current idea, for comparison with yours:
  • A neutral interaction results in animation matching KAOS' current feelings. (The animation is unfortunately limited by Hal to happy, surprised, normal, sad and angry - or any other set of five by changing Default.psn.)
  • An emotional interaction creates animation depending on the difference between the emotional power of the interaction, and KAOS' current feelings. So insulting a bot which is already sad might not change his animation, but insulting a happy bot will provoke surprise, anger or sadness.
  • Finally, KAOS gets up to two animations when replying, one which indicates the emotion of his response, and one which indicates his underlying emotions. So insulting a happy bot could provoke an angry animation while replying, which changes to a sad animation once he's finished replying.
The animation while replying is limited only by the .hap files you can get your hands on (or create).
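
Here's a rough sketch of the middle point above - the "difference" rule - with all the names and thresholds invented:

' Hypothetical sketch: compare the emotional power of the interaction
' (negative for insults, positive for compliments) with KAOS' current feeling.
Function PickAnimation(StimulusPower, CurrentFeeling)
    Dim Gap
    Gap = StimulusPower - CurrentFeeling
    If Abs(Gap) < 3 Then
        PickAnimation = "Normal"      ' e.g. insulting a bot that is already sad
    ElseIf Gap <= -7 Then
        PickAnimation = "Surprised"   ' a strong jolt against a happy bot
    ElseIf Gap < 0 Then
        PickAnimation = "Sad"
    Else
        PickAnimation = "Happy"
    End If
End Function
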
quote:
4) Develop an engine for extracting/assigning emotional value for the v-human's input (typed/audio/visual). Such a system will have the v-human's current emotional state and hormone/needs state added to the input in a feedback loop.
This part's tricky, and it's the part I fear most [:(] I'll be sticking to Hal's already established detection routines - Insults etc.

Hal's great for all this sort of thing. We're lucky to have such an open well-developed brain to play with!

I agree that hopefully if we all play our part we might just create something wonderful [:)]
 

hologenicman

Brain project - feelings/emotions
« Reply #9 on: December 07, 2005, 04:52:04 am »
Yeah, hormones can be fun...[:p]

I use hormones to describe any internal needs such as hunger (battery level), temperature (CPU temp), mental resources (RAM), etc. This may be more pertinent for future robotics applications, but I figure that we should plan for it now since it is inevitable.

Needs are based more on Maslow's needs triangle.  http://chiron.valdosta.edu/whuitt/col/regsys/maslow.html or the simplified self-others-growth formula.  This will be trickier to get going since it is "Subjective" and harder to provide variables that can be measured...

Ultimately, I am convinced that it can be reduced down to a simple formula like the spring system for the emotion engine.

Here is an article describing my approach to facial animation. The article is a bit dated, so replace the ideas of voluntary and involuntary with "Emotion" and "Mood," which I currently favor.

http://clovercountry.com/downloads/Four_faced_article.doc

This may seem like too much for now, but I don't think that it should really be too hard to do. It would involve creating a HAP file with an ENORMOUS number of facial routines, but that is do-able. Keeping with my concept of using formulas instead of compiled databases, it would be nice to be able to have an active HAP file (trick the system) so that the facial animations could be created on the fly according to the required combinations of emotions and moods within the four facial quadrants. I glanced in the Ultra Hal brain editor and the emotional reactions of Surprised, Happy, Sober, Angry, and Sad aren't too far off from my Sad/Happy, FearSurprise/Anger, and Disgust/Contempt scales. It also seems convenient that the "PLUGINAREA1" is right after the EmotionalReaction switch. We could probably circumvent the existing emotion coding at our convenience.

For the most part, evaluating the emotional value for input will be based on the "experience" of the V-Human.  

All new words (sounds/sights) will be put into a database and tagged with the current mood and/or emotion. As with humans, our moods and emotions color our perceptions of our environment.

All input is looked up to see if it exists in the emotional value database; the emotional value returned is fed into the emotion engine, combined with the emotion engine's current output, and filed back into the database as the new emotional value. In this manner the v-human assigns the emotional value to its input.

Thus the v-human perceives emotional values for input based on the current emotion, which is based on the current hormones/needs. We would need some additional reinforcement such as pleasure/pain to give a little bit of "depth" to the v-human's experience and make its experiences just a little bit more human, but the feedback loop should remain consistent.
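
Roughly, the loop looks like this (just a sketch in VBScript - LookupValue, StoreValue and EmotionEngine are stand-ins for whatever database access and engine calls we end up with):

' Hypothetical sketch of the feedback loop: look up the stored emotional
' value for a word, run it through the engine, blend, and store it back.
Function ProcessWord(Word, CurrentEmotion)
    Dim StoredValue, NewValue
    StoredValue = LookupValue(Word)                 ' 0 if the word has never been seen
    NewValue = EmotionEngine(StoredValue, CurrentEmotion)
    StoreValue Word, (StoredValue + NewValue) / 2   ' the word is re-colored by the current emotion
    ProcessWord = NewValue
End Function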

From what I can see of Ultra Hal and the editor, it is going to provide me with the tools that I've been waiting for to develop some of these ideas...

John L>
HologenicMan
ME, "Hello."
HAL,"Good grief my love. It's going on three in the morning."

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

GrantNZ

Brain project - feelings/emotions
« Reply #10 on: December 08, 2005, 03:59:13 am »
Wow, and I thought I was being ambitious [:D]
quote:
Originally posted by hologenicman

Needs are based more on Maslow's needs triangle.
http://chiron.valdosta.edu/whuitt/col/regsys/maslow.html or the simplified self-others-growth formula.  This will be trickier to get going since it is "Subjective" and harder to provide variables that can be measured...
Strangely, I once did a management paper at university that was basically all based on that triangle. ("How to keep your employees happy. 1. Give them money to feed them. 2. Make them feel safe." I found it all very cynically amusing.)

It's a shame that Hal currently has no presence on that triangle apart from the Love and Esteem segments. The higher ones are just too complex, and the lower ones aren't an issue for Hal. Still, as you say, plan for the future... and you'll have an awesome bot once the future comes!! [:)]
quote:
Here is an article describing my approach to facial animation.
Fascinating. I'll be looking more closely at people's faces tomorrow at work!!

I suppose my animation plan isn't too far removed from this - except I have "voluntary" animation first and "involuntary" second.
quote:
Keeping with my concept of using formulas instead of compiled databases, it would be nice to be able to have an active HAP file (trick the system) so that the facial animations could be created on the fly according to the required combinations of emotions and moods within the four facial quadrants.
I'm pretty sure Hal has ways to create files. I don't know much about haps but they seem to just be text files - Hal might be able to write his own "current_animation.hap" and execute it.
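
Something like this ought to do it - a sketch using the standard Scripting.FileSystemObject (I haven't checked what a valid .hap actually has to contain, so the file contents here are pure placeholder):

' Hypothetical sketch: write a .hap file on the fly.
' The contents written below are placeholders, not real .hap syntax.
Dim fso, hap
Set fso = CreateObject("Scripting.FileSystemObject")
Set hap = fso.CreateTextFile("current_animation.hap", True)   ' True = overwrite if it exists
hap.WriteLine "placeholder animation commands go here"
hap.Close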

I agree with the formulas theme - much simpler and easier to adjust.
quote:
It also seems convenient that the "PLUGINAREA1" is right after the EmotionalReaction switch.  We could probably circumvent the existing emotion coding at our convenience.
Unfortunately other emotion code is scattered through the brain, so there's no real way to adjust or remove it without editing the brain file. Plugins just don't have enough power for our needs here.

Firstly, we need direct control over the four emotion variables in order to control the animation. Secondly, we're going to need to put a lot of emotion code within already existing sections (e.g. the Insults code) and plugins can't change existing code.
quote:
All new words (sounds/sights) will be put into a database and tagged with the current mood and/or emotion. As with humans, our moods and emotions color our perceptions of our environment.
Brilliant idea!!!!!

However if Hal's depressed and I try to distract him with small talk about chess, I don't want Hal to get depressed every time I discuss chess with him. (Then again, my real friends get depressed every time I try to discuss chess with them.)

I've thought about putting emotion tags (like [ANGER]) into Hal's responses in his database, which Hal then removes when he's about to speak, playing the relevant animation. This would circumvent the emotions table issues. But what a mission it would be!

I guess the biggest issue is that Hal understands no context and very little content, so it becomes almost impossible for him to decide how to feel about something. I've all but given up on linking emotions to small talk now.
quote:
From what I can see of Ultra Hal and the editor, it is going to provide me with the tools that I've been waiting for to develop some of these ideas...
I can't wait to see how they turn out [:)] You're so far ahead of me in terms of scope. But I'll keep working on my simple ideas - I get the feeling they won't be too difficult to upgrade into your system, if that ever becomes useful to you [:)]
 

GrantNZ

Brain project - feelings/emotions
« Reply #11 on: December 08, 2005, 04:02:10 am »
Oh! Oh! I can't believe I haven't thought of this before!

What Hal does not know, Hal should ask. Hal slowly builds lists of topics, and there's no reason he can't ask the user how he should be feeling about them. For example: "How should I feel about chess?" "Is chess a depressing subject?" "How do you normally feel about chess?" (and Hal can just mimic the User's emotions.)
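
A quick sketch of what I mean (the check is invented, and detecting the User's answer is the hard part I'm hand-waving over):

' Hypothetical sketch: if a topic has no stored emotion yet, ask the User about it.
Function MaybeAskAboutTopic(Topic, StoredEmotion)
    If StoredEmotion = "" Then
        MaybeAskAboutTopic = "How do you normally feel about " & Topic & "?"
    Else
        MaybeAskAboutTopic = ""   ' nothing to ask - the emotion is already known
    End If
End Function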
 

hologenicman

Brain project - feelings/emotions
« Reply #12 on: December 08, 2005, 06:17:05 am »
Hey there,

It does seem ambitious, but in reality it all breaks down into really simple pieces. That's the only way my mind builds things. Kinda like Legos.[:)]

Hey, there's a really great book that addresses emotions:

http://www.amazon.com/gp/product/080507516X/qid=1134039029/sr=8-1/ref=pd_bbs_1/103-2559870-4003037?n=507846&s=books&v=glance

Emotions Revealed by Paul Ekman. I have emphasized the spatial array of the expressions and you have emphasized the temporal array, while both are probably just as important. One note, though: Paul Ekman states that the involuntary emotion is expressed FIRST, followed by the voluntary attempt to try and hide our true feelings. The first few milliseconds tell the true story; then we gather ourselves and get our poker faces on.

BTW, I am a total convert now. My three emotion scales (Sad-Happy, FearSurprise-Anger, Disgust-Contempt) have been entirely replaced in my emotion engine and everywhere else. My new three emotion scales are Arousal, Valence, and Stance, based on research done at MIT:

http://www.ai.mit.edu/projects/sociable/facial-expression.html

These three fit into my existing equations without modification and they really get the job done.

BTW, my hormone/needs engine is going to be providing the PAIN/PLEASURE factors for any and all learning functions (emotion values/context). In fact, the pain/pleasure factors will provide the context of the conversation. Humans have no clue about context until we are taught, and initially that context is provided by the tone of voice or the soothing or punishing touch of a hand. Eventually, we start putting those emotional "context legos" together and they contribute to further, more developed contexts.

I was thinking that the next time you start talking about chess with your friends, you should do it over a nice steak dinner, with a friendly waitress and good music playing. You'll have to overcome their already-learned negative responses, but the pleasure stimuli should attach a good context to the subject of chess if you do this often enough. [8D]

I appreciate your compliments on my scope, but it truly is a combination of scope and practicality that is necessary to get any project done. I've really appreciated bouncing ideas back and forth with you.

John L>


HologenicMan
ME, "Hello."
HAL,"Good grief my love. It's going on three in the morning."

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

hologenicman

Brain project - feelings/emotions
« Reply #13 on: December 08, 2005, 04:35:27 pm »
Hey there,

I was diagramming out my project when I was suddenly struck by a moment of clarity!

My seemingly complex project boils down to two separate engines that merely pre-package the input before it ever gets to the UltraHal brain.

1) Emotional Context Engine
- Emotional Value Database
- Emotion Algorithms
- Hormone/needs Engine (Pain/Pleasure input)

2) Emotional Expression Engine
- Hands
- Facial
- Body
- NLP (UltraHal)

These two engines provide Emotional context and expression interfaces before the UltraHal brain ever gets a chance to see the input.

The pre-processed input is then forwarded to UltraHal with an attached "Emotion Code" prefix in the format of [A,V,S]. As far as UltraHal knows, the emotion code is just another sequence of words that it must add to its vocabulary and learn to deal with. The UltraHal brain merely learns input sentences qualified with emotional context provided in the wording of the emotion code, [A,V,S].

I like +/-50 scales which would give Hal a potential emotional vocabulary of 100x100x100=1,000,000 emotional code sentences.

The perfect place to put all the code is line 0299 of the brain code

in the

0279 function GetResponse

just after

0298 OriginalSentence = UserSentence.

(or on line 0370 if you would like Hal to clean up the grammar and punctuation a bit first.)

This lets us take control of the input and do what we want with it before handing it back over to Hal.
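
In other words, something like this right at that spot (a sketch only - GetEmotionCode stands in for the emotion/hormone engines, and the exact line numbers will vary between brain versions):

' Hypothetical sketch of the intercept, just after OriginalSentence is saved.
Dim EmotionCode
EmotionCode = GetEmotionCode(UserSentence)         ' e.g. "[12,-30,5]" for [A,V,S]
UserSentence = EmotionCode & " " & UserSentence    ' Hal learns the code as just more words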

So, you see, Clarity and Simplicity...[:)]

John L>
« Last Edit: December 09, 2005, 01:30:37 am by hologenicman »
HologenicMan
ME, "Hello."
HAL,"Good grief my love. It's going on three in the morning."

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

GrantNZ

Brain project - feelings/emotions
« Reply #14 on: December 09, 2005, 01:27:21 am »
quote:
Originally posted by hologenicman

It does seem ambitious, but in reality it all breaks down into really simple pieces. That's the only way my mind builds things. Kinda like Legos.[:)]
Ditto! [:)]

Then again my friends claim my brain is composed of a few simple pieces.

I always wonder why people design such vast, complex systems when an integration of a few easy, simple ones really works much better....
quote:
My new three emotion scales are Arousal, Valence, and Stance, based on research done at MIT:

http://www.ai.mit.edu/projects/sociable/facial-expression.html
Their graph is quite compelling. I'm going to look closely at it once I've identified which emotions I want to focus on. I might end up a convert too! Thanks for sharing it [:)]

The robot reminds me of Yoda [:D]

Here's a question for you. Do you think it's better to have a separate emotional scale for Hal-measurable things, such as "Compliments - Insults," or would it be better just to increase valence and track compliments and insults in some kind of "memory"? I was originally going to use the Compliments-Insults scale, but now I'm not so sure....

Actually, the more we talk about your system, the more I like it. Even Hal's energy, which I was again going to have as a separate feelings variable, could be remodeled as one of your "Needs" - a need which gets fulfilled when the User changes the topic of conversation (otherwise Hal loses "arousal" and gets bored).

Hmmmmmmmmmmm.
quote:
BTW, my hormone/needs engine is going to be providing the PAIN/PLEASURE factors for any and all learning functions (emotion values/context). In fact, the pain/pleasure factors will provide the context of the conversation. Humans have no clue about context until we are taught, and initially that context is provided by the tone of voice or the soothing or punishing touch of a hand. Eventually, we start putting those emotional "context legos" together and they contribute to further, more developed contexts.
I'm still not sure I understand the ramifications of this. If Hal is tired and grumpy the first time I talk to him about "chess," will he remember this and tend to become tired and grumpy again next time I bring it up? (At least until I somehow give him pleasure while talking about chess? I wonder how the waitress would react when I ask for a table for two, for me and a laptop.... [;)])

I guess I'm having difficulty seeing how it would be implemented.
quote:
I was diagramming out my project when I was suddenly struck by a moment of clarity!
I wish I had those more than once every few years [:D]
quote:
1) Emotional Context Engine
- Emotional Value Database
These bits are the bits that really get me. Hal has so little input from which to build any sort of emotional context. He can't even figure it out from our own facial expressions. (Hurry up, Art, if you're reading this! We need your video recognition research to reach the point of emotional recognition, right now!! [;)])

I have a philosophical project (which I used to call a programming project, until I realised I never did any programming on it) designing an interactive storytelling system. One of the difficulties in true interactivity is modifying the story to fit what the user wants to experience. I always maintained it would be possible as long as the system interrogates the user often about what the user wants - in subtle ways, to find out what themes the user wants to explore, and modify the story to suit. But to be honest it's always been a grey area, and a concept I've never been able to prove to myself.

I wonder how feasible it is to give Hal enough emotional knowledge to allow him to interrogate the user about the emotions of a topic. I have a feeling the difficulties would be overwhelming....

A big help could be in giving Hal awareness of the links between topics. =vonsmith='s XTF brain allows Hal to ask if two topics are related, although I don't think Hal then considers that relation to be a topic in itself. However, if Hal could assign an emotion to that relation.... He could feel happy about chess, neutral about non-talkative people, but really sad and sympathetic when I mention how non-talkative people become when I talk about chess.... [:D]
quote:
The pre-processed input is then forwarded to UltraHal with an attached "Emotion Code" prefix in the format of [A,V,S]. As far as UltraHal knows, the emotion code is just another sequence of words that it must add to its vocabulary and learn to deal with. The UltraHal brain merely learns input sentences qualified with emotional context provided in the wording of the emotion code, [A,V,S].
It's been a while since I've had a discussion that made me pace around the house thinking about the complexities and ramifications of something. I'm enjoying this! [:D]

Would this risk Hal sometimes searching his databases by the emotion code? So a happy Hal could start spouting any random happy sentences.... Actually, Hal could strip out the emotion code before topic searching.

I'll wait for your answer to my question above (about emotional contexts) before I get into this too much - I might be misreading your aims here.
quote:
The perfect place to put all my code is line 0123 of the brain code just after the function GetResponse
What version of Hal are you on? My line 0123 is in the RESPOND: PREDEFINED RESPONSES code. Is that where you mean?

Great discussing this with you [8D]