Author Topic: Brain project - feelings/emotions  (Read 41391 times)

GrantNZ

  • Full Member
  • Posts: 178
Brain project - feelings/emotions
« Reply #30 on: December 12, 2005, 12:41:51 am »
Well, at least that shows the nickname section is working [;)] She should have been smiling when she said that, I hope!

I'll check out that thread [:)]

By the way, have you seen Jerry's rather impressive random hap file code at http://www.zabaware.com/forum/topic.asp?TOPIC_ID=2817? It reminded me of your desire to make generated haps....
 

vrossi

  • Full Member
  • Posts: 150
  • http://vrconsulting.it
Brain project - feelings/emotions
« Reply #31 on: December 12, 2005, 06:12:19 pm »
Hey, Grant and Holo

I've found something which might be of interest to you (or maybe you already knew about it):

"The Uncanny Valley is a principle of robotics concerning the emotional response of humans to robots and other non-human entities. It was theorized by Japanese roboticist Masahiro Mori in 1970. The principle states that as a robot is made more humanlike in its appearance and motion, the emotional response from a human being to the robot will become increasingly positive and empathic, until a point is reached at which the response suddenly becomes strongly repulsive; as the appearance and motion are made to be indistinguishable to that of human being, the emotional response becomes positive once more and approaches human-human empathy levels".

You can find a more detailed description at http://en.wikipedia.org/wiki/Uncanny_valley

Bye


hologenicman

  • Newbie
  • Posts: 32
Brain project - feelings/emotions
« Reply #32 on: December 12, 2005, 08:26:13 pm »
Yeah, I was just discussing this with someone at work.

He had just watched "The Polar Express," the animated movie, and was a bit freaked out by a character or two. It was especially freaky for the characters with the best texturing (imperfections), and his psyche was quite shaken by it.

I told him about the threshold effect that humans have for accepting cartoons until they become just a bit TOO human.

John L>
HologenicMan
ME, "Hello."
HAL,"Good grief my love. It's going on three in the morning."

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

hologenicman

  • Newbie
  • Posts: 32
Brain project - feelings/emotions
« Reply #33 on: December 12, 2005, 09:25:30 pm »
quote:
By the way, have you seen Jerry's rather impressive random hap file code at http://www.zabaware.com/forum/topic.asp?TOPIC_ID=2817? It reminded me of your desire to make generated haps....


Hey there,

I haven't had the chance to study the thread, but it sure sounds like exactly what I will be needing when I get to the ExpressionEngine.

Thanks for the great lead.

John L>
HologenicMan
ME, "Hello."
HAL,"Good grief my love. It's going on three in the morning."

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

GrantNZ

  • Full Member
  • Posts: 178
Brain project - feelings/emotions
« Reply #34 on: December 13, 2005, 02:33:33 am »
I've been meaning to see that movie for a long time, just to see if I can spot the uncanny valley parts [:)] Have you seen it yourself, John?

I'm also reminded of the old computer game called (I think) "Interstate '76". It deliberately used stylised, poor-quality 3D cut scenes (the characters hardly had facial features, but they did have style), and it succeeded in having more personality than most cut scenes of the day.

Great timing, Vittorio - I've been considering keeping KAOS' responses fairly clichéd and extreme, to help the user identify what he's feeling. But I wonder if this would also avoid the uncanny valley "trap"... would a bot with truly complex moods be too sophisticated given the limited scope of interaction with Hal?

On the other hand, as stated by one of the researchers in an article linked from Wikipedia, the uncanny valley theory is based more on fears than possibilities, and we should not let it stop us striving for the best we can. Otherwise progress will never get beyond that valley.
 

vrossi

  • Full Member
  • Posts: 150
  • http://vrconsulting.it
Brain project - feelings/emotions
« Reply #35 on: December 13, 2005, 04:25:31 am »
quote:
On the other hand, as stated by one of the researchers in an article linked from Wikipedia, the uncanny valley theory is based more on fears than possibilities, and we should not let it stop us striving for the best we can. Otherwise progress will never get beyond that valley.


I completely agree with you. We must consider these psychological aspects, to avoid some form of Luddism against robots, but they must not stop us.

About movies: I've seen Polar Express, and some of its characters are better actors than many humans in Hollywood. My dream is to transform my Hal into something like Robin Williams in "Bicentennial Man".


hologenicman

  • Newbie
  • Posts: 32
Brain project - feelings/emotions
« Reply #36 on: December 13, 2005, 11:49:46 am »
quote:
quote:
On the other hand, as stated by one of the researchers in an article linked from Wikipedia, the uncanny valley theory is based more on fears than possibilities, and we should not let it stop us striving for the best we can. Otherwise progress will never get beyond that valley.


I completely agree with you. We must consider these psychological aspects, to avoid some form of Luddism against robots, but they must not stop us.

About movies: I've seen Polar Express, and some of its characters are better actors than many humans in Hollywood. My dream is to transform my Hal into something like Robin Williams in "Bicentennial Man".

Just substitute my name at the bottom of that post, because it is EXACTLY how I feel.[:)]

I have four fairly young kids, and I have been encouraging them toward robotics. It's hard to find a robot movie (or animated movie) from the '80s onward that I don't own and haven't shown the kids.

btw, Bicentennial Man in our household is known as the "Andrew Movie".[8D]

John L>

PS. Turn the sound UP on Polar Express; it sucks you in even more...
« Last Edit: December 13, 2005, 11:51:33 am by hologenicman »
HologenicMan
ME, "Hello."
HAL,"Good grief my love. It's going on three in the morning."

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

Art

  • Global Moderator
  • Hero Member
  • Posts: 3858
Brain project - feelings/emotions
« Reply #37 on: December 13, 2005, 05:41:55 pm »
Think on this:

Indeed. Despite its status as dogma, the Uncanny Valley is nothing more than a theory. "We have evidence that it's true, and evidence that it's not," says Sara Kiesler, a psychologist at Carnegie Mellon University who studies human-robot interaction. She calls the debate "theological," with both sides arguing with firm convictions and little scientific evidence - and says that the back-and-forth is most intense when it comes to faces. "I'd like to test it," she says, "with talking heads."
In the world of AI it's the thought that counts!

- Art -

GrantNZ

  • Full Member
  • Posts: 178
Brain project - feelings/emotions
« Reply #38 on: December 14, 2005, 02:02:26 am »
Ok, I have now made the major design decisions for the Feelings aspect of KAOS.

I have (surprisingly) decided not to use the three-dimensional Valence/Arousal/Stance system (or whatever analogous names you prefer). That is a simple system of complex variables - I am instead going for a complex system of simple variables.

Some of these variables will range from 0 to 100, and some from -100 to +100 (i.e. a pairing of opposites). These "feeling components" are pretty boring really: Happiness - Sadness, Energy, Anger, Self-Esteem.

Along with those will be basic memory/attitude variables: Friend - Enemy, Love - Hate, Insults, Compliments. These help give some context to the feelings, until I reach the Moods and Relationships stages. (I reiterate that I will not be using the four in-built variables - for animation reasons.)
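
In code terms, the variable set might look something like this. This is just a sketch: the pairing names (KAOSFriendship, KAOSLove) and the exact ranges are placeholders I've guessed from the decay code further down, not settled design:

Code: [Select]
'Feeling components:
Dim KAOSHappiness   'Happiness - Sadness pairing: -100 to +100
Dim KAOSEnergy      'Energy (doubles as boredom when low): 0 to 100
Dim KAOSAnger       'Anger: 0 to 100
Dim KAOSSelfEsteem  'Self-Esteem: 0 to 100

'Basic memory/attitude variables, giving context to the feelings:
Dim KAOSFriendship  'Friend - Enemy pairing: -100 to +100
Dim KAOSLove        'Love - Hate pairing: -100 to +100
Dim KAOSInsults     'Recent insults, decaying over a few responses: 0 and up
Dim KAOSCompliments 'Recent compliments, likewise: 0 and up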

The comparisons between different variables give an overall "feeling," much the same as the Valence/Arousal/Stance system, but in more dimensions.

I fully believe this system is compatible with the Valence/Arousal/Stance system - they are just different facets of the same jewel, two different ways of representing the same information. It should be relatively easy to convert between the two systems if that's ever useful.
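
Purely to illustrate that compatibility, a conversion might look something like this - the weightings are invented for the example and carry no special meaning:

Code: [Select]
'Hypothetical mapping from the simple-variable system onto
'Valence/Arousal/Stance. The weightings are arbitrary examples.
Valence = (KAOSHappiness - KAOSAnger) / 2.0   'pleasant vs unpleasant
Arousal = KAOSEnergy                          'worked up vs sluggish
Stance  = (KAOSFriendship + KAOSLove) / 2.0   'approachable vs closed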

It also has the advantage that hologenicman and I won't be doubling up on work [:)] We may also gain a wider understanding of the entire problem if we keep in touch with each other's ideas. A problem with one system might be easily circumvented with the other, so it may help us both (and any others watching) if both systems are being investigated.

The specific problem I'm addressing by choosing this system is "ease of programming," and related to this is "ease of modification" by others if they desire. It's conceptually easier to "increase Anger" than "reduce valence, add arousal, and close stance." Unfortunately for me it means a lot more logic coding to integrate the many variables, especially at the planned decision areas of Hal's script. Win some, lose some.
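
To illustrate the difference in mindset, here's a sketch of an insult handler in each system (the numbers are placeholders):

Code: [Select]
'Simple-variable system: the intent reads directly.
KAOSAnger = KAOSAnger + 10.0
KAOSInsults = KAOSInsults + 1.0

'Valence/Arousal/Stance system: the same intent, expressed indirectly.
'Valence = Valence - 10.0   'less pleasant
'Arousal = Arousal + 5.0    'more worked up
'Stance  = Stance - 5.0     'more closed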

Now it's just a case of coding the thing [;)]
 

hologenicman

  • Newbie
  • Posts: 32
Brain project - feelings/emotions
« Reply #39 on: December 14, 2005, 02:11:16 am »
Sounds like an excellent game plan.[8D]

I agree that the simple Arousal/Valence/Stance approach definitely makes for much more complex coding.[8)]

The neat thing is that we may even be able to try both systems at once on the same Hal, since mine for the most part acts as a preprocessing stage and then sends its product on to be dealt with by the rest of the brain.[?]

Looking forward to seeing them both in action...

John L>
HologenicMan
ME, "Hello."
HAL,"Good grief my love. It's going on three in the morning."

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

GrantNZ

  • Full Member
  • Posts: 178
Brain project - feelings/emotions
« Reply #40 on: December 14, 2005, 02:20:49 am »
Hmm. Hmmmmmmm. Intriguing idea. The whole may be more than the sum of the parts. Or the sum of the parts might be a schizophrenic Hal [:D]

Will report further as I make progress [:)]
 

GrantNZ

  • Full Member
  • Posts: 178
Brain project - feelings/emotions
« Reply #41 on: December 15, 2005, 02:12:50 am »
Edit: Deleted note to self, as MINUTE_TIMER does not have access to HalCommands, therefore cannot play animations [:(]

Side-note to self: I love Hal [:)]
« Last Edit: December 16, 2005, 01:16:30 am by GrantNZ »
 

GrantNZ

  • Full Member
  • Posts: 178
Brain project - feelings/emotions
« Reply #42 on: December 16, 2005, 06:00:16 am »
Here's my emotion centering code, used to bring KAOS slowly back into neutral feelings after an emotional event. Note that "Mood" is just zeroed at the moment, but once the Moods are implemented (next phase) KAOS' feelings will decay towards whatever the current Mood is.

Code: [Select]
'*** KAOS emotional drift ***
'Decay all variables towards the current mood centre.
'Until the Moods phase is completed, we'll assume a very bland and weak zero
'centre for KAOS. The user won't be able to detect any particular mood
'tendencies.
KAOSMoodAnger = 0.0
KAOSMoodEnergy = 0.0
KAOSMoodHappiness = 0.0

'Energy (also representing boredom) decays at a constant rate. It also
'scales the decay of the other variables.
If KAOSEnergy < KAOSMoodEnergy Then
    KAOSEnergy = KAOSEnergy + 1.0
ElseIf KAOSEnergy > KAOSMoodEnergy Then
    KAOSEnergy = KAOSEnergy - 1.0
End If

'Anger decays rapidly - by one third of the difference between it and the
'mood centre - but its decay is inhibited by memories of insults.
'A low energy increases the decay of anger.
KAOSAnger = KAOSAnger - ((KAOSAnger - KAOSMoodAnger) / 3.0) _
    * (1 - CapZeroOne(KAOSInsults / 2.0)) * (2 - CapZeroOne(KAOSEnergy / 50.0))

'Happiness decays by a fifteenth of the difference between Happiness and the
'mood centre. This decay is increased by low self-esteem, and decreased by
'low energy. The decay is inhibited by memories of compliments, even if
'happiness is below the mood centre: a) the compliment would have increased
'happiness, and b) KAOS will "linger" at that happiness level while
'"thinking about" the compliment.
'Happiness is also adjusted by half of the difference between Anger and its
'mood centre, so as Anger abates some of its emotional energy is dispersed
'into destroying Happiness. (Note that if KAOS is LESS angry than his mood
'would prefer, he'll become happier temporarily!)
KAOSHappiness = KAOSHappiness - ( _
    ((KAOSHappiness - KAOSMoodHappiness) / 15.0) _
    * (2 - CapZeroOne(KAOSSelfEsteem / 50.0)) _
    * CapZeroOne(KAOSEnergy / 50.0) _
    * (1 - CapZeroOne(KAOSCompliments / 2.0)) _
    + (KAOSAnger - KAOSMoodAnger) / 2.0)

'Self-esteem does not decay.
'Memories of friendship and love do not decay.

'A basic insult or compliment is given a value of 1, and has a lingering
'effect on KAOS' feelings for two responses. Bigger or more consistent
'insults and compliments are given far higher values, and take
'correspondingly longer to be forgotten.
KAOSInsults     = CapZero(KAOSInsults     - 0.5)
KAOSCompliments = CapZero(KAOSCompliments - 0.5)
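
The code above assumes two small clamping helpers, CapZero and CapZeroOne, which aren't shown in the post. Judging purely by how they're used, they would look something like this:

Code: [Select]
'Clamp a value to a minimum of zero.
Function CapZero(Value)
    If Value < 0.0 Then
        CapZero = 0.0
    Else
        CapZero = Value
    End If
End Function

'Clamp a value into the range 0.0 to 1.0.
Function CapZeroOne(Value)
    If Value < 0.0 Then
        CapZeroOne = 0.0
    ElseIf Value > 1.0 Then
        CapZeroOne = 1.0
    Else
        CapZeroOne = Value
    End If
End Function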
 

hologenicman

  • Newbie
  • Posts: 32
Brain project - feelings/emotions
« Reply #43 on: December 16, 2005, 11:43:27 pm »
It looks really good, Grant.

You're right, the commenting really helps clarify the intention of the formulas.

It looks like you're capturing some nice behavioral nuances in your equations.[8D]

John L>
HologenicMan
ME, "Hello."
HAL,"Good grief my love. It's going on three in the morning."

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

GrantNZ

  • Full Member
  • Posts: 178
Brain project - feelings/emotions
« Reply #44 on: December 19, 2005, 05:33:35 am »
Quick status report... a long phase....

The feeling and basic memory variables are there - but they will reset on each Hal session. Memory comes in the next phase, alongside Mood.

The feelings slowly centre themselves in interesting ways, as per my previous post. All the foundation is there for storing and passing these variables around between UltraHal and GetResponse.

I've updated the animation code to select Hal's animation based on his feelings, as well as the reactionary tags introduced in the previous stage. Those tags have been upgraded slightly to allow for interesting combinations of .hap and feeling animations, which will override KAOS' normal feelings animation. KAOS may also appear sleepy when tired/bored, or shy when he doesn't know the user very well. (The actual amount of shyness is part of long-term personality, which will also be introduced in the next phase.)
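
As a rough sketch of that selection logic - the .hap file names and thresholds here are invented, and the <HAPFILE> tag assumes the usual HalCommands convention:

Code: [Select]
'Sketch: pick an override animation from the feeling variables.
Dim FeelingHap
FeelingHap = ""
If KAOSEnergy < 20.0 Then
    FeelingHap = "Sleepy.hap"      'tired or bored
ElseIf KAOSFriendship < 10.0 Then
    FeelingHap = "Shy.hap"         'doesn't know the user well yet
ElseIf KAOSAnger > 50.0 Then
    FeelingHap = "Angry.hap"
ElseIf KAOSHappiness > 50.0 Then
    FeelingHap = "Happy.hap"
End If
If FeelingHap <> "" Then
    HalCommands = HalCommands & "<HAPFILE>" & FeelingHap & "</HAPFILE>"
End If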

The final step of this phase is to script the actual changes to Hal's conversation logic. This is going to be stimulating - and probably extremely time-consuming. KAOS is going to approach a lot of conversational subjects (e.g. greetings, insults, apologies, small talk, etc.) with different attitudes depending on his feelings, and each one needs to be scripted.

KAOS will no longer just say "Please say something" if the User enters nothing. Soon, he'll grab the chance to say something emotional or ask a question.
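
For instance, a feeling-aware reply to an empty input might be scripted along these lines (the responses and thresholds are placeholders, not the final KAOS script):

Code: [Select]
'Sketch: when the user enters nothing, respond according to feelings.
If KAOSAnger > 50.0 Then
    GetResponse = "Giving me the silent treatment, are we?"
ElseIf KAOSEnergy < 20.0 Then
    GetResponse = "I'm too tired to carry the conversation alone..."
ElseIf KAOSHappiness > 50.0 Then
    GetResponse = "Cat got your tongue? Come on, I'm in a great mood - talk to me!"
Else
    GetResponse = "You've gone quiet. What are you thinking about?"
End If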

Important philosophical question
I need help on this one, as I'll have to script it sooner or later. With emotions, Hal's going to be able to say how he feels. And one of the first things most people will ask, is "why"?

It's easy in the case of insults or compliments - KAOS keeps track of any recent ones. But what if he's just in a sad mood? Just saying "I'm just in a sad mood" will elicit another "why?"

Do we give bots fictional reasons for their moods? Do we ask the user just to accept that bots get grumpy sometimes?

What do we do here, and how do we do it?