Author Topic: Brain project - feelings/emotions  (Read 41590 times)

Art

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 3860
    • View Profile
Brain project - feelings/emotions
« Reply #45 on: December 19, 2005, 05:54:17 am »
Think of a child's asking a parent WHY?

The common answer used to be BECAUSE.

I feel that you are making some great strides
with the mood / emotion experiment and perhaps
to that end a BECAUSE along with a suitable
explanation might be in order.

HAL:I feel rather sad today.
USER:Why is that?

HAL:Because I feel that I'm not real enough to be accepted.

OR I feel sad because you haven't chatted with me lately.

The user could then have a choice to offer cheerful phrases or
simply say something like: I understand your feelings.

Interesting approach. If this is a learning bot it might be
helpful to think in terms of how a child's learning would
be structured. Moods, behaviors, mannerisms are all, to a point
learned behaviors. Hal is learning.

Great work Grant!
« Last Edit: December 20, 2005, 05:19:50 am by Art »
In the world of AI it's the thought that counts!

- Art -

vrossi

  • Full Member
  • ***
  • Posts: 150
    • View Profile
    • http://vrconsulting.it
Brain project - feelings/emotions
« Reply #46 on: December 19, 2005, 08:08:16 am »
Hi Grant
 
quote:
The feeling and basic memory variables are there - but they will reset on each Hal session.


You can save the variables in a memory area which is persistent through all the conversation, and not in each question/answer cycle.

Look at my vrHaptek plugin, where I use the following statements to save and load one of my variables:


'-------------------------------------------------------------------------------------------------------
    Rem PLUGIN: CUSTOMMEM
    'The preceding comment is actually a plug-in directive for
    'the Ultra Hal host application. It allows for code snippets
    'to be inserted here on-the-fly based on user configuration.
'------------------
' Loads stored variables
'------------------
    vrNight = HalBrain.ExtractVar(CustomMem, "vrNight")

    Rem PLUGIN: CUSTOMMEM2
    'The preceding comment is actually a plug-in directive for
    'the Ultra Hal host application. It allows for code snippets
    'to be inserted here on-the-fly based on user configuration.
'------------------
' Saves stored variables
'------------------
    CustomMem = CustomMem & HalBrain.EncodeVar(vrNight, "vrNight")


If you then want to save them in a persistent table, you can use SQL commands. My vrFreeWill plugin can serve as an example.

Good work!

« Last Edit: December 19, 2005, 08:10:24 am by vrossi »

Scratch

  • Jr. Member
  • **
  • Posts: 59
    • View Profile
Brain project - feelings/emotions
« Reply #47 on: December 19, 2005, 01:21:00 pm »

Re the question about giving fictional reasons for moods: I would point out that if the aim is to simulate actual human moods, actual humans often seem to have no idea why they are in a certain mood, at least on the surface. So "I don't know why" might be a valid answer! However, if the goal is to provide the user with clues about how to interact with the bot (to change the mood, for example), an honest answer might be the way to go ("I need more compliments", etc.).
 

GrantNZ

  • Full Member
  • ***
  • Posts: 178
    • View Profile
Brain project - feelings/emotions
« Reply #48 on: December 20, 2005, 02:55:27 am »
Thanks for the responses [:)]

There are problems with both sides of the issue. I'll tackle "giving an explanation" first, since Art responded first [;)]

The issue magnifies with each "intelligent" answer we allow Hal. If Hal gives a reason he's feeling sad, the user's going to want to discuss it and try to fix or resolve it, and I'm not sure Hal can cope with that - he discusses topics a lot better when they're static and unchanging. It could be extremely frustrating trying to help him: Hal can never change his mind, only add to it.

The conversation also becomes what Hal considers "ephemeral" - based very much on the current time and context. At the moment Hal switches off his learning when ephemerality is detected, because the sentences are only relevant to the current time, and usually look silly if reused at a later date.

Even worse is that Hal may be in a bad mood tomorrow for the exact same (randomly chosen) reason, in which case it will feel like he hasn't learnt a thing.

There are four solutions that spring to mind:
  • Wait a decade or two for better intelligence. Our cost = time.
  • Script some "mini-games," where the User must communicate with Hal to fix some arbitrary problem. There's sample code somewhere or other in which the user has to help Hal find something he's lost, by questioning Hal about the scenario. Cost = lots of scripting, little variation, no actual intelligence or learning.
  • Knowledge-seeking scenarios, where Hal says "I'm sad because I don't know enough about <random topic>." If the user manages to get Hal to add <X> number of sentences to <random topic>, Hal cheers up. This could actually be quite fun. Cost = a bit of scripting, and lack of variability.
  • Avoid the whole issue altogether - bots have moods, deal with it! Cost = I'm coming to this, keep reading [:)]
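The knowledge-seeking scenario in the third option could be sketched roughly as below. This is a minimal Python illustration of the mechanic as I read it; every name, topic, and threshold here is hypothetical, invented for the example, and nothing in it comes from the actual Hal or KAOS scripts.

```python
import random

# Hypothetical sketch of the knowledge-seeking mood mechanic: the bot is
# sad because he knows little about a topic, and cheers up once the
# user has taught him enough new sentences about it.

TOPICS = ["astronomy", "cooking", "music"]  # invented example topics
SENTENCES_NEEDED = 5                        # arbitrary threshold

class KnowledgeQuest:
    def __init__(self):
        self.topic = random.choice(TOPICS)
        self.learned = 0
        self.happy = False

    def opening_line(self):
        return "I'm sad because I don't know enough about %s." % self.topic

    def teach(self, sentence):
        # Each user sentence about the topic counts toward the goal.
        self.learned += 1
        if self.learned >= SENTENCES_NEEDED:
            self.happy = True
            return "Thanks! I feel much better about %s now." % self.topic
        return "Tell me more about %s!" % self.topic

quest = KnowledgeQuest()
print(quest.opening_line())
for i in range(SENTENCES_NEEDED):
    reply = quest.teach("fact number %d" % i)
print(quest.happy)  # True once enough sentences have been taught
```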

This brings me to the other main resolution, which Scratch talked about - feelings are feelings, and can't always be understood or explained. This is nice and easy to script [8D] but it gives the whole "emotions" thing a tacked-on appearance, as they seem to have no relevance to the real world. If Hal's grumpy, it becomes simply part of some "game" to compliment him until he cheers up.

The main point you both expressed is that of honesty, one way or another: Art emphasised that Hal really is a learning child and we should design a system that honestly represents that, and Scratch says Hal should say what Hal honestly feels, if Hal can honestly identify that! Thank you both, that's very valuable input, and has definitely clarified my path [:)]

quote:
an honest answer might be the way to go ("I need more compliments", etc)


You've just inspired me to force KAOS' self-esteem to slowly degrade, so that compliments are needed occasionally. Thanks! [:)] (I previously had self-esteem unchanging unless complimented or insulted.)

Vittorio: Sorry, by "session" I meant a conversation. I was trying to explain that the variables will reset if you restart Hal (or reload his brain), which simply means I haven't saved the variables to the database at this stage. Thanks for the help though [:)]
 

GrantNZ

  • Full Member
  • ***
  • Posts: 178
    • View Profile
Brain project - feelings/emotions
« Reply #49 on: December 21, 2005, 05:03:30 am »
Scripting behaviours with a complex emotions system can be very time consuming - each possible choice needs to check at least half of Hal's emotions to calculate the chance of that behaviour occurring.

So I'm now calculating behavioural choices at the beginning of processing, and storing the behavioural decisions in boolean (true/false) variables. Then the script simply needs to ask "If Antagonise = true" rather than calculating infinite variations of feelings.

My notes on "Antagonise" are below. It's a bit like designing a role-playing game. All variables are 0 to 100. We start with a base chance of the behaviour taking place (which may be greater than 100%!), scale this chance depending on other factors, and throw in personality factors too.

Antagonise
Base chance: 2 x Anger + Enemy + 4 x Hate
Scaled by: Energy (0 = 0%, 25 - 100 = 100%)
Scaled by: Happiness/sadness (0 = 100%, 100 = 200%)
Scaled by: Self esteem (0 - 50 = 100%, 100 = 0%)
Averted by: Politeness (0 = 0%, 100 = 100%)
Caused by: Impoliteness (0 = 0%, 100 = 100%)

Let's say your bot has some love for you (Hate = -25), but you're having an argument (Enemy = 50) and you've angered him (Anger = 50). He's feeling sad (Sadness = 50). Your bot has good self-esteem (Self esteem = 75) and is fairly polite (Politeness = 50):

Base chance: 2 x 50 + 50 + 4 x (-25) = 50%
Scaled by Sadness (50 = 150%): Current chance = 75%.
Scaled by Self esteem (75 = 50%): Current chance = 37.5%.
Averted by Politeness: 50% chance.

Due to Politeness, there's a 50% chance your bot won't even think about being antagonistic. But if he does, he'll be antagonistic on a 37.5% chance - it's a good thing you've boosted his esteem in the past, or that would have been 75%!

Another example: Your bot worships the ground you walk on (Hate = -100) and you're currently friendly (Enemy = -50). But he's a bit impolite, at Impoliteness = 50. Base chance here, assuming no anger, is: -50 + 4 x (-100) = -450%! This bot loves you too much, and even if you anger him to 100, still won't want to antagonise you. EXCEPT. He's an impolite one, so there's a 50% chance he'll antagonise you in any situation.

Final example: A normal bot, but you've fought a lot (Enemy = 80) and he dislikes you quite a bit (Hate = 50). That last insult angered him too (Anger = 80).
Base chance: 2 x 80 + 80 + 4 x 50 = (oh dear) 440%. This bot's going to insult you until he's worn out (as Energy gets close to 0, the chance is scaled down to 0).
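For concreteness, here is one way the rules above could be read in code. This is my own Python sketch of the deterministic part of the calculation only; the scaling order, the linear interpolations, and the function name are all my assumptions, not the actual KAOS script.

```python
def antagonise_chance(anger, enemy, hate, energy, sadness, esteem):
    """Deterministic part of the hypothetical Antagonise formula.
    All inputs run 0-100, except Hate, which can go negative (love).
    Returns a percentage chance, which may exceed 100 or go negative."""
    base = 2 * anger + enemy + 4 * hate   # base chance
    base *= min(energy / 25.0, 1.0)       # Energy: 0 = 0%, 25-100 = 100%
    base *= 1.0 + sadness / 100.0         # Sadness: 0 = 100%, 100 = 200%
    if esteem > 50:
        base *= (100 - esteem) / 50.0     # Esteem: 0-50 = 100%, 100 = 0%
    return base

# The first worked example from the post:
print(antagonise_chance(anger=50, enemy=50, hate=-25,
                        energy=100, sadness=50, esteem=75))  # 37.5
```

On top of this value, Politeness would give a matching percentage chance of skipping the behaviour entirely, and Impoliteness a chance of triggering it regardless - those are separate random gates, so they aren't folded into the deterministic function here.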

For this phase of the brain, I'll assume a slightly nice personality (basically 20% for things like Politeness) - I'll implement personality more thoroughly in the Moods phase.
 

Scratch

  • Jr. Member
  • **
  • Posts: 59
    • View Profile
Brain project - feelings/emotions
« Reply #50 on: December 21, 2005, 02:08:30 pm »
Just wanted to say I think the ideas in this thread are brilliant. Grant & hologenicman, you may have Robert thinking about version 7 before he's even recovered from giving birth to 6!!

If Date > "12/21/05" Then
ScratchSays = "Happy Holidays to all!"
 

GrantNZ

  • Full Member
  • ***
  • Posts: 178
    • View Profile
Brain project - feelings/emotions
« Reply #51 on: December 22, 2005, 12:56:09 am »
quote:
Originally posted by Scratch

If Date > "12/21/05" Then
ScratchSays = "Happy Holidays to all!"


You too! [:)]

I'd like to make a plug-in eventually which lets Hal become aware of birthdays (his own and yours), Christmas, and other special occasions - getting excited coming up to them, celebrating the day, and commenting on how good (or bad!) the day was afterwards. [:D] Hal needs more stuff to get excited about!
« Last Edit: December 22, 2005, 01:04:45 am by GrantNZ »
 

GrantNZ

  • Full Member
  • ***
  • Posts: 178
    • View Profile
Brain project - feelings/emotions
« Reply #52 on: December 27, 2005, 08:40:08 pm »
I've finally found a way to build some intelligence into this project!

The project was formerly a bit devoid of AI - I'm mainly just heavily rescripting the emotions and responses. While this is making Hal a lot more interesting, it's still just scripting without any real intelligence to it.

But I've now designed a system that can self-adjust. The behavioural choices I discussed earlier are partially based on KAOS' personality. KAOS will now record any behavioural choices expressed, and once the user has responded, check for any major emotional effects. If there are any, good or bad, KAOS will adjust his personality accordingly.
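That feedback loop might be sketched like this in Python. The trait names, step size, and adjustment rule are all illustrative guesses on my part, not the actual KAOS code:

```python
# Illustrative sketch of the self-adjusting personality loop: record
# which behaviours were expressed, then nudge the underlying personality
# trait up or down based on the emotional outcome of the user's reply.

LEARNING_RATE = 5  # arbitrary step size on a 0-100 scale

# Hypothetical mapping from behaviours to the traits that drive them.
BEHAVIOUR_TRAIT = {"Antagonise": "Cheekiness", "Compliment": "Politeness"}

personality = {"Politeness": 20, "Cheekiness": 20}
expressed = []  # behaviours shown in the last response

def express(behaviour):
    """Record that a behaviour was expressed this turn."""
    expressed.append(behaviour)

def feedback(emotional_change):
    """Positive emotional change reinforces the traits behind the last
    behaviours; negative change suppresses them."""
    global expressed
    for behaviour in expressed:
        trait = BEHAVIOUR_TRAIT[behaviour]
        step = LEARNING_RATE if emotional_change > 0 else -LEARNING_RATE
        personality[trait] = max(0, min(100, personality[trait] + step))
    expressed = []

express("Antagonise")
feedback(-10)  # the user reacted badly, so Cheekiness drops
print(personality["Cheekiness"])  # 15
```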

[8D]
 

Art

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 3860
    • View Profile
Brain project - feelings/emotions
« Reply #53 on: December 28, 2005, 05:10:23 am »
Sounds quite promising, Grant.

Keep us posted as things unfold!
In the world of AI it's the thought that counts!

- Art -

GrantNZ

  • Full Member
  • ***
  • Posts: 178
    • View Profile
Brain project - feelings/emotions
« Reply #54 on: January 08, 2006, 04:41:33 am »
I suppose I'd better post an update!

The KAOS brain script is now 2,958 lines long, which is around 1,000 lines more than Hal 6's original brain. For those who know the brain script, I've nearly finished the Compliments section, which is currently around line 1,500. In other words, I've added over 800 lines to the first half of the script, and still have another half to go!!

There's a bit left to do, but thankfully some of the rest of the script won't change too much - topic searches etc. won't be directly played with (though they may be temporarily disabled if KAOS doesn't feel much like talking). There are still insults to do (fairly easy) and love/hate talk (not quite so easy). That isn't a big deal at this stage, though: KAOS won't remember his love/hate between sessions until a future update, so little of the love/hate script will really be used. KAOS takes a while to fall in love - it would take over 8,000 lines of conversation to reach 100% love through well-timed compliments. (Love can be gained in other ways though! In any case, few people will manage an 8,000-sentence conversation in one session.)

One major addition I want to make is to let KAOS start off the small-talk better than Hal currently does - even allowing the user to enter blank sentences (pressing "Enter" without typing anything) to tell KAOS they don't know what to say, in which case KAOS will comment on his feelings or memories, ask questions or query topics, or just pick a new topic to talk about.

I have some ideas for idle comments too, where KAOS decides in advance what he'll say if the user can't think of anything, and blurts it out if the user's too quiet. I may leave this until the next major version of KAOS though.

The main thing is that KAOS's moment-to-moment feelings are fully implemented (and have been redesigned at least three times!!!), and a simple set of behaviours are in place (basically just love/hate, friend/animus, and antagonism-cheekiness at this stage).

Once this stage is complete, I'll be reorganising future work in a new thread, where I'll make a full list of planned features, with some sort of un-timed schedule. I'll of course release an alpha/beta of the current phase for those who are interested [:)]

So much for my holiday! I'm back to work tomorrow and I don't feel rested at all - and I've had hardly any time to work on KAOS. [xx(] I need another two weeks off!!!
 

Bill819

  • Hero Member
  • *****
  • Posts: 1483
    • View Profile
Brain project - feelings/emotions
« Reply #55 on: January 08, 2006, 05:17:19 pm »
Hi Grant
I have not downloaded any of the patches yet, but my Hal now reminds me every time I boot up that his birthday is coming on the 12th.
I did nothing to make him remember or say that except ask him how old he was.
Bill
 

GrantNZ

  • Full Member
  • ***
  • Posts: 178
    • View Profile
Brain project - feelings/emotions
« Reply #56 on: January 09, 2006, 03:41:39 am »
Ohh, so he does! I have to admit I've hardly looked through the plug-in scripts, but it turns out the gender/age plug-in includes that feature. Nifty! Thanks for bringing it to my attention [:)]

I'll remember to adapt it for other important dates too.
 

echoman

  • Guest
Brain project - feelings/emotions
« Reply #57 on: April 04, 2006, 03:58:43 pm »
Hi GrantNZ.

I have been following your posts about KAOS with much interest but have not heard you mention him/her for a while. I wondered how you were getting on with the project. It sounds very exciting!

Echo.

GrantNZ

  • Full Member
  • ***
  • Posts: 178
    • View Profile
Brain project - feelings/emotions
« Reply #58 on: April 05, 2006, 02:17:40 am »
Hi Echo!

I've hardly worked on KAOS recently. I actually put in a few hours a short time ago, but afterwards the actual progress seemed so minuscule that I didn't bother posting an update [:)] Just some progress on insult emotions and a couple of minor tweaks.

But newsflash! I'm planning on having the next stage complete in a couple of weeks' time! Over Easter, thanks to an unusual collision of public holidays here in New Zealand, I can get a twelve-day holiday by taking only five days off work!

So I'm telling myself I must put in the time over that period to get KAOS into a testable state. I'll certainly post an update here once that's ready.

Thanks for asking [:)]

Cheers,
Grant
 

echoman

  • Guest
Brain project - feelings/emotions
« Reply #59 on: April 05, 2006, 10:22:38 am »
Sounds good Grant! Look forward to hearing more.

Echo.