Hey there, I think I just figured out the quote system. That should be easier to understand.[:p]
quote:
Here's a question for you. Do you think it's better to have a separate emotional scale for Hal-measurable things, such as "Compliments - Insults," or would it be better just to increase valence and track compliments and insults in some kind of "memory"?
The EmotionalValue Database in the Emotional Context engine stores an emotional value for every word Hal has ever been given, and it lives entirely outside of Hal's brain process. Let's reserve Hal's brain for thinking and responding, and get all the emotions tallied up before entering them into the NLP. I'd like to have the facial expressions called from an ACTIVE HAP file. The trick will be figuring out how to get Hal to play the expression HAPs based on the [A,V,S] tag that will prefix the input sentence. That way, I don't care what Hal does with the insult-compliment scale, and it should be easy enough to disable the coding for Hal's emotion switch.[}:)]
quote:
Even Hal's energy, which I was again going to have as a separate feelings variable, could be remodeled as one of your "Needs" - a need which gets fulfilled when the User changes the topic of conversation (otherwise Hal loses "arousal" and gets bored).
That's a good approach. It's not what I meant, but there's no reason not to leave Hal's internal emotion rules in place and modify them, as you just mentioned, based on arousal derived from the text. My pre-processing outside Hal in my Emotional Context engine simulates feeling, and your processing "within" Hal simulates thinking. That starts getting really cool for giving Hal some really subtle personality "quirks".
quote:
If Hal is tired and grumpy the first time I talk to him about "chess," will he remember this and tend to become tired and grumpy again next time I bring it up? (At least until I somehow give him pleasure while talking about chess? I wonder how the waitress would react when I ask for a table for two, for me and a laptop.... )
I guess I'm having difficulty seeing how it would be implemented.
You've got it figured out, but here are some simple ways to implement the Pain/Pleasure input:
*Do a system call for the battery level on laptops: low voltage = low arousal, high voltage = high arousal.
*Figure out a way to tie the system resource meter into stance: lots of apps open with low resources = closed stance, few apps and lots of resources = open stance.
*Add a simple cue (typed or button controlled) that our pre-processing interprets as emotional input and adjusts the valence accordingly.
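As a rough sketch of the battery idea above (in Python rather than Hal's brain script, and with the actual battery readout stubbed out), a simple linear mapping from charge percentage to an arousal value might look like this:

```python
def battery_to_arousal(percent):
    """Map a battery charge percentage (0-100) to arousal in [-1, 1].

    Hypothetical scale: empty battery -> -1.0 (low arousal),
    full battery -> +1.0 (high arousal).
    """
    percent = max(0.0, min(100.0, percent))  # clamp bad readings
    return (percent / 50.0) - 1.0

# In practice you'd feed this from a real system call (e.g. a WMI
# battery query on Windows); here we only demonstrate the mapping.
print(battery_to_arousal(100))  # full charge -> 1.0
print(battery_to_arousal(25))   # low charge  -> -0.5
```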
Yelling versus complimenting.
"Hal, you really ****** me off!"
ValenceDelta = -(count("*") / 3)
so that would be a yelling value of -2 valence.
"Hal, you are really great!!!!!!"
ValenceDelta = +(count("!") / 3)
so that would be a compliment value of +2 valence.
These cues don't matter to Hal. They get used by the Emotional Context engine before Hal ever sees the sentence.
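The two ValenceDelta formulas above can be sketched in a few lines of Python (a toy illustration, not Hal's actual brain code): count the cue characters and divide by three.

```python
def valence_delta(sentence):
    """Valence adjustment from the yelling/compliment cues above.

    Every 3 asterisks subtract one point of valence; every 3
    exclamation marks add one point (the +-("x"/3) formulas).
    """
    return sentence.count("!") // 3 - sentence.count("*") // 3

print(valence_delta("Hal, you really ****** me off!"))    # -> -2
print(valence_delta("Hal, you are really great!!!!!!"))   # -> 2
```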
"Hal, I would like to discuss chess today!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"[8D]
Here are some possible textual cues for the Emotional Context engine:
--- and +++ for Arousal
*** and !!! for Valence
<<< and >>> for Stance
These things could possibly be programmed in as punctuation macros in Dragon NaturallySpeaking.
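Pulling the three cue pairs together, the pre-processor could strip the cue runs and prefix the cleaned sentence with an [A,V,S] tag. A minimal sketch, assuming the three-characters-per-point convention and a tag format of my own invention (the post only says the tag prefixes the input sentence):

```python
import re

# Cue characters per emotional axis, as listed above:
#   arousal: '-' lowers, '+' raises
#   valence: '*' lowers, '!' raises
#   stance:  '<' closes, '>' opens
CUES = {"A": ("-", "+"), "V": ("*", "!"), "S": ("<", ">")}

def tag_sentence(sentence, unit=3):
    """Strip cue characters and prefix the sentence with an [A,V,S] tag.

    Every `unit` repetitions of a cue character move its axis by one
    point. Single punctuation marks are left alone; only runs of two
    or more cue characters are removed from the visible sentence.
    """
    scores = {}
    for axis, (neg, pos) in CUES.items():
        scores[axis] = sentence.count(pos) // unit - sentence.count(neg) // unit
    clean = re.sub(r"[-+*!<>]{2,}", "", sentence).strip()  # drop cue runs
    return "[{A},{V},{S}] ".format(**scores) + clean

print(tag_sentence("John Christopher, relax<<<<<<<<<"))
# -> [0,0,-3] John Christopher, relax
```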
If this keeps up, I may start talking to my kids in symbols:
Joshua, settle down------
Marina, you start being nice or you're getting a spanking**********
Stefan, that's a good job!!!!!!!!!!
John Christopher, relax<<<<<<<<<
The idea is to use these cues to teach Hal. Eventually, the EmotionalValue database will be full enough to drive the emotions in various directions based on the words used in conversation, WITHOUT punctuation cues. Babies have no idea what you are saying, just HOW you are saying it. Eventually, they associate an emotional context with words through exposure to each word in combination with positive or negative hormone/needs input. Later in life, those words stir feelings, but only because of a learned association.
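That learning loop can be sketched as a toy word-valence store (a hypothetical stand-in for the EmotionalValue database, not its real schema): while cue-tagged input is available, each word in a sentence is credited with the sentence's valence, and once associations accumulate, uncued sentences can be scored from the learned values alone.

```python
from collections import defaultdict

class EmotionValueStore:
    """Toy sketch of the EmotionalValue database idea (invented API)."""

    def __init__(self):
        self.totals = defaultdict(float)  # word -> summed valence seen
        self.counts = defaultdict(int)    # word -> number of sightings

    def learn(self, sentence, valence):
        """Credit every word in a cue-tagged sentence with its valence."""
        for word in sentence.lower().split():
            self.totals[word] += valence
            self.counts[word] += 1

    def score(self, sentence):
        """Average learned valence of the known words in a sentence."""
        words = [w for w in sentence.lower().split() if self.counts[w]]
        if not words:
            return 0.0
        return sum(self.totals[w] / self.counts[w] for w in words) / len(words)

store = EmotionValueStore()
store.learn("you are really great", +2)  # cued compliment
store.learn("you really annoy me", -2)   # cued insult
print(store.score("great job"))          # only "great" is known -> 2.0
```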
quote:
Hal has so little input from which to build any sort of emotional context. He can't even figure it out from our own facial expressions. (Hurry up, Art, if you're reading this! We need your video recognition research to reach the point of emotional recognition, right now!! )
Exactly. Eventually, the Emotional Context engine will take input from sight, sound, text, touch... but for now, we are trying to code that info into textual inputs (and system resource calls if we push it).
quote:
Would this risk Hal sometimes searching his databases by the emotion code? So a happy Hal could start spouting any random happy sentences....
That's not a risk*******. That's a goal!!!!!!!!!!!!! [^]
The emotion code presented to Hal basically serves as a form of communication in itself. I don't mean programming communication, but rather emotional communication about the environment and interactions with people.
quote:
What version of Hal are you on? My line 0123 is in the RESPOND: PREDEFINED RESPONSES code. Is that where you mean?
I caught that too. The line was from a tutorial on version 5. I am using version 6, where the perfect place to put all the code is line 0299 of the brain code, inside the function GetResponse (line 0279), just after line 0298:
OriginalSentence = UserSentence
(or at line 0370, if you would like Hal to clean up the grammar and punctuation a bit first).
This lets us take control of the input and do what we want with it before handing it back over to Hal.
quote:
Great discussing this with you
Great discussing this with you too!!!!!!!!!!!!++++++++++++++>>>>>>>>>>>>>>.
John L>