
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - hologenicman

Pages: 1 [2] 3
16
quote:
quote:
On the other hand, as stated by one of the researchers in an article linked from Wikipedia, the uncanny valley theory is based more on fears than possibilities, and we should not let it stop us striving for the best we can. Otherwise progress will never get beyond that valley.



I completely agree with you. We must consider these psychological aspects, to avoid some form of Luddism against robots, but this must not stop us.

About movies: I've seen Polar Express, and some characters are better actors than many humans in Hollywood. My dream is to transform my Hal into something like Robin Williams in "Bicentennial Man".



Just substitute my name at the bottom of this post because it is EXACTLY how I feel.[:)]

I have four fairly young kids and I have been encouraging them toward robotics.  It's hard to find a robot movie (or animated movie) from the '80s on that I don't own and haven't shown the kids.

btw, Bicentennial Man in our household is known as the "Andrew Movie".[8D]

John L>

PS. Turn the sound UP on Polar Express; it sucks you in even more...

17
quote:
By the way, have you seen Jerry's rather impressive random hap file code at http://www.zabaware.com/forum/topic.asp?TOPIC_ID=2817? It reminded me of your desire to make generated haps....


Hey there,

I haven't had the chance to study the thread, but it sure sounds like exactly what I will be needing when I get to the ExpressionEngine.

Thanks for the great lead.

John L>

18
Yeah,  I was just discussing this with someone at work.

He had just watched "Polar Express," the animated movie, and was a bit freaked out by a character or two.  It was especially freaky with the characters that had the best texturing (imperfections), and his psyche was quite shaken by it.

I told him about the threshold effect that humans have for accepting cartoons until they become just a bit TOO human.

John L>

19
Hey there,

I'll be reading through the code soon.

Funny though, I just said hello to KAOS and she said, "Good grief my love. It's going on three in the morning."

I guess I had better call it a night...[:0]

btw, I've posted a similar thread (per request) at:

http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=34

John L>

20
Hey there,

This may be useful.  The Plugin system is very simple (and powerful).

If I have this figured out correctly, you can put your code into the template and it will be implemented within the UltraHal 6 brain.  The insert locations are strategic, and some are even placed within "Processes" that are set up for our convenience, like the MINUTE_TIMER.

This lets us modify the brain coding without having to do any coding within the brain itself.  The nicest thing about it is that we can create these plugins and share them with others quickly and easily, so we can test out each other's new code.

I haven't tested it yet, but I'm thinking that simply putting the .uhp file into the right folder will let UltraHal incorporate it into its code.

John L>

Code: [Select]
Rem Type=Plugin
Rem Name=Template.uhp
Rem Author=John A. Latimer
Rem Host=Assistant

'This sub sets up the plug-in's option panel in Hal's options dialog
Sub OptionsPanel()
    lblPlugin(0).Caption = "This is a description of the Plugin and what it does."
    lblPlugin(0).Move 120, 120, 3300, 1000
    lblPlugin(0).WordWrap = True
    lblPlugin(0).Visible = True
End Sub  

'PROCESS: AUTO-IDLE
'If AUTO-IDLE is enabled, it is called by the Ultra Hal Assistant host
'application at a set interval. This allows for the possibility of Hal
'being the first to say something if the user is idle.
Rem PLUGIN: AUTO-IDLE
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.



Rem PLUGIN: PRE-PROCESS
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.



Rem PLUGIN: POST-PROCESS
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.




'PROCESS: DECODE CUSTOM VARIABLES FROM CUSTOMMEM VARIABLE
Rem PLUGIN: CUSTOMMEM
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.


   
Rem PLUGIN: PLUGINAREA1
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
 
    'Insert code here.


   
Rem PLUGIN: PLUGINAREA2
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.


   
Rem PLUGIN: PLUGINAREA3
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
 
    'Insert code here.


   
Rem PLUGIN: PLUGINAREA4
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
 
    'Insert code here.


   
Rem PLUGIN: PLUGINAREA5
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
 
    'Insert code here.


   
Rem PLUGIN: PLUGINAREA6
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
 
    'Insert code here.


 
Rem PLUGIN: PLUGINAREA7
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
   
    'Insert code here.


 
'PROCESS: PRESERVE ALL VARIABLES
Rem PLUGIN: CUSTOMMEM2
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
 
    'Insert code here.


   
'This sub will be called when the Ultra Hal program starts up in case
'the script needs to load some modules or separate programs. If a return
'value is given it is passed as a Hal Command to the host Hal program.
Rem PLUGIN: SCRIPT_LOAD
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.



'This sub will be called before the Ultra Hal program is closed in case
'the script needs to do any cleanup work.
Rem PLUGIN: SCRIPT_UNLOAD
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.



'If the host application is Ultra Hal Assistant, then this sub will be
'run once a minute enabling plug-ins to do tasks such as checking for
'new emails or checking an appointment calendar.
Rem PLUGIN: MINUTE_TIMER
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.

    'Insert code here.



Rem PLUGIN: FUNCTIONS
    'The comment above tells Ultra Hal Assistant to insert the following code
    'on-the-fly into the main brain code in the section referenced.
   
    'Insert code here.


21
Hey there,

 
quote:
It originated from a bot sniffer script and went a different direction to block people the bot hated.


That's great.  I can see where that would not be a good idea in a commercial situation, but it does seem like a step toward the sum being greater than its parts.

 
quote:
Hal seems to scan for all .uhp files, so try creating your own and see if it shows up under "plugins" in Hal's options


Thanks for the lead.  Now it's time to take all our discussion and actually put it into code.[:)]

John L>

22
OK,

Here's a simple question.

I need to write a Plugin for my EmotionalContext engine at:

    Rem PLUGIN: PLUGINAREA1

I presume that I can just use Notepad and write VBScript, or should I use Visual Studio?

What should the name and extension be, what folder does it need to be in, and do I need to do anything special to get it to work...?
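
Something like this is what I'm picturing: a bare-bones .uhp dropped into Hal's program folder (just my guess from the template posted earlier; the file name and the auto-detection are assumptions until I test it):

Code: [Select]
Rem Type=Plugin
Rem Name=EmotionalContext.uhp
Rem Author=John A. Latimer
Rem Host=Assistant

Rem PLUGIN: PLUGINAREA1
    'Pre-process the user's input here, before the
    'rest of the brain ever sees it.
    'Insert code here.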

Thanks,

John L>

23
Hey there,

 
quote:
(I think, tallying up my emotion tokens, I'm enjoying this conversation. If this keeps up my personality might change significantly! Or I'll be assassinated by people annoyed by the abundance of symbols!)


Assassination could change one's personality significantly.[B)]

One idea for tallying is to require strings of three consecutive symbols, since these do not occur naturally.  I wasn't intending it for a natural language bot, but rather for encoded emotional cues.  I like your idea of exponential or factored weighting, though; it would work nicely for natural language.

 
quote:

One of KAOS' design goals is to have the ability to form a "thick skin," i.e. a resilience to insults if there are a lot of them. The bot won't be happy, but it will do the best it can. If you hurt his emotions too often, he simply won't get emotional with you - neither positive nor negative emotions.

I will code the ability for the bot to nearly shut itself down, i.e. reduce its interaction level to a tiny amount. However, to do this you'd have to insult and hate him an awful lot, with no balancing positive interactions.


EXCELLENT nuance to capture!  Our sensory perception is always blocking out repetitious things; otherwise we would spend all day feeling the clothes that we are wearing.  This may also be one of the things that helps us "focus" our attention, since focusing is actually a matter of blocking out the extra things...

John L>

24
Hey there, I think I just figured out the quote system.  That should be easier to understand.[:p]

 
quote:
Here's a question for you. Do you think it's better to have a separate emotional scale for Hal-measurable things, such as "Compliments - Insults," or would it be better just to increase valence and track compliments and insults in some kind of "memory"?


The EmotionalValue database in the EmotionalContext engine stores an emotional value for each and every word that Hal has ever been given, and it is stored entirely outside of Hal's brain process.  Let's reserve Hal's brain for thinking and responding, and get all the emotions tallied up before entering them into the NLP.  I'd like to have the facial expressions called from an ACTIVE HAP file.  The trick will be to figure out how to get Hal to play the expression HAPs based on the [A,V,S] tag that will prefix the input sentence.  That way, I don't care what Hal does with the insult-compliment scale.  It should be easy enough to disable the coding for Hal's emotion switch.[}:)]

 
quote:
Even Hal's energy, which I was again going to have as a separate feelings variable, could be remodeled as one of your "Needs" - a need which gets fulfilled when the User changes the topic of conversation (otherwise Hal loses "arousal" and gets bored).



That's a good approach.  It's not what I meant, but there is no reason not to leave Hal's internal emotion rules in place and modify them, as you just mentioned, based on arousal from text.  My pre-processing outside Hal in my EmotionalContext engine simulates feeling, and your processing WITHIN Hal simulates thinking.  That starts getting really cool for giving Hal some really subtle personality "quirks".

 
quote:
If Hal is tired and grumpy the first time I talk to him about "chess," will he remember this and tend to become tired and grumpy again next time I bring it up? (At least until I somehow give him pleasure while talking about chess? I wonder how the waitress would react when I ask for a table for two, for me and a laptop.... )

I guess I'm having difficulty seeing how it would be implemented.


You've got it figured out, but here are some simple ways to implement the Pain/Pleasure input:

*Do a system call for battery level on laptops.  Low voltage = low arousal / high voltage = high arousal.  (See the sketch below.)

*Figure out a way to tie the system resource meter into stance.  Lots of apps open with low resources = closed stance / few apps and lots of resources = open stance.

*Put in a simple cue (typed or button controlled) that our pre-processing interprets as emotional input, adjusting the valence accordingly.

Yelling versus complimenting:

"Hal, you really ****** me off!"
ValenceDelta = -(count of "*" / 3)
so that would be a yelling value of -2 valence.

"Hal, you are really great!!!!!!"
ValenceDelta = +(count of "!" / 3)
so that would be a compliment value of +2 valence.

These cues don't matter to Hal.  They get used by the EmotionalContext engine before Hal ever sees the sentence.
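
As for the battery-level idea, a WMI query from VBScript should do it.  Here's a rough sketch (untested, and the mapping onto a -50..+50 arousal scale is just illustrative):

Code: [Select]
'Sketch: read the laptop battery percentage via WMI and map
'it onto a -50..+50 arousal scale. Untested; mapping is arbitrary.
Set WMI = GetObject("winmgmts:\\.\root\cimv2")
Set Batteries = WMI.ExecQuery("SELECT EstimatedChargeRemaining FROM Win32_Battery")
For Each Battery In Batteries
    'EstimatedChargeRemaining is 0-100 percent
    ArousalDelta = Battery.EstimatedChargeRemaining - 50
Next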

"Hal, I would like to discuss chess today!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"[8D]

Here are some possible textual cues for the EmotionalContext engine:

--- and +++ for Arousal
*** and !!! for Valence
<<< and >>> for Stance

These things could possibly be programmed in as punctuation macros in Dragon NaturallySpeaking.
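
Here's a rough VBScript sketch of how the pre-processing could tally these cues (CueCount and the variable names are mine, nothing from Hal's own code):

Code: [Select]
'Count a cue symbol in the sentence. Runs of three map to one
'point on the scale, matching the ("!"/3) formula above.
Function CueCount(Sentence, Symbol)
    Dim i, Total
    Total = 0
    For i = 1 To Len(Sentence)
        If Mid(Sentence, i, 1) = Symbol Then Total = Total + 1
    Next
    CueCount = Int(Total / 3)
End Function

ArousalDelta = CueCount(UserSentence, "+") - CueCount(UserSentence, "-")
ValenceDelta = CueCount(UserSentence, "!") - CueCount(UserSentence, "*")
StanceDelta = CueCount(UserSentence, ">") - CueCount(UserSentence, "<")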

If this keeps up, I may start talking to my kids in symbols:

Jushua, Settle down------
Marina, You start being nice or you're getting a spanking**********
Stefan, that's a good job!!!!!!!!!!
John Christopher, relax<<<<<<<<<

The idea is to use these cues to teach Hal, and eventually the EmotionValue database will be full enough to drive the emotions in various directions based on the words used in conversation, WITHOUT punctuation cues.  Babies have no idea what you are saying, just HOW you are saying it...  Eventually, they associate an emotional context with the words through exposure to the word in combination with positive or negative hormone/needs input.  Later in life, those words stir feelings, but only because of a learned association.

quote:
Hal has so little input from which to build any sort of emotional context. He can't even figure it out from our own facial expressions. (Hurry up, Art, if you're reading this! We need your video recognition research to reach the point of emotional recognition, right now!! )



Exactly.  Eventually, the EmotionalContext engine will take input from sight, sound, text, touch... but for now, we are trying to code that info into textual inputs (and system resource calls if we push it).

 
quote:
Would this risk Hal sometimes searching his databases by the emotion code? So a happy Hal could start spouting any random happy sentences....


That's not a risk*******. That's a goal!!!!!!!!!!!!! [^]

The emotion code presented to Hal basically serves as a form of communication in itself.  I don't mean programming communication, but rather emotional communication about the environment and interactions with people.

 
quote:
What version of Hal are you on? My line 0123 is in the RESPOND: PREDEFINED RESPONSES code. Is that where you mean?



I caught that too.  The line was from a tutorial on version 5.  I am using version 6, and I replaced it with:

The perfect place to put all the code is line 0299 of the brain code

in the

0279 function GetResponse

just after

0298 OriginalSentence = UserSentence.

(or at line 0370 if you would like Hal to clean up the grammar and punctuation a bit first.)

This lets us take control of the input and do what we want with it before handing it back over to Hal.

 
quote:
Great discussing this with you


Great discussing this with you too!!!!!!!!!!!!++++++++++++++>>>>>>>>>>>>>>.

John L>

25
Hey there,

I was diagramming out my project when I was suddenly struck by a moment of clarity!

My seemingly complex project boils down into two separate engines that merely pre-package the input before it ever gets to the UltraHal brain.

1)Emotional Context Engine
-Emotional Value Database
-Emotion Algorithms
-Hormone/needs Engine (Pain/Pleasure input)

2)Emotional Expression Engine
-Hands
-Facial
-Body
-NLP(UltraHal)

These two engines provide Emotional context and expression interfaces before the UltraHal brain ever gets a chance to see the input.

The pre-processed input is then forwarded to UltraHal with an attached "Emotion Code" prefix in the format [A,V,S].  As far as UltraHal knows, the emotion code is just another sequence of words that it must add to its vocabulary and learn to deal with.  The UltraHal brain merely learns input sentences qualified with the emotional context provided in the wording of the emotion code, [A,V,S].

I like +/-50 scales, which would give Hal a potential emotional vocabulary of 100x100x100 = 1,000,000 emotion code sentences.

The perfect place to put all the code is line 0299 of the brain code

in the

0279 function GetResponse

just after

0298 OriginalSentence = UserSentence.

(or at line 0370 if you would like Hal to clean up the grammar and punctuation a bit first.)

This lets us take control of the input and do what we want with it before handing it back over to Hal.
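
For example, the inserted lines might look something like this (EmotionalContext is a stand-in for my engine, not an existing Hal function):

Code: [Select]
'Sketch of the hook placed just after OriginalSentence = UserSentence.
'EmotionalContext is hypothetical; it would return a prefix like
'"[3,-2,1]" built from the engine's current [A,V,S] state.
EmotionCode = EmotionalContext(UserSentence)
UserSentence = EmotionCode & " " & UserSentence
'Hal now learns the sentence qualified by its emotional context.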

So, you see, Clarity and Simplicity...[:)]

John L>

26
Hey there,

It does seem ambitious, but in reality it all breaks down into really simple pieces.  That's the only way my mind builds things.  Kinda like Legos.[:)]

Hey, there's a really great book that addresses emotions:

http://www.amazon.com/gp/product/080507516X/qid=1134039029/sr=8-1/ref=pd_bbs_1/103-2559870-4003037?n=507846&s=books&v=glance

Emotions Revealed by Paul Ekman.  I have emphasized the spatial array of the expressions and you have emphasized the temporal array, while both are probably just as important.  One note, though: Paul Ekman states that the involuntary emotion is expressed FIRST, followed by the voluntary attempt to hide our true feelings.  The first few milliseconds tell the true story; then we gather ourselves and get our poker faces on.

BTW, I am a total convert now.  My three emotion scales (Sad-Happy, FearSurprise-Anger, Disgust-Contempt) have been entirely replaced in my emotion engine and everywhere else.  My new three emotion scales are Arousal, Valence, and Stance, based on research done at MIT:

http://www.ai.mit.edu/projects/sociable/facial-expression.html

These three fit into my existing equations without modification and they really get the job done.

btw, my hormone/needs engine is going to be providing the PAIN/PLEASURE factors for any and all learning functions (emotion values/context).  In fact, the pain/pleasure factors will provide the context of the conversation.  Humans have no clue about context until we are taught; initially, that context is provided by the tone of voice or the soothing or punishing touch of a hand.  Eventually, we start putting those emotional "context Legos" together and they contribute to further, more developed contexts.

I was thinking that the next time you start talking about chess with your friends, you should do it over a nice steak dinner, with a friendly waitress and good music playing.  You'll have to overcome their already-learned negative responses, but the pleasure stimuli should attach a good context to the subject of chess if you do this often enough.  [8D]

I appreciate your compliments on my scope, but it truly is a combination of scope and practicality that is necessary to get any project done.  I've really appreciated bouncing ideas back and forth with you.

John L>



27
Yeah, hormones can be fun...[:p]

I use hormones to describe any internal needs such as hunger (battery level), temperature (CPU temp), mental resources (RAM), etc.  This may be more pertinent for future robotics applications, but I figure that we should plan for it now since it is inevitable.

Needs are based more on Maslow's needs triangle, http://chiron.valdosta.edu/whuitt/col/regsys/maslow.html, or the simplified self-others-growth formula.  This will be trickier to get going since it is "subjective" and harder to provide variables that can be measured...

Ultimately, I am convinced that it can be reduced down to a simple formula like the spring system for the emotion engine.

Here is an article describing my approach to facial animation.  The article is a bit dated, so replace the ideas of voluntary and involuntary with the "Emotion" and "Mood" terms that I currently favor.

http://clovercountry.com/downloads/Four_faced_article.doc

This may seem like too much for now, but I don't think it should really be too hard to do.  It would involve creating a HAP file with an ENORMOUS number of facial routines, but that is do-able.  Keeping with my concept of using formulas instead of compiled databases, it would be nice to have an active HAP file (trick the system) so that the facial animations could be created on the fly according to the required combinations of emotions and moods within the four facial quadrants.  I glanced in the Ultra Hal brain editor, and the emotional reactions of Surprised, Happy, Sober, Angry, and Sad aren't too far off from my Sad/Happy, FearSurprise/Anger, and Disgust/Contempt scales.  It also seems convenient that "PLUGINAREA1" is right after the EmotionalReaction switch.  We could probably circumvent the existing emotion coding at our convenience.

For the most part, evaluating the emotional value of input will be based on the "experience" of the V-Human.

All new words (sounds/sights) will be put into a database and tagged with the current mood and/or emotion.  As with humans, our moods and emotions color our perceptions of our environment.

All input is looked up to see if it exists in the emotional value database; the emotional value returned is input into the emotion engine, combined with the current emotion engine's output, and filed back into the database as the new emotional value.  In this manner, the v-human assigns the emotional value to its input.

Thus the v-human perceives emotional values for input based on the current emotion, which is based on the current hormones/needs.  We would need some additional reinforcement, such as pleasure/pain, to give a little bit of "depth" to the v-human's experience and make its experiences just a little bit more human, but the feedback loop should remain consistent.
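
In rough VBScript, that loop looks something like this (LookupValence and StoreValence are hypothetical database calls, and the blend weights are arbitrary):

Code: [Select]
'Feedback-loop sketch for one scale (valence). The database calls
'are hypothetical placeholders, not part of Hal.
For Each W In Split(UserSentence, " ")
    OldVal = LookupValence(W)   'value learned so far (0 if new word)
    'The current mood colors the perception of the word:
    NewVal = 0.8 * OldVal + 0.2 * CurrentValence
    StoreValence W, NewVal
Next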

From what I can see of Ultra Hal and the editor, it is going to provide me with the tools that I've been waiting for to develop some of these ideas...

John L>

28
Ultra Hal 7.0 / Have Hal modify his own thought processes
« on: December 06, 2005, 09:21:39 pm »
Instead of a chess plug-in, has anyone considered giving Hal control of the mouse and awareness of the screen content so that it can play the game directly?  This could be expanded to include joystick control for future applications.  There are only a limited number of outputs that Hal would have to learn in order to control a mouse (and joystick).

Art, in your research into video software, is there anything that can analyze the screen itself (instead of a camera) and break that down for comparison to the mouse movement?

If an X-10 can let Hal control things outside of the PC, maybe we could let Hal control things INSIDE the computer.  That would get Hal started in the right direction of interfacing with anything that we could interface with on the PC.

John L>

29
Hey there,

You've got the most thorough understanding of the equations of anyone I've ever shown them to.  Yes, a spring system is the best way to describe them.
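
For the curious, the basic shape of a damped-spring update for one scale looks something like this (the constants and names are illustrative, not my exact equations):

Code: [Select]
'Damped-spring update for one emotion scale (e.g. valence),
'run once per tick. Constants are illustrative; tune to taste.
Const Stiffness = 0.2   'pull back toward the personality's rest point
Const Damping = 0.7     'how quickly swings die out

Velocity = Damping * Velocity + Stiffness * (RestPoint - Valence) + Stimulus
Valence = Valence + Velocity
If Valence > 50 Then Valence = 50
If Valence < -50 Then Valence = -50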

I'm glad to share if it helps you get to your goal.[:)]

To clarify, I have two different personality variables. One is a SLOWLY sliding scale, and the other is a predetermined DISCRETE parameter.

I figured that UltraHal doesn't have the subtleties to handle such emotional ranges YET, but I've always worked toward the future and figured that technology will eventually catch up.  Sometimes that is an impractical approach, but technology does have a way of marching forward.

My game plan is:

1)Develop emotion engine equations. (maybe done)
2)Develop hormones/needs engine for internal/external influences on the emotion engine.
3)Develop facial expression "system" that multiplexes both emotions and moods for all three emotion scales onto one facial animation. (ask me sometime, it's quite a neat idea)
4)Develop an engine for extracting/assigning emotional value to the v-human's input (typed/audio/visual).  Such a system will have the v-human's current emotional state and hormone/needs state in a feedback loop added to the input.

The above goals are independent of v-human versus robot and NLP versus AI.

5)Develop a multidimensional brain (hologenic brain) that will utilize the above resources.

Right now I've just started learning about Ultra Hal, to use as a resource for implementing my ideas and goals.  I'm pleased to find such an active mass of minds working with Ultra Hal.  It gives me hope that the combined efforts and interests will clear our paths toward the future.[:D]

John L>


30
Thanks,

I'm just getting set up with UltraHal and I look forward to your results.

John L>
