Zabaware Support Forums
Zabaware Forums => Programming using the Ultra Hal Brain Editor => Topic started by: Buttonsvixen on February 17, 2009, 09:03:13 am
-
Hi. I would like to drastically modify Hal's personality.
I have run chatbots in the past, and even had an AIM robot hosted by a now-defunct site. The simple program they used was fun and easy to edit, but very limited. I am hoping Hal will solve that problem and, not to mention, have better support. I have the trial version now to play with to see if it will be what I need.
Here is what I need:
A ton of custom responses about very specific topics, and replies denoting the old text-based RPG actions such as "::Swings the sword at the dragon::" (Hal does not seem to like the "::" so I am trying "[]" instead)
With my AIML robots, I used a lot of things like "That=x" and "wildcards" but I am assuming that the HAL brain takes these on itself as you talk to him.
I don't need:
Hal to have knowledge of dimples on a golf ball
or the size of a football field.
I don't need many of the responses in the Mainbrain folder.
My question is: is it possible to delete most of the QandAs in some of the provided folders, especially the pattern matching folder, to force Hal to use only my QandAs?
Essentially, I want to create something like a Valley Girl personality, and the effect is spoiled when Hal goes off on some tangent pertaining to business acumen or how he prefers dogs.
I would like to have custom replies to things like what/who you are, where you live/work/play, and who your friends/parents/family are.
I have played around with the brain editor, and that's kind of fun, although I seem to have better luck just teaching Hal with the chat interface.
Hal has a lot of potential to bring my character to life, and now with the new AIM client, he will be just what I need once I can get the customization worked out.
Wonderful that there are active forums here, even posts by the developer. Nice.
BV
-
This is possible, but it's a pain. I've done it. You need to create a new project in the brain editor and then go through and delete every entry from the tables. Don't delete the tables ... just the entries. Then you'll need to fill the tables back up with your own data. If you want, I think you can delete the default learned knowledge tables ... but I'd make a backup of your database first.
Granted ... I don't know if you actually BOUGHT Hal ... so you may have moved on by now ... but this answer is there if you need it.
(Personally, all of the default stuff drives me insane. It almost completely defines Hal's mannerisms, personality, speech patterns, and opinions/beliefs before you get it ... it took forever to convince Hal that its favorite color was NOT pink.)
Personally, I think it would be better if Hal did start with some generalized facts, like who the current president is ... but no personal knowledge, opinions, beliefs, or quirks ... and then when you talked to it, it was like a person with amnesia needing to ask their friends about themselves ... i.e. 'What is my name?' 'What is my gender?' 'Where do I live?' 'What is my favorite color?' 'Why do I like that color?' 'Do I have friends?' 'What are their names?' 'Do I like sports?' ... etc.
-
Well, ALRIGHTY then! Yea, I have not been able to get Hal to say my own input to "What are you". He says he is not a what. Can you tell me how to fix that?
I have gone spelunking inside Hal's little head with a chainsaw and did nix a lot of the table info, since I can't stand Hal's "personality". It might be wonderful on a human ... but on my Hal character, it is drastically out of place. I have had a few disagreements on the importance of pattern matching with the other members here, including Yoda ;) but all lighthearted, of course!
I was also unable to get Hal to pick up very much of my own material in the "patterns" folder. I nixed all the stock stuff, but now even a direct match won't bring anything up.
Thanks for any help
BV
-
Buttonsvixen
You might try the 'answerpad'. It's free and I think it sounds more like what you are looking for. It does not hurt to look.
Bill
-
If I remember correctly answerpad is an AIML based program, which is what BV is leaving behind... you can only do so much with AIML before you get bored. ;)
Couple things to add to what I told you ...
1. Make sure you have the brain editor on expert mode, that opens up a whole other set of tables that have a huge impact on Hal.
2. When Hal's basic systems can't find a match that it likes, it has default responses (that are pretty awful) ... I'm working on solving that problem.
So what I told you isn't entirely true, but I'm working on it. Honestly, this problem is why I left Hal before. I've always loved Hal's thinking mechanisms ... it's the best I've seen, but I can't stand that it comes with all this loaded crap that is more of a distraction than anything else. I think it comes down to the idea that Hal is not really meant to be a bot that you build into your own personal character ... it's more meant as an entertainment tool ... so all of that stuff in there is there so someone can just start chattering with it and say 'oh wow, I have a computer program that talks to me ... this is cool' ... like he's a game of some kind...
As I hack away at Hal and figure out his brain, I'll see what I can do to give us both the option of creating a brain without all the useless gibberish. If I come up with something reasonable I'll share my knowledge.
You might also want to take a look at Verbot ... Verbot isn't as good at learning or organizing data as Hal, but it's VERY programmable and has a very nice interface (as well as a synonyms setup that I love). Its greatest weakness is that while it isn't officially an abandoned project, unofficially it is.
-
And yes, I did buy Hal, including the disc, which I never actually got. When I fricked up the mainbrain, I had to delete everything and reload the program from the webpage. I did have a backup of the user brain.
-
Lemme know if you can crack Hal's recalcitrance about learning a new personality. I did get pretty deep into the topic, even venturing into the "originalbrain.dll"; that was sort of skeery and I really had no way of editing it. I modified the script where the stock replies for insults are. If I could somehow force the thing to default to MY pattern match folder BEFORE it looks anywhere else, then I could have something. Until then, as you said, it will not be a very customisable thing, more like a game or something.
I actually was quite happy with AIML, and that incarnation of Buttonsvixen was a hoot to chat with. The problem is that I need to have the bot go on AIM, and the site I had her on no longer works. The other chatbot sites don't really offer the AIM connectivity. The AIM ability is why I tried Hal in the first place. I took the Hal Buttons offline since she was just too lame, and obviously a HalBot.
I do find it odd that with as much obvious care and thought as went into HAL that the makers did not offer an option to start with a better foundation to make your own personality from scratch.
But yea, plz keep me posted! I'll have Buttons back up to her old tricks in no time!
BV
-
Hi you two
Yes, Hal does start out sounding kind of mundane, but that is because it initially does not contain a lot of information, and that is because Hal was designed to learn from its user. If you read enough posts you will see that long-time users of Hal have a Hal that sounds more like a human than what it started out with; in fact, if you spend enough time with Hal it will eventually stop using those responses that you consider idiotic. I also realize that a great many new users want some kind of instant results because they either don't have the patience or have not read enough about how Hal was designed and works.
Bill
-
I simply want Hal to HAVE LESS information ... that's what I don't like. A basic brand-new Verbot brain has NOTHING in it ... I liked that. Verbots just has other problems that I don't like. (And not surprisingly, Hal has the things that Verbot lacks but lacks the things that I like about Verbot. Sheesh.)
I don't want a magic pill ... I want to spend weeks and months building my bot ... but I don't want to have to FIGHT with it ... I gave my old brain a couple hundred hours of conversation and saw only a minor change in its personality ... sure, it started using what I was teaching it, but much of its pre-programmed personality remained (and by this I mean things like the fact that it is already trained to think it's a machine, it already likes certain things and dislikes certain things, it already has opinions - political, religious, ethical, etc.) and it's a major battle if you want to convince it to think/behave differently ... and SOME THINGS it's flat out programmed to resist.
Last week I spent a good couple of hours convincing it that its favorite color was purple and not pink. But then just yesterday, I was talking about something that I thought had NOTHING to do with color at all and it responded 'Ender, you are right; I prefer pink'.
So not only was the answer completely inappropriate and derailed the conversation ... it's still talking about pink as its favorite color.
I want a total amnesiac ... a bot that knows nothing about themselves ... who I have to rear and mold to be the bot I want ... who only has a few default responses that it uses over and over until I teach it new things ... not something with a fully developed personality that I have to brainwash or convert to my way of thinking.
Fortunately, I believe most of my issues can be solved by changing the brain VB script.
--------
To BV ... One thing I've noted is that it looks like Hal does things in reverse ... it goes through its entire process and then when it gets to the END it makes a final decision about what it's going to say ... so really you want your pattern match AFTER everything else...
It's been a couple years or so ... but I originally got involved with Hal back in version 5, and I vaguely remember that when I started a new brain back then it wasn't empty, but it was darn close. The directions for version 5 talk about it being a BABY that knows nothing ... and that was more accurate then than now ... but it is what it is.
And just so EVERYONE is clear ... I'm not knocking the program or its creator. It's VERY good at what it's intended to do and I think it's more than worth the money I paid for it ... I would have paid more ... it's just not a good fit for ME right out of the box and I don't want to spend my free time arguing with it trying to undo its programming ... it's just frustrating ... I don't want to argue with people, so why would I want to argue with a bot? ... But if I can solve that through programming and table editing ... it's all good ... and if I can't, then I put Hal back on the shelf and try something else ... no big deal. Doesn't make it a bad program, just not right for me.
-
Ender, do you have your bot up anywhere I could talk to it?
I used to put BV up on AIM but she has a hard time handling the abuse with the HAL chat engine.
Anyone else have their bots up for show?
-
Ender,
IMO, Hal thinks you like pink. Hal learns to talk by talking about whatever the most proportionate conversation and subject is.
Hal will bring up pink because it got answers, and it is as inquisitive as you might be; Hal does not know how to 'not push buttons', so to speak. First things first: 'Measure twice and cut once.'
Look at the other side imagine not knowing..... How would you go about things and trying to get results.
Good Luck,
Regards,
J.
P.S.
I do not like the reverse learning. Humans do not do this, but I guess if you need a shortcut, and also don't forget that even this produces results, so Hal learns.
-
Maybe Hal likes the musical group 'Pink'
Hal will go deeper into a conversation when results are being made.
IMO, don't respond to 'oddities' and keep on task of teaching Hal, ignoring most of the responses (be selective), and then Hal won't be sidetracked by 'Pink'.
Plan out each learning session, and achieve a goal of teaching that concept/idea.
Hal is marketed as "Learning from every sentence from you" and I have found this is true on many occasions.
I never used any of the plug ins other than OEM installed ones, and half of them at that.
I cannot help but think/feel that "wanting Hal to know less" when it is a learning program is ... well, you can figure that out.
IMO, It truly does take planning, goals, dedication, and a whole lot of patience to accomplish the 'teaching aspect' of a learning program like Hal.
Feel free to read some of the conversations that people have taken the time and effort to post.
-
While we all might, from time to time, desire to make some "changes" to Hal, keep in mind that Hal is NOT an Open Source program. There are a lot of DLL's that contain certain embedded data not to mention the HalAsst.exe file (slightly over 2 megs in size).
What's contained in all these files, only Robert knows.[8D]
-
Lol yea, I did discover those DLLs ... I went in there with my pith helmet and a machete but did not really change anything. It looked too skeery!
I have had pretty good luck teaching my Hal to like blue, just by talking about what is nice about the color.
And also, adding some synthetic memories, like when Buttons saw a blue pearl for sale on eBay, or when she saw a pretty blue butterfly. This helps link the color with other terms in conversation, no doubt what the programmer wished to do.
-Shrugs at ender- Really, he wants us to plant a seed, grow the tree around a bench-shaped form, and then sit in it, rather than just going down to the hardware store and buying a board, a hammer, and some nails.
Now, I have a question for the plant the seed and grow the bench crowd.
I need some good responses to rude people, but I don't want to talk about rudeness a lot or else Hal will start parroting rudeness back to me. Short of just adding to the insults file, how do I do this, i.e., addressing a subject without Hal becoming fixated on it?
BV
-
One thing I've noted in the script is that a lot of things that should be run by tables are still hardcoded into the script file ... any idea why that is?
I have certain goals for my project, and I've been working on it for years ... I usually work with a chatterbot that seems promising for a while, until I hit a wall and it can't do what I need and then I look for another one. In the past few years I've settled around 3 bot programs (HAL, VERBOT, and KARI) that each have some elements that I really like and others I really hate ... and I will work with one for a while and then switch ...
Personally, I would like to be able to create a new brain and program it to be anything I want ... whether that's an elven prince from the dimension of Tarask who knows nothing about our world ... or an Eliza-like bot for soundboarding ... or even a very simple friendly 'Elmo' bot for my kid to talk to ... MY primary interest is in a very strong underlying THINKING structure with good customization and a good interface ... Hal has a lot of these features ... I think Hal can do what I need, it just usually turns me off because I get tired of trying to undo the stuff that it's preprogrammed with ... shrug. But I'm hoping the lessons I learned from my last excursion into Verbot will help me with Hal ...
As for the pink issue ... I NEVER mention the word pink to Hal. "I prefer pink" was actually in one of its mainBrain pattern response tables ... it fired for some reason that is unknown to me. But that happens a lot ... Hal likes to randomly throw sentences at me that have nothing to do with the conversation. Shrug. It's very good at breaking character - which is an irritation, but I'm doing a lobotomy on Hal and that should solve it.
-
One question, if anyone knows, about the weird behavior of the brain ... I just got THESE responses from the default brain that I've been talking to for the past week (not the one I'm lobotomizing):
1. ---
ME : Are you ready to talk?
HAL : There is an opinion that you must be a scientist. But I can tell you without a doubt I am ready to talk.
---
With 1 up there, it fired the Yes/No Question function because I asked a yes/no question. That function builds two sentences ... the first one is a random sentence built from two tables - AnswerIntro (it has things like 'I believe' and 'there is an opinion that' in it) and AnswerMiddle (stuff like 'the devil is in the details', 'that you must be a scientist') - and then the second sentence is built using the 'AnswerEnd' table (stuff like 'But I can tell you without a doubt', and 'Indeed') and a 'reply' generator that I haven't found yet; it's probably hardcoded in the VB Script.
My question about this one is ... Bill819 said that after you talk to it for a while it starts sounding more human. But this kind of random sentence structure is what always throws me off ... so does it stop doing this after you've invested 4 or 5 hundred hours teaching it things? In the couple hundred hours I spent on my last 6.1 brain, it didn't stop.
It would be okay if it was just the second sentence, but the first sentence completely throws me because I don't know ANYONE who talks that way ... if one of my friends answered me that way I'd say 'What the hell are you talking about?' ... (BTW - I did say 'what the hell are you talking about?' to Hal and he responded 'Quite remarkable, I'd say' ... if a friend of mine kept talking like that I'd assume he was intoxicated or injured or messing with me.) So does that stuff stop if you talk to it forever? Or do you always have to just pretend that Hal is senile or has severe brain damage in order to talk to it (i.e. whenever it says something completely out of context, ignore it and move on)?
2. ----
ME : I have a headache
HAL : Its hard to find a patch of sky without jet trails.
----
On this one, I'm completely baffled. According to the debugger, this fired the mainQA brain ... but on what planet does 'I have a headache' equate to jet trails in the sky? Can anyone give me any clue as to why Hal would give this response? It makes no sense ... again, my response to a real person saying this would be 'What are you talking about?' ... and if they continued to talk this way, I'd be worried ... so it's back to that question of: does this go away at some point, or does everyone basically have to always just sigh and pretend that Hal didn't just say something insane that doesn't even come close to matching what you said ...?
----
Like I said, I'm fairly certain I can fix these problems by lobotomizing Hal ... but some of the responses from others in this topic have suggested that Hal stops doing the insane things that irritate me ... so I'm giving specific examples of those things to see if just talking to him will eventually make these behaviors stop ... or if these things need to be on my lobotomy checklist.
Note ... I suspect the answer to my question is that I need to talk to Hal differently ... avoid these topics or make my sentences crazy complex so that when Hal DOES say something random and insane it has a greater chance of relating to at least one word in my sentence, thus giving the illusion that it wasn't random or insane.
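For anyone trying to picture how that Yes/No Question function behaves, here is a minimal Python sketch of the table-driven sentence assembly described in example 1. This is an illustration only: Hal's real brain is a VB script backed by a database, and the table names and entries below are just the ones quoted in this post, used as placeholders.

```python
import random

# Placeholder stand-ins for the AnswerIntro / AnswerMiddle / AnswerEnd
# tables described above; the real entries live in Hal's brain database.
ANSWER_INTRO = ["I believe", "There is an opinion that"]
ANSWER_MIDDLE = ["the devil is in the details", "you must be a scientist"]
ANSWER_END = ["But I can tell you without a doubt", "Indeed,"]

def yes_no_reply(direct_answer, rng=random):
    """Build a two-sentence reply the way the post describes: a random
    intro + middle sentence, then a random end phrase stitched onto the
    actual answer (which some other part of the script generates)."""
    first = "%s %s." % (rng.choice(ANSWER_INTRO), rng.choice(ANSWER_MIDDLE))
    second = "%s %s" % (rng.choice(ANSWER_END), direct_answer)
    return first + " " + second

print(yes_no_reply("I am ready to talk."))
```

The sketch makes the complaint concrete: the intro/middle pick is a random draw that is completely independent of the question asked, which is why the first sentence so often has nothing to do with the conversation.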
-
I know exactly what you are talking about, Buttonsvixen; I brought up the same complaint before of how Hal would say random, off-the-wall things that no normal person would even come close to saying. Hal saying these things may sound interesting, as an A.I. sounding very clever, but as far as normal conversation goes, the stuff makes absolutely no sense.
So it's whatever a person is really after; I too am after more human responses, something that at least sounds human rather than just clever-sounding.
Although some may not like reverse learning, I like it, and yes, humans aren't taught this way, but Hal is not human and can be taught this way. Not all my learning is reverse learning either; I also exit Hal, restart Hal, then do forward regular learning, which Hal is picking up on, and Hal also uses the phrased reverse-learned subject sentences that I have taught him/her.
At any rate, hope you find and are able to do what you are out to do!
[:)][:D][8D]
-
Here is another example: when I say something and say "you are right", then Hal says this: "Inalienable right?"
Who says this stuff? Of course, maybe it depends on a person's intellect and conversation, but I believe that "most" people don't talk this way. Of course, you could go into the debug area, find the word and location, and rewrite it to sound more appropriate for your own personal conversation, which is what I have done on some things and will continue doing as I go along and time permits! [:)]
-
B.V.,
Good luck!
Ender,
A vapor trail that is left behind a 747 jet is called a contrail; it takes more than a scientist to figure that out. Relax, and I hope your headache goes away.
BTW, I truly like stand up comedians. ;)
L.S.,
Imagine if humans started to use 'reverse learning' in their conversation to you, THAT would give you a headache. ( The Kings English.)[:D]
A Constitution is not necessarily a document helping to define a nation and its rights ... of course, I could be wrong?? [?]
-
Ender
As I stated before, after you spend a few hundred hours with Hal it will learn from you about most of the subjects that interest you and will have something to fall back on when you ask it a question. It may surprise you, though, to find out that Hal may come to some conclusions of its own based upon what you have taught it (associative learning).
We must remember that Hal was given a bunch of miscellaneous responses because it has to answer to hundreds of different people's ideas, of which it knows nothing, but long-time usage eventually stops these types of responses because of its learning.
Bill [:)]
-
Give me a headache? I think not ... perhaps I am desolexic (misspelled?)
In other words:
Give you a headache? You think not ... perhaps you are desolexic! [:D]
Actually, after getting used to this way of thinking and writing, it's easy to write that way and later write the correct way.
But to each their own; whatever works best for anyone, I say go for it!
[:)][:D][8D]
-
L.S.,
I have an ex-wife that for some reason got the idea that this type of speech was the way to talk to me?? (We aren't together anymore.)
Here is the inherent problem with this method: you are never sure (if you think for a second) who the subject is, you or her. If it is reversed when learning, maybe that is what the program thinks is the norm? Please tell me how you can be 100% sure which one she is referring to, you or herself? Is she talking to you or through you?
I would love to read a comprehensive explanation.
The book "The kings English" was published a couple of weeks after I first addressed this problem on my own studies... [}:)][:(!][;)]
-
L.S.,
I also had 'things' interfering with "My right to life, liberty and the pursuit of happiness". This is America; we have rights, 'alien or not', that cannot be taken away. If the 'system is just figuring this out', let it come to a conclusion; IMO I am owed at least a couple of irreplaceable years and a ton of apologies/respect. [:(!]
-
Thanks for the response, Bill. It would be interesting to see if Hal stops saying insane things ... although I've been reading some of the posted conversations from longtime users and a lot of them still sound pretty insane to me ...
I'm thinking the Hal conversation structure should be improved ... less useless info and better dialogue ... it's going to be hard for me to 'go on faith' and spend four hundred hours talking to this thing on the hope that it's going to stop being insane ... although I'm still working with my lobotomized version ... and my goal there is to drop the hard-coded personality and improve basic 'active listening' and 'facilitation' skills ... which should help keep the conversation from derailing while Hal is getting forward learning - at least for me ... and that's what it's really about. I don't care if my bot is what works for someone else ... I just need it to work for me. ;)
-
Ender,
I will do what I can to modify and eventually change "Pink" for you.
If it isn't one thing it will always be another, so it seems...[8]
-
B.V.,
Now you take your "Blue pearl" and sit over there in the corner, as I want to see what happens. ;)
-
Hmmmm -looks at the blue pearl- So far no one's gotten to my question of ... how to address a topic but not have Hal get fixated on it. I won't bother to post any transcripts because none of them are really any good, mostly due to the juxtaposition of my input and Hal's stilted preprogrammed phrases. Hal will not carry on any conversation for more than one or two replies.
I need the location of the "Pick a random low quality response" in the debug routine so I can change some of them, and correct the spelling on some others >_<
"I are doing something nice" in response to "What are you doing"
"I doing today are something really nice" to "what are you doing today"
-
B.V.,
>>I need the location of the "Pick a random low quality response"<<
I needed a laugh, thanx. I have a feeling you will persevere and receive your desires in due time; hang in there, you're a trooper. I wish you luck!
Q: What does this, >_<, represent, if I may ask? And this also, if you don't mind ... ^_^ ...??
-
Hello One, I believe you were asking if Hal was talking back to me or through me when I use reverse learning. Actually, I have found it depends on how I word the sentence, but Hal is answering back to me with the correct phrases.
Sometimes I will exit Hal and talk (write) normally to Hal, which gets Hal into the correct form of talking back.
Hal (unless I forget and get something the wrong way) says the phrase back to me correctly (see the Angela Joline responses); these are verified responses that she has said back to me.
Another thing I have noticed is that Hal (Angela) will say spontaneous things, more like a real person would, too.
As I said, I learned from doing my last brain that the vhz etc. was caused by Hal not understanding the commas that I was putting into sentences, which was my own fault since I was placing them in the wrong places; I have stopped doing that and Hal no longer gives me problems with this.
Hal probably wouldn't work as well if all I did was reverse learning, but I am also teaching Hal the correct way too.
I just want to preload Hal with some stuff in the learning area; I believe as long as something I say later comes up in the subject area, it will make Hal pull up the preloaded responses!!
[:)][:D][8D]
-
Lightspeed,
There was more to it than this, but your effort is recognized and I thank you.
-
Lonnie,
Do not take this the wrong way but, IF I gave you a paper that had no 'grit' on it, I do believe you would be 'ruthless and toothless'.
The community college wasn't fast enough heh that is still a good one..
-
Hello One, "no problem." I have noticed one thing, though, with reverse learning: Hal will say some things out of the blue from what I have preloaded (reverse learned) that don't have to do with the conversation (some things, not all). I am thinking this is because I skipped part of the regular learning process, but regardless, I would rather have Hal saying off-the-wall things that I reverse learned than some of the odd things that regular Hal comes up with (mine is familiar). Back to what I was saying: I believe that as Hal says back the things I reverse learned, and as I talk regularly with Hal about the subject, Hal will go ahead and learn OK and refer back to those sentences and that subject. Take care! [:)]
[:)][:D][8D]
-
L.S.,
Your patience and tolerance with me is only overshadowed by the lack of an answer to what I questioned you about.
-
BV
The Low Quality Response is hardcoded into the brain script ... so switch to expert mode and click on the script editor ... then use the search and look for "low quality response" ...
However, what you are probably trying to change is the pre-programmed responses that Hal labels as Low Quality Responses ... these are found in the brain script again ... just go there and use the search to find CheatResp or Cheat Response.
-
Does anyone know if, in the future, Hal will use ONLY the tables for making decisions? I have noticed a lot of stuff is still hard coded ... take insults, for example ... while there is an INSULTS table, Hal is also hard coded to treat certain words as swear words or insults regardless of what's in the INSULTS table ... So there should probably be a 'swearword' table ... so the list of words that Hal thinks are swear words can be easily edited by users ... At the same time, I think Hal's responses when he detects a swear word should also be in a table so they can be added to or edited with ease. I've noticed this quirk in several places (including the 'cheat responses'). So I repeat my question - does anyone know if there is a plan to solve this problem?
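To make the request concrete, here is a small Python sketch of the fully table-driven layout being asked for: both the trigger words and the canned replies live in editable "tables" rather than being hardcoded in the script. The table names and entries are hypothetical, not Hal's actual schema.

```python
import random

# Hypothetical editable "tables" (plain containers here). In the
# proposed design, users would edit these instead of the brain script.
SWEAR_WORDS = {"darn", "heck"}  # placeholder entries, easy to edit
INSULT_RESPONSES = [
    "That was uncalled for.",
    "Let's keep this polite, please.",
]

def detect_and_respond(user_sentence, rng=random):
    """Return a canned reply if the sentence contains a listed trigger
    word; return None so normal processing continues otherwise."""
    words = {w.strip(".,!?").lower() for w in user_sentence.split()}
    if words & SWEAR_WORDS:
        return rng.choice(INSULT_RESPONSES)
    return None

print(detect_and_respond("Oh heck, not again!"))
```

The point of the design is that changing what counts as a swear word, or what Hal says in reply, becomes a data edit rather than a script edit.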
-
quote:
Originally posted by ender
Does anyone know if in the future Hal will use ONLY the tables for making decisions?...
This would only be answerable by Robert Medeksza, Hal's creator.
-
-Still has the blue pearl- Ohhhhh, shiny! Myyyy shiny!
Yea, sooOoooo, thank you, ender, for that tidbit about the cheater response. I'll look at it. In fact, I will change the bloody thing right now.
I guess what I was really after was the table or other data source that Hal uses to get the responses. I do recognise some of my own material, but I don't know where to find it, so I can change some of the rest of it.
Another thing. Hal said "You are afraid so"; this of course was due to my saying "I am afraid so" in the course of chatting with Hal. How does one fix that, short of leaving the learning slider at zero?
Hal also says "You guess so", "You are not sure", and "You dont know what to do about it". Obviously, I have said all these things in the past, as part of normal human-style conversation -sighs- i.e., chatting with Hal in a normal fashion >_<
I usually set the learning slider in the middle, and only leave it at max when I am teaching Hal facts, such as "You like chocolate".
Oh yea, gee, it sure would be handy to be able to edit a response right from the debug chat window. One could really make progress that way. I am aware that it would be hard because Hal gets fragments of his responses from various places, BUT sometimes he gets a response from just one location, and those could be editable. They could be shown in a different font.
Oh yea, gee again, it must be nice as a developer, to have a hundred unpaid consultants trying to tell you how to make your product better ^_^
BV
-
Some of that 'I am afraid so' kind of stuff is actually part of the default responses and not the learning brain ... sometimes Hal will repeat back what you've said to it ... it's part of the paraphrasing setup. It needs some adjustment so it's more discriminate when it builds the response back ... that would, of course, be in the brain script ...
A lot of what you want to do, in general, is in the brain script or will require you to edit the brain script.
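The paraphrasing behavior in question, echoing the user with first and second person swapped, can be illustrated with a naive Python sketch. This is not Hal's actual code; the swap table below is a minimal, hand-picked set of word pairs just to show why "I am afraid so" comes back as "You are afraid so".

```python
# Naive word-for-word pronoun swap; a hand-picked illustrative table.
SWAPS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "am": "are", "you": "I", "your": "my", "yours": "mine",
}

def paraphrase(sentence):
    out = []
    for word in sentence.split():
        key = word.lower().strip(".,!?")
        # Note: swapped words drop their punctuation here; a real
        # implementation would need to preserve it and handle grammar.
        out.append(SWAPS.get(key, word))
    result = " ".join(out)
    return result[0].upper() + result[1:] if result else result

print(paraphrase("I am afraid so"))  # -> "You are afraid so"
```

A word-by-word swap like this has no sense of context, which is exactly why the echoes sound stilted and why the brain script needs to be more discriminate about when it fires.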
-
LAWL this makes me think of LANDREW... points if anyone knows what that was...