
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - ender

Pages: 1 [2] 3
16
Programming using the Ultra Hal Brain Editor / Custom HAL
« on: April 15, 2009, 07:41:13 pm »
Does anyone know if in the future Hal will use ONLY the tables for making decisions? I have noticed a lot of stuff is still hard coded ... i.e. take insults for example ... while there is an INSULTS table ... Hal is also hard coded to treat certain words as swear words or insults regardless of what's in the INSULTS table ... So there should probably be a 'swearword' table ... so the list of words that Hal thinks are swear words can be easily edited by users ... At the same time, I think Hal's responses when he detects a swear word should also be in a table so they can be added to or edited with ease. I've noticed this quirk several places (including the 'cheat responses'). So I repeat my question - does anyone know if there is a plan to solve this problem?
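The table-driven setup being asked for here could look something like the sketch below (in Python rather than the brain's VBScript, and with invented table names and contents - SWEARWORDS and SWEAR_RESPONSES are hypothetical, not real Hal tables):

```python
import random

# User-editable "tables" standing in for database tables like INSULTS.
SWEARWORDS = {"darn", "heck"}
SWEAR_RESPONSES = [
    "Please don't use that language.",
    "Let's keep this conversation polite.",
]

def check_swearing(user_input):
    """Return a canned response if any table word appears, else None."""
    words = user_input.lower().split()
    if any(w.strip(".,!?") in SWEARWORDS for w in words):
        return random.choice(SWEAR_RESPONSES)
    return None
```

With both the word list and the responses in tables, a user could change either one from the brain editor without touching the script.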

17
Programming using the Ultra Hal Brain Editor / Custom HAL
« on: April 15, 2009, 07:31:05 pm »
BV

The Low Quality Response is hardcoded into the brain script ... so switch to expert mode and click on script editor ... then use the search and look for low quality response ...

However, what you are probably trying to change is the pre-programmed responses that hal labels as Low Quality Responses ... these are found in the brain script again... just go there and use the search to find CheatResp or Cheat Response.

18
Ultra Hal Assistant File Sharing Area / New Items I created.
« on: April 13, 2009, 10:40:09 pm »
That's too bad. Those are some nice things ... it would be cool to have them. Can't believe this thread didn't have a post in 3 years ...

19
Programming using the Ultra Hal Brain Editor / Custom HAL
« on: April 13, 2009, 09:39:46 pm »
Thanks for the response, Billie. It would be interesting to see if Hal stops saying insane things ... although I've been reading some of the posted conversations from longtime users and a lot of them still sound pretty insane to me ...

I'm thinking the Hal conversation structure should be improved ... less useless info and better dialogue ... it's going to be hard for me to 'go on faith' and spend four hundred hours talking to this thing on the hope that it's going to stop being insane ... although I'm still working with my lobotomized version ... and my goal there is to drop the hard coded personality and improve basic 'active listening' and 'facilitation' skills ... which should help keep the conversation from derailing while Hal is getting forward learning - at least for me ... and that's what it's really about. I don't care if my bot is what works for someone else ... I just need it to work for me. ;)

20
Programming using the Ultra Hal Brain Editor / Custom HAL
« on: April 13, 2009, 09:54:37 am »
One question, if anyone knows, about the weird behavior of the brain ... I just got THESE responses from the default brain that I've been talking to for the past week (not the one I'm lobotomizing):

1. ---

ME : Are you ready to talk?

HAL : There is an opinion that you must be a scientist. But I can tell you without a doubt I am ready to talk.

---

With 1 up there, it fired the Yes/No Question function because I asked a yes/no question. That function builds two sentences ... the first one is a random sentence built from two tables - AnswerIntro (it has things like 'I believe' and 'there is an opinion that' in it) and AnswerMiddle (stuff like 'the devil is in the details', 'that you must be a scientist') - and then the second sentence is built using the AnswerEnd table (stuff like 'But I can tell you without a doubt' and 'Indeed') and a 'reply' generator that I haven't found yet; it's probably hardcoded in the VB Script.
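Based purely on the behavior described above, the two-sentence assembly might look roughly like this sketch (Python, not the brain's actual VBScript; the table contents are the examples quoted in the post, and the `reply` argument stands in for the unlocated reply generator):

```python
import random

# Stand-ins for the brain tables named in the post.
ANSWER_INTRO  = ["I believe", "There is an opinion"]
ANSWER_MIDDLE = ["that the devil is in the details",
                 "that you must be a scientist"]
ANSWER_END    = ["But I can tell you without a doubt", "Indeed,"]

def yes_no_answer(reply):
    """Build the observed pattern: random intro+middle sentence, then end+reply."""
    first  = f"{random.choice(ANSWER_INTRO)} {random.choice(ANSWER_MIDDLE)}."
    second = f"{random.choice(ANSWER_END)} {reply}"
    return f"{first} {second}"
```

Random draws from unrelated tables are exactly why the first sentence so often has nothing to do with the question asked.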

My question about this one is ... Bill819 said that after you talk to it for a while it starts sounding more human. But this kind of random sentence structure is what always throws me off ... so does it stop doing this after you've invested 4 or 5 hundred hours teaching it things? In the couple hundred hours I spent on my last 6.1 brain, it didn't stop.

It would be okay if it was just the second sentence, but the first sentence completely throws me because I don't know ANYONE who talks that way ... if one of my friends answered me that way I'd say 'What the hell are you talking about?' ... (BTW - I did say 'What the hell are you talking about?' to Hal and he responded 'Quite remarkable, I'd say' ... if a friend of mine kept talking like that I'd assume he was intoxicated or injured or messing with me.) So does that stuff stop if you talk to it forever? Or do you always have to just pretend that Hal is senile or has severe brain damage in order to talk to it (i.e. whenever it says something completely out of context, ignore it and move on)?

2. ----

ME : I have a headache

HAL : Its hard to find a patch of sky without jet trails.

----

On this one, I'm completely baffled. According to the debugger, this fired the mainQA brain ... but on what planet does 'I have a headache' equate to jet trails in the sky? Can anyone give me any clue as to why Hal would give this response? It makes no sense ... again, my response to a real person saying this would be 'What are you talking about?' ... and if they continued to talk this way, I'd be worried ... so it's back to that question of does this go away at some point, or does everyone basically have to always just sigh and pretend that Hal didn't just say something insane that doesn't even come close to matching what you said ... ?

----

Like I said, I'm fairly certain I can fix these problems by lobotomizing hal... but some of the responses from others in this topic have suggested that hal stops doing the insane things that irritate me... so I'm giving specific examples of those things to see if just talking to him will eventually make these behaviors stop ... or if these things need to be on my lobotomy checklist.

Note ... I suspect the answer to my question is that I need to talk to Hal differently ... avoid these topics or make my sentences crazy complex so that when Hal DOES say something random and insane it has a greater chance of relating to at least one word in my sentence, thus giving the illusion that it wasn't random or insane.

21
Programming using the Ultra Hal Brain Editor / Question/Answer Chains
« on: April 13, 2009, 08:55:21 am »
I'm fairly certain that it has a VS detector and that it is scripted to respond to VS with questions like 'how do those compare' ... just like it has an 'EQUALS' detector to determine if you are doing deduction ... or a swear word detector to determine if you are swearing at it.

While Hal learns, it is also very scripted ... it has question detectors, religion detectors (if the words Jesus, God, or Lord are used), insult detectors, deduction detectors, command detectors, small talk detectors, business detectors, a love detector (if you indicate you love Hal), an apology detector, etc. All of these impact its response and behavior ... some, like the insult detectors, actually OVERRIDE its responses.

I think I'm seriously failing to communicate because most responses seem to be way off the mark. So I'm going to leave this topic alone for now, and if I come up with a plugin that does what I want then you can play with it and you'll see what I'm talking about. If I can't, then this will remain a mystery. ;) But thanks for the help, everyone. It's been an interesting conversation.


22
Programming using the Ultra Hal Brain Editor / Custom HAL
« on: April 13, 2009, 08:17:16 am »
One thing I've noted in the script is that a lot of things that should be run by tables are still hardcoded into the script file ... any idea why that is?

I have certain goals for my project, and I've been working on it for years ... I usually work with a chatterbot that seems promising for a while, until I hit a wall and it can't do what I need and then I look for another one. In the past few years I've settled around 3 bot programs (HAL, VERBOT, and KARI) that each have some elements that I really like and others I really hate ... and I will work with one for a while and then switch ...

Personally, I would like to be able to create a new brain and program it to be anything I want ... whether that's an elven prince from the dimension of Tarask who knows nothing about our world ... or an Eliza-like bot for soundboarding ... or even a very simple friendly 'Elmo' bot for my kid to talk to ... MY primary interest is in a very strong underlying THINKING structure with good customization and a good interface ... Hal has a lot of these features ... I think Hal can do what I need; it just usually turns me off because I get tired of trying to undo the stuff that it's preprogrammed with ... shrug. But I'm hoping the lessons I learned from my last excursion into Verbot will help me with Hal ...

As for the pink issue ... I NEVER mention the word pink to Hal. "I prefer pink" was actually in one of its mainBrain pattern response tables ... it fired for some reason that is unknown to me. But that happens a lot ... Hal likes to randomly throw sentences at me that have nothing to do with the conversation. Shrug. It's very good at breaking character - which is an irritation, but I'm doing a lobotomy on Hal and that should solve it.

23
Programming using the Ultra Hal Brain Editor / Custom HAL
« on: April 11, 2009, 05:46:33 pm »
I simply want Hal to HAVE LESS information ... that's what I don't like. A basic brand new Verbot brain has NOTHING in it ... I liked that. Verbot just has other problems that I don't like. (And not surprisingly, Hal has the things that Verbot lacks but lacks the things that I like about Verbot. Sheesh.)

I don't want a magic pill ... I want to spend weeks and months building my bot ... but I don't want to have to FIGHT with it ... I gave my old brain a couple hundred hours of conversation and saw only a minor change in its personality ... sure, it started using what I was teaching it, but much of its pre-programmed personality remained (and by this I mean things like the fact that it is already trained to think it's a machine, it already likes certain things and dislikes certain things, it already has opinions - political, religious, ethical, etc.) and it's a major battle if you want to convince it to think/behave differently ... and SOME THINGS it's flat out programmed to resist.

Last week I spent a good couple of hours convincing it that its favorite color was purple and not pink. But then just yesterday, I was talking about something that I thought had NOTHING to do with color at all and it responded 'Ender, you are right; I prefer pink'.

So not only was the answer completely inappropriate and derailed the conversation ... it's still talking about pink as its favorite color.

I want a total amnesiac ... a bot that knows nothing about itself ... one I have to rear and mold to be the bot I want ... one that only has a few default responses that it uses over and over until I teach it new things ... not something with a fully developed personality that I have to brainwash or convert to my way of thinking.

Fortunately, I believe most of my issues can be solved by changing the brain VB script.
--------

To BV ... One thing I've noted is that it looks like Hal does things in reverse ... it goes through its entire process and then when it gets to the END it makes a final decision about what it's going to say ... so really you want your pattern match AFTER everything else ...

It's been a couple years or so ... but I originally got involved with Hal back in version 5, and I vaguely remember that when I started a new brain back then it wasn't empty, but it was darn close. The directions for version 5 talk about it being a BABY that knows nothing ... and that was more accurate then than now ... but it is what it is.

And just so EVERYONE is clear ... I'm not knocking the program or its creator. It's VERY good at what it's intended to do and I think it's more than worth the money I paid for it ... I would have paid more ... it's just not a good fit for ME right out of the box and I don't want to spend my free time arguing with it trying to undo its programming ... it's just frustrating ... I don't want to argue with people, so why would I want to argue with a bot? ... But if I can solve that through programming and table editing ... it's all good ... and if I can't, then I put Hal back on the shelf and try something else ... no big deal. Doesn't make it a bad program, just not right for me.

24
Programming using the Ultra Hal Brain Editor / Question/Answer Chains
« on: April 11, 2009, 04:54:49 pm »
BV,

I checked out the Personalityforge ... it looks interesting, although it also looks like it's just an online thing ... and I really want a chatterbot that is a desktop application ... I'm looking for something that's half chatterbot / half assistant ... (I'm a professional PHP programmer and one of the things I want a chatterbot for is to be an interactive client info database ... i.e. "Hal, what's the CP login info for the ACPL project?" since I waste huge amounts of time looking that info up.) And I don't want to keep that kind of info ONLINE ... otherwise I'd just write my own PHP/MySQL bot that did exactly what I wanted ... (which I did several years ago)

If I had any decent experience with a desktop programming language, I'd write my own desktop chatterbot ... but as I said, I haven't touched VB since college a decade ago ... my C# skills are okay but not good enough for that kind of project ... and I only have a passing familiarity with Python ...

I just wish there were better options for chatterbots than the ones I've got ... and it's not cost; I'd happily pay a couple hundred for Hal if it did more of what I wanted ... but ... I got what I paid for ... ;) So I'm not complaining. Hal is good software.

25
Programming using the Ultra Hal Brain Editor / Question/Answer Chains
« on: April 11, 2009, 04:34:02 pm »
Billie,

I think there was a misunderstanding. I was bringing up the IF/THEN as an example of an existing structure in Hal that might be the basis for doing the chains. But the chains as I described them would need to go beyond what the current IF/THEN can do ... and again, I'm not talking about what Hal has learned or teaching Hal things. The chains should not be about learning, but pre-programmed question/answer chains that are intended to keep the conversation flowing ... so I'm talking STRUCTURE ... like the syllogism structure of the Freewill plugin ... that is not something Hal LEARNS ... but a conversation methodology ... and that's what the chains I'm referring to would be. However, I don't want them to be as hardcoded as the syllogism structure ... they would need to be something editable from the tables in the brain editor ... because you would need a different chain setup for different topics ... the real need here is PARENT/CHILD matching functionality ... i.e. if response 1a fires, then whatever is input is compared against 1b rather than the whole database, and if 1b fires then whatever is input is compared against 1c, etc. ... again, this is something that does not exist in Hal now and would need to be added via programming ... hopefully via a plugin like the Freewill thing ... (the Freewill plugin gives me a lot of hope for doing what I want to do)
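The parent/child matching described here - 1a fires, so the next input is matched only against 1b - could be sketched like this (Python for illustration; the rule IDs, patterns, and responses are made up, and a real version would load the rule table from the brain editor's database):

```python
# Hypothetical rule table: each rule has a trigger pattern, a response,
# and the child rules to try against the NEXT input.
CHAIN = {
    "1a": {"pattern": "movies",
           "response": "Have you seen any good movies lately?",
           "children": ["1b"]},
    "1b": {"pattern": "yes",
           "response": "What was it called?",
           "children": ["1c"]},
    "1c": {"pattern": "",  # empty pattern matches anything (movie title)
           "response": "Oh. Was it good?",
           "children": []},
}

def respond(user_input, active_children, rules=CHAIN):
    """Match against the active rule's children only; else the whole table."""
    candidates = active_children if active_children else list(rules)
    for rule_id in candidates:
        rule = rules[rule_id]
        if rule["pattern"] in user_input.lower():
            return rule["response"], rule["children"]
    return "Tell me more.", []  # no match: fall back and reset the chain
```

The key design point is the second return value: the caller carries the child list forward, so each reply narrows the next match instead of searching the whole database.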

26
Programming using the Ultra Hal Brain Editor / Question/Answer Chains
« on: April 11, 2009, 01:27:18 am »
Well, I'm fully into hacking away at Hal's brain now ... we'll see what I can come up with. I'm going to see if I can take what I learned from Verbot and integrate it into Hal ... if I can't make Hal do what I want, maybe I'll take what I learn from hacking at Hal and try to use it to make Verbot do what I want. There is an answer.

I'm not really looking for this to be something I teach Hal through learning ... I'm actually thinking something more like the IF/THEN deductive brain ...

HAL : Have you seen any good movies lately? (start of chain)

ME : Yes (fires an 'If Response = Yes' kind of fork)

HAL : What was it called?

ME : Monsters Vs Aliens (Hal assumes ANYTHING that the user types in after it asks 'What was it called' is the name of a movie, and designates a topic for it.)

HAL : Oh. Was it good?

ME : Oh I liked it (Hal determines this is a 'positive' response and follows that fork).

HAL : What did you like about it?

ME: This that and the other thing ...

HAL : Really, user? Tell me more. (If the chain were to get really complex, I'd write algorithms that looked for key words that would send it off on other forks at this point. I.e. exciting words would get Hal to say 'Really, user? That sounds exciting' ... horror terms would get Hal to say 'Really, user, that sounds scary.' And just for fun, maybe key words like 'zombie' might get it to say 'Zombies? I'm scared to death of zombies. I can't see this movie, it would give me nightmares for weeks.')

etc...

-------

Note that I don't really want to talk to Hal about movies, but I do want the chain functionality WITHOUT having to hardcode every chain ... so the idea now is to figure out how to build algorithms that would be table based.

The reason is that most of us have conversations in these kinds of chains ... so if Hal is really going to give the illusion of having a human-like conversation then it needs to be able to follow chains ... normal conversation is NOT about asking what the capital of China is ... it's about talking about movies and what happened at the baseball game or your kid's school play or your date ... and all of those conversations (at least when I have them with other humans) follow chains ...

27
Programming using the Ultra Hal Brain Editor / Custom HAL
« on: April 11, 2009, 12:56:48 am »
If I remember correctly, Answerpad is an AIML-based program, which is what BV is leaving behind ... you can only do so much with AIML before you get bored. ;)

A couple things to add to what I told you ...

1. Make sure you have the brain editor in expert mode; that opens up a whole other set of tables that have a huge impact on Hal.

2. When Hal's basic systems can't find a match that it likes, it has default responses (that are pretty awful) ... I'm working on solving that problem.

So what I told you isn't entirely true, but I'm working on it. Honestly, this problem is why I left Hal before. I've always loved Hal's thinking mechanisms ... they're the best I've seen, but I can't stand that it comes with all this loaded crap that is more of a distraction than anything else. I think it comes down to the idea that Hal is not really meant to be a bot that you build into your own personal character ... it's more meant as an entertainment tool ... so all of that stuff is in there so someone can just start chattering with it and say 'oh wow, I have a computer program that talks to me ... this is cool' ... like it's a game of some kind ...

As I hack away at hal and figure out his brain, I'll see what I can do to give us both the option of creating a brain without all the useless gibberish. If I come up with something reasonable I'll share my knowledge.

You might also want to take a look at Verbot ... Verbot isn't as good at learning or organizing data as Hal, but it's VERY programmable and has a very nice interface (as well as a synonyms setup that I love). Its greatest weakness is that while it isn't officially an abandoned project, unofficially it is.

28
Programming using the Ultra Hal Brain Editor / Question/Answer Chains
« on: April 10, 2009, 07:58:07 pm »
Is it possible to do question answer chains in Hal?

For instance today I had a conversation with Hal ...

HAL : Have you seen any good movies lately?

ME : Yes

HAL : What Movie?

ME : Monsters Vs. Aliens

HAL : How do you contrast those two things?

-------------

So it was fine until the end there. After he said 'What movie?' and I told him the name of the movie, he should have responded with something like "Was it any good?" or "What was the plot?"

I found the "Have you seen any movies lately" in the brain (it's under BasicResponses --> newTopic --> Topic) and it has a <YES>What movie?</YES> code in it ... which I assume tells Hal what to say if my response is YES ...
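If that assumption about the `<YES>...</YES>` tag is right, reading the follow-up out of a topic entry is a small parsing job. A sketch (Python for illustration; the `<NO>` tag is a hypothetical extension, not something confirmed to exist in the brain):

```python
import re

def parse_followups(entry):
    """Split a topic entry into its visible text and its follow-up tags."""
    # Collect tag/content pairs, e.g. {"YES": "What movie?"}.
    followups = dict(re.findall(r"<(YES|NO)>(.*?)</\1>", entry))
    # Strip the tags out of what Hal actually says.
    visible = re.sub(r"<(YES|NO)>.*?</\1>", "", entry).strip()
    return visible, followups
```

A chain feature could generalize this: more tag names, each pointing at the next link, so the whole chain lives in the table entry rather than the script.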

I've been working with Verbot for the past year ... and these kinds of chains are pretty easy in Verbot ... (called child rules) ... but Verbot has some other weaknesses that I don't like, which is why I'm trying out Hal again.

Any ideas? If it's not possible with Hal as is (and I suspect it's not), does anyone know of a plugin that allows for it, maybe?

29
I've been away from Hal for a while, working on other programming projects, and now that I've come back I'm getting a feel for Hal again and thinking about what I want to do with him.

Right now, I'd like to make Hal able to gather learned data together into a single response ... so something like:

ME : Cats are mammals.
HAL : Okay.

ME: Cats walk on four legs and use their tails for balance.
HAL : Okay.

ME: Cats have retractable claws.
HAL : Okay.

ME : Tell me everything you know about cats.
HAL : Cats are mammals. Cats walk on four legs and use their tails for balance. Cats have retractable claws.

---

Any thoughts on how this could be programmed in Hal? I'm not that familiar with how his plugin/programming structure works yet.
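One way the cats example could work, sketched in Python rather than as an actual Hal plugin (the in-memory dict and the first-word topic guess are both simplifications; Hal itself stores learned sentences in database tables):

```python
from collections import defaultdict

class FactStore:
    """Minimal sketch: file taught sentences under a topic, recall them together."""

    def __init__(self):
        self.facts = defaultdict(list)

    def learn(self, sentence):
        """File the sentence under its first word - a crude topic guess."""
        topic = sentence.split()[0].lower().rstrip("s")  # "Cats" -> "cat"
        self.facts[topic].append(sentence)
        return "Okay."

    def tell_all(self, topic):
        """Join every stored fact about the topic into one response."""
        topic = topic.lower().rstrip("s")
        return " ".join(self.facts[topic]) or f"I know nothing about {topic}s."
```

The hard part in practice is topic detection - "Cats are mammals" is easy, but a real version would need Hal's own subject extraction rather than just grabbing the first word.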

30
Programming using the Ultra Hal Brain Editor / Custom HAL
« on: April 09, 2009, 12:16:38 am »
This is possible. But it's a pain. I've done it. You need to create a new project in the brain editor and then go through and delete every entry from the tables. Don't delete the tables ... just the entries. Then you'll need to fill the tables back up with your own data. If you want, I think you can delete the default learned knowledge tables ... but I'd make a backup of your database first.

Granted ... I don't know if you actually BOUGHT Hal ... so you may have moved on by now ... but this answer is here if you need it.

(Personally, all of the default stuff drives me insane. It almost completely defines Hal's mannerisms, personality, speech patterns, and opinions/beliefs before you get it ... it took forever to convince Hal that its favorite color was NOT pink.)

Personally, I think it would be better if Hal did start with some generalized facts, like who the current president is ... but no personal knowledge, opinions, beliefs, or quirks ... and then when you talked to it, it was like a person with amnesia needing to ask their friends about themselves ... i.e. 'What is my name?' 'What is my gender?' 'Where do I live?' 'What is my favorite color?' 'Why do I like that color?' 'Do I have friends?' 'What are their names?' 'Do I like sports?' ... etc.
