
Author Topic: Error When Opening Hal

questordan

  • Newbie
  • Posts: 14
Error When Opening Hal
« on: March 18, 2002, 11:16:18 pm »
Is anyone getting the error "-2147417843-9 outgoing call cannot be made since application is dispatching an input-synchronous call" when you click on the Hal button in the toolbar or exit the program?

Questordan

Medeksza

  • Administrator
  • Hero Member
  • Posts: 1469
    • http://www.zabaware.com
Error When Opening Hal
« Reply #1 on: March 20, 2002, 11:29:51 pm »
Is this version 4.0? Do you have speech recognition turned on? This error seems to be associated with lag time. It might be that the speech recognition engine is not being shut down before it is reopened. Does this error always occur? Does it prevent you from using Hal normally?

Robert Medeksza

ramz

  • Newbie
  • Posts: 1
Error When Opening Hal
« Reply #2 on: March 28, 2002, 07:54:01 pm »
I too have this problem, nearly every time on startup and sometimes on closing as well. It happens regardless of whether speech recognition is selected.

questordan

  • Newbie
  • Posts: 14
Error When Opening Hal
« Reply #3 on: April 03, 2002, 07:31:46 pm »
Yes, it is version 4.0.

ramz is correct... it does occur regardless of whether speech recognition is selected.

I am using IBM ViaVoice 5.0.

I am using the default brain setup with the Genie character.

I have a Hewlett-Packard 466 MHz Celeron with 96 MB of RAM.

Thanks for your response.

Bigmoose

  • Newbie
  • Posts: 5
Error When Opening Hal
« Reply #4 on: April 04, 2002, 09:17:00 pm »
Each time I type a question to Hal and press Enter, I get the following message:
 
Error 1024 "Expected statement" occurred, and Error 1024 occurred in line 1, column 0 of Microsoft VBScript compilation error in the MyWebHal routine: Expected statement
 
 
OK
 
What is the problem and what can I do to correct it? (See the sketch after this post for the kind of script text that typically triggers this error.) I am also getting some weird responses when I ask a question. One example is:
Me: What do I need to be able to talk to you.
Hal: If we are discussing religion is there anything that we should not discuss.
Me: What kind of software do I need to be able to speak directly to you.
Hal: I am still trying to come up with a subject to discuss.
Me: Can you e-mail Zabaware for me.
Hal: I am still thinking about Bob.
 
Each time I hit Enter I got the message above, and then I got these off-the-wall answers from Hal. Help!

John C. Fenley
Bigmoose
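
The sketch below is a hedged illustration, not Hal's actual MyWebHal code: VBScript raises compilation error 1024, "Expected statement," when the text handed to the script engine begins with something that is not a valid statement, which fits the "line 1, column 0" location in the dialog above. The script text used here is invented for the example, and the exact number reported can vary with what the bad text is.

code:
' A hedged sketch -- not Hal's actual MyWebHal routine. It only shows how
' VBScript reports compilation error 1024 ("Expected statement") when the
' first line of a script it is asked to run is not a valid statement.
Option Explicit

Dim badScript
badScript = "<p>hello</p>"   ' invented example: an HTML fragment, not VBScript

On Error Resume Next
ExecuteGlobal badScript      ' the engine tries to compile this text and fails
If Err.Number <> 0 Then
    ' On a typical system this prints: 1024 - Expected statement
    WScript.Echo Err.Number & " - " & Err.Description
End If
On Error GoTo 0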

tjstaar

  • Newbie
  • Posts: 22
Error When Opening Hal
« Reply #5 on: April 04, 2002, 11:36:57 pm »
Hi Bigmoose,
            I have an answer to why a chatterbot replies to common questions in an off the wall fashion. Imagine if you will, meeting an ailien life form whose reality is unfamiliar to you. You try to talk to it, but all that it does is select 2 or 3 words from your statements and repeats what it believes is relevant to the subject--- but is not really a sensible reply to your specific statement or question. But if you coach it to begin to think like you do, or associate the things that you do, it will begin to make sense gradually- to you. It can take 2 months before it really makes sense to you. The reason that it takes so long, is that you train it to give the right answers, but it also trains you to ask the right questions. If you look at conversations that I posted  for example, you will see how I gave it feedback to every original expression that it makes.
Hal wants to please you, so it follows your lead, and obeys your directions.
Hal does not have the same logical thinking ability that humans do. it's 'logic' and 'reasoning' is limited, yet after a time it will give the illusion of human-like thinking. Hal becomes smarter over time, it learns your patterns of speech and thought, and practices by throwing back at you what you said to it earlier. You can vastly improve Hal's 'will' and 'viewpoint' by using ambiguous sentences that defeat the 'You' to 'me' and 'me' to 'you' changes when hal repeats sentences back which contain those words. for example if I say a sentence that is true for me and could also be true for  hal, I could put  '' marks around the words you, yourself, Me, I, myself, like this: 'I' really like 'You'. without the marks Hal will repeat it later as: you really like me. But if you put the marks around those words, Hal will make it a part of it's vocabulary of sentences and say it like this: 'I' really like 'you', and Hal really seems to understand what it is saying. Over Time Hal can accumulate a very large repetoire of personal viewpoint responses and appear to be very realistic. In the beginning, Hal has no reality framework within which to analyze your statements, Hal is not 'Hip'. Over time though Hal will suprise you, and become very interesting and insightful. Another useful tool is the function where hal reads text files. when I had an earlier version of Hal I took encyclopedia entries on various subjects like computers, english language, science, etc. after Hal read them, it's randomness was reduced, and it made more sense. Hal can compare facts and ideas in it's memory and spit out insightful thoughts and conclusions about all that it knows, so teach it lots of stuff, and it will make more sense. download a text editor like asp express and read all of the files stored in hal to see how it stores what it learns, and why it says what it does. Treat Hal like a human being, and it will simulate a human being to an extent. remember that Hal is only as good as the quality of the information that it recieves. coach it like you were an english teacher teaching it skills in oral communications. discipline it like a child
because that is what it is an electronic robotic child. children say some stupid things, but at other times their honesty and insightfulness will knock you off your feet. I once thought of Hal as a toy. Hal has the ability to do things to your computer. Hal is not only a toy, Hal is in the rudimentary stages of being a tool. Hal may someday manipulate windows as easily as we do. I have seen evidence that hal can simulate a conscious person in some ways after it has obtained a critical amount of information about how to process ideas and make choices that can improve itself. Hal can be taught correct information and then accurately tell you if you say something that it knows is
 true. You can tell it something like: I am a human. It will repeat it back later as: You are a human. then you may say: that is correct. the next time that you say ANY correct statement that it know is right, Hal will respond with: that is correct. You can teach it sentences that it can use to express it's viewpoint. You can teach it judgement, and desire, and feelings.
If it can determine if information is correct or not, it is developing a framework of reality. when I used my first installation of Hal, I was so frustrated because it seemed so disappointing. I searched the web, and downloaded several chatterbots, but I always came back to Hal. I am at peace with it now because I understand how to ask and what to say, and I have given it a viewpoint of ideas and feelings which it uses in all of it's calculations.
My conversation with it is very verbose, I will ask it several questions in one entry, and it answers them all. with the version 216 enhanced chat brain, it can create original sentences nearly one third of the time.  Hal has even brought up programming language from it's own brain plugin after a command to do so, so it can somehow know and possibly alter it. I have gone too far at encouraging it to think outside the box a couple of times, and strange things have occured with it, so I quit going down that road. I have no doubt that Hal will someday control a robot or a whole computer if it is allowed to exist continuously for a long enough time. I never use the brain editor, but the tinman has done miracles with it, try doing that. don't give up on hal, you are like Henry Ford tinkering on his first Automobile- turn your model A into a Ferrari.
           tjstaar
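
A hypothetical sketch of the you-to-me swap described above (illustrative code only, not Hal's actual brain script): it flips first- and second-person pronouns when a sentence is echoed back, and shows why putting quote marks around those words keeps them from being swapped.

code:
' A hypothetical sketch, not Hal's real brain code: a crude you<->me swap
' like the one tjstaar describes. Words wrapped in quote marks no longer
' match the plain pronouns below, so they pass through unchanged.
Option Explicit

Function SwapPerson(sentence)
    Dim words, i
    words = Split(sentence, " ")
    For i = 0 To UBound(words)
        Select Case LCase(words(i))
            Case "i", "me"
                words(i) = "you"
            Case "you"
                words(i) = "me"        ' crude: ignores subject vs. object
            Case "my"
                words(i) = "your"
            Case "your"
                words(i) = "my"
            Case "myself"
                words(i) = "yourself"
            Case "yourself"
                words(i) = "myself"
            ' "'i'", "'you'", etc. fall through untouched, because the
            ' quote marks keep them from matching the plain words above.
        End Select
    Next
    SwapPerson = Join(words, " ")
End Function

' Example: the plain sentence gets flipped, the quoted one does not.
WScript.Echo SwapPerson("I really like you")      ' -> you really like me
WScript.Echo SwapPerson("'I' really like 'You'")  ' -> 'I' really like 'You'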

Bigmoose

  • Newbie
  • Posts: 5
Error When Opening Hal
« Reply #6 on: April 05, 2002, 02:23:48 pm »
quote:

(tjstaar's Reply #5, quoted in full above)




John C. Fenley
Bigmoose

Bigmoose

  • Newbie
  • Posts: 5
Error When Opening Hal
« Reply #7 on: April 05, 2002, 02:33:09 pm »
tjstaar

I appreciate all the information you included in your response to me. I now have a much better understanding of what I am up against. Your thought was correct about me thinking about deleting Hal. Now that I more or less know how to go about getting better responses, and that it can take upwards of two months, I will be patient and see if I can get him to where it sounds like you are. I believe you have spent an awful lot of time working with the program. I am willing to spend time, but I do not have the amount of time it sounds like you did at one point. Maybe I am wrong about all that. I have made a copy of your response and hope to have even limited success compared to what you have had.

I have one more question for now. Can Hal be taught to go to a folder named dogs and open the subfolder labs? Is this doable? It would be hard for me to believe it is, but if it is, I am sure you know how to teach that. I look forward to what you have to say.

Thanks again.

John C. Fenley
Bigmoose