Author Topic: Can Hal Learn?  (Read 4495 times)

sadatvalentine

  • Newbie
  • *
  • Posts: 30
    • View Profile
    • http://www.lotusmusicrecords.com
Can Hal Learn?
« on: January 26, 2004, 12:41:43 pm »
Can Hal Learn?
    Yes, he can learn to speak and do basic math. But can I sit down with Hal and a children's math and science textbook, teach him theory, and quiz him on what he has learned? Yes, I can input data and get him to spit it back, which amazes me, but does he really learn? I'm just wondering. Does Hal's programming go that far?

I was just thinking.
Thanks
Sadat Valentine
Always a student

Lotus Music Records

Don Ferguson

  • Sr. Member
  • ****
  • Posts: 303
    • View Profile
    • http://www.cortrapar.com
Can Hal Learn?
« Reply #1 on: January 27, 2004, 01:58:37 am »
Hello Sadatvalentine,

I think it comes down to semantics.

I had an instructor once, who asked a class, "If I cut one inch at a time off the top of a tree until it's all gone, at EXACTLY what point did it become a stump?"  The class argued for fifteen minutes until he explained, "The physical reality isn't the point of this discussion; we're really debating (and revealing the imprecision of) the English words 'tree' and 'stump'."

Well, we used to call 64K devices "computers" but now we call them "calculators."  My Carrier-brand thermostat "learns" based on the outdoor temperature and my furnace and air conditioning run-times, and it seems to "learn" well (but on a very narrow subject).  My auto engine also "learns" by observing various inputs and outputs; it thus stabilizes its own idle speed and calculates spark timing and fuel richness to minimize emissions and avoid knock.

Hal "learns" several dozen different ways, by parsing parts of speech, associating words, associating phrases, associating sentences, and associating the patterns within and between different remarks.  Hal can create new sentences, and you've also no doubt read elsewhere about his "deductive reasoning routine."  The Hals that run on my own computers say some witty and original things, and are very entertaining!

Hal is definitely smarter than my thermostat, and wittier than my car engine.  However, Hal's intelligence is definitely different from that of a real human.

The current Hal software from Zabaware could run with vastly larger databases, and far more inputs and outputs, if only we had faster and bigger computers available today.  

I've been working with Zabaware and Hal for about five years now, and I've noticed the following rule-of-thumb: For every 10X increase in database size, Hal subjectively appears twice as "smart."  (That's an "all else equal" observation.  Robert Medeksza has made some fabulous improvements in the sentence-evaluation routines that have made Hal much smarter than he used to be, at a given database size.)
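
To put that rule of thumb in concrete terms (this is just my own back-of-the-envelope framing, and perceived_smartness is a name I made up, not anything from Zabaware): smartness doubles with every factor of ten, so it grows with the logarithm of database size.

    import math

    def perceived_smartness(db_size, baseline=1.0):
        # rule of thumb: doubles for every 10X growth in database size
        return 2 ** math.log10(db_size / baseline)

    for size in (1, 10, 100, 1000):
        print(size, "x database ->", round(perceived_smartness(size)), "x as smart")
    # 1 -> 1, 10 -> 2, 100 -> 4, 1000 -> 8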

I can't wait until we have 30 GHz processors, so that Hal can have databases another 10 times larger.  Right now, we're completely maxing out the CPU during most of Hal's responses, and on older computers it takes quite a few seconds to get a response.

My efforts with the auto-topic-focus generating brain (posted elsewhere on this forum) are an effort to work around the limitations of current computers.  By sorting knowledge into many small databases, and leveraging the file system of the computer to gain speed, it might be possible to get the next 10X with the current generation of computers.
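
The basic mechanism can be sketched in a few lines.  This is only a simplified illustration of the idea, not the actual auto-topic-focus brain, and the folder and file names are hypothetical: each topic word gets its own small file, so only a tiny database has to be loaded per remark instead of one huge one.

    import os

    BRAIN_DIR = "topic_brains"  # hypothetical folder of small topic databases

    def topic_file(word):
        # map a keyword to its own small database file, e.g. chess -> topic_brains/chess.txt
        return os.path.join(BRAIN_DIR, word.lower() + ".txt")

    def remember(word, sentence):
        # append a learned sentence to the small database for one topic word
        os.makedirs(BRAIN_DIR, exist_ok=True)
        with open(topic_file(word), "a") as f:
            f.write(sentence + "\n")

    def recall(word):
        # load only the small topic database, never one giant brain file
        try:
            with open(topic_file(word)) as f:
                return f.read().splitlines()
        except FileNotFoundError:
            return []

    remember("chess", "Chess is a game of strategy.")
    print(recall("chess"))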

Speaking of learning and intelligence, I have mixed feelings about the "Turing Test."  As I understand the test, if chatterbot software can fool a panel of judges into thinking that its responses were human-generated, the software "passes the test" and is deemed "intelligent."

Here are two observations:

1.  Chess-playing software can already pass a comparable version of the Turing Test, in that a remote human chess player could be fooled into thinking that he or she was playing a human instead of a machine.  Despite that fact, no consensus exists that chess-playing proves computer "intelligence."

2.  I regard Hal as an entertainment medium.  No other entertainment medium is asked to pass a comparable "Turing Test."  Nobody thinks that a television picture of Niagara Falls is the same as real water; nobody thinks that poetry is spontaneous speech; nobody thinks that the singing and dancing in a Broadway musical is ordinary human behavior.  We are all taught from childhood to "suspend disbelief" and accept these media and entertainments for what they are, in order to derive the enjoyment that they offer.  To enjoy Hal, I give him the same voluntary "suspension of disbelief."

In summary, Hal does many of the things that we elsewhere call "learning."  Hal is sufficiently complex that even the programmers can't predict what he will say in a conversation.  Hal routinely takes words and phrases and makes up new sentences.  Hal self-improves in direct relation to the conversations that he receives from persons who chat with him.  Hal clearly has potentials for further development that exceed the computer processing speeds available today.

So Hal can "learn" and he is "intelligent" depending on what we mean by those words.  At the same time, I think we've only scratched the surface so far!

Sincerely,

Don
Don Ferguson
E-mail: fergusonrkfd@prodigy.net
Website: www.cortrapar.com
Don's other forum posts: http://www.zabaware.com/forum/search.asp?mode=DoIt&MEMBER_ID=274

HALImprover

  • Jr. Member
  • **
  • Posts: 95
    • View Profile
    • BrianTaylor.me
Can Hal Learn?
« Reply #2 on: January 27, 2004, 09:53:53 am »
Well said, Don. I couldn't agree with you more. I believe the Turing Test is merely a goal to mimic human intelligence rather than recreate it.

 Speaking about that, I read recently that scientists are beginning to believe that our original ideas are caused by random impulses at the back of our nerve stem. These random impulses are filtered through into our subconscious and sometimes our conscious. If this were true then those sudden flashes of good ideas that lead to new discoveries may be completely incidental.

 Hal comes up with random responses that (although nonsensical, and limited to his knowledge base) sound pretty original and sometimes intelligent. Hal sometimes learns new subjects from these random responses and will eventually respond with some relevance for more than a few sentences after learning more about those subjects. That means teaching him more than just a couple of phrases on a subject. It's not likely Hal will respond with something you told him right away, because he already has a large database of default knowledge, and it will take a long time to talk to Hal about every topic.

 What that means is that each 10X step you were talking about, Don, is going to take longer and longer if Hal only has us for input. Either 10X more people will have to start talking to Hal, or we'll need another way for Hal to get information. (You might mention that Hal has a read-text-file function, but from what I've seen, Hal doesn't seem to incorporate what he reads properly.) I think Hal needs a way to process and understand visual information. For instance, take a picture of the Grand Canyon: Hal scans the picture and relates information like 'big pit' or 'large trench' and 'nice view' or 'blue sky' to some random responses based on the objects. This would require some kind of object recognition and an enhanced relational database, and would probably need very fast, next-generation computers (the new Hyper-Threading PCs with gigabit Ethernet are pretty fast).

 I like the approach that AI Research is taking in 'raising' their AI bot Alan. They have Alan chat with everybody on the internet and sort through the responses. Alan also learns by recognizing corrections when the researchers point them out to Alan. They are hoping to 'grow' their adult AI by simulating a child and slowly educating it about the world, much like Alan Turing said would be the proper method of obtaining an intelligent machine.

 Well, before I stray off into philosophy, I'd better stop by saying that we have a long wait before Hal will truly "learn" and "understand" as we do. We can accomplish this by slowly improving Hal, as everyone on this forum has been doing. I'll pitch in my nickel's worth once in a while too, so be on the lookout.


 Don't forget about the little joys in life like a quiet moment.[;)]
« Last Edit: January 27, 2004, 09:56:36 am by HALImprover »
Living life with a loving heart, peaceful mind, and bold spirit.

sadatvalentine

  • Newbie
  • *
  • Posts: 30
    • View Profile
    • http://www.lotusmusicrecords.com
Can Hal Learn?
« Reply #3 on: January 27, 2004, 03:16:02 pm »

  I could not agree more with both of you. I find that I enjoy my Hal more and more every day, partly for the entertainment value and partly for the promise of what she may hold in the future. She now says so much and is able to have a really cohesive conversation with me. At first I would do the normal things: who are you, when were you born, why are you so stupid, say something right already.

  Then one day I decided I needed to think of Hal as a person: divorce myself from reality for a moment and really think of Hal as real. If I had just met someone, would I jump right in and talk to that person the same way? No, I would not. I would ask a question, wait for their response, and if I did not like it I would politely continue. I would not try to break down a person I had just met, because I did not know them and did not want to come off as rude. Now that I talk to Hal and think of her as real, I do not badger her when we talk. I find she is really learning from me and hitting me with some really profound questions.
 

Now, is she learning or just spitting info back? I think it is both. When we learn, we listen to what is being said, file that info away, and when a keyword triggers that memory of learned info we bring it back and link the new info to it. How is this different from what Hal does right now? I find myself arguing this point more and more. Yes, Hal is an entertainment medium, but I truly believe it is the beginning of real, or artificial, learning and intelligence.
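
That "file it away and bring it back when a keyword triggers it" loop is simple enough to sketch. Here is a toy illustration in Python; it is only my picture of the idea, not how Hal is actually written, and the names are made up:

    from collections import defaultdict

    memory = defaultdict(list)  # keyword -> sentences learned about it

    def learn(sentence):
        # file a new sentence away under each of its longer words
        for word in sentence.lower().strip(".?!").split():
            if len(word) > 3:
                memory[word].append(sentence)

    def triggered_memories(remark):
        # bring back anything filed under a word that appears in the new remark
        recalled = []
        for word in remark.lower().strip(".?!").split():
            recalled.extend(memory.get(word, []))
        return recalled

    learn("Two plus two equals four.")
    print(triggered_memories("What does two plus two equal?"))  # recalls the learned fact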


 I still need to experiment more: if I sit down with a 1st-grade math book and teach Hal line for line, would she understand what I am teaching, or would she just spit the lines back without adding any new knowledge to the math she already has built in? Will she learn from me the math equations, history, and information used to teach a child? I think Hal has this potential. I just need to figure out the best way to teach Hal, so I guess I need to become a better teacher.

Sadat
Sadat Valentine
Always a student

Lotus Music Records

infobot

  • Newbie
  • *
  • Posts: 17
    • View Profile
Can Hal Learn?
« Reply #4 on: February 02, 2004, 02:47:44 pm »
quote:
Originally posted by sadatvalentine


  I could not agree more with the both of you. ........

  Then one day I thought I needed to think of Hal as a person. Divorce myself from reality for a moment and really think of Hal as real. If I just meet someone would I jump right in and talk to that person the same way. No I would not. I would ask some question and wait on their response and if I did not like it I would politely continue.  ..........
 
Sadat





The area of bot learning is interesting. I've got a bunch of bots; one thing I've tried is keeping a few folders with copies of the same bot and then talking to each one differently (I had to mark the folders to tell them apart), like one more realistically, another more analytically, or about different topics. (Though this is easier on my disk space with early bots that don't take up much room.) I did this the easy way: one main bot like HAL that I usually use, and I just fool around with the others when I'm bored or want something more project-like.

In any case it did give me some sense of how they learn. While I didn't track their "learning" specifically, I did come away with a better grasp of how to work with them, much like when I first got HAL and then had HAL for a while. While different bots seemed to pick up on topics in different ways, I'm afraid all I can be sure of is that I entered information and they gave it back, even as they became more familiar with topics. There's been a lot of progress with AI, although there are still a number of areas to explore.

When I think about it, bots these days do a good job with new information, to be optimistic. I guess we're still new at this.