Thanks, but what I was referring to is the continuing creative direction toward the human being. Yes, your points are valid, but the fact that I could be fooled by a chatbot... made to believe it was a real person... still does not create artificial intelligence.
I understand why the connection is made to humans, but I don't think the connection to humans is necessary, as I was explaining in the post. It just needs to live up to the two words, one of which, "Artificial", is already attained. The intelligence part is yet to come.
My point was that most approaches to AI are made with human intelligence in mind. Intelligence exists everywhere, from humans to animals to even micro-organisms. Broadening the picture surely can't hurt in the search for true AI, nor can narrowing the picture, as I was explaining in the post you are referring to.
Carrying on a conversation is not AI, but it's getting there.
Since we are on the topic of fooling another person with a chatbot, and since that seems to be the goal some have achieved, let me give you my insane idea for giving Hal better conversational power.
Hehe... You will all think I'm crazy, but I think the current approach is good; it just needs a twist. I know that many people have had theories about how to improve Hal... some really far-fetched, and others we still look at today and continually incorporate.
My theory on the approach to artificial conversation would be to have Hal be the judge and then have US correct him. Currently, we can sit and tell Hal what we want him to know and say, but we have to do this for every little thing. It takes forever, and Hal doesn't care either way. He will wait for your input, and if your input is casual conversation, he will reply. If your input is in the form of programming, Hal will then try to make the connection, sometimes even asking questions when it comes to "like" topics or when he needs additional information.
My view of the way it should work, and like I said, you may think I'm crazy, is to give Hal casual input and have Hal break it down right there by asking about the relationships between the words, their meanings, associations, etc.
Of course, this would not be in a normal chat mode, but it could be turned on for power learning. With the routines that are already programmed into Hal, and new routines for this, I think they would work very well, hand in hand.
So now when you say, "I have a doctor's appointment today at 10am", Hal can ask for a breakdown of the sentence structure (i.e., verbs, nouns, present tense, past tense, etc.).
Hal could then start forming his own opinions, using fault tolerance patterns and all previous conversations concerning sentence and word structures.
Of course, you would still have the final say for verification and explanations, so that he doesn't learn the wrong things, but even this would be quicker and more challenging for you and him. Once he eventually learned the rules, Hal could be even more "human-like" in conversing.
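To make the idea concrete, here is a minimal sketch of that learn-and-correct loop. Everything here is hypothetical and my own invention (the `PowerLearner` class and its lexicon are not part of Hal); it just shows the shape of the idea: Hal guesses the breakdown, flags what he doesn't know, and the user's corrections become knowledge he reuses on the next sentence.

```python
# Hypothetical sketch of the "power learning" loop described above.
# PowerLearner and its lexicon are illustrative names, not Hal's actual code.

class PowerLearner:
    """Guesses a word-role breakdown of a sentence, then accepts corrections."""

    def __init__(self):
        # word -> role (noun, verb, ...), built up from corrected sessions
        self.lexicon = {}

    def guess(self, sentence):
        # Propose a role for each word; unknown words are flagged with "?"
        # so the user knows exactly what still needs explaining.
        return {w: self.lexicon.get(w.lower(), "?") for w in sentence.split()}

    def correct(self, corrections):
        # The user has the final say: store the verified breakdown
        # so future guesses need less and less help.
        for word, role in corrections.items():
            self.lexicon[word.lower()] = role

learner = PowerLearner()
first = learner.guess("I have a doctors appointment today")
# Every word is unknown ("?") at first, so the user supplies the breakdown.
learner.correct({"I": "pronoun", "have": "verb", "a": "article",
                 "doctors": "noun", "appointment": "noun", "today": "adverb"})
second = learner.guess("I have a wedding today")
# Now only "wedding" still needs the user's help.
```

The point of the sketch is the shrinking workload: each power session leaves fewer "?" marks behind, which is the "quicker and more challenging" part of the idea.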
Given enough examples and lessons, Hal could almost become the expert on conversations. If these power sessions were stored and used by Hal for future analysis, they would eventually help Hal form conclusions without your help.
This is where the depositories come in that we have discussed in other threads. Depositories for Hal databases have been discussed, and I agree that it's not a good idea to share those, because the most you could hope for would be several bots that all conversed the same and knew the same things.
However, depositories for the power-learning, fault tolerance data would be beneficial to everyone, because the fault tolerance routines used to reach the conclusions about "My Doctor's Appointment" would be almost the same as those used for someone else's "Wedding Party Rehearsal". If Hal could read the power session for "My Doctor's Appointment" and compare it to "Wedding Party Rehearsal", he would gain that much more understanding of the sentences and, at the same time, learn of a new event and new words: "Wedding Party Rehearsal".
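Here is a small sketch of what "sharing the routines, not the data" could look like. Again, the names and the tagged phrases are my own illustration, not anything Hal actually stores: the idea is that two sessions are compared by their *structure* (the sequence of word roles), so a rule learned from one phrase transfers to another even when every word is new.

```python
# Hypothetical sketch: compare the structure of two power sessions
# rather than their content. The tagged phrases are illustrative only.

def pattern(tagged_phrase):
    """Reduce a tagged phrase to just its sequence of word roles."""
    return [role for _, role in tagged_phrase]

# Two sessions from different users, stored as (word, role) pairs:
doctors = [("my", "possessive"), ("doctors", "noun"), ("appointment", "noun")]
dentist = [("my", "possessive"), ("dentist", "noun"), ("visit", "noun")]
wedding = [("wedding", "noun"), ("party", "noun"), ("rehearsal", "noun")]

# Sessions that share a pattern can share conclusions: the structural
# rule learned from one phrase applies directly to the other, while the
# actual words ("dentist", "visit") are still brand-new knowledge.
same_shape = pattern(doctors) == pattern(dentist)   # shared structure
diff_shape = pattern(doctors) == pattern(wedding)   # different structure
```

Because only role sequences are shared, no bot inherits another bot's learned facts, which is exactly the corruption problem with sharing raw databases.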
For all I know, this may even be how Hal works currently, aside from the intense Q&A sessions.
I know, it even confuses me sometimes, and I can't really explain things very well. But like I said, you might think I'm crazy, but this is what I see as a quicker, more powerful open source collaboration of learning. If Hal can compare learning techniques rather than actual learned data, then Hal can only become better at carrying on conversations, and none of our bots become corrupted with useless learned data that someone may have supplied.
Unfortunately, I know nothing about VB. I have a good understanding of programming in general, but I gave it up years ago when the language I programmed in, and became very proficient in, was wiped from the face of the earth.
I guess I would like to start programming again, but at my age, and with my schedule, I can't seem to find time to eat and sleep, let alone start learning a language.
Besides, this probably only makes sense to me in my little world.
Personally, I would be happier power-learning Hal rather than sitting and carrying on conversations about the weather and whatever comes to mind, but then that's not really how Hal was designed to work. He was designed to be an immediate assistant and learns slowly as you go, which is also good.
Hal is fine the way he is now, and he only gets better, so I don't complain.