Calico,
You raise some valid points. Hal cannot sense the world directly to gain firsthand knowledge. However, Hal can obtain information from one or more users who do have direct knowledge of the world. I don't need to sense firsthand that horses have four legs; if enough people tell me it is so, then I can surmise it is true. I do have to have the capacity to judge the validity of information presented, as well as to evaluate the validity of contrary information. I also have to be able to categorize and save that information for later retrieval.
Understanding the nature (or physics) of things is important to attaining a higher tier of knowledge. Knowing that water is wet, that ice is cold, etc. adds new insight beyond basic knowledge. This understanding allows an entity to leverage current knowledge and expand upon it. Unfortunately, having knowledge of physics and of the effect of one object on another, or of the environment on an object, is clearly far beyond any chatbot's capability today. I discussed some physical world concepts in this post:
http://www.zabaware.com/forum/topic.asp?TOPIC_ID=1513

Adding sensors to Hal so that he can smell, touch, and taste would be useless unless Hal understood physical effects and the interactions between objects or with their environment. From your post I think you would agree on this point.
The good news is that an AI doesn't need to understand the physical world to interact meaningfully with a user. Some communication between humans is just "word games"; we call it small talk. Regurgitating relevant information during a conversation may not be an indicator of "real" intelligence, but part of human verbal exchange often falls into this category. So Hal can at least emulate some human conversation skills.
Each *piece* of Hal's knowledge needs to be more complete in order to maximize its value. An isolated sentence is poor-quality knowledge. Knowing when, from where, and in what context new knowledge arrived is necessary to act more intelligently. My earlier thread briefly touches on how to leverage more complete knowledge.
REPOST FROM EARLIER THREAD...
(from http://www.zabaware.com/forum/topic.asp?TOPIC_ID=1481)
===================start
I've been working on a few practical ideas about AI and knowledge. Hal, like many other AI programs, stores knowledge as a sentence. Hal can't make truly human statements with that knowledge because the knowledge is incomplete on its own. If I say to Hal, "Horses are fun.", he remembers the statement, but he doesn't have the capacity to remember where the information came from, when it was said, or on how many occasions it has been stated. In essence, Hal doesn't know how valid the knowledge is. To improve the completeness of the knowledge we would have to store it as a record with many additional details.
First I've classified knowledge into three categories:
1) Permanent ("The sun rises everyday." This knowledge is always true, thus permanent.)
2) General ("Baseball is very popular." This knowledge is generally acceptable as true, but not permanent.)
3) Ephemeral ("It's raining outside." This knowledge is only true for a relatively short time.)
Permanent knowledge can be programmed into an ALICE AI because it doesn't need to be learned and it will never change. Permanent, General and Ephemeral knowledge can all be learned by Hal, but he cannot distinguish between the types, and he never will unless we change how the knowledge is stored.
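For illustration, here's how the three types might be represented in code. This is a minimal Python sketch; the names are mine, not anything in Hal or ALICE today:

from enum import Enum

class KnowledgeType(Enum):
    PERMANENT = 1   # always true, could be hard-coded by the botmaster
    GENERAL = 2     # generally true, but may change with reinforcement
    EPHEMERAL = 3   # true only for a short time (weather, day of week)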
Here's one possible method to store knowledge as a record for our new AI we'll call "Murph":
Knowledge Record:
A) Knowledge: "Horses are fun."
B) Knowledge type: General
C) Knowledge reinforcement: "Horses are fun." heard 10 times.
D) Knowledge unreinforcement: "Horses are NOT fun." heard 1 time.
E) Knowledge "Validity" factor: C - D = 10 - 1 = 9
In the above knowledge record Murph knows this knowledge can change because it is classified as General. The Validity factor is a dynamic measure that changes with each reiteration of this knowledge from the user. If you tell Murph "Horses are NOT fun." enough times, then he will begin to believe it. This is similar to human interaction. Most of our knowledge is secondhand; we get it through newspapers, books, TV, other people, etc. We often don't experience new knowledge directly. For example, "France is a country." Is it? I've never been there, so how do I know it exists? Because it's on a map? Because a lot of people told me so? Well, maybe. Murph's General knowledge is entirely secondhand from the user. However, Murph could have some Permanent knowledge programmed in by the botmaster, which could be treated like firsthand or "a priori" knowledge. What new capability can "Validity" provide with General knowledge?
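Here's a rough Python sketch of such a record. The field names are illustrative, not an actual Hal data structure:

from dataclasses import dataclass

@dataclass
class KnowledgeRecord:
    text: str              # A) the knowledge itself
    ktype: str             # B) "Permanent", "General" or "Ephemeral"
    reinforced: int = 0    # C) times heard as stated
    unreinforced: int = 0  # D) times heard contradicted

    @property
    def validity(self) -> int:
        # E) Validity factor = C - D
        return self.reinforced - self.unreinforced

horses = KnowledgeRecord("Horses are fun.", "General", 10, 1)
print(horses.validity)  # 9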
Use Validity to modify Murph's response:
Base knowledge: "Horses are fun."
User question: "Are horses fun?"
Response with Validity score 10: "I'M CERTAIN horses are fun."
Response with Validity score 8: "I BELIEVE horses are fun."
Response with Validity score 5: "MAYBE horses are fun."
Response with Validity score 1: "I DON'T THINK horses are fun."
The Validity score can be used to add a prefix to the response sentence, which adds a new dimension to Murph's humanity. Currently Hal can't "know" anything about knowledge validity; Validity is just one example.
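In code the prefix lookup could be as simple as the sketch below. The thresholds are my own guesses and would need tuning:

def validity_prefix(score: int) -> str:
    if score >= 10:
        return "I'M CERTAIN"
    if score >= 8:
        return "I BELIEVE"
    if score >= 5:
        return "MAYBE"
    return "I DON'T THINK"

def answer(text: str, score: int) -> str:
    # "Horses are fun." + score 9 -> "I BELIEVE horses are fun."
    return validity_prefix(score) + " " + text[0].lower() + text[1:]

print(answer("Horses are fun.", 9))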
Murph could also be programmed to tell the difference between Permanent, General and Ephemeral knowledge. Hal's XTF Brain already has a fundamental capability to recognize ephemeral knowledge: any sentence containing "rain", "cold", "Monday", "today", "weather", etc. that is input into the XTF Brain is flagged and not saved to Hal's memory. Sentences containing those words can reasonably be assumed to be ephemeral; weather changes frequently, so most knowledge about weather shouldn't be stored as General or Permanent knowledge, since it probably won't be valid tomorrow. Hal's XTF will respond to the user, but will not memorize the knowledge. How can we use the Ephemeral classification to our advantage?
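A crude sketch of that kind of flagging. The keyword list is just a sample, and I'm assuming a simple substring scan, which is not necessarily how the XTF Brain actually does it:

EPHEMERAL_WORDS = ("rain", "cold", "monday", "today", "weather")

def is_ephemeral(sentence: str) -> bool:
    # Crude substring test; good enough to illustrate the idea
    s = sentence.lower()
    return any(word in s for word in EPHEMERAL_WORDS)

print(is_ephemeral("It's raining outside."))  # True -> respond, but don't memorize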
Knowledge Record:
A) Knowledge: "It's raining outside."
B) Knowledge type: Ephemeral
C) Knowledge received time: 9am, 08/06/04
D) Current time: 11am, 08/06/04
E) Knowledge "Age" score: D - C = 11 - 9 = 2 hours
Use Knowledge "Age" to modify Murph's response:
Base knowledge: "It's raining outside."
User question: "Is it raining outside?"
Response with Age score 0: "YES, it's raining outside."
Response with Age score 3: "IT PROBABLY IS STILL raining outside."
Response with Age score 7: "IT MAY BE raining outside."
Response with Age score 12: "I DON'T KNOW IF it's raining outside."
Responses are more on point and more human with an "Age" score.
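And a matching sketch for the Age prefixes; again, the cutoffs are only my guesses:

def age_prefix(hours: float) -> str:
    # Thresholds are illustrative; tune to taste
    if hours < 3:
        return "YES,"
    if hours < 7:
        return "IT PROBABLY IS STILL"
    if hours < 12:
        return "IT MAY BE"
    return "I DON'T KNOW IF"

print(age_prefix(3), "raining outside.")  # IT PROBABLY IS STILL raining outside.

(The base sentence would also need a trivial rewording pass, e.g. "it's raining" versus "raining", but the prefix carries the hedging.)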
These are just two examples to illustrate how knowledge consists of more than just a sentence. Time, context, repetition and many other factors affect the completeness of knowledge. To make a quantum improvement in Hal and make him really "think", we first need to find a way to store knowledge in a more complete fashion. An updateable database that uses records or structures might be a start. Maybe we could use an ALICE AIML front-end on Hal to access Permanent knowledge and a new Hal back-end that processes the General and Ephemeral knowledge.
===================end
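To make the database idea at the end of the repost a little more concrete, here's a sketch of what a knowledge table might look like in SQLite. The schema is purely my own guess, not an existing Hal or ALICE format:

import sqlite3

con = sqlite3.connect("murph.db")
con.execute("""
    CREATE TABLE IF NOT EXISTS knowledge (
        id           INTEGER PRIMARY KEY,
        text         TEXT NOT NULL,        -- the sentence itself
        ktype        TEXT NOT NULL,        -- Permanent / General / Ephemeral
        reinforced   INTEGER DEFAULT 0,    -- times heard as stated
        unreinforced INTEGER DEFAULT 0,    -- times heard contradicted
        received_at  TEXT                  -- timestamp, for the Age score
    )
""")
con.execute(
    "INSERT INTO knowledge (text, ktype, reinforced, unreinforced, received_at) "
    "VALUES (?, ?, ?, ?, datetime('now'))",
    ("Horses are fun.", "General", 10, 1),
)
con.commit()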
Some of these discussions seem a little long-winded, but that is where new ideas come from. For those of you who have read this far... congratulations. Please share your views and I'll give them equal consideration.
=vonsmith=