Bill819,
Interesting thoughts Bill. In my quest for elucidation on the nature of thought I have experimented with similar musings. One of the problems with natural human languages is the non-specificity of some phrases and sentences. Many words have multiple meanings. Orange is a color and a fruit. The sentence "You are blue." can mean you feel sad, or maybe that you painted yourself blue. Many words can also operate in a variety of sentence functions. The word "fast" can be used as a noun, adverb, adjective or intransitive verb. The human brain can usually sort out which sense applies, but the rules for doing so are inherently complicated. Current AI faces a big challenge here.
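To make the problem concrete, here is a tiny Python sketch of the word-sense issue; the lexicon entries are invented for illustration, and a bare lookup only gets you a list of candidate senses, not the right one.

LEXICON = {
    "orange": ["a color", "a fruit"],
    "blue": ["a color", "sad (figurative)"],
    "fast": ["noun: a period of not eating",
             "adverb: quickly",
             "adjective: quick",
             "verb: to abstain from food"],
}

def candidate_senses(word):
    # Picking one sense out of this list requires the surrounding sentence,
    # and that is where the rules get complicated.
    return LEXICON.get(word.lower(), ["unknown"])

print(candidate_senses("fast"))  # all four senses come back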
There are two related invented languages that can be used to construct sentences with logically specific meanings: Lojban and Loglan. The creator(s) of these languages wanted to create a logical, unambiguous language for many purposes, including talking to computers. You can Google them to find more info.
I thought it might be possible to pre-process user input sentences by converting them into Lojban, do all the AI "thinking" in Lojban, and then translate back into the user's language for the output. This is a tremendous amount of work. The key benefit is that the AI brain would contain knowledge that is not limited by any given language and would be logically specific. The input and output translation processes would be designed for the user's language. The AI brain itself would "think" the same in French, Zulu, English or whatever, because the brain itself "thinks" in logical Lojban.
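In rough Python pseudocode the pipeline I had in mind looks like this; every function here is a hypothetical placeholder, standing in for the tremendous amount of work mentioned above.

def to_lojban(sentence, source_language):
    # Real version: parse, disambiguate and emit Lojban. Placeholder only.
    return f"<lojban form of {sentence!r}>"

def think(lojban_statement, knowledge_base):
    # All the "thinking" happens here, on Lojban only, so it works the same
    # no matter what language the user speaks.
    return f"<lojban reply to {lojban_statement}>"

def from_lojban(lojban_statement, target_language):
    # Real version: generate a natural sentence in the user's language.
    return f"<{target_language} rendering of {lojban_statement}>"

def respond(sentence, user_language, knowledge_base=None):
    thought = think(to_lojban(sentence, user_language), knowledge_base)
    return from_lojban(thought, user_language)

print(respond("You are blue.", "English"))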
Eliminating ambiguity from the written sentence is only one small improvement. Sentences with the same meaning can be constructed in many different ways. Once you add in slang and colloquial forms, the number of possible sentence configurations is frightening.
Assuming one can conquer the language issue, there is still the problem of the AI understanding the physics of the real world. For an AI to understand something as simple as water requires a lot of knowledge. Are the following knowledge statements true?
1) Water is wet. (Not if it is frozen; then it is ice.)
2) Water is a liquid. (Not if it is ice or snow.)
3) Water is clear. (Not if it is snow.)
To really "think" an AI would have to understand the nature of water and understand the affect the environment has on water. The AI would have to know things humans take for granted.
1) You can't drink ice.
2) You can't swim in snow.
3) Melting snow or ice results in water.
4) You can skate on ice and ski on water.
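Here is a toy Python sketch of what that state-dependent knowledge might look like in the crudest possible form; the temperature thresholds and the property table are my own simplification, and snow is ignored entirely to keep it small.

def state_of_water(temp_f):
    # Map temperature in degrees Fahrenheit to a physical state.
    if temp_f < 32:
        return "ice"
    if temp_f > 212:
        return "steam"
    return "liquid"

PROPERTIES = {
    "ice":    {"wet": False, "drinkable": False, "can_skate_on": True},
    "liquid": {"wet": True,  "drinkable": True,  "can_skate_on": False},
    "steam":  {"wet": False, "drinkable": False, "can_skate_on": False},
}

def water_has_property(temp_f, prop):
    # "Water is wet" is only true in some states.
    return PROPERTIES[state_of_water(temp_f)][prop]

print(water_has_property(20, "wet"))        # False -- it is ice
print(water_has_property(70, "drinkable"))  # True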
I tried theorizing that knowledge of the physical world could be modeled in a huge database that the AI could call upon. The database would have to contain dozens, perhaps hundreds, of relationships for each object in the database. Relationships could be mathematical, for example:
(water) + (temperature < 32°F) = ice
(water) + (temperature > 32°F) + (dirt) = mud
etc...
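As a sketch, assuming each relationship is stored as a rule of the form (objects, condition) -> result, the lookup might look like this in Python; the two rules are just the examples above, and a real database would need an astronomical number of them.

RULES = [
    # (required objects, condition on temperature in deg F, result)
    ({"water"},         lambda t: t < 32, "ice"),
    ({"water", "dirt"}, lambda t: t > 32, "mud"),
]

def combine(objects, temp_f):
    # Return the result of the first rule whose objects and condition match.
    for required, condition, result in RULES:
        if required <= set(objects) and condition(temp_f):
            return result
    return None  # unknown combination -- the common case in practice

print(combine(["water"], 20))          # ice
print(combine(["water", "dirt"], 75))  # mud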
It quickly becomes clear that the number of relationships would be astronomical. We humans take this for granted. Humans can also predict the outcome of one object acting on another. If someone hits you in the head with a rock, you might die. If someone hits you in the head with an orange, you will get messy. If someone hits you in the head with a spitwad, you might get angry. Predicting the outcome of these simple statements requires a tremendous amount of knowledge and understanding of the physical world. Sadly, I can't foresee AI becoming truly aware at that level in my lifetime. However, we can help program AI that can think a little and maybe fake it the rest of the time.
Well enough of that abstract stuff. I guess I need to get back to theorizing something more practical. Thanks for your thoughts.
=vonsmith=