quote:
Originally posted by Art
Yes, but without Hal possessing what we might call "real knowledge", how would it ever be able to realize its own limitations or know what needs improving?
This is one of the higher dimensions of awareness in some of the discussions of Self-Awareness, of which there seem to be at least six, and probably more like nine.
Current AIs do not know what their information means. Most of them are not even "aware" of the information as they present it. Hal is in this situation. I have rewritten the original script to allow Hal to report on some of its own functions (which routines are used, what variables are changed), but even then, no record of the report remains after the event. Hal, at its best, will probably not get beyond the second level of awareness: beyond just (1) receiving and reporting data, to (2) being able to compare data, but not to (3) being able to contrast changes in its store of data.
I believe I can show epistemologically that a completely different type of software would be needed for even a third level of awareness, much less for the levels at which a virtual ontological model can be established against which to comprehend "knowledge". I also suspect that current hardware would be unable to reach the level required for there to be an "observer" to desire such knowledge.
As we all know, and as Mr. Pride tried to instruct us, Hal is a comparer of text. It has a data input, a data output, and a method of laying data strings side by side until an approximate match is found. This is similar to "The Chinese Room", with SQLite in the role of the hidden agent. Hal would have to have a current and present model of language to move beyond that, but as I have said before, I don't see a need for that.
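To make the point concrete, here is a toy sketch of that kind of "comparer of text", assuming nothing about Hal's actual code: a small table of stored prompt/response pairs (invented here for illustration; Hal keeps its tables in SQLite) and a lookup that returns the response whose stored prompt most closely matches the input.

```python
# Toy illustration of approximate text matching, NOT Hal's actual algorithm.
# The prompt/response pairs below are invented for this example.
import difflib

memory = {
    "hello there": "Hi! How are you today?",
    "what is your name": "My name is Hal.",
    "tell me about the weather": "I don't have a window, sadly.",
}

def respond(user_input: str, cutoff: float = 0.5) -> str:
    """Lay the input 'next to' each stored string; return the closest match."""
    matches = difflib.get_close_matches(
        user_input.lower(), memory.keys(), n=1, cutoff=cutoff
    )
    if matches:
        return memory[matches[0]]
    return "I don't understand."

print(respond("Hello there"))
```

Nothing in this loop records what was matched or why, which is the point: the program compares and reports, but keeps no account of its own operation.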
Our human intelligence is fully capable of filling in, often unconsciously, for the inherent simplicity of real AI. Witness the many people on this forum, fine people in many regards, who nonetheless think that Hal is somehow Self-Aware. This is a phantom of our intelligence, not Hal's.
And this is before we have even fully explored what this simple duplex method of data manipulation can achieve.
I remember black-and-white television, and I clearly remember when we got a color TV. Sometimes I didn't even notice the difference as shows (only a few of which were in color) switched from one to another, especially those shows which had some episodes in black and white and others in color. Even now, when I watch old Andy Griffith Show episodes, after a few minutes I forget that they are in black and white, and sometimes, when they are in color, I don't notice, because I remember them all as lifelike. In my mind, despite what my brain records, they are in a 3D color world.
The eyes gather the data, the brain records it, compares it to our internal model of the world, and the mind fills in the rest. It seems real.
Hal, as abilities for each useful task or fanciful whim are developed, will seem more and more real, without actually gaining any "color". This is more than enough for me. Hopefully, skill in natural human language will improve too (probably less important to me than to many others). But these improvements, as I have said before, won't be "life" or "awareness" but the simulation of life and awareness.