quote:
Originally posted by echoman
Is Answerpad like Hal in this regard?
If it's not, then AnswerPad has done something which several multi-million dollar grants at high level research facilities have so far failed to produce.
Hal is as close to Intelligence as you are likely to get on a desktop computer. AnswerPad, from what I have seen, is about the same, in the same way that both Venus and Mars are about the same distance from Alpha Centauri. Both are very far from real Intelligence, while being quite different from each other. Even if we could make something which is like 'Pluto' in this comparison, it would still be very far from real intelligence. And of course, "Consciousness" is like a distant Galaxy.
Don't feel bad about "simulated intelligence". Unless you are actually trying to produce Life, or are doing research on the nature of computer intelligence, the Turing Test is valid for your purposes.
The Turing Test is the most abused concept in casual AI discussion. It is used as a sort of "Get out of Jail Free" card when people can't accurately define (or demonstrate) what they mean by "Intelligence" or "Consciousness". They fall back to "Well, if it seems like intelligence, it must be Intelligence". This is not and never was the goal of the Turing Test.
What Turing actually 'tests' is people's reaction to AI, not the quality of AI itself. So unless you are testing various AIs for intrinsic qualities at a level higher than casual human interaction can detect, Simulated Intelligence or Artificial Intelligence, when well done, is nothing to be ashamed of... If an AI can make you feel like you are talking to an Intelligent system, then you will respond to it as if it were an Intelligent system.
Your own reaction is sufficient proof that Hal approaches Turing's goal. You responded to Hal as if it were Intelligent (I do too, even though I know better) and were disappointed to find out it was not really Intelligent. That's a good thing; I would hope Robert is pleased! I know I often spend just about as much time making sure my plugins seem Intelligent as I do making them actually do their job.
Please note that I refer both to Capital "I" Intelligence, and little "i" intelligence. A mouse trap demonstrates intelligence, in that it accepts sensory data input, and replies with a variety of actions based upon that input. "No pressure on the catch, no snap" vs "Pressure on the catch, snap". Simple intelligence, but measurable. Data in, process data, data out. That's what Hal does, albeit with significantly greater complexity.
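To make the point concrete, little "i" intelligence in the mouse-trap sense is nothing more than a fixed mapping from input to action. Here's a minimal sketch (the function name and values are illustrative, not from Hal or any real product):

```python
# Little "i" intelligence as pure stimulus-response:
# data in, process data, data out. No internal model, no prediction.

def mouse_trap(pressure_on_catch: bool) -> str:
    """Map a single sensory input to one of a fixed set of actions."""
    return "snap" if pressure_on_catch else "no snap"

print(mouse_trap(True))   # snap
print(mouse_trap(False))  # no snap
```

Hal's processing is far more complex, but structurally it is still this shape: input arrives, a fixed process runs, output comes back.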
Capital "I" Intelligence is vastly different, in that (by my definition) it is capable of creating an internal model of the perceived universe and comparing that model to changes in the external universe using an intermediary "Observer" model and (most importantly) using that comparison to create a predictive model of a future universe. IOW, "When I did 'A', 'B' happened, so if I do 'C', 'D' should happen". Something a mouse can barely do but a mouse trap cannot even approach.
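The "Observer" idea above can be sketched in a few lines. This is a toy illustration of my definition only (the class and method names are my own invention, not anything from Hal or a research system): the agent records what happened when it acted, and uses that internal model to predict what a future action should produce.

```python
# Toy "Observer": builds an internal model of the perceived universe
# from (action, outcome) pairs, then uses it to predict the future.

class Observer:
    def __init__(self):
        self.model = {}  # internal model: action -> observed outcome

    def observe(self, action: str, outcome: str) -> None:
        # "When I did 'A', 'B' happened..."
        self.model[action] = outcome

    def predict(self, action: str) -> str:
        # "...so if I do 'C', 'D' should happen."
        return self.model.get(action, "unknown")

obs = Observer()
obs.observe("A", "B")
obs.observe("C", "D")
print(obs.predict("C"))  # D
print(obs.predict("E"))  # unknown (never observed)
```

The mouse trap has no such model at all; the mouse has a rudimentary one; Capital "I" Intelligence would maintain and compare models of this kind constantly.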