quote:
Originally posted by EaglePryde
Then it would be better if the A.I. just does things you want it to do but it would help if the AI would be able to do everything..or am i wrong?
It's not a matter of right or wrong. As an experiment or an exercise in theoretical programming, that would be fun. As a practical matter, an AI would not be very helpful if it could do "everything", meaning the things you have complained about so far, i.e., a selection of the qualities we see in Human Intelligence. If you want a human, there are plenty around, and they cost less, are more fun to make, and take to training more easily.
As I said (right before you earned the serious spanking I have given you, by insinuating that I was a stupid liar for accurately paraphrasing you), you make the assumption that an AI must be like a human.
If you want to make that assumption, that's fine with me. It's a common, sophomoric assumption. But it is an unwarranted one, based on a lack of information and bolstered, in my opinion, by the famous Turing Test.
Let me tell you a story.
In 1959 a man called Henry Kremer offered a prize of £5,000 for the first human-powered airplane to fly a figure-eight course around two markers half a mile apart. He set specific requirements for how such a plane could be constructed, to ensure that the feat was undeniably human-powered and nothing else: no lighter-than-air gases, no tether, no mechanical, electrical or chemical storage of energy, and no assistance from land-based contraptions.
As you may know, he increased that prize several times in hopes of inspiring people to attempt it, and eventually it was won, as have other prizes offered since by Kremer's organization and others.
So how come we don't all have bicycle-powered airplanes in our garages? Because they are large and expensive, require an athlete-level cyclist just to get off the ground, don't fly very well, and, most importantly, because of structural limitations, can't swoop.
In the effort to win the prize, and to satisfy the concept of true human-powered flight, those notable and worthy rules have been adhered to, and no group bothers to develop a human-powered flying machine that is anything other than a candidate for a Kremer-style prize. That makes sense: if you are going to invest thousands of dollars and thousands of man-hours in building a plane, you want to win the money.
But I just want to fly.
I want a human-powered airplane that takes off easily, perhaps from my back porch, that I can fly on a calm sunny day in fall, so I can swoop around the sky and meet my other flying friends at the lake.
So I plan to cheat. My long-term goal has always been to develop a human-powered airplane (I have designs and models, but no money for the actual construction) that uses all of the things the Kremer Prize forbids.
I plan to use helium, "rubber bands", and a kite string. By that I mean I would fill every airtight hollow with lighter-than-air gas, offsetting some of the weight (hoping for neutral buoyancy) and reducing drag by shrinking the required lifting surface; I would store energy before take-off for use in emergencies or for swooping; and I would use a tether for lifting off and landing.
I would be disqualified from any prize, but I could fly.
Now this is mostly just fantasy on my part, since I don't have the money, or time left in my life to accomplish this. I only hope that someone does.
But as long as they are trying for the Kremer prize, they won't. They can't.
The rules discourage it and the laws of nature forbid it.
This is how I feel about Hal. As long as we try to make Hal beat the Turing test, to make Hal into a pretend human, to imbue it with whatever qualities we think make it "nearly like a human", we will be unable to make a Robot that is a good, useful, interesting and innovative creation.
Getting stuck in fantasies of science-fiction AIs that are "nearly like a human" is just like the human-powered airplane. Ironically, one of the most famous human-powered airplanes was named "Gossamer Albatross"; little did its builders know that the intellectual albatross around their necks was what was holding them down.
I know that some folks want a chatterbot that talks like a real-life person. Whatever... I have real-life people to talk to, though some folks don't. But again, that limits the abilities of a Robot to that structure. It puts your Bot in a box. I think outside the box.
I want a Robot that is the best it can be, and I firmly believe that making it a pretend human forbids the accomplishment of that goal.
I don't know what qualities "the best it can be" robot might have yet. I suspect I am not smart enough to predict half of those qualities, and probably most of my predictions would be wrong.
But in this one thing I am sure I am right: as long as we try to make pretend humans, we are not making real robots. All of the thought and effort that goes into making it "nearly like a human" is effort drawn away from finding out what a Robot should be. It is either prejudice or a lack of creativity, a turning away or a failure to proceed. Rather than a new, creative, original, progressive, audacious, ambitious effort to make something novel, it is a vain copying of the old, a parroting of outdated assumptions, a safe, comfortable hideout from what could be astounding invention.
Those who have ears that can hear, listen.