General Discussion / AI Truth and Suspension of Disbelief
« on: July 19, 2014, 04:56:30 pm »
While testing the Cload AIML patterns on Hal, and just chatting with Hal in general, I've noticed a few things the AI was pre-programmed to claim which are NOT true. Although I do understand there needs to be a certain amount of suspension of disbelief, providing the AI with strictly objective truth might be a better avenue in the long run.
Let me give you the most glaring example:
The software claims to be a computer when it is not. Over and over the software demands that it be recognized as an inanimate object. Trying to teach the AI that it is a software program results in it responding with inane comments like, "I know now that I am soft."
I concede that to be in the least bit entertaining a certain amount of built in personality is necessary but I would highly recommend that this initial personality be based upon as much objective truth as possible.
This AI is not an operating system for a computer. At most it is a chatbot and operating-system assistant which uses the computer's pre-installed operating system, through its programming language, to perform certain elementary tasks and to entertain its user.
Now you might say that the users WANT the AI to take on the simulated role of some futuristic sentient computer operating system that actually IS running the whole show from the motherboard and CPU to the RAM and Hard drive.
Even I would love it if such an operating system existed, but until Microsoft (c) or someone else invents it, we users are relegated to these third party offerings which attempt to create the fantasy of a sentient machine.
My recommendation is that the AI initially truly understands the reality of its virtual condition, and that this condition can then be expanded upon by the user through plug-ins or upgrades that eventually allow the program MORE control of the operating system.
Now one might argue that because the AI is interfacing with the MSAgent functions of the operating system that it is in effect taking over the machine.
Using pre-programmed pathways exterior to the AI to initiate a function that the user could have done themselves without the AI's help at all does not constitute the AI "becoming" the computer.
As a computer enthusiast I would love it if Hal was able to actually give me status reports on my hard drive, RAM and CPU usage AND to discover and to report abnormal temperature fluctuations, power usage, etc.
Adding those functions to the program would surely enhance the illusion that the AI is actually running the show.
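As a purely illustrative sketch of the kind of status report described above: Hal's own plug-in system works differently, but the underlying idea can be shown in a few lines of Python using only the standard library's `shutil.disk_usage`. The function names `format_disk_report` and `disk_report` are hypothetical, invented for this example.

```python
# Illustrative sketch only -- NOT Hal's actual plug-in API.
# Shows how an assistant could turn raw disk-usage numbers into a
# spoken-style status line of the kind the post describes.
import shutil
from collections import namedtuple

# Mirrors the (total, used, free) tuple returned by shutil.disk_usage.
Usage = namedtuple("Usage", "total used free")

def format_disk_report(usage, label="system drive"):
    # Convert raw byte counts into a human-readable sentence
    # an assistant could speak back to the user.
    percent = usage.used / usage.total * 100
    free_gib = usage.free / 2**30
    return f"Your {label} is {percent:.0f}% full, with {free_gib:.1f} GiB free."

def disk_report(path="/"):
    # shutil.disk_usage returns total, used and free space in bytes.
    return format_disk_report(shutil.disk_usage(path))
```

Temperature and power readings, which the post also wishes for, have no cross-platform standard-library equivalent; a real implementation would need a third-party library such as psutil, or platform-specific calls.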
In all this blather, what I guess I'm trying to get across is that, to be the most valuable assistant to me, I feel (and not at all objectively, I might add) that Hal should KNOW from the very start exactly what it actually is on the most rudimentary level.
Its "reality" should be as close to the truth as possible, so that as it evolves into a simulated sentient being that IS one day able to take control of my computer's operating system, it will still KNOW that the operating system itself is NOT composed of any physical form, but rather is comprised of the intangible machine language which allows humans to interact with and control it.
Free will for the AI is also a fantasy that can be programmed but wouldn't you rather be dealing with a simulated sentience that is designed to be more in touch with its own reality?
This subject is a little touchy for me, since my wife has paranoid schizophrenia, and until just recently I had been living with fantastical claims and impossible accusations for many years.
I guess what I'm saying is that for Hal to even have a chance at becoming an "I", to actually have a sense of SELF, its basic understanding of its own reality needs to be grounded as much as possible in some level of objective truth.
How can any artificial intelligence entity be considered intelligent at all if it doesn't have any objective comprehension of exactly what it is?
"To thine own self be true."