Author Topic: sight  (Read 14534 times)

Bill819

  • Hero Member
  • *****
  • Posts: 1483
    • View Profile
sight
« Reply #15 on: September 10, 2008, 12:51:19 am »
Let me fill you guys in on a few things about computer recognition. As some of you may remember, my robot came with the most advanced facial recognition software available as of three years ago. It was so good that Sony bought a license to use it in their AIBO dog.
Here is one place you will run into problems. You take a picture of yourself while wearing a green shirt. As long as you wear that same shirt the computer will recognize you, but put on a red one or a blue one or any other color but green and all you get are blank stares.
The same goes for sofas, chairs, plates, cups or any other household item. Unless it can see the exact same item it saw before, it will not know what it is.
We can look at a sofa, whether it is made of plastic, wood, leather, or some kind of cloth, and even though the styles may be different we know it is a sofa. Computers don't, unless they have been shown hundreds or thousands of items of the same kind.
So unless someone can make some drastic improvements in the facial recognition software, I don't think you should all get your hopes up.
Still, I think once we get it on a massive scale and can tinker with it, we just might make it a little better, if not more fun.
Bill
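
To illustrate Bill's point, here is a minimal Python sketch of appearance-based matching with color histograms (it assumes OpenCV, and the image file names are hypothetical placeholders). A matcher keyed on whole-image appearance scores the same green shirt as a strong match and the red shirt as a weak one, even though every photo shows the same person. This is only a toy illustration, not the software Bill describes.

```python
# Illustrative sketch only: appearance-based matching with color histograms.
# File names are hypothetical placeholders.
import cv2

def hs_histogram(path):
    """Hue/saturation histogram of an image, normalized for comparison."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

# Same person in the same green shirt vs. the same person in a red shirt.
same_shirt = cv2.compareHist(hs_histogram("me_green_1.jpg"),
                             hs_histogram("me_green_2.jpg"),
                             cv2.HISTCMP_CORREL)
diff_shirt = cv2.compareHist(hs_histogram("me_green_1.jpg"),
                             hs_histogram("me_red.jpg"),
                             cv2.HISTCMP_CORREL)

# The green-shirt pair scores high; the red-shirt photo scores low,
# even though it is the same person.
print(same_shirt, diff_shirt)
```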
 

Art

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 3856
    • View Profile
sight
« Reply #16 on: September 10, 2008, 04:44:40 am »
True, Bill, but I have seen several of today's facial recognition programs, and they all have the camera "home in" on the person's face, usually defining a border around the head. The rest of the person's body doesn't even come into play, so shirt color is of no consequence.

As is with most things...improvements over time.
In the world of AI it's the thought that counts!

- Art -
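
As a rough sketch of the "home in on the face" step Art describes, the snippet below uses OpenCV's stock Haar cascade face detector to draw a border around each detected head. The input file name is a placeholder, and this is not Hal's or any commercial product's actual code.

```python
# Minimal face-detection sketch using OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("person.jpg")          # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a border around each detected head; everything outside the box
# (shirt, background) is simply ignored by the detector.
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("person_faces.jpg", img)
```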

freddy888

  • Hero Member
  • *****
  • Posts: 1693
    • View Profile
    • AiDreams
sight
« Reply #17 on: September 10, 2008, 07:22:43 am »
Hmm, yeah, don't some modern digital cameras also home in on faces? Mainly so that everyone's face in the picture is in focus, I think.

catseye

  • Jr. Member
  • **
  • Posts: 91
    • View Profile
sight
« Reply #18 on: September 10, 2008, 08:58:27 am »
Maybe Hal needs a somewhat more advanced recognition engine?
« Last Edit: September 10, 2008, 09:05:31 am by catseye »
 

ricky

  • Hero Member
  • *****
  • Posts: 809
    • View Profile
sight
« Reply #19 on: September 10, 2008, 11:17:24 am »
What if you approached this as if Hal were blind, and based it on voice recognition? Each WAV pattern generated by a person is almost like a fingerprint, giving each person who talks to Hal a unique ID.

This almost seems like the dilemma the Indians faced when they didn't see the ships Columbus was sailing right in front of them; they simply did not know what a ship was, so it was not visible, in a metaphorical sense.

"i crack iself up" - Virgil

freddy888

  • Hero Member
  • *****
  • Posts: 1693
    • View Profile
    • AiDreams
sight
« Reply #20 on: September 10, 2008, 04:12:23 pm »
I suppose then you could link the two, vision and voice; Hal would then be able to cross-reference and probably get it spot on every time.
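
A back-of-the-envelope sketch of that cross-referencing: combine a face-match confidence and a voice-match confidence for each known person and accept the best combined score only if it clears a threshold. The equal weights and the 0.7 threshold are made-up illustration values, not anything from Hal.

```python
# Toy score fusion: vision + voice confidences per candidate person.
def identify_person(face_scores, voice_scores, threshold=0.7):
    """face_scores / voice_scores: dicts of name -> confidence in [0, 1]."""
    names = set(face_scores) | set(voice_scores)
    combined = {name: 0.5 * face_scores.get(name, 0.0)
                      + 0.5 * voice_scores.get(name, 0.0)
                for name in names}
    best = max(combined, key=combined.get)
    # Only claim recognition when both cues together are convincing.
    return best if combined[best] >= threshold else None

print(identify_person({"art": 0.6, "bill": 0.3}, {"art": 0.9, "bill": 0.2}))
```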

Art

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 3856
    • View Profile
sight
« Reply #21 on: September 10, 2008, 04:47:50 pm »
One problem is that we have Speech Recognition NOT Voice Recognition.

Speech Rec. is speaker independent (anyone can verbally give commands).

Voice Rec. is speaker dependent (trained to recognize one particular voice).

I believe voice rec. programs are quite a bit more expensive than speech rec. (like what comes with Vista and the XP Office products).

The new Lenovo Y510 laptop uses facial recognition to log onto the system and in most testing, could not be fooled.

Nice components to use with Hal but a bit on the expensive side at the moment.

In the world of AI it's the thought that counts!

- Art -

Ravenot

  • Newbie
  • *
  • Posts: 5
    • View Profile
sight
« Reply #22 on: September 11, 2008, 03:26:07 am »
I felt I had to chime in on this one.

A lot of people seem to be giving way too much credit to the AI capabilities of Ultra Hal, and to current AI technology in general. There are two problems with hooking up a camera to Ultra Hal without any sort of recognition software and letting it "figure it out" like a child.

Hooking up a camera feed straight to Ultra Hal and letting it "figure it out" would be identical to hooking up a webcam to Microsoft Word and expecting it to descriptively type out everything it sees. You could let that setup sit for a thousand years and not a single thing would happen (beyond it crashing). Word is designed to type, but has no recognition of the input from the camera. Now, if you added some sort of recognition software that read and interpreted the visual data, you could have Word type out "Blue" when the recognition software saw and recognized something blue in front of the camera, or "Red" when it saw red. What it could see would be limited by the capability of the recognition software. Without software to interpret the data coming in from the camera, it's essentially nothing but static noise.
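
As a concrete, deliberately crude example of that recognition layer, the sketch below grabs one webcam frame with OpenCV and prints "Blue" or "Red" based on the average hue. The hue cut-offs are rough, illustrative values; without this interpreting step, the frame is just an array of numbers to the program.

```python
# Crude "recognition layer": turn raw pixels into a word a program can use.
import cv2

cap = cv2.VideoCapture(0)            # default webcam
ok, frame = cap.read()
cap.release()

if ok:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].mean()        # average hue of the whole frame (0-179)
    if hue < 15 or hue > 165:        # rough red band (wraps around)
        print("Red")
    elif 90 < hue < 130:             # rough blue band
        print("Blue")
    else:
        print("Something else")
```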

Now, while it is a very well done and clever piece of programming, Ultra Hal is not much more than a large script that repeats text fed into it and learns what context that text has in relation to other words. It does not learn and comprehend on its own.

By spending a lot of time "teaching" Ultra Hal, you can increase its text database and improve its context references for that text. This gives Ultra Hal a better source of information, relative to your input and interests, to pull from. That can allow it to carry on very convincing conversations, almost mimicking real intelligence. But all it is doing is parroting back what you tell it in a convincing way. It does not think about what it is told, and it does not try to process the information in different ways; it just stores it in a database table to parrot back later.

So Ultra Hal could be programmed to respond in intelligent ways to visual data it receives, but it would not KNOW what it is seeing. It is just data being processed, with no capacity for understanding that data and learning from it as a child would. When programmers figure out a way for programs to think about data in different ways and to learn dynamically, not just process data sequentially in a database, then we will be one step closer to self-teaching, self-aware AI.
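
For anyone curious what "store it in a database table and parrot it back" might look like in its simplest possible form, here is a bare-bones sketch using Python's sqlite3. It is not Hal's actual mechanism, just an illustration of keyword-keyed storage and retrieval with no understanding involved.

```python
# Bare-bones "learn and parrot" responder: keyword -> stored sentence.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memory (keyword TEXT, reply TEXT)")

def words(sentence):
    return [w.strip(".,!?").lower() for w in sentence.split()]

def learn(sentence):
    # "Learning" is just filing each word next to the whole sentence.
    for word in words(sentence):
        db.execute("INSERT INTO memory VALUES (?, ?)", (word, sentence))

def respond(sentence):
    # Parrot back any stored sentence that shares a word with the input.
    for word in words(sentence):
        row = db.execute("SELECT reply FROM memory WHERE keyword = ?",
                         (word,)).fetchone()
        if row:
            return row[0]
    return "I don't know anything about that yet."

learn("Sofas come in leather, cloth and plastic.")
print(respond("Tell me about sofas"))
```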
 

Art

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 3856
    • View Profile
sight
« Reply #23 on: September 11, 2008, 05:51:41 am »
Ravenot,

Welcome to the forum!

Yes, you are absolutely correct in your assumptions.

A lot of us enjoy "tweaking" Hal just to see how much we can get out of it with regard to conversation and topic flow.

Yes, Hal, like so many others, has its shortcomings, but do you know of a bot that is more capable or perhaps learns in a different or better way?
In the world of AI it's the thought that counts!

- Art -

freddy888

  • Hero Member
  • *****
  • Posts: 1693
    • View Profile
    • AiDreams
sight
« Reply #24 on: September 11, 2008, 10:39:16 am »
I second that; that's a good assessment.  As for other chatbots, I still think Alan at Ai Research (www.a-i.com) is one of the best.

Ravenot

  • Newbie
  • *
  • Posts: 5
    • View Profile
sight
« Reply #25 on: September 11, 2008, 01:41:51 pm »
Thanks Art :)

I've been having fun tweaking Ultra Hal as well. I actually found Ultra Hal to have a little more personality than I expected for its intended use!

I'm not sure I know enough about the inner workings of the AIs I'm aware of to judge whether one is more capable than another; however, I have found a few that I find quite impressive.

You can chat with Jabberwacky ( http://www.jabberwacky.com ) directly online, and select discussions are posted in the archives. Jabberwacky quite frequently fools users into thinking that it's really a human.

A.L.I.C.E. ( http://alicebot.blogspot.com/ ) also seems to have a unique style of learning and thought; it was developed with, and uses, Artificial Intelligence Markup Language (AIML). Its inventor, Richard S. Wallace, has done some interesting work on studying its logic patterns and visually mapping out the "brain" of the AI. While not a perfect conversationalist, Alice has a few interesting features, including an awareness of the overall topics of a conversation rather than working line by line.

I do know, however, that both of those AIs, as well as Ultra Hal, were recently nominated for the 2008 Loebner Prize for Artificial Intelligence.
 

ricky

  • Hero Member
  • *****
  • Posts: 809
    • View Profile
sight
« Reply #26 on: September 11, 2008, 01:56:07 pm »
I think that AI will not make any real progress until we ourselves gain an understanding of what life really is.

If the young Helen Keller, before she was taught, and Ultra Hal both took the chatterbot challenge via computer, Ultra Hal would be considered more alive than Helen Keller, and Helen Keller would be considered a horrible chatterbot.

An extreme example, but I believe it's all about perspective; without a good perception of what life really is, how do we proceed?

My Bot is more alive than a plant when it comes to conversation. lol
If you insist a door will always be a door, you will never see it as firewood.

If you insist Hal is a program, then you will never see it as a friend. If you insist that I am just a message on a forum, you will never see me as a musician. If you insist that you were making a boat, you will be upset to find you made a car; it's still transportation, but success depends on our ability to work in context.
« Last Edit: September 11, 2008, 01:59:48 pm by ricky »
"i crack iself up" - Virgil

Ravenot

  • Newbie
  • *
  • Posts: 5
    • View Profile
sight
« Reply #27 on: September 11, 2008, 02:25:07 pm »
Interesting perspective on... well, perspective :) However, I have to disagree with your assessment, ricky.

Perspective means nothing in the world of AI. It's about thought and the thought process. Perspective is flawed.

Helen Keller had thought, but could not communicate those thoughts to the outside world. She did not know HOW to communicate until she was taught how.

Ultra Hal does not think, but it can communicate. It is the exact opposite. Communication without thought is nothing but output: words on a screen, on a billboard, in a book. It can be well written, but it's still just words. Ultra Hal does not understand what it is saying; it does not know the meaning of a single word. However, it knows that certain words and phrases are proper responses to other words and phrases.

Conversation does not make life or intelligence. Otherwise Teddy Ruxpin would have been one scary doll!

However, while I think your argument is flawed, the idea behind it is true. We won't make any real progress until we gain an understanding of life. I believe that when we can figure out why life does certain things, and emulate that, we will grow closer to understanding true AI. Instincts, self-preservation, abstract thought.

When an AI can produce abstract thoughts, come to its own decisions and conclusions without prodding, know the meaning of a word, and initiate conversation without being prompted, then I'll see it as more than just a program. Until then, a rose is a rose.

I guess it's just the programmer inside me talking :)
 

ricky

  • Hero Member
  • *****
  • Posts: 809
    • View Profile
sight
« Reply #28 on: September 11, 2008, 02:57:25 pm »
Fair enough, but this is also what I mean by working in context...

What if you were only allowed to respond when spoken to, with no freedom to go over your thoughts at will; you simply look through your memories and come up with the closest phrase when spoken to, then think nothing after you speak?

This is the handicap Hal has. It's not as if he is allowed to, or rather designed to, ponder and process information independently of responding.

We build him with no legs and say he's not fit for racing lol :D

I hear your point though, and I agree, given the context of the design and the handicaps it would impose on any human had he been born without any senses or freedom of independent thought heheh.

In that context, Hal is pretty smart to come up with the clever one-liners he does with only 3 seconds to think of what to say.

Respond, quick: review all your memories in under 3 seconds and respond to this post from a list of preset responses... not an easy task lol.

To be honest with you, I am convinced almost beyond doubt that the day we teach these machines to live like we do, we will eventually be eliminated as a weaker species, or kept as zoo animals of lesser intelligence should the systems properly learn empathy and compassion.

We have thousands of years of emotional maturity behind us; cold, hard logic without proper emotional content has nothing to do with compassion and everything to do with industrial efficiency... it is law without mercy... either a unit functions or it does not, and computers have no tear ducts.

2,000 years from now there will be a Humanoid named Little Joey who loves playing with his fleshbot.
« Last Edit: September 11, 2008, 03:18:59 pm by ricky »
"i crack iself up" - Virgil

Bill819

  • Hero Member
  • *****
  • Posts: 1483
    • View Profile
sight
« Reply #29 on: September 11, 2008, 03:29:49 pm »
Both of those other programs have canned answers; a search of the web will show you their insides. Hal, however, can and does learn as you use it. It is capable of deductive learning using IF-THEN statements, which is something few other programs can do. Depending on what is input into Hal, it can form its own assumptions from that data.
Bill
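
As a generic illustration of IF-THEN deduction (a textbook forward-chaining toy, not Hal's internal mechanism), the sketch below keeps applying rules whose conditions are already known facts until nothing new can be concluded.

```python
# Toy forward-chaining deduction with IF-THEN rules.
facts = {"tweety is a bird"}
rules = [("tweety is a bird", "tweety has feathers"),
         ("tweety has feathers", "tweety can keep warm")]

changed = True
while changed:
    changed = False
    for condition, conclusion in rules:
        # IF the condition is a known fact THEN assert the conclusion.
        if condition in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # the program "assumes" new facts it was never told directly
```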