Don't they have that kind of technology available in airports now?
Wouldn't something like that be modular? All HAL needs to know is whether this is the person, and if they are happy, sad, angry, etc... and that would be a separate computer doing motion-analysis pattern matching, then sending the basic comparisons back to HAL?
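A minimal sketch of that modular split, assuming a made-up message format: the separate analysis computer does the heavy pattern matching and only hands HAL a tiny result. The names here (`AnalysisResult`, `package_for_hal`) are hypothetical, just for illustration.

```python
from dataclasses import dataclass

@dataclass
class AnalysisResult:
    person: str        # who the recognizer thinks this is
    emotion: str       # "happy", "sad", "angry", ...
    confidence: float  # how sure the pattern matcher is

def package_for_hal(person: str, emotion: str, confidence: float) -> AnalysisResult:
    # All HAL ever receives is this small comparison result,
    # never the raw video or the motion data.
    return AnalysisResult(person, emotion, confidence)
```

The point of the design is that HAL stays simple: it consumes small, structured messages while the expensive vision work lives on its own box.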
I would imagine it's doable, but more like the first computer that took up a whole building lol....
You would probably need three computers: one that does the facial recognition, one to take the motion analysis, convert it to a BVH file, and compare the BVH file to emotion profiles, then relay the info to HAL on a separate computer?
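The three-computer split above could be sketched like this, with each "computer" as a function. Everything here is an assumption for illustration: the emotion-profile table, the motion signature, and the function names. Real BVH parsing and profile matching would be far more involved.

```python
# Hypothetical mapping from a coarse motion signature to an emotion.
EMOTION_PROFILES = {
    "slumped_shoulders_slow": "sad",
    "upright_fast": "happy",
}

def computer_1_face_recognition(frame) -> str:
    # Stand-in for the facial-recognition box.
    return "Tom"

def computer_2_motion_to_emotion(frames) -> str:
    # Stand-in for the box that converts motion capture to a BVH-style
    # skeleton and compares it against stored emotion profiles.
    signature = "slumped_shoulders_slow"  # pretend this came from BVH analysis
    return EMOTION_PROFILES.get(signature, "neutral")

def computer_3_relay_to_hal(name: str, emotion: str) -> dict:
    # The only thing HAL's computer receives.
    return {"person": name, "emotion": emotion}
```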
I doubt there is any one system that does it all, but on the other hand, our brain is really a bunch of components lol.
Besides, the military bots I read about 3 years ago were supposed to be able to ID civilians... their bots have human controllers, I assume. All you need to do is relay the info to HAL in chewable bites.
I would think the biggest problem here is the sheer amount of detail that would need to be translated in a split second.
As for voice recognition, it wouldn't be necessary if you've already identified the face, unless it's a security measure.
Motion detector triggers a facial identification and motion analysis,
the motion gets compared against Tom's expression profiles and classified as sad, and the info gets sent to HAL...
Hello Tom, why are you so sad?
Then from there it's Dragon NaturallySpeaking with Tom's voice profile.
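The step-by-step flow above can be written out as a hypothetical event chain. Every function here is a stand-in; the point is the order of events, not the implementation.

```python
def identify_face(frame) -> str:
    # Pretend recognizer output (step 1: facial identification).
    return "Tom"

def classify_motion(frame) -> str:
    # Pretend profile match (step 2: motion vs. expression profiles).
    return "sad"

def send_to_hal(name: str, emotion: str) -> str:
    # Step 3: HAL greets, then would hand the conversation off to
    # speech recognition loaded with that person's voice profile.
    return f"Hello {name}, why are you so {emotion}?"

def on_motion_detected(frame) -> str:
    # The motion detector is the trigger for the whole chain.
    name = identify_face(frame)
    emotion = classify_motion(frame)
    return send_to_hal(name, emotion)
```

With the stand-in values above, `on_motion_detected(frame)` produces the greeting "Hello Tom, why are you so sad?".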