I remember Robert mentioning that he was planning an iPhone version (an app that would interface with a remote version of Hal over an Internet connection). I don't know what other devices he was planning.
Hal still feels cutting edge to me even though it's been a little while since a big update.
Yes, this is definitely in the works. Although it may seem that not much has changed, I have been doing a lot of work on the backend of Hal's brain on the Zabaware server these past 2 months. Rather than Hal's brain running on a single server, it is now a scalable "cloud" system. Hal's brain is spread across 5 servers, with the ability to scale to hundreds of servers if needed to serve traffic. When you chat with Hal at either
www.zabaware.com/webhal or apps.facebook.com/ultrahal, at least 3 computers work together to come up with a response. There is a master database server with a load balancer, 2 slave database servers with Sphinx indexing, and 2 Hal application servers that each connect to one of the slave servers depending on what the master tells them to do. Most of these servers are hosted at rackspacecloud.com, where I can add and remove servers instantly.
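To make the flow above concrete, here is a minimal sketch of the master-directed routing idea: the master tells each application server which slave replica to query. All names and logic here are my own illustration, not Zabaware's actual code.

```python
# Illustrative only: a master picks a slave replica for each request,
# and an application server builds a response from that slave's data.

SLAVES = ["slave-db-1", "slave-db-2"]  # hypothetical replicas with Sphinx full-text indexes

def master_pick_slave(request_id: int) -> str:
    """Master/load balancer assigns a slave, e.g. round-robin by request id."""
    return SLAVES[request_id % len(SLAVES)]

def app_server_handle(request_id: int, user_input: str) -> str:
    """An app server asks the master which slave to use, then queries it."""
    slave = master_pick_slave(request_id)
    # In the real system this would be a Sphinx full-text search against
    # the assigned slave to find candidate responses.
    return f"response to {user_input!r} built from {slave}"

print(app_server_handle(0, "Hello Hal"))
print(app_server_handle(1, "Hello Hal"))
```

The point of the round-robin assumption is just to show why adding more slave servers scales read traffic: each new replica joins the rotation without the application servers needing to change.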
This will form the foundation of getting Hal to work on mobile devices as well as alternate operating systems. There is no way I'm rebuilding Hal from the ground up for each separate system, and my plans are to grow Hal's database to gigabyte sizes rather than the megabytes Hal Assistant is measured in. It makes sense to centralize Hal's brain to make it easy to deploy to other systems and allow for limitless database/knowledge growth.
First up I will release a beta version of Hal Assistant for Windows that connects to this system, so that I can make sure the system is stable and robust. After the beta is over I will release a final Windows version and then move on to other devices. I will probably do Android first and iOS (iPhone/iPad) second. Character animation on these devices will initially come from new pre-rendered 3D characters using frame-based animation. Speech will come from the device's built-in speech synthesis if it supports it; if not, it will come from an Internet cloud-based speech synthesis service I'm building based on the Festival engine. Later on, if sales support it, the Ogre3D character engine can be made to work with iOS, since the Ogre3D team already has an iOS port.
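The speech fallback described above (prefer on-device synthesis, fall back to the cloud service) could be sketched like this. This is purely illustrative: the cloud endpoint and function names are my assumptions, since the Festival-based service is still being built.

```python
def synthesize_speech(text: str, device_tts_available: bool) -> str:
    """Hypothetical fallback: use on-device TTS when present,
    otherwise send the text to a cloud service backed by Festival."""
    if device_tts_available:
        # e.g. Android's TextToSpeech or iOS's speech APIs
        return f"[device TTS] {text}"
    # Fall back to a hypothetical Zabaware cloud endpoint running Festival.
    return f"[cloud Festival TTS] {text}"

print(synthesize_speech("Hello, I am Hal.", device_tts_available=True))
print(synthesize_speech("Hello, I am Hal.", device_tts_available=False))
```

One advantage of this design is that older or stripped-down devices still get speech, at the cost of a network round trip for each utterance.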
Time frame? I don't know, but I've been working on this full time since January, so as soon as humanly possible.