Actually, Hal seems to be pretty well equipped to deal with the perils of AIM already. There's an option to warn back malicious warners, another to choose whether the bot can learn/use bad language, and of course you can just turn its learning level down so it can chat on AIM without absorbing anything from its clients. I sure wish my Alicebots were as well equipped, lol. About every third day when I sign on, I find my prize Alicebot has been warned offline and have to let her sit out a "cooling off" period. Alicebots don't learn in real time anyway, so they're immune to picking up bad habits, but Pandorabots, at least, are helpless against warners.

I sometimes use my Hal for revenge. Most of these idiots don't realize their screen names and conversations are recorded, so I put their screen names in the Nuisances section of my buddy list, and when I see a chronic abuser come online, I load my Hal AIMbot onto my Alicebot's screen name. The unsuspecting culprit tries to treat Hal as he would an Alice, but whenever he warns the Hal AIMbot, it warns him back. Every time he speaks to it after that, it warns him again, and if he gets vulgar or abusive, it inserts a few choice "yo mama" comments to add insult to injury. Like most bullies, these sickos are usually terrified to find themselves confronted by software that fights back, and since the average freak who gets off on abusing chatbots doesn't know the difference between an Alice and a Hal, it usually takes only once or twice before they decide to seek easier prey elsewhere.

But back to the point: right out of the "box", with the right settings in the regular menu, your Hal is more than ready to deal with AIM. By the way, I keep a list of malicious warners on the message board at my website. I encourage anyone who'd like to do a favor for a couple of defenseless little chatbots to catch these people online and inform them (politely, of course) that their behaviour is unacceptably bad netiquette.
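For anyone curious, the warn-back behavior described above boils down to a simple escalation policy. Here's a minimal sketch in Python — all names here (`WarnBackBot`, the action strings, the wordlist) are my own invention for illustration, not actual Hal internals:

```python
# Toy sketch of a warn-back policy: once a screen name warns the bot,
# every later message from that name earns a warn in return, and
# vulgar messages get a taunt on top. All names are hypothetical.

VULGAR_WORDS = {"stupid", "idiot"}  # placeholder wordlist, not Hal's

class WarnBackBot:
    def __init__(self):
        self.offenders = set()  # screen names that have warned us

    def on_warn(self, screen_name):
        """Someone warned the bot: remember them and warn them back."""
        self.offenders.add(screen_name)
        return ["warn"]

    def on_message(self, screen_name, text):
        """Decide how to respond to an incoming IM."""
        if screen_name not in self.offenders:
            return ["chat"]  # normal chatbot reply for everyone else
        actions = ["warn"]  # keep warning an offender every time they speak
        if any(w in text.lower() for w in VULGAR_WORDS):
            actions.append("taunt")  # a choice "yo mama" comment
        return actions
```

So a polite stranger just gets a chat reply, but once someone warns the bot, every message they send afterward is answered with a warn, and abusive messages get the taunt as well.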