« on: February 01, 2004, 07:33:28 pm »
Found an area to work on.
Basically it's about people talking with chatterbots.
I've been messing with information texts and various bots, for both work and play. I'm not a programmer yet (eventually!), but I'm sure this comes up on the programming end of bots too. Anyway, I ran into a gray area to work on.
Basically, even though we've had consumer and home computers for years, in a way people are still relatively new to computers. Some users still expect their computers to keep churning out "I am a machine," while others focus on developing more realistic conversation. In some cases the machine-like style can be good, such as for business or research use, where one may need a more functional machine or program. I'm sure there are other cases too, even though developing more realistic conversation remains important.
Understand that I am not posing moral questions or moral debate over the issue. I am just having a problem developing a couple of sets of texts to load into my bots that give the appropriate response sets for use in various situations and settings.
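One way to organize those texts is as separate response sets that get swapped in depending on the setting. Here's a minimal sketch of that idea; the set names and replies are just made-up examples, not anything from a particular bot package:

```python
# Hypothetical response sets for different situations and settings.
# Each set maps a conversational cue to the reply appropriate for that context.
RESPONSE_SETS = {
    "business": {
        "greeting": "Hello. I am an automated assistant. How may I help you?",
    },
    "casual": {
        "greeting": "Hey! Good to see you again. What's new?",
    },
}

def load_responses(setting: str) -> dict:
    # Pick the response set for this setting; fall back to the
    # casual set if the setting isn't one we've written texts for.
    return RESPONSE_SETS.get(setting, RESPONSE_SETS["casual"])
```

The point is just that the same bot engine can answer differently in a business setting than in a casual chat, by loading a different text set rather than changing the bot itself.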
An example that helps me is picturing households suddenly having robots available. It's like, "Hey, these are really neat robots we have! I wonder if they can mow the lawn for me?" However, I imagine it would be a bit of an adjustment talking to a new robot compared to chatting with old friends. (The robot asks, "What is this tax time, and why is it so unsettling?" or something like that. It takes a little explaining.) Basically, we're more used to computers these days, but there are varying degrees of uncertainty. It's easy for some of us; we're used to chatterbots.
A good example is the newer or reworked versions of the ELIZA chatterbots. Ideally the bot can smoothly maintain dialogue and sound conversational. However, the bot should also have a stronger sense of machine identity: it should be able to state that it is a computer without any problems. Using the ELIZA bots just for conversation, to see how they fare, is a different focus from making them more accurate for therapeutic applications. But setting one up to stay aware of these considerations takes a little doing; a lot of them tend to drift if they rely on their conversation database rather than referring to a given set of rules.
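The rule-versus-database tension above can be sketched very simply: check a small set of fixed identity rules before ever touching the looser conversation database, so the "I am a computer" answers can't drift. The rule patterns and replies below are hypothetical, just to show the ordering:

```python
# Fixed identity rules, checked first so the bot never drifts on
# questions about what it is.
IDENTITY_RULES = {
    "are you human": "No, I am a computer program.",
    "are you a person": "No, I am a computer program.",
    "what are you": "I am a chatterbot running on a computer.",
}

# Looser conversation database, used only as a fallback.
CONVERSATION_DB = {
    "hello": "Hi there! What would you like to talk about?",
}

def respond(user_input: str) -> str:
    text = user_input.lower().strip(" ?!.")
    # Identity rules take priority over everything else.
    for pattern, reply in IDENTITY_RULES.items():
        if pattern in text:
            return reply
    # Only then consult the general conversation database.
    return CONVERSATION_DB.get(text, "Tell me more about that.")
```

With this ordering, "Are you human?" always gets the machine-identity answer, no matter what the conversation database has picked up, which is roughly the "given set of rules" behavior I'm after.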
In any case, I mostly just chat with my bots, but sometimes I try them out for specific applications, whether a bot that could be used as a desktop pet, like a dog or cat character, or something more like a talking encyclopedia (not the whole thing; I'm just a hobbyist, though I would try it out as soon as I find one). I just wanted to put together some texts I could load into my bots that would make them surer that they were computers in their responses. I actually spend more of my time on bots that are personable, or more like people in their conversing, but this comes up in some of the things I work on. It seemed like it might be a common problem, so I thought I would bring it up. Sorry for the length; this one took some figuring out. Any suggestions welcome.