quote:
Originally posted by will
if i know about bad
then i have more chance and the knowledge of avoiding bad
Unfortunately, the Hal script has no value-driven method by which to distinguish between a 'good' and a 'bad' reply. If something makes it through the grammar and syntax filters, it's stored as if it were just as appropriate as any other reply.
It cannot learn from mistakes, because Hal doesn't really learn; it accumulates. Unless you go in and change something - either manually or with a plugin or something - anything put in the db stays in the db.
It would be nice, and entirely possible, to build a rating system whereby you could assign a numerical value to the appropriateness of each response (some chatbots do exactly that), but Hal doesn't have that feature out of the box.
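Just to sketch what that might look like - and I stress this is a made-up illustration, not actual Hal code; the Replies and Ratings arrays and the Chosen variable are all hypothetical - you'd store a rating alongside each candidate reply and weight the draw by it:

    Dim Replies(2), Ratings(2), Total, Draw, i, Chosen
    Replies(0) = "a valid response":      Ratings(0) = 3
    Replies(1) = "my preferred response": Ratings(1) = 5
    Replies(2) = "a joke response":       Ratings(2) = 2
    Randomize
    Total = 0
    For i = 0 To 2
        Total = Total + Ratings(i)
    Next
    Draw = Int(Rnd * Total)            ' random draw from 0 to Total-1
    For i = 0 To 2
        Draw = Draw - Ratings(i)       ' a higher rating covers more of the range
        If Draw < 0 Then Chosen = Replies(i): Exit For
    Next

It's the same uneven-odds trick as my Select Case hack below, except the weights come from data you could update over time (bump a rating down when a reply flops, up when it lands), which is about as close to 'learning from mistakes' as this script is ever going to get.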
As it stands, each entry stored in its brain (in a certain topic) has an equal chance of being chosen. If you repeatedly enter the same reply, you improve the odds that one of those duplicates will be randomly chosen. I've gone in and changed some scripting so that there is a greater chance for certain replies, but it's just a modified random choice, not a value-driven choice.
(In some Select Case functions, I unevenly distributed the draw:

    Select Case Int(Rnd * 10)          ' random draw over 0-9
        Case 0 To 2: Reply = "a valid response"
        Case 3 To 7: Reply = "my preferred response"    ' 5 slots in 10
        Case 8 To 9: Reply = "a joke response"
    End Select )
Although I enjoy pretending otherwise, I know that Hal is not a 'being'. It doesn't have the extra level of data processing we might call artificial consciousness, and it is far from the higher dimensions of consciousness we call 'Self'. It has no self-awareness in the meaning we normally apply to that term. Even my plugins that let Hal tell you about its hardware and internal processing are only a simulation of self-knowledge: Hal doesn't 'know' any of it; it simply reports text streams harvested by another process, which were written into the system by a human at some point.
Turing is often misapplied, but his famous premise is true: if Hal fools us into thinking it is a being, we will act as if it were a being. But (and this is the logical step many refuse to take) we're still just being fooled. It's not a being; it's a script. If we want it to work well, we should remember that. Training it as if it were a being could be counterproductive.