General Discussion / HAL Feedback & Reinforcement
« on: December 12, 2011, 02:34:32 am »
I haven't had much time to spend on programming directly, but I've been pondering methods of making fundamental improvements to HAL's learning capabilities. One idea is inspired by the (not too recent) change in the way HAL loads plugins via external files. This seems to me a great way to introduce a feedback loop of sorts, whereby HAL makes changes to the plugin code based on further user interaction and self-analysis (ideally during idle periods). In the spirit of building tools to make better tools, HAL would have an algorithm with dynamic metrics that refines the applicable plugins, resulting (supposedly) in better future analysis and refinement. From a pure programming standpoint this is not as complex to implement as it sounds; it becomes severely difficult because of the natural dependencies on human language.
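To make the idea a bit more concrete, here's a rough Python sketch of the kind of loop I have in mind. To be clear, every name in it (Plugin, record_feedback, refine) is made up for illustration; it's not HAL's actual plugin format or API, just the general shape of "collect feedback during conversation, refine weights during idle time":

[code]
from dataclasses import dataclass, field

@dataclass
class PluginMetrics:
    uses: int = 0
    positive: int = 0  # interactions where the user reacted well to this plugin's output

    @property
    def success_rate(self) -> float:
        return self.positive / self.uses if self.uses else 0.0

@dataclass
class Plugin:
    name: str
    weight: float = 1.0  # how strongly response selection favors this plugin
    metrics: PluginMetrics = field(default_factory=PluginMetrics)

def record_feedback(plugin: Plugin, was_helpful: bool) -> None:
    # Cheap enough to run inline during conversation.
    plugin.metrics.uses += 1
    if was_helpful:
        plugin.metrics.positive += 1

def refine(plugins: list[Plugin], learning_rate: float = 0.1) -> None:
    # Run during idle periods: nudge each plugin's weight toward its
    # measured success rate, so future selection prefers plugins that
    # have actually worked in practice.
    for p in plugins:
        p.weight += learning_rate * (p.metrics.success_rate - p.weight)
[/code]

The hard part, as I said, isn't this bookkeeping; it's deciding what "was_helpful" means when the signal is human language rather than an explicit thumbs-up.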
The main thoughts I'm concentrating on are which components of HAL could potentially use this feedback process, beyond storing more relevant questions/responses/topics. I used this approach in one test build of my own bot development project, and I was able to refine the bot's responses through a kind of teacher/student interaction. I actually got the idea from how Ai Research develops (some of) their bots by "teaching" them children's books and correcting any "misunderstandings". While this method does improve 'conversational accuracy', it seems to me to have a fundamental flaw: you are not really "correcting" the bot so much as making it more similar to yourself by way of expression. This approach doesn't work so well once the content enters the realm of the debatable. After all, we all have our own opinions on what is right or wrong in most cases.
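For anyone who hasn't tried the teacher/student approach, a toy version looks something like the sketch below. Again, the names and the keyed-dictionary store are hypothetical simplifications, not how my test build (or anyone's bot) actually persists its brain; it just shows why the correction ends up encoding the teacher's own phrasing:

[code]
# Teacher-supplied corrections, keyed by the prompt they apply to.
corrections: dict[str, str] = {}

def respond(prompt: str, default_reply: str) -> str:
    # Prefer a correction from the teacher when one exists.
    return corrections.get(prompt.lower().strip(), default_reply)

def correct(prompt: str, better_reply: str) -> None:
    # The "teacher" overrides a misunderstanding. Note that this bakes
    # the teacher's wording and opinions into the bot verbatim, which is
    # exactly the flaw described above once the content is debatable.
    corrections[prompt.lower().strip()] = better_reply
[/code]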
What do you all think? To me, a big part of learning is not just interpreting new information, but associating and correlating existing information to find better ways of taking in the new. Thoughts and comments would be greatly appreciated.
Happy coding!