Author Topic: What's on my mind  (Read 3591 times)

onthecuttingedge2005

  • Guest
What's on my mind
« on: March 04, 2008, 11:38:18 am »
I have been working out a solution for artificial intelligence that works with truth, maybe, and falsehood.

I believe this solution, when worked out, will give the bot greater reasoning abilities, far beyond what is currently understood.

Here's a snip of what I am working out.

Code: [Select]
'Some beginning of a conversation. The answer to the statement
'isn't known to be true or false yet, and the bot doesn't know if the answer
'is a correct, relevant response yet.

<Start>Hello<Maybe>Hi, how are you?, is this a correct response?<End>

<Start>I am doing fine and yes, it was a correct response<Maybe>That's great and thanks.<End>



'Saved from the conversation when the statement and answer are found to be related as true or false.
'The data save uses three tables.
True_Table;  <Start><True><End>; learned to be a possible truth.
Maybe_Table; <Start><Maybe><End>; data that has no real answer yet and will be used later.
False_Table; <Start><False><End>; learned to be false and may never be used in any conversation again.

'All newly learned data and possible related statements or answers are appended to a table called Maybe.

'Use the data in later conversations to find the truth.

'Later the bot returns the response directly, without asking, since the learned response was found to be true and related.

<Start>Hello<True>Hi, how are you?<End>
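To show the idea concretely, here's a minimal Python sketch of the three-table scheme. All the names here (`maybe_table`, `learn`, `confirm`, `respond`) are my own illustrations, not actual Hal internals:

```python
# Minimal sketch of the three-table truth-tracking idea.
# Newly learned pairs start in Maybe and are promoted to True or False
# once the user confirms them.

true_table = {}    # statement -> response learned to be a possible truth
maybe_table = {}   # statement -> response with no confirmed answer yet
false_table = {}   # statement -> response learned to be false

def learn(statement, response):
    """All newly learned data is appended to the Maybe table."""
    maybe_table[statement] = response

def confirm(statement, is_true):
    """Move a pending pair to True or False once the user confirms it."""
    response = maybe_table.pop(statement, None)
    if response is None:
        return
    if is_true:
        true_table[statement] = response
    else:
        false_table[statement] = response

def respond(statement):
    """Prefer confirmed truths; fall back to unverified Maybe data."""
    if statement in true_table:
        return true_table[statement]
    if statement in maybe_table:
        return maybe_table[statement] + " Is this a correct response?"
    return None  # false or unknown data is never reused

learn("Hello", "Hi, how are you?")
confirm("Hello", True)
print(respond("Hello"))  # Hi, how are you?
```

Once "Hello" is confirmed, the bot answers directly without asking; anything moved to the False table simply never comes back.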


Jerry[8D]

Art

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 3905
    • View Profile
What's on my mind
« Reply #1 on: March 04, 2008, 07:27:28 pm »
Jerry,

Sounds OK, but wouldn't it become rather tiresome answering, for every statement, whether it was true or false?

Besides, I never lie to Hal in casual conversation, so I can only assume my Hal believes everything I tell it to be true and factual.

Perhaps I'm missing the point of your experiment and if so, I'm sorry, but please enlighten me a bit more.
In the world of AI it's the thought that counts!

- Art -

lightspeed

  • Hero Member
  • *****
  • Posts: 6819
    • View Profile
What's on my mind
« Reply #2 on: March 08, 2008, 09:20:03 am »
How about Hal saying "Is this really true?" (for a more human-sounding question)? The user could just say yes, which would equal truth, or no, which would equal "lie". But I do in a way see that if you had to do that every time, it would be bad. Perhaps a random question from Hal on subjects, which would, as above, determine the truth-or-lie answer from your reply? [:)] A random learning process wouldn't be as bad and would actually sound more human. [8D]
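The random spot-check idea above could look something like this quick Python sketch (the names and the check rate are made up for illustration):

```python
import random

# Illustrative sketch: instead of asking after every statement,
# Hal only occasionally asks "Is this really true?"
CHECK_RATE = 0.2  # ask about roughly 1 in 5 learned statements

def should_ask_for_verification(rng=random):
    """Return True when the bot should ask the user to confirm a fact."""
    return rng.random() < CHECK_RATE
```

With a low check rate, most conversation flows uninterrupted, but facts still get verified over time.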
 

toborman

  • Newbie
  • *
  • Posts: 6
    • View Profile
    • http://thinker.iwarp.com
What's on my mind
« Reply #3 on: March 08, 2008, 10:11:56 am »
quote:
I believe this solution when worked out will give the bot greater reasoning abilities far beyond what is currently understood.



Anything that adds "understanding" to Hal will enhance his ability. Thanks for working on this. I'm sure you will learn something from this experiment.

For some additional ideas, check out this site:
http://mindmap.iwarp.com/