
Author Topic: new AI from scratch  (Read 3381 times)

nosys70

  • Newbie
  • *
  • Posts: 15
new AI from scratch
« on: January 15, 2006, 09:20:35 am »
Hello

Here are some ideas about building an AI.
Probably most of it has already been realized in some way, but to me it looks pretty interesting.
If you have the courage to read it all, what do you think?

'this is the first step to build an AI that should not emulate a human brain, but
'be some kind of curious mind, able to ask questions, learn things,
'make statements or inferences, and build a personality by itself.
'The AI should recognize the value of a conversation by the quantity of new data discovered,
'the percentage of contradictions discovered, and the number of objects covered.
'The AI should be logical but tolerant, able to use fuzzy concepts, accept
'vague values (like EVERYWHERE, SOMETIME) and accept making choices based on probability.
'The AI should not look for logic in what is said; we want it free to discuss
'lies or false opinions. But we want some wisdom in it, so it does not
'accept anything as being true or false, but can accept something as being
'a different point of view and respect it.
'=======================================================================
'The first approach is close to LISP, in the sense that we build lists.
'The first DB holds object descriptions for short memory.
'This means we will build each object during the conversation, grabbing data from
'what is said by the user. To help the conversation, some data (the obvious ones) can be taken from
'the 2nd database (same structure), but usually the goal would be to generate questions and
'get specific answers.
'For example, it is obvious that a car has 4 wheels (most of the time) and an engine.
'This should be found in the 2nd DB, called Knowledge, which contains only
'proven (usual) facts or cultural information and is a READ-ONLY DB.
'The knowledge DB will also contain the points to clarify.
'More importantly, the knowledge DB will contain, for each object, all the properties
'that it would be nice to know, indicated by a question mark (a kind of template for creating
'child objects from generic objects).
'For example the object NA(name)=car will have the property CO(color)=?,
'indicating that color is something that could be asked about such an object.
'We could build some code for expressing an urgency level, like CO=?!, meaning
'that this is possibly something highly important to know.
'For the object "Jim's car" we would like to know the color, the maker etc...
'The knowledge DB cannot be modified by regular conversation (it must be in training mode),
'but if a child object (John's car) gets new properties that were not in the
'generic template, we could have a log that warns the programmer that such an object
'likely needs to be updated. In the long term, this could be automatic.
'Such a modification could be tagged (L for learned) for eventual management.
'This should allow us to manage contradictions, like John saying the sky is red.
'The knowledge DB will indicate the sky is blue (on Earth), so we can create
'a new object in short memory called NA=john's sky, with OW=John, CO=red and GG=sky.
'In further discussions with John, we will know that John thinks the sky is red,
'contrary to the general opinion.
'(Or the AI could infer that John is probably not living on Earth, and ask.)
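'As a rough illustration only (every name, property code and value in this sketch is invented, not part of the design), the two-DB idea could look like this in Python:

# Minimal sketch of the two-DB idea (all data here is invented for the example).
knowledge = {                                  # read-only generic templates
    "sky": {"GG": "CONCEPT", "CO": "blue"},
    "car": {"GG": "MACHINE", "WH": "4", "CO": "?"},   # "?" = worth asking about
}

short_memory = {}                              # objects built during conversation

def new_object(name, generic, **props):
    """Create a child object in short memory from a generic template."""
    obj = {"NA": name, "GE": generic}
    obj.update(props)
    short_memory[name] = obj
    return obj

def contradictions(obj):
    """List properties whose value disagrees with the generic template."""
    template = knowledge.get(obj.get("GE"), {})
    return [(p, v, template[p]) for p, v in obj.items()
            if p in template and template[p] not in ("?", v)]

john_sky = new_object("john's sky", "sky", OW="John", CO="red")
print(contradictions(john_sky))                # [('CO', 'red', 'blue')]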

'This is far from PROLOG, where everything is an attempt to verify/prove things
'as true or false. Here we do not really care, we just try to build a meaning from
'things.
'Structure:
'we have 3 fields: object ID (an incremented number), value definition (the property code) and value.
'each time we find a new object, we start to define it.
'for example a new object will immediately receive at least one record:
'12934 , NA, john
'here object 12934 has NAME=John
'the list of properties is defined, but can be expanded as needed.
'Properties should answer simple questions like WHO or WHAT, WHEN, and HOW.
'The goal is to be able to get useful info and create links between objects.
'Each property will have a table of standard questions.
'for example NA can trigger "What is your name?" or "How did you call it?"
'or "Does it have a name?" and so on...
'This is slightly different from AIML, because the AI will be able to choose what
'to ask and how to ask it, without having to write a script. We can still use AIML, but
'most of it should be generated by the AI itself, not by the programmer.
'the choice factor would be something like "which answer is currently most important or urgent to get in order to continue the dialog".
'this can easily be based on statistics.
'this also makes the use of pre-recorded TTS more efficient, since we do not need to
'convert standard questions to .ogg on the fly.
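'Just to illustrate the choice factor (the property codes, questions and urgency weights below are invented):

# Sketch: question templates per property, plus picking what to ask next.
import random

QUESTIONS = {
    "NA": ["What is your name?", "How did you call it?", "Does it have a name?"],
    "CO": ["What colour is it?"],
    "OW": ["Who owns it?"],
}

URGENCY = {"NA": 3, "OW": 2, "CO": 1}   # higher = more urgent (the "?!" idea)

def next_question(obj, template):
    """Ask about the most urgent property still marked '?' in the template."""
    missing = [p for p, v in template.items() if v == "?" and p not in obj]
    if not missing:
        return None
    prop = max(missing, key=lambda p: URGENCY.get(p, 0))
    return random.choice(QUESTIONS.get(prop, ["Can you tell me more about it?"]))

print(next_question({"NA": "Lily"}, {"NA": "?", "CO": "?", "OW": "?"}))  # "Who owns it?"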
'At the same time we need a tool that can log the way the AI uses links between
'objects, to find out if something new must be created (a new property or a new keyword).
'Ideally the developer interface would include a box showing the objects created,
'the number of positive hits (requests fulfilled) against other objects,
'the number of negative hits (object or information not found), and the number of direct and indirect
'links created.
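'A trivial sketch of how those counters could be collected (names and example calls are made up):

# Sketch of the developer-view counters: positive hits, negative hits, links.
from collections import Counter

stats = Counter()
links = set()

def record_lookup(src, dst, found):
    """Count a lookup from one object to another and remember the link."""
    stats["positive" if found else "negative"] += 1
    if found and dst is not None:
        links.add((src, dst))

record_lookup("Lily", "cow", True)     # e.g. SZ found via the generic object
record_lookup("Lily", None, False)     # e.g. ST not found anywhere
print(dict(stats), "links:", len(links))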
'===========
'NA , FN, LN, MO, FA, SI, BR, SO, DA, OW
'NA=name of the object (usually the word used in conversation, but we can create
'objects with special names (codes, acronyms) to cover special needs).
'FN, LN = firstname, lastname, obviously for people only.
'MO=mother; can be a simple name, a pointer to another object, or a keyword
'indicating a special property (for example NONE).
'same for FA (father), SI (sister), BR (brother), SO (son), DA (daughter).
'OW=owner, usually points to another object.
'We do not need more, since we can reconstruct the whole family genealogy from
'this info. We could even simplify by using only PA (parent) and CH (child) and
'determine the parent function from the sex property of each member, but this would lead
'to more calculation without benefit.
'GG=genre. Here we probably start our first list of keywords.
'I propose HUMAN, ANIMAL, MACHINE, OBJECT, EVENT, OPINION, FEELING, LAW, CONCEPT.
'GE=subtype of genre (like dog), usually a pointer to another object (the object dog from the knowledge DB).
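'For example, siblings can be recomputed from the MO/FA pointers alone; a quick sketch with invented ids and names:

# Sketch: rebuilding family links from MO/FA pointers (example data only).
people = {
    1: {"NA": "john", "MO": 3, "FA": 4},
    2: {"NA": "anna", "MO": 3, "FA": 4},
    3: {"NA": "mary"},
    4: {"NA": "peter"},
}

def parents(oid):
    rec = people[oid]
    return {rec.get("MO"), rec.get("FA")} - {None}

def siblings(oid):
    """Anyone else sharing at least one parent with oid."""
    return [o for o in people if o != oid and parents(o) & parents(oid)]

print([people[s]["NA"] for s in siblings(1)])   # ['anna']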
'=====================

'WHEN
'=====
'some keywords will be needed (NEVER, ALWAYS, SOMETIME, EVERYTIME), but basically time
'should be a value: HH:MM or DD/MM/YYYY, or both.
'When possible, relative indicators (like "tomorrow" or "after Xmas") must be converted to "hard" values.
'Objects could have a few values like
'BI=birthdate (age means nothing by itself, so it will be calculated as a birth date).
'This applies to every object (the start of an event, for example).
'DU=duration
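'Converting a relative indicator into a hard value could be as simple as this sketch (the rules and the DD/MM/YYYY format are just assumptions):

# Sketch: turning relative time words into hard DD/MM/YYYY values.
from datetime import date, timedelta

def harden(when, today=None):
    today = today or date.today()
    offsets = {"today": 0, "tomorrow": 1, "yesterday": -1}
    if when in offsets:
        return (today + timedelta(days=offsets[when])).strftime("%d/%m/%Y")
    return when            # already hard, or a keyword like ALWAYS / NEVER

print(harden("tomorrow", date(2006, 1, 15)))    # 16/01/2006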

'HOW
'======
'this should describe a specific object, or be part of the generic definition of an object.
'for example:
'NA = Lily
'GG = ANIMAL
'GE = cow
'OW = john987
'usually cows have an average size that can be described under object=cow.
'in this case, if we want to express that Lily is a big cow, we will add
'SZ=BIG
'This will allow us to define the object "Lily" by its own properties, with an eventual fallback to the
'properties of the generic object "cow".
'possibly the generic object does not contain the requested info, or contains only a pointer to another
'object (like the SZ of a cow being specified as the same as a "horse", or smaller than an "elephant").
'once one size is known, every relative value will be recalculated, to prevent having to chase a value
'from object to object.
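'The fallback lookup itself stays simple; a sketch with invented values:

# Sketch: property lookup with fallback from the specific object to its generic one.
knowledge = {"cow": {"SZ": "average", "LE": "2.5m"}}

lily = {"NA": "Lily", "GG": "ANIMAL", "GE": "cow", "OW": "john987", "SZ": "BIG"}

def prop(obj, key):
    """Own property first, then fall back to the generic object in the knowledge DB."""
    if key in obj:
        return obj[key]
    return knowledge.get(obj.get("GE"), {}).get(key)

print(prop(lily, "SZ"))   # BIG   (Lily's own value wins)
print(prop(lily, "LE"))   # 2.5m  (inherited from the generic cow)
print(prop(lily, "ST"))   # None  (unknown -> maybe worth asking)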
'ST=status, with our second list of keywords:
'for living creatures: DEAD, ALIVE, BROKEN, ASLEEP
'for objects: GAS, LIQUID, SOLID, BROKEN
'for machines: ON, OFF, BROKEN
'for events, ideas, feelings: GOOD, BAD ?????
'for laws and concepts: ????

'WHERE
'=========
'LO=location; can be anything like Africa, New York, map coordinates, or a keyword
'like NOWHERE (SOMEWHERE is implicit if LO is not specified), EVERYWHERE, or a pointer to
'another object (to express the concept WITH ME).

'MISC
'=====
'there will be a set of physical properties like
'LE (length), WI (width), TP (temperature), WT (weight),
'SH (shape), HU (humidity), SK (skin, with values like SOFT, HARD, ROUGH etc.),
'MO (mood), VA (value).
End Sub


 

GrantNZ

  • Full Member
  • ***
  • Posts: 178
new AI from scratch
« Reply #1 on: January 16, 2006, 01:49:55 am »
Some good ideas there. The trick, as always, is to get it working [:D]

Food for thought: The open-mindedness of the bot will necessarily be limited by the open-mindedness of its programming. For example, not everything has a colour - what is the colour of glass? Or of jurisdiction? For that matter, who is the parent of jurisdiction, and who owns it? And the answer isn't just "nobody" either, if you want a bot capable of philosophical thought. Not everything has a name, but "nothing" has one.

If we ever manage to make a virtual human without imposing human emotions upon it, we may find the robot has a hatred of the concept of "love", with its ill-defined nature and poorly-described feelings. Then again, the robot could become addicted to love, searching far and wide for any scrap of information that could help it finally define the feeling. [Edit:] Just a thought - how would an AI react to discover that "love," that everybody seems to know about, isn't truly definable by any one of us? Would it decide "love" is too large a concept? Would it formulate a conspiracy theory - humans are trying to keep the concept of love out of the hands of robots? Would it consider it in the same vein as religion? Or something that doesn't truly exist? Would it discover the true meaning of it, and decide we're all too thick to understand it? [8D]

It may be limiting to design an AI that applies fixed definitions to attributes. A "big" cow to me may be small to a farmer. An "elephant-sized" fat man does not necessarily have a huge trunk. These attributes must be treated the same way objects are treated - with fuzzy definitions and attributes of their own. In fact, as far as the AI's database goes, perhaps we should not distinguish between them at all. Somebody may have three "things": a weight, a computer, a home. One's a physical property - but only definable by the force exerted on them by another object (the Earth). One's an object possession - but is it still a computer if I take its component pieces and define them separately? One's a concept - but the concept is defined in terms of another concept: a loan/mortgage, which refers to an institution who owns the physical "house". House->Owner becomes a complex definition. Yet all three are things that this person "has".

It becomes a great philosophical challenge to define a system that can define everything else. Perhaps the foundation is flawed. I can't really be said to "own" a colour or "have" a colour, because I can't sell it or give it away. Perhaps everything should be a concept, whether a physical reality or a dream in everybody's imagination. Maybe these concepts should be linked rather than owned.

This is probably one of those ineffable problems, where the answers are hinted at throughout many spiritual and scientific pursuits. I myself will be filled with warm fuzzies if the final "solution" turns out to be a very simple system that works in ways so complex and intertwined that only the fundamentals are understandable - just like our own minds [:)]
« Last Edit: January 16, 2006, 01:57:43 am by GrantNZ »
 

nosys70

  • Newbie
  • *
  • Posts: 15
new AI from scratch
« Reply #2 on: January 16, 2006, 03:19:49 am »
Hello
thanks for the reply.
In fact, there is more in the database than just what is described here.
The absence of data will be as important as the data itself.
I do not describe the search engine, since it is not very clear to me either, but it will be based on comparing data statistically.
If we cannot find a color for glass, this will trigger some question from the AI, IF NEEDED. And that is the point: do you always ask yourself what the color of glass is? Probably not. So why bother until it is necessary. And if you discuss this with people, did you ever find a clear answer agreed on by everybody? Probably not. So it will be for the AI, and that is why I think it could be more human than systems
that are built to know or answer everything, or to find a logic in everything. At the extreme, we can look for all objects in the DB that are made of glass and make a statistic about their color (if it exists), then take a decision (or not).
The same for the big cow. If you define it as big, why bother with other (and possibly different) opinions until it matters?
If we really need to confront that BIG with the real world (like purchasing a car to transport the cow), we will probably not rely on that information (we will ask for the real size). Or if we do, we will order a BIG car, and there will be no trouble.
There are many things we believe to be true until experience proves the contrary. And there are many things we believe to be true that are false, and we never need to change that opinion, because life does not ask us to. So it will be for the AI, based on statistical results when no data is available. The AI has no way to check things anyway, so if you say something is big, it can either believe it, reject it, or just store it.
If we make a statistic of all the objects described by John and find that 75% of them are described as big, but on the other hand some other people said they are not, we can infer that John sees everything as bigger than it is, especially when it belongs to him.
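Something like this, just as a sketch with made-up facts:

# Sketch: does John describe most of his things as big? (made-up data)
facts = [
    ("john", "cow",   "SZ", "BIG"),
    ("john", "house", "SZ", "BIG"),
    ("john", "dog",   "SZ", "BIG"),
    ("john", "car",   "SZ", "SMALL"),
    ("mary", "cow",   "SZ", "SMALL"),
]

def big_ratio(speaker):
    sizes = [v for s, _, p, v in facts if s == speaker and p == "SZ"]
    return sum(v == "BIG" for v in sizes) / len(sizes)

print(big_ratio("john"))   # 0.75 -> maybe John exaggerates sizes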
If we ask John how many feet of space we need in the truck to fit his cow, and discover that this size is that of a small cow, we can start discussing it.
I think that what makes a dialogue interesting is when what you said yesterday conflicts a bit with what I learned today. It gives you the need to learn more. The goal will not be to define everything, just to store and sort things in a way that lets you find them back later.
The same as our brain does.