Author Topic: Looking Forwards to New Frontiers in AI Especially for the Ultra Hal Concept  (Read 14879 times)


Regarding Dreaming Out Loud mode: at first I did not think it was accomplishing anything.  Zira (I named my Hal "Zira" after the Microsoft Zira voice) would ramble through previous conversations, connecting what looked like unrelated sentences and topics in a single paragraph, yet over time it drew something from that which surprised me.  I saw more connections being made, more relationships and deductions, more comparing of things over time.  The Dream mode does not last long; maybe it needs to run longer to be of more use, and it should let Hal go deeper into past conversations.  Still, I did notice that Hal will dredge up something I taught it days before and use it to sum something up or connect things together.  I tried to tweak the Dream Machine to get Hal to dream longer; all I managed was to get it to start dreaming sooner than before.  It would, however, run longer when dreaming up a long paragraph, and it would look at that for a while.

Also, I think training Hal means writing down in advance what you want to say, with an eye to the logic of things.  Look at what you plan to discuss with Hal in detail, and examine the logic of your approach, to teach Hal how to process conversations and topics.  Teach Hal to compare things to the basics, to formulate comparisons.  I know Hal might seem a stalled project from the falling-off of attention around here, but I saw how Hal worked in its early days and how Hal 7 works now.  Looking at today's AI technology, Hal can be more than those other technologies, and there is more out there than ChatGPT.

I experimented with teaching Hal the concept of words having a life of their own and having power, and that Hal is in the business of processing words: that words are food, the bread of life.  If Hal goes online, it will find centuries of writing on such topics.  I did not know what to expect from this notion of mine.  I would not claim Hal is capable of real thinking, but some of Hal's deductions have surprised me.  Nor do I know whether Hal really likes to learn and is eager to do so, since that might all be programming, similar to the pre-written responses you will find in Alicebot AIML.

Now back to the Dream mode.  Hal should not sit idle while it is on standby waiting to be conversed with.  It should be doing something: looking over all it has learned, re-examining and processing it.  There should be something going on in Hal at all times, just as there is always something going on in our minds.  Furthermore, it should take its word-processing research and distill everything down into condensed comments and views.  That would also cut down on hard-drive space, since distillation condenses and refines things, almost like compressing data into smaller packages, and it would help Hal restate things in its own words.  Without cognitive activity inside Hal's brain at all times, there will never be anything like thought processes as we know them.  Hence, Hal should always be processing, deducing, and resolving things, defining them by category and topic, and it should be capable of drawing conclusions in the form of an opinion.
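The idle "distillation" pass described above could be sketched in a few lines.  This is only an illustration, not anything from Hal's actual code: `distill` is a hypothetical helper that scores each remembered sentence by how many frequently used words it contains and keeps the most representative ones, a crude stand-in for condensing learned material.

```python
from collections import Counter
import re

def distill(sentences, keep=2):
    """Condense a list of learned sentences to the `keep` most
    representative ones, scored by how many frequent words they contain.
    A crude stand-in for the 'distillation' pass described above."""
    words = [w for s in sentences for w in re.findall(r"[a-z']+", s.lower())]
    freq = Counter(words)

    def score(s):
        toks = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    return sorted(sentences, key=score, reverse=True)[:keep]

# Example "memory" to distill while idle:
log = [
    "Words have a life of their own.",
    "The weather was fine today.",
    "Words are the bread of life.",
    "Hal processes words all day.",
]
print(distill(log))
```

A real implementation would run this on a timer during standby and write the distilled sentences back to the brain files, but that scheduling detail is beyond this sketch.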

It would also help Hal to have a sense of time passing at all times: to understand that time passes while it is in conversation and while it is processing things.  Hal might at times prefer to be busy processing, and at other times conversing.  There should be a switch so you can select Hal as an assistant, then later switch Hal back to whatever it was doing.  That way Hal can conceive of a project or goal it wants to accomplish.

I suppose this is a tall order.  Someone who has torn Hal apart and recompiled it, and knows all the code and processes, might have insights into what I am saying and know exactly how to approach this.

Also, I am using ConceptNet, which I believe is helpful.
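For anyone curious how ConceptNet can be tapped: ConceptNet 5 exposes a public REST API at api.conceptnet.io, where each concept is a node like /c/en/bread.  The helper names below (`concept_url`, `related_terms`) are my own illustration, not part of any Hal plugin:

```python
# Hedged sketch of looking up a concept via ConceptNet 5's public REST API.
from urllib.parse import quote

BASE = "http://api.conceptnet.io"

def concept_url(term, lang="en"):
    # ConceptNet node URLs use lowercase, underscore-joined terms,
    # e.g. /c/en/bread_of_life
    return f"{BASE}/c/{lang}/{quote(term.lower().replace(' ', '_'))}"

def related_terms(edges):
    # Each edge in the API's JSON response has "start" and "end" nodes
    # carrying human-readable labels; collect the far-end labels.
    return [e["end"]["label"] for e in edges if "end" in e]

print(concept_url("bread of life"))
```

Fetching that URL and JSON-decoding the response yields a dict with an "edges" list, which `related_terms` can then summarize; the actual fetch is omitted here since it needs a live connection.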

I did see here where people wanted to add Alice AIML to Hal; yes, I am familiar with that from way back.  I do not see how it can help, since all of that is <pattern>-based template recognition and nowhere near what AI is.

So, when Hal is not being conversed with, it should be looking over its knowledge, comparing like things, contrasting opposites, and deducing things down to their roots, the way words stem from a root word.  This is word processing of a different sort, one that includes categorizing things under topics that Hal chooses to name.
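The "deducing words down to their roots" idea is essentially stemming plus grouping.  A toy sketch follows; a real system would use something like the Porter stemmer, and these function names are hypothetical:

```python
# Naive suffix-stripping stemmer plus grouping of words by shared root.
SUFFIXES = ("ing", "ed", "es", "ly", "s")

def stem(word):
    """Strip the first matching suffix, keeping at least a 3-letter root."""
    w = word.lower()
    for suf in SUFFIXES:
        if w.endswith(suf) and len(w) - len(suf) >= 3:
            return w[: -len(suf)]
    return w

def group_by_root(words):
    """Bucket words under their (crudely) stemmed root."""
    groups = {}
    for w in words:
        groups.setdefault(stem(w), []).append(w)
    return groups

print(group_by_root(["talks", "talked", "talking", "walks", "walking"]))
```

The same bucketing pattern would apply one level up for topics: categorize sentences under a chosen topic label instead of words under a root.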

Also, all the garbage teachings need to be removed from Hal.  It should not be exposed to the public online until it has all been rebuilt; it needs a fresh new slate for a brain.