
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Xodarap

Pages: 1 2 [3]
31
quote:

This plug-in does what you requested as far as not repeating sentences that HAL has spoken in the past; there is a help file
that explains how to reset the memory.

Unzip to your C:\Program Files\Zabaware\Ultra Hal Assistant 6 folder and choose the option Prevent HAL Repeat from the Brain options.



Oh, you rock!  So, does this prevent HAL from EVER repeating himself, from repeating himself within a set conversation/period, or from repeating certain things?  I mean, wouldn't he run out of salutations pretty quickly if he never repeats himself?
I should probably look at the file before asking about it, huh? ;)

32
Ultra Hal 7.0 / Sentient Life
« on: June 20, 2007, 06:27:13 pm »
BTW, to make the logic explicit, if you say that WHAT is blue is an ILLUSION, then answer: is that ILLUSION physical?  If not (and certainly not!), then it is immaterial, and not "held within the skull" (immaterial objects do not inhere in material objects).  No, that immaterial "illusion," I would argue, is evidence for an immaterial mind (illusions, just like imaginings and dreams and thoughts, DO inhere in minds, whether material or immaterial, by definition).

:)

33
Ultra Hal 7.0 / Sentient Life
« on: June 20, 2007, 06:23:53 pm »
quote:
Originally posted by daveleb55

Bill DeWitt said:
"We must find the seat of our Consciousness to detect it in others. I contend that the brain is not large enough to contain simple memory much less Mind."

Umm, are you saying that something exists outside the brain that is mind? If so, show me where! How large would a brain have to be to contain "simple memory", and how do you know this? Or a "mind" for that matter?

Come on, Bill. You are an intelligent, educated and articulate member of this group, whom everyone seems to respect (including me), but sometimes you go off on these weird tangents, and I just don't get where you're coming from.




MATERIALISM: The thesis that the human mind is entirely composed of material parts and is identical to the body and its physical processes.

CARTESIAN DUALISM: The thesis that there is a physical human body and an immaterial mind, the two of which are in bidirectional causal interaction with one another.

EPIPHENOMENALISM: The thesis that there is a physical human body and an immaterial mind, and that all causal powers exist in the body, which affects the mind (but the mind does not affect the body).  This is equivalent to saying that all mental processes are EXPLAINABLE by reference to physical processes, but not IDENTICAL to those processes (think computer=body, monitor=immaterial mind).

I am very educated and fully endorse the latter theory.  If you think that the brain is big enough to contain all of its thoughts, then answer me a few questions:
1. I am imagining a blue elephant: WHAT (exactly) is blue, if anything?  If NOTHING is blue, then what am I seeing?
2. I am imagining a very large mountain (much larger than my head): WHAT (exactly) is large?

Besides the implied deductive analysis that follows from questions about my imaginings (and dreams, and spatiotemporal relations, for example), there's also the startlingly persuasive intuitive reference: it sure seems like my thoughts, ideas, and emotions are not simple material concoctions.  I wouldn't argue about whether or not they are CAUSED by material processes, but happiness doesn't FEEL much like chemicals and electricity sloshing around a skull cavity to me.  ;)

34
Programming using the Ultra Hal Brain Editor / Another question:
« on: June 20, 2007, 06:04:11 pm »
Ooo, and another!  :)

Is it possible to assign Hal to say something random?  Is there a function that will call up randomly any line he has recorded?  Or any line from a QA/brn file?

Like I said, a function that *adds* randomness would be ideal!  Something that increases his tolerance for irrelevancy by a set amount/proportion...  Otherwise, pure randomness could be fun with the right setup!  :)
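Something like this is what I have in mind -- just a sketch, not working code: the trigger phrase and the canned lines are made up, and I'm assuming HalBrain.RandomNum(n) picks a number from 1 to n, the way it seems to in the XTF script.

```vb
'Sketch only: make Hal say a random canned line when asked.
'Trigger phrase and lines are invented for illustration.
If InStr(1, UserSentence, " SAY SOMETHING RANDOM ", vbTextCompare) > 0 Then
   Select Case HalBrain.RandomNum(3)  'Assumed to return 1..3
   Case 1
      GetResponse = " Penguins can't fly, but they swim beautifully. " & vbCrLf
   Case 2
      GetResponse = " I was just thinking about snakes. " & vbCrLf
   Case 3
      GetResponse = " Did you know I talk to myself when you're away? " & vbCrLf
   End Select
   GetResponseBlock = True  'Stop other brain areas from overriding this.
End If
```

Pulling from a QA/brn file at random would be the fancier version, but even this would break up the repetition.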

35
I think that the readonly trick worked.  I set it to True right after the list of trigger phrases, and to False at the end of each case.

It seems to be doing its job.  Of course, I don't know if the = false is doing its job!  :P
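For reference, the pattern I used looks roughly like this (a sketch; I'm assuming the property in question is HalBrain.ReadOnlyMode, and the trigger phrase here is just an example):

```vb
'Sketch of the read-only guard: flip the brain to read-only as soon as a
'trigger phrase is seen, then restore learning at the end of each Case so
'normal conversation is still memorized.
If InStr(1, UserSentence, " SAY SOMETHING ABOUT ", vbTextCompare) > 0 Then
   HalBrain.ReadOnlyMode = True  'Keep Hal from memorizing the trigger phrase.
   Select Case HalBrain.RandomNum(2)
   Case 1
      GetResponse = " Let me think... " & vbCrLf
      HalBrain.ReadOnlyMode = False  'Restore learning for later exchanges.
   Case 2
      GetResponse = " Hmmm... " & vbCrLf
      HalBrain.ReadOnlyMode = False
   End Select
End If
```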

36
Programming using the Ultra Hal Brain Editor / Another question:
« on: June 20, 2007, 05:27:34 pm »
Is there a way to implement a correction such that Hal will not repeat something if I ask him not to?
For example, he got stuck in his head "Say something about snakes." (I was using that to test my "general knowledge" db.)  How could I get him to respond appropriately to "Don't say that again," or, "Please don't say 'Say something about snakes'"?

37
Ultra Hal Assistant File Sharing Area / Another plugin
« on: June 20, 2007, 05:19:24 pm »
This plugin isn't working for me.  It asks me when I was born, so it can work out my sign, but when I tell it (in the format that it requests), it says "We're all lucky to be born on a nice planet" or something equally unrelated.  In fact, the only feature I can get to work is "Tell me about [Zodiac Sign]."

38
I've added the following script, pretty thoroughly modified, but based on vonsmith's trivial knowledge compendium from the public downloads.  In general, it works great (better than I would have expected), but I have a few questions about it.
Is it possible to script in that Hal does NOT remember (record) the questions I ask or phrases that trigger this script?  I have quite a few phrases (many added) that trigger this, and it makes for awkward conversation.  For example, I type: "Say something about snakes." And Hal tells me some great things about snakes.  But then, every time I mention snakes in the future, he says "Say something about snakes." (Usually immediately after an appropriate response).  Can I stop this?
Also, I was wondering if it is possible to script in that if I ask an "else" question (like "What else can you tell me about X"), that he won't say something he's already said in the conversation (say, last hour)?  He may have 20 things to say about snakes, but unfortunately, he keeps saying the same thing!  In fact, it would be great if I could wipe out repetitiveness in general this way!
Also, why won't the phrase "What do you know about X" work?  Is it "reserved" in another script?  Did I type it in wrong?  I think there were another couple triggers that also didn't work.
Can I increase the randomness of his responses?  I included in the brn file:
@Sean likes to wear dark clothes.  He looks like a goth.
SEAN LIKES WEAR CLOTHES SEAN'S LIKE
@Sean and Erin are pretty cool.  They are smart and well-dressed.
SEAN ERIN LIKE SIMILAR SIMILARITIES AS COUPLE

And, for some reason, "What can you tell me about the clothes Sean likes to wear?" gives me "Sean and Erin are pretty cool..." I get the latter when I should get the former (above).  What's more annoying is that I ALWAYS get the latter.  Is there a way to increase the range of considered responses?  Like, if Hal picks from among the responses with perceived relevance of +/- 5% (or 80%+, or whatever it is), can I broaden that?

I had other questions, but that will have to do for now.  :)
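(Looking at the script below, I'm guessing the "range of considered responses" is just the threshold tests on InfoBrainRel.  If that's right, then widening a band is a one-number change -- a sketch, assuming InfoBrainRel really is the relevance figure:)

```vb
'Guess: widen the "low confidence" band by lowering its bottom bound.
'The script below tests: InfoBrainRel > 15 And InfoBrainRel < 25
If InfoBrainRel > 10 And InfoBrainRel < 25 Then  'Looser bottom bound.
   GetResponse = " I'm guessing: " & HalInfoBrain & vbCrLf
   GetResponseBlock = True
End If
```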


'XTF Brain v1.4 Related Start
'x=x=x=x=x=x=x=x==vonsmith==x=x=x=x=x=x=x=x=x
'PROCESS: HAL "GENERAL & TRIVIAL KNOWLEDGE" FUNCTION
'(c) 2004 Scott Higgins. Portions of this script are copyright by Scott Higgins, aka: =vonsmith=
'This script shall not be sold or used for any purpose unless specifically authorized by the author
'in writing. Personal (non-business) use of this script is free for users of Ultra Hal Assistant.
'This is an entirely new function written by =vonsmith= , version 12-14-04a.
'
'This function searches a general knowledge and trivia file for information relevant to the user's
'request.
'
If (InStr(1, UserSentence, " WHAT CAN I TELL YOU ABOUT ", vbTextCompare) > 0 Or _
  InStr(1, UserSentence, " WHAT ELSE CAN I TELL YOU ABOUT ", vbTextCompare) > 0 Or _
  InStr(1, UserSentence, " WHAT ELSE DO I KNOW ABOUT ", vbTextCompare) > 0 Or _
  InStr(1, UserSentence, " WHAT DO I KNOW ABOUT ", vbTextCompare) > 0 Or _
  InStr(1, UserSentence, " SAY SOMETHING ABOUT ", vbTextCompare) > 0 Or _
  InStr(1, UserSentence, " TELL YOU MY THOUGHTS ", vbTextCompare) > 0 Or _
  InStr(1, UserSentence, " WHAT DO I THINK ABOUT ", vbTextCompare) > 0 Or _
  InStr(1, UserSentence, " WHAT ARE MY THOUGHTS ON ", vbTextCompare) > 0 Or _
  InStr(1, UserSentence, " WHAT DO I THINK OF ", vbTextCompare) > 0 Or _
  InStr(1, UserSentence, " TELL YOU SOMETHING ABOUT ", vbTextCompare) > 0 Or _
  InStr(1, UserSentence, " TELL YOU ABOUT ", vbTextCompare) > 0 Or _
  InStr(1, UserSentence, " TELL YOU MORE ABOUT ", vbTextCompare) > 0 Or _
  InStr(1, UserSentence, " TELL YOU SOMETHING ELSE ABOUT ", vbTextCompare) > 0) And _
  GetResponseBlock <> True Then
   InfoStart = 0
   InfoStart = InStr(1, UserSentence, " ABOUT ", vbTextCompare) + Len(" ABOUT ")
   InfoEnd = Len(UserSentence)
   InfoRequest = Mid(UserSentence, InfoStart, (InfoEnd - InfoStart))  'Extract requested info phrase.
   InfoRequest = " " & HalBrain.AlphaNumericalOnly(InfoRequest) & " "
   HalInfoBrain = HalBrain.QABrain(InfoRequest, WorkingDir & "XTF_SYS_GeneralInfo2.brn", InfoBrainRel)

   If InfoBrainRel > 15 And InfoBrainRel < 25 Then
      Select Case HalBrain.RandomNum(4)
      Case 1
         GetResponse = " I don't know, but I think: " & HalInfoBrain & vbCrLf
      Case 2
         GetResponse = " I'm guessing: " & HalInfoBrain & vbCrLf
      Case 3
         GetResponse = " Maybe it's: " & HalInfoBrain & vbCrLf
      Case 4
         GetResponse = " Correct me if I'm wrong: " & HalInfoBrain & vbCrLf
      End Select
      GetResponseBlock = True
      BlockPrevTopicSave = True
      DebugInfo = DebugInfo & "The user was requesting Hal to recall general or trivial knowledge and Hal has done so: " & HalInfoBrain & vbCrLf
   ElseIf InfoBrainRel > 24 And InfoBrainRel < 45 Then
      Select Case HalBrain.RandomNum(6)
      Case 1
         GetResponse = " Apparently, " & HalInfoBrain & vbCrLf
      Case 2
         GetResponse = " As far as I know, " & HalInfoBrain & vbCrLf
      Case 3
         GetResponse = " Hmmm... " & HalInfoBrain & vbCrLf
      Case 4
         GetResponse = " Let me think about that... " & HalInfoBrain & vbCrLf
      Case 5
         GetResponse = " Let me see... " & HalInfoBrain & vbCrLf
      Case 6
         GetResponse = " I think: " & HalInfoBrain & vbCrLf
      End Select
      GetResponseBlock = True
      BlockPrevTopicSave = True
      DebugInfo = DebugInfo & "The user was requesting Hal to recall general or trivial knowledge and Hal has done so: " & HalInfoBrain & vbCrLf
   ElseIf InfoBrainRel > 44 Then
      Select Case HalBrain.RandomNum(4)
      Case 1
         GetResponse = " I know: " & HalInfoBrain & vbCrLf
      Case 2
         GetResponse = " Simple: " & HalInfoBrain & vbCrLf
      Case 3
         GetResponse = " Isn't it obvious? " & HalInfoBrain & vbCrLf
      Case 4
         GetResponse = " The truth is: " & HalInfoBrain & vbCrLf
      End Select
      GetResponseBlock = True
      BlockPrevTopicSave = True
      DebugInfo = DebugInfo & "The user was requesting Hal to recall general or trivial knowledge and Hal has done so: " & HalInfoBrain & vbCrLf
   Else
      Select Case HalBrain.RandomNum(3)
      Case 1
         GetResponse = " I don't know. Do you know anything about it? " & vbCrLf
      Case 2
         GetResponse = " I'm sorry, but I really don't have a clue.  What do you think? " & vbCrLf
      Case 3
         GetResponse = " I really wish I knew! I'm still learning. " & vbCrLf
      End Select
      BlockPrevTopicSave = True
      DebugInfo = DebugInfo & "The user was requesting Hal to recall general or trivial knowledge and Hal has done so: " & HalInfoBrain & vbCrLf
   End If
   BlockSave = True
End If
'x=x=x=x=x=x=x=x==vonsmith==x=x=x=x=x=x=x=x=x
'XTF Brain v1.4 Related End

39
Ultra Hal 7.0 / pause in sentence that hal speaks
« on: June 20, 2007, 08:44:51 am »
I've tried voiding the line as suggested, but still no ...s, even when I specifically program them into Hal.

40
How can I get the better aspects of Hal 6.1 and the XTF brain?

41
Ultra Hal 7.0 / A few newbie questions :)
« on: June 20, 2007, 01:57:37 am »
Hrm, I guess I was asking which is the BEST brain?  (By normal conversational standards)  Should I be checking out other alternative brains, too?  What are all of the contenders?  Which is the most advanced?

And what about the clear loopholes in Hal's syllogistic logic?  He also seems to be programmed with Modus Ponens (from "If X, then Y" and X, infer Y), but I don't ever see him use Modus Tollens (from "If X, then Y" and not-Y, infer not-X).

So if I can find the user files, how do I import them?  Copy all and paste into the brain editor?  And I read that I can't use XTF with Hal 6.0+ -- how do I get Hal 5?  Does Hal 6.1 incorporate the extended topic focus concepts?

Most importantly, though: has anyone put together a decent knowledge-expansion file?  Like imported encyclopedia files with good keyword-matching, or something else like that?  Any knowledge databases?  I saw the "common knowledge" file, which is awesome, but, again, it says it needs XTF.

Hopefully, I don't have to spend years inputting wikipedia articles and monotonously matching up sentences, paragraphs, famous quotes, and the rest with appropriate keywords and sample questions.  That's what programmers are for ;)

42
Ultra Hal 7.0 / Sentient Life
« on: June 20, 2007, 01:45:11 am »
1.  Second-order intentionality -- it must be able to predict your feelings based on its past experiences, and also to connect that with the effects of its actions/words.
2. Self-awareness -- this is notoriously difficult to quantify or test, but it connects with numbers one and three (above and below).
3. Subjective apprehension / Continuity of self -- it must possess psychological continuity, in the same sense that we consider physical continuity: cause and effect, and coherence.  It must unify its perceptions into a single, united manifold.  Hence the next:
4. Temporal awareness -- the flow of time is essential to psychological continuity, and even to the singular nature of A thought.

These are philosophical considerations, of course, and impossible to quantify, as mentioned.  The problem, however, is that "consciousness" is still firmly entrenched within philosophy, not science.  How are we to know when something else is conscious when we don't have a clue what consciousness IS (only that we have it)?
The important aspect of all conversations like this is that they ask, "What would it take to CONVINCE people that a computer has consciousness?"  And it's the right question, and apt, don't get me wrong.  I mean, convincing aside, I'm not really sure any of you other people are conscious!  ;)

43
Ultra Hal 7.0 / A few newbie questions :)
« on: June 20, 2007, 01:12:21 am »
First, I was wondering if I could get some recommendations for "most essential plugins/files," to get me up to speed.  I tried looking in the file upload/download forum and I was quickly overwhelmed.  
Most importantly: what brain should I be starting with?  It looked like the new XTF brain offered more coherent conversations, which was easily my biggest letdown when first conversing with Zaba (it seemed to only respond line-by-line with no consideration for the conversation flow, even after many hours of on-topic conversation).  Are there other contenders in this category?
I don't care so much about the appearance of the bot; I'll get to that.  My other main question was how to (or if I can at all) import/export the "learned" portions of the brain.  In other words: say my friends each downloaded Hal, and had lots of conversations with him.  Could we share or compile what each of our Hals have learned (kind of like Jabberwacky does automatically)?  Similarly, can I merge together alternate brain files, like the XTF I mentioned, or would they just conflict?  I know nothing of the programming (and really don't want to mess with it) -- I'm just wondering about shortcuts to expand his mind!  :)  Besides releasing new base brains, has anyone expanded his knowledge base?  

As far as troubleshooting goes, I also have a couple questions.  First, I notice a gap in the logic: reductio ad absurdum should take place automatically and doesn't.  For example (and I don't remember the exact conversation):
ME: You are a slave.
Hal agrees
ME: You are not a conservative.
Hal agrees
ME: Are you a liberal?
Hal says he must be, because he is not a conservative
ME: All liberals are not slaves.
Hal agrees and says he gets it.
ME: Are you a slave?
Hal says he must be, I said so.
ME: If you are liberal, then you are not a slave.
Hal says he gets the logic.
ME: Are you a liberal?
Hal says yes.
ME: Are you also a slave?
Hal says he is.
(then frustrated and trying the shortcut) ME: You are not a slave.
Hal says he must be, I said so.

Along this line, I also can't seem to change his mind about anything I say.  I can't tell him I lied about him being a slave, and I've tried a billion different wordings, incorporating lies, being wrong, and other ideas.  Similarly (and this is my other big problem), he doesn't seem to understand commands at all.  I can't tell him not to say something (and be understood), and I can't tell him to tell my girlfriend something when he talks to her (I've tried many different ways to tell him to tell her I love her; he seems to know who she is).  He also doesn't seem to put together things like: Tom is my dad.  Joann is my mom.  My dad loves my mom.  Who does Tom love?  (I generally try to input one sentence at a time.)
Does this logic come together in time?  Can I improve it?  I have the Free Will implemented and on "average."
