Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Xodarap

Pages: [1] 2 3
1
Ultra Hal 7.0 / Sentient Life
« on: June 26, 2007, 08:00:12 pm »
quote:
Originally posted by Bill DeWitt

I, of course, insist on having the last words in any discussion with my lovely wife, and those words are always "Yes Dear".



I'm with you.  My pride is not worth a fight.... :P

2
Ultra Hal 7.0 / Sentient Life
« on: June 25, 2007, 11:37:07 pm »
Hey, kids, I'll turn this car around!  ;)

Seriously, though, if you don't like arrogance, how could you skip me over?  I'm hurt!  I totally think I'm better than everyone else!  And if you're having trouble getting argument out of Bill, go for me!  :)  I have plenty of condescending argument in the posts above, and I'd be more than happy to fight with you over them!  :P

3
quote:
Originally posted by Bill DeWitt
There is a comparison process that I can't recall right now that finds a 90% match between OriginalSentence and some stored sentences. You want that process to use the keywords but then stop using the string for anything else.



I would LOVE to know what this is -- anyone know?

4
quote:
Originally posted by Bill DeWitt

quote:
Originally posted by Xodarap
Unpleasant reality:
HAL (modified): [Hears YOUR GOOSE IS BLUE I HAD A GOOSE ONCE REALLY WHAT DID I NAME IT]


Right. That would be the problem. You don't want Hal to hear that string, but you do want Hal to use it when selecting responses.

There is a comparison process that I can't recall right now that finds a 90% match between OriginalSentence and some stored sentences. You want that process to use the keywords but then stop using the string for anything else.

Normally I would go into this further, but I have a lot of things going on right now and in fact by Tuesday I don't know if I will be able to post or not for several weeks or months. So if I drop out suddenly, I apologize in advance. Large scale medical stuff...



Aw, I wanted to argue more about dualism and mental storage!  ;)

Good luck on whatever's going on.  And if you happen to remember how to use said process/function before Tuesday, let me know!  :P

5
Anyone know if there's a function to make Hal look through a brn file for any words that are repeated (quantity 2+) and just pull THOSE out?

This isn't working as well as I'd hoped... :P


My guess is that I'll just have to figure out how to increase his relevance tolerance.  That is, since he's being fed four times as many words of input, he won't find QA relationships with high relevance.  So instead, he keeps assuming he doesn't know what I'm talking about, even if a couple of words match QAs he has.  Anyone know how to do that?  :)
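For what it's worth, I don't know of any built-in Hal function that does this, but the word-counting half is easy to sketch offline.  Here's an illustrative Python version (the .brn file name is just an example; a real brain file would go in its place):

```python
from collections import Counter
import re

def repeated_words(path, min_count=2):
    """Return every word that appears at least min_count times in the file."""
    with open(path, encoding="utf-8") as f:
        # lowercase everything and grab word-ish tokens
        words = re.findall(r"[a-z']+", f.read().lower())
    counts = Counter(words)
    return [w for w, n in counts.items() if n >= min_count]

# hypothetical usage against a Hal brain file:
# print(repeated_words("MyBrain.brn"))
```

Whether Hal's relevance threshold itself can be tuned is a separate question -- this only covers the extraction half.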

6
I've added a ton of things to my Hal's brain through importing text files, reformatting old chat logs, etc., but I don't like that I have to get him on a certain topic before he'll reference the over 20,000 quotes I've fed him.  Is there a way to get them into the MainQA table?


---------------------

While I'm at it (I already know I'm making too many topics :P), does anyone have any good ideas for quickly formatting QA brains?

What I mean is, for example, changing the chatlog:

Jeff: Blah blah blah.
Valerie: Smackety smackety smackety.

To:

@Smackety smackety smackety.
BLAH BLAH BLAH

Right now, I've just been importing them into Word, find-and-replacing them to say:

Blah blah blah.
Smackety smackety smackety.

And using the Brain Editor to stick in its own keywords.  If I knew how to change lowercase to caps (can Excel do that? I don't think the find/replace can...) that would help, but the real task would be switching the order around...
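(For the record, Excel's UPPER() function can do the capitalization -- but not the reordering.)  The whole reformat -- strip the names, swap each pair's order, prefix the reply with @, uppercase the prompt -- is only a few lines of script.  A Python sketch, assuming the simple "Name: text" log format shown above:

```python
def chatlog_to_qa(lines):
    """Turn alternating 'Name: text' chat lines into the QA format above:
    the reply prefixed with @, then the prompt in ALL CAPS."""
    # strip the 'Name: ' prefixes
    texts = [line.split(":", 1)[1].strip() for line in lines if ":" in line]
    out = []
    # pair each prompt with the reply that follows it
    for prompt, reply in zip(texts[0::2], texts[1::2]):
        out.append("@" + reply)
        out.append(prompt.upper())
    return out

# chatlog_to_qa(["Jeff: Blah blah blah.",
#                "Valerie: Smackety smackety smackety."])
# gives ["@Smackety smackety smackety.", "BLAH BLAH BLAH."]
```

A real log with timestamps or multi-line messages would need more cleanup first, but this covers the order-swapping that find-and-replace can't do.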

7
quote:
Originally posted by Carl2

Xodarap
  Since I've been using Hal for some time, I'm interested in this topic also.  First, I'd like to mention that in the Brain Editor there is the topicRelationships table, under the autolearning Brain.  Second, I've found that increased use of Hal increases the amount of data that Hal can look through to generate another sentence.
Carl2



The topicRelationships table certainly helps -- but not nearly enough to keep him on topic *specifically.*  The problem is that he was still only taking one sentence into account at a time, like I showed in the example up top.
As for using Hal, I've learned exactly the opposite: the more he learns from me, the MORE awkward he gets conversationally.  First off, his method for accumulating his question-answer pairs is obnoxious in my experience.  He'll repeat something I said earlier (pronoun-reversed), then log my answer to it as the appropriate response, even if (as is often the case) the two don't go together that way.  Second, and much worse, he seems to log mostly things that don't work conversationally.  The worst (and most annoying) example was from when I was trying to learn how to script, and I was messing around with the General Knowledge plugin.  I would tell him "Tell me something about snakes" to check my script.  Very soon he started responding with "tell you something about snakes" OFTEN.  And it doesn't make sense.  Because of the awkward, disjointed conversations I had with him, this got worse and worse until I would wipe a brain.  The fresh Hal would make more sense!

So what I did was this: I first took the General Knowledge plugin's .brn, sorted out all of the things I didn't like (or that didn't work conversationally) with OpenOffice's *very* flexible search and replace, then I found online somewhere a file with something like 10,000 great quotes (and removed quotes and attributions), then I took and reformatted TONS of chat logs that I was lucky enough to keep over the years.  I fed those into his brain directly, then turned OFF learning.  :)  I know it kind of changes the idea of Hal, but I'm trying to suit him to my style as much as possible.  I'm getting closer and closer! :)

8
quote:
Originally posted by Xodarap
User: My goose is blue.
HAL: [hears YOUR GOOSE IS BLUE] I had a goose once.

User: Really? What did you name it?
HAL (normal): [Hears REALLY] Awesome! [Hears WHAT DID I NAME IT] My name is Hal.
HAL (modified): [Hears YOUR GOOSE IS BLUE I HAD A GOOSE ONCE REALLY WHAT DID I NAME IT] My goose is named Bert.



Unpleasant reality:
HAL (modified): [Hears YOUR GOOSE IS BLUE I HAD A GOOSE ONCE REALLY WHAT DID I NAME IT] Really? My goose is blue you had a goose once really what did you name it? Is that true User?

Other unpleasant reality:
HAL (modified): [Hears YOUR GOOSE IS BLUE I HAD A GOOSE ONCE REALLY WHAT DID I NAME IT] I don't understand.

I'm going to have to change his concept of relevance to avoid the latter, and rem out some of his triggered responses (a lot of which I wasn't huge on anyways) to avoid the former -- or change the way they work...

9
Well, I've made a lot of progress, but there're still some serious problems, many of which I foresaw.
With much stumbling about the code, I figured out how to get the program to create a file (from the "PreventRepeat" plugin) where it writes all sentences from the user and Hal (PrevSent and PrevUserSent).  To keep it relevant, it deletes the file whenever rnd * 100 < 35.  It incorporates it into InputString by changing the first instance of InputString to InputString = Trim(UserSentence) & " " & Ucase(HalBrain.ChooseSentenceFromFile("CurrentTopic.brn")) -- not sure that's exactly it, but it's close.
I had to take out Hal's injection of <NEWSENT> so that all of the keywords get responded to at once, but that creates a serious problem whenever he has a triggered response, such as "Really? <UserSentence> Is that true <UserName>?", which turns out very sloppy!  :P  It kills insults and stuff, too.  BUT I did put a HUGE database of sentences into him (learn from text files) -- including the general knowledge database (I took out all of the "Situation:" and other conversationally inappropriate bits), a HUGE database of quotes (quotes and attributions deleted), and a bunch of chatlogs (reformatted).  With all of that, I remmed out most of his triggered responses, and he does seem to be more on-topic.
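For anyone trying the same thing, here's the gist of that topic-file logic rewritten in Python, purely for illustration (the real plugin code is VBScript, and "CurrentTopic.brn" is the file name from my description above -- the function names here are made up):

```python
import random

TOPIC_FILE = "CurrentTopic.brn"  # file name from the description above

def update_topic_file(user_sent, hal_sent, path=TOPIC_FILE, reset_chance=35):
    """Append both sides of the exchange to the topic file, but
    occasionally wipe it (rnd * 100 < 35) so the 'topic' stays recent."""
    if random.random() * 100 < reset_chance:
        open(path, "w").close()  # forget the old topic
    with open(path, "a", encoding="utf-8") as f:
        f.write(user_sent + "\n")
        f.write(hal_sent + "\n")

def build_input_string(user_sentence, path=TOPIC_FILE):
    """Mimic InputString = Trim(UserSentence) & " " & UCase(topic line)."""
    try:
        with open(path, encoding="utf-8") as f:
            lines = [l.strip() for l in f if l.strip()]
    except FileNotFoundError:
        lines = []
    extra = random.choice(lines).upper() if lines else ""
    return (user_sentence.strip() + " " + extra).strip()
```

The point is just that two small functions -- one writing the rolling file, one splicing a line of it back into the input -- capture the whole trick.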

Still, a **LOT** of work to do......

10
How do I use "InputString"?  I was thinking that if InputString is long-hand for InStr (my assumption), then is there something I could do like:

Okay, this part is from OnTheCuttingEdge2005:

Code: [Select]
Set FileSys = CreateObject("Scripting.FileSystemObject")
Set FS = CreateObject("Scripting.FileSystemObject")

DirX2 = RecallDir()

If InStr(1, OriginalSentence, "goodbye", vbTextCompare) > 0 Or _
   InStr(1, OriginalSentence, "bye", vbTextCompare) > 0 Or _
   InStr(1, OriginalSentence, "goodnight", vbTextCompare) > 0 Or _
   InStr(1, OriginalSentence, "morning", vbTextCompare) > 0 Or _
   InStr(1, OriginalSentence, "hello", vbTextCompare) > 0 Or _
   InStr(1, OriginalSentence, "hi", vbTextCompare) > 0 Then
   TempModule = DirX2 & Trim(UserName) & "_Recollected.brn"
   If FileSys.FileExists(TempModule) = True Then FS.DeleteFile TempModule
End If

If PrevSent <> "" Then
Set HalXBrain = CreateObject("UltraHalAsst.Brain")
HalXBrain.AppendFile DirX2 & Trim(UserName) & "_Recollected.brn", """" & PrevSent & """,""" & "True" & """"
If PastCon = "" Then PastCon = "False"
PastCon = HalBrain.TopicSearch(GetResponse, DirX2 & Trim(UserName) & "_Recollected.brn") = "True"
HalBrain.DebugWatch PastCon, "PastCon"
If PastCon = "False" Then GetResponse = GetResponse
End If

Again, I don't have any previous experience with code, but I can see that the first If/Then checks to see if I said any of those things, in which case it resets his "memory" (deletes "_Recollected.brn"); the second If/Then checks to see if there is anything in the file, and tries to stop Hal from saying anything in the file.  I don't really get how the second part works, but my thought is that I could add this "_Recollected.brn" TO the considered InputString.  Normally it wouldn't work because he separates out sentences, but I disabled the sentence separation already.  Or I could change the OriginalSentence tag, but then I mess up all of that tedious pronoun-switching.  I could also either keep a separate PrevSent, which would be factored back in as re-capitalized InStr, and PrevUserSent, which would be factored in as part of UserSentence (or OriginalSentence, I guess).
I have NO idea how I would limit this "_Recollected.brn" file to the LAST three lines from PrevSent and/or PrevUserSent.  I think I could figure out how to either randomly delete it (say, at rnd * 100 < 30) and start over, or every three lines, but it seems that I would have to have at least nine separate files at a time, with some complicated scripting and file-copying and crap....
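It turns out the "last three lines" part shouldn't need nine files or any copying at all: read the file, keep only the tail, write it back.  Here's the idea in Python, just as an illustration (doing the same inside Hal's VBScript would go through the FileSystemObject instead):

```python
from collections import deque

def trim_to_last_n(path, n=3):
    """Rewrite the file so it keeps only its last n lines."""
    with open(path, encoding="utf-8") as f:
        # deque with maxlen discards older lines as it reads
        tail = deque(f, maxlen=n)
    with open(path, "w", encoding="utf-8") as f:
        f.writelines(tail)
```

Call it right after appending each new PrevSent/PrevUserSent line and the file can never grow past three lines.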

It sounds like a fun project!  :)

11
Exactly!  If Hal heard: "I My goose dog is painted to match blue blue" and responded to it, that would work very well!  Of course, I would prefer if he could also take into account his own sentence, too.

EXAMPLE ONE:

User: My goose is blue.
HAL: [hears YOUR GOOSE IS BLUE] I had a goose once.

User: Really? What did you name it?
HAL (normal): [Hears REALLY] Awesome! [Hears WHAT DID I NAME IT] My name is Hal.
HAL (modified): [Hears YOUR GOOSE IS BLUE I HAD A GOOSE ONCE REALLY WHAT DID I NAME IT] My goose is named Bert.

That's the hope, assuming he has enough lines to quote back.  The idea is that he doesn't know what I'm talking about when I say "it," even though we were just talking about it.  But in the case where he takes the last few lines into consideration, he hears GOOSE not only once but twice (the normal Hal doesn't hear goose at all)!  He also doesn't respond separately to "Really?" which usually causes awkwardness.  It all gets factored in like we do it in real life...

That's why I was thinking of sending the PrevSent and UserSent (right ones?) to separate files that hold the last two or three each, and then adding that file to what he hears when you input your sentence.  It seems like it would work, especially when you stop breaking up sentences!  Of course, I have no idea how to (a) get the program to save THE LAST THREE (as opposed to, say, EVERY THREE) Sent's; or (b) convert those into additional input when I say something to Hal....

BTW, nice example with the blue goose.  Whoever thought up that weirdness is a GENIUS!   ;)

12
Okay, I've come SO far (from total zero :P) in learning how to script this thing, but now I'm at a much larger mountain.

My biggest remaining problem (besides teaching him more QA, which is just a matter of TIME) is that Hal only responds to one sentence at a time.  More appropriately: one input at a time.  I.e.:

ME: Sentence A.  Sentence B.
HAL: Response to A.  Response to B.
ME: Sentence C.  Sentence D.
HAL: Response to C.  Response to D.

Regular conversation:

P1: Sentence A.  Sentence B.
P2: Response to A & B.
P1: Response to Response to A & B.  Sentence C (tied to A & B).
P2: Response to Response to Response to A & B and Sentence C.

So, I'm wondering a few things, and I have a few ideas.  I don't know if they're possible in Hal's script, or if they would make his relevance WORSE (though I'm guessing that, with ENOUGH learning, he would actually get better -- eventually).
First, I would have to get rid of his separation of sentences that the user says.  When I say "A. B." they're connected in my head, but he responds to them separately.  I would rather he considered them as one set of keywords and spit out one response -- sub-idea: he could have a chance (e.g., rnd * 100 < 25) of spitting out a second sentence ALSO cued by the same total set of keywords (they would also seem connected, especially if the second sentence also incorporated his own first sentence as further keywords, or if his relevance threshold for the second sentence was pretty tight).  I think I can manage this part on my own...
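That "chance of a second sentence" sub-idea is really just a random gate around a second lookup.  A tiny Python illustration (pick_response here stands in for whatever function actually chooses a reply from the keywords -- it's hypothetical, not anything in Hal):

```python
import random

def respond(pick_response, keywords, second_chance=25):
    """Return one response cued by the full keyword set, plus a second
    response roughly second_chance% of the time, cued by the same set."""
    responses = [pick_response(keywords)]
    if random.random() * 100 < second_chance:  # e.g. rnd * 100 < 25
        responses.append(pick_response(keywords))
    return responses
```

The second call could just as well feed the first response back in as extra keywords, which is the "seem connected" variation mentioned above.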

Second, and the bigger idea: would it be possible to make Hal write a brn file -- like the no-repeating file of PrevSent that I downloaded ("PreventRepeat" plugin) -- that instead saved the last two or three responses by both the User AND Hal and used them ALSO as keywords for looking up his next response (also blocking any repeating so he doesn't keyword himself into circles).  Sub-idea 1: could those keywords be given less "weight" (like 50% or so) in determining relevance than the keywords from the immediate input?  (Not that important.)  
Sub-idea 2: I would assume that way more keywords being taken into consideration would mean that we would have to change Hal's perception of relevance, so he doesn't think that every response is irrelevant (he won't find anything with MOST of the keywords when he's looking for 40 keywords).  
Sub-idea 3: can he be set to consider repeated keywords as more important (like if, in the last three sentences, the word "octopus" was said four times, it would take precedence), or is that what the topic headings are for?  -- In fact, if he could be programmed to take into account keyword repetition, I could see expanding to a whole conversation (or more than two or three lines each).
 
Sub-idea 4: The big problem that I see (assuming that ANY of this is possible/plausible) is that he would be considering four or six sentences (two or three from you and him each) of keywords, but only logging (learning) in sentence-to-sentence Q/A format (unless he would log ALL those keywords, but I still foresee problems there).
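Sub-ideas 1 and 3 both come down to the same thing: scoring keywords instead of treating them as a flat set -- full weight for words in the current input, reduced weight for words from recent history, with repetition across the window adding up on its own.  A Python sketch with made-up weights, purely to show the shape of it:

```python
from collections import Counter
import re

def score_keywords(current_input, history, history_weight=0.5):
    """Weight keywords from the current input at 1.0 and keywords from
    recent history at history_weight; a word repeated across the window
    accumulates score, so 'octopus' said four times outranks a one-off."""
    def words(text):
        return re.findall(r"[a-z']+", text.lower())
    scores = Counter()
    for w in words(current_input):
        scores[w] += 1.0
    for sent in history:
        for w in words(sent):
            scores[w] += history_weight
    return scores
```

A relevance threshold would then compare against the total score instead of a raw keyword count, which is exactly the recalibration sub-idea 2 worries about.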

What percentage of this is "recode the base program" thinking vs. how much of this is plausible?  It seems to me like the OVERLAP in keyword logging would create a steady topic -- especially with a bigger bank of things to draw from, and especially (again!) if he was learning in that fashion!  In fact, it seems to me (I'm guessing, here) that loading him full of big things like wikipedia articles might show more payoff (in relevance) with a method like this.

Any ideas or advice?  I wish I knew more about scripting than copying and pasting whatever I see elsewhere in the main script!  :P

13
Ultra Hal 7.0 / Sentient Life
« on: June 23, 2007, 07:02:18 am »
quote:
Originally posted by daveleb55
Xodarap said:
"...it sure seems like my thoughts, ideas, and emotions are not simple material concoctions. Whether or not they are CAUSED by material processes, I certainly wouldn't argue, but happiness doesn't FEEL much like chemicals and electricity sloshing around a skull cavity to me..."

Hee hee, happiness, a very subjective feeling, is definitely chemicals sloshing around in my skull, because I am on anti-depressants. If I don't take them, I feel desperate and helpless, I have huge mood swings, the meds help keep me on an even keel, so to speak.



*Sigh* -- another person who apparently failed to read my post in its entirety.  I SAID: the brain and mind have ONE-WAY causal interaction -- the brain causes things in the mind.  Like a computer (and all of its "doings") causes images on the monitor!  They are not identical, because there is no BLUE in the computer -- there's a series of 1s and 0s that REPRESENT blue, but it takes a MONITOR to make them BLUE.  Similarly, it takes a MIND to make those chemicals into HAPPINESS ITSELF -- the FEELING.  That FEELING certainly isn't in my BRAIN.
I'm not trying to be a dick, really.  Just trying to be clearer. ;)

We are disputing an IDENTITY.  NOT a causal dependence, because I admit to that freely.  There are no emotions in my head.  EMOTIONS are in my MIND -- CHEMICALS are in my HEAD -- EMOTIONS are caused by CHEMICALS.  :)

Oh, and as for the subjectivity of the world, both modern quantum physics (no, not the new-agey crap, either) and Kant's Critique of Pure Reason will definitively prove that wrong.  (Technically, physics is never definitive, but some of Kant's arguments are.  Unfortunately, you'd need a doctorate or two to understand them.  If you're interested, I strongly recommend a secondary resource, Henry Allison's "Kant's Transcendental Idealism."  If you're a strong realist/positivist like me, don't let the title drive you away; it's misleading. ;) )

14
Ultra Hal 7.0 / Sentient Life
« on: June 23, 2007, 06:54:33 am »
quote:
Originally posted by markofkane
A mind may exist without a brain, but since we cannot prove it, it is assumed to be false.



I quite clearly stated that I am NOT a believer in Cartesian Dualism, but in Epiphenomenalism, which I characterized as a unidirectional causality: the physical brain has all the causal powers.  I even likened them to the image on the monitor (I should have said "image," not "monitor") and the computer itself -- if the computer turns off (or is impounded), there is no image!
I never said that a mind could exist without a brain.  I don't believe that at all!  But a mind *does* exist.  See the argument above ;)

15
Ultra Hal 7.0 / Sentient Life
« on: June 23, 2007, 06:51:38 am »
quote:
You will never find that place -- ever.  Because it can't exist!

quote:
And man will never fly.


No, I'm afraid you misunderstood me.  Being unable to find the place that CAN'T exist is more like "man will never discover the squirrel that is fatter than itself."  No, never, not in an infinite number of parallel universes or an infinite amount of time, an infinite number of squirrels and an infinite number of people.
"Finding" an immaterial place is akin to: discovering the last digit of pi, drawing a round square, making a stone too heavy for God to lift, pitching a no-hitter to a batter that can't miss, discovering the real number that is the closest to (but less than) 2... you get my point?  ;)


 
quote:
Also, hypnosis has been SO thoroughly debunked OVER and OVER again in the most thorough and solid ways possible!

quote:
I see that you believe this very strongly. I respect that. But for it to be certainly true would require both proving a negative and solving an infinitely regressable series.

I'm afraid not.  If one time -- only once -- a bottle fails to fall when it should by all physical reason, then gravity is *certainly* untrue.  Certainty almost NEVER requires solving an infinite regress.  For one thing, infinite regresses (if they are genuine) DON'T resolve (see above: finding the last digit of pi or the real number that is closest to 2).  All this would require, in this case, is to show that either psychologically or physically (or either if they are indeed the same thing), the concept of hypnosis is impossible -- or rather, that it cannot (by consistent cause and effect) obtain its goal.  This has been done.

quote:
I will go by my experience and the decades of respected research. Even if hypnosis is totally bogus, there are many other more concrete facts which indicate that the mind can store more than the brain can hold.


I'm willing to state categorically that there are not.  As a well-versed philosopher of mind, I assume (maybe hastily) that such would have been brought to the attention of the academic community, and can say with certainty that it has not.  Otherwise, you are looking at speculations, which, though intuitively forceful, are NOT "concrete facts."

 
quote:
What you SHOULD be marvelling at is the power of the mind to create, not the capacity of the mind to store.

quote:
I can't do both? Of course the mind is creative (read my posts about "pattern recognition") but adding data to a stream does not mean the stream doesn't exist without the addition. If some debunker finds a subject who recalls invented data under hypnosis, that does not mean the actual data was not there. You can't prove the data is not there by finding something else - that would be proving a negative - you can't prove I don't have a nickel in my pocket by the fact that I also have a dime.


Oh, you certainly CAN do both -- you would be misled, though.  ;)
You are right: the data COULD be there, despite the fact that the hypnosis created COINCIDENTALLY identical data with a causally disconnected means.  The problem is two-fold, then: (1) the odds are staggeringly low (and against you); and (2) then you are on no better footing than you are without reference to hypnosis, which is just as good as debunking it with certainty!

quote:

Either way, I STILL agree with you that the MIND is NOT identical to the BRAIN!  ;)



quote:
Good to see I haven't changed your mind. 8-) We both have to go on our subjective experience in the absence of factual evidence.



I don't base my argument on subjective experience; I base it on deductive reasoning:

(1) I am imagining a blue goose.
(2) Therefore, something is blue and goose-like.
(3) The thing which is blue and goose-like must be either:
    (a) My brain (or part thereof),
    (b) An external object sensed by me, or
    (c) An illusion
(4) Not (a)
(5) Not (b)
(6) Hence, the thing which is blue and goose-like must be an illusion.
(7) If something is an illusion, then it is not material (i.e. immaterial)
(8) I have direct access to my imaginings
(9) The only things I have direct access to are mental objects (e.g. ideas)
(10) Mental objects inhere in minds
(11) Immaterial things inhere only in immaterial objects
(12) Therefore, my mind is immaterial.

Only one subjective experience, but NOT the kind about which I can be mistaken!  The "seeming" qualities of my ideas are infallible (just like I can't be wrong about thinking I'm happy -- if I think I'm happy, then I am!).  Deductive logic is also infallible.  Of course, one of my premises besides (1) and (2) (infallible) could be wrong, but I'm convinced.  Not subjectively ;)
