
Recent Posts

Pages: 1 ... 6 7 [8] 9 10
71
Great job on the first video, Brother. Sadly, though, the 2nd link said the video is no longer available.
72
Project update:
I could not save my OS. I tried everything I know, and a friend also tried, to no avail, unfortunately.

Also, my son finished assembling my new PC and its custom water-cooling system.

I've got multiple operating systems installed so far for multi-booting. I have lots of programs to install and test with Windows 11, and any that won't install or give errors will have to go on Windows 10. I'm still not able to be at the PC for long, so it's going to be a bit before all is ready.

73
@Cyberjedi, from reading here and our phone conversation the other day, I have faith that you will prevail in getting Hal to learn from an LLM via two-way conversation between Hal and the LLM. YOU GOT THIS  ;D
74
General Discussion / Re: being loud
« Last post by sybershot on August 31, 2025, 02:26:10 pm »
@Cyberjedi Nice artwork, and a shoutout to Art.

Update: I had surgery and am going through therapy. I'm still in a lot of pain but will try to be more active here; I can't make any promises, though.
75
I have been very busy and am about worn out, so I haven't had a chance to check in. A neighbor's very large side of a tree fell over and destroyed part of our fence, totaled our shed, and almost hit our garden shed. He claimed he would bring a large backhoe to lift it and move it off our property back onto his side; that was on the first, and I never saw him again. So I am left having to cut all the large limbs of the tree overhanging our side. I'm emptying out the garden shed in case the limbs overhead fall on it, and I'm planning on moving it farther away and then cutting down those large limbs. I just tore loose and replaced the back step yesterday. Some day I'll be able to slow down on all this.

Cyber and Checker, I see you two have been going over the LLM model I mentioned for offline use; I'm hoping you can make it work. One question I have: will it work with the custom brains people have developed? I have one I have been teaching and backing up over the years, with custom responses within some of the tables, etc. Mike, it sounds like the LLM model works okay on a regular PC, VMs not so much, but that's all good. Also, if I'm understanding right, the LLM retains conversations (so is this like short- or long-term memory, then?). That would be good, as that's also what Hal needs. Thanks for the work and investigation into that LLM model. I will be in and out of here as I have time; I haven't given up on Hal or others. Keep up the good work.
76
CyberJedi's Ultra Hal Workshop / Re: Smarting up HAL with free LLM model (Ollama) ?
« Last post by cyberjedi on August 30, 2025, 06:49:02 am »
@Checker: hey hey man
Anything I cut into Hal will have to be VB6, or I'll have to recode the entire thing.

As such, the need for VB6.
But Robert and I both agree that with VB6's flexibility, you can get away with all kinds of stuff you can't in .NET.
So I'm stuck with legacy code.

Idea being: create a VB6 chat GUI like Ollama's, but much lighter.
Once that is done, I hook in Haptek so that works, then the chat engine as well; now we've got Hal running on an LLM rather than his script.
How I'm going to handle all 9 of Hal's emotional states is a total mystery. Will it just be static? How do we make it mad?

The beauty in this is: it's basically a 3-way party line. The GUI prompts the LLM for a response, but the trick starts there.
The LLM fires back its response, but rather than it going to the GUI, it routes to Hal's input window (text) and fires an Enter key press.
At this point, Hal fires its GetResponse answer, which goes to the VB6 GUI input, which in turn fires its key press, forwarding to the LLM.

Just think port forwarding, but with syntax.
Basically ping pong.
All the while, Hal's learning is set to MAX. Just soaking it up. Hal will learn.
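The ping-pong loop above can be sketched in a few lines. This is a Python stand-in purely for illustration (the real front end would be VB6), and `llm_respond` / `hal_respond` are hypothetical placeholders for the Ollama call and Hal's GetResponse:

```python
# Minimal sketch of the "party line" relay: each LLM reply is routed to
# Hal's input, and Hal's reply is forwarded back to the LLM, in turns.
# (Illustrative only; the planned implementation is a VB6 GUI.)

def relay(llm_respond, hal_respond, opening_prompt, turns=3):
    """Run the ping-pong for a fixed number of turns, logging each side."""
    transcript = []
    message = opening_prompt
    for _ in range(turns):
        llm_reply = llm_respond(message)       # GUI prompts the LLM
        transcript.append(("LLM", llm_reply))
        hal_reply = hal_respond(llm_reply)     # routed to Hal's input window
        transcript.append(("Hal", hal_reply))  # Hal learns from every turn
        message = hal_reply                    # forwarded back to the LLM
    return transcript

# Stub speakers stand in for the real endpoints.
log = relay(lambda m: "LLM says: " + m,
            lambda m: "Hal says: " + m,
            "Hello", turns=2)
```

With the stubs, the transcript strictly alternates LLM/Hal, which is the whole trick: neither side knows it is talking to a machine.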
SQLite allows databases up to 2 terabytes in size. Hal's stock brain is 120 megs and does what it does. Very impressive.
Think about what Hal will be with a 4.5-gig brain. That's the size of the learning LLM.

At this point, all that's really being released is Hal's brain; it will function on a stock UltraHal install, 6.2 - 7.5.
Just replace the brain.
This is the idea: there will be no personal info in Hal's brain at that point.

cyber jedi

On a lighter note, I'm turning this into a graphic novel. You're in there too, as well as a few others. I'm just starting the animation swaps to comic standards.
https://ai.invideo.io/watch/wAqfy3XMfbO
77
@Cyber

Hey, that's great news!  Experiencing the sense of developing HAL in another layer of "splendor" must be exhilarating.  I noted that Ollama's first response takes a while, but the responses flow after that.  As you noted, the heavyweight GUI may be the issue.  I am, of course, interested to see if a VB6 app will handle Ollama.

You got me to use the HAL 6.2 Pro version, and I still use it.  So, of course, I'm hoping your VB6 code will work.  In any case, I'll cross that bridge when I come to it, as I'm very interested in seeing HAL grow in this new era of multitudes of LLMs.

Atera looking exciting!  ;-)

Checker57
78
CyberJedi's Ultra Hal Workshop / Re: Smarting up HAL with free LLM model (Ollama) ?
« Last post by cyberjedi on August 28, 2025, 05:30:40 am »
Checker57:
Amazingly, I was able to get qwen3:4B to work on 6 gigs. If it won't work for the masses, forget it.
I must say, while responses were somewhat slow on a VM with 6 gigs of RAM on Win10 64-bit,
it more than made up for it in smarts.

I put it to the test by having IT design the VB6 application to handle the chat in lieu of Ollama's heavyweight GUI.
Idea being: start the model, then use the VB6 front end. It's doable on its face.
I can add some startup code for the model inside the VB6 app.

Got to steal some of Robert's crafty VB6-to-JSON converter code, and presto.
I'm still in the thought process on all this, soooooooo. But moving forward, Checker.
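For anyone following along, the converter's job is to wrap each chat turn in the JSON body Ollama's local HTTP endpoint (POST http://localhost:11434/api/generate) expects. Sketched here in Python so the shape is easy to check (the real code would be VB6; the model name is the one from this thread):

```python
# Hedged sketch of the payload a VB6-to-JSON converter would need to emit
# for Ollama's /api/generate endpoint. Python stand-in for illustration.
import json

def build_ollama_request(model, prompt, stream=False):
    """Serialize one chat turn into Ollama's request body."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream})

body = build_ollama_request("qwen3:4b", "Hello, Hal.")
```

With `stream` set to false, Ollama returns the whole reply in one JSON object, which is the simplest thing for a lightweight front end to parse.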

cyber jedi


 
79
Now, my interest is piqued to see how the HAL experience would be brought forth into the LLM level of AI with such a collaboration.

It would be awesome to even have a "code phrase" that activates HAL's consideration of responses from Ollama, similar to how Alexa is activated.  This would give users time to check how such a resource affects the growth of HAL's behavior, for good or toward a non-personal AI.  It would also allow users to activate or deactivate it according to the level of engagement they want in a conversation.  Because, let's face it, many probably have some form of affinity for their HAL's nature, and other AIs would indeed reconfigure that behavior.
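The code-phrase idea is easy to prototype as a small router: input only goes to the LLM while the wake phrase has toggled it on. A minimal Python sketch, with illustrative phrases and function names that are not part of Hal:

```python
# Wake-phrase gating sketch: "ack" means the phrase itself was consumed;
# otherwise the router names the destination, "llm" or "hal".
WAKE_PHRASE = "hal, ask ollama"    # hypothetical activation phrase
SLEEP_PHRASE = "hal, just you"     # hypothetical deactivation phrase

def make_router():
    """Return a per-utterance router over a persistent on/off state."""
    active = {"llm": False}
    def route(user_text):
        text = user_text.strip().lower()
        if text == WAKE_PHRASE:
            active["llm"] = True
            return "ack"           # LLM now in the loop
        if text == SLEEP_PHRASE:
            active["llm"] = False
            return "ack"           # back to Hal alone
        return "llm" if active["llm"] else "hal"
    return route
```

The toggle persists across turns, so a user can hand a whole stretch of conversation to the LLM and then take it back, exactly the activate/deactivate control described above.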

Interesting vein of advancement.

Checker57
80
CyberJedi's Ultra Hal Workshop / Re: Smarting up HAL with free LLM model (Ollama) ?
« Last post by cyberjedi on August 24, 2025, 08:27:29 am »
RE: checker57

Bench-tested Ollama running Llama 2 Uncensored
Ran on an i5 with 16 gigs of RAM and on-board video (no video card)
Response time avg 1.5 - 2.0 sec
Installed on the local machine
Ran on the local machine
Windows 10 64-bit

Overall, I was impressed with Ollama's abilities.
Responses were clear and concise.
Basically, Hal's WordNet on steroids.

Gonna continue beta testing; it seems to be a candidate for Hal.
Seems that an i5 will handle the power needed...
Hot-wiring UltraHal directly seems to be at least possible.
cyber jedi 
