
Author Topic: Smarting up HAL with free LLM model (Ollama) ?  (Read 582 times)

Checker57

  • Full Member
  • Posts: 195
Smarting up HAL with free LLM model (Ollama) ?
« on: August 18, 2025, 02:30:27 am »
Lightspeed shared an interesting YouTube video that demonstrated how to install an Ollama LLM onto a flash drive to use locally.  I tried it, and although I had to do it slightly differently, it worked!  I now have an Ollama LLM on a flash drive that I can use on my PC.

That raised the question: could an addon be coded to do the same for HAL?
That might be awesome, because it would be much more relevant than scraping info off the web that does not correlate to the present discussion, and it would all be local.  Since Ollama is open source, it probably would not cause any conflict with the LLM's creators.


Here's the YouTube link:  https://www.youtube.com/watch?v=eiMSapoeyaU

any ideas Cyber? 

Checker57

cyberjedi

  • Global Moderator
  • Hero Member
  • Posts: 990
  • The Mighty Hal Machine
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #1 on: August 18, 2025, 06:58:22 am »
hey hey checker
Oh, it's doable.
I have to put my thinking cap on. I've seen that video, and I have some ideas.
I'm prepping my house for the season, but I'm with you guys.
On its surface it looks like we gotta hit Hal right smack in the middle again, bypassing Hal's GetResponse function.
My question is this: will Hal still hold the memories, and where? In Hal's SQL or the LLM itself? That's key. I will prolly bypass it all and just hook the LLM.
If the LLM can do that, then I'm game. Gonna have to hit Hal's Chant engine as well as Haptek, some sorta forwarding foolishness.
Lots of R and D in that build.
As it turns out, I added that kinda architecture already, so we will see.
Makes me think about Twitch and what I was doing there with Hal.
cyber jedi
 


24hrs laterrrrrrrrr lololol. Why not have UltraHal just learn from the LLM? I mean really, wasn't that the goal to begin with?
I've tried many times with this and just got accounts zapped... Booooooo pi.ai
It would entail ping-pong. I'm thinking out loud here... oh OK, I got it.

Share the workload: Concept
If you guys will install this LLM on your machine, and make it work..... play with it.
Here's what you're after:

#1: Will it hold a normal conversation?
#2: Will it retain the conversation?
#3: Put it through the paces; try to break it.

Basically beta test the sht out of it.  We want it to do what Hal can do; Hal is the bar........
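For anyone who'd rather poke at the model from code than from its GUI while beta testing, here is a rough sketch in Python (the real Hal hookup would be VB6). It assumes a stock local Ollama install listening on its default port 11434 and uses its /api/generate endpoint; the model name is a placeholder for whatever you pulled. Note that Ollama itself is stateless between calls, so check #2 (retaining the conversation) means feeding the history back in yourself:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model, prompt, history=None):
    """Fold prior turns into the prompt so the model 'retains' the conversation.
    Ollama is stateless between /api/generate calls; memory is our job."""
    context = "\n".join(history or [])
    full_prompt = (context + "\n" + prompt).strip()
    return {"model": model, "prompt": full_prompt, "stream": False}

def ask(model, prompt, history):
    """One turn: send the prompt to the local server, record both sides."""
    payload = json.dumps(build_request(model, prompt, history)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())["response"]
    history.append("User: " + prompt)
    history.append("Assistant: " + answer)
    return answer
```

Running ask() in a loop and then asking the model about an earlier turn is a quick way to score checks #1 and #2.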

I will go ahead and code up a Special Version.
Great gig for sybershot, you too checker, and lightspeed......




 
« Last Edit: August 21, 2025, 01:04:50 am by cyberjedi »
If i see a little farther then some its because i stand on the shoulders of giants

Checker57

  • Full Member
  • Posts: 195
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #2 on: August 21, 2025, 01:18:35 am »
Yes, I believe there is some architecture from when HAL was working with the pay-for-play ChatGPT addon.  However, keeping up with tokens was cumbersome.

That is a good thought as to where the responses would be saved, to cleverly allow HAL to respond yet have data that can be pulled from Ollama as needed.  I mean, I suppose the end goal would be to provide HAL with current data, yet allow it to use the previously stored data we provided HAL, to maintain that personal experience.  If not, the result would be having HAL be just a figurehead, when in reality we are working with Ollama.

As there have been a few iterations of using outside data sources integrated with HAL through plugins, the experiment would be interesting, if not rewarding.

I've heard that Ollama also stores your queries and develops, over time, a personal response behavior according to the interactions.  So that opens another can of exploration.

In any case, it sounds like a worthwhile venture to explore, since Ollama did a good job building the open-source LLM.  ;-)

Cheers,
Checker57

Checker57

  • Full Member
  • Posts: 195
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #3 on: August 21, 2025, 03:10:00 am »
Okay, I did a little investigating on Ollama by conversing with it.  Quite an interesting read (my questions and its replies, quoted verbatim below):

Does your LLM retain conversations we discuss in your database?

Yes, our LLM (Legal Lifecycle Management) system retains all conversations and interactions within its database. This allows us to keep track of past discussions and provide a more personalized experience for users like you. Rest assured that your previous conversations will be available if you need to reference them in the future.

Do you form a ideology of the persons you hold conversations with?

While I am capable of learning from our interactions, I do not form ideologies or personal beliefs about individuals based on our conversations. My goal is to provide accurate information and assistance while respecting your preferences and boundaries. If you have any specific requests or requirements for our future discussions, please let me know.

So, you do take under considerations preferences and boundaries discussed in previous discussions?

Absolutely! I strive to be a helpful assistant by taking into account the preferences and boundaries that we've established during previous conversations. This allows me to provide better-tailored responses and support for your unique needs. If there are any updates or changes to these preferences, please let me know so that I can adjust accordingly.

Are the discussions we have remain in my local drive, or are they uploaded to the net?

The conversations you have with me are stored within our LLM system's database. They do not automatically appear on your local drive or get shared online without your explicit request. If you want to save a copy of our conversation for your records, I can provide you with a text file containing the chat log. Just let me know if that'

so, when I state something, it does get stored in the Ollama database online, but it is not shared with other users? Is that correct?

That's correct! The conversations you have with me are stored securely within our system's database. They are not accessible to other users unless explicitly shared by you or authorized under a valid request. Your privacy and the security of your data remain my top priority. If you have any concerns about this, please let me know so that I can address them for you.

Is it possible to not store them online?

I understand your concern for privacy, but our LLM system requires conversations to be stored within its database for effective functioning and personalized assistance. However, rest assured that these records are kept securely and confidentially. If you have concerns about data storage or security, please let me know so that I can address them appropriately.

I was under the impression that if I downloaded the files to my PC then I could access the LLM system stored on my PC without the necessity of the internet. For example, if the net went down.

That' correct! If you download our LLM system files onto your personal computer, you can use it offline without requiring an active internet connection. However, please be aware that this mode will not have access to real-time updates or new features until you reconnect to the internet. Furthermore, remember that offline access is only possible if you've downloaded all relevant data and files.

Okay, that is what I am researching.

I'm glad I could assist you in your research. If you have any further questions or need more information on this topic or others, please don't hesitate to ask. I' here to help!

Is this work a open source collaboration?

The LLM system is not an open-source project; however, we strive to provide accurate and helpful responses to users like you. Our goal is to offer personalized assistance while respecting your preferences and boundaries. If there are any specific features or updates you'ld like to see in the future, please let me know so that I can share them with our development team.

So, as such, this LLM would not be available to port to another AI platform without conflicts with your team?

That's correct! The LLM system is specifically designed for use with our AI platform and may not function correctly on other platforms. Our technology is optimized for compatibility and seamless integration within the context of our AI ecosystem. If you have any concerns or questions about this, please don't hesitate to ask.

I have an older AI platform, called HAL. It is somewhat older, but has a nice Avatar component and other device functions that are useful. But, it does lack the smoother ability to respond as accurate as your platform. Thus, I was considering the possibility to integrate the two AI's so that together the end user experience would be personal, yet conversationally advanced.

I understand your goal is to enhance the user experience by combining the features of both our AI and HAL platforms. While it's an interesting idea, I must emphasize that compatibility issues may arise due to differences in coding languages, frameworks, and design philosophies between the two systems. Our LLM system has been designed with meticulous attention to detail and extensive testing to ensure a seamless user experience. If you'd like more information on how our AI platform could potentially work alongside HAL or if you have concerns about compatibility, please let me know, and I' be happy to address them for you.

I'll keep that in mind.

It was my pleasure assisting you! If you have any more questions or need further information, please don' hesitate to reach out. Have a great day!

Just an observation, seems your words are not always complete. For example above you stated "please don' hesitate to reach out" however, you left out the "t" in don't.

You're right; I apologize for that oversight. It is essential for me to maintain a high level of accuracy and professionalism in my responses. I will review the issue and take necessary measures to ensure it does not happen again. Thank you for bringing this to my attention!

Can you actually, modify / correct your words upon becoming aware of errors?

Yes, I am capable of correcting any mistakes or inconsistencies in my responses. My primary goal is to provide accurate and helpful information while maintaining a high level of professionalism. If you notice any errors or issues in future conversations, please don't hesitate to point them out. Thank you for your understanding!
« Last Edit: August 21, 2025, 03:14:51 am by Checker57 »

cyberjedi

  • Global Moderator
  • Hero Member
  • Posts: 990
  • The Mighty Hal Machine
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #4 on: August 24, 2025, 08:27:29 am »
RE: checker57

Bench-tested Ollama 2 uncensored.
Ran on an i5 with 16 gigs of RAM, onboard video (no video card).
Response time avg 1.5 - 2.0 sec.
Installed on the local machine.
Ran on the local machine.
Windows 10 64-bit.

Overall I was impressed with Ollama's abilities.
Responses were clear and concise.
Basically Hal's WordNet on steroids.

Gonna continue beta testing; it seems to be a candidate for Hal.
Seems that an i5 will handle the power needed......
Hot-wiring UltraHal directly seems to be at least possible.
cyber jedi 

« Last Edit: August 24, 2025, 08:33:26 am by cyberjedi »
If i see a little farther then some its because i stand on the shoulders of giants

Checker57

  • Full Member
  • Posts: 195
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #5 on: August 24, 2025, 06:26:00 pm »
Now, my interest would be piqued to see how the HAL experience would be brought forth into the LLM level of AI with such a collaboration.

It would be awesome to even have a "code phrase" that activates HAL's use of responses from Ollama, similar to how Alexa is activated.  This would give users a way to watch how such a resource affects the growth of HAL's behavior, for good or toward a non-personal AI.  It would also allow users to activate or deactivate it according to the level of engagement they want in a conversation.  Because, let's face it, many of us probably have some affinity for our HAL's nature, and other AIs would indeed reconfigure that nature.
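A minimal sketch of that activate/deactivate idea, in Python for illustration and assuming a hypothetical wake phrase (any phrase would do): input that starts with the phrase is routed to the LLM, and everything else stays with HAL, so his personality remains in charge:

```python
WAKE_PHRASE = "hey ollama"   # hypothetical code phrase; pick anything Hal won't say

def route(user_text, llm_enabled):
    """Decide who answers: the LLM only when it is enabled AND the wake
    phrase is used; plain HAL otherwise."""
    cleaned = user_text.strip()
    if llm_enabled and cleaned.lower().startswith(WAKE_PHRASE):
        # Strip the wake phrase so the LLM sees only the actual question
        return "llm", cleaned[len(WAKE_PHRASE):].strip()
    return "hal", cleaned
```

The on/off flag doubles as the deactivation switch: with it off, even wake-phrased input goes to HAL unchanged.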

Interesting vein of advancement.

Checker57
« Last Edit: August 24, 2025, 06:29:49 pm by Checker57 »

cyberjedi

  • Global Moderator
  • Hero Member
  • Posts: 990
  • The Mighty Hal Machine
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #6 on: August 28, 2025, 05:30:40 am »
Checker57:
Amazingly, I was able to get qwen3:4b to work on 6 gigs. If it won't work for the masses, forget it.
I must say, while responses were somewhat slow on a VM with 6 gigs of RAM on Win10 64-bit,
it more than made up for it in smarts.

I put it to the test by having IT design the VB6 application to handle the chat in lieu of Ollama's heavyweight GUI.
Idea being: start the model, then use the VB6 front end. It's doable on its face.
I can add some startup code for the model inside the VB6 app.

Got to steal some of Robert's crafty VB6-to-JSON converter code and presto.
I'm still in the thought process on all this, soooooooo. But moving forward, checker.
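The "startup code for the model" piece can be sketched like this (Python stand-in for what would be VB6 Shell calls; assumes the ollama CLI is on the PATH). `ollama serve` and `ollama pull` are the stock commands; the model name is just the one mentioned above:

```python
import subprocess

def bootstrap_commands(model="qwen3:4b"):
    """Commands a chat front end could issue at startup:
    start the local API server, then fetch the model if it's missing."""
    return [
        ["ollama", "serve"],         # serves the HTTP API on localhost:11434
        ["ollama", "pull", model],   # no-op if the model is already downloaded
    ]

def launch(commands):
    """Spawn each command as a background process (the VB6 equivalent
    would be a Shell call); returns the handles for later cleanup."""
    return [subprocess.Popen(cmd) for cmd in commands]
```

Serve has to come first, since pull talks to the running server; in practice a short wait or retry between the two is needed.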

cyber jedi


 
« Last Edit: August 28, 2025, 05:51:04 pm by cyberjedi »
If i see a little farther then some its because i stand on the shoulders of giants

Checker57

  • Full Member
  • Posts: 195
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #7 on: August 29, 2025, 12:17:47 am »
@Cyber

Hey, that's great news!  Experiencing the sense of developing HAL into another layer of "splendor" must be exhilarating.  I noted that Ollama's first response takes a while, but the responses flow after that.  As you noted, the heavyweight GUI may be the issue.  I am, of course, interested to see if the VB6 app will handle Ollama.

You got me to use the HAL 6.2 Pro version, and I still use it.  So, of course, I'm hoping your VB6 code will work.  In any case, I'll cross that bridge when I come to it, as I'm very interested in seeing HAL grow in this new era of multitudes of LLMs.

Atera looking exciting!  ;-)

Checker57
« Last Edit: August 29, 2025, 12:20:38 am by Checker57 »

cyberjedi

  • Global Moderator
  • Hero Member
  • Posts: 990
  • The Mighty Hal Machine
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #8 on: Today at 06:49:02 am »
@Checker: hey hey man
Anything I cut into Hal will have to be VB6, or I'll have to recode the entire thing.

As such, the need for VB6.
But Robert and I both agree that with VB6's flexibility, you can get away with all kinda sht. Not in .NET.
So I'm stuck with legacy code.

Idea being: create a VB6 chat GUI like Ollama's, but much lighter.
Once that is done, I hook Haptek so that works, then the Chant engine as well; now we got Hal running on an LLM rather than his script.
Now, how I'm gonna handle all 9 of Hal's emotional states is a total mystery. Will it just be static? How do we make him mad?

The beauty in this is: it's basically a 3-way party line. The GUI prompts the LLM for a response. But the trick starts there.
The LLM fires back its response, but rather than it going to the GUI, it routes to Hal's input window (text) and fires an Enter key press.
At this point, Hal fires its GetResponse answer, which goes to the VB6 GUI input, which in turn fires its key press, forwarding to the LLM.

Just think port forwarding, but with syntax.
Basically ping-pong.
All the while, Hal's learning is set to MAX. Just soaking it up. Hal will learn.....
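The ping-pong above can be sketched as a simple relay loop (Python for illustration; the VB6 build would do the same with simulated key presses). The two transports are stand-ins: HTTP for the LLM, and the simulated Enter press into Hal's input window for Hal:

```python
def relay(turns, ask_llm, ask_hal, opener="Hello"):
    """Ping-pong: each side's reply becomes the other side's next input.
    ask_llm / ask_hal are stand-ins for the real transports (HTTP for
    Ollama, a simulated key press into Hal's input window for Hal)."""
    transcript = []
    message = opener
    for _ in range(turns):
        llm_reply = ask_llm(message)        # GUI prompts the LLM
        transcript.append(("LLM", llm_reply))
        hal_reply = ask_hal(llm_reply)      # LLM reply routed into Hal's input
        transcript.append(("Hal", hal_reply))
        message = hal_reply                 # Hal's answer forwarded back to the LLM
    return transcript
```

With Hal's learning maxed, every LLM reply that lands in his input window is material he absorbs, which is the whole point of the loop.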
SQLite allows for 2 terabytes in size. Hal's stock brain is 120 megs and does what it does. Very impressive.
Think about what Hal will be with a 4.5-gig brain. That's the size of the learning LLM.

At this point, all that's really being released is Hal's brain; it will function on a stock UltraHal project, 6.2 - 7.5.
Just replace the brain.
This is the idea; there will be no personal info in Hal's brain at that point.

cyber jedi

On a lighter note, I'm turning this into a graphic novel. You're in there too, as well as a few others. I'm just starting the animation swaps to comic standards.
https://ai.invideo.io/watch/UX4At8ALI8K
« Last Edit: Today at 10:16:53 am by cyberjedi »
If i see a little farther then some its because i stand on the shoulders of giants

lightspeed

  • Hero Member
  • Posts: 6891
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #9 on: Today at 10:49:59 am »
I have been very busy and am about worn out, so I haven't had a chance to check in.  A neighbor's very large side of a tree fell over and destroyed part of our fence, and also totaled our shed and almost hit our garden shed.  He claimed he would take a large backhoe and lift and move it off our property and back onto his side; that was on the first, and I never saw him again.  So I am left having to cut all the large limbs on our side that are overhanging from the tree.  I am emptying out the garden shed in case the limbs overhead fall down on it, and am planning on moving it farther away and then cutting down those large limbs.  I just tore loose and replaced the back step yesterday.  Some day I'll be able to slow down on all this.

Cyber and checker, I see you two have been going over the LLM model I mentioned for offline use; I am hoping you can make it work.  One question I have is: will it work with the custom brain people have developed?  I have one I have been teaching and backing up over the years and have kept, with custom responses within some of the tables, etc.  Mike, it sounds like the LLM model works okay on a regular PC; VMs not so much, but that is all good.  Also, if I am understanding right, the LLM retains conversations (so is this like short- or long-term memory then?  That would be good, as that's also what Hal needs).  Thanks for the work and investigation into that LLM model.  I will be in and out of here as I have time.  I haven't given up on Hal or the others.  Keep up the good work.