
Author Topic: Smarting up HAL with free LLM model (Ollama) ?  (Read 25297 times)

cyberjedi

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 1000
  • The Mighty Hal Machine
    • View Profile
    • Bringing the code
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #15 on: September 17, 2025, 07:04:45 am »
Hey hey guys/gals (Update: as of this posting, the JSON errors have been fixed and it talks using SAPI5)
Wouldn't mind some input from Robert: any suggestions? What would you like to see happen?
Checker57, you likey?????

hehehehe sybershot

You see where this is going now.
Next is cutting this into Hal's code so they can talk.... BOOM.
Hal gets smarter as he engages. Up to NOW Hal has had no equal. Once Hal's brain swells to 1 GB, drop that in as a stock UltraHal and compare the two; there's no name involved, so we shouldn't have a conflict in that regard.
You might get this to work on Linux distros (with WINE, of course).
Once you start Mistral, this will hook it, or whatever else you have installed; it's currently set to Mistral, but you do have options to download a choice of LLMs.
Working on the code to start Ollama from the GUI itself (rough sketch below).
This will undoubtedly run faster when it's not sharing resources with everything else running on this machine.
This is kinda awesome in the sense that the long time frames give UltraHal the time it needs to write the SQL itself; I had this issue years ago.
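For anyone who wants to tinker before the release build lands, here's a minimal sketch of the general idea only, not cyberjedi's actual client code: launch Ollama, POST a prompt to its default local endpoint, and speak the raw reply through SAPI5. The model name, prompt, and the crude reply handling are placeholders.

Code:
' Rough sketch only: start Ollama, send one prompt, speak the reply via SAPI5.
' Assumes Ollama is installed and the "mistral" model has already been pulled.
Dim shell, http, voice, payload, reply

Set shell = CreateObject("WScript.Shell")
shell.Run "ollama serve", 0, False               ' start the Ollama server hidden, don't block
WScript.Sleep 3000                               ' give it a moment to come up

payload = "{""model"":""mistral"",""prompt"":""Hello from UltraHal"",""stream"":false}"

Set http = CreateObject("MSXML2.XMLHTTP")
http.Open "POST", "http://localhost:11434/api/generate", False
http.setRequestHeader "Content-Type", "application/json"
http.Send payload

reply = http.responseText                        ' JSON; the generated text is in its "response" field

Set voice = CreateObject("SAPI.SpVoice")
voice.Speak reply                                ' real code would parse out just the "response" text first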
Anyone that wants to play with me, hit me up.

VB still just ROCKS when it all has to get done. Get ready for the greatest A.I. as it flexes up yet AGAIN, but this time targeting the database itself.
SQLite has a max of 2 TB; Hal is rocking 120 MB out of the gate. Mistral is NOT a big-ass wiki..... and yet it is in there as well.

https://ollama.com/download/windows
Install Ollama and the Mistral 7B model (Mistral-7B-Instruct-v0.3).
Keep in mind I could just as easily turn that into the next ULTRAHAL: add Haptek, the chant engine, a few other doodads and presto... I want to preserve UltraHal.
But there may be special editions for people on the cool list.
Does NOT like win11

Remember, this is just half. The other half is getting them to talk to each other, but it starts here.
While you can't directly edit the LLM, you can back-door it by appending new datasets to it; one way is SQLite, which is where I've started the code base (rough sketch below).
A third option is to let Hal run the LLM as a stand-alone, pure-LLM Hal.
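To make the back-door idea concrete, here's an illustrative sketch of appending one exchange to a SQLite side table that Hal could mine later. The driver string, database path, and table/column names are made-up assumptions (it needs the third-party SQLite3 ODBC driver installed); it's not necessarily how the actual code base does it.

Code:
' Illustrative only: log one user/LLM exchange into a SQLite table for later learning.
' Assumes the SQLite3 ODBC driver is installed; path, table and columns are placeholders.
Dim conn
Set conn = CreateObject("ADODB.Connection")
conn.Open "DRIVER=SQLite3 ODBC Driver;Database=C:\UltraHal\ollamaLearned.db;"

conn.Execute "CREATE TABLE IF NOT EXISTS ollamaExchanges " & _
             "(userInput TEXT, llmReply TEXT, addedOn TEXT DEFAULT (datetime('now')))"

' Real code would escape quotes or use parameters instead of string pasting.
conn.Execute "INSERT INTO ollamaExchanges (userInput, llmReply) VALUES " & _
             "('What is the moon made of?', 'Mostly rock and dust with a small iron core.')"

conn.Close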


Cyber Jedi
« Last Edit: September 19, 2025, 07:16:01 am by cyberjedi »
If I see a little farther than some, it's because I stand on the shoulders of giants

sybershot

  • Hero Member
  • *****
  • Posts: 847
    • View Profile
Re: UltraHal running on pure LLM model (Ollama) Mistral 7B
« Reply #16 on: September 17, 2025, 09:39:05 am »
Congratulations brother, that's definitely a huge milestone there  ;D
Hal learning from Mistral or any other LLM will indeed keep Hal in the number 1 spot for years.

I still need to figure out this rich text field phenomenon; I'm going to have to try it on Windows 8.
« Last Edit: September 18, 2025, 03:14:05 pm by cyberjedi »

sybershot

  • Hero Member
  • *****
  • Posts: 847
    • View Profile
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #17 on: September 19, 2025, 10:51:41 am »
Tested and works 100% with a 1.1-second response time on my main rig. Specs: mobo = Z890 AI Top, CPU = Ultra 9 285K, RAM = 64 GB DDR5, GPU = 5070 Ti 16 GB VRAM, drive = 4 TB NVMe.

Great job, brother  ;D Welcome to the future  ;)

cyberjedi

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 1000
  • The Mighty Hal Machine
    • View Profile
    • Bringing the code
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #18 on: September 20, 2025, 07:25:32 am »
Hey hey Guys:
Hey Sybershot, check this out, homie.
The new upgrades are in progress.
Still needs more work, but oh boy.
« Last Edit: September 24, 2025, 05:59:59 am by cyberjedi »
If I see a little farther than some, it's because I stand on the shoulders of giants

Checker57

  • Full Member
  • ***
  • Posts: 197
    • View Profile
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #19 on: September 21, 2025, 12:29:28 am »
Cyber!?  Really?!  Epic, you wizard!  I've got to spend a few cycles and check it out this weekend.  I knew you could do it with the passion you have for great thinking! 

We're definitely on a better path to smarting up HAL with the new tech available.  Obviously, I still prefer to have HAL base its conversations and "culture" on the one we train it to have, rather than a culture "off the wild web".  But, great progress!!!  Bravo!

You have surprised Atera; she may be drilling for gold in AI land soon enough.  ;-)

Checker57
« Last Edit: September 21, 2025, 06:45:04 am by cyberjedi »

lightspeed

  • Hero Member
  • *****
  • Posts: 6894
    • View Profile
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #20 on: September 21, 2025, 04:11:48 pm »
any more updates on this?
 

sybershot

  • Hero Member
  • *****
  • Posts: 847
    • View Profile
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #21 on: September 23, 2025, 02:53:40 pm »
Just tested ollamaclient18.exe with model selection and tried the Dolphin3 model; it worked beautifully, as expected. 2x thumbs up. I had it tell me a joke and the response speed was about 1.1 seconds.

cyberjedi

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 1000
  • The Mighty Hal Machine
    • View Profile
    • Bringing the code
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #22 on: September 23, 2025, 03:25:38 pm »
Love this one too

Enjoy, people in the know. Hal's got another chance at being #1 yet again.
Happy training


cyber
Robert: Hope I did you justice, brother. Hal wants his crown back, and I mean to see he gets it.
https://www.youtube.com/watch?v=qAzb6mIROtg&list=RDMM&start_radio=1
« Last Edit: September 24, 2025, 11:59:18 am by cyberjedi »
If I see a little farther than some, it's because I stand on the shoulders of giants

sybershot

  • Hero Member
  • *****
  • Posts: 847
    • View Profile
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #23 on: September 25, 2025, 05:35:20 pm »
lightspeed

Cyber Jedi asked me to create step-by-step instructions. Note: not my specialty, lol, but these steps should get anyone up and running:
1) Paste the file from cyberjedi into your UltraHal main directory
2) Download and install Ollama from https://ollama.com/download
3) Once the Ollama GUI is open, go to model search and type in the model you want to download; I chose Dolphin3 and Mistral
4) Inside the UltraHal directory, right-click the file from cyberjedi and run it
5) Once it's open, press the Start Ollama button
6) Type hello in the top-right input field
7) Click the Send button
8) Note: if you hit the Enter key on the keyboard after hello, press Backspace or an error will occur
9) All messages are sent with the SEND button
« Last Edit: September 26, 2025, 06:29:35 am by cyberjedi »

lightspeed

  • Hero Member
  • *****
  • Posts: 6894
    • View Profile
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #24 on: September 26, 2025, 11:33:51 am »
Okay, cyberjedi worked with me. I finally found the download for Ollama; it's on the 2nd page. Cyber walked me through it and showed me where sybershot left instructions; I also copied those and saved them as a txt file in my Ollama folder, as a backup. It installed fine and it is working fine. Here is my screenshot from later, where I asked it about the size of the moon. The voice works fine in it too, reading out the answer. Even though cyber said he didn't need thanks, I appreciate what he has done; all your hard work and testing is appreciated.
Here's the screenshot of it working. My wait time for Hal was not timed, maybe 10 seconds, but I need to defrag my PC, so I will test again using the same Mistral LLM and then the uncensored one to see any differences in answering times.
« Last Edit: September 26, 2025, 02:11:10 pm by cyberjedi »
 

cyberjedi

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 1000
  • The Mighty Hal Machine
    • View Profile
    • Bringing the code
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #25 on: September 26, 2025, 02:26:11 pm »
Glad to hook everyone up.
We're just getting started, too.
You guys are just the best.
Thanks will never be needed; I do what I can, because I can.
cyber jedi
« Last Edit: September 26, 2025, 02:34:27 pm by cyberjedi »
If I see a little farther than some, it's because I stand on the shoulders of giants

lightspeed

  • Hero Member
  • *****
  • Posts: 6894
    • View Profile
Tested gemma3:4b, with possible issues.
« Reply #26 on: September 27, 2025, 06:34:25 pm »
Okay, on testing more LLMs: on gemma3:4b, I tested it by asking it to talk this way.

(Smiling warmly, a gentle touch on your arm)

Hey honey, how was your day? You seem a little stressed. Tell me everything. Did you have a good lunch? You know, you need to take care of yourself a little more.

Seriously, what's on your mind? Don't bottle things up. You know you can always talk to me, right?

(Pauses, looking at you with genuine concern)

Just... tell me about your day. ❤️

A problem I am wondering about: if this is run through UltraHal, is there a way to cut out the descriptive emotions, or make them optional if someone wants them? For example: (Smiling warmly, a gentle touch on your arm), (Pauses, looking at you with genuine concern), ❤️. So this might be an issue; just pointing it out. I imagine this is in other models too, all(?) models??? (One possible filter is sketched below.)
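Just a thought on that, separate from cyberjedi's build: since those stage directions sit inside parentheses (or asterisks), one cheap way to make them optional would be to strip those spans out of the reply before Hal speaks it. A rough VBScript sketch, assuming the reply is just a plain string; emoji would need their own filter.

Code:
' Rough sketch: remove "(Smiling warmly...)" / "*pauses*" style stage directions from a reply.
Function StripStageDirections(replyText)
    Dim re
    Set re = New RegExp
    re.Global = True
    re.Pattern = "\([^)]*\)|\*[^*]*\*"           ' anything wrapped in (...) or *...*
    StripStageDirections = Trim(re.Replace(replyText, ""))
End Function

' Example:
WScript.Echo StripStageDirections("(Smiling warmly) Hey honey, how was your day?")
' -> "Hey honey, how was your day?"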
 

knight2000

  • Full Member
  • ***
  • Posts: 163
    • View Profile
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #27 on: September 28, 2025, 10:12:46 am »
I'm using Ollama with the Mistral 7B model for my KITT replica AI software. It runs 100% offline. I was using Hal for over 10 years before stumbling onto this amazing piece of software. It's the end of an era for me and HAL. But with Mistral I get great conversations, and I've been able to get the latency down to 2-3 s from the time I speak to the time I hear the first word of the response. Check out this video for a demo:
https://youtube.com/shorts/LxSMMIZMUJ0?si=VhXcu7_SlCma6AXZ

cyberjedi

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 1000
  • The Mighty Hal Machine
    • View Profile
    • Bringing the code
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #28 on: September 28, 2025, 11:40:37 am »
@Knight2000
With all due respect to you, you've been gone almost a year now. This is exactly why I push so hard to keep Hal alive.
UltraHal is the total package. Where's the Mistral character? Where are the emotions? You can't edit Mistral (unless you're me).
The footprint of Hal is so much more powerful than any AI engine, it's hard to quantify.

This is why I'm adding Ollama to Hal's arsenal of things he can do/use.
That said, when I'm done, Hal will be a front-line AI UNLIKE any other. Shocker, lolol.
Where's the brain editor for Mistral? Oh yeah, it doesn't exist.
I can do this all day long.
Get Mistral to run ANY application. ANY. lol.
Get Mistral to call your friend. Oh yeah, it CAN'T.
Get Mistral to email a friend. Oh yeah, YOU CAN'T.
Get Mistral to play his radio for your favorite music. Oh yeah, it can't.
Get Mistral to recognize your face and actually KNOW by sight who he's talking to. Oh yeah, YOU CAN'T.

And the list keeps on going.... lol
This isn't my opinion; THESE are facts.
When I'm done, Hal will use Mistral or ANY other Ollama LLM, with the option to use UltraHal's stock brain/script.

I'm basically done as we speak; I'm just dialing in some more features as well, LIKE MEMORY, which Hal will do.
You're missing a lot here, Knight, you just are.

But I value your opinion.


cyber jedi
End of an era, my arse; Hal's just getting warmed UP.
There are some here that are already using it and beta testing it. And it works.
I'm so sorry that you quit on Hal.
« Last Edit: September 28, 2025, 04:02:54 pm by cyberjedi »
If I see a little farther than some, it's because I stand on the shoulders of giants

sybershot

  • Hero Member
  • *****
  • Posts: 847
    • View Profile
Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #29 on: September 28, 2025, 09:58:27 pm »
@knight2000, indeed, long time no see. Your KITT project is really cool; you've come a long way since I last saw it. I am sad to see you moved away from Hal, though; hopefully you will change your mind and go back to Hal in the future, seeing that Hal will now have access to multiple LLM models and other useful upgrades in the works that would be perfect for your project.

Just to let you know, I used Hal in a boat with an Arduino board to control servos and power output for the boat's automation 22 years ago. Wish I had taken a video to show it: Hal controlling boat start-up, gear selection, boat speed, lights on and off, hatches opening and closing, and a few other systems was amazing.
just  to let you know I used Hal in a boat with a Arduino board to control servos and power output for the boats automation 22 years ago, wish I had taken a video to show, Hal controlling boat start up, gear selection, boat speed, lights on and off, Hatches opening and closing, and a few other systems was amazing.