Author Topic: Smarting up HAL with free LLM model (Ollama) ?  (Read 4108 times)

cyberjedi

Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #15 on: Today at 07:04:45 am »
hey hey guys/gals (Update: as of this posting the JSON errors have been fixed and it talks using SAPI5)
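For anyone who hits the same JSON errors: Ollama streams its reply as one JSON object per line, so feeding the whole response body to a JSON parser at once fails. A minimal Python sketch of the fix — the actual tool here is VB, this is just the idea, and the `response`/`done` field names are from Ollama's `/api/generate` reply format:

```python
import json

def join_stream_chunks(raw: str) -> str:
    """Parse Ollama's newline-delimited JSON stream and join the reply text."""
    reply = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        chunk = json.loads(line)           # one JSON object per line
        reply.append(chunk.get("response", ""))
        if chunk.get("done"):              # final chunk flags completion
            break
    return "".join(reply)
```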

checker57 you likey?????
hehehehe sybershot
You see where this is going now
Next is cutting this into Hal's code so they can talk.... BOOM
Hal gets smarter as he engages. Up to now Hal has had no equal. Once Hal's brain swells to 1 gig, drop that in as a stock brain; there's no name involved, so we shouldn't have a conflict in that regard.
You might get this to work on Linux distros (with WINE, of course)
Once you start Mistral, this will hook it, or whatever you have installed. It's currently set to Mistral, but you do have options to download a choice of LLMs
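For reference, hooking a running Ollama instance is one POST to its local HTTP API (default port 11434), with the model name as a plain string. A hedged Python sketch of that call — the real implementation here is VB, and `ask_ollama`/`build_request` are just illustrative names:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(prompt: str, model: str = "mistral") -> dict:
    # "stream": False asks for a single JSON object instead of a line stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str, model: str = "mistral") -> str:
    """Send one prompt to a locally running Ollama server, return the reply."""
    payload = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swapping models is just changing the `model` string to anything you have pulled with `ollama pull`.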
Working on the code to start Ollama from the GUI itself
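Starting the server from the GUI boils down to spawning `ollama serve` as a background process. A Python sketch of the idea (the `binary` parameter is only there so the helper degrades gracefully when Ollama isn't installed):

```python
import shutil
import subprocess

def start_ollama(binary: str = "ollama"):
    """Launch `ollama serve` in the background; return None if not on PATH."""
    exe = shutil.which(binary)
    if exe is None:
        return None  # not installed: tell the user to start it manually
    return subprocess.Popen([exe, "serve"],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
```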
This will undoubtedly run faster when it's not sharing resources with everything else on this machine
This is kind of awesome in the sense that the long time frames give UltraHal the time it needs to write the SQL itself; I had this issue years ago.
Anyone that wants to play with me, hit me up

VB still just ROCKS when it all has to get done. Get ready for the greatest A.I. as it flexes up yet AGAIN, but this time targeting the database itself.
SQLite has a max of 2 TB; Hal is rocking 120 megs out of the gate. Mistral is NOT a big ass wiki..... and yet it is in there as well.
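On the database side, the learn-as-it-engages idea is just inserts into Hal's SQLite brain as conversations happen. A toy Python sketch with an in-memory DB — the `learned` table is hypothetical, not UltraHal's real schema:

```python
import sqlite3

# In-memory stand-in for Hal's brain database
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE learned (user_says TEXT, hal_says TEXT)")

# Store an exchange the LLM produced so Hal can reuse it later
con.execute("INSERT INTO learned VALUES (?, ?)",
            ("what is ollama", "A local LLM runner."))
con.commit()

# Later lookups hit the brain first, no LLM round-trip needed
row = con.execute("SELECT hal_says FROM learned WHERE user_says = ?",
                  ("what is ollama",)).fetchone()
```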
Cyber Jedi


Keep in mind I could just as easily turn that into the next ULTRAHAL: add Haptek, the chant engine, a few other doodads, and presto... I want to preserve UltraHal
But there may be special editions for people on the cool list.
« Last Edit: Today at 08:15:43 pm by cyberjedi »
If I see a little farther than some, it's because I stand on the shoulders of giants

sybershot

Re: Smarting up HAL with free LLM model (Ollama) ?
« Reply #16 on: Today at 09:39:05 am »
Congratulations brother, that's definitely a huge milestone there  ;D
Hal learning from Mistral or any other LLM will indeed keep Hal in the number 1 spot for years.

I still need to figure out this rich text field phenomenon; I'm going to have to try it on Windows 8