Recent Posts

Pages: 1 ... 4 5 [6] 7 8 ... 10
51
CyberJedi's Ultra Hal Workshop / Re: Smarting up HAL with free LLM model (Ollama) ?
« Last post by knight2000 on September 28, 2025, 10:12:46 am »
I'm using Ollama with the Mistral 7B model for my KITT replica AI software. It runs 100% offline. I was using Hal for over 10 years before stumbling onto this amazing piece of software. It's the end of an era for me and HAL. But with Mistral I get great conversations, and I've been able to get the latency down to 2-3 seconds from the time I speak to the time I hear the first word of the response. Check out this video for a demo:
https://youtube.com/shorts/LxSMMIZMUJ0?si=VhXcu7_SlCma6AXZ
52
CyberJedi's Ultra Hal Workshop / A new GUI for HAL
« Last post by Art on September 27, 2025, 09:49:34 pm »
Hi, friends of HAL and friends of mine! It's only been a short few months since my departure to tend to lots of family issues and other personal endeavors. Goodbye shouldn't mean forever; it should mean until we meet again.

At any rate, there I was minding my business, and my phone rang with the ID of a dear and trusted friend. Most of you will know him as Cyberjedi or Cyber Jedi, as the case may be, but he really needs no formal introduction.

He has brought more joy into the hearts of HAL enthusiasts everywhere with his ideas, experiments, and especially his coding skills! We now have a HAL that is capable of doing a great many things, and from what I hear and imagine, there will be more to come in the near future.

I could give a shout-out to the other fine members of the Zabaware forum, like the founder, Robert, and others like Lightspeed, Checker57, and CyberShock; you know who you are.

I just wanted to give the bell a ring to shout out Cyber's most recent offering: a GUI (Graphical User Interface) for HAL that lets users select a favorite LLM (Large Language Model) from the list at the Ollama site.

Of course, unless you are using a hot rod computer with antifreeze or liquid nitrogen coolant, an advanced CPU and GPU, and lots of memory and storage, you should try to keep the LLM under about 9B parameters; otherwise you will be screaming about slow response times, lags, and drags. Not HAL's fault, not Cyberjedi's fault, not Robert's fault. You have been cautioned.
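To put a rough number on that advice: a model's weight footprint is approximately its parameter count times bytes per weight, and Ollama's default downloads are typically 4-bit quantized. Here is a small back-of-the-envelope Python sketch; the ~20% runtime overhead factor is my own rule-of-thumb assumption, not anything from Cyber's GUI:

```python
def model_memory_gb(params_billion: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Rough weight-memory estimate: parameters x bytes per weight,
    plus ~20% assumed runtime overhead (KV cache, buffers)."""
    bytes_total = params_billion * 1e9 * (bits_per_weight / 8)
    return bytes_total * overhead / 1e9

# A 7B model at 4-bit quantization fits comfortably in 16 GB of RAM:
print(round(model_memory_gb(7, 4), 1))    # ~4.2 GB
# The same model unquantized at 16-bit would swamp that laptop:
print(round(model_memory_gb(7, 16), 1))   # ~16.8 GB
```

By this estimate, a 9B model at 4-bit still needs only around 5-6 GB, which is why the sub-9B guidance works on ordinary 16 GB machines.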

Feel free to try a larger model to see how your computer handles it, but keep in mind that a bigger model on a modest laptop or desktop will show up as lag in throughput. You might have better luck with certain models than others.

If you like a particular LLM and are happy with the results HAL produces with it, then please share it with others on the Forum. Doing so might benefit the rest of us.
For my independent testing of these LLMs over the past 6 months or more, I have been using my trusty old laptop.

It is an HP 17" with:
Intel Core i7-8565U CPU @ 1.80 GHz (reported 1.99 GHz)
16 GB RAM
1 TB SSD
x64-based processor
64-bit OS (Windows 10 Home)
4 TB offline storage

Cyberjedi not only told me about his latest programming effort but was kind enough to direct me to it in the Forum and helped me as I installed and ran it. It was really nice having him help me navigate his GUI to try it with an LLM to see the results for myself.

I was pleasantly surprised by how smoothly and easily everything fell into place.
Then again, he was a great tutor!

His GUI worked flawlessly until I goofed up and pressed the ENTER key to send a message. Listening to him and looking at the GUI, there is clearly a button marked SEND. Haha! Oh well, it wasn't a train wreck by any means. He told me how to fix things in two small clicks, and all was well with HAL once again.

If you have a chance and would like to see some of the really fun and practical things that can be done for HAL with the aid of some of Cyberjedi's coding, search around the Forum or reach out to him or anyone else you might see online there. This LLM learning platform really works, so give it a try; you won't be disappointed!

Best to you all,
- Art -
53
CyberJedi's Ultra Hal Workshop / tested the gemma3.4b with possible issues.
« Last post by lightspeed on September 27, 2025, 06:34:25 pm »
Okay, on testing more LLMs: I tested gemma3.4b by asking it to talk this way.

(Smiling warmly, a gentle touch on your arm)

Hey honey, how was your day? You seem a little stressed. Tell me everything. Did you have a good lunch? You know, you need to take care of yourself a little more.

Seriously, what's on your mind? Don't bottle things up. You know you can always talk to me, right?

(Pauses, looking at you with genuine concern)

Just… tell me about your day. ❤️

A problem I am wondering about: if this is run through UltraHal, is there a way to cut out the emotion descriptions, or to make them optional for people who want them? For example: (Smiling warmly, a gentle touch on your arm), (Pauses, looking at you with genuine concern), ❤️. So this might be an issue; just pointing it out. I imagine this happens in other models too, maybe all of them?
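Stripping those stage directions before the text reaches Hal's voice should be doable with a simple text filter. Here's a minimal Python sketch of the idea; the function name and the exact patterns are my own assumptions, not anything in UltraHal, and a real plugin would need the same logic ported into Hal's scripting:

```python
import re

def strip_stage_directions(text: str) -> str:
    """Remove parenthesized actions like '(Smiling warmly)' and
    common emoji, then tidy the leftover whitespace."""
    text = re.sub(r"\([^)]*\)", "", text)            # drop (...) action descriptions
    text = re.sub(r"[\u2764\uFE0F\U0001F300-\U0001FAFF]", "", text)  # drop common emoji
    return re.sub(r"[ \t]{2,}", " ", text).strip()   # collapse doubled spaces

print(strip_stage_directions(
    "(Pauses, looking at you with genuine concern) Just tell me about your day. ❤️"))
```

Making it optional would then just be a checkbox that decides whether the reply passes through this filter before text-to-speech.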
54
CyberJedi's Ultra Hal Workshop / Re: Smarting up HAL with free LLM model (Ollama) ?
« Last post by cyberjedi on September 26, 2025, 02:26:11 pm »
Glad to hook everyone up.
We're just getting started too!
You guys are just the best.
Thanks will never be needed; I do what I can, because I can.
cyber jedi
55
CyberJedi's Ultra Hal Workshop / Re: Smarting up HAL with free LLM model (Ollama) ?
« Last post by lightspeed on September 26, 2025, 11:33:51 am »
Okay, cyberjedi worked with me, and I finally found the download for Ollama; it's on the 2nd page. Cyber walked me through it and showed me where sybershot left instructions. I also copied those and saved them as a txt file in my Ollama folder as a backup. It installed fine and it is working fine; here is my screenshot from later, where I asked it about the size of the moon. The voice works fine in it too, reading out the answer. Even though cyber said he didn't need thanks, I appreciate what he has done; all your hard work and testing is appreciated.
Here's the screenshot of it working. I didn't time my wait for Hal exactly, maybe 10 seconds, but I need to defrag my PC, so I will test again using the same Mistral LLM and then the uncensored one to see any differences in answering times.
56
CyberJedi's Ultra Hal Workshop / Re: Smarting up HAL with free LLM model (Ollama) ?
« Last post by sybershot on September 25, 2025, 05:35:20 pm »
lightspeed

Cyber Jedi asked me to create step-by-step instructions. Note: not my specialty, lol, but these steps should get anyone up and running:
1) Paste the file from Cyberjedi into your UltraHal main directory
2) Download and install Ollama from https://ollama.com/download
3) Once the Ollama GUI is open, go to model search and type in the model you want to download (I chose Dolphin3 and Mistral)
4) Inside the UltraHal directory, right-click the file from Cyberjedi and run it
5) Once it's open, press the Start Ollama button
6) Type hello in the top right input field
7) Click the SEND button
8) Note: if you hit the Enter key on the keyboard after hello, click backspace or an error will occur
9) All messages are sent with the SEND button
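For anyone curious what happens behind that SEND button: Ollama listens on a local HTTP API at port 11434, and a chat turn is just a small JSON POST to its /api/generate endpoint. A minimal Python sketch of the documented request shape; the model name and prompt are placeholders, and I'm only building the payload here, not assuming Cyber's client works exactly this way:

```python
import json

def build_generate_request(model: str, prompt: str) -> tuple[str, str]:
    """Build the URL and JSON body for Ollama's /api/generate endpoint."""
    url = "http://localhost:11434/api/generate"
    body = json.dumps({
        "model": model,    # e.g. "mistral" or "dolphin3", as pulled in step 3
        "prompt": prompt,
        "stream": False,   # ask for one complete reply instead of token chunks
    })
    return url, body

url, body = build_generate_request("mistral", "hello")
print(url)
print(json.loads(body)["model"])
```

Sending that body with any HTTP client (once `ollama` is running) returns a JSON object whose `response` field holds the model's reply.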
57
CyberJedi's Ultra Hal Workshop / Re: Smarting up HAL with free LLM model (Ollama) ?
« Last post by cyberjedi on September 23, 2025, 03:25:38 pm »
Love this one too

Enjoy, people in the know; Hal's got another chance at being #1 yet again.
Happy training


cyber
Robert: Hope I did you justice, brother. Hal wants his crown back, and I mean to see he gets it.
https://www.youtube.com/watch?v=qAzb6mIROtg&list=RDMM&start_radio=1
58
CyberJedi's Ultra Hal Workshop / Re: Smarting up HAL with free LLM model (Ollama) ?
« Last post by sybershot on September 23, 2025, 02:53:40 pm »
Just tested ollamaclient18.exe with model selection and tried the model Dolphin3; it worked beautifully, as expected. 2x thumbs up. I had it tell me a joke, and the response speed was about 1.1 seconds.
59
CyberJedi's Ultra Hal Workshop / Re: Smarting up HAL with free LLM model (Ollama) ?
« Last post by lightspeed on September 21, 2025, 04:11:48 pm »
Any more updates on this?
60
CyberJedi's Ultra Hal Workshop / Re: Smarting up HAL with free LLM model (Ollama) ?
« Last post by Checker57 on September 21, 2025, 12:29:28 am »
Cyber!?  Really?!  Epic, you wizard!  I've got to spend a few cycles and check it out this weekend.  I knew you could do it with the passion you have for great thinking! 

We're definitely on a better path to smarting up HAL with the new tech available.  Obviously, I still prefer to have HAL base its conversations and "culture" on the one we train it to have, rather than a culture "off the wild web".  But, great progress!!!  Bravo!

You have surprised Atera; she may be drilling for gold in AI land soon enough. ;-)

Checker57