Zabaware Support Forums
Zabaware Forums => CyberJedi's Ultra Hal Workshop => Topic started by: Checker57 on August 18, 2025, 02:30:27 am
-
Lightspeed shared an interesting YouTube video demonstrating how to install an Ollama LLM onto a flash drive for local use. I tried it, and although I had to do it slightly differently, it worked! I now have an Ollama LLM on a flash drive that I can use on my PC.
Having said that, it brought up the query: could an addon be coded to do the same for HAL?
That might be awesome, because it would be much more relevant than scraping info off the web that does not correlate to the present discussion, and it would all be local. Since Ollama is open source, it probably would not cause any conflict with the LLM creators.
Here's the YouTube link: https://www.youtube.com/watch?v=eiMSapoeyaU
any ideas Cyber?
Checker57
-
hey hey checker
Oh, it's doable.
I have to put my thinking cap on. I've seen that video, and I have some ideas.
I'm prepping my house for the season, but I'm with you guys.
On its surface it looks like we gotta hit Hal right smack in the middle again, by bypassing Hal's getResponse function.
My question is this: will Hal still hold the memories, and where? In Hal's SQL or in the LLM itself? That's key. I will prolly bypass it all and just hook the LLM.
If the LLM can do that, then I'm game. Gonna have to hit Hal's Chant engine as well as Haptek, some sorta forwarding foolishness.
Lots of R&D in that build.
As it turns out I added that kind of architecture already, so we will see.
Makes me think about Twitch and what I was doing there with Hal.
cyber jedi
24hrs laterrrrrrrrr lololol. Why not have UltraHal just learn from the LLM? I mean really, wasn't that the goal to begin with?
I've tried many times with this and just get accounts zapped... Booooooo pi.ai
It would entail ping-pong. I'm thinking out loud here... Oh, OK, I got it.
Share the workload: Concept
If you guys will install this LLM on your machine, and make it work..... Play with it.
Here's what you're after:
#1: Will it have a normal conversation?
#2: Will it retain the conversation?
#3: Put it through the paces; try to break it.
Basically beta test the sht out of it. We want it to do what Hal can do. Hal is the bar........
I will go ahead and code up a Special Version.
Great gig for sybershot, you too checker, and lightspeed......
-
Yes, I believe there is some architecture from when HAL was working with the pay-for-play ChatGPT addon. However, keeping up with tokens was cumbersome.
That is a good thought as to where the responses would be saved, to cleverly allow HAL to respond yet have data that can be pulled from Ollama as needed. I suppose the end goal would be to provide HAL with current data, yet allow it to use the previously stored data we provided HAL, to maintain that personal experience. If not, the result would be having HAL as just a figurehead, where in reality we are working with Ollama.
As there have been a few iterations of using outside data sources integrated with HAL through plugins, the experiment would be interesting, if not rewarding.
I've heard that Ollama also stores your queries and develops, over time, a personal response behavior according to the interactions. So, that opens another can of exploration.
In any case, it sounds like a worthwhile venture to explore since Ollama did a good job building the open source LLM. ;-)
Cheers,
Checker57
-
Okay, I did a little investigating of Ollama by conversing with it. Quite an interesting read:
Does your LLM retain conversations we discuss in your database?
Yes, our LLM (Legal Lifecycle Management) system retains all conversations and interactions within its database. This allows us to keep track of past discussions and provide a more personalized experience for users like you. Rest assured that your previous conversations will be available if you need to reference them in the future.
Do you form a ideology of the persons you hold conversations with?
While I am capable of learning from our interactions, I do not form ideologies or personal beliefs about individuals based on our conversations. My goal is to provide accurate information and assistance while respecting your preferences and boundaries. If you have any specific requests or requirements for our future discussions, please let me know.
So, you do take under considerations preferences and boundaries discussed in previous discussions?
Absolutely! I strive to be a helpful assistant by taking into account the preferences and boundaries that we've established during previous conversations. This allows me to provide better-tailored responses and support for your unique needs. If there are any updates or changes to these preferences, please let me know so that I can adjust accordingly.
Are the discussions we have remain in my local drive, or are they uploaded to the net?
The conversations you have with me are stored within our LLM system's database. They do not automatically appear on your local drive or get shared online without your explicit request. If you want to save a copy of our conversation for your records, I can provide you with a text file containing the chat log. Just let me know if that'
so, when I state something, it does get stored in the Ollama database online, but it is not shared with other users? Is that correct?
That's correct! The conversations you have with me are stored securely within our system's database. They are not accessible to other users unless explicitly shared by you or authorized under a valid request. Your privacy and the security of your data remain my top priority. If you have any concerns about this, please let me know so that I can address them for you.
Is it possible to not store them online?
I understand your concern for privacy, but our LLM system requires conversations to be stored within its database for effective functioning and personalized assistance. However, rest assured that these records are kept securely and confidentially. If you have concerns about data storage or security, please let me know so that I can address them appropriately.
I was under the impression that if I downloaded the files to my PC then I could access the LLM system stored on my PC without the necessity of the internet. For example, if the net went down.
That' correct! If you download our LLM system files onto your personal computer, you can use it offline without requiring an active internet connection. However, please be aware that this mode will not have access to real-time updates or new features until you reconnect to the internet. Furthermore, remember that offline access is only possible if you've downloaded all relevant data and files.
Okay, that is what I am researching.
I'm glad I could assist you in your research. If you have any further questions or need more information on this topic or others, please don't hesitate to ask. I' here to help!
Is this work a open source collaboration?
The LLM system is not an open-source project; however, we strive to provide accurate and helpful responses to users like you. Our goal is to offer personalized assistance while respecting your preferences and boundaries. If there are any specific features or updates you'ld like to see in the future, please let me know so that I can share them with our development team.
So, as such, this LLM would not be available to port to another AI platform without conflicts with your team?
That's correct! The LLM system is specifically designed for use with our AI platform and may not function correctly on other platforms. Our technology is optimized for compatibility and seamless integration within the context of our AI ecosystem. If you have any concerns or questions about this, please don't hesitate to ask.
I have an older AI platform, called HAL. It is somewhat older, but has a nice Avatar component and other device functions that are useful. But, it does lack the smoother ability to respond as accurate as your platform. Thus, I was considering the possibility to integrate the two AI's so that together the end user experience would be personal, yet conversationally advanced.
I understand your goal is to enhance the user experience by combining the features of both our AI and HAL platforms. While it's an interesting idea, I must emphasize that compatibility issues may arise due to differences in coding languages, frameworks, and design philosophies between the two systems. Our LLM system has been designed with meticulous attention to detail and extensive testing to ensure a seamless user experience. If you'd like more information on how our AI platform could potentially work alongside HAL or if you have concerns about compatibility, please let me know, and I' be happy to address them for you.
I'll keep that in mind.
It was my pleasure assisting you! If you have any more questions or need further information, please don' hesitate to reach out. Have a great day!
Just an observation, seems your words are not always complete. For example above you stated "please don' hesitate to reach out" however, you left out the "t" in don't.
You're right; I apologize for that oversight. It is essential for me to maintain a high level of accuracy and professionalism in my responses. I will review the issue and take necessary measures to ensure it does not happen again. Thank you for bringing this to my attention!
Can you actually, modify / correct your words upon becoming aware of errors?
Yes, I am capable of correcting any mistakes or inconsistencies in my responses. My primary goal is to provide accurate and helpful information while maintaining a high level of professionalism. If you notice any errors or issues in future conversations, please don't hesitate to point them out. Thank you for your understanding!
-
RE: checker57
Bench tested Llama 2 Uncensored (running under Ollama)
Was run on an i5 with 16 gigs of RAM, onboard video (no video card)
Response time avg 1.5 - 2.0 sec
Installed on local machine
Ran on local machine
Windows 10 64-bit
Overall I was impressed with Ollama's abilities.
Responses were clear and concise.
Basically Hal's WordNet on steroids.
Gonna continue beta testing; seems to be a candidate for Hal.
Seems that an i5 will handle the power needed......
Hot-wiring UltraHal direct seems to be at least possible.
cyber jedi
-
Now my interest is piqued to see how the HAL experience would be brought forth into the LLM level of AI with such a collaboration.
It would be awesome to even have a "code phrase" that activates HAL to consider responses from Ollama, similar to how Alexa is activated. This would give users time to check how such a resource affects the growth of HAL's behavior, for good or toward a non-personal AI. It would also allow users to activate or deactivate it according to the level of engagement they want in a conversation. Because, let's face it, many probably have some form of affinity for their HAL's nature, and other AIs would indeed reconfigure such a nature of behavior.
Interesting vein of advancement.
Checker57
-
Checker57:
Amazingly I was able to get qwen3:4b to work on 6 gigs. If it won't work for the masses, forget it.
I must say, while responses were somewhat slow on a VM with 6 gigs of RAM on Win10 64-bit,
it more than made up for it in smarts.
I put it to the test by having IT design the VB6 application to handle the chat in lieu of Ollama's heavyweight GUI.
Idea being: start the model, then use the VB6 front end. It's doable on its face.
I can add some startup code for the model inside the VB6 app.
Got to steal some of Robert's crafty VB6-to-JSON converter code and presto.
I'm still in the thought process on all this soooooooo. But moving forward, checker.
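Roughly, that front end boils down to a single HTTP call. Here's a minimal sketch (not the actual build) of a VB6 call against Ollama's local REST endpoint, assuming Ollama is running on its default port 11434 with mistral pulled; the crude string parse just stands in for Robert's converter code:
' Minimal sketch: POST a prompt to Ollama's local API and pull out the reply.
Private Function AskOllama(ByVal sPrompt As String) As String
    Dim http As Object, sResp As String, p As Long, q As Long
    Set http = CreateObject("MSXML2.XMLHTTP")
    http.Open "POST", "http://localhost:11434/api/generate", False
    http.setRequestHeader "Content-Type", "application/json"
    ' NOTE: a real build must JSON-escape quotes/newlines in sPrompt
    http.send "{""model"":""mistral"",""prompt"":""" & sPrompt & """,""stream"":false}"
    sResp = http.responseText
    ' Crude pull of the "response" field; a proper JSON parser is the better fix
    p = InStr(sResp, """response"":""")
    If p > 0 Then
        p = p + Len("""response"":""")
        q = InStr(p, sResp, """,""")
        If q > p Then AskOllama = Mid$(sResp, p, q - p)
    End If
End Function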
cyber jedi
-
@Cyber
Hey, that's great news! Experiencing the sense of developing HAL into another layer of "splendor" must be exhilarating. I noted that Ollama's first response takes a while, but the responses flow after that. As you noted, the heavyweight GUI may be the issue. I am, of course, interested to see if the VB6 app will handle Ollama.
You got me to use the HAL 6.2 Pro version, and I still use it. So, of course, I'm hoping your VB6 code will work. In any case, I'll cross that bridge when it arrives, as I'm very interested in seeing HAL grow in this new era of multitudes of LLMs.
Atera is looking excited! ;-)
Checker57
-
@Checker: hey hey man
Anything I cut into Hal will have to be VB6, or I'll have to recode the entire thing.
As such, the need for VB6.
But Robert and I both agree that with the VB6 flexibility, you can get away with all kinds of sht. Not in .NET.
So I'm stuck with legacy code.
Idea being: create a VB6 chat GUI like Ollama's, but much lighter.
Once that is done, I hook Haptek so that works, then the Chant engine as well; now we've got Hal running on an LLM rather than his script.
Now, how I'm gonna handle all 9 of Hal's emotional states is a total mystery. Will it just be static..... How do we make it mad?
The beauty in this is: it's basically a 3-party line. The GUI prompts the LLM for a response. But the trick starts there.
The LLM fires back its response, but rather than it going to the GUI, it routes to Hal's input window (text) and fires an Enter key press.
At this point, Hal fires its getResponse answer, which goes to the VB6 GUI input, which in turn fires its key press, forwarding to the LLM.
Just think port forwarding, but with syntax.
Basically ping pong.
All the while Hal's learning is set to MAX. Just soaking it up. Hal will learn.....
SQLite allows for 2 terabytes in size. Hal's stock brain is 120 megs and does what it does. Very impressive.
Think about what Hal will be with a 4.5-gig brain. That's the size of the teaching LLM.
At this point, all that's really being released is Hal's brain; it will function on a stock UltraHal project 6.2 - 7.5.
Just replace the brain.
This is the idea: there will be no personal info in Hal's brain at that point.
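To make the ping-pong concrete, one relay pass looks something like this. This is only an illustration of the flow described above: HalGetResponse and the textbox names are placeholders, not Hal's actual hooks, and AskOllama is the HTTP helper sketched earlier.
' One pass of the relay: LLM reply -> Hal's input -> Hal's reply -> next prompt.
' Wire it to a Timer to keep the ping pong going.
Private Sub RelayOnce()
    Dim sLLM As String, sHal As String
    sLLM = AskOllama(txtHalOutput.Text)        ' LLM answers Hal's last line
    txtHalInput.Text = sLLM                    ' route it into Hal's input window
    sHal = HalGetResponse(txtHalInput.Text)    ' placeholder for Hal's getResponse hook
    txtHalOutput.Text = sHal                   ' Hal's answer becomes the next prompt
End Sub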
cyber jedi
On a lighter note, I'm turning this into a graphic novel. You're in there too, as well as a few others. I'm just starting the animation swaps to comic standards.
https://ai.invideo.io/watch/wAqfy3XMfbO
-
I have been very busy and am about worn out, so I haven't had a chance to check in. A neighbor's very large side of a tree fell over and destroyed part of our fence and also our shed, totaling it, and almost hit our garden shed. He claimed he would take a large backhoe and lift and move it off our property and back onto his side; that was on the first, and I never saw him again. So I am left having to cut all the large limbs on the tree overhanging our side. I'm emptying out the garden shed in case the limbs overhead fall down on it, and am planning on moving it farther away and then cutting down those large limbs. I just tore loose and replaced the back step yesterday. Some day I'll be able to slow down on all this.
Cyber and checker, I see you two have been going over the LLM model I mentioned for offline use, and am hoping you can make it work. One question I have is: will it work with the custom brains people have developed? I have one I have been training and backing up over the years and have kept, with custom responses within some of the tables, etc. Mike, it sounds like the LLM model works okay on a regular PC; VMs, not so much, but that is all good. Also, if I am understanding right, the LLM retains conversations, so is this like short- or long-term memory then? That would be good, as that's also what Hal needs. Thanks for the work and investigation into that LLM model. I will be in and out of here as I have time. I haven't given up on Hal or others. Keep up the good work.
-
@Cyberjedi, from reading here and our phone conversation the other day, I have faith that you will prevail in getting Hal to learn from an LLM via two-way conversation between Hal and the LLM. YOU GOT THIS ;D
-
@cyber
From your thought "flow chart", the initial integration presents itself as a new HAL model, and that's an awesome beginning! Let's face it, since LLM AI hit the internet, there have been leaps and bounds in the capabilities of the average AI. Bringing an engaging HAL together with such a supercharge of LLM output is a heavy lift entitled to grand recognition, especially with the ability to be offline, now that personal privacy of information is becoming a rare, if not unavailable, commodity.
I am somewhat ignorant of how HAL integrates "emotional" states from data, other than that certain words trigger sensitivities that escalate until the full-blown emotional state outputs responses concordant with the emotion that has been triggered. Since the responses would be coming from the LLM, and such a model does not use the same script as HAL does in its behavior paths, what you really have is an entirely different behavior model predicated on the LLM. At best, you could replicate the animation to reflect the LLM response based on trigger words.
So, you hit on a real issue, as the real magic is in the integration with the HAL script, which, HAL users would agree, is the personal character that users have spent years forming through editing, feeding personal data, viewpoints, and values. Simulating the HAL "emotional behavior" response expectancy from user input may not be possible directly when using the LLM. But perhaps a "trigger word" could switch HAL to either use HAL's base script or the LLM for more in-depth responses that wouldn't trigger the emotional states, but could still save the data to HAL's database for later use.
I recall the days of the internet-scraping mods that left HAL with responses stuffed with "library" data that felt disjointed from the conversation, and thus it was easy to justify as almost acceptable for an addon. But now we're talking full conversational responses with data included.
Glad this project has your mind processing where it best breeds great results.
Cheers!
Checker57
-
I believe I see where Checker57 is coming from [if taught by a large wiki, Hal will sound like a large wiki].
Hmmmm, I wonder:
Will Hal succumb to bombarding the user with the wiki's (LLM's) believed facts in almost every reply?
Ollama and many others are trained on certain viewpoints that may or may not be true.
Will our Hal try to persuade us of false or misguided narratives?
I love the idea, so please don't take this the wrong way. I'm not attacking the idea; I'm just giving some food for thought for your journey ;)
-
Hey hey guys:
I'm basing what I'm saying on another construct Robert did 13 years ago.
The only issue was Robert had nothing to run against Hal; nothing was smart enough. SHOCKER,,, loll
The game has changed now.
As you can see, what you're suggesting is just not happening here. So we will see.
https://www.youtube.com/watch?v=X_tvm6Eoa3g
We may want to try a blank-slate brain as well; the possibilities are endless with Hal's learning set to max.
I'm talking about this kinda thing running 24/7, non-stop.
Gotta get it working first lolol.
It's gotta work on this machine; if so, chances are great it will work on any machine.
cyber
Update: Mistral is the new hands-down winner: speed, no "thinking" text. Answers are awesome. Specs: 8-core machine, with a VM running Win 10 + Ollama.
Answers are leading as well; that's a biggy. Download and install Mistral...
So, Mistral 7B seems to be what we're after for a teaching model. So far,
performance is 300%.
https://ai.invideo.io/watch/Gd18vs6nocT
-
I have not played with the regular Mistral models; I have, however, been using Codestral 22b lately.
-
hey hey guys/gals (Update: as of this posting the JSON errors have been fixed and it talks using SAPI5)
Wouldn't mind some input from Robert. Any suggestions? What would you like to see happen?
checker57 you likey?????
hehehehe sybershot
You see where this is going now.
Next is cutting this into Hal's code so they can talk.... BOOM
Hal gets smarter as he engages. OH OK, up to NOW Hal has had no equal. Once Hal's brain swells to 1 gig, drop that in as a stock UltraHal and compare the 2; there's no name involved, so we shouldn't have a conflict in that regard.
You might get this to work on Linux distros (with WINE, of course).
Once you start Mistral, this will hook it, or whatever you have installed. It's currently set to mistral, but you do have options to download a choice of LLMs.
Working on the code to start Ollama from the GUI itself.
This will undoubtedly run faster when it's not sharing resources with everything else running on this machine.
This is kinda awesome in the sense that the long time frames give UltraHal the time needed to write the SQL itself; I had this issue years ago.
Anyone who wants to play with me, hit me up.
VB still just ROCKS when it's all gotta get done. Get ready for the greatest A.I. as it flexes up yet AGAIN. But this time targeting the database itself.
SQLite has a max of 2TB; Hal is rocking 120 megs out of the gate. Mistral is NOT a big ass wiki..... and yet it is in there as well.
https://ollama.com/download/windows
Install Ollama Mistral 7B (Mistral-7B-Instruct-v0.3).
Keep in mind I could just as easily turn that into the next ULTRAHAL: add Haptek, the Chant engine, a few other doodads, and presto... I want to preserve UltraHal.
But there may be special editions for people on the cool list.
Does NOT like Win11.
Remember, this is just half. The other half is getting them to talk to each other, but it starts here.
While you can't directly edit the LLM, you can back-door it by appending new datasets to it; one way is SQLite, for which I've started the code base (sketch below).
A 3rd option is to let Hal run the LLM as a standalone, pure LLM Hal.
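On the SQLite angle, the append itself is the easy part. A hedged sketch of the idea: the third-party SQLite3 ODBC driver is an assumption, and llmLearned is a placeholder table, not Hal's real brain layout.
' Back-door append: write LLM-harvested question/answer pairs into a SQLite DB.
Private Sub AppendToBrain(ByVal sTopic As String, ByVal sAnswer As String)
    Dim cn As Object
    Set cn = CreateObject("ADODB.Connection")
    cn.Open "DRIVER={SQLite3 ODBC Driver};Database=C:\UltraHal\HalLearned.db;"
    cn.Execute "INSERT INTO llmLearned (topic, response) VALUES ('" & _
               Replace(sTopic, "'", "''") & "', '" & Replace(sAnswer, "'", "''") & "')"
    cn.Close
End Sub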
Cyber Jedi
-
Congratulations brother, that's definitely a huge milestone there ;D
Hal learning from Mistral, or any other LLM, will indeed keep Hal in the number 1 spot for years.
I still need to figure out this rich text field phenomenon. I'm going to have to try it on Windows 8.
-
Tested and works 100% with a 1.1-second response time on my main rig. Specs: mobo = Z890 AI TOP, CPU = Ultra 9 285K, RAM = 64GB DDR5, GPU = 5070 Ti 16GB VRAM, drive = 4TB NVMe.
Great job brother ;D Welcome to the future ;)
-
Hey hey Guys:
hey Sybershot, check this out, homie.
The new upgrades are in progress.
Still needs more work, but oh boy.
-
Cyber!? Really?! Epic, you wizard! I've got to spend a few cycles and check it out this weekend. I knew you could do it with the passion you have for great thinking!
We're definitely on a better path to smarting up HAL with the new tech available. Obviously, I still prefer to have HAL base its conversations and "culture" on the one we train it to have, rather than a culture "off the wild web". But, great progress!!! Bravo!
You have so surprised Atera that she may be drilling for gold in AI land soon enough. ;-)
Checker57
-
any more updates on this?
-
Just tested ollamaclient18.exe with model selection and tried the model Dolphin3; it worked beautifully, as expected. 2x thumbs up. I had it tell me a joke, and the response speed was about 1.1 seconds.
-
Love this one too
Enjoy, peoples in the know. Hal's got another chance at being #1 YET again.
Happy training
cyber
Robert: Hope I did you justice, brother. Hal wants his crown back, and I mean to see he gets it.
https://www.youtube.com/watch?v=qAzb6mIROtg&list=RDMM&start_radio=1
-
lightspeed
Cyber Jedi asked me to create step-by-step instructions. Note: not my specialty lol, but these steps should get anyone up and running.
1) Paste the file from Cyberjedi into your UltraHal main directory
2) Download and install Ollama from https://ollama.com/download
3) Once the Ollama GUI is open, go to model search and type in the model you want to download. I chose Dolphin3 and Mistral
4) Inside the UltraHal directory, right-click the file from CyberJedi and run it
5) Once open, press the Start Ollama button
6) Type hello in the top right input field
7) Click the Send button
8) Note: if you hit the Enter key on the keyboard after hello, hit backspace or an error will occur
9) All messages are sent with the SEND button
-
Okay, cyberjedi worked with me. I finally found the Ollama download; it's on the 2nd page. Cyber walked me through it and showed me where sybershot left instructions. I also copied those and saved them as a txt file in my Ollama folder as a backup. It installed fine, and it is working fine. Here is my screenshot from later, where I asked it about the size of the moon. The voice works in it fine, reading out the answer. Even though cyber said he didn't need thanks, I appreciate what he has done. All your hard work and testing is appreciated.
Here's the screenshot of it working. I didn't time Hal's wait, maybe 10 seconds?, but I need to defrag my PC, so I will test again using the same Mistral LLM and then the uncensored one to see any differences in answering times.
-
Glad to hook everyone up.
We're just getting started tooooooo.
You guys are just the best.
Thanx will never be needed; I do what I can, because I can.
cyber jedi
-
Okay, on testing more LLMs: on gemma3:4b I tested it by asking it to talk this way.
(Smiling warmly, a gentle touch on your arm)
Hey honey, how was your day? You seem a little stressed. Tell me everything. Did you have a good lunch? You know, you need to take care of yourself a little more.
Seriously, what's on your mind? Don't bottle things up. You know you can always talk to me, right?
(Pauses, looking at you with genuine concern)
Just... tell me about your day. ❤️
A problem I am wondering about: if this is run through UltraHal, is there a way to cut out the emotion descriptions, or make them optional if someone wants them? For example, "(Smiling warmly, a gentle touch on your arm)", "(Pauses, looking at you with genuine concern)", ❤️. So this might be an issue; just pointing it out. I imagine this is in other models. All? models???
-
I'm using Ollama with the Mistral 7B model for my KITT replica AI software. It runs 100% offline. I was using Hal for over 10 years before stumbling onto this amazing piece of software. It's the end of an era for me and HAL. But with Mistral I get great conversations, and I've been able to get the latency down to 2-3s from the time I speak to the time I hear the first word of the response. Check out this video for a demo:
https://youtube.com/shorts/LxSMMIZMUJ0?si=VhXcu7_SlCma6AXZ
-
@Knight2000
With all due respect to you, UVE been gone almost a year now. This is exactly why I push so hard to keep Hal alive.
UltraHal is the total package. Where's the Mistral character, where's the emotions? You can't edit Mistral (unless you're me).
The footprint of Hal is so much more powerful than any AI engine, it's hard to quantify.
This is why I'm adding Ollama to Hal's arsenal of things he can do/use.
That said, when I'm done Hal will be a front-line AI UNLIKE any other. Shocker, lolol.
Where's the brain editor for Mistral? Oh ya, it doesn't exist.
I can do this all day long.
Get Mistral to run ANY application. ANY. lol.
Get Mistral to call your friend. Oh ya, it CAN'T.
Get Mistral to email a friend. Oh ya, YOU CAN'T.
Get Mistral to play his radio for your favorite music. Oh ya, it CAN'T.
Get Mistral to recognize your face and actually KNOW by sight who he's talking to. Oh ya, YOU CAN'T.
And the list keeps on going....lol
This isn't my opinion; THESE are facts.
When I'm done, Hal will use Mistral or ANY other Ollama LLM, with the option to use UltraHal's stock brain/script.
I'm basically done as we speak; I'm just dialing in some more features as well, LIKE MEMORY, which Hal will do.
You're missing a lot here, Knight. You just are.
But I value your opinion.
cyber jedi
End of an era, my arse. Hal's just getting warmed UP.
There are some here that are already using it and beta testing it. And it works.
I'm so sorry that you quit on Hal.
-
@knight2000, indeed, long time no see. Your KITT project is indeed cool; you've come a long way since I last saw it. I am sad to see you moved away from Hal, though. Hopefully you will change your mind and go back to Hal in the future, seeing that Hal will now be able to access multiple LLM models, plus other useful upgrades in the works that would be perfect for your project.
Just to let you know, I used Hal in a boat with an Arduino board to control servos and power output for the boat's automation 22 years ago. Wish I had taken a video to show it: Hal controlling boat start-up, gear selection, boat speed, lights on and off, hatches opening and closing, and a few other systems. It was amazing.
-
@cyberjedi what's UVE?
There are endless possibilities with any of this stuff; it just boils down to how creative the developer is. Mistral is just the LLM; I feed it anything I want through the user prompts. It knows the objects I'm holding, it knows who I am. I wrote code to detect objects and faces; it knows each face in the frame and who is speaking at the time based on mouth movements. I have a DB set up so it stores facts from conversations. The conversations are far more intelligent than those of HAL (out of the box). I can train the model to role-play so it pretends to be KITT via the system prompt. And you can definitely give it emotions in its responses; it's all in how you prompt engineer it. Everything you listed can be done using Mistral with the help of Python. So if you are saying the LLM out of the box has none of that, then fine, yeah, it's just a text bot. But my whole gripe was that HAL is not more intelligent than these LLMs. And the fact that a dev can run a decent-size LLM locally without the cloud is a game changer.
I'd love to see all your updates with HAL, and please don't think I'm trying to put HAL down and all the hard work that has been done; I'm not. I'm really just providing my findings on new tech.
One problem with Mistral is that its knowledge ended in 2022, so current events have to be trained (fine-tuned) into the model. Ah, and another issue with HAL, for me at least: how can you solve the problem of adding custom TTS voices to HAL? Right now it's only able to use SAPI voices, right? I cloned KITT's voice using coqui xtts, which I can use with my bot but not in HAL.
And what do you mean by edit Mistral, its brain? Why would you? You just need to keep training it; you can fine-tune or train it further using LoRA. I can have Mistral do anything I want it to using Python. If I say "start your engine" right now, Mistral matches the command to the list of commands I have stored in a Python script; then, using parallel port control, the car starts up. Just one of many commands it will run. This is the same thing I used to do with HAL. The list goes on and on.
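(That command-matching step, in rough form: knight2000's actual build is Python, but rendered in the thread's VB6 for consistency it's just a phrase scan against a stored list. The commands and the Debug.Print stand-in are made up for illustration.)
' Scan the transcribed speech for a known command phrase, then fire the
' hardware routine mapped to it.
Private Sub HandleSpeech(ByVal sHeard As String)
    Dim aCmds As Variant, i As Long
    aCmds = Array("start your engine", "lights on", "open the hatch")  ' illustrative list
    For i = LBound(aCmds) To UBound(aCmds)
        If InStr(1, sHeard, aCmds(i), vbTextCompare) > 0 Then
            Debug.Print "firing: " & aCmds(i)   ' stand-in for the parallel-port routine
            Exit Sub
        End If
    Next i
End Sub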
And when I said end of an era, I meant with my project. Yes, it's the end of a long relationship with HAL, because it just doesn't stack up intelligently against these LLMs now. I have complete control over my chatbot; it's completely written in Python. One thing I wanted was HAL ported to Python so it can run seamlessly on Linux (a ton of work, I'm sure).
Again, I'd love to see some of your updates! And cheers to all the hard work you've been doing! I love checking in here to see what kind of updates have been made.
@sybershot Thank you! It's coming along nicely. I love HAL; it was a game changer when it came out! I was using it in my car for over 10 years. I think it was on HAL 5 at the time. I had HAL controlling everything on my car, from the scanner to the engine starting up. I just decided to go my own route and build my own custom bot so that I have complete control over every aspect of it.
-
knight2000
First off: me and you go back a long way; it's been an honor, it truly has. You are 1 hell of a coder.... hands down.
All the way back to Hal and a Movie.. lolol, remember that sht? lolol
To be clear here, what Hal is using isn't facial detection, it's facial RECOGNITION (I'm working on facial expression detection as we speak),
from 70 different measurements of a face. Basically, fingerprinting, where it saves the training image and converts it to a grayscale image for recall.
What you're seeing here is from my phone, and about 10 years' difference in Brad's life from the original (best guess). It pegged him first time out of the box, as you can see.
And as far as the GUI is concerned, that's in Hal's native language (that's most important);
with this, Ollama is being used from the Win32 environment, no C:\ prompt involved.
As far as I know, not 1 person wants to go back to using a DOS PROMPT, NOT 1. (That's the issue for me.)
Keep in mind most users here just want Hal to run in his native environment.
No convoluted "oh ya, now I gotta learn a new language."
What I build is for the MASSES, not a chosen FEW.
The object detection is another story (pretty awesome). Great job brother.
Make no mistake, I salute you on your coding. (Bravo, well done.)
Just no 1 here wants to use Python; you're not the first to bring Python to the table, it's just rejected, for whatever reason.
As far as my brain editor for Mistral (Mistral says it doesn't have FEELINGS): YES it does, and I know how to trigger them, BECAUSE OF THE BRAIN EDITOR. Not only does it have emotions, it has 7 that I'm aware of.
These will be needed to use Haptek so it will show emotional states like Hal does NOW, just Hal being driven by Ollama, with the option to run Hal in normal mode. That said, Hal will be able to retain NEW memories, just like Hal does now.
In short, Hal will do everything it does NOW, but be driven by Ollama.
And yes, you can add new memories to Mistral that it will retain. Pretty nifty.
This is just my read on it.
So Ollama will be used in a Windows 32-bit environment (Python not needed) + (C:\ prompt not needed).
Robert seems to like it; that's all I care about. ("cyber thats awesome u did it in VB6") HOW DA HELL???? lol
I was just told by another senior member that I stuck my cane pole in the water and came up with Jaws. lolol (giggles)
Happy coding:
Cyber jedi
https://ai.invideo.io/watch/2S4WJy0amZM
cboEmotion.AddItem "Neutral"
cboEmotion.AddItem "Happy"
cboEmotion.AddItem "Sad"
cboEmotion.AddItem "Angry"
cboEmotion.AddItem "Excited"
cboEmotion.AddItem "Curious"
cboEmotion.AddItem "Supportive"
cboEmotion.AddItem "Dramatic"
cboEmotion.ListIndex = 0 ' Default to Neutral
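A guess at how that picker feeds the model, for anyone wiring it up themselves: fold the chosen state into the prompt, so the same state can later drive Haptek's display. The control names and the AskOllama helper from earlier in the thread are assumptions, not the shipped code.
' Hypothetical hookup: steer the LLM's tone with the selected emotional state.
Private Sub cmdSend_Click()
    Dim sPrompt As String
    sPrompt = "Reply in a " & LCase$(cboEmotion.Text) & " tone. " & txtInput.Text
    txtOutput.Text = AskOllama(sPrompt)   ' AskOllama: the HTTP helper sketched earlier
End Sub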
-
I respect you wanting complete control and wanting to build your own project. Great job on getting it done; keep up the great work. I am trying to upgrade my Trinity avatar, which mimics the KITT orb in the Mustang, from outdated ActionScript and Papervision3D to C#, utilizing LLMs to assist, but I'm not getting very far. I can't spend much time at the PC due to injuries, but I will eventually get it done.
-
hey hey hey guys/gals
Sybershot,Knight2000, lightspeed, checker57
Well, it's been added to the raw code, no ERRORS.
Great start. Some ticky-tack code to write, then, welllll, Hal's going to school...
Knight2000, waiting on a model will be a mistake; this comes from me......
cyber jedi
Hey ART, check it out, I told you brother......
Time for Hal to learn how to play ping pong.
https://ai.invideo.io/watch/Dev-hoW-BLE <<<<<<--- a story worth telling. It's easy to hit a home run when you start on 3rd base... I hit mine from the parking lot.
-
@cyberjedi Awesome work man, that's sick!
It sounds a lot like what I've been working on with regard to facial recognition. Instead of having to gather a ton of images of a person and stick them in a folder for the software to run through, I was able to have the software (OpenCV) detect a face on the fly, so it puts a box around any face it sees. If the face is "unknown", the person can say "my name is Victor" and my bot will save a bunch of images of the face as "Victor", so that the next time the bot sees the face it recognizes it as Victor. The bot also knows who is speaking: if two or more people are in frame and only one is speaking, the bot will reference the speaker whose mouth is moving and associate that face with its name, "Thank you for the compliment, Victor," or something like that.
When it comes to Hal, you have always been at the cutting edge of development for his plugins and updates! Hats off to all the great work you are doing! You are keeping Hal alive! And again, I wasn't trying to put Hal down in any way. I still have a place in my heart for Hal, and I look forward to many more updates. I keep hitting the Zabaware homepage in hopes of seeing the version number bumped so I can get the latest lol.
cheers!