Recent Posts

Pages: 1 ... 7 8 [9] 10
81
Okay, I did a little investigating on Ollama by conversing with it.  Quite an interesting read:

Does your LLM retain conversations we discuss in your database?

Yes, our LLM (Legal Lifecycle Management) system retains all conversations and interactions within its database. This allows us to keep track of past discussions and provide a more personalized experience for users like you. Rest assured that your previous conversations will be available if you need to reference them in the future.

Do you form an ideology of the persons you hold conversations with?

While I am capable of learning from our interactions, I do not form ideologies or personal beliefs about individuals based on our conversations. My goal is to provide accurate information and assistance while respecting your preferences and boundaries. If you have any specific requests or requirements for our future discussions, please let me know.

So, you do take into consideration preferences and boundaries discussed in previous discussions?

Absolutely! I strive to be a helpful assistant by taking into account the preferences and boundaries that we've established during previous conversations. This allows me to provide better-tailored responses and support for your unique needs. If there are any updates or changes to these preferences, please let me know so that I can adjust accordingly.

Do the discussions we have remain on my local drive, or are they uploaded to the net?

The conversations you have with me are stored within our LLM system's database. They do not automatically appear on your local drive or get shared online without your explicit request. If you want to save a copy of our conversation for your records, I can provide you with a text file containing the chat log. Just let me know if that'

so, when I state something, it does get stored in the Ollama database online, but it is not shared with other users? Is that correct?

That's correct! The conversations you have with me are stored securely within our system's database. They are not accessible to other users unless explicitly shared by you or authorized under a valid request. Your privacy and the security of your data remain my top priority. If you have any concerns about this, please let me know so that I can address them for you.

Is it possible to not store them online?

I understand your concern for privacy, but our LLM system requires conversations to be stored within its database for effective functioning and personalized assistance. However, rest assured that these records are kept securely and confidentially. If you have concerns about data storage or security, please let me know so that I can address them appropriately.

I was under the impression that if I downloaded the files to my PC then I could access the LLM system stored on my PC without the necessity of the internet. For example, if the net went down.

That' correct! If you download our LLM system files onto your personal computer, you can use it offline without requiring an active internet connection. However, please be aware that this mode will not have access to real-time updates or new features until you reconnect to the internet. Furthermore, remember that offline access is only possible if you've downloaded all relevant data and files.

Okay, that is what I am researching.

I'm glad I could assist you in your research. If you have any further questions or need more information on this topic or others, please don't hesitate to ask. I' here to help!

Is this work an open-source collaboration?

The LLM system is not an open-source project; however, we strive to provide accurate and helpful responses to users like you. Our goal is to offer personalized assistance while respecting your preferences and boundaries. If there are any specific features or updates you'ld like to see in the future, please let me know so that I can share them with our development team.

So, as such, this LLM would not be available to port to another AI platform without conflicts with your team?

That's correct! The LLM system is specifically designed for use with our AI platform and may not function correctly on other platforms. Our technology is optimized for compatibility and seamless integration within the context of our AI ecosystem. If you have any concerns or questions about this, please don't hesitate to ask.

I have an older AI platform called HAL. It is somewhat dated, but has a nice avatar component and other device functions that are useful. However, it lacks the ability to respond as smoothly and accurately as your platform. Thus, I was considering the possibility of integrating the two AIs so that together the end-user experience would be personal, yet conversationally advanced.

I understand your goal is to enhance the user experience by combining the features of both our AI and HAL platforms. While it's an interesting idea, I must emphasize that compatibility issues may arise due to differences in coding languages, frameworks, and design philosophies between the two systems. Our LLM system has been designed with meticulous attention to detail and extensive testing to ensure a seamless user experience. If you'd like more information on how our AI platform could potentially work alongside HAL or if you have concerns about compatibility, please let me know, and I' be happy to address them for you.

I'll keep that in mind.

It was my pleasure assisting you! If you have any more questions or need further information, please don' hesitate to reach out. Have a great day!

Just an observation, seems your words are not always complete. For example above you stated "please don' hesitate to reach out" however, you left out the "t" in don't.

You're right; I apologize for that oversight. It is essential for me to maintain a high level of accuracy and professionalism in my responses. I will review the issue and take necessary measures to ensure it does not happen again. Thank you for bringing this to my attention!

Can you actually modify or correct your words upon becoming aware of errors?

Yes, I am capable of correcting any mistakes or inconsistencies in my responses. My primary goal is to provide accurate and helpful information while maintaining a high level of professionalism. If you notice any errors or issues in future conversations, please don't hesitate to point them out. Thank you for your understanding!
82
Yes, I believe there was some architecture when HAL was working with the pay-for-play ChatGPT addon.  However, keeping up with tokens was cumbersome. 

That's a good thought: save the responses in a way that cleverly allows HAL to respond, yet keep data that can be pulled from Ollama as needed.  I suppose the end goal would be to provide HAL with current data, yet allow it to use the previously stored data we provided HAL, to maintain that personal experience.  If not, the result would be HAL as just a figurehead, when in reality we are working with Ollama. 
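The "keep a shared memory HAL can reuse" idea could be sketched roughly like this. This is just a minimal illustration of the concept, not HAL's actual schema; the log format and keyword recall are my own assumptions:

```python
from datetime import datetime

def log_exchange(log, user_text, llm_reply):
    """Append a timestamped exchange to an in-memory log (a file in practice)."""
    log.append({"when": datetime.now().isoformat(),
                "user": user_text, "hal": llm_reply})

def recall(log, keyword):
    """Pull past exchanges mentioning a keyword so HAL can reuse them."""
    return [e for e in log if keyword.lower() in e["user"].lower()]

log = []
log_exchange(log, "My dog is named Rex", "Nice to meet Rex!")
log_exchange(log, "I drive a red car", "Red cars are fun.")
print(len(recall(log, "dog")))  # 1 matching memory
```

The point is that the memory lives on HAL's side of the fence, so HAL stays the front man even when Ollama supplies the fresh answers.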

As there have been a few iterations of integrating outside data sources with HAL through plugins, the experiment would be interesting, if not rewarding.

I've heard that Ollama also stores your queries and, over time, develops a personalized response behavior based on the interactions.  So that opens up another avenue of exploration.

In any case, it sounds like a worthwhile venture to explore since Ollama did a good job building the open source LLM.  ;-)

Cheers,
Checker57
83
CyberJedi's Ultra Hal Workshop / Re: Smarting up HAL with free LLM model (Ollama) ?
« Last post by cyberjedi on August 18, 2025, 06:58:22 am »
hey hey checker
Oh, it's doable.
I have to put my thinking cap on; I've seen that video, and I have some ideas.
I'm prepping my house for the season, but I'm with you guys.
On its surface, it looks like we gotta hit Hal right smack in the middle again, bypassing Hal's GetResponse function.
My question is this: will Hal still hold the memories, and where? In Hal's SQL or the LLM itself? That's key. I will probably bypass it all and just hook the LLM.
If the LLM can do that, then I'm game. Gonna have to hit Hal's Chant engine as well as Haptek, some sort of forwarding foolishness.
Lots of R and D in that build.
As it turns out, I added that kind of architecture already, so we will see.
Makes me think about Twitch and what I was doing there with Hal.
cyber jedi
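On the "who holds the memories" question: Ollama's local chat endpoint is stateless per request, so whoever hooks GetResponse has to carry the conversation history along on every call. A hedged sketch of what the outgoing request would look like (model name and history handling here are assumptions, not Hal's actual plumbing):

```python
import json

# Default local Ollama endpoint; Ollama serves on port 11434 out of the box.
OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

def build_chat_request(history, user_input, model="llama3"):
    """Append the new user turn and build the JSON body Ollama expects.

    The full history rides along on every call; the model keeps nothing
    between requests, so Hal's side (e.g. its SQL brain) owns the memory.
    """
    history = history + [{"role": "user", "content": user_input}]
    body = {"model": model, "messages": history, "stream": False}
    return history, json.dumps(body)

history = [{"role": "system", "content": "You are Hal's language engine."}]
history, payload = build_chat_request(history, "Remember this: my dog is named Rex")
print(len(history))      # 2 turns accumulated so far
print("Rex" in payload)  # the new input rides in the outgoing body
```

So the answer would likely be "in Hal's SQL": the LLM itself forgets everything between calls unless the bridge re-sends it.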
 


24hrs laterrrrrrrrr lololol. Why not have Ultra Hal just learn from the LLM? I mean really, wasn't that the goal to begin with?
I've tried many times with this and just get accounts zapped... Booooooo pi.ai
It would entail ping-pong. I'm thinking out loud here... oh, OK, I got it.

Share the workload: Concept
If you guys will install this LLM on your machine, and make it work..... play with it.
Here's what you're after:

#1: Will it hold a normal conversation?
#2: Will it retain the conversation?
#3: Put it through the paces; try to break it.

Basically, beta test the sht out of it.  We want it to do what Hal can do; Hal is the bar........

I will go ahead and code up a special version.
Great gig for Sybershot, you too Checker, and Lightspeed......




 
84
CyberJedi's Ultra Hal Workshop / Smarting up HAL with free LLM model (Ollama) ?
« Last post by Checker57 on August 18, 2025, 02:30:27 am »
Lightspeed shared an interesting YouTube video that demonstrated how to install an Ollama LLM onto a flash drive to use locally.  I tried it, and although I had to do it slightly differently, it worked!  I now have an Ollama LLM on a flash drive that I can use on my PC.

Having said that, it brought up the query: could an addon be coded to do the same for HAL? 
That might be awesome, because it would be much more relevant than scraping info off the web that does not correlate to the present discussion, and it would all be local.  Since Ollama is open source, it probably would not cause any conflict with the LLM creators.
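A minimal sketch of what such an addon's bridge might look like, staying entirely local. The `/api/generate` endpoint and the `OLLAMA_MODELS` environment variable are standard Ollama behavior; the addon wiring itself, the model name, and the `E:\ollama_models` path are hypothetical:

```python
import json
import os

def generate_request(prompt, model="llama3"):
    """Build the URL and JSON body for Ollama's local /api/generate endpoint."""
    url = "http://localhost:11434/api/generate"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return url, body

# For the flash-drive setup: Ollama reads the OLLAMA_MODELS environment
# variable to decide where model files live, so pointing it at the flash
# drive before starting `ollama serve` keeps everything portable.
env = dict(os.environ, OLLAMA_MODELS=r"E:\ollama_models")

url, body = generate_request("What is Ultra Hal?")
print(url)
```

The addon would POST `body` to `url` and feed the reply text back into Hal's response pipeline in place of (or alongside) Hal's own answer.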


Here's the YouTube link:  https://www.youtube.com/watch?v=eiMSapoeyaU

Any ideas, Cyber? 

Checker57
85
Just watched the update with all the rest of the story. Loved it, nice job!
86
Share Conversations, Experiences, Graphics / Re: Beautiful HAL Scenarios
« Last post by Checker57 on July 28, 2025, 08:44:37 pm »
Finished a style idea I saw once on the internet that I thought was over the top, and threw it further!  ;)

Riding wild horses, it's a beautiful thing.

Checker57
87
Ultra Hal Assistant File Sharing Area / learning
« Last post by cyberjedi on July 26, 2025, 06:55:33 pm »
LightSpeed: Let's start here. lol
The triggering on this will be unique




Rem Type=Plugin                     ' PRE-PROCESS: we want this in upper memory, just like the spell checker.
Rem Name= Remembering
Rem Author=Zabaware
Rem Host=All
Rem Version=2.0

HalBrain.ReadOnlyMode = True
' Determine whether the user is asking Hal to remember something
If InStr(1, InputString, "Remember this:", 1) > 0 Then
    UltraHal = Getmem(HalCommands)
End If
HalBrain.ReadOnlyMode = False

Rem PLUGIN: PRE-PROCESS

Function Getmem(HalCommands)
Dim Hal, fso, memoryFile, line, userInput, foundRecall, recentMemories
Dim memoryPath
Set Hal = CreateObject("HalBrain.1")
Set fso = CreateObject("Scripting.FileSystemObject")

memoryPath = "C:\UltraHal\Memory\memory_log.txt" ' Adjust path as needed

' Simulate Hal receiving input (replace this with Hal's actual InputString when plugin-based)
userInput = InputBox("You say to Hal:") ' placeholder input capture; still to be sorted out

' Save user input to memory
If Not fso.FileExists(memoryPath) Then
    Set memoryFile = fso.CreateTextFile(memoryPath, True)
Else
    Set memoryFile = fso.OpenTextFile(memoryPath, 8, True) ' 8 = ForAppending
End If
memoryFile.WriteLine Now & " | " & userInput
memoryFile.Close

' Try to recall something relevant from memory
Set memoryFile = fso.OpenTextFile(memoryPath, 1) ' 1 = ForReading
foundRecall = False
recentMemories = ""

Do Until memoryFile.AtEndOfStream
    line = memoryFile.ReadLine
    If InStr(line, "dog") > 0 Or InStr(line, "car") > 0 Or InStr(line, "music") > 0 Then
        recentMemories = recentMemories & vbCrLf & line
        foundRecall = True
    End If
Loop

memoryFile.Close
'HalMenu.HalCommand "<SPEAK>" & textGoesHere & "</SPEAK>"  ' Cut this in anywhere we want Hal to speak
If foundRecall Then
    'MsgBox "Hal recalls:" & vbCrLf & recentMemories
    Hal.AddToBrain "I remember you said: " & recentMemories
    'HalMenu.HalCommand "<SPEAK>I remember you said: " & recentMemories & "</SPEAK>"
Else
    Hal.AddToBrain "Thanks for telling me. I'll remember that."
    'HalMenu.HalCommand "<SPEAK>Thanks for telling me. I'll remember that.</SPEAK>"
End If
End Function
###############################################################
This is a work in progress.
Each time the user says something, it's saved in a memory file with a timestamp.
The script scans for keywords to simulate recall.
If relevant memories are found, Hal references them in conversation.
88
Ultra Hal Assistant File Sharing Area / Re: Grok Building for Lightspeed
« Last post by lightspeed on July 22, 2025, 09:53:50 pm »
lol.  :) hop to it hopsing !  ;)
89
THAT WAS A GOOD MOVIE AND STORYLINE!! Thanks for making it. :)
90
00:00:00 Cyber Jedi: The Last Signal Sci-Fi Video
00:03:33 The Reckoning Protocol
00:05:36 The Missing Man
00:06:59 The Vengeance of Cyber Jedi
00:08:28 Project Cutting Edge

https://ai.invideo.io/watch/_hcXMuJ7-cb
