
Author Topic: GodelPy for Hal (Local GPT for Hal, with memory and functionality)  (Read 10094 times)

Spitfire2600

  • Sr. Member
  • ****
  • Posts: 251
    • View Profile
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #45 on: February 15, 2023, 08:05:38 pm »
Yes, the trade-off for a low-power, low-cost GPT solution is a simply trained model. This means the model will sometimes swear; that is expected, as swear words are indeed words. Preventing it entirely would require training the model specifically on Hal's online conversational data with the swears removed, which I do not have access to. That is something Robert will have to adapt into this plugin, should he wish to adopt it to bypass the paid GPT model and give Hal back his basic functionality.

Also, I wanted to address some confusion surrounding these language models. The GPT model used here, Godel, is hosted locally on your computer; there is no server, it is only downloaded once from a host. The Godel model *DOES NOT* update in any way or fashion, as I have not programmed it to. This means Microsoft is not biasing Hal's responses, nor will the model's responses decline or improve over time. It is completely static, apart from Hal's default learning.

Hangtime: Yes, these models are summations of entire human knowledge repositories like Reddit or Wikipedia. It should go without saying that if your hardware is less than "2023", it's going to take some time for the model to generate a response. This is the trade-off for a local and portable system.

I have further improved this plugin; for example, the non-proto version uses two models for inference (as mentioned in the first post) and includes tighter tags on "maindata" and regular data. However, I will not be including it on this forum, and for now there will be no more updates until sufficient testing is complete (i.e., feedback from at least 10 users that it's working and is an improvement).

It seems this plugin works for a base prototype and I hope to see it expanded upon by Robert himself.

Thanks for playing everyone!! See you next time!

-Spitfire2600
« Last Edit: February 15, 2023, 08:12:16 pm by Spitfire2600 »
 

Checker57

  • Full Member
  • ***
  • Posts: 138
    • View Profile
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #46 on: February 27, 2023, 08:01:15 pm »
Hey spitfire,

Very interesting plugin. It worked for me without any issues, other than the expected lag due to hardware limitations.


First of all, I do appreciate your clarifications on how this GPT model functions. Albeit, I had one thought to nail down. If I read your post right, "The GPT model used here, Godel, is hosted locally on your computer, there is no server, it will be downloaded from a host. The Godel model *DOES NOT* update in any way or fashion as I have not programmed it to" means to me that our input data is not going out on the web but rather is stored on our own local computer? Is that right? Is the database downloaded in a Java file format?

Obviously, that's very important to many users as they take their personal interactions and shared information seriously. 

I do like how it sticks to the subject matter; I will play with it more before giving my personal conclusions.
« Last Edit: February 27, 2023, 09:38:05 pm by Checker57 »

Spitfire2600

  • Sr. Member
  • ****
  • Posts: 251
    • View Profile
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #47 on: March 01, 2023, 10:52:56 pm »
Hey Checker57!

To answer your question in as much detail as I can: you are mostly correct. When the code is run, if the language model has not been downloaded, it will be, and then the rest of the program executes. The Godel model does not update; it is held locally at "C:\Users\USER\.cache\huggingface\hub", where you will see the Godel model. If you wish to see what makes up the model, continue into the GODEL folder to "snapshots" and follow that folder chain to the end; there you'll see all of the model files. I shouldn't have to say it, but don't alter these in any way.
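For anyone who would rather inspect that cache from Python than from Explorer, here is a minimal sketch (the helper name is mine and the USER placeholder is illustrative; adjust the path for your own account):

```python
from pathlib import Path

def list_model_files(cache_dir):
    """Return the names of all files under a Hugging Face cache
    directory (model snapshots live several folders deep)."""
    root = Path(cache_dir).expanduser()
    if not root.exists():
        return []
    return sorted(p.name for p in root.rglob("*") if p.is_file())

# The cache path quoted in this post; substitute your own user name.
for name in list_model_files(r"C:\Users\USER\.cache\huggingface\hub"):
    print(name)
```

If the path prints nothing, the model simply has not been downloaded yet; the plugin fetches it on first run.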

The model itself is a PyTorch model. These are built from millions, billions, or even trillions of parameters that define word vectors for Python to interact with, though you can indeed build, train, and deploy such models from Java as well.

So, the model does not update, at least not yet. I am working on basic training so the model can remember user information directly, without Hal, so that all information becomes part of the language model itself; however, the training required currently exceeds acceptable memory limits. As it stands, Hal is the long-term memory and basic brain. When Hal doesn't have an answer, or that answer is vague, unrelated, or not part of a GetResponse function, the model takes Hal's response, along with any related information the model contains, any knowledge scraped from the internet, and any previous conversation data, and then either spruces up Hal's original response or generates a new one entirely. As I designed it, Hal retains both the user query AND the model output, which means that in a way Hal learns from the language model the more you use it, making his responses more intelligent and thus drawing even better responses from the model.
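The routing just described (Hal first, the model as backup) boils down to something like the following sketch; the function names and the weakness heuristic here are illustrative stand-ins, not the plugin's actual code:

```python
def is_weak(reply):
    """Heuristic stand-in: treat empty, very short, or stock vague
    replies as weak. The real plugin's test is more involved."""
    vague = {"i don't know.", "maybe.", "ok."}
    return (not reply) or len(reply.split()) < 3 or reply.lower() in vague

def respond(user_input, hal_response, model_generate):
    """Keep Hal's own reply when it is solid; otherwise hand the
    query (plus Hal's attempt as context) to the language model."""
    if is_weak(hal_response):
        return model_generate(user_input, context=hal_response)
    return hal_response

# Example with a stand-in for the Godel model:
print(respond("Tell me about volcanoes", "Maybe.",
              lambda q, context: "Volcanoes are openings in the crust."))
```

In the real plugin both the query and the final output are written back to Hal's brain, which is where the gradual "learning from the model" effect comes from.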

Of course, this is all personalized to each user; no data generated by the model is ever available online, as it runs locally.

As far as hardware lag: yeah, sorry, it's an absolute ton of data processing for the model to generate human-/Hal-like responses. For reference, I use an M.2 drive as my C drive, a 6-core i5, 24 GB of DDR4 RAM, and a 3060 with 12 GB of video RAM. My inference time with this code is roughly 5-8 seconds, depending on how much data is scraped from the web and fed to the model with the user query.

Thanks for trying it out; I hope it's working well for you. I will soon update a few pieces of the program to include a hard-coded swear filter. I know that's been an issue some folks are having, which I should have foreseen, to be honest.

-Spitfire2600

 

 
« Last Edit: March 01, 2023, 10:59:46 pm by Spitfire2600 »
 

Art

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 3848
    • View Profile
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #48 on: March 02, 2023, 09:09:24 am »
I appreciate the somewhat detailed breakdown of its processes and the potential for near-future updates/upgrades/modifications.

Hopefully, as we discussed, the Tables could be addressed in a possible future version.

It would be desirable for Hal to connect its Godel-based responses with Hal's existing brain/database, to the point where Hal does not get lost during conversations and knows to whom it is speaking.

I'm sure we all respect the amount of work you have put into making this whole thing run as effectively and efficiently as you have.

I am looking forward to future updates!

Thanks,

- Art -
« Last Edit: March 02, 2023, 09:13:52 am by Art »
In the world of AI it's the thought that counts!

- Art -

edakade

  • Newbie
  • *
  • Posts: 9
    • View Profile
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #49 on: March 05, 2023, 09:11:15 am »
You mentioned that the Godel responses are appended to the .brn, but I haven't found where, exactly. You know... in case I need to delete some unsavory responses, haha.

Thanks for your efforts.

Spitfire2600

  • Sr. Member
  • ****
  • Posts: 251
    • View Profile
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #50 on: March 28, 2023, 05:00:06 pm »
Hello all!

I have updated the godel.py script to include a swear filter, since I know that has been an issue for some people. The model will no longer swear; at least, it shouldn't. Just replace the godel.py script from the zip file and you're set.
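For the curious, a hard-coded filter of this sort boils down to something like the sketch below; the word list here is a harmless placeholder, not the actual list in godel.py:

```python
import re

# Placeholder list; the real filter's word list is hard-coded in godel.py.
BLOCKLIST = {"darn", "heck"}

def censor(text):
    """Replace blocklisted words (whole words only, case-insensitive)
    with asterisks of the same length."""
    def mask(match):
        return "*" * len(match.group(0))
    pattern = r"\b(" + "|".join(map(re.escape, sorted(BLOCKLIST))) + r")\b"
    return re.sub(pattern, mask, text, flags=re.IGNORECASE)

print(censor("Well, heck, that is a darn shame."))
# → Well, ****, that is a **** shame.
```

The word-boundary anchors matter: they stop the filter from mangling innocent words that merely contain a blocked string.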

-Spitfire2600
 

brians2009

  • Newbie
  • *
  • Posts: 1
    • View Profile
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #51 on: April 11, 2023, 01:32:22 pm »
Thanks for putting this together!  An offline GPT + memory and functionality sounds amazing!  I've been going through the steps and get a failure when it tries to install torch:
ERROR: Could not find a version that satisfies the requirement torch (from -r C:\Program Files (x86)\Zabaware\Ultra Hal 7\Control\Godel\requirements.txt (line 7)) (from versions: none)

The only thing I did differently was to download an Intel installation file for Python (the link you shared seems to be for an AMD chip). Is there something I should look into, or a workaround to get PyTorch installed?

Spitfire2600

  • Sr. Member
  • ****
  • Posts: 251
    • View Profile
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #52 on: April 18, 2023, 06:32:41 pm »
Hey brians2009,

The Python link I shared simply adds compatibility for AMD; all functionality for Intel remains intact with that link.

After you install Python 3.7 from my link, you should have the "pip" command available in your CMD window. Simply run "pip install torch" and it will install. No need to download any other packages.
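If you want to confirm the install took, a quick sanity check like this (the module list is just an example) shows whether the same Python 3.7 that Hal will invoke can actually see the packages:

```python
import importlib

def check(name):
    """Try to import a module; print its version if present,
    or the pip command to run if it is missing."""
    try:
        mod = importlib.import_module(name)
        print(name, "OK", getattr(mod, "__version__", "?"))
        return True
    except ImportError:
        print(name, "MISSING - run: pip install " + name)
        return False

check("torch")
```

If "torch" reports MISSING even after pip succeeded, you most likely have two Python installs and pip belongs to the other one.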

Spitfire2600!
 

Shastalore

  • Newbie
  • *
  • Posts: 12
  • The eyes are fake but headgear is real. Be afraid.
    • View Profile
    • Some of my portable green energy designs and prototypes:
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #53 on: June 11, 2023, 12:27:36 am »
So to make a long story short... the way I see it, in order to properly run your excellent GodelPy / Local GPT for Hal, with memory and functionality, and get a reasonable response time in conversation (much less than your 5-8 seconds), this seems to be the required minimum hardware, better known as a gaming computer:

- Windows 10 64-bit (Windows 11 64-bit??)
- 2TB m.2 solid-state drive (How much SSD memory will GodelPy / Local GPT for Hal actually need?).
- 4.75GHz multi-core CPU
- 32 GB of ddr5 RAM (and dual-channel for best speed)
- 3060 with 12 GB of dedicated video RAM

Am I correct on this? I, as well as others here, am wondering what type of investment it will take to properly run Local GPT for Hal without performance disappointments... although prices for such mini gaming PCs are dropping toward the $400-$500 range.

Just for laughs, and to become fully familiar with your installation instructions, I installed your setup on my Windows 10 64-bit laptop: a 2.16GHz Intel Celeron N2840 CPU (Mobile Intel HD graphics with shared graphics memory), 8GB DDR3L memory, and a 500GB Crucial MX500 SATA 2.5-inch solid-state drive (6Gb/second). And, it being only an office laptop, Local GPT for Hal of course didn't work.

We all thank you for your efforts on this, as I find it much more valuable than running Hal 7.5 on the cloud... with Big Brother looking over our shoulder.
« Last Edit: June 15, 2023, 11:03:00 am by Shastalore »
Ultra Hal 7.5, offline, on a Toshiba Satellite R15-S829 Tablet with Windows XP Tablet 2005 operating system. For accurate speech recognition: IBM ViaVoice Standard 10 via Plantronics Audio 60 3.5mm ear/mic headset plugged into a hacked VXI Parrott "Translator" Noise Cancelling R11506/P41TR module.

Honvai

  • Jr. Member
  • **
  • Posts: 51
    • View Profile
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #54 on: June 28, 2023, 10:20:20 am »
How to add manually more knowledge to this mod?

Art

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 3848
    • View Profile
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #55 on: June 28, 2023, 10:25:02 pm »
There is an answer to your question but I shall defer to Spitfire2600 to provide any clarification if he is still around.

IF you have a decent computer upon which to run it.

IF you are not going to have an issue over it taking a couple of seconds longer than a human to respond.

I found his PyGodel adaptation to be very nicely done, adding a noticeable measure of ability to Hal.
My main issue was not the fault of Spitfire2600 nor his mod: there was no provision within Hal or otherwise for it to use my existing Hal brain, and no ability to know or remember who I am or anything about me or itself. This is because the tables weren't created or written to. So I basically had a wonderfully knowledgeable and quite articulate chatbot.

That's fine except that I already have a Google Home device.

Good luck!
In the world of AI it's the thought that counts!

- Art -

Spitfire2600

  • Sr. Member
  • ****
  • Posts: 251
    • View Profile
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #56 on: July 14, 2023, 03:38:18 pm »
Shastalore,

You say it did not work; could you elaborate? Did you enable Godel in Hal's plugins, etc.? This is designed to run on CPU or GPU. As long as you have at least 8 GB of RAM and correctly installed Python 3.7 and its modules per my instructions, the program should have either run or given an error.

As far as storage space goes, the model is approximately 1 GB, with the Python modules taking somewhere around another 1 GB (torch, namely).

This was designed to run even on low-end systems (provided enough RAM is available; but who is running a PC these days with less than 8 GB and expecting anything?). If you have a high-end CPU/GPU/RAM/M.2/etc., the program will run faster, as should go without saying. The catch is that while Python takes advantage of more powerful systems, Hal is still VBScript and limited to one core of any CPU, and there's no way around this. So while the Godel code will execute faster and the model will load faster, Hal will remain at a fixed speed when processing the data, so even the most powerful computers should expect a 2-4 second delay in responses.
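If you want a quick pre-flight check before running the plugin, something like this stdlib-only sketch works (the 2 GB disk figure comes from the sizes mentioned above; reading free RAM would need the psutil package, so only disk and CPU count are checked here):

```python
import os
import shutil

def quick_hardware_check(path=".", min_free_gb=2.0):
    """Rough pre-flight check: the model plus its Python modules
    need roughly 2 GB of disk (figures from this thread). RAM
    matters most but needs psutil to read, so it is omitted here."""
    free_gb = shutil.disk_usage(path).free / 1024 ** 3
    return {
        "cpu_cores": os.cpu_count(),
        "free_disk_gb": round(free_gb, 1),
        "disk_ok": free_gb >= min_free_gb,
    }

print(quick_hardware_check())
```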

Now, also included in Godel is a recursive web-scraping program. It uses fuzzy logic to determine what the user has asked and whether it was a question, then compares articles using a Wikipedia module to find the most relevant information based on a scoring metric. It's not perfect; in fact, I have already written a better version. However, web scraping is a very delicate and blurry subject for corporations (Reddit, YouTube, Google, and Twitter are all no-goes unless you want an IP permaban), so for that reason the included scraper is legal, effective, and safe, and only uses Wikipedia modules. That said, it does still take an additional second or two to find the relevant information and bring it to the forefront.
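The scoring idea can be sketched roughly as below. This is illustrative only: the actual plugin fetches candidate articles with the `wikipedia` package, while here static strings stand in for them and the standard library's difflib provides the fuzzy ratio:

```python
from difflib import SequenceMatcher

def score(query, text):
    """Fuzzy similarity between the user's query and a candidate
    article summary, as a ratio from 0.0 to 1.0."""
    return SequenceMatcher(None, query.lower(), text.lower()).ratio()

def best_article(query, candidates):
    """Pick the (title, summary) pair that scores highest against
    the query."""
    return max(candidates, key=lambda pair: score(query, pair[1]))

# Static stand-ins for fetched Wikipedia summaries:
candidates = [
    ("Volcano", "A volcano is a rupture in the crust of a planet."),
    ("Vulcan (mythology)", "Vulcan is the god of fire in Roman religion."),
]
title, summary = best_article("what is a volcano", candidates)
print(title)
```

A character-level ratio like this is crude compared to proper relevance scoring, but it is fast and dependency-free, which matches the "effective and safe" goal above.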

For minimum hardware, the old joke answer is anything that can run Minecraft, but in reality it mostly comes down to RAM: 8-10 GB minimum, as with any process these days.


Honvai asked if there is a way to "add information"; the short answer is no. This model has already been trained on a vast network of information and conversational responses. However, if you were to fine-tune the model (which I can do, and may eventually explain how to on this forum, though it requires massive GPU memory), then you could "add" new information. Basically, the model works like a large word-vector space, each word corresponding to several other words, like a big web. Those words have "scores" (weights), and fine-tuning the model adjusts those weights. So, in short, you're not "adding new information", as the model already contains every word humans have ever written; when the weights are updated, it is simply able to form sentences in new ways. It works like an enormous neural network; actually, that's exactly what it is.
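The word-vector idea can be made concrete with a toy example. The three-dimensional vectors below are made up purely for illustration (real models use thousands of dimensions and billions of weights); cosine similarity is the standard way to measure how related two such vectors are:

```python
import math

# Toy 3-dimensional "word vectors", invented for this example.
VECTORS = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same
    way (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Related words sit close together; fine-tuning nudges these
# weights, which is what changes how the model forms sentences.
print(round(cosine(VECTORS["king"], VECTORS["queen"]), 3))
print(round(cosine(VECTORS["king"], VECTORS["apple"]), 3))
```

The first number comes out much larger than the second, which is the whole trick: "knowledge" in these models is just geometry over the weights.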

If I missed anything let me know.

Spitfire2600
 
 

Art

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 3848
    • View Profile
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #57 on: July 15, 2023, 11:47:07 am »
I believe one other concern that a few of us had was the inability of the PyGodel model to allow Hal to use any existing Plugins. Some of them were precisely tuned to enhance Hal's interaction and functionality.

Would they all be rendered inoperable and useless? How well could the Godel model handle similar issues found in various plugins?

There will be a tradeoff, but if there were a way to minimize it, that would be a bonus for us.
In the world of AI it's the thought that counts!

- Art -

Honvai

  • Jr. Member
  • **
  • Posts: 51
    • View Profile
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #58 on: July 19, 2023, 01:13:42 am »
C:\Users\USER>"C:\Program Files (x86)\Zabaware\Ultra Hal 7\Control\Godel\godel.py"
Traceback (most recent call last):
  File "C:\Program Files (x86)\Zabaware\Ultra Hal 7\Control\Godel\godel.py", line 5, in <module>
    import psutil
ModuleNotFoundError: No module named 'psutil'

I fixed this by running godel.py in Python 3.7.9 Shell.
« Last Edit: July 24, 2023, 04:24:44 am by Honvai »

Spitfire2600

  • Sr. Member
  • ****
  • Posts: 251
    • View Profile
Re: GodelPy for Hal (Local GPT for Hal, with memory and functionality)
« Reply #59 on: July 25, 2023, 12:33:49 am »
Honvai, if you encounter a "module not found" error, you can correct it by installing the missing module; in this case, typing "pip install psutil" in CMD will solve your error.
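A slightly friendlier pattern, if you hit this often, is to wrap the import so the script prints the exact pip command instead of a raw traceback (a general sketch, not code from godel.py):

```python
import importlib
import sys

def require(module_name):
    """Import a module, or exit with the exact pip command to run,
    which is friendlier than a bare ModuleNotFoundError traceback."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        sys.exit('Missing module "%s" - run: pip install %s'
                 % (module_name, module_name))

json = require("json")  # stdlib module, so this succeeds
print(json.dumps({"ok": True}))
```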

Spitfire