« on: July 14, 2023, 03:38:18 pm »
Shastalore,
You say it did not work. Could you elaborate? Did you enable GODEL in Hal's plugins? The program is designed to run on CPU or GPU; as long as you have at least 8 GB of RAM and correctly installed Python 3.7 and its modules per my instructions, it should have either run or given an error.
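If you want to double-check your setup, here is a quick sanity check you can run with the same Python 3.7 interpreter Hal's plugin uses (just an illustrative snippet, not part of the plugin itself):

Code:
# Run with the same Python 3.7 interpreter the plugin uses.
import sys
print(sys.version)  # should report 3.7.x

import torch  # an ImportError here means the modules were not installed correctly
print("CUDA available:", torch.cuda.is_available())  # False just means it runs on CPU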
As for storage space, the model is approximately 1 GB, and the Python modules (torch, mainly) take up roughly another 1 GB.
This was designed to run even on low-end systems, provided enough RAM is available (but who is running a PC these days with less than 8 GB and expecting anything?). If you have a high-end CPU/GPU/RAM/M.2/etc., the program will run faster, as should go without saying. The catch is that while Python takes advantage of more powerful systems, Hal is still VBScript and limited to one core of any CPU; there is no way around this. So while the GODEL code will execute faster and the model will load faster, Hal will remain at a fixed speed when processing the data, so even the most powerful computers should expect a 2-4 second delay in responses.
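On the Python side, the device choice is just the standard torch pattern, something like this (illustrative, not the plugin's exact code):

Code:
import torch

# Use the GPU when one is present, otherwise fall back to the CPU.
# This only speeds up the Python/GODEL side; Hal's VBScript stays single-core.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# model.to(device)  # a loaded model would then run on whichever device was picked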
Now, included with GODEL is a recursive webscraping program. How it works: it uses fuzzy logic to determine what the user has asked and whether it was a question, then compares articles using a Wikipedia module to find the most relevant information based on a scoring metric. It's not perfect; in fact, I have already written a better version. However, webscraping is a very delicate and blurry subject for corporations, so Reddit, YouTube, Google, and Twitter are all no-goes unless you want an IP permaban. For this reason, the included webscraper is legal, effective, and safe, and only uses Wikipedia modules. That said, it does still take an additional second or two to find the relevant information and bring it to the forefront.
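To give you an idea of the approach (a simplified sketch, not the plugin's actual code), the logic looks roughly like this, using the "wikipedia" pip module and Python's built-in fuzzy matcher:

Code:
import difflib
import wikipedia  # pip install wikipedia

QUESTION_STARTS = ("who", "what", "when", "where", "why", "how")

def looks_like_question(text):
    t = text.strip().lower()
    return t.endswith("?") or t.startswith(QUESTION_STARTS)

def best_wikipedia_answer(user_input, sentences=2):
    if not looks_like_question(user_input):
        return None
    titles = wikipedia.search(user_input, results=5)
    if not titles:
        return None
    # Crude scoring metric: string similarity between the question and each title.
    best = max(titles, key=lambda t: difflib.SequenceMatcher(
        None, user_input.lower(), t.lower()).ratio())
    try:
        return wikipedia.summary(best, sentences=sentences)
    except wikipedia.exceptions.WikipediaException:
        return None

print(best_wikipedia_answer("Who was Alan Turing?"))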
For minimum hardware, I would say, as an old joke, anything that can run Minecraft. In reality, it basically comes down to RAM: 8-10 GB minimum, as with any process these days.
Honvai asked if there is a way to "add information". The short answer is no. This model has already been trained on a vast network of information and conversational responses. However, if you were to fine-tune the model (which I can do, and can eventually explain how to on this forum; it requires massive GPU memory), then you can "add" new information. Basically, the model works like a large word vector, each word corresponding to several other words, like a big web. Those words have "scores" (weights), and fine-tuning the model adjusts those weights. So, in short, you're not "adding new information", as it already contains every word humans have ever written; when the weights are updated, it is able to form sentences in a new way, like an enormous neural network. Actually, that's exactly what it is: an enormous neural network.
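To make the "adjusting weights" idea concrete, here is a minimal sketch of a single fine-tuning step using the Hugging Face transformers library (assuming the public microsoft/GODEL-v1_1-base-seq2seq checkpoint; a real fine-tune runs many steps over thousands of examples and, as I said, needs serious GPU memory):

Code:
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "microsoft/GODEL-v1_1-base-seq2seq"  # public GODEL checkpoint on Hugging Face
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# One made-up (input, target) pair stands in for a real training dataset.
inputs = tok("Instruction: given a dialog context, respond. [CONTEXT] What is GODEL?",
             return_tensors="pt")
labels = tok("GODEL is a goal-directed dialog model.", return_tensors="pt").input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
loss = model(**inputs, labels=labels).loss  # how far the current weights miss the target
loss.backward()    # compute which direction to nudge every weight
optimizer.step()   # the actual weight update -- this is what "adds" information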
If I missed anything let me know.
Spitfire2600