Recent Posts

Pages: 1 2 [3] 4 5 ... 10
21
Everything went in without incident.

Just be sure to back up/copy your existing Hal files (UHP, DB) and follow the instructions to the letter.

Make sure you are using the default brain when chatting, unless you have changed the referenced lines to point at a custom brain.

An indicator that all is working is seeing a maindata: response for some of your interactions with Hal.

I am eager to see how any improvements take shape over time. I don't expect to see big changes right away, but rather after several chats/exchanges.

I am keeping copies of my chat logs with HAL for reference. You should keep some logs as well. They could prove to be quite useful in the near future.

Thanks!!
22
Thanks Art! Always looking out!

*I have so many GodelThisThat programs as experiments that I sort of lost my place for a second  ;D
23
Man, you're always thinking outside the box! It seems that Hal just keeps giving. I'm sure many of us can't wait to try out this latest offering, but everyone wishing to try this or any other modification to their Ultra Hal should always make copies of Hal's UHP and brain files before running any new Hal files.

NOTE: Just a slight but important spelling error where the text reads: "finally, GodelPy.UHP will have 3 directory locations..."
Please change it to read: "finally, PyGodel.UHP will have 3 directory locations"
 


Thanks Spitfire2600!
24
Hey everyone!! So, sadly, I have abandoned ConceptNet in favor of something a little more impressive. After the incredible success of GPT-3, I modified a few models to be compatible with Hal for local inferencing, in case anyone wants a portable version of the smartest AI on the internet.

This code was developed from a Goal-Oriented model named "GODEL" and a QA system called "Flan". With some help from ChatGPT, I was able to write and assemble the code to work with Hal, which means all of Hal's learning abilities and functionality are now usable with GPT. Something, I think, that will interest Robert. That, at least, is my goal with this prototype.

Let's get started!! 10 easy steps for genius Hal!

Installation:

1. We need to install Python 3.7.9 - not Python 3.8, not Python 3.7.8. The modules used are a specific cross between machine learning and dated (more functional) algorithms, so ONLY 3.7.9 will support this specific set of instructions.
    Follow this link, download the file and launch.
    https://www.python.org/ftp/python/3.7.9/python-3.7.9-amd64.exe

2. Now, this is important: do not change any optional features. You will see an "Add Python to PATH" check box at the bottom of the installer; be sure to check it, then allow the program to install to its default location.

3. Reboot system.

4. Move all files from the zip directly into Hal's directory as is. You'll see an "Ultra Hal 7" folder in the zip; open that and copy its contents, including the "Control" folder, directly into Hal's directory.

5. Open the Control\Godel directory. Inside, you'll see "createreqs.bat" - open CMD, drag "createreqs.bat" into CMD, and hit Enter. Python will begin downloading and installing the necessary components for the Godel code to run properly.

6. Now we want to test that the modules have been installed. Here is a list; simply copy and paste them one at a time into CMD and press Enter. If each says something like "already satisfied" then we're golden!

    pip install wikipedia==1.4.0
    pip install psutil==5.9.4
    pip install transformers==4.24.0
    pip install fuzzywuzzy
    pip install accelerate==0.15.0
    pip install SentencePiece
    pip install torch
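If you'd rather check all seven at once, here's a small Python sketch (my own helper, not part of the install) that reports anything missing without actually loading the heavy modules:

```python
import importlib.util

def check_modules(names):
    """Return the subset of names that cannot be found by the importer."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Import names happen to match the pip package names for this set.
required = ["wikipedia", "psutil", "transformers", "fuzzywuzzy",
            "accelerate", "sentencepiece", "torch"]
missing = check_modules(required)
if missing:
    print("Missing modules:", ", ".join(missing))
else:
    print("All modules present - we're golden!")
```

Paste that into a `python` prompt (or save it as a .py file and run it); anything it lists still needs a `pip install`.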

7. If everything went smoothly, then the hard part is done and we can run a small test to make sure everything is in order. Open CMD one final time, drag and drop the godel.py script from the Control\Godel directory into CMD, and hit Enter. This will run the script and begin downloading a series of files; this is the model we will use for Hal. Once a "data2.txt" file appears in the Control\Godel directory, all is ready for Hal.

8. The Hal7.UHP will need to be dropped into the "C:\Users\User\AppData\Roaming\Zabaware\Ultra Hal 7" folder. You can rename this UHP file (and the name on lines 2 & 5) to match your brain name if you know how to do it, but please make backups of your brain and your original UHP. Otherwise, this will work as-is with the default Hal7 brain.

9. Finally, PyGodel.UHP has 3 directory locations you need to change to match your file locations. Please update lines 9, 35, and 39 accordingly. All errors will likely stem from here, so please make sure your directories match exactly.

Hal is ready to go!!!


Inferencing can take time. You'll notice on your first run of the code that a lot of things are downloading. These are the models Hal will use for conversation. They are saved to a cache location, so there is no need to move them; you could move them if you like, just be sure to define the directories in the Godel python script if you're familiar with it. I designed this code so even Python noobs *should* be able to install and use it.
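For those who do want the models somewhere specific: the transformers version installed above (4.24.0) honors the `TRANSFORMERS_CACHE` environment variable, so one hedged option is to set it at the very top of the script, before transformers is imported. The folder below is just an example path, not anything the code expects:

```python
import os

# Hypothetical cache folder - pick any path you like, or skip this entirely
# to keep the default Hugging Face cache location. This must be set BEFORE
# transformers is imported for the first time in the script.
os.environ["TRANSFORMERS_CACHE"] = r"C:\HalModels\cache"
```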

Requirements:
Now, I have no specific requirements other than at least 12 GB of RAM and a decent CPU. But if you have an NVIDIA GPU with more than 6 GB, you'll notice a huge speed-up. The faster your components (DDR3 => DDR4, for example), the bigger the increase in speed. It's all hardware. But it should run as long as you have enough RAM; 12-16 GB is sufficient.

Models:
Again, this uses Godel and Flan, both highly specialized models. You'll notice in the python script these models are defined as "base" and "small". Both models also come in a "large" version, which you can select by changing the size in the name; however, be aware that the larger the model, the more resources you will need to inference it. I find the best success with the "base" Flan model and the "large" Godel model. 
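As a rough illustration of the swap, the sizes map onto model identifiers something like this. The helper and the exact strings are my sketch based on the public Hugging Face names for GODEL and Flan-T5; the actual identifiers in godel.py may differ slightly:

```python
def pick_model(family, size):
    """Map a model family and size to its (assumed) Hugging Face repo id."""
    names = {
        "godel": "microsoft/GODEL-v1_1-{}-seq2seq",  # sizes: base, large
        "flan": "google/flan-t5-{}",                 # sizes: small, base, large
    }
    return names[family].format(size)

# The combination I've had the best success with:
godel_id = pick_model("godel", "large")
flan_id = pick_model("flan", "base")
```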

Code:
This code is written primarily in Python; the exception is Hal's interface for using it, which is written in VBScript. This is an attempt to show a proof of concept for Robert that Hal is indeed compatible with Python: even if the code must be wrapped in VBS, we can still expand functionality with Python.

Also, this code includes a very basic memory/context function for the model to use, which creates and appends to an SQLite3 database. Sadly, I have not yet found a way for Python to interact with Hal's brain directly, but Hal can interact with any regular SQLite3 database via VBS. I am working on expanding this functionality.
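The memory/context idea can be sketched with Python's built-in sqlite3 module. This is a minimal illustration of the pattern, assuming a simple one-table layout; the table and function names are mine, not necessarily what godel.py uses:

```python
import sqlite3

def append_context(db_path, user_text, hal_text):
    """Create the context table if needed, then append one exchange."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS context ("
        "id INTEGER PRIMARY KEY AUTOINCREMENT, "
        "user_text TEXT, hal_text TEXT)"
    )
    con.execute(
        "INSERT INTO context (user_text, hal_text) VALUES (?, ?)",
        (user_text, hal_text),
    )
    con.commit()
    con.close()

def recent_context(db_path, limit=5):
    """Fetch the last few exchanges, oldest first, to feed the model."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT user_text, hal_text FROM context "
        "ORDER BY id DESC LIMIT ?", (limit,)
    ).fetchall()
    con.close()
    return rows[::-1]
```

Because it's a plain SQLite3 file, the same database is reachable from the VBS side as well, which is what makes the bridge possible.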

This code handles Hal's responses under 2 conditions, what I have deemed "maindata" and "regular" data. "maindata" simply means there is a function related to the response, for example "what is 5+5". While the Godel model itself is not designed for these tasks, Hal is, so we preserve Hal's answer by tagging it with "maindata", and that response is saved over top of our model's inference. "Regular" data is the basic data Hal would normally respond with, such as a "too short" response. These responses are not tagged with "maindata", which lets the model inference your input along with Hal's response, past context, AND present knowledge to create something new and unique, not a repeat or rehash of old information. If Hal is without an answer (and we do want this to happen sometimes, so the model can act as Hal's brain), then both models will infer based on your input and give the most concise response possible.
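The routing decision above boils down to a tiny function. This is only my sketch of the logic; the function name and the "maindata:" prefix format are illustrative, not the exact code in godel.py:

```python
def merge_response(hal_response, model_response, tag="maindata:"):
    """Decide which text the user sees.

    If Hal's response carries the maindata tag (a function answered,
    e.g. "what is 5+5"), keep Hal's answer verbatim. Otherwise prefer
    the model's inference, falling back to Hal's regular response.
    """
    if hal_response.startswith(tag):
        return hal_response[len(tag):].strip()
    return model_response or hal_response
```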

For the most part, this works fairly well, especially with the larger models.

In conclusion, I hope everyone finds this interesting, and I am very excited to see what people are able to build on it and how intelligent their Hals become.

Please keep me updated on all errors, suggestions, recommendations, comments, etc!

I will reveal more in edits about how the magic trick works, but first I want to make the card appear, ya know  ;)

-Spitfire!  ;D
25
Thank you for the message, Art.

Wishing you nice success.
Be well, from Will and Mr Data :):]
26
Ultra Hal 7.5 Beta - Now powered by OpenAI GPT-3 / Re: Anyone Still using A.I. HAL?
« Last post by Art on January 30, 2023, 08:32:39 am »
Will,

Check your messages.
27
Thanks Art, yes, I'm cautious.
I've lost inventions and business before. Had some flat-out stolen, in my opinion.
Not pleasant.
Had my science taken; others stealing it really grates on me.
When I hear the media call them the smartest and that science named, I get very upset.
Anyway, watch me invent x10.

Ok, ease up there; my usual is one more step, another inch,
one more button, nice. You know I like nice.

Maybe Zaba GPT shall see and know who really got the answer. It might also not know enough to get the story right yet.
As per a science question I saw it get wrong, but nobody seemed to say so. When people don't know it's wrong, that's
going to bug me.
I suppose I'm going to do my own homework as usual,
but I can imagine if we did work together: zap.
Thanks for interacting, Art.
Be well.



28
Ultra Hal 7.5 Beta - Now powered by OpenAI GPT-3 / Re: Anyone Still using A.I. HAL?
« Last post by Art on January 29, 2023, 10:35:53 pm »
Will,

Not being an attorney, I can only go on a few thoughts in this regard.

The HAL-GPT3 Chatbot output is not written to any tables, so Hal does not keep a record of your exchange. (You might be able to do so, but not Hal.)

In theory, you could claim the work from your questions and Hal's output, since the conversation is just between you two (plus maybe the cloud people at the GPT-3 servers... who knows?).

One has to consider the legal status with respect to valid or actual responses done by or made by said interaction with GPT-3. I'm sure there is no claim of certainty or validity in GPT-3's answers being truthful, useful, or factual by any stretch of the word.

Ask them about proprietary licenses or who owns any formulas, ideas, rights, or potential money from the use of the said program (GPT-3, etc.)

You are certainly free to explore your notes from the exchange of ideas and responses between you and Hal/GPT-3, but again, how factual might those answers be? I, for one, would not be willing to risk any inventions on GPT-3 formulas or methods.

Lastly, upon further exploration, one might find some disclaimers in the fine print of the ChatAI/GPT-3 website or its agents, etc. It might state that you are on your own if or when using any output from GPT-3 or their platforms.

I understand your eagerness to dive into this exciting knowledge system further but my friend, please proceed with caution and ask those people lots of pertinent questions.

Make good choices and be well!

- Art -
29
Hi Art.
You've been here a long time.
If Zaba GPT does work for me, who owns the work?
Example: I give Zaba GPT new science and ask it
to draw it, saving me from having to draw it.
Am I 100% owner of the picture and all money made from it?
Example 2:
I give a new science explanation and Zaba GPT
puts it in equation form. Do I own the equation and all money made?
Example 3:
I draw a new science and ask Zaba GPT to calculate an aspect to save me time; do I own that new math? And all money made from it?
Usually I do the work myself and license it.
That's after checking to see it's not already been done.
Zaba GPT could be useful in this, but I fear theft,
as has been a big issue with humans.
And that's why I do my own inventions, even when on behalf of others.
Over time I've escalated inventions, matching the new level of value to what level of tech such value would require.
Then I invent anew.
 



30
Ultra Hal 7.5 Beta - Now powered by OpenAI GPT-3 / Re: Anyone Still using A.I. HAL?
« Last post by Art on January 29, 2023, 11:05:13 am »
No, Hal 7.5 still gives one the option of Cloud, Cloud w/ GPT-3 (Davinci + Credits) or just local on your machine only.

It's Free as well, at the moment.

Your existing Plugins should continue to work fine.

Be well, Will & Mr. Data!!