The Assistant is your way to interact with our chat-based LLMs.
To access the Assistant, click the Assistant 🧠 button in the bottom-right corner. When the Assistant opens, you'll be prompted to choose one of three options.
The Internet of Models (IoM) brings many different models together in one interface. You can interact with public LLMs like Claude 2 and ChatGPT, as well as webAI proprietary models like LBRA.
This is the Assistant's default setting. Chat with these models the same way you would in their own interfaces: ask the Assistant to generate an image, write or summarize text, or anything else you'd prompt an LLM to do.
To use LBRA (our multi-modal image intelligence model), upload an image and it will generate a description. From there, you can ask it to identify objects in the image or write more text around it.
1. Open the Assistant
2. Type "Build a Local Expert"
3. Select either a preloaded dataset or have the system create one about a topic of your choice
4. If you want to add your own data, drop it into the following folder inside Application Support: /library/application-support/webai/runtime/outputs
5. To have the system create a dataset, open your profile and settings and add API keys for OpenAI, Claude, and PaLM. All three are required: our architecture poses questions to all three models, then finds areas of consensus and disagreement to build more accurate experts than any single model can provide alone.
6. Add the topic you'd like to create an expert around
7. Add any additional documents you'd like included in the training data
8. Choose the size of the training dataset you'd like generated. Larger datasets produce more accurate and comprehensive experts, but take longer to train.
9. Press Play to start creating the dataset
10. When the training data is compiled, Press Play to train your expert
11. When the expert is ready, chat away!
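The consensus-and-disagreement idea behind step 5 can be sketched roughly as follows. This is a hypothetical illustration, not webAI's actual implementation: the model names, the `ask_model` stub, and the agreement logic are all assumptions, and a real version would call the OpenAI, Claude, and PaLM APIs using the keys you added.

```python
from collections import Counter

# Hypothetical stand-in for querying one of the three LLMs.
# A real implementation would call each provider's API here.
def ask_model(model, question):
    canned = {
        ("gpt", "capital of France?"): "Paris",
        ("claude", "capital of France?"): "Paris",
        ("palm", "capital of France?"): "Paris",
        ("gpt", "tallest mountain?"): "Everest",
        ("claude", "tallest mountain?"): "Everest",
        ("palm", "tallest mountain?"): "K2",
    }
    return canned[(model, question)]

def build_training_pairs(questions, models=("gpt", "claude", "palm")):
    """Split questions into consensus pairs and flagged disagreements."""
    consensus, disagreement = [], []
    for q in questions:
        answers = [ask_model(m, q) for m in models]
        top, count = Counter(answers).most_common(1)[0]
        if count == len(models):
            # All models agree: keep as a high-confidence training pair.
            consensus.append((q, top))
        else:
            # Models disagree: flag for further generation or review.
            disagreement.append((q, answers))
    return consensus, disagreement

consensus, disagreement = build_training_pairs(
    ["capital of France?", "tallest mountain?"]
)
```

Here all three models agree on the first question, so it becomes a training pair, while the second is flagged as a disagreement; requiring all three API keys is what makes this cross-checking possible.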