Model Configuration
A model configuration is a foundational resource in PromptOpinion: it is used by agents and guardrail agents alike, so create one before building your agents.
The model configuration page in the web app can be found under Configuration -> Models. Click the Add Configuration button to begin adding a new model configuration.
Free Models
GitHub and Google offer free models with limited quotas to help get you started.
Google AI Studio
You can obtain a free Google AI Studio key by clicking here. Once the page opens, click the Create API Key button at the top right to generate a new key, and make a note of it.
Return to the page where you are adding your model configuration. Enter your API key and then click the Load Models button. This populates a dropdown menu listing all the models available to you.
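If you want to confirm the key works before saving the configuration, Google's Generative Language API exposes a public model-listing endpoint that accepts the key as a query parameter. The sketch below is an illustration only, not part of PromptOpinion; the helper names are hypothetical, and the network call requires a valid key.

```python
import json
import os
import urllib.request

# Hypothetical helper: builds the Google AI Studio model-listing URL.
# The v1beta path is Google's documented Generative Language API endpoint;
# the API key is passed as a query parameter.
def list_models_url(api_key: str) -> str:
    return ("https://generativelanguage.googleapis.com/v1beta/models"
            f"?key={api_key}")

def list_models(api_key: str) -> list[str]:
    """Fetch available model names; requires network access and a valid key."""
    with urllib.request.urlopen(list_models_url(api_key)) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

if __name__ == "__main__":
    key = os.environ.get("GOOGLE_API_KEY")  # hypothetical env var name
    if key:
        print(list_models(key))
```

A response that lists models confirms the key is valid; a 400-level error usually means the key was mistyped or revoked.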
GitHub
To create a GitHub key, you will need a fine-grained personal access token. You can create one by clicking here.
Once you are on the page, click the Generate new token button. Configure the token with a name, description, expiration, and so on. When adding permissions, ensure the Models permission is included with Read-only access. Once the permission is added, click the Generate token button and make a note of the key.
Return to the page where you are adding your model configuration. Enter your API key and then click the Load Models button. This populates a dropdown menu listing all the models available to you.
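To sanity-check the token outside the web app, GitHub's Models service accepts the fine-grained PAT as a bearer token. The endpoint below is taken from GitHub's public Models documentation, but treat it as an assumption; the function and variable names are hypothetical, and the request itself needs network access.

```python
import json
import urllib.request

# Assumed endpoint from GitHub's public Models documentation; the
# fine-grained PAT goes in the Authorization header as a bearer token.
GITHUB_MODELS_ENDPOINT = "https://models.inference.ai.azure.com"

def auth_headers(token: str) -> dict[str, str]:
    """Build the headers GitHub Models expects for an authenticated call."""
    return {"Authorization": f"Bearer {token}",
            "Content-Type": "application/json"}

def chat(token: str, model: str, prompt: str) -> str:
    """Send a minimal chat completion; requires network and a valid token."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        f"{GITHUB_MODELS_ENDPOINT}/chat/completions",
        data=body, headers=auth_headers(token))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

A 401 response here typically means the token is missing the Read-only Models permission described above.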
Connect Your Own Models
If you have a model configured through a paid account, you can provide its details in the Connect Your Model tab. We currently support the following providers:
- OpenAI
- OpenAI on Azure Foundry
- Claude
- Claude on Azure Foundry
- Claude on Google Vertex AI
- Gemini on Vertex AI
NOTE
If you would like us to support another provider, please create a feature request.
Each provider may ask for different sets of information. For example, any model hosted on Vertex AI will ask you to provide your service account key in JSON format, whereas a model hosted on Azure Foundry will ask you to provide an endpoint.
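For reference, a Google service account key follows the standard JSON shape shown below. This is an illustrative fragment only: the project and email values are hypothetical placeholders, and the elided fields must come from the key file you download from Google Cloud.

```json
{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "…",
  "private_key": "-----BEGIN PRIVATE KEY-----\n…\n-----END PRIVATE KEY-----\n",
  "client_email": "my-agent@my-project.iam.gserviceaccount.com",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```

Paste the entire file contents, not just the private key, when a Vertex AI provider asks for the service account key.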
Embedding Models
If your model is an embedding model rather than an LLM, set the Is Embedding flag to true.
Embedding models vectorize data and are used primarily on the Collections page.
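To make the distinction concrete: an embedding model turns text into a vector, and retrieval then ranks stored documents by vector similarity. The sketch below uses toy vectors (not real model output) and a plain cosine-similarity function to show the idea.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings; a real embedding model produces much longer vectors.
doc_vec = [0.1, 0.9, 0.2]    # stored document
query_vec = [0.2, 0.8, 0.1]  # user query
score = cosine_similarity(doc_vec, query_vec)  # higher score = better match
```

This is why an embedding model is configured separately from an LLM: it never generates text, it only produces vectors for comparison.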
Default Model
A model marked as the default will be used automatically by Po-provided agents. You can have only one default model per workspace.
The default model will also be used for operations outside of building an agent, such as parsing a clinical trial protocol.