BYO Agents (Bring-Your-Own Agents)
BYO agents are agents that you build yourself, customizing features such as the system prompt, MCP servers, and guardrails. At the core of your agent is the LLM model. You can create a model configuration here.
You can add BYO agents by navigating to the Agents -> BYO Agents page.
Click on the Add AI Agent button to get started.
Content
You can ground your agent to a specific knowledge base by going to the Content tab in the agent editor. You can set your agent to be grounded to one of your collections in the workspace, a public collection that someone else has published, or PubMed.
When an agent is grounded with content, it will typically only look through this content when answering prompts rather than drawing on its general knowledge.
Only one content selection can be set for an agent.
System Prompt Customization
You can customize the system prompt for your agent.
WARNING
We use variables in the system prompt that later get replaced during a conversation with the agent. If you use a custom system prompt, you should include those variables in your prompt, otherwise the agent may behave unexpectedly.
You can review the variables being used by clicking on the Load Default button, which loads our default prompt. This default prompt will contain the variables we use.
NOTE
The default system prompt changes depending on the type of content that is set. A different default template is used depending on whether your content is set to PubMed, one of your own collections, or no content.
Consult Prompt Customization
You can customize the consultation prompt. The consultation prompt is used when you are in a conversation with an agent and you specify that you would like to consult with an external agent.
WARNING
We recommend using the default consultation prompt. You can load the default prompt and adjust it, or replace it entirely to fit your needs, but we cannot guarantee that the agent will behave as expected if the default prompt is overwritten.
Response Format
Most major LLM providers support constraining model output to JSON. The Response Format tab can be used to provide the schema of the JSON output.
When a response format schema exists, all text output will be JSON, and it is up to the client that is using the agent to understand and parse that JSON.
The schema should follow the JSON schema standard.
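As an illustration, a client consuming an agent configured with a response format might validate the parsed JSON against the schema's required keys. The schema below is hypothetical (the field names are not part of the product); it is a minimal sketch of client-side parsing, not the platform's own validation.

```python
import json

# A hypothetical response-format schema following the JSON Schema
# standard: the agent must return a summary string and a list of
# citation strings. Field names are illustrative only.
response_schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "citations": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["summary", "citations"],
}

def parse_agent_reply(raw_text: str) -> dict:
    """Parse the agent's JSON text and check that required keys exist."""
    reply = json.loads(raw_text)
    missing = [k for k in response_schema["required"] if k not in reply]
    if missing:
        raise ValueError(f"agent reply missing keys: {missing}")
    return reply

reply = parse_agent_reply('{"summary": "Two trials agree.", "citations": ["PMID:123"]}')
```

A full client would typically use a schema-validation library rather than checking keys by hand, but the shape of the round trip is the same: the agent emits JSON text, and the client parses and validates it before use.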
NOTE
Each LLM provider has its own schema restrictions. For example, here are rules and limitations for Claude.
Always check your LLM provider's documentation on JSON output to ensure you are following their best practices.
Tools
This is where you specify which MCP servers your agent will have access to.
Guardrails
This is where you specify which guardrails will be executed during conversations with your agent. At this time, we only have the ability to create guardrails that run before a prompt is sent to the agent.
Additional features such as running guardrails after a prompt has been returned by an LLM are coming soon.
A2A & Skills
You can enable your agent to support A2A. When enabling A2A support, you must include at least one skill. These skills are included in the agent card so that client agents can understand the capabilities of your agent.
In addition to this, you can specify that your agent supports PromptOpinion's FHIR context extension. When this is enabled, it indicates to clients that the agent will use FHIR context to load FHIR data. The client will need to provide the URL of the FHIR server, a token to authorize with the server if necessary, and a patient ID if working under a patient context.
You can indicate that the FHIR context is required. In this case, the client MUST support PromptOpinion's FHIR context extension and MUST pass FHIR context in order to use this agent.
Using Your Agent
Once you have customized your agent and are ready to use it, you can go to the launchpad page, where your agent will be displayed according to the scope you have selected.