# prompt.md — Capability Prompts
Each capability can include a `prompt.md` file alongside its `manifest.yaml`. When the orchestrator assembles the system prompt for a conversation, it appends each active capability's `prompt.md` after the agent's `agent.md`. This gives you a place to provide detailed, tool-specific instructions without cluttering the main agent prompt.
## When to use prompt.md

Use `prompt.md` when:
- The tool has a nuanced API that the LLM needs guidance on (e.g. specific parameter formats).
- You want to provide examples of correct tool usage.
- There are edge cases or error handling patterns the LLM should follow.
Skip `prompt.md` when the tool is simple enough that the `description` fields in `manifest.yaml` are sufficient.
## Example

For a weather capability, `prompt.md` might look like:
```markdown
## Using the weather_lookup tool

Call `weather_lookup` when the user asks about current weather or a forecast.

### Parameter guidance

- `location`: Prefer city names ("Berlin", "New York"). If the user gives a country without a city, ask them to be more specific.
- `units`: Default to `metric`. Only switch to `imperial` if the user explicitly asks for Fahrenheit or miles.

### Response formatting

- Report temperature, conditions, and humidity.
- For forecasts, summarise the next 3 days in a short list.
- If the API returns an error, tell the user the lookup failed and suggest they try again with a different location.

### Examples

- User: "What's the weather in Tokyo?" → call `weather_lookup(location="Tokyo", units="metric")`
- User: "How hot is it in Phoenix in Fahrenheit?" → call `weather_lookup(location="Phoenix", units="imperial")`
```
## How prompt assembly works

The final system prompt sent to the LLM is assembled in this order:
1. `agent.md` — the agent's core prompt
2. Capability `prompt.md` files — one per active capability, in the order listed in `agent.yaml`
3. Runtime context — template variables like `{{user_name}}` and `{{date}}`
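Conceptually, the assembly step can be sketched in a few lines of Python. The function name, arguments, and substitution logic below are assumptions for illustration — the orchestrator's actual internals and API may differ:

```python
from pathlib import Path

def assemble_system_prompt(agent_dir, capability_dirs, context):
    """Hypothetical sketch of the assembly order described above:
    agent.md first, then each capability's prompt.md, then runtime
    context substitution. Not the orchestrator's real API."""
    # 1. The agent's core prompt.
    parts = [(Path(agent_dir) / "agent.md").read_text()]

    # 2. Each active capability's prompt.md, in agent.yaml order.
    for cap_dir in capability_dirs:
        prompt_file = Path(cap_dir) / "prompt.md"
        if prompt_file.exists():
            parts.append(prompt_file.read_text())

    # 3. Runtime context: substitute {{variable}} placeholders.
    prompt = "\n\n".join(parts)
    for name, value in context.items():
        prompt = prompt.replace("{{" + name + "}}", str(value))
    return prompt
```

The key property to preserve in any real implementation is the ordering: capability prompts always follow the core agent prompt, so tool-specific guidance can build on it without overriding it.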
## Best practices

- Use headings — The LLM parses structured Markdown more reliably than prose.
- Include examples — Show concrete tool call examples. This dramatically improves the model’s accuracy with parameter formatting.
- Describe error handling — Tell the LLM what to do when the tool returns an error.
- Don't repeat manifest info — The `description` fields from `manifest.yaml` are already sent to the model. Use `prompt.md` for guidance that goes beyond basic descriptions.
## Next steps

See the gRPC Interface to understand the transport layer, or check out the Example: Weather Agent for a full working example with `prompt.md` included.