prompt.md — Capability Prompts

Each capability can include a prompt.md file alongside its manifest.yaml. When the orchestrator assembles the system prompt for a conversation, it appends each active capability’s prompt.md after the agent’s agent.md. This gives you a place to provide detailed, tool-specific instructions without cluttering the main agent prompt.
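On disk, a typical layout might look like this (the nesting shown here is illustrative; the file names come from the conventions described on this page):

```
agent.md                    # the agent's core prompt
agent.yaml                  # lists active capabilities, in order
capabilities/
  weather/
    manifest.yaml           # tool definitions and descriptions
    prompt.md               # optional tool-specific guidance
```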

Use prompt.md when:

  • The tool has a nuanced API that the LLM needs guidance on (e.g. specific parameter formats).
  • You want to provide examples of correct tool usage.
  • There are edge cases or error handling patterns the LLM should follow.

Skip prompt.md when the tool is simple enough that the description fields in manifest.yaml are sufficient.
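For context, here is a minimal sketch of what those description fields might cover. The field names below are illustrative assumptions, not the manifest schema itself; see the manifest.yaml documentation for the actual format:

```yaml
# capabilities/weather/manifest.yaml (field names are illustrative)
name: weather
tools:
  - name: weather_lookup
    description: Look up current weather or a forecast for a location.
    parameters:
      location:
        type: string
        description: City name, e.g. "Berlin".
      units:
        type: string
        description: '"metric" (default) or "imperial".'
```

If descriptions like these already tell the model everything it needs, an additional prompt.md adds noise rather than signal.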

For a weather capability, prompt.md might look like:

capabilities/weather/prompt.md:

```markdown
## Using the weather_lookup tool

Call `weather_lookup` when the user asks about current weather or a forecast.

### Parameter guidance

- `location`: Prefer city names ("Berlin", "New York"). If the user gives a country without a city, ask them to be more specific.
- `units`: Default to `metric`. Only switch to `imperial` if the user explicitly asks for Fahrenheit or miles.

### Response formatting

- Report temperature, conditions, and humidity.
- For forecasts, summarise the next 3 days in a short list.
- If the API returns an error, tell the user the lookup failed and suggest they try again with a different location.

### Examples

- User: "What's the weather in Tokyo?" → call `weather_lookup(location="Tokyo", units="metric")`
- User: "How hot is it in Phoenix in Fahrenheit?" → call `weather_lookup(location="Phoenix", units="imperial")`
```

The final system prompt sent to the LLM is assembled in this order:

  1. agent.md — the agent’s core prompt
  2. Capability prompt.md files — one per active capability, in the order listed in agent.yaml
  3. Runtime context — template variables like {{user_name}} and {{date}}
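The three steps above can be sketched as a small function. This is a simplified illustration of the assembly order, not the orchestrator's actual API; the function name and signature are assumptions:

```python
from datetime import date

def assemble_system_prompt(agent_md: str, capability_prompts: list[str],
                           context: dict[str, str]) -> str:
    """Concatenate agent.md, then each active capability's prompt.md
    (in agent.yaml order), then substitute runtime template variables."""
    prompt = "\n\n".join([agent_md, *capability_prompts])
    for name, value in context.items():
        prompt = prompt.replace("{{" + name + "}}", value)
    return prompt

# Usage: the weather capability's prompt.md lands after the core prompt,
# and {{user_name}} / {{date}} are filled in last.
system_prompt = assemble_system_prompt(
    "You are a helpful assistant for {{user_name}}. Today is {{date}}.",
    ["## Using the weather_lookup tool\n..."],
    {"user_name": "Ada", "date": date.today().isoformat()},
)
```

Because substitution happens last, template variables can appear in agent.md and in any capability's prompt.md alike.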
Tips for writing effective prompt.md files:

  • Use headings — The LLM parses structured Markdown more reliably than prose.
  • Include examples — Show concrete tool call examples. This dramatically improves the model’s accuracy with parameter formatting.
  • Describe error handling — Tell the LLM what to do when the tool returns an error.
  • Don’t repeat manifest info — The description fields from manifest.yaml are already sent to the model. Use prompt.md for guidance that goes beyond basic descriptions.

See the gRPC Interface to understand the transport layer, or check out the Example: Weather Agent for a full working example with prompt.md included.