Developer Guide Overview

Welcome to the Selu developer guide. This section covers everything you need to build, test, and publish agents for the Selu marketplace.

Selu has three core layers that work together to turn a user message into an intelligent response:

  1. Orchestrator — The central runtime that receives messages from channels, routes them to the correct agent, manages conversation memory, and coordinates tool calls. It runs as a long-lived process inside the Selu Docker stack.
  2. Agents — Declarative packages that define what an AI assistant does. Each agent is described by an agent.yaml manifest and an agent.md system prompt, plus an optional set of capabilities. Agents don’t contain application code themselves — they configure the orchestrator.
  3. Capabilities — The hands and feet of an agent. A capability is a gRPC micro-service running in its own Docker container. When the orchestrator decides an agent needs to take an action (call an API, read a file, query a database), it invokes a capability over gRPC.
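To make the agent layer concrete, here is a rough sketch of what an agent.yaml manifest could look like. Every field name below is an illustrative assumption, not the authoritative schema — see the Package Structure page for the real layout:

```yaml
# agent.yaml — illustrative sketch, not the real Selu schema
name: weather-assistant
description: Answers questions about current weather.
prompt: agent.md          # system prompt lives alongside the manifest
capabilities:
  - weather-lookup        # gRPC capability containers this agent may call
routing:
  keywords: [weather, forecast]
```

The key point is that the manifest only wires things together; all runtime behaviour lives in the orchestrator and the capability containers.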

Agents

Declarative YAML + Markdown packages that define personality, routing, and which capabilities are available.

Capabilities

Containerised gRPC services that give agents the ability to take real-world actions — weather lookups, calendar access, web search, and more.

Orchestrator

The Selu runtime that ties it all together: routing, memory, LLM calls, and tool invocation.

Built-in Tools

Platform-level tools like emit_event and delegate_to_agent that every agent can use without a custom capability.

Here’s what happens when a user sends a message:

  1. A channel (web chat, iMessage, Telegram) delivers the message to the orchestrator.
  2. The orchestrator looks up the active session and determines which agent should handle the message based on routing rules.
  3. The agent’s system prompt (agent.md) and conversation history are assembled into an LLM request.
  4. If the LLM responds with a tool call, the orchestrator invokes the matching capability via gRPC and feeds the result back to the model.
  5. Steps 3–4 repeat until the LLM produces a final text response, which is sent back through the channel.
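Steps 3–5 above can be sketched as a small loop. Everything here (the `call_llm` stub, the message shapes, the `capabilities` registry) is an illustrative assumption, not the real Selu API:

```python
# Minimal sketch of the orchestrator's tool-call loop (steps 3-5 above).
# call_llm, capabilities, and the message dicts are illustrative stand-ins.

def call_llm(messages):
    """Stub LLM: requests a weather tool call once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_weather", "args": {"city": "Oslo"}}}
    return {"text": "It is sunny in Oslo."}

# Capability registry: in Selu these would be gRPC services; here, plain functions.
capabilities = {"get_weather": lambda args: f"sunny in {args['city']}"}

def handle_message(system_prompt, history, user_message):
    # Step 3: assemble the system prompt and history into an LLM request.
    messages = [{"role": "system", "content": system_prompt}]
    messages += history
    messages.append({"role": "user", "content": user_message})

    # Steps 4-5: loop until the model produces a final text response.
    while True:
        reply = call_llm(messages)
        if "tool_call" in reply:
            call = reply["tool_call"]
            result = capabilities[call["name"]](call["args"])  # gRPC in real Selu
            messages.append({"role": "tool", "content": result})
        else:
            return reply["text"]

print(handle_message("You are a helpful assistant.", [], "Weather in Oslo?"))
# → It is sunny in Oslo.
```

The loop terminates because the model eventually returns plain text instead of a tool call; a production orchestrator would also cap the number of iterations.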

As a developer, you can contribute at two levels:

  • Agent packages — Define new personas, wire up existing capabilities, and configure routing. No code required — just YAML and Markdown.
  • Capabilities — Write a small gRPC service in any language, package it as a Docker image, and give agents new superpowers.
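At its core, a capability is just a service that accepts a structured request and returns a structured response; in Selu it would be exposed over gRPC from its own Docker container. The plain-Python handler below sketches that shape — the type names and the `get_weather` function are hypothetical, not the real Selu proto:

```python
# Sketch of a capability handler. In a real capability, WeatherRequest and
# WeatherResponse would be protobuf messages and get_weather a gRPC method.
from dataclasses import dataclass

@dataclass
class WeatherRequest:
    city: str

@dataclass
class WeatherResponse:
    summary: str

def get_weather(request: WeatherRequest) -> WeatherResponse:
    # A real capability would call an external weather API here.
    return WeatherResponse(summary=f"Sunny in {request.city}")

print(get_weather(WeatherRequest(city="Oslo")).summary)
# → Sunny in Oslo
```

Because the contract is just request in, response out, you can write a capability in any language with gRPC support and package it as an ordinary Docker image.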

Start with the Package Structure to understand how an agent package is laid out, or jump straight to Build Your First Agent for a hands-on tutorial.