# gRPC Interface
All capabilities communicate with the Selu orchestrator through a single gRPC service defined in capability.proto. This keeps the interface uniform regardless of what language your capability is written in.
## The proto definition

```proto
syntax = "proto3";

package selu.capability.v1;

service CapabilityService {
  // Invoke a tool within this capability.
  rpc Invoke(InvokeRequest) returns (InvokeResponse);

  // Health check (optional, used by orchestrator for readiness).
  rpc HealthCheck(HealthCheckRequest) returns (HealthCheckResponse);
}

message InvokeRequest {
  // Name of the tool to invoke (matches manifest.yaml tools[*].name).
  string tool_name = 1;

  // JSON-encoded parameters from the LLM's tool call.
  string parameters = 2;

  // Opaque session context passed by the orchestrator.
  map<string, string> context = 3;
}

message InvokeResponse {
  // JSON-encoded result returned to the LLM.
  string result = 1;

  // Indicates whether the invocation succeeded.
  bool success = 2;

  // Human-readable error message (only set when success is false).
  string error = 3;
}

message HealthCheckRequest {}

message HealthCheckResponse {
  bool healthy = 1;
}
```

## InvokeRequest
| Field | Type | Description |
|---|---|---|
| `tool_name` | `string` | Matches the `name` field in your `manifest.yaml` tools list. A single capability can expose multiple tools. |
| `parameters` | `string` | JSON object with the parameters the LLM provided. Parse this in your handler. |
| `context` | `map<string, string>` | Metadata from the orchestrator — includes `session_id`, `user_id`, and any custom context. Do not rely on specific keys being present; treat this as optional. |
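In a handler, `parameters` is parsed with `json.loads` and `context` is read defensively, since no key is guaranteed to be present. A minimal sketch (the payload and `session_id` value shown are illustrative, not real orchestrator output):

```python
import json

# Illustrative values; in a real handler these come from the InvokeRequest.
raw_parameters = '{"location": "Berlin", "units": "metric"}'
context = {"session_id": "abc123"}  # keys are optional; never assume presence

params = json.loads(raw_parameters)
location = params.get("location", "unknown")  # default guards a missing param
session_id = context.get("session_id")        # None when the key is absent
```

Using `.get()` with a fallback keeps the handler from raising `KeyError` when the LLM omits an optional parameter or the orchestrator sends a sparse context map.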
## InvokeResponse
| Field | Type | Description |
|---|---|---|
| `result` | `string` | JSON-encoded result. This is injected into the LLM conversation as the tool result. |
| `success` | `bool` | `true` if the tool executed correctly. |
| `error` | `string` | Error message shown to the LLM when `success` is `false`. The LLM uses this to explain the failure to the user. |
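The contract above can be sketched with plain dicts standing in for the generated `InvokeResponse` message (the helper names here are hypothetical, not part of the API):

```python
import json

def ok_response(payload):
    # result must be a JSON-encoded string, not a raw Python object.
    return {"result": json.dumps(payload), "success": True, "error": ""}

def error_response(message):
    # error is only meaningful when success is False; result stays empty.
    return {"result": "", "success": False, "error": message}

resp = ok_response({"temperature": 22, "conditions": "sunny"})
```

The key point is that `result` carries a JSON string, so the handler must serialize its payload before returning, and a failed invocation should set `error` rather than encoding the failure inside `result`.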
## Server implementation
Your gRPC server must listen on port 50051 inside the container. The orchestrator connects to this port automatically.
Here’s a minimal Python example:
```python
import grpc
from concurrent import futures
import json

import capability_pb2 as pb2
import capability_pb2_grpc as pb2_grpc


class CapabilityServicer(pb2_grpc.CapabilityServiceServicer):
    def Invoke(self, request, context):
        params = json.loads(request.parameters)

        if request.tool_name == "weather_lookup":
            location = params.get("location", "unknown")
            # ... call weather API ...
            return pb2.InvokeResponse(
                result=json.dumps({"temperature": 22, "conditions": "sunny"}),
                success=True,
            )

        return pb2.InvokeResponse(
            success=False,
            error=f"Unknown tool: {request.tool_name}",
        )

    def HealthCheck(self, request, context):
        return pb2.HealthCheckResponse(healthy=True)


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    pb2_grpc.add_CapabilityServiceServicer_to_server(CapabilityServicer(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()
```

## Code generation
Generate stubs from `capability.proto` using standard gRPC tooling for your language. Selu publishes the proto file at:
https://github.com/selu-bot/proto/blob/main/capability/v1/capability.proto
## Next steps

- Container Guidelines — Network, resource, and security rules for capability containers.
- Example: Weather Agent — See a full implementation end to end.