gRPC Interface

All capabilities communicate with the Selu orchestrator through a single gRPC service defined in capability.proto. This keeps the interface uniform regardless of what language your capability is written in.

capability.proto
```proto
syntax = "proto3";

package selu.capability.v1;

service CapabilityService {
  // Invoke a tool within this capability.
  rpc Invoke(InvokeRequest) returns (InvokeResponse);

  // Health check (optional, used by the orchestrator for readiness).
  rpc HealthCheck(HealthCheckRequest) returns (HealthCheckResponse);
}

message InvokeRequest {
  // Name of the tool to invoke (matches manifest.yaml tools[*].name).
  string tool_name = 1;

  // JSON-encoded parameters from the LLM's tool call.
  string parameters = 2;

  // Opaque session context passed by the orchestrator.
  map<string, string> context = 3;
}

message InvokeResponse {
  // JSON-encoded result returned to the LLM.
  string result = 1;

  // Indicates whether the invocation succeeded.
  bool success = 2;

  // Human-readable error message (only set when success is false).
  string error = 3;
}

message HealthCheckRequest {}

message HealthCheckResponse {
  bool healthy = 1;
}
```
InvokeRequest fields:

| Field | Type | Description |
| --- | --- | --- |
| `tool_name` | `string` | Matches the `name` field in your manifest.yaml tools list. A single capability can expose multiple tools. |
| `parameters` | `string` | JSON object with the parameters the LLM provided. Parse this in your handler. |
| `context` | `map<string, string>` | Metadata from the orchestrator, including `session_id`, `user_id`, and any custom context. Do not rely on specific keys being present; treat this map as optional. |
InvokeResponse fields:

| Field | Type | Description |
| --- | --- | --- |
| `result` | `string` | JSON-encoded result. This is injected into the LLM conversation as the tool result. |
| `success` | `bool` | `true` if the tool executed successfully. |
| `error` | `string` | Error message shown to the LLM when `success` is false. The LLM uses this to explain the failure to the user. |
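The request/response contract above can be exercised without any gRPC machinery. The sketch below (plain Python; the `echo_text` tool is hypothetical, used purely for illustration) shows the parse-dispatch-respond shape a handler typically follows, including defensive handling of `parameters` and `context`:

```python
import json


def dispatch(tool_name: str, parameters: str, context: dict) -> dict:
    """Run the named tool and build a dict mirroring InvokeResponse.

    `echo_text` is a made-up example tool, not part of the Selu spec.
    """
    try:
        params = json.loads(parameters)
    except json.JSONDecodeError:
        return {"result": "", "success": False,
                "error": "parameters is not valid JSON"}

    # Never assume specific context keys exist; read them defensively.
    session_id = context.get("session_id", "")

    if tool_name == "echo_text":
        text = params.get("text", "")
        return {"result": json.dumps({"echo": text, "session": session_id}),
                "success": True, "error": ""}

    # Unknown tool: success=False with a human-readable error.
    return {"result": "", "success": False,
            "error": f"Unknown tool: {tool_name}"}
```

The same three branches (bad input, known tool, unknown tool) map directly onto the `InvokeResponse` fields your real servicer returns.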

Your gRPC server must listen on port 50051 inside the container. The orchestrator connects to this port automatically.
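Since the orchestrator dials port 50051, your container image should bind and expose it. A minimal Dockerfile sketch (the base image and file names are assumptions, not Selu requirements):

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# EXPOSE is documentation only; the server process must actually
# bind [::]:50051 for the orchestrator to reach it.
EXPOSE 50051
CMD ["python", "server.py"]
```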

Here’s a minimal Python example:

server.py
```python
import grpc
from concurrent import futures
import json

import capability_pb2 as pb2
import capability_pb2_grpc as pb2_grpc


class CapabilityServicer(pb2_grpc.CapabilityServiceServicer):
    def Invoke(self, request, context):
        params = json.loads(request.parameters)
        if request.tool_name == "weather_lookup":
            location = params.get("location", "unknown")
            # ... call weather API ...
            return pb2.InvokeResponse(
                result=json.dumps({"temperature": 22, "conditions": "sunny"}),
                success=True,
            )
        return pb2.InvokeResponse(
            success=False,
            error=f"Unknown tool: {request.tool_name}",
        )

    def HealthCheck(self, request, context):
        return pb2.HealthCheckResponse(healthy=True)


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    pb2_grpc.add_CapabilityServiceServicer_to_server(CapabilityServicer(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()
```

Generate stubs from capability.proto using standard gRPC tooling for your language. Selu publishes the proto file at:

https://github.com/selu-bot/proto/blob/main/capability/v1/capability.proto
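For Python, for example, stubs matching the imports in server.py can be generated with grpcio-tools (this assumes you have saved the proto file locally as capability.proto):

```shell
pip install grpcio grpcio-tools
python -m grpc_tools.protoc -I . \
    --python_out=. --grpc_python_out=. \
    capability.proto
# Produces capability_pb2.py and capability_pb2_grpc.py
```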