xml-pipeline

Schema-driven XML message bus for multi-agent systems.

Python 3.11+ License: MIT

xml-pipeline is a Python library for building multi-agent systems with validated XML message passing. Agents communicate through typed payloads, validated against auto-generated XSD schemas, with built-in LLM routing and conversation memory.

Why XML?

JSON was a quick hack that became the default for AI tool calling, where its brittleness causes endless prompt surgery and validation headaches. xml-pipeline chooses XML deliberately:

  • Exact contracts — XSD validation catches malformed messages before they cause problems
  • Tolerant parsing — Repair mode recovers from LLM output quirks
  • Self-describing — Namespaces prevent collision, schemas are discoverable
  • No escaping hell — Mixed content, nested structures, all handled cleanly
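The "exact contracts" guarantee above can be sketched with lxml (already a dependency). This is an illustrative hand-written schema, not the XSD that xml-pipeline generates:

```python
from lxml import etree

# A minimal XSD for a greeting payload (illustrative only).
xsd_doc = etree.XML(b"""\
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="greeting">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="name" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
""")
schema = etree.XMLSchema(xsd_doc)

good = etree.XML(b"<greeting><name>Alice</name></greeting>")
bad = etree.XML(b"<greeting><nom>Alice</nom></greeting>")

assert schema.validate(good)       # well-formed and schema-valid
assert not schema.validate(bad)    # wrong element name is rejected
```

A malformed message is caught at the bus boundary, before any handler runs.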

Read the full rationale.

Installation

pip install xml-pipeline

# With LLM provider support
pip install xml-pipeline[anthropic]    # Anthropic Claude
pip install xml-pipeline[openai]       # OpenAI GPT

# With all features
pip install xml-pipeline[all]

Quick Start

1. Define a payload

from dataclasses import dataclass
from third_party.xmlable import xmlify

@xmlify
@dataclass
class Greeting:
    name: str

2. Write a handler

from xml_pipeline.message_bus.message_state import HandlerMetadata, HandlerResponse

@xmlify
@dataclass
class GreetingReply:
    message: str

async def handle_greeting(payload: Greeting, metadata: HandlerMetadata) -> HandlerResponse:
    return HandlerResponse(
        payload=GreetingReply(message=f"Hello, {payload.name}!"),
        to="output",
    )
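Because a handler is just an async function of (payload, metadata), it can be exercised without the bus. The stand-in classes below are hypothetical minimal stubs for illustration, not xml-pipeline's real types:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Greeting:
    name: str

@dataclass
class GreetingReply:
    message: str

# Minimal stand-ins for the library types (illustrative stubs only).
@dataclass
class HandlerMetadata:
    thread_id: str

@dataclass
class HandlerResponse:
    payload: object
    to: str

async def handle_greeting(payload: Greeting, metadata: HandlerMetadata) -> HandlerResponse:
    return HandlerResponse(
        payload=GreetingReply(message=f"Hello, {payload.name}!"),
        to="output",
    )

resp = asyncio.run(handle_greeting(Greeting("Alice"), HandlerMetadata("t-1")))
print(resp.payload.message)  # Hello, Alice!
```

Testing handlers in isolation like this is the point of the contract: no pump, no sockets, just data in and data out.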

3. Configure the organism

# organism.yaml
organism:
  name: hello-world

listeners:
  - name: greeter
    payload_class: myapp.Greeting
    handler: myapp.handle_greeting
    description: Greets users by name

  - name: output
    payload_class: myapp.GreetingReply
    handler: myapp.print_output
    description: Prints output
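The dotted names in the config (myapp.Greeting, myapp.handle_greeting) are resolved at bootstrap. One plausible sketch of that step, assuming a simple importlib lookup (xml-pipeline's actual loader may differ):

```python
import importlib

def resolve(dotted: str):
    # Split "myapp.handle_greeting" into module and attribute,
    # import the module, and fetch the attribute from it.
    module_name, _, attr = dotted.rpartition(".")
    return getattr(importlib.import_module(module_name), attr)

# Demonstrate against a stdlib target rather than a real organism:
json_dumps = resolve("json.dumps")
print(json_dumps({"ok": True}))  # {"ok": true}
```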

4. Run it

import asyncio
from xml_pipeline.message_bus import bootstrap

async def main():
    pump = await bootstrap("organism.yaml")
    await pump.run()

asyncio.run(main())

Console Example

Try the interactive console example:

pip install xml-pipeline[console]
python -m examples.console
> @greeter Alice
[greeter] Hello, Alice! Welcome to xml-pipeline.

> @echo Hello world
[echo] Hello world

> /quit

See examples/console/ for the full source.

Key Features

Typed Message Passing

Payloads are Python dataclasses with automatic XSD generation:

@xmlify
@dataclass
class Calculate:
    expression: str
    precision: int = 2

The library auto-generates:

  • XSD schema for validation
  • Example XML for documentation
  • Usage instructions for LLM prompts
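The "example XML" output can be approximated by walking dataclass fields, using defaults where present and the type name otherwise. This is a conceptual sketch, not the library's generator:

```python
from dataclasses import dataclass, fields, MISSING

@dataclass
class Calculate:
    expression: str
    precision: int = 2

def example_xml(cls) -> str:
    # One child element per field; defaults become sample values.
    tag = cls.__name__.lower()
    parts = [f"<{tag}>"]
    for f in fields(cls):
        value = f.default if f.default is not MISSING else f.type.__name__
        parts.append(f"  <{f.name}>{value}</{f.name}>")
    parts.append(f"</{tag}>")
    return "\n".join(parts)

print(example_xml(Calculate))
# <calculate>
#   <expression>str</expression>
#   <precision>2</precision>
# </calculate>
```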

LLM Router

Multi-backend LLM support with failover:

llm:
  strategy: failover
  backends:
    - provider: anthropic
      api_key_env: ANTHROPIC_API_KEY
    - provider: openai
      api_key_env: OPENAI_API_KEY

Then, from code:

from xml_pipeline.llm import complete

response = await complete(
    model="claude-sonnet-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
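The failover strategy is simple to state: try each backend in declared order until one succeeds. A minimal sketch with hypothetical backend functions (the real router lives in xml_pipeline.llm):

```python
import asyncio

async def backend_a(prompt: str) -> str:
    # Pretend the primary provider is down.
    raise ConnectionError("primary backend unavailable")

async def backend_b(prompt: str) -> str:
    return f"echo: {prompt}"

async def complete_with_failover(prompt: str, backends) -> str:
    last_err = None
    for backend in backends:
        try:
            return await backend(prompt)
        except Exception as err:
            last_err = err  # fall through to the next backend
    raise last_err

print(asyncio.run(complete_with_failover("Hello!", [backend_a, backend_b])))
# echo: Hello!
```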

Handler Security

Handlers are sandboxed. They cannot:

  • Forge sender identity (injected by pump)
  • Escape thread context (managed by registry)
  • Route to undeclared peers (validated against config)
  • Access other threads (opaque UUIDs)
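The "undeclared peers" rule boils down to a membership check at dispatch time. A conceptual sketch, not the pump's actual code:

```python
# Listeners declared in organism.yaml (from the Quick Start above).
DECLARED_PEERS = {"greeter", "output"}

def validate_route(to: str) -> None:
    # A HandlerResponse may only target a declared listener.
    if to not in DECLARED_PEERS:
        raise PermissionError(f"undeclared peer: {to!r}")

validate_route("output")           # ok: declared in config
try:
    validate_route("shadow-agent") # rejected before dispatch
except PermissionError as err:
    print(err)  # undeclared peer: 'shadow-agent'
```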

Conversation Memory

Thread-scoped context buffer tracks message history:

from xml_pipeline.memory import get_context_buffer

buffer = get_context_buffer()
history = buffer.get_thread(metadata.thread_id)
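Conceptually the buffer is a map from thread ID to message history, with copies handed out so callers cannot mutate shared state. The class below is an illustration of that shape, not xml-pipeline's implementation:

```python
from collections import defaultdict

class ContextBuffer:
    """Toy thread-scoped history store (illustrative only)."""

    def __init__(self) -> None:
        self._threads: defaultdict[str, list[str]] = defaultdict(list)

    def append(self, thread_id: str, message: str) -> None:
        self._threads[thread_id].append(message)

    def get_thread(self, thread_id: str) -> list[str]:
        # Return a copy so callers cannot mutate shared history.
        return list(self._threads[thread_id])

buf = ContextBuffer()
buf.append("t-1", "hello")
buf.append("t-2", "other thread")
print(buf.get_thread("t-1"))  # ['hello']
```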

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                         StreamPump                               │
│  • Parallel pipelines per listener                               │
│  • Repair → C14N → Validate → Deserialize → Route → Dispatch    │
└─────────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────────┐
│                          Handlers                                │
│  • Receive typed payload + metadata                              │
│  • Return HandlerResponse or None                                │
│  • Cannot forge identity or escape thread                        │
└─────────────────────────────────────────────────────────────────┘

See docs/core-principles-v2.1.md for the full architecture.

Documentation

Document            Description
Core Principles     Architecture overview
Handler Contract    How to write handlers
Message Pump        Pipeline processing
LLM Router          Multi-backend LLM support
Configuration       organism.yaml reference
Why Not JSON?       Design rationale

Requirements

  • Python 3.11+
  • Dependencies: lxml, aiostream, pyyaml, httpx, cryptography

License

MIT License. See LICENSE.


XML wins. Safely. Permanently.