Rename agentserver to xml_pipeline, add console example

OSS restructuring for open-core model:
- Rename package from agentserver/ to xml_pipeline/
- Update all imports (44 Python files, 31 docs/configs)
- Update pyproject.toml for OSS distribution (v0.3.0)
- Move prompt_toolkit from core to optional [console] extra
- Remove auth/server/lsp from core optional deps (-> Nextra)

New console example in examples/console/:
- Self-contained demo with handlers and config
- Uses prompt_toolkit (optional, falls back to input())
- No password auth, no TUI, no LSP — just the basics
- Shows how to use xml-pipeline as a library

Import changes:
- from agentserver.* -> from xml_pipeline.*
- CLI entry points updated: xml_pipeline.cli:main

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Author: dullfig
Date: 2026-01-19 21:41:19 -08:00
Parent: 3ffab8a3dd
Commit: e653d63bc1
123 changed files with 5012 additions and 247 deletions

CLAUDE.md (new file, 376 lines)

@ -0,0 +1,376 @@
# AgentServer (xml-pipeline)
A tamper-proof nervous system for multi-agent AI systems using XML as the sovereign wire format. AgentServer provides a schema-driven, Turing-complete message bus where agents communicate through validated XML payloads, with automatic XSD generation, handler isolation, and built-in security guarantees against agent misbehavior.
**Version:** 0.2.0
## Tech Stack
| Layer | Technology | Version | Purpose |
|-------|------------|---------|---------|
| Runtime | Python | 3.11+ | Async-first, type-hinted codebase |
| Streaming | aiostream | 0.5+ | Stream-based message pipeline with fan-out |
| XML Processing | lxml | Latest | XSD validation, C14N normalization, repair |
| Serialization | xmlable | vendored | Dataclass ↔ XML round-trip with auto-XSD |
| Config | PyYAML | Latest | Organism configuration (organism.yaml) |
| Crypto | cryptography | Latest | Ed25519 identity keys for signing |
| Console | prompt_toolkit | 3.0+ | Interactive TUI console |
| HTTP | httpx | 0.27+ | LLM backend communication |
| Case conversion | pyhumps | Latest | Snake/camel case conversion |
## Quick Start
```bash
# Prerequisites
# - Python 3.11 or higher
# - pip (or uv/pipx for faster installs)
# Clone and setup
git clone <repo-url>
cd xml-pipeline
python -m venv .venv
.venv\Scripts\activate # Windows
# source .venv/bin/activate # Linux/macOS
# Install with all features
pip install -e ".[all]"
# Or minimal install + specific features
pip install -e "." # Core only
pip install -e ".[anthropic]" # + Anthropic SDK
pip install -e ".[server]" # + WebSocket server
# Configure environment
cp .env.example .env
# Edit .env to add your API keys (XAI_API_KEY, ANTHROPIC_API_KEY, etc.)
# Run the organism
python run_organism.py config/organism.yaml
# Or use CLI
xml-pipeline run config/organism.yaml
xp run config/organism.yaml # Short alias
# Run tests
pip install -e ".[test]"
pytest tests/ -v
```
## Project Structure
```
xml-pipeline/
├── xml_pipeline/ # Main package
│ ├── auth/ # Authentication (TOTP, sessions, users)
│ ├── config/ # Config loading and templates
│ ├── console/ # TUI console and secure console
│ ├── listeners/ # Listener implementations and examples
│ ├── llm/ # LLM router, backends, token bucket
│ ├── memory/ # Context buffer for conversation history
│ ├── message_bus/ # Core message pump and pipeline
│ │ ├── steps/ # Pipeline steps (repair, c14n, validation, etc.)
│ │ ├── stream_pump.py # Main aiostream-based pump
│ │ ├── message_state.py # Message state dataclass
│ │ ├── thread_registry.py # Opaque UUID ↔ call chain mapping
│ │ └── system_pipeline.py # External message injection
│ ├── platform/ # Platform-level APIs (prompt registry, LLM API)
│ ├── primitives/ # System message types (Boot, TodoUntil, etc.)
│ ├── prompts/ # System prompts (no_paperclippers, etc.)
│ ├── schema/ # XSD schema files
│ ├── server/ # HTTP/WebSocket server
│ ├── tools/ # Native tools (files, shell, search, etc.)
│ └── utils/ # Shared utilities
├── config/ # Example organism configurations
├── docs/ # Architecture and design docs
├── examples/ # Example MCP servers and integrations
├── handlers/ # Example message handlers
├── tests/ # pytest test suite
├── third_party/ # Vendored dependencies
│ └── xmlable/ # XML serialization library
├── pyproject.toml # Project metadata and dependencies
├── run_organism.py # Main entry point with TUI
└── organism.yaml # Default organism config (if present)
```
## Architecture Overview
AgentServer implements a stream-based message pump where all communication flows through validated XML envelopes. The architecture enforces strict isolation between handlers (untrusted code) and the system (trusted zone).
```
┌─────────────────────────────────────────────────────────────────────┐
│ TRUSTED ZONE (System) │
│ • Thread registry (UUID ↔ call chain mapping) │
│ • Listener registry (name → peers, schema) │
│ • Envelope injection (<from>, <thread>, <to>) │
│ • Peer constraint enforcement │
└─────────────────────────────────────────────────────────────────────┘
Coroutine Capture Boundary
┌─────────────────────────────────────────────────────────────────────┐
│ UNTRUSTED ZONE (Handlers) │
│ • Receive typed payload + metadata │
│ • Return HandlerResponse or None │
│ • Cannot forge identity, escape thread, or probe topology │
└─────────────────────────────────────────────────────────────────────┘
```
**Message Flow:**
1. Raw bytes → Repair → C14N → Envelope validation → Payload extraction
2. Thread assignment → XSD validation → Deserialization → Routing
3. Handler dispatch → Response wrapping → Re-injection
### Key Modules
| Module | Location | Purpose |
|--------|----------|---------|
| StreamPump | `xml_pipeline/message_bus/stream_pump.py` | Main message pump with aiostream pipeline |
| MessageState | `xml_pipeline/message_bus/message_state.py` | State object flowing through pipeline steps |
| ThreadRegistry | `xml_pipeline/message_bus/thread_registry.py` | Maps opaque UUIDs to call chains |
| SystemPipeline | `xml_pipeline/message_bus/system_pipeline.py` | External message injection (console, webhooks) |
| LLMRouter | `xml_pipeline/llm/router.py` | Multi-backend LLM routing with failover |
| PromptRegistry | `xml_pipeline/platform/prompt_registry.py` | Immutable system prompt storage |
| ContextBuffer | `xml_pipeline/memory/context_buffer.py` | Conversation history per thread |
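The modules above cooperate in a dispatch loop. The toy sketch below shows only the shape of that loop (register listeners by root tag, pull messages, dispatch, collect responses); the names `register`, `pump`, and `greet` are illustrative and are not the actual `StreamPump` API, which is aiostream-based and does full XSD validation.

```python
import asyncio

HANDLERS = {}  # root tag -> async handler (toy stand-in for the listener registry)

def register(tag, handler):
    HANDLERS[tag] = handler

async def pump(queue: asyncio.Queue) -> list[str]:
    """Drain the queue, dispatching each message to its listener."""
    results = []
    while not queue.empty():
        tag, payload = await queue.get()
        handler = HANDLERS.get(tag)
        if handler is None:
            # Mirrors the <huh> diagnostic the real pump emits on routing failure
            results.append(f"<huh>no listener for {tag}</huh>")
            continue
        results.append(await handler(payload))
    return results

async def greet(name: str) -> str:
    return f"<greeting>Hello, {name}!</greeting>"

async def main():
    register("greet", greet)
    q = asyncio.Queue()
    await q.put(("greet", "Alice"))
    await q.put(("unknown", "x"))
    print(await pump(q))
    # → ['<greeting>Hello, Alice!</greeting>', '<huh>no listener for unknown</huh>']

asyncio.run(main())
```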
## Development Guidelines
### File Naming
- Python files: `snake_case.py` (e.g., `stream_pump.py`, `message_state.py`)
- Config files: `snake_case.yaml` or `kebab-case.yaml`
- Test files: `test_*.py` in `tests/` directory
### Code Naming
- Classes: `PascalCase` (e.g., `StreamPump`, `MessageState`, `HandlerResponse`)
- Functions/methods: `snake_case` (e.g., `repair_step`, `handle_greeting`)
- Variables: `snake_case` (e.g., `thread_id`, `payload_class`)
- Constants: `SCREAMING_SNAKE_CASE` (e.g., `MAX_FILE_SIZE`, `ROUTING_ERROR`)
- Private members: `_leading_underscore` (e.g., `_running`, `_registry`)
- Async functions: regular `snake_case`, no special prefix
### Payload Classes (xmlify pattern)
```python
from dataclasses import dataclass
from third_party.xmlable import xmlify
@xmlify
@dataclass
class Greeting:
"""Incoming greeting request."""
name: str
```
### Handler Pattern
```python
from xml_pipeline.message_bus.message_state import HandlerMetadata, HandlerResponse
async def handle_greeting(payload: Greeting, metadata: HandlerMetadata) -> HandlerResponse:
"""Handler receives typed payload + metadata, returns HandlerResponse."""
return HandlerResponse(
payload=GreetingResponse(message="Hello!"),
to="next-listener",
)
```
### Import Order
1. `from __future__ import annotations` (if needed)
2. Standard library imports
3. Third-party imports (lxml, aiostream, etc.)
4. Local imports from `xml_pipeline.*`
5. Local imports from `third_party.*`
### Type Hints
- Always use type hints for function parameters and return types
- Use `from __future__ import annotations` for forward references
- MyPy is configured with `disallow_untyped_defs = true`
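A small self-contained example of these conventions (the names are hypothetical, not xml-pipeline APIs):

```python
from __future__ import annotations  # forward references resolve lazily

from dataclasses import dataclass

@dataclass
class ThreadInfo:
    thread_id: str
    depth: int

def describe(info: ThreadInfo, verbose: bool = False) -> str:
    """Every parameter and the return type are annotated,
    satisfying mypy's disallow_untyped_defs = true."""
    suffix = f" (depth={info.depth})" if verbose else ""
    return f"thread {info.thread_id}{suffix}"

print(describe(ThreadInfo("550e8400", 3), verbose=True))
# → thread 550e8400 (depth=3)
```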
## Available Commands
| Command | Description |
|---------|-------------|
| `xml-pipeline run [config]` | Run organism from config file |
| `xml-pipeline init [name]` | Create new organism config template |
| `xml-pipeline check [config]` | Validate config without running |
| `xml-pipeline version` | Show version and installed features |
| `xp run [config]` | Short alias for xml-pipeline run |
| `python run_organism.py [config]` | Run with TUI console |
| `python run_organism.py --simple [config]` | Run with simple console |
| `pytest tests/ -v` | Run test suite |
| `pytest tests/test_pipeline_steps.py -v` | Run specific test file |
## Environment Variables
| Variable | Required | Description | Example |
|----------|----------|-------------|---------|
| `XAI_API_KEY` | For xAI | xAI (Grok) API key | `xai-...` |
| `ANTHROPIC_API_KEY` | For Anthropic | Anthropic (Claude) API key | `sk-ant-...` |
| `OPENAI_API_KEY` | For OpenAI | OpenAI API key | `sk-...` |
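A hedged sketch of how a backend might resolve its key from the `api_key_env` name declared in the config; `resolve_api_key` is illustrative, not the actual router code, and `DEMO_API_KEY` is a stand-in variable:

```python
import os

def resolve_api_key(api_key_env: str) -> str:
    """Look up the key named by api_key_env, failing loudly if unset."""
    key = os.environ.get(api_key_env)
    if not key:
        raise RuntimeError(
            f"{api_key_env} is not set; add it to .env or the environment"
        )
    return key

os.environ["DEMO_API_KEY"] = "xai-example"  # demonstration only
print(resolve_api_key("DEMO_API_KEY"))
# → xai-example
```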
## Testing
- **Location:** `tests/` directory
- **Framework:** pytest with pytest-asyncio
- **Pattern:** `test_*.py` files, classes prefixed with `Test`, methods with `test_`
- **Async tests:** Use `@pytest.mark.asyncio` decorator
- **Markers:** `@pytest.mark.slow`, `@pytest.mark.integration`
- **Coverage:** No explicit target, focus on pipeline step coverage
```bash
# Run all tests
pytest tests/ -v
# Run specific test file
pytest tests/test_pipeline_steps.py -v
# Run tests matching pattern
pytest tests/ -v -k "repair"
# Skip slow tests
pytest tests/ -v -m "not slow"
```
## Organism Configuration
Organisms are configured via YAML files (default: `config/organism.yaml`).
See @docs/configuration.md for full reference.
```yaml
organism:
name: my-organism
port: 8765
llm:
strategy: failover
backends:
- provider: xai
api_key_env: XAI_API_KEY
listeners:
- name: greeter
payload_class: handlers.hello.Greeting
handler: handlers.hello.handle_greeting
description: Greeting agent
agent: true
peers: [shouter]
prompt: |
You are a friendly greeter agent.
```
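The kind of structural check `xml-pipeline check` might perform can be sketched as below, applied to an already-parsed config dict (YAML loading omitted). Which listener keys are strictly required is an assumption here:

```python
# Assumed-required keys, inferred from the example config above
REQUIRED_LISTENER_KEYS = {"name", "payload_class", "handler", "description"}

def check_config(config: dict) -> list[str]:
    """Return a list of human-readable structural errors (empty = OK)."""
    organism = config.get("organism")
    if not isinstance(organism, dict):
        return ["missing top-level 'organism' mapping"]
    errors = []
    for i, listener in enumerate(organism.get("listeners", [])):
        missing = REQUIRED_LISTENER_KEYS - listener.keys()
        if missing:
            errors.append(f"listener {i}: missing {sorted(missing)}")
    return errors

cfg = {"organism": {"name": "demo", "listeners": [{"name": "greeter"}]}}
print(check_config(cfg))
# → ["listener 0: missing ['description', 'handler', 'payload_class']"]
```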
## Security Model
- **Handler Isolation:** Handlers cannot forge identity, escape threads, or probe topology
- **Peer Constraints:** Agents can only send to declared peers in config
- **Opaque Thread UUIDs:** Handlers see only UUIDs, never internal call chains
- **Envelope Injection:** `<from>`, `<thread>`, `<to>` always set by system, never by handlers
- **OOB Channel:** Privileged commands use separate localhost-only channel
## Message Envelope Format
All messages use the universal envelope with namespace `https://xml-pipeline.org/ns/envelope/v1`:
```xml
<message xmlns="https://xml-pipeline.org/ns/envelope/v1">
<meta>
<from>greeter</from>
<to>shouter</to>
<thread>550e8400-e29b-41d4-a716-446655440000</thread>
</meta>
<Greeting xmlns="">
<name>Alice</name>
</Greeting>
</message>
```
## Pipeline Steps
Messages flow through these processing stages:
1. **repair_step** — Fix malformed XML using lxml recover mode
2. **c14n_step** — Canonicalize XML (Exclusive C14N)
3. **envelope_validation_step** — Verify `<message>` structure against envelope.xsd
4. **payload_extraction_step** — Extract payload element from envelope
5. **thread_assignment_step** — Assign or inherit thread UUID
6. **xsd_validation_step** — Validate payload against listener's schema
7. **deserialization** — XML → typed @xmlify dataclass
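The staging above amounts to folding a state object through an ordered list of async steps. A toy version (step bodies are stand-ins; real steps operate on `MessageState` and use lxml):

```python
import asyncio

async def repair_step(state: dict) -> dict:
    state["xml"] = state["xml"].strip()  # stand-in for lxml recover mode
    return state

async def c14n_step(state: dict) -> dict:
    state["canonical"] = True  # stand-in for Exclusive C14N
    return state

async def run_pipeline(state: dict, steps) -> dict:
    # Steps within one pipeline run sequentially; different listeners'
    # pipelines run concurrently in the real pump.
    for step in steps:
        state = await step(state)
    return state

state = asyncio.run(run_pipeline({"xml": "  <greeting/> "}, [repair_step, c14n_step]))
print(state)
# → {'xml': '<greeting/>', 'canonical': True}
```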
## Optional Dependencies
```bash
# LLM providers
pip install xml-pipeline[anthropic] # Anthropic SDK
pip install xml-pipeline[openai] # OpenAI SDK
# Tool backends
pip install xml-pipeline[redis] # Distributed key-value store
pip install xml-pipeline[search] # DuckDuckGo search
# Server features
pip install xml-pipeline[auth] # TOTP + Argon2 authentication
pip install xml-pipeline[server] # WebSocket server
# Everything
pip install xml-pipeline[all]
# Development (includes all + mypy + ruff)
pip install xml-pipeline[dev]
```
## Native Tools
The project includes built-in tool implementations in `xml_pipeline/tools/`:
| Tool | File | Purpose |
|------|------|---------|
| calculate | `calculate.py` | Math expression evaluation |
| fetch | `fetch.py` | HTTP requests |
| files | `files.py` | File system operations |
| shell | `shell.py` | Shell command execution |
| search | `search.py` | Web search (DuckDuckGo) |
| keyvalue | `keyvalue.py` | Key-value storage (Redis optional) |
| convert | `convert.py` | Data format conversion |
| librarian | `librarian.py` | Documentation lookup |
## System Primitives
Built-in message types in `xml_pipeline/primitives/`:
| Primitive | Purpose |
|-----------|---------|
| `Boot` | Organism initialization message |
| `TodoUntil` | Register a watcher for expected response |
| `TodoComplete` | Close a registered watcher |
| `TextInput` | User text input from console |
| `TextOutput` | Text output to console |
## Additional Resources
- @docs/core-principles-v2.1.md — Single source of truth for architecture
- @docs/message-pump-v2.1.md — Message pump implementation details
- @docs/handler-contract-v2.1.md — Handler interface specification
- @docs/llm-router-v2.1.md — LLM backend abstraction
- @docs/secure-console-v3.md — Console and authentication
- @docs/platform-architecture.md — Platform-level APIs
- @docs/native_tools.md — Native tool implementations
- @docs/primitives.md — System primitives reference (includes thread lifecycle)
- @docs/configuration.md — Organism configuration reference
- @docs/lsp-integration.md — LSP editor support for YAML and AssemblyScript
- @docs/split-config.md — Split configuration architecture
- @docs/why-not-json.md — Rationale for XML over JSON
## Skill Usage Guide
When working on tasks involving these technologies, invoke the corresponding skill:
| Skill | Invoke When |
|-------|-------------|
| pyhumps | Converts between snake_case and camelCase naming conventions |
| xmlable | Manages dataclass ↔ XML serialization and automatic XSD generation |
| pyyaml | Loads and validates organism.yaml configuration files |
| cryptography | Implements Ed25519 identity keys for signing and federation auth |
| httpx | Handles async HTTP requests for LLM backend communication |
| aiostream | Implements stream-based message pipeline with concurrent fan-out processing |
| prompt-toolkit | Builds interactive TUI console with password input and command history |
| lxml | Handles XML processing, XSD validation, C14N normalization, and repair |
| python | Manages async-first Python 3.11+ codebase with type hints and dataclasses |
| pytest | Runs async test suite with pytest-asyncio fixtures and markers |


@ -22,7 +22,7 @@ See [Core Architectural Principles](docs/core-principles-v2.1.md) for the single
## Core Philosophy
- **Autonomous DNA:** Listeners declare their contract via `@xmlify` dataclasses; the organism auto-generates XSDs, examples, and tool prompts.
- **Schema-Locked Intelligence:** Payloads validated directly against XSD (lxml) → deserialized to typed instances → pure handlers.
- **Multi-Response Tolerance:** Handlers return raw bytes; bus wraps in `<dummy></dummy>` and extracts multiple payloads (perfect for parallel tool calls or dirty LLM output).
- **Multi-Response Tolerance:** Handlers return `HandlerResponse` dataclasses; bus extracts payloads and routes them (perfect for parallel tool calls or multi-step workflows).
- **Computational Sovereignty:** Turing-complete via blind self-calls, subthreading primitives, concurrent broadcast, and visible reasoning — all bounded by private thread hierarchy and local-only control.
## Developer Experience — Create a Listener in 12 Lines
@ -30,9 +30,9 @@ See [Core Architectural Principles](docs/core-principles-v2.1.md) for the single
Just declare a dataclass contract and a one-line human description. The organism handles validation, XSD, examples, and tool prompts automatically.
```python
from xmlable import xmlify
from dataclasses import dataclass
from xml_pipeline import Listener, bus # bus is the global MessageBus
from third_party.xmlable import xmlify
from xml_pipeline.message_bus.message_state import HandlerMetadata, HandlerResponse
@xmlify
@dataclass
@ -40,16 +40,22 @@ class AddPayload:
a: int
b: int
def add_handler(payload: AddPayload) -> bytes:
result = payload.a + payload.b
return f"<result>{result}</result>".encode("utf-8")
@xmlify
@dataclass
class ResultPayload:
value: int
Listener(
payload_class=AddPayload,
handler=add_handler,
name="calculator.add",
description="Adds two integers and returns their sum."
).register() # ← Boom: XSD, example, prompt auto-generated + registered
async def add_handler(payload: AddPayload, metadata: HandlerMetadata) -> HandlerResponse:
"""Handlers MUST be async and return HandlerResponse."""
result = payload.a + payload.b
return HandlerResponse.respond(payload=ResultPayload(value=result))
# In organism.yaml:
# listeners:
# - name: calculator.add
# payload_class: mymodule.AddPayload
# handler: mymodule.add_handler
# description: "Adds two integers and returns their sum."
```
The organism now speaks `<add>` — fully validated, typed, and discoverable.<br/>


@ -7,7 +7,7 @@ Secure, XML-centric multi-listener organism server.
Stream-based message pump with aiostream for fan-out handling.
"""
from agentserver.message_bus import (
from xml_pipeline.message_bus import (
StreamPump,
ConfigLoader,
Listener,


@ -68,6 +68,7 @@ class HandlerMetadata:
own_name: str | None = None # This listener's name (only if agent: true)
is_self_call: bool = False # True if message is from self
usage_instructions: str = "" # Auto-generated peer schemas for LLM prompts
todo_nudge: str = "" # System note about pending/raised todos
```
### Field Rationale
@ -79,6 +80,42 @@ class HandlerMetadata:
| `own_name` | Enables self-referential reasoning. Only populated for `agent: true` listeners. |
| `is_self_call` | Detect self-messages (e.g., `<todo-until>` loops). |
| `usage_instructions` | Auto-generated from peer schemas. Inject into LLM system prompt. |
| `todo_nudge` | System-generated reminder about pending todos. See Todo Registry below. |
### Todo Nudge (for LLM Agents)
The `todo_nudge` field is populated by the pump when an agent has raised "eyebrows" —
registered watchers from `TodoUntil` that have received matching responses.
**How it works:**
1. Agent registers a todo watcher via `TodoUntil` primitive
2. When expected response arrives, the watcher is "raised" (condition met)
3. On next handler call to that agent, `todo_nudge` contains a reminder
4. Agent should check `todo_nudge` and close completed todos
**Example nudge content:**
```
SYSTEM NOTE: The following todos appear complete and should be closed:
- watcher_id: abc123 (registered for: calculator.add response)
Call todo_registry.close(watcher_id) to acknowledge.
```
**Usage in handler:**
```python
async def agent_handler(payload, metadata: HandlerMetadata) -> HandlerResponse:
# Check for completed todos
if metadata.todo_nudge:
# Parse and close completed watchers
todo_registry = get_todo_registry()
raised = todo_registry.get_raised_for(metadata.thread_id, metadata.own_name)
for watcher in raised:
todo_registry.close(watcher.watcher_id)
# Continue with normal handler logic...
```
**Note:** This is an internal mechanism for LLM agent task tracking. Most handlers
can ignore this field. If empty, there are no pending todo notifications.
## Security Model
@ -155,7 +192,7 @@ async def add_handler(payload: AddPayload, metadata: HandlerMetadata) -> Handler
```python
async def research_handler(payload: ResearchPayload, metadata: HandlerMetadata) -> HandlerResponse:
from agentserver.llm import complete
from xml_pipeline.llm import complete
# Build prompt with peer awareness
system_prompt = metadata.usage_instructions + "\n\nYou are a research agent."


@ -74,7 +74,7 @@ Optional flags:
```python
from xmlable import xmlify
from dataclasses import dataclass
from agentserver.message_bus.message_state import HandlerMetadata, HandlerResponse
from xml_pipeline.message_bus.message_state import HandlerMetadata, HandlerResponse
@xmlify
@dataclass


@ -43,7 +43,7 @@ The LLM router provides a unified interface for LLM calls. Agents simply request
### Simple Call
```python
from agentserver.llm import complete
from xml_pipeline.llm import complete
response = await complete(
model="grok-4.1",
@ -71,7 +71,7 @@ response = await complete(
```python
async def research_handler(payload: ResearchPayload, metadata: HandlerMetadata) -> HandlerResponse:
from agentserver.llm import complete
from xml_pipeline.llm import complete
response = await complete(
model="grok-4.1",
@ -233,7 +233,7 @@ except BackendError as e:
The router tracks tokens per agent for budgeting and monitoring:
```python
from agentserver.llm.router import get_router
from xml_pipeline.llm.router import get_router
router = get_router()

docs/lsp-integration.md (new file, 226 lines)

@ -0,0 +1,226 @@
# LSP Integration
**Status:** Implemented
**Date:** January 2026
The AgentServer console includes Language Server Protocol (LSP) integration for intelligent
editing of configuration files and AssemblyScript listener source code.
## Overview
LSP integration provides:
- **Autocompletion** — Context-aware suggestions while typing
- **Diagnostics** — Real-time error and warning messages
- **Hover documentation** — Press F1 to see docs for the current symbol
- **Signature help** — Function parameter hints (AssemblyScript only)
## Supported Language Servers
| Server | Purpose | Install |
|--------|---------|---------|
| yaml-language-server | organism.yaml, listener configs | `npm install -g yaml-language-server` |
| asls | AssemblyScript listener source | `npm install -g assemblyscript-lsp` |
## Configuration
LSP is **automatically enabled** when the language server is installed. No configuration needed.
The system detects language servers at startup:
```python
from xml_pipeline.console.lsp import is_lsp_available, is_asls_available
yaml_ok, yaml_reason = is_lsp_available()
# (True, "yaml-language-server available") or (False, "yaml-language-server not found...")
asls_ok, asls_reason = is_asls_available()
# (True, "AssemblyScript LSP available") or (False, "asls not found...")
```
## Editor Usage
### YAML Config Editing
When editing organism.yaml or listener configs via `/config -e`:
| Key | Action |
|-----|--------|
| Ctrl+S | Save and exit |
| Ctrl+Q | Quit without saving |
| F1 | Show hover documentation |
| Ctrl+Space | Trigger completion |
The editor shows `[YAML LSP]` in the header when connected.
### AssemblyScript Editing
When editing `.ts` or `.as` listener source files:
| Key | Action |
|-----|--------|
| Ctrl+S | Save and exit |
| Ctrl+Q | Quit without saving |
| F1 | Show hover documentation |
| Ctrl+Space | Trigger completion |
| Ctrl+P | Show signature help |
The editor shows `[ASLS]` in the header when connected.
## JSON Schema for YAML
The system generates JSON schemas for yaml-language-server validation:
```
~/.xml-pipeline/schemas/
├── organism.schema.json # Schema for organism.yaml
└── listener.schema.json # Schema for listener/*.yaml
```
These are automatically generated by `ensure_schemas()` at startup.
### Schema Modeline
YAML files can include a modeline to enable schema validation:
```yaml
# yaml-language-server: $schema=~/.xml-pipeline/schemas/listener.schema.json
name: greeter
description: Greeting agent
handler: handlers.hello.handle_greeting
```
The editor automatically injects this modeline when editing config files.
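Injection can be done idempotently: prepend the modeline only if one is not already present. A sketch of that logic (the actual editor's implementation may differ):

```python
MODELINE = "# yaml-language-server: $schema=~/.xml-pipeline/schemas/listener.schema.json"

def inject_modeline(text: str) -> str:
    """Prepend the schema modeline unless the file already carries one."""
    if text.lstrip().startswith("# yaml-language-server:"):
        return text  # already present; injection is idempotent
    return MODELINE + "\n" + text

doc = "name: greeter\n"
print(inject_modeline(inject_modeline(doc)).splitlines()[0])
# → # yaml-language-server: $schema=~/.xml-pipeline/schemas/listener.schema.json
```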
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Editor (prompt_toolkit) │
│ ┌─────────────────────────────────────────────────────────┐│
│ │ LSPEditor ││
│ │ - Syntax highlighting (Pygments) ││
│ │ - Completion popup ││
│ │ - Diagnostics in status bar ││
│ │ - Hover popup on F1 ││
│ └─────────────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ LSP Manager (singleton) │
│ - Manages server lifecycle │
│ - Reference counting for cleanup │
│ - Supports multiple servers concurrently │
└─────────────────────────────────────────────────────────────┘
┌──────────────┴──────────────┐
▼ ▼
┌─────────────────────┐ ┌─────────────────────┐
│ YAMLLSPClient │ │ ASLSClient │
│ (yaml-language- │ │ (asls) │
│ server) │ │ │
└─────────────────────┘ └─────────────────────┘
```
## API Reference
### LSPEditor
```python
from xml_pipeline.console.editor import LSPEditor
# Edit YAML config
editor = LSPEditor(schema_type="listener", syntax="yaml")
edited_text, saved = await editor.edit(content, title="greeter.yaml")
# Edit AssemblyScript
editor = LSPEditor(syntax="assemblyscript")
edited_text, saved = await editor.edit(source, title="handler.ts")
```
### Helper Functions
```python
from xml_pipeline.console.editor import (
edit_text_async,
edit_file_async,
edit_assemblyscript_source,
detect_syntax_from_path,
)
# Edit with LSP
await edit_file_async("config/organism.yaml", schema_type="organism")
# Auto-detect syntax from extension
await edit_file_async("listeners/greeter.ts") # Uses ASLS
# Convenience for AS files
await edit_assemblyscript_source("handler.ts")
```
### LSP Manager
```python
from xml_pipeline.console.lsp import get_lsp_manager, LSPServerType
manager = get_lsp_manager()
# Get YAML client
client = await manager.get_yaml_client()
if client:
completions = await client.completion(uri, line, col)
await manager.release_client(LSPServerType.YAML)
# Get ASLS client
client = await manager.get_asls_client()
if client:
sig_help = await client.signature_help(uri, line, col)
await manager.release_client(LSPServerType.ASSEMBLYSCRIPT)
```
## Graceful Fallback
If language servers are not installed, the editor still works:
- Syntax highlighting via Pygments (no external dependency)
- No completions or diagnostics
- Header shows no LSP indicator
This allows the system to work on machines without Node.js installed.
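The startup detection described above can be approximated with `shutil.which`; the real `is_lsp_available()` may do more (e.g. version checks):

```python
import shutil

def detect(binary: str) -> tuple[bool, str]:
    """Report whether a language-server binary is reachable on PATH."""
    path = shutil.which(binary)
    if path is None:
        return False, f"{binary} not found on PATH"
    return True, f"{binary} available at {path}"

ok, reason = detect("yaml-language-server")
print(ok, reason)
```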
## Installation
1. Install Node.js (v16+)
2. Install language servers:
```bash
npm install -g yaml-language-server
npm install -g assemblyscript-lsp
```
3. Restart the console
The system will automatically detect and use the language servers.
## Troubleshooting
### Language server not detected
Check if the binary is in PATH:
```bash
which yaml-language-server
which asls
```
### Editor crashes on startup
Check logs for LSP errors:
```bash
python run_organism.py 2>&1 | grep -i lsp
```
### Completions not working
1. Ensure schema files exist in `~/.xml-pipeline/schemas/`
2. Check that the YAML file has the schema modeline
3. Verify yaml-language-server is installed
---
**v2.1 Feature** — January 2026


@ -95,32 +95,60 @@ Pipelines run concurrently; messages within a single pipeline are processed sequ
---
### Handler Response Processing (hard-coded path)
### Handler Response Processing (v2.1 Pattern)
After dispatcher awaits a handler:
Handlers return a `HandlerResponse` dataclass (not raw bytes). After the dispatcher awaits a handler:
```python
response_bytes = await handler(state.payload, metadata)
from xml_pipeline.message_bus.message_state import HandlerResponse
# Safety guard
if response_bytes is None or not isinstance(response_bytes, bytes):
response_bytes = b"<huh>Handler failed to return valid bytes — likely missing return or wrong type</huh>"
# Dispatch to handler
response = await handler(state.payload, metadata)
# Dedicated multi-payload extraction (hard-coded, tolerant)
payloads_bytes_list = await multi_payload_extract(response_bytes)
# Process response
if response is None:
# Handler terminates chain — no message emitted
return
for payload_bytes in payloads_bytes_list:
# Create fresh initial state for each emitted payload
new_state = MessageState(
if not isinstance(response, HandlerResponse):
# Legacy bytes return (deprecated) or invalid — emit error
await emit_system_error(state, "Handler must return HandlerResponse or None")
return
# Determine routing based on response type
if response.is_response:
# .respond() was used — route back to caller via thread registry
target, new_thread = thread_registry.prune_for_response(state.thread_id)
else:
# Forward to named target
target = response.to
new_thread = thread_registry.extend_chain(state.thread_id, target)
# Peer constraint enforcement (agents only)
if listener.is_agent and listener.peers:
if target not in listener.peers:
await emit_system_error(state, "Routing error")
return
# Serialize payload to XML
payload_bytes = xmlify_serialize(response.payload)
# Create fresh state for the new message
new_state = MessageState(
raw_bytes=payload_bytes,
thread_id=state.thread_id, # inherited
from_id=current_listener.name, # provenance injection
)
# Route through normal pipeline resolution (root tag lookup)
await route_and_process(new_state)
thread_id=new_thread,
from_id=current_listener.name, # Pump injects identity, never handler
)
# Re-inject into pipeline for validation and routing
await route_and_process(new_state)
```
`multi_payload_extract` wraps in `<dummy>` (idempotent), repairs/parses, extracts all root elements, returns list of bytes. If none found → single diagnostic `<huh>`.
**Key security properties:**
- `<from>` always injected from `current_listener.name` (coroutine-captured)
- `<thread>` always from thread registry (never handler output)
- `<to>` validated against peers list for agents
- Handlers cannot forge identity, escape threads, or bypass peer constraints
---
@ -165,10 +193,12 @@ async def dispatcher(state: MessageState):
1. One dedicated pipeline per registered listener + permanent system pipeline.
2. Pipelines are ordered lists of async steps operating on universal `MessageState`.
3. Routing resolution is a normal pipeline step → dispatcher receives pre-routed targets.
4. Handler responses go through hard-coded multi-payload extraction → each payload becomes fresh `MessageState` routed normally.
4. Handlers return `HandlerResponse` (or `None` to terminate) → pump wraps payload in envelope and re-injects.
5. Provenance (`<from>`) and thread continuity injected by pump, never by handlers.
6. `<huh>` guards protect against missing returns and step failures.
7. Extensibility: new steps (token counting, rate limiting, logging) insert anywhere in default list.
6. Peer constraints enforced by pump — agents can only send to declared peers.
7. Thread registry manages call chains — `.respond()` prunes, forward extends.
8. `<huh>` guards protect against step failures; `<SystemError>` for routing violations.
9. Extensibility: new steps (token counting, rate limiting, logging) insert anywhere in default list.
---


@ -40,6 +40,104 @@ return None
- Chain ends here
- Thread can be cleaned up
## Thread Lifecycle & Pruning
Threads represent call chains through the system. The thread registry maps opaque UUIDs
to actual paths like `console.router.greeter.calculator`.
### Thread Creation
Threads are created when:
1. **External message arrives** — Console or WebSocket sends a message
2. **Handler forwards to peer** — `HandlerResponse(to="peer")` extends the chain
```
Console sends @greeter hello
→ Thread created: "system.organism.console.greeter"
→ UUID: 550e8400-e29b-41d4-...
Greeter forwards to shouter
→ Chain extended: "system.organism.console.greeter.shouter"
→ New UUID: 6ba7b810-9dad-...
```
### Thread Pruning (Critical)
Pruning happens when a handler returns `.respond()`:
```python
# In calculator handler
return HandlerResponse.respond(payload=ResultPayload(value=42))
```
**What happens:**
1. Registry looks up current chain: `console.router.greeter.calculator`
2. Prunes last segment: → `console.router.greeter`
3. Identifies target (new tail): `greeter`
4. Creates/reuses UUID for pruned chain
5. Routes response to `greeter` with the pruned thread
**Visual:**
```
Before pruning:
console → router → greeter → calculator
↑ (current)
After .respond():
console → router → greeter
↑ (response delivered here)
```
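The pruning rule can be sketched directly on the dotted chain. The real registry works on opaque UUIDs and never exposes chains; this toy `prune_for_response` operates on the chain string only to make the rule concrete:

```python
def prune_for_response(chain: str) -> tuple[str, str]:
    """Drop the last segment; the new tail is the response target."""
    pruned = chain.rsplit(".", 1)[0]
    target = pruned.rsplit(".", 1)[-1]
    return pruned, target

pruned, target = prune_for_response("console.router.greeter.calculator")
print(pruned, target)
# → console.router.greeter greeter
```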
### What Gets Cleaned Up
When a thread is pruned or terminated:
| Resource | Cleanup Behavior |
|----------|------------------|
| Thread UUID mapping | Removed from registry |
| Context buffer slots | Slots for that thread are deleted |
| In-flight messages | Completed or dropped (no orphans) |
| Sub-thread branches | Automatically pruned (cascading) |
**Important:** Sub-threads spawned by a responding handler are effectively orphaned.
If `greeter` spawned `calculator` and `summarizer`, then responds to `router`, both
`calculator` and `summarizer` branches become unreachable.
### When Cleanup Happens
| Event | Cleanup |
|-------|---------|
| `.respond()` | Current UUID cleaned; pruned chain used |
| `return None` | Thread terminates; UUID can be cleaned |
| Chain exhausted | Root reached; entire chain cleaned |
| Idle timeout | (Future) Stale threads garbage collected |
### Thread Privacy
Handlers only see opaque UUIDs via `metadata.thread_id`. They never see:
- The actual call chain (`console.router.greeter`)
- Other thread UUIDs
- The thread registry
This prevents topology probing. Even if a handler is compromised, it cannot:
- Discover who called it (beyond `from_id` = immediate caller)
- Map the organism's structure
- Forge thread IDs to access other conversations
### Debugging Threads
For debugging, the registry provides `debug_dump()`:
```python
from xml_pipeline.message_bus.thread_registry import get_registry
registry = get_registry()
chains = registry.debug_dump()
# {'550e8400...': 'console.router.greeter', ...}
```
**Note:** This is for operator debugging only, never exposed to handlers.
## System Messages
These payload elements are emitted by the system (pump) only. Agents cannot emit them.


@ -1,11 +1,23 @@
# Secure Console Design — v3.0
**Status:** Design Draft (Partially Implemented)
**Date:** January 2026
> **Implementation Note:** This document describes the *target design* for v3.0. The current
> implementation has the console working with password authentication and most commands, but
> the OOB network port has **not yet been removed**. See `configuration.md` for current OOB
> configuration. Full keyboard-only mode is planned for a future release.
## Overview
The console becomes the **sole privileged interface** to the organism. In the target design, the OOB channel is eliminated as a network port — privileged operations are only accessible via local keyboard input.
**Current State (v2.1):**
- Console with password protection: ✅ Implemented
- `/config`, `/status`, `/listeners` commands: ✅ Implemented
- `/config -e` editor with LSP support: ✅ Implemented
- OOB network port removed: ❌ Not yet (still in configuration.md)
- Keyboard-only privileged ops: ❌ Partial (console commands work, but OOB port still exists)
## Security Model
@ -197,9 +209,12 @@ class SecureConsole:
return argon2.verify(self.password_hash, password)
```
### OOB Channel Removal (Planned)
> **Not Yet Implemented:** The OOB port is still present in v2.1. This section describes
> the target design where the OOB port is removed.
In the target design, the OOB port in `privileged-msg.xsd` is **removed**. Privileged operations are:
1. Defined as Python methods on `SecureConsole`
2. Invoked directly via keyboard commands
@ -289,13 +304,18 @@ Goodbye!
- [ ] Protected commands require password re-entry
- [ ] Argon2id for password hashing (memory-hard)
## Migration from v2.x (Future)
When the OOB removal is implemented, migration will involve:
1. Remove OOB port configuration from organism.yaml
2. Remove `privileged-msg.xsd` network handling
3. First run prompts for password setup
4. Existing privileged operations become console commands
**Current v2.1:** OOB is still present. Console provides an alternative privileged interface
but doesn't replace OOB yet.
## Attach/Detach Model
The console is a proper handler in the message flow. It can attach and detach without stopping the organism.


@ -12,7 +12,7 @@ Declare your payload contract as an `@xmlify` dataclass + a pure async handler f
```python
from xmlable import xmlify
from dataclasses import dataclass
from xml_pipeline.message_bus.message_state import HandlerMetadata, HandlerResponse
@xmlify
@dataclass
@ -36,7 +36,7 @@ async def add_handler(payload: AddPayload, metadata: HandlerMetadata) -> Handler
# LLM agent example
async def agent_handler(payload: AgentPayload, metadata: HandlerMetadata) -> HandlerResponse:
# Build prompt with peer schemas
from xml_pipeline.llm import complete
response = await complete(
model="grok-4.1",

docs/split-config.md Normal file

@ -0,0 +1,230 @@
# Split Configuration Architecture
**Status:** Implemented
**Date:** January 2026
The split configuration architecture allows separating listener definitions from the
core organism configuration, making it easier to manage large organisms with many listeners.
## Overview
Instead of a monolithic `organism.yaml` with all listeners embedded, you can:
```
~/.xml-pipeline/
├── organism.yaml # Core settings only
└── listeners/ # Per-listener configs
├── greeter.yaml
├── calculator.yaml
└── summarizer.yaml
```
## File Structure
### organism.yaml (Core Only)
```yaml
organism:
name: hello-world
port: 8765
llm:
strategy: failover
backends:
- provider: xai
api_key_env: XAI_API_KEY
listeners:
directory: "~/.xml-pipeline/listeners"
include: ["*.yaml"]
```
The `listeners` section can either:
1. **Inline definitions** — Traditional embedded listener list
2. **Directory reference** — Point to a folder of listener files
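Resolving either form can be sketched like this. Note this is a simplified stand-in, not the actual `split_loader` implementation:

```python
import glob
import os

def load_listeners(cfg: dict) -> list[dict]:
    """Return listener dicts from an inline list or a directory reference."""
    spec = cfg["listeners"]
    if isinstance(spec, list):  # inline definitions, used as-is
        return spec
    import yaml  # PyYAML, a core dependency of xml-pipeline
    directory = os.path.expanduser(spec["directory"])
    configs = []
    for pattern in spec.get("include", ["*.yaml"]):
        for path in sorted(glob.glob(os.path.join(directory, pattern))):
            with open(path) as fh:
                configs.append(yaml.safe_load(fh))
    return configs
```

The inline form passes straight through; the directory form expands `~`, applies each `include` glob, and loads one listener per matching file.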
### Listener Files
Each listener file defines a single listener:
```yaml
# ~/.xml-pipeline/listeners/greeter.yaml
# yaml-language-server: $schema=~/.xml-pipeline/schemas/listener.schema.json
name: greeter
description: Greeting agent
agent: true
handler: handlers.hello.handle_greeting
payload_class: handlers.hello.Greeting
prompt: |
You are a friendly greeter agent.
Keep responses short and enthusiastic.
peers:
- shouter
- logger
```
### File Naming Convention
| Pattern | Example | Result |
|---------|---------|--------|
| `{name}.yaml` | `greeter.yaml` | Listener named "greeter" |
| `{category}.{name}.yaml` | `calculator.add.yaml` | Listener named "calculator.add" |
The filename (without extension) should match the `name` field inside the file.
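Deriving the listener name from the filename is a one-liner with `pathlib`, since `Path.stem` strips only the final suffix and so preserves dotted category names:

```python
from pathlib import Path

def listener_name(path: Path) -> str:
    # "greeter.yaml" -> "greeter"; "calculator.add.yaml" -> "calculator.add"
    return path.stem
```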
## API
### Loading Split Config
```python
from xml_pipeline.config.split_loader import load_split_config, load_organism_yaml
# Load full config (organism + listeners)
config = load_split_config("config/organism.yaml")
# Load organism.yaml only (as raw YAML string)
yaml_content = load_organism_yaml()
```
### Listener Config Store
```python
from xml_pipeline.config.listeners import (
ListenerConfigStore,
get_listener_config_store,
LISTENERS_DIR,
)
store = get_listener_config_store()
# List all listener configs
names = store.list_listeners()
# ['greeter', 'calculator.add', 'summarizer']
# Load a listener config
config = store.get("greeter")
# ListenerConfigData(name='greeter', description='...', ...)
# Load as YAML string
yaml_content = store.load_yaml("greeter")
# Save a listener config
store.save_yaml("greeter", updated_yaml)
```
## Console Integration
The `/config` command supports split configs:
| Command | Action |
|---------|--------|
| `/config` | Show current organism.yaml |
| `/config -e` | Edit organism.yaml |
| `/config @greeter` | Edit listeners/greeter.yaml |
| `/config --list` | List all listener configs |
Example session:
```
> /config --list
Listener configs in ~/.xml-pipeline/listeners:
greeter
calculator.add
summarizer
> /config @greeter
[Opens editor for greeter.yaml with LSP support]
```
## Migration from Monolithic Config
### Step 1: Create Listeners Directory
```bash
mkdir -p ~/.xml-pipeline/listeners
```
### Step 2: Extract Listener Definitions
For each listener in your organism.yaml:
```yaml
# Before (in organism.yaml)
listeners:
- name: greeter
description: Greeting agent
handler: handlers.hello.handle_greeting
# ... more fields
```
Create a separate file:
```yaml
# ~/.xml-pipeline/listeners/greeter.yaml
name: greeter
description: Greeting agent
handler: handlers.hello.handle_greeting
# ... more fields
```
### Step 3: Update organism.yaml
Replace the inline listeners with a directory reference:
```yaml
# After (organism.yaml)
listeners:
directory: "~/.xml-pipeline/listeners"
include: ["*.yaml"]
```
### Step 4: Validate
Run the organism to verify configs load correctly:
```bash
python run_organism.py config/organism.yaml
```
## Schema Validation
Listener files can use JSON Schema validation via yaml-language-server:
```yaml
# yaml-language-server: $schema=~/.xml-pipeline/schemas/listener.schema.json
name: greeter
# ...
```
Generate schemas with:
```python
from xml_pipeline.config.schema import ensure_schemas
ensure_schemas() # Creates ~/.xml-pipeline/schemas/
```
## Benefits
| Benefit | Description |
|---------|-------------|
| **Modularity** | Each listener is self-contained |
| **Version control** | Track listener changes independently |
| **Team collaboration** | Different people own different listeners |
| **Hot-reload friendly** | (Future) Reload single listener without restarting |
| **IDE support** | LSP works per-file with focused schema |
## Limitations
- Listener files must be YAML (no JSON support)
- Directory must be readable at startup
- Circular dependencies not detected (future improvement)
- Hot-reload of individual listeners not yet implemented
## Related Documentation
- [Configuration](configuration.md) — Full organism.yaml reference
- [LSP Integration](lsp-integration.md) — Editor support for config files
- [Secure Console](secure-console-v3.md) — Console commands
---
**v2.1 Feature** — January 2026


@ -119,7 +119,7 @@ asc calculator.ts -o calculator.wasm --optimize
```python
# Pseudocode
from xml_pipeline.wasm import register_wasm_listener
register_wasm_listener(
name="calculator",

examples/__init__.py Normal file

@ -0,0 +1,7 @@
"""
Examples: Reference implementations for xml-pipeline.
Available examples:
- console: Interactive terminal console
- mcp-servers: MCP server integrations (reddit-sentiment)
"""

examples/console/README.md Normal file

@ -0,0 +1,185 @@
# Console Example
A minimal interactive console demonstrating xml-pipeline basics.
## Quick Start
```bash
# From the repo root
python -m examples.console
# Or with a custom config
python -m examples.console path/to/organism.yaml
```
## What's Included
```
examples/console/
├── __init__.py # Package exports
├── __main__.py # Entry point
├── console.py # Console implementation
├── handlers.py # Example handlers
├── organism.yaml # Example config
└── README.md # This file
```
## Example Session
```
==================================================
xml-pipeline console
==================================================
Organism: console-example
Listeners: 3
Type /help for commands
> /listeners
Listeners:
console-output Prints output to console
echo Echoes back your message
greeter Greets you by name
> @greeter Alice
[sending to greeter]
[greeter] Hello, Alice! Welcome to xml-pipeline.
> @echo Hello, world!
[sending to echo]
[echo] Hello, world!
> /quit
Shutting down...
Goodbye!
```
## Commands
| Command | Description |
|---------|-------------|
| `/help` | Show available commands |
| `/listeners` | List registered listeners |
| `/status` | Show organism status |
| `/quit` | Exit |
## Sending Messages
Use `@listener message` to send a message:
```
@greeter Alice # Greet Alice
@echo Hello! # Echo back "Hello!"
```
## Optional Dependencies
For a better terminal experience, install prompt_toolkit:
```bash
pip install prompt_toolkit
```
Without it, the console falls back to basic `input()`.
## Customization
This example is designed to be copied and modified. Key extension points:
1. **Add handlers** — Create new payload classes and handlers in `handlers.py`
2. **Update config** — Add listeners to `organism.yaml`
3. **Modify console** — Change commands or output formatting in `console.py`
### Example: Adding a Calculator
```python
# handlers.py
@xmlify
@dataclass
class Calculate:
expression: str
@xmlify
@dataclass
class CalculateResult:
result: str
async def handle_calculate(payload: Calculate, metadata: HandlerMetadata) -> HandlerResponse:
try:
result = eval(payload.expression) # (Use simpleeval in production!)
text = f"{payload.expression} = {result}"
except Exception as e:
text = f"Error: {e}"
return HandlerResponse(
payload=ConsoleOutput(source="calculator", text=text),
to="console-output",
)
```
```yaml
# organism.yaml
listeners:
- name: calc
payload_class: examples.console.handlers.Calculate
handler: examples.console.handlers.handle_calculate
description: Evaluates math expressions
```
Then: `@calc 2 + 2``[calculator] 2 + 2 = 4`
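If you'd rather avoid `eval` entirely without adding a dependency, a whitelist-based evaluator built on the stdlib `ast` module works for basic arithmetic. The `safe_eval` helper below is hypothetical — it is not part of xml-pipeline:

```python
import ast
import operator

# Only these operators are allowed; anything else raises ValueError.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_eval(expression: str):
    """Evaluate a basic arithmetic expression without calling eval()."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return walk(ast.parse(expression, mode="eval").body)
```

Swap this in for the `eval` call in `handle_calculate` to reject names, attribute access, and function calls outright.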
## Architecture
```
User Input (@greeter Alice)
┌─────────────────────────────────────┐
│ Console │
│ - Parses input │
│ - Creates Greeting payload │
│ - Injects into pump │
└─────────────────────────────────────┘
┌─────────────────────────────────────┐
│ StreamPump │
│ - Validates envelope │
│ - Routes to greeter listener │
└─────────────────────────────────────┘
┌─────────────────────────────────────┐
│ handle_greeting() │
│ - Receives Greeting payload │
│ - Returns ConsoleOutput │
└─────────────────────────────────────┘
┌─────────────────────────────────────┐
│ handle_print() │
│ - Receives ConsoleOutput │
│ - Displays on console │
└─────────────────────────────────────┘
```
## Using in Your Project
```python
from xml_pipeline.message_bus import bootstrap
from examples.console import Console
async def main():
pump = await bootstrap("my_organism.yaml")
console = Console(pump)
pump_task = asyncio.create_task(pump.run())
try:
await console.run()
finally:
pump_task.cancel()
await pump.shutdown()
```
Or copy the entire `examples/console/` directory and modify as needed.


@ -0,0 +1,38 @@
"""
Console Example: Interactive terminal for xml-pipeline.
This example demonstrates how to build an interactive console
that sends messages to listeners and displays responses.
Usage:
python -m examples.console [config.yaml]
Or in your own code:
from examples.console import Console
console = Console(pump)
await console.run()
Dependencies:
pip install prompt_toolkit # For rich terminal input (optional)
The console provides:
- @listener message Send message to a listener
- /help Show available commands
- /listeners List registered listeners
- /quit Graceful shutdown
This is a reference implementation. Feel free to copy and modify
for your own use case.
"""
from .console import Console
from .handlers import Greeting, Echo, handle_greeting, handle_echo, handle_print
__all__ = [
"Console",
"Greeting",
"Echo",
"handle_greeting",
"handle_echo",
"handle_print",
]


@ -0,0 +1,60 @@
#!/usr/bin/env python3
"""
Run the console example.
Usage:
python -m examples.console [config.yaml]
If no config is specified, uses the bundled organism.yaml.
"""
import asyncio
import sys
from pathlib import Path
async def main(config_path: str) -> None:
"""Boot organism and run console."""
from xml_pipeline.message_bus import bootstrap
from .console import Console
# Bootstrap the pump
pump = await bootstrap(config_path)
# Create and run console
console = Console(pump)
# Start pump in background
pump_task = asyncio.create_task(pump.run())
try:
await console.run()
finally:
# Cleanup
pump_task.cancel()
try:
await pump_task
except asyncio.CancelledError:
pass
await pump.shutdown()
print("Goodbye!")
if __name__ == "__main__":
# Find config
args = sys.argv[1:]
if args:
config_path = args[0]
else:
# Use bundled config
config_path = str(Path(__file__).parent / "organism.yaml")
if not Path(config_path).exists():
print(f"Config not found: {config_path}")
sys.exit(1)
try:
asyncio.run(main(config_path))
except KeyboardInterrupt:
print("\nInterrupted")

examples/console/console.py Normal file

@ -0,0 +1,291 @@
"""
console.py: Simple interactive console for xml-pipeline.
This is a minimal, copy-friendly implementation that shows how to:
- Send messages to listeners via the message pump
- Display responses
- Handle basic commands
No password auth, no TUI split-screen, no LSP; just the essentials.
Uses prompt_toolkit if available, falls back to basic input().
Copy this file and modify for your own use case.
"""
from __future__ import annotations
import asyncio
import sys
import uuid
from typing import TYPE_CHECKING, Optional
# Optional: prompt_toolkit for better terminal experience
try:
from prompt_toolkit import PromptSession
from prompt_toolkit.history import InMemoryHistory
from prompt_toolkit.patch_stdout import patch_stdout
PROMPT_TOOLKIT = True
except ImportError:
PROMPT_TOOLKIT = False
if TYPE_CHECKING:
from xml_pipeline.message_bus.stream_pump import StreamPump
# ============================================================================
# Global console registry (for handlers to find us)
# ============================================================================
_active_console: Optional["Console"] = None
def get_active_console() -> Optional["Console"]:
"""Get the currently active console instance."""
return _active_console
def set_active_console(console: Optional["Console"]) -> None:
"""Set the active console instance."""
global _active_console
_active_console = console
# ============================================================================
# ANSI Colors (simple, no dependencies)
# ============================================================================
class Colors:
RESET = "\033[0m"
BOLD = "\033[1m"
DIM = "\033[2m"
RED = "\033[31m"
GREEN = "\033[32m"
YELLOW = "\033[33m"
CYAN = "\033[36m"
def cprint(text: str, color: str = "") -> None:
"""Print with optional ANSI color."""
if color:
print(f"{color}{text}{Colors.RESET}")
else:
print(text)
# ============================================================================
# Console
# ============================================================================
class Console:
"""
Simple interactive console for xml-pipeline.
Usage:
pump = await bootstrap("organism.yaml")
console = Console(pump)
await console.run()
"""
def __init__(self, pump: StreamPump):
self.pump = pump
self.running = False
self._session: Optional[PromptSession] = None
async def run(self) -> None:
"""Main console loop."""
set_active_console(self)
self.running = True
self._print_banner()
# Initialize prompt session if available
if PROMPT_TOOLKIT:
self._session = PromptSession(history=InMemoryHistory())
try:
while self.running:
try:
line = await self._read_input("> ")
if line:
await self._handle_input(line.strip())
except EOFError:
cprint("\nGoodbye!", Colors.YELLOW)
break
except KeyboardInterrupt:
continue
finally:
set_active_console(None)
async def _read_input(self, prompt: str) -> str:
"""Read a line of input."""
if PROMPT_TOOLKIT and self._session:
with patch_stdout():
return await self._session.prompt_async(prompt)
else:
# Fallback: blocking input in executor
loop = asyncio.get_running_loop()
print(prompt, end="", flush=True)
line = await loop.run_in_executor(None, sys.stdin.readline)
return line.strip() if line else ""
async def _handle_input(self, line: str) -> None:
"""Route input to appropriate handler."""
if line.startswith("/"):
await self._handle_command(line)
elif line.startswith("@"):
await self._handle_message(line)
else:
cprint("Use @listener message or /command", Colors.DIM)
cprint("Type /help for available commands", Colors.DIM)
# ------------------------------------------------------------------
# Commands
# ------------------------------------------------------------------
async def _handle_command(self, line: str) -> None:
"""Handle /command."""
parts = line[1:].split(None, 1)
cmd = parts[0].lower() if parts else ""
args = parts[1] if len(parts) > 1 else ""
commands = {
"help": self._cmd_help,
"h": self._cmd_help,
"listeners": self._cmd_listeners,
"ls": self._cmd_listeners,
"status": self._cmd_status,
"quit": self._cmd_quit,
"q": self._cmd_quit,
"exit": self._cmd_quit,
}
handler = commands.get(cmd)
if handler:
await handler(args)
else:
cprint(f"Unknown command: /{cmd}", Colors.RED)
cprint("Type /help for available commands", Colors.DIM)
async def _cmd_help(self, args: str) -> None:
"""Show help."""
cprint("\nCommands:", Colors.CYAN)
cprint(" /help, /h Show this help", Colors.DIM)
cprint(" /listeners, /ls List registered listeners", Colors.DIM)
cprint(" /status Show organism status", Colors.DIM)
cprint(" /quit, /q Exit", Colors.DIM)
cprint("")
cprint("Messages:", Colors.CYAN)
cprint(" @listener text Send message to listener", Colors.DIM)
cprint("")
cprint("Examples:", Colors.CYAN)
cprint(" @greeter Alice Greet Alice", Colors.DIM)
cprint(" @echo Hello! Echo back 'Hello!'", Colors.DIM)
cprint("")
async def _cmd_listeners(self, args: str) -> None:
"""List registered listeners."""
cprint("\nListeners:", Colors.CYAN)
for name, listener in sorted(self.pump.listeners.items()):
desc = listener.description or "(no description)"
cprint(f" {name:20} {desc}", Colors.DIM)
cprint("")
async def _cmd_status(self, args: str) -> None:
"""Show organism status."""
cprint(f"\nOrganism: {self.pump.config.name}", Colors.CYAN)
cprint(f"Listeners: {len(self.pump.listeners)}", Colors.DIM)
cprint(f"Running: {self.pump._running}", Colors.DIM)
cprint("")
async def _cmd_quit(self, args: str) -> None:
"""Exit the console."""
cprint("Shutting down...", Colors.YELLOW)
self.running = False
# ------------------------------------------------------------------
# Message Sending
# ------------------------------------------------------------------
async def _handle_message(self, line: str) -> None:
"""Handle @listener message."""
parts = line[1:].split(None, 1)
if not parts:
cprint("Usage: @listener message", Colors.DIM)
return
target = parts[0].lower()
message = parts[1] if len(parts) > 1 else ""
# Check if listener exists
if target not in self.pump.listeners:
cprint(f"Unknown listener: {target}", Colors.RED)
cprint("Use /listeners to see available listeners", Colors.DIM)
return
# Create payload
listener = self.pump.listeners[target]
payload = self._create_payload(listener, message)
if payload is None:
cprint(f"Cannot create payload for {target}", Colors.RED)
return
cprint(f"[sending to {target}]", Colors.DIM)
# Create thread and inject
thread_id = str(uuid.uuid4())
envelope = self.pump._wrap_in_envelope(
payload=payload,
from_id="console",
to_id=target,
thread_id=thread_id,
)
await self.pump.inject(envelope, thread_id=thread_id, from_id="console")
def _create_payload(self, listener, message: str):
"""Create payload instance from message text."""
payload_class = listener.payload_class
# Try common field patterns
if hasattr(payload_class, "__dataclass_fields__"):
fields = list(payload_class.__dataclass_fields__.keys())
if len(fields) == 1:
return payload_class(**{fields[0]: message})
elif "name" in fields:
return payload_class(name=message)
elif "text" in fields:
return payload_class(text=message)
elif "message" in fields:
return payload_class(message=message)
# Fallback
try:
return payload_class()
except Exception:
return None
# ------------------------------------------------------------------
# Output (called by handlers)
# ------------------------------------------------------------------
def display_response(self, source: str, text: str) -> None:
"""Display a response from a handler."""
cprint(f"[{source}] {text}", Colors.CYAN)
# ------------------------------------------------------------------
# UI
# ------------------------------------------------------------------
def _print_banner(self) -> None:
"""Print startup banner."""
print()
cprint("=" * 50, Colors.CYAN)
cprint(" xml-pipeline console", Colors.CYAN)
cprint("=" * 50, Colors.CYAN)
print()
cprint(f"Organism: {self.pump.config.name}", Colors.GREEN)
cprint(f"Listeners: {len(self.pump.listeners)}", Colors.DIM)
cprint("Type /help for commands", Colors.DIM)
print()


@ -0,0 +1,106 @@
"""
handlers.py: Example handlers for the console demo.
These handlers demonstrate the basic patterns without LLM dependencies:
- Greeting: Simple greeting flow
- Echo: Echo back input
- Response printing to console
No LLM calls, no complex logic; just shows how messages flow.
"""
from dataclasses import dataclass
from third_party.xmlable import xmlify
from xml_pipeline.message_bus.message_state import HandlerMetadata, HandlerResponse
# ============================================================================
# Payloads
# ============================================================================
@xmlify
@dataclass
class Greeting:
"""A greeting request."""
name: str
@xmlify
@dataclass
class GreetingReply:
"""Response from the greeter."""
message: str
@xmlify
@dataclass
class Echo:
"""Echo request — repeats back whatever you send."""
text: str
@xmlify
@dataclass
class EchoReply:
"""Echoed response."""
text: str
@xmlify
@dataclass
class ConsoleOutput:
"""Output to display on console."""
source: str
text: str
# ============================================================================
# Handlers
# ============================================================================
async def handle_greeting(payload: Greeting, metadata: HandlerMetadata) -> HandlerResponse:
"""
Handle a Greeting and return a friendly response.
This is a pure tool (no LLM); it just demonstrates message routing.
"""
message = f"Hello, {payload.name}! Welcome to xml-pipeline."
return HandlerResponse(
payload=ConsoleOutput(source="greeter", text=message),
to="console-output",
)
async def handle_echo(payload: Echo, metadata: HandlerMetadata) -> HandlerResponse:
"""
Echo back whatever text was sent.
Demonstrates simple request/response pattern.
"""
return HandlerResponse(
payload=ConsoleOutput(source="echo", text=payload.text),
to="console-output",
)
async def handle_print(payload: ConsoleOutput, metadata: HandlerMetadata) -> None:
"""
Print output to the console.
This is a terminal handler; it returns None to end the chain.
Uses console_registry to find the active console (if any).
"""
# Try to use registered console, fall back to print
try:
from .console import get_active_console
console = get_active_console()
if console is not None:
console.display_response(payload.source, payload.text)
return
except (ImportError, RuntimeError):
pass
# Fallback: just print with color
print(f"\033[36m[{payload.source}]\033[0m {payload.text}")


@ -0,0 +1,42 @@
# organism.yaml — Example console organism
#
# A minimal organism demonstrating basic message routing.
# No LLM backends required — pure message passing.
#
# Message flows:
# @greeter Alice -> greeter -> console-output -> (display)
# @echo Hello -> echo -> console-output -> (display)
#
# Run with:
# python -m examples.console
organism:
name: console-example
port: 8765
# No LLM config needed for this example
# Uncomment to enable LLM-based agents:
# llm:
# strategy: failover
# backends:
# - provider: xai
# api_key_env: XAI_API_KEY
listeners:
# Greeter: receives Greeting, responds with friendly message
- name: greeter
payload_class: examples.console.handlers.Greeting
handler: examples.console.handlers.handle_greeting
description: Greets you by name
# Echo: echoes back whatever you send
- name: echo
payload_class: examples.console.handlers.Echo
handler: examples.console.handlers.handle_echo
description: Echoes back your message
# Console output: terminal handler that prints responses
- name: console-output
payload_class: examples.console.handlers.ConsoleOutput
handler: examples.console.handlers.handle_print
description: Prints output to console


@ -30,7 +30,7 @@ from dataclasses import dataclass
from typing import Optional
from third_party.xmlable import xmlify
from xml_pipeline.message_bus.message_state import HandlerMetadata, HandlerResponse
# ============================================================================


@ -26,7 +26,7 @@ Usage in organism.yaml:
from dataclasses import dataclass
from third_party.xmlable import xmlify
from xml_pipeline.message_bus.message_state import HandlerMetadata, HandlerResponse
@xmlify
@ -67,8 +67,8 @@ async def handle_greeting(payload: Greeting, metadata: HandlerMetadata) -> Handl
The system prompt is managed by the platform (from organism.yaml).
The handler cannot see or modify the prompt.
"""
from xml_pipeline.platform import complete
from xml_pipeline.message_bus.todo_registry import get_todo_registry
# Check for any raised todos and close them
todo_registry = get_todo_registry()
@ -126,7 +126,7 @@ async def handle_response_print(payload: ShoutedResponse, metadata: HandlerMetad
Routes output to the TUI console if available, otherwise prints to stdout.
"""
from xml_pipeline.console.console_registry import get_console
console = get_console()


@ -1,118 +1,199 @@
# pyproject.toml — OSS xml-pipeline library
#
# This is the open-source core: message pump, handlers, LLM abstraction, tools.
# Advanced features (TUI console, LSP, auth, WebSocket server) are in Nextra.
[build-system]
requires = ["setuptools>=45", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "xml-pipeline"
version = "0.3.0"
description = "Schema-driven XML message bus for multi-agent systems"
readme = "README.md"
requires-python = ">=3.11"
license = {text = "MIT"}
authors = [
{name = "Your Name", email = "you@example.com"},
]
keywords = [
"xml",
"multi-agent",
"message-bus",
"llm",
"pipeline",
"async",
]
classifiers = [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Framework :: AsyncIO",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Distributed Computing",
]
# =============================================================================
# CORE DEPENDENCIES — minimal set for the library
# =============================================================================
dependencies = [
# XML processing & validation
"lxml>=4.9",
# Async streaming pipeline
"aiostream>=0.5",
# Configuration
"pyyaml>=6.0",
# Case conversion (snake_case <-> camelCase)
"pyhumps>=3.0",
# Ed25519 identity keys for signing
"cryptography>=41.0",
# HTTP client for LLM backends
"httpx>=0.27",
# Colored terminal output (minimal, no TUI)
"termcolor>=2.0",
]
# =============================================================================
# OPTIONAL DEPENDENCIES — user opts in
# =============================================================================
[project.optional-dependencies]
# LLM provider SDKs (alternative to raw httpx calls)
anthropic = ["anthropic>=0.39"]
openai = ["openai>=1.0"]
# Tool backends
redis = ["redis>=5.0"] # Distributed key-value store
search = ["duckduckgo-search>=6.0"] # Web search tool
# Console example (optional, for interactive use)
console = ["prompt_toolkit>=3.0"]
# All LLM providers
llm = ["xml-pipeline[anthropic,openai]"]
# All tools
tools = ["xml-pipeline[redis,search]"]
# Everything (for local development)
all = ["xml-pipeline[llm,tools,console]"]
# Testing
test = [
"pytest>=7.0",
"pytest-asyncio>=0.23",
"pytest-cov>=4.0",
]
# Development (linting, type checking)
dev = [
"xml-pipeline[test,all]",
"mypy>=1.8",
"ruff>=0.1",
"types-PyYAML",
]
# =============================================================================
# CLI ENTRY POINTS
# =============================================================================
[project.scripts]
xml-pipeline = "xml_pipeline.cli:main"
xp = "xml_pipeline.cli:main" # Short alias
# =============================================================================
# PROJECT URLS
# =============================================================================
[project.urls]
Homepage = "https://github.com/yourorg/xml-pipeline"
Documentation = "https://xml-pipeline.org/docs"
Repository = "https://github.com/yourorg/xml-pipeline"
Issues = "https://github.com/yourorg/xml-pipeline/issues"
Changelog = "https://github.com/yourorg/xml-pipeline/blob/main/CHANGELOG.md"
# =============================================================================
# PACKAGE DISCOVERY
# =============================================================================
[tool.setuptools.packages.find]
where = ["."]
include = [
"xml_pipeline*",
"third_party*",
"examples*",
]
exclude = [
"tests*",
"docs*",
]
[tool.setuptools.package-data]
xml_pipeline = [
"schema/*.xsd",
"prompts/*.txt",
]
# =============================================================================
# PYTEST
# =============================================================================
[tool.pytest.ini_options]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "function"
testpaths = ["tests"]
python_files = ["test_*.py"]
norecursedirs = [".git", "__pycache__", "*.egg-info", "build", "dist"]
markers = [
"slow: marks tests as slow (deselect with '-m \"not slow\"')",
"integration: marks tests requiring external services",
]
# =============================================================================
# RUFF (linting)
# =============================================================================
[tool.ruff]
line-length = 100
target-version = "py311"
[tool.ruff.lint]
select = [
"E", # pycodestyle errors
"F", # pyflakes
"I", # isort
"N", # pep8-naming
"W", # pycodestyle warnings
"UP", # pyupgrade
"B", # flake8-bugbear
"C4", # flake8-comprehensions
"SIM", # flake8-simplify
]
ignore = [
"E501", # line too long (handled by formatter)
"B008", # function call in default argument
]
[tool.ruff.lint.isort]
known-first-party = ["xml_pipeline", "third_party"]
# =============================================================================
# MYPY (type checking)
# =============================================================================
[tool.mypy]
python_version = "3.11"
warn_return_any = true
warn_unused_ignores = true
disallow_untyped_defs = true
strict_optional = true
ignore_missing_imports = true
[[tool.mypy.overrides]]
module = "third_party.*"
ignore_errors = true
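With the optional dependency groups above, consumers opt into only what they need. Assuming the package is published under the name `xml-pipeline`, typical installs look like this (illustrative commands, not part of the repo):

```shell
# Core library only (lxml, aiostream, pyyaml, cryptography, httpx, ...)
pip install xml-pipeline

# With the interactive console example (pulls in prompt_toolkit)
pip install "xml-pipeline[console]"

# All LLM provider SDKs, or everything for local development
pip install "xml-pipeline[llm]"
pip install "xml-pipeline[all]"
```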


@ -22,8 +22,8 @@ import asyncio
import sys
from pathlib import Path
from xml_pipeline.message_bus import bootstrap
from xml_pipeline.console.console_registry import set_console
async def run_organism(config_path: str = "config/organism.yaml", use_simple: bool = False):
@ -34,7 +34,7 @@ async def run_organism(config_path: str = "config/organism.yaml", use_simple: bo
if use_simple:
# Use old SecureConsole for compatibility
from xml_pipeline.console import SecureConsole
console = SecureConsole(pump)
if not await console.authenticate():
print("Authentication failed.")
@ -54,7 +54,7 @@ async def run_organism(config_path: str = "config/organism.yaml", use_simple: bo
print("Goodbye!")
else:
# Use new TUI console
from xml_pipeline.console.tui_console import TUIConsole
console = TUIConsole(pump)
set_console(console) # Register for handlers to find


@ -1,6 +1,6 @@
```
xml-pipeline/
├── xml_pipeline/
│ ├── auth/
│ │ ├── __init__.py
│ │ └── totp.py
@ -47,7 +47,7 @@ xml-pipeline/
│ │ ├── __init__.py
│ │ └── message.py
│ ├── __init__.py
│ ├── xml_pipeline.py
│ ├── main.py
│ └── xml_listener.py
├── docs/


@ -13,7 +13,7 @@ import pytest
import uuid
from dataclasses import dataclass, FrozenInstanceError
from xml_pipeline.memory.context_buffer import (
ContextBuffer,
ThreadContext,
BufferSlot,
@ -329,9 +329,9 @@ class TestPumpIntegration:
async def test_buffer_records_messages_during_flow(self):
"""Context buffer should record messages as they flow through pump."""
from unittest.mock import AsyncMock, patch
from xml_pipeline.message_bus.stream_pump import StreamPump, ListenerConfig, OrganismConfig
from xml_pipeline.message_bus.message_state import HandlerResponse
from xml_pipeline.llm.backend import LLMResponse
# Import handlers
from handlers.hello import Greeting, GreetingResponse, handle_greeting, handle_shout
@ -378,7 +378,7 @@ class TestPumpIntegration:
pass
pump._reinject_responses = noop_reinject
with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_llm)):
# Create envelope for Greeting
thread_id = str(uuid.uuid4())
envelope = f"""<message xmlns="https://xml-pipeline.org/ns/envelope/v1">


@ -16,14 +16,14 @@ from dataclasses import dataclass
from lxml import etree
# Import the message state
from xml_pipeline.message_bus.message_state import MessageState, HandlerMetadata
# Import individual steps
from xml_pipeline.message_bus.steps.repair import repair_step
from xml_pipeline.message_bus.steps.c14n import c14n_step
from xml_pipeline.message_bus.steps.envelope_validation import envelope_validation_step
from xml_pipeline.message_bus.steps.payload_extraction import payload_extraction_step
from xml_pipeline.message_bus.steps.thread_assignment import thread_assignment_step
# Check for optional dependencies
try:
@ -39,8 +39,8 @@ requires_aiostream = pytest.mark.skipif(
# Check for stream_pump dependencies
try:
from xml_pipeline.message_bus.stream_pump import StreamPump, Listener
from xml_pipeline.message_bus.steps.routing_resolution import make_routing_step
HAS_STREAM_PUMP = True
except ImportError:
HAS_STREAM_PUMP = False
@ -434,7 +434,7 @@ class TestMultiPayloadExtraction:
@pytest.mark.asyncio
async def test_single_payload_yields_one(self):
"""Single payload should yield one state."""
from xml_pipeline.message_bus.stream_pump import extract_payloads
state = MessageState(
raw_bytes=b"<result>42</result>",
@ -452,7 +452,7 @@ class TestMultiPayloadExtraction:
@pytest.mark.asyncio
async def test_multiple_payloads_yields_many(self, multi_payload_response):
"""Multiple payloads should yield multiple states."""
from xml_pipeline.message_bus.stream_pump import extract_payloads
state = MessageState(
raw_bytes=multi_payload_response,
@ -471,7 +471,7 @@ class TestMultiPayloadExtraction:
@pytest.mark.asyncio
async def test_empty_response_yields_original(self):
"""Empty response should yield original state."""
from xml_pipeline.message_bus.stream_pump import extract_payloads
state = MessageState(
raw_bytes=b"",
@ -487,7 +487,7 @@ class TestMultiPayloadExtraction:
@pytest.mark.asyncio
async def test_preserves_metadata(self):
"""Extracted payloads should preserve metadata."""
from xml_pipeline.message_bus.stream_pump import extract_payloads
state = MessageState(
raw_bytes=b"<a/><b/>",
@ -537,8 +537,8 @@ class TestStepFactories:
@pytest.mark.asyncio
async def test_routing_factory(self):
"""Routing step should use injected routing table."""
from xml_pipeline.message_bus.steps.routing_resolution import make_routing_step
from xml_pipeline.message_bus.stream_pump import Listener
# Create mock listener
mock_listener = Listener(


@ -12,8 +12,8 @@ import asyncio
import uuid
from unittest.mock import AsyncMock, patch
from xml_pipeline.message_bus import StreamPump, bootstrap, MessageState
from xml_pipeline.message_bus.stream_pump import ConfigLoader, ListenerConfig, OrganismConfig, Listener
from handlers.hello import Greeting, GreetingResponse, handle_greeting, handle_shout
ENVELOPE_NS = "https://xml-pipeline.org/ns/envelope/v1"
@ -148,7 +148,7 @@ class TestFullPipelineFlow:
original_handler = pump.listeners["greeter"].handler
# Mock the LLM call since we don't have a real API key in tests
from xml_pipeline.llm.backend import LLMResponse
mock_response = LLMResponse(
content="Hello, World!",
@ -164,7 +164,7 @@ class TestFullPipelineFlow:
pump.listeners["greeter"].handler = tracking_handler
with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_response)):
# Create and inject a Greeting message
thread_id = str(uuid.uuid4())
envelope = make_envelope(
@ -236,7 +236,7 @@ class TestFullPipelineFlow:
pump._reinject_responses = capture_reinject
# Mock the LLM call since we don't have a real API key in tests
from xml_pipeline.llm.backend import LLMResponse
mock_response = LLMResponse(
content="Hello, Alice!",
@ -245,7 +245,7 @@ class TestFullPipelineFlow:
finish_reason="stop",
)
with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_response)):
# Inject a Greeting
thread_id = str(uuid.uuid4())
envelope = make_envelope(
@ -403,8 +403,8 @@ class TestThreadRoutingFlow:
from handlers.console import ConsoleInput, ConsolePrompt, ShoutedResponse
from handlers.console import handle_console_input, handle_shouted_response
from handlers.hello import Greeting, GreetingResponse, handle_greeting, handle_shout
from xml_pipeline.llm.backend import LLMResponse
from xml_pipeline.message_bus.thread_registry import get_registry
# Create pump with full routing chain (but no console - it blocks on stdin)
config = OrganismConfig(name="thread-routing-test")
@ -498,7 +498,7 @@ class TestThreadRoutingFlow:
pump._reinject_responses = capture_reinject
with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_llm)):
# Inject ConsoleInput (simulating: user typed "@greeter TestUser")
# Note: xmlify converts field names to PascalCase for XML elements
thread_id = str(uuid.uuid4())
@ -573,8 +573,8 @@ class TestThreadRoutingFlow:
from handlers.console import ConsoleInput, ShoutedResponse
from handlers.console import handle_console_input, handle_shouted_response
from handlers.hello import Greeting, GreetingResponse, handle_greeting, handle_shout
from xml_pipeline.llm.backend import LLMResponse
from xml_pipeline.message_bus.thread_registry import ThreadRegistry
# Use a fresh registry for this test
test_registry = ThreadRegistry()
@ -584,8 +584,8 @@ class TestThreadRoutingFlow:
pump = StreamPump(config)
# Patch get_registry to use our test registry
with patch('xml_pipeline.message_bus.stream_pump.get_registry', return_value=test_registry):
with patch('xml_pipeline.message_bus.thread_registry.get_registry', return_value=test_registry):
# Register handlers
pump.register_listener(ListenerConfig(
name="console-router",
@ -650,7 +650,7 @@ class TestThreadRoutingFlow:
finish_reason="stop",
)
with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_llm)):
# Inject initial message
thread_id = str(uuid.uuid4())
envelope = make_envelope(


@ -13,10 +13,10 @@ import asyncio
import uuid
from unittest.mock import AsyncMock, patch
from xml_pipeline.message_bus.todo_registry import TodoRegistry, TodoWatcher, get_todo_registry
from xml_pipeline.message_bus.stream_pump import StreamPump, ListenerConfig, OrganismConfig
from xml_pipeline.message_bus.message_state import HandlerMetadata, HandlerResponse
from xml_pipeline.primitives.todo import (
TodoUntil, TodoComplete, TodoRegistered, TodoClosed,
handle_todo_until, handle_todo_complete,
)
@ -278,7 +278,7 @@ class TestTodoIntegration:
async def test_todo_nudge_appears_in_metadata(self):
"""Raised eyebrows should appear in handler metadata."""
from handlers.hello import Greeting, GreetingResponse, handle_greeting
from xml_pipeline.llm.backend import LLMResponse
# Clear registries
todo_registry = get_todo_registry()
@ -325,7 +325,7 @@ class TestTodoIntegration:
pump.listeners["greeter"].handler = capturing_handler
# Create and inject a message
from xml_pipeline.message_bus.message_state import MessageState
state = MessageState(
payload=Greeting(name="Test"),
@ -383,7 +383,7 @@ class TestTodoIntegration:
assert watcher.eyebrow_raised is False
# Dispatch a ShoutedResponse message
from xml_pipeline.message_bus.message_state import MessageState
state = MessageState(
payload=ShoutedResponse(message="HELLO!"),
@ -411,7 +411,7 @@ class TestGreeterTodoFlow:
"""
from handlers.hello import Greeting, GreetingResponse, handle_greeting
from handlers.console import ShoutedResponse
from xml_pipeline.llm.backend import LLMResponse
# Clear registry
todo_registry = get_todo_registry()
@ -427,7 +427,7 @@ class TestGreeterTodoFlow:
finish_reason="stop",
)
with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_llm)):
# Call greeter handler
metadata = HandlerMetadata(
thread_id=thread_id,
@ -466,7 +466,7 @@ class TestGreeterTodoFlow:
When greeter is called again with raised todos, it should close them.
"""
from handlers.hello import Greeting, GreetingResponse, handle_greeting
from xml_pipeline.llm.backend import LLMResponse
# Clear registry
todo_registry = get_todo_registry()
@ -497,7 +497,7 @@ class TestGreeterTodoFlow:
raised = todo_registry.get_raised_for(thread_id, "greeter")
nudge = todo_registry.format_nudge(raised)
with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_llm)):
# Call greeter with the nudge
metadata = HandlerMetadata(
thread_id=thread_id,


@ -16,8 +16,8 @@ from pathlib import Path
def cmd_run(args: argparse.Namespace) -> int:
"""Run an organism from config."""
from xml_pipeline.config.loader import load_config
from xml_pipeline.message_bus import bootstrap
config_path = Path(args.config)
if not config_path.exists():
@ -38,7 +38,7 @@ def cmd_run(args: argparse.Namespace) -> int:
def cmd_init(args: argparse.Namespace) -> int:
"""Initialize a new organism config."""
from xml_pipeline.config.template import create_organism_template
name = args.name or "my-organism"
output = Path(args.output or f"{name}.yaml")
@ -59,7 +59,7 @@ def cmd_init(args: argparse.Namespace) -> int:
def cmd_check(args: argparse.Namespace) -> int:
"""Validate config without running."""
from xml_pipeline.config.loader import load_config, ConfigError
config_path = Path(args.config)
if not config_path.exists():
@ -73,7 +73,7 @@ def cmd_check(args: argparse.Namespace) -> int:
print(f" LLM backends: {len(config.llm_backends)}")
# Check optional features
from xml_pipeline.config.features import check_features
features = check_features(config)
if features.missing:
print(f"\nOptional features needed:")
@ -88,8 +88,8 @@ def cmd_check(args: argparse.Namespace) -> int:
def cmd_version(args: argparse.Namespace) -> int:
"""Show version and feature info."""
from xml_pipeline import __version__
from xml_pipeline.config.features import get_available_features
print(f"xml-pipeline {__version__}")
print()
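The subcommands above wire into the console scripts declared in pyproject.toml (`xml-pipeline` and the short alias `xp`). A typical session might look like this; the exact flag names are inferred from the argparse handlers, so treat them as a sketch:

```shell
# Scaffold a new organism config (writes my-organism.yaml)
xp init --name my-organism

# Validate the config and report any missing optional features
xp check my-organism.yaml

# Run it (long form: xml-pipeline run my-organism.yaml)
xp run my-organism.yaml
```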


@ -0,0 +1 @@
<!-- Signed boot-time configuration (LLM pools, initial listeners, etc.) will go here -->


@ -0,0 +1,17 @@
"""
Listener configuration management.
Per-listener YAML configuration files stored in ~/.xml-pipeline/listeners/
"""
from .store import (
ListenerConfigStore,
get_listener_config_store,
LISTENERS_DIR,
)
__all__ = [
"ListenerConfigStore",
"get_listener_config_store",
"LISTENERS_DIR",
]


@ -0,0 +1,270 @@
"""
Listener configuration storage.
Each listener can have its own YAML config file in ~/.xml-pipeline/listeners/
containing listener-specific settings (handler, peers, prompt, etc.)
The main organism.yaml defines which listeners to load and can reference
these individual files or inline the config.
"""
from __future__ import annotations
from dataclasses import dataclass, field, asdict
from pathlib import Path
from typing import Optional, Any
import yaml
CONFIG_DIR = Path.home() / ".xml-pipeline"
LISTENERS_DIR = CONFIG_DIR / "listeners"
@dataclass
class ListenerConfigData:
"""
Configuration for an individual listener.
Stored in ~/.xml-pipeline/listeners/{name}.yaml
"""
name: str
# Description (required for tool prompt generation)
description: str = ""
# Type flags
agent: bool = False
tool: bool = False
gateway: bool = False
# Handler configuration
handler: Optional[str] = None
payload_class: Optional[str] = None
# Agent configuration
prompt: Optional[str] = None
model: Optional[str] = None
# Routing
peers: list[str] = field(default_factory=list)
# Tool permissions (for agents)
allowed_tools: list[str] = field(default_factory=list)
blocked_tools: list[str] = field(default_factory=list)
# Custom metadata
metadata: dict[str, Any] = field(default_factory=dict)
def to_dict(self) -> dict[str, Any]:
"""Convert to dict for YAML serialization."""
d = asdict(self)
# Remove None values and empty lists/dicts for cleaner YAML
result = {}
for key, value in d.items():
if value is None:
continue
if isinstance(value, list) and not value:
continue
if isinstance(value, dict) and not value:
continue
result[key] = value
return result
@classmethod
def from_dict(cls, data: dict[str, Any]) -> "ListenerConfigData":
"""Create from dict (loaded from YAML)."""
return cls(
name=data.get("name", ""),
description=data.get("description", ""),
agent=data.get("agent", False),
tool=data.get("tool", False),
gateway=data.get("gateway", False),
handler=data.get("handler"),
payload_class=data.get("payload_class"),
prompt=data.get("prompt"),
model=data.get("model"),
peers=data.get("peers", []),
allowed_tools=data.get("allowed_tools", []),
blocked_tools=data.get("blocked_tools", []),
metadata=data.get("metadata", {}),
)
def to_yaml(self) -> str:
"""Serialize to YAML string."""
return yaml.dump(
self.to_dict(),
default_flow_style=False,
sort_keys=False,
allow_unicode=True,
)
@classmethod
def from_yaml(cls, yaml_str: str) -> "ListenerConfigData":
"""Parse from YAML string."""
data = yaml.safe_load(yaml_str) or {}
return cls.from_dict(data)
class ListenerConfigStore:
"""
Manages listener configuration files.
Usage:
store = ListenerConfigStore()
# Load or create config
config = store.get("greeter")
# Modify and save
config.prompt = "You are a friendly greeter."
store.save(config)
# Get raw YAML for editing
yaml_content = store.load_yaml("greeter")
# Save edited YAML
store.save_yaml("greeter", yaml_content)
"""
def __init__(self, listeners_dir: Path = LISTENERS_DIR):
self.listeners_dir = listeners_dir
self._ensure_dir()
def _ensure_dir(self) -> None:
"""Create listeners directory if needed."""
self.listeners_dir.mkdir(parents=True, exist_ok=True)
def path_for(self, name: str) -> Path:
"""Get path to listener's config file."""
return self.listeners_dir / f"{name}.yaml"
def exists(self, name: str) -> bool:
"""Check if listener config exists."""
return self.path_for(name).exists()
def get(self, name: str) -> ListenerConfigData:
        """
        Load the listener config, creating a default if it does not exist.
        """
path = self.path_for(name)
if path.exists():
with open(path) as f:
data = yaml.safe_load(f) or {}
# Ensure name is set
data["name"] = name
return ListenerConfigData.from_dict(data)
# Return default config (not saved yet)
return ListenerConfigData(name=name)
def save(self, config: ListenerConfigData) -> Path:
"""
Save listener config to file.
Returns path to saved file.
"""
path = self.path_for(config.name)
with open(path, "w") as f:
yaml.dump(
config.to_dict(),
f,
default_flow_style=False,
sort_keys=False,
allow_unicode=True,
)
return path
def save_yaml(self, name: str, yaml_content: str) -> Path:
"""
Save raw YAML content for a listener.
Used when saving from editor.
"""
path = self.path_for(name)
# Validate YAML before saving
yaml.safe_load(yaml_content) # Raises on invalid YAML
with open(path, "w") as f:
f.write(yaml_content)
return path
def load_yaml(self, name: str) -> str:
"""
Load raw YAML content for editing.
Returns default template if file doesn't exist.
"""
path = self.path_for(name)
if path.exists():
with open(path) as f:
return f.read()
# Return default template
return self._default_template(name)
def _default_template(self, name: str) -> str:
"""Generate default YAML template for new listener."""
return f"""# yaml-language-server: $schema=~/.xml-pipeline/schemas/listener.schema.json
# Listener configuration for: {name}
name: {name}
description: "Description of what this listener does"
# Listener type (set one to true)
agent: false # LLM-powered agent
tool: false # Simple tool/function
gateway: false # Federation gateway
# Handler configuration
handler: "handlers.{name}.handle_{name}"
payload_class: "handlers.{name}.{name.title()}Payload"
# Agent configuration (only if agent: true)
# prompt: |
# You are an AI assistant.
#
# Respond helpfully and concisely.
# model: default
# Routing - which listeners this can send to
peers: []
# Tool permissions (for agents)
# allowed_tools: []
# blocked_tools: []
# Custom metadata (available to handler)
# metadata: {{}}
"""
def list_listeners(self) -> list[str]:
"""List all configured listeners."""
return [p.stem for p in self.listeners_dir.glob("*.yaml")]
def delete(self, name: str) -> bool:
"""Delete listener config file."""
path = self.path_for(name)
if path.exists():
path.unlink()
return True
return False
# Global instance
_store: Optional[ListenerConfigStore] = None
def get_listener_config_store() -> ListenerConfigStore:
"""Get the global listener config store."""
global _store
if _store is None:
_store = ListenerConfigStore()
return _store
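The empty-value filtering in `to_dict()` is what keeps the per-listener YAML files minimal: `None` values and empty lists/dicts never reach disk. A self-contained sketch of that behavior, using a cut-down stand-in dataclass rather than the real `ListenerConfigData`:

```python
from dataclasses import dataclass, field, asdict
from typing import Any, Optional

@dataclass
class MiniListener:
    """Illustrative stand-in for ListenerConfigData (subset of fields)."""
    name: str
    prompt: Optional[str] = None
    peers: list[str] = field(default_factory=list)
    metadata: dict[str, Any] = field(default_factory=dict)

    def to_dict(self) -> dict[str, Any]:
        # Drop None values and empty containers, as the store does,
        # so the serialized YAML stays clean.
        result = {}
        for key, value in asdict(self).items():
            if value is None:
                continue
            if isinstance(value, (list, dict)) and not value:
                continue
            result[key] = value
        return result

cfg = MiniListener(name="greeter", peers=["shouter"])
print(cfg.to_dict())  # {'name': 'greeter', 'peers': ['shouter']}
```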


@ -0,0 +1,102 @@
"""
JSON Schema management for YAML Language Server.
Provides JSON schemas for organism.yaml and listener.yaml files,
enabling LSP-powered autocompletion and validation in the editor.
Schemas are written to ~/.xml-pipeline/schemas/ for yaml-language-server.
"""
from __future__ import annotations
import json
from pathlib import Path
from typing import Optional
from .organism import ORGANISM_SCHEMA
from .listener import LISTENER_SCHEMA
SCHEMA_DIR = Path.home() / ".xml-pipeline" / "schemas"
SCHEMA_FILES = {
"organism.schema.json": ORGANISM_SCHEMA,
"listener.schema.json": LISTENER_SCHEMA,
}
def ensure_schema_dir() -> Path:
"""Create schema directory if needed."""
SCHEMA_DIR.mkdir(parents=True, exist_ok=True)
return SCHEMA_DIR
def write_schemas() -> dict[str, Path]:
"""
Write all schemas to the schema directory.
Returns dict of schema_name -> path.
"""
ensure_schema_dir()
paths = {}
for name, schema in SCHEMA_FILES.items():
path = SCHEMA_DIR / name
with open(path, "w") as f:
json.dump(schema, f, indent=2)
paths[name] = path
return paths
def get_schema_path(schema_type: str) -> Optional[Path]:
"""
Get path to a schema file.
Args:
schema_type: "organism" or "listener"
Returns path if exists, None otherwise.
"""
filename = f"{schema_type}.schema.json"
path = SCHEMA_DIR / filename
if not path.exists():
# Write schemas if not present
write_schemas()
return path if path.exists() else None
def ensure_schemas() -> dict[str, Path]:
"""
Ensure all schemas are written and up to date.
Call this at startup to make sure schemas are available.
Returns dict of schema_name -> path.
"""
return write_schemas()
def get_schema_modeline(schema_type: str) -> str:
"""
Get the YAML modeline for a schema type.
Args:
schema_type: "organism" or "listener"
Returns modeline string like:
# yaml-language-server: $schema=~/.xml-pipeline/schemas/listener.schema.json
"""
return f"# yaml-language-server: $schema=~/.xml-pipeline/schemas/{schema_type}.schema.json"
__all__ = [
"ORGANISM_SCHEMA",
"LISTENER_SCHEMA",
"SCHEMA_DIR",
"ensure_schemas",
"get_schema_path",
"get_schema_modeline",
"write_schemas",
]
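The write-then-read flow above can be exercised against a temporary directory. This sketch mirrors `write_schemas()` with toy stand-ins for the schema dicts (paths and schema content are illustrative, not the shipped schemas):

```python
import json
import tempfile
from pathlib import Path

# Toy stand-ins for ORGANISM_SCHEMA / LISTENER_SCHEMA
SCHEMAS = {
    "organism.schema.json": {"title": "Organism Configuration", "type": "object"},
    "listener.schema.json": {"title": "Listener Configuration", "type": "object"},
}

def write_schemas(schema_dir: Path) -> dict[str, Path]:
    """Write every schema as pretty-printed JSON; return name -> path."""
    schema_dir.mkdir(parents=True, exist_ok=True)
    paths = {}
    for name, schema in SCHEMAS.items():
        path = schema_dir / name
        path.write_text(json.dumps(schema, indent=2))
        paths[name] = path
    return paths

with tempfile.TemporaryDirectory() as tmp:
    paths = write_schemas(Path(tmp) / "schemas")
    loaded = json.loads(paths["listener.schema.json"].read_text())
    print(loaded["title"])  # Listener Configuration
```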


@ -0,0 +1,186 @@
"""
JSON Schema for listener.yaml files.
This schema enables yaml-language-server to provide:
- Autocompletion for listener configuration fields
- Validation of field types
- Documentation on hover
"""
LISTENER_SCHEMA = {
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://xml-pipeline.org/schemas/listener.schema.json",
"title": "Listener Configuration",
"description": "Configuration for an individual listener in xml-pipeline",
"type": "object",
"required": ["name"],
"additionalProperties": False,
"properties": {
"name": {
"type": "string",
"description": "Unique listener name. Becomes part of the XML root tag.",
"pattern": "^[a-zA-Z][a-zA-Z0-9_.-]*$",
"examples": ["greeter", "calculator.add", "search.google"],
},
        "description": {
            "type": "string",
            "description": "Human-readable description. Required for tool prompt generation: it becomes the lead line of the auto-generated tool description.",
            "examples": ["Greets users warmly", "Adds two integers and returns their sum"],
        },
"agent": {
"type": "boolean",
"description": "Mark as LLM-powered agent. Agents get unique root tags (enabling blind self-iteration) and receive own_name in metadata.",
"default": False,
},
"tool": {
"type": "boolean",
"description": "Mark as simple tool/function. Tools are stateless handlers that process requests and return results.",
"default": False,
},
"gateway": {
"type": "boolean",
"description": "Mark as federation gateway. Gateways forward messages to remote organisms.",
"default": False,
},
"broadcast": {
"type": "boolean",
"description": "Allow sharing root tag with other listeners. Enables parallel handling of the same message type.",
"default": False,
},
"handler": {
"type": "string",
"description": "Python import path to the async handler function.",
"pattern": "^[a-zA-Z_][a-zA-Z0-9_.]*$",
"examples": [
"handlers.hello.handle_greeting",
"xml_pipeline.tools.calculate.calculate_handler",
],
},
"payload_class": {
"type": "string",
"description": "Python import path to the @xmlify dataclass that defines the message schema.",
"pattern": "^[a-zA-Z_][a-zA-Z0-9_.]*$",
"examples": [
"handlers.hello.Greeting",
"xml_pipeline.tools.calculate.CalculatePayload",
],
},
"prompt": {
"type": "string",
"description": "System prompt for LLM agents. Injected as the first system message. Can use YAML multiline syntax.",
"examples": [
"You are a friendly greeter. Keep responses short and enthusiastic.",
],
},
"model": {
"type": "string",
"description": "LLM model to use for this agent. Overrides the default model from LLM router.",
"examples": ["grok-4.1", "claude-sonnet-4", "gpt-4o", "llama3"],
},
"peers": {
"type": "array",
"description": "List of listener names this listener can send messages to. Enforced by the message pump.",
"items": {
"type": "string",
"pattern": "^[a-zA-Z][a-zA-Z0-9_.-]*$",
},
"uniqueItems": True,
"examples": [["shouter", "logger"], ["calculator.add", "calculator.multiply"]],
},
"allowed_tools": {
"type": "array",
"description": "Explicitly allowed native tools. If set, only these tools are available.",
"items": {
"type": "string",
"enum": [
"calculate",
"fetch",
"files",
"shell",
"search",
"keyvalue",
"convert",
"librarian",
],
},
"uniqueItems": True,
},
"blocked_tools": {
"type": "array",
"description": "Explicitly blocked native tools. These tools are never available.",
"items": {
"type": "string",
"enum": [
"calculate",
"fetch",
"files",
"shell",
"search",
"keyvalue",
"convert",
"librarian",
],
},
"uniqueItems": True,
},
"temperature": {
"type": "number",
"description": "LLM temperature setting. Higher = more creative, lower = more focused.",
"default": 0.7,
"minimum": 0.0,
"maximum": 2.0,
},
"max_tokens": {
"type": "integer",
"description": "Maximum tokens in LLM response.",
"default": 4096,
"minimum": 1,
},
"verbose": {
"type": "boolean",
"description": "Enable verbose logging for this listener.",
"default": False,
},
"confirm_actions": {
"type": "boolean",
"description": "Require confirmation before tool calls.",
"default": False,
},
"metadata": {
"type": "object",
"description": "Custom metadata available to the handler via metadata.custom.",
"additionalProperties": True,
},
},
"if": {
"properties": {
"agent": {"const": True}
}
},
"then": {
"required": ["prompt"],
"properties": {
"description": {
"description": "Description is recommended for agents to improve tool generation."
}
}
},
"examples": [
{
"name": "greeter",
"description": "Greeting agent",
"agent": True,
"handler": "handlers.hello.handle_greeting",
"payload_class": "handlers.hello.Greeting",
"prompt": "You are a friendly greeter. Respond warmly and briefly.",
"peers": ["shouter"],
},
{
"name": "calculator.add",
"description": "Adds two integers and returns their sum",
"tool": True,
"handler": "handlers.calculator.add_handler",
"payload_class": "handlers.calculator.AddPayload",
},
],
}
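The schema's `if`/`then` block above encodes one cross-field rule: listeners declared with `agent: true` must also carry a `prompt`. A minimal stdlib-only sketch of that same rule as a plain Python check (the `check_listener` helper is hypothetical, written only to illustrate the conditional; it is not part of xml-pipeline):

```python
# Hypothetical mirror of the schema's if/then: agent listeners need a prompt.
def check_listener(cfg: dict) -> list[str]:
    errors = []
    if "name" not in cfg:
        errors.append("name is required")
    if cfg.get("agent") and not cfg.get("prompt"):
        errors.append("agent listeners require a prompt")
    return errors

# An agent without a prompt fails; adding the prompt fixes it.
assert check_listener({"name": "greeter", "agent": True}) == [
    "agent listeners require a prompt"
]
assert check_listener({
    "name": "greeter",
    "agent": True,
    "prompt": "Be friendly.",
}) == []
```

In the real schema this is enforced declaratively by a validator (e.g. yaml-language-server), so misconfigured agents are flagged while editing rather than at startup.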

View file


@ -0,0 +1,381 @@
"""
JSON Schema for organism.yaml files.
This schema enables yaml-language-server to provide:
- Autocompletion for fields
- Validation of field types
- Documentation on hover
"""
ORGANISM_SCHEMA = {
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://xml-pipeline.org/schemas/organism.schema.json",
"title": "Organism Configuration",
"description": "Configuration for an xml-pipeline organism",
"type": "object",
"required": ["organism"],
"additionalProperties": False,
"properties": {
"organism": {
"type": "object",
"description": "Core organism settings",
"required": ["name"],
"additionalProperties": False,
"properties": {
"name": {
"type": "string",
"description": "Unique name for this organism",
"pattern": "^[a-zA-Z][a-zA-Z0-9_-]*$",
},
"port": {
"type": "integer",
"description": "WebSocket server port",
"default": 8765,
"minimum": 1,
"maximum": 65535,
},
"version": {
"type": "string",
"description": "Organism version (semver)",
"default": "0.1.0",
},
"description": {
"type": "string",
"description": "Human-readable description",
},
"identity": {
"type": "string",
"description": "Path to Ed25519 private key for signing",
},
"thread_scheduling": {
"type": "string",
"description": "Thread execution policy",
"enum": ["breadth-first", "depth-first"],
"default": "breadth-first",
},
"max_concurrent_pipelines": {
"type": "integer",
"description": "Maximum concurrent pipeline executions",
"default": 100,
"minimum": 1,
},
"max_concurrent_handlers": {
"type": "integer",
"description": "Maximum concurrent handler executions",
"default": 50,
"minimum": 1,
},
"max_concurrent_per_agent": {
"type": "integer",
"description": "Maximum concurrent requests per agent",
"default": 5,
"minimum": 1,
},
},
},
"tls": {
"type": "object",
"description": "TLS configuration for WebSocket server",
"properties": {
"cert": {
"type": "string",
"description": "Path to certificate file (PEM)",
},
"key": {
"type": "string",
"description": "Path to private key file (PEM)",
},
},
},
"oob": {
"type": "object",
"description": "Out-of-band privileged channel configuration",
"properties": {
"enabled": {
"type": "boolean",
"default": True,
},
"bind": {
"type": "string",
"description": "Bind address (localhost only for security)",
"default": "127.0.0.1",
},
"port": {
"type": "integer",
"description": "OOB channel port",
"minimum": 1,
"maximum": 65535,
},
"unix_socket": {
"type": "string",
"description": "Unix socket path (alternative to port)",
},
},
},
"meta": {
"type": "object",
"description": "Introspection settings",
"properties": {
"enabled": {
"type": "boolean",
"default": True,
},
"allow_list_capabilities": {
"type": "boolean",
"default": True,
},
"allow_schema_requests": {
"type": "string",
"enum": ["admin", "authenticated", "none"],
"default": "admin",
},
"allow_example_requests": {
"type": "string",
"enum": ["admin", "authenticated", "none"],
"default": "admin",
},
"allow_prompt_requests": {
"type": "string",
"enum": ["admin", "authenticated", "none"],
"default": "admin",
},
"allow_remote": {
"type": "boolean",
"description": "Allow federation peers to query meta",
"default": False,
},
},
},
"listeners": {
"oneOf": [
{
"type": "array",
"description": "Inline listener configurations (legacy format)",
"items": {
"$ref": "#/$defs/listener",
},
},
{
"type": "object",
"description": "Split listener configuration",
"properties": {
"directory": {
"type": "string",
"description": "Path to listeners directory",
"default": "~/.xml-pipeline/listeners",
},
"include": {
"type": "array",
"description": "Glob patterns to include",
"items": {"type": "string"},
"default": ["*.yaml"],
},
},
},
],
},
"gateways": {
"type": "array",
"description": "Federation gateway configurations",
"items": {
"type": "object",
"required": ["name"],
"properties": {
"name": {
"type": "string",
"description": "Gateway identifier",
},
"remote_url": {
"type": "string",
"description": "Remote organism WebSocket URL",
"format": "uri",
},
"trusted_identity": {
"type": "string",
"description": "Path to trusted public key",
},
"description": {
"type": "string",
},
},
},
},
"llm": {
"type": "object",
"description": "LLM router configuration",
"properties": {
"strategy": {
"type": "string",
"description": "Backend selection strategy",
"enum": ["failover", "round-robin", "least-loaded"],
"default": "failover",
},
"retries": {
"type": "integer",
"description": "Max retry attempts per request",
"default": 3,
"minimum": 0,
},
"retry_base_delay": {
"type": "number",
"description": "Base delay for exponential backoff (seconds)",
"default": 1.0,
},
"retry_max_delay": {
"type": "number",
"description": "Maximum delay between retries (seconds)",
"default": 60.0,
},
"backends": {
"type": "array",
"description": "LLM backend configurations",
"items": {
"$ref": "#/$defs/llmBackend",
},
},
},
},
"server": {
"type": "object",
"description": "WebSocket server configuration",
"properties": {
"enabled": {
"type": "boolean",
"default": False,
},
"host": {
"type": "string",
"default": "127.0.0.1",
},
"port": {
"type": "integer",
"default": 8765,
"minimum": 1,
"maximum": 65535,
},
},
},
"auth": {
"type": "object",
"description": "Authentication configuration",
"properties": {
"enabled": {
"type": "boolean",
"default": False,
},
"totp_secret_env": {
"type": "string",
"description": "Environment variable containing TOTP secret",
"default": "ORGANISM_TOTP_SECRET",
},
},
},
},
"$defs": {
"listener": {
"type": "object",
"required": ["name"],
"properties": {
"name": {
"type": "string",
"description": "Unique listener name",
"pattern": "^[a-zA-Z][a-zA-Z0-9_.-]*$",
},
"description": {
"type": "string",
"description": "Human-readable description (required for tool prompts)",
},
"agent": {
"type": "boolean",
"description": "LLM-powered agent (requires unique root tag)",
"default": False,
},
"tool": {
"type": "boolean",
"description": "Simple tool/function handler",
"default": False,
},
"gateway": {
"type": "boolean",
"description": "Federation gateway",
"default": False,
},
"broadcast": {
"type": "boolean",
"description": "Allow shared root tag with other listeners",
"default": False,
},
"handler": {
"type": "string",
"description": "Python import path to handler function",
},
"payload_class": {
"type": "string",
"description": "Python import path to @xmlify dataclass",
},
"prompt": {
"type": "string",
"description": "System prompt for LLM agent",
},
"model": {
"type": "string",
"description": "LLM model to use",
},
"peers": {
"type": "array",
"description": "Allowed message targets",
"items": {"type": "string"},
},
"allowed_tools": {
"type": "array",
"description": "Explicitly allowed tools",
"items": {"type": "string"},
},
"blocked_tools": {
"type": "array",
"description": "Explicitly blocked tools",
"items": {"type": "string"},
},
},
},
"llmBackend": {
"type": "object",
"required": ["provider"],
"properties": {
"provider": {
"type": "string",
"description": "LLM provider type",
"enum": ["xai", "anthropic", "openai", "ollama"],
},
"api_key_env": {
"type": "string",
"description": "Environment variable containing API key",
},
"priority": {
"type": "integer",
"description": "Priority for failover (lower = preferred)",
"default": 0,
},
"rate_limit_tpm": {
"type": "integer",
"description": "Tokens per minute limit",
},
"max_concurrent": {
"type": "integer",
"description": "Maximum concurrent requests",
"default": 20,
},
"base_url": {
"type": "string",
"description": "Override default API endpoint",
"format": "uri",
},
"supported_models": {
"type": "array",
"description": "Model names this backend handles (Ollama)",
"items": {"type": "string"},
},
},
},
},
}
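Several `name` fields in the schema are constrained by regex patterns. A quick stdlib-only sanity check of the organism-name pattern (the sample names are invented for illustration):

```python
import re

# Same pattern as organism.properties.name in ORGANISM_SCHEMA:
# must start with a letter; then letters, digits, underscore, hyphen.
NAME_PATTERN = re.compile(r"^[a-zA-Z][a-zA-Z0-9_-]*$")

assert NAME_PATTERN.match("demo-organism")
assert not NAME_PATTERN.match("9lives")    # cannot start with a digit
assert not NAME_PATTERN.match("bad.name")  # dots not allowed in organism names
```

Note that listener names use a slightly looser pattern (`^[a-zA-Z][a-zA-Z0-9_.-]*$`) that also permits dots, which is what allows namespaced tools like `calculator.add`.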

View file


@ -0,0 +1,322 @@
"""
Split configuration loader.
Loads organism configuration from multiple files:
- organism.yaml: Core settings (name, port, llm backends)
- listeners/*.yaml: Per-listener configurations
This enables:
- Cleaner separation of concerns
- Per-listener LSP-assisted editing
- Modular configuration management
"""
from __future__ import annotations
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Optional
import glob as glob_module
import yaml
from .listeners.store import ListenerConfigStore, LISTENERS_DIR
class SplitConfigError(Exception):
"""Configuration loading/validation error."""
pass
@dataclass
class OrganismCoreConfig:
"""
Core organism configuration (from organism.yaml).
Does not include listener details - those come from split files.
"""
name: str
port: int = 8765
version: str = "0.1.0"
description: str = ""
# Thread scheduling
thread_scheduling: str = "breadth-first"
# Concurrency limits
max_concurrent_pipelines: int = 100
max_concurrent_handlers: int = 50
max_concurrent_per_agent: int = 5
# Listeners directory configuration
listeners_directory: Optional[str] = None
listeners_include: list[str] = field(default_factory=lambda: ["*.yaml"])
# LLM configuration (kept in organism.yaml)
llm: dict[str, Any] = field(default_factory=dict)
# Server configuration
server: dict[str, Any] = field(default_factory=dict)
# Auth configuration
auth: dict[str, Any] = field(default_factory=dict)
# Meta configuration
meta: dict[str, Any] = field(default_factory=dict)
@dataclass
class SplitOrganismConfig:
"""
Complete organism configuration assembled from split files.
"""
# Core config from organism.yaml
core: OrganismCoreConfig
# Listener configs (from split files or inlined)
listeners: list[dict[str, Any]] = field(default_factory=list)
# Source paths for debugging
organism_path: Optional[Path] = None
listener_paths: list[Path] = field(default_factory=list)
def load_organism_yaml(path: Path) -> tuple[dict[str, Any], OrganismCoreConfig]:
"""
Load organism.yaml and extract core config.
Returns (raw_data, core_config) tuple.
"""
with open(path) as f:
raw = yaml.safe_load(f)
if not isinstance(raw, dict):
raise SplitConfigError(f"Config must be a YAML mapping, got {type(raw)}")
# Extract organism section
org_raw = raw.get("organism", {})
if not org_raw.get("name"):
raise SplitConfigError("organism.name is required")
# Extract listeners config
listeners_raw = raw.get("listeners", {})
if isinstance(listeners_raw, dict):
# New split format: listeners: { directory: ..., include: [...] }
listeners_dir = listeners_raw.get("directory")
listeners_include = listeners_raw.get("include", ["*.yaml"])
else:
# Legacy format: listeners is a list - no split loading
listeners_dir = None
listeners_include = ["*.yaml"]
core = OrganismCoreConfig(
name=org_raw["name"],
port=org_raw.get("port", 8765),
version=org_raw.get("version", "0.1.0"),
description=org_raw.get("description", ""),
thread_scheduling=org_raw.get("thread_scheduling", "breadth-first"),
max_concurrent_pipelines=org_raw.get("max_concurrent_pipelines", 100),
max_concurrent_handlers=org_raw.get("max_concurrent_handlers", 50),
max_concurrent_per_agent=org_raw.get("max_concurrent_per_agent", 5),
listeners_directory=listeners_dir,
listeners_include=listeners_include,
llm=raw.get("llm", {}),
server=raw.get("server", {}),
auth=raw.get("auth", {}),
meta=raw.get("meta", {}),
)
return raw, core
def load_listener_files(
directory: Path,
patterns: list[str],
) -> list[tuple[Path, dict[str, Any]]]:
"""
Load all listener YAML files matching patterns.
Returns list of (path, data) tuples.
"""
results = []
for pattern in patterns:
full_pattern = str(directory / pattern)
for filepath in glob_module.glob(full_pattern):
path = Path(filepath)
try:
with open(path) as f:
data = yaml.safe_load(f)
if isinstance(data, dict):
# Ensure name is set from filename if not in file
if "name" not in data:
data["name"] = path.stem
results.append((path, data))
except Exception as e:
raise SplitConfigError(
f"Failed to load listener file {path}: {e}"
)
return results
def resolve_listeners_directory(
config_dir: Optional[str],
organism_path: Optional[Path] = None,
) -> Path:
"""
Resolve the listeners directory path.
Handles:
- None -> default ~/.xml-pipeline/listeners
- Absolute path -> use as-is
- Relative path -> relative to organism.yaml location
- ~ expansion
"""
if config_dir is None:
return LISTENERS_DIR
# Expand user home
expanded = Path(config_dir).expanduser()
if expanded.is_absolute():
return expanded
# Relative to organism.yaml location
if organism_path is not None:
return organism_path.parent / expanded
return expanded
def load_split_config(organism_path: Path) -> SplitOrganismConfig:
"""
Load complete organism configuration from split files.
If organism.yaml has listeners as a dict with 'directory' key,
loads listener configs from that directory.
If listeners is a list (legacy format), uses those directly.
"""
raw, core = load_organism_yaml(organism_path)
listeners: list[dict[str, Any]] = []
listener_paths: list[Path] = []
listeners_raw = raw.get("listeners", {})
if isinstance(listeners_raw, list):
# Legacy format: inline listeners
listeners = listeners_raw
elif isinstance(listeners_raw, dict):
# Split format: load from directory
listeners_dir = resolve_listeners_directory(
core.listeners_directory,
organism_path,
)
if listeners_dir.exists():
loaded = load_listener_files(listeners_dir, core.listeners_include)
for path, data in loaded:
listeners.append(data)
listener_paths.append(path)
else:
raise SplitConfigError(
f"listeners must be a list or dict, got {type(listeners_raw)}"
)
return SplitOrganismConfig(
core=core,
listeners=listeners,
organism_path=organism_path,
listener_paths=listener_paths,
)
def get_organism_yaml_path() -> Optional[Path]:
"""
Get the default organism.yaml path.
Searches in order:
1. ~/.xml-pipeline/organism.yaml
2. ./organism.yaml
3. ./config/organism.yaml
"""
candidates = [
Path.home() / ".xml-pipeline" / "organism.yaml",
Path("organism.yaml"),
Path("config/organism.yaml"),
]
for path in candidates:
if path.exists():
return path
return None
def save_organism_yaml(config: OrganismCoreConfig, path: Path) -> None:
"""
Save organism core config to YAML file.
Preserves the split-file structure if listeners_directory is set.
"""
data: dict[str, Any] = {
"organism": {
"name": config.name,
"port": config.port,
"version": config.version,
}
}
if config.description:
data["organism"]["description"] = config.description
if config.thread_scheduling != "breadth-first":
data["organism"]["thread_scheduling"] = config.thread_scheduling
# Add listeners directory reference
if config.listeners_directory:
data["listeners"] = {
"directory": config.listeners_directory,
"include": config.listeners_include,
}
# Add other sections if non-empty
if config.llm:
data["llm"] = config.llm
if config.server:
data["server"] = config.server
if config.auth:
data["auth"] = config.auth
if config.meta:
data["meta"] = config.meta
with open(path, "w") as f:
yaml.dump(
data,
f,
default_flow_style=False,
sort_keys=False,
allow_unicode=True,
)
def load_organism_yaml_content(path: Path) -> str:
"""Load organism.yaml content as string for editing."""
with open(path) as f:
return f.read()
def save_organism_yaml_content(path: Path, content: str) -> None:
"""Save organism.yaml content from string."""
# Validate YAML before saving
yaml.safe_load(content)
with open(path, "w") as f:
f.write(content)
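The split layout the loader above expects can be sketched end to end with a throwaway temp directory. The organism name, handler path, and listener file here are invented for illustration, and the tiny inline loop only mirrors the name-from-filename fallback in `load_listener_files()`; it is not the real loader:

```python
import tempfile
from pathlib import Path

import yaml  # PyYAML, already a core dependency of xml-pipeline

root = Path(tempfile.mkdtemp())
(root / "listeners").mkdir()

# organism.yaml points at a listeners directory (split format).
(root / "organism.yaml").write_text(yaml.safe_dump({
    "organism": {"name": "demo", "port": 8765},
    "listeners": {"directory": "listeners", "include": ["*.yaml"]},
}))
# A listener file with no explicit name: loaders fall back to the stem.
(root / "listeners" / "greeter.yaml").write_text(yaml.safe_dump({
    "agent": True,
    "handler": "handlers.hello.handle_greeting",
}))

raw = yaml.safe_load((root / "organism.yaml").read_text())
listeners_dir = root / raw["listeners"]["directory"]
listeners = []
for path in sorted(listeners_dir.glob("*.yaml")):
    data = yaml.safe_load(path.read_text())
    data.setdefault("name", path.stem)  # mirrors load_listener_files()
    listeners.append(data)

assert [entry["name"] for entry in listeners] == ["greeter"]
```

This is the property the split format buys: each listener lives in its own YAML file, gets its name for free from the filename, and can be edited with per-file LSP assistance.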

View file


@ -6,7 +6,7 @@ Provides:
- ConsoleClient: Network client connecting to server with auth
"""
-from agentserver.console.secure_console import SecureConsole, PasswordManager
-from agentserver.console.client import ConsoleClient
+from xml_pipeline.console.secure_console import SecureConsole, PasswordManager
+from xml_pipeline.console.client import ConsoleClient
__all__ = ["SecureConsole", "PasswordManager", "ConsoleClient"]

View file


@ -13,7 +13,7 @@ from pathlib import Path
from typing import Optional, Tuple, TYPE_CHECKING
if TYPE_CHECKING:
-    from agentserver.console.lsp import YAMLLSPClient, ASLSClient
+    from xml_pipeline.console.lsp import YAMLLSPClient, ASLSClient
from typing import Union
LSPClientType = Union[YAMLLSPClient, ASLSClient]
@ -375,7 +375,7 @@ class LSPEditor:
# Try to get appropriate LSP client
try:
-        from agentserver.console.lsp import get_lsp_manager, LSPServerType
+        from xml_pipeline.console.lsp import get_lsp_manager, LSPServerType
manager = get_lsp_manager()
if self._lsp_type == "yaml":
@ -409,7 +409,7 @@ class LSPEditor:
if self._lsp_client:
await self._lsp_client.did_close(document_uri)
try:
-            from agentserver.console.lsp import get_lsp_manager, LSPServerType
+            from xml_pipeline.console.lsp import get_lsp_manager, LSPServerType
manager = get_lsp_manager()
if self._lsp_type == "yaml":
await manager.release_client(LSPServerType.YAML)

View file


@ -0,0 +1,63 @@
"""
LSP (Language Server Protocol) integration for the editor.
Provides:
- YAMLLSPClient: Wrapper for yaml-language-server communication
- ASLSClient: Wrapper for AssemblyScript language server communication
- LSPServerManager: Server lifecycle management
- LSPBridge: Integration with prompt_toolkit editor
Supported Language Servers:
- yaml-language-server: npm install -g yaml-language-server
- asls (AssemblyScript): npm install -g assemblyscript-lsp
"""
from __future__ import annotations
from .client import (
YAMLLSPClient,
LSPCompletion,
LSPDiagnostic,
LSPHover,
is_lsp_available,
)
from .asls_client import (
ASLSClient,
ASLSConfig,
is_asls_available,
is_assemblyscript_file,
ASSEMBLYSCRIPT_EXTENSIONS,
)
from .manager import (
LSPServerManager,
LSPServerType,
get_lsp_manager,
ensure_lsp_stopped,
)
from .bridge import (
LSPCompleter,
DiagnosticsProcessor,
)
__all__ = [
# YAML Client
"YAMLLSPClient",
"LSPCompletion",
"LSPDiagnostic",
"LSPHover",
"is_lsp_available",
# AssemblyScript Client
"ASLSClient",
"ASLSConfig",
"is_asls_available",
"is_assemblyscript_file",
"ASSEMBLYSCRIPT_EXTENSIONS",
# Manager
"LSPServerManager",
"LSPServerType",
"get_lsp_manager",
"ensure_lsp_stopped",
# Bridge
"LSPCompleter",
"DiagnosticsProcessor",
]

View file


@ -0,0 +1,527 @@
"""
AssemblyScript Language Server Protocol client.
Wraps communication with asls (AssemblyScript Language Server) for:
- Autocompletion for AgentServer SDK types
- Type checking and diagnostics
- Hover documentation
Install: npm install -g assemblyscript-lsp
Used for editing WASM listener source files written in AssemblyScript.
"""
from __future__ import annotations
import asyncio
import shutil
import logging
from dataclasses import dataclass
from pathlib import Path
from typing import Optional, Any
from .client import (
LSPCompletion,
LSPDiagnostic,
LSPHover,
)
logger = logging.getLogger(__name__)
def _check_asls() -> bool:
"""Check if asls (AssemblyScript Language Server) is installed."""
return shutil.which("asls") is not None
def is_asls_available() -> tuple[bool, str]:
"""
Check if AssemblyScript LSP support is available.
Returns (available, reason) tuple.
"""
if not _check_asls():
return False, "asls not found (npm install -g assemblyscript-lsp)"
return True, "AssemblyScript LSP available"
# File extensions handled by ASLS
ASSEMBLYSCRIPT_EXTENSIONS = {".ts", ".as"}
def is_assemblyscript_file(path: str | Path) -> bool:
"""Check if a file should use the AssemblyScript LSP."""
return Path(path).suffix.lower() in ASSEMBLYSCRIPT_EXTENSIONS
@dataclass
class ASLSConfig:
"""
Configuration for the AssemblyScript Language Server.
These settings are passed during initialization.
"""
# Path to asconfig.json (AssemblyScript project config)
asconfig_path: Optional[str] = None
# Path to AgentServer SDK type definitions
sdk_types_path: Optional[str] = None
# Enable strict null checks
strict_null_checks: bool = True
# Enable additional diagnostics
verbose_diagnostics: bool = False
class ASLSClient:
"""
Client for communicating with the AssemblyScript Language Server.
Uses stdio for communication with the language server process.
Usage:
client = ASLSClient()
if await client.start():
await client.did_open(uri, content)
completions = await client.completion(uri, line, col)
await client.stop()
"""
def __init__(self, config: Optional[ASLSConfig] = None):
"""
Initialize the ASLS client.
Args:
config: Optional ASLS configuration
"""
self.config = config or ASLSConfig()
self._process: Optional[asyncio.subprocess.Process] = None
self._reader_task: Optional[asyncio.Task] = None
self._request_id = 0
self._pending_requests: dict[int, asyncio.Future] = {}
self._diagnostics: dict[str, list[LSPDiagnostic]] = {}
self._initialized = False
self._lock = asyncio.Lock()
async def start(self) -> bool:
"""
Start the AssemblyScript language server.
Returns True if started successfully.
"""
available, reason = is_asls_available()
if not available:
logger.warning(f"ASLS not available: {reason}")
return False
try:
self._process = await asyncio.create_subprocess_exec(
"asls", "--stdio",
stdin=asyncio.subprocess.PIPE,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
# Start reader task
self._reader_task = asyncio.create_task(self._read_messages())
# Initialize LSP
await self._initialize()
self._initialized = True
logger.info("AssemblyScript language server started")
return True
except Exception as e:
logger.error(f"Failed to start asls: {e}")
await self.stop()
return False
async def stop(self) -> None:
"""Stop the language server."""
self._initialized = False
if self._reader_task:
self._reader_task.cancel()
try:
await self._reader_task
except asyncio.CancelledError:
pass
self._reader_task = None
if self._process:
self._process.terminate()
try:
await asyncio.wait_for(self._process.wait(), timeout=2)
except asyncio.TimeoutError:
self._process.kill()
self._process = None
# Cancel pending requests
for future in self._pending_requests.values():
if not future.done():
future.cancel()
self._pending_requests.clear()
async def _initialize(self) -> None:
"""Send LSP initialize request."""
init_options: dict[str, Any] = {}
if self.config.asconfig_path:
init_options["asconfigPath"] = self.config.asconfig_path
if self.config.sdk_types_path:
init_options["sdkTypesPath"] = self.config.sdk_types_path
result = await self._request(
"initialize",
{
"processId": None,
"rootUri": None,
"capabilities": {
"textDocument": {
"completion": {
"completionItem": {
"snippetSupport": True,
"documentationFormat": ["markdown", "plaintext"],
}
},
"hover": {
"contentFormat": ["markdown", "plaintext"],
},
"publishDiagnostics": {
"relatedInformation": True,
},
"signatureHelp": {
"signatureInformation": {
"documentationFormat": ["markdown", "plaintext"],
}
},
}
},
"initializationOptions": init_options,
},
)
logger.debug(f"ASLS initialized: {result}")
# Send initialized notification
await self._notify("initialized", {})
async def did_open(self, uri: str, content: str) -> None:
"""Notify server that a document was opened."""
if not self._initialized:
return
# Determine language ID based on extension
language_id = "assemblyscript"
if uri.endswith(".ts"):
language_id = "typescript" # ASLS may prefer this
await self._notify(
"textDocument/didOpen",
{
"textDocument": {
"uri": uri,
"languageId": language_id,
"version": 1,
"text": content,
}
},
)
async def did_change(
self, uri: str, content: str, version: int = 1
) -> list[LSPDiagnostic]:
"""
Notify server of document change.
Returns current diagnostics for the document.
"""
if not self._initialized:
return []
await self._notify(
"textDocument/didChange",
{
"textDocument": {"uri": uri, "version": version},
"contentChanges": [{"text": content}],
},
)
# Wait briefly for diagnostics
await asyncio.sleep(0.2) # ASLS may need more time than YAML
return self._diagnostics.get(uri, [])
async def did_close(self, uri: str) -> None:
"""Notify server that a document was closed."""
if not self._initialized:
return
await self._notify(
"textDocument/didClose",
{"textDocument": {"uri": uri}},
)
# Clear diagnostics
self._diagnostics.pop(uri, None)
async def completion(
self, uri: str, line: int, column: int
) -> list[LSPCompletion]:
"""
Request completions at a position.
Args:
uri: Document URI
line: 0-indexed line number
column: 0-indexed column number
Returns list of completion items.
"""
if not self._initialized:
return []
try:
result = await self._request(
"textDocument/completion",
{
"textDocument": {"uri": uri},
"position": {"line": line, "character": column},
},
)
if result is None:
return []
items = result.get("items", []) if isinstance(result, dict) else result
return [LSPCompletion.from_lsp(item) for item in items]
except Exception as e:
logger.debug(f"ASLS completion request failed: {e}")
return []
async def hover(self, uri: str, line: int, column: int) -> Optional[LSPHover]:
"""
Request hover information at a position.
Args:
uri: Document URI
line: 0-indexed line number
column: 0-indexed column number
"""
if not self._initialized:
return None
try:
result = await self._request(
"textDocument/hover",
{
"textDocument": {"uri": uri},
"position": {"line": line, "character": column},
},
)
return LSPHover.from_lsp(result) if result else None
except Exception as e:
logger.debug(f"ASLS hover request failed: {e}")
return None
async def signature_help(
self, uri: str, line: int, column: int
) -> Optional[dict[str, Any]]:
"""
Request signature help at a position.
Useful when typing function arguments.
"""
if not self._initialized:
return None
try:
result = await self._request(
"textDocument/signatureHelp",
{
"textDocument": {"uri": uri},
"position": {"line": line, "character": column},
},
)
return result
except Exception as e:
logger.debug(f"ASLS signature help request failed: {e}")
return None
async def go_to_definition(
self, uri: str, line: int, column: int
) -> Optional[list[dict[str, Any]]]:
"""
Request go-to-definition at a position.
Returns list of location objects.
"""
if not self._initialized:
return None
try:
result = await self._request(
"textDocument/definition",
{
"textDocument": {"uri": uri},
"position": {"line": line, "character": column},
},
)
if result is None:
return None
# Normalize to list
if isinstance(result, dict):
return [result]
return result
except Exception as e:
logger.debug(f"ASLS go-to-definition failed: {e}")
return None
def get_diagnostics(self, uri: str) -> list[LSPDiagnostic]:
"""Get current diagnostics for a document."""
return self._diagnostics.get(uri, [])
# -------------------------------------------------------------------------
# LSP Protocol Implementation (shared pattern with YAMLLSPClient)
# -------------------------------------------------------------------------
async def _request(self, method: str, params: dict[str, Any]) -> Any:
"""Send a request and wait for response."""
async with self._lock:
self._request_id += 1
req_id = self._request_id
message = {
"jsonrpc": "2.0",
"id": req_id,
"method": method,
"params": params,
}
future: asyncio.Future = asyncio.Future()
self._pending_requests[req_id] = future
try:
await self._send_message(message)
return await asyncio.wait_for(future, timeout=10.0) # Longer timeout for ASLS
except asyncio.TimeoutError:
logger.warning(f"ASLS request timed out: {method}")
return None
finally:
self._pending_requests.pop(req_id, None)
async def _notify(self, method: str, params: dict[str, Any]) -> None:
"""Send a notification (no response expected)."""
message = {
"jsonrpc": "2.0",
"method": method,
"params": params,
}
await self._send_message(message)
async def _send_message(self, message: dict[str, Any]) -> None:
"""Send a JSON-RPC message to the server."""
if not self._process or not self._process.stdin:
return
import json
content = json.dumps(message)
header = f"Content-Length: {len(content)}\r\n\r\n"
try:
self._process.stdin.write(header.encode())
self._process.stdin.write(content.encode())
await self._process.stdin.drain()
except (BrokenPipeError, OSError, ConnectionResetError) as e:
logger.error(f"Failed to send ASLS message: {e}")
async def _read_messages(self) -> None:
"""Read messages from the server."""
if not self._process or not self._process.stdout:
return
import json
try:
while True:
                # Read header (readuntil replaces the byte-at-a-time loop)
                try:
                    header = await self._process.stdout.readuntil(b"\r\n\r\n")
                except (asyncio.IncompleteReadError, asyncio.LimitOverrunError):
                    return  # EOF or oversized header
# Parse content length
content_length = 0
for line in header.decode().split("\r\n"):
if line.startswith("Content-Length:"):
content_length = int(line.split(":")[1].strip())
break
if content_length == 0:
continue
                # Read content (readexactly guards against short reads;
                # StreamReader.read(n) may return fewer than n bytes)
                try:
                    content = await self._process.stdout.readexactly(content_length)
                except asyncio.IncompleteReadError:
                    return  # EOF mid-message
# Parse and handle message
try:
message = json.loads(content.decode())
await self._handle_message(message)
except json.JSONDecodeError as e:
logger.error(f"Failed to parse ASLS message: {e}")
except asyncio.CancelledError:
pass
except Exception as e:
logger.error(f"ASLS reader error: {e}")
async def _handle_message(self, message: dict[str, Any]) -> None:
"""Handle an incoming LSP message."""
if "id" in message and "result" in message:
# Response to a request
req_id = message["id"]
if req_id in self._pending_requests:
future = self._pending_requests[req_id]
if not future.done():
future.set_result(message.get("result"))
elif "id" in message and "error" in message:
# Error response
req_id = message["id"]
if req_id in self._pending_requests:
future = self._pending_requests[req_id]
if not future.done():
error = message["error"]
future.set_exception(
Exception(f"ASLS error: {error.get('message', error)}")
)
elif message.get("method") == "textDocument/publishDiagnostics":
# Diagnostics notification
params = message.get("params", {})
uri = params.get("uri", "")
diagnostics = [
LSPDiagnostic.from_lsp(d)
for d in params.get("diagnostics", [])
]
self._diagnostics[uri] = diagnostics
logger.debug(f"ASLS: {len(diagnostics)} diagnostics for {uri}")
elif "method" in message:
# Other notification
logger.debug(f"ASLS notification: {message.get('method')}")
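The `_send_message()`/`_read_messages()` pair above implements the LSP base protocol: each JSON-RPC payload is prefixed by a `Content-Length` header and a blank line. A self-contained round-trip sketch of that framing (the `frame`/`unframe` helpers are illustrative, not part of the client):

```python
import json

def frame(message: dict) -> bytes:
    """Wrap a JSON-RPC message in LSP base-protocol framing."""
    body = json.dumps(message).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

def unframe(data: bytes) -> dict:
    """Parse one framed message: header, blank line, then the payload."""
    header, _, body = data.partition(b"\r\n\r\n")
    length = int(header.split(b":")[1])
    return json.loads(body[:length].decode("utf-8"))

msg = {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}}
assert unframe(frame(msg)) == msg
```

The same framing is shared by the YAML and AssemblyScript clients, which is why the two implementations follow the identical `_request`/`_notify`/reader-task pattern.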

View file


@ -0,0 +1,314 @@
"""
Bridge between LSP client and prompt_toolkit.
Provides:
- LSPCompleter: Async completer for prompt_toolkit using LSP
- DiagnosticsProcessor: Processes diagnostics for inline display
"""
from __future__ import annotations
import asyncio
from dataclasses import dataclass
from typing import Optional, Iterable, TYPE_CHECKING
if TYPE_CHECKING:
from .client import YAMLLSPClient, LSPDiagnostic, LSPCompletion
try:
from prompt_toolkit.completion import Completer, Completion
from prompt_toolkit.document import Document
PROMPT_TOOLKIT_AVAILABLE = True
except ImportError:
PROMPT_TOOLKIT_AVAILABLE = False
# Stub classes for type checking
class Completer: # type: ignore
pass
class Completion: # type: ignore
pass
class Document: # type: ignore
pass
@dataclass
class DiagnosticMark:
"""A diagnostic marker for display in the editor."""
line: int
column: int
end_column: int
message: str
severity: str # error, warning, info, hint
@property
def is_error(self) -> bool:
return self.severity == "error"
@property
def is_warning(self) -> bool:
return self.severity == "warning"
@property
def style(self) -> str:
"""Get prompt_toolkit style for this diagnostic."""
if self.severity == "error":
return "class:diagnostic.error"
elif self.severity == "warning":
return "class:diagnostic.warning"
elif self.severity == "info":
return "class:diagnostic.info"
else:
return "class:diagnostic.hint"
class LSPCompleter(Completer):
"""
prompt_toolkit completer that uses LSP for suggestions.
Usage:
completer = LSPCompleter(lsp_client, document_uri)
buffer = Buffer(completer=completer)
"""
def __init__(
self,
client: Optional["YAMLLSPClient"],
uri: str,
fallback_completer: Optional[Completer] = None,
):
"""
Initialize the LSP completer.
Args:
client: LSP client (can be None for fallback-only mode)
uri: Document URI for LSP requests
fallback_completer: Fallback when LSP unavailable
"""
self.client = client
self.uri = uri
self.fallback_completer = fallback_completer
self._cache: dict[tuple[int, int], list["LSPCompletion"]] = {}
self._cache_version = 0
def invalidate_cache(self) -> None:
"""Invalidate the completion cache."""
self._cache.clear()
self._cache_version += 1
def get_completions(
self,
document: Document,
complete_event,
) -> Iterable[Completion]:
"""
Get completions for the current document position.
This is called synchronously by prompt_toolkit.
We use a cached result if available, otherwise
return nothing (async completions handled separately).
"""
if not PROMPT_TOOLKIT_AVAILABLE:
return
# Get current position
line = document.cursor_position_row
col = document.cursor_position_col
# Check cache
cache_key = (line, col)
if cache_key in self._cache:
completions = self._cache[cache_key]
for item in completions:
yield Completion(
text=item.insert_text or item.label,
start_position=-len(self._get_word_before_cursor(document)),
display=item.label,
display_meta=item.detail or item.kind,
)
return
# Fallback to basic completer
if self.fallback_completer:
yield from self.fallback_completer.get_completions(
document, complete_event
)
async def get_completions_async(
self,
document: Document,
) -> list["LSPCompletion"]:
"""
Get completions asynchronously from LSP.
Call this when Ctrl+Space is pressed.
"""
if self.client is None:
return []
line = document.cursor_position_row
col = document.cursor_position_col
# Request from LSP
completions = await self.client.completion(self.uri, line, col)
# Cache result
self._cache[(line, col)] = completions
return completions
def _get_word_before_cursor(self, document: Document) -> str:
"""Get the word being typed before cursor."""
text = document.text_before_cursor
if not text:
return ""
# Find word boundary
i = len(text) - 1
while i >= 0 and (text[i].isalnum() or text[i] in "_-"):
i -= 1
return text[i + 1:]
class DiagnosticsProcessor:
"""
Processes LSP diagnostics for display in the editor.
Converts LSP diagnostics into markers that can be
displayed inline in the prompt_toolkit editor.
"""
def __init__(self, client: Optional["YAMLLSPClient"], uri: str):
self.client = client
self.uri = uri
self._marks: list[DiagnosticMark] = []
def get_marks(self) -> list[DiagnosticMark]:
"""Get current diagnostic marks."""
return self._marks
def get_marks_for_line(self, line: int) -> list[DiagnosticMark]:
"""Get diagnostic marks for a specific line."""
return [m for m in self._marks if m.line == line]
def has_errors(self) -> bool:
"""Check if there are any error-level diagnostics."""
return any(m.is_error for m in self._marks)
def has_warnings(self) -> bool:
"""Check if there are any warning-level diagnostics."""
return any(m.is_warning for m in self._marks)
def get_error_count(self) -> int:
"""Get number of errors."""
return sum(1 for m in self._marks if m.is_error)
def get_warning_count(self) -> int:
"""Get number of warnings."""
return sum(1 for m in self._marks if m.is_warning)
async def update(self, content: str, version: int = 1) -> list[DiagnosticMark]:
"""
Update diagnostics by sending content to LSP.
Returns the new list of diagnostic marks.
"""
if self.client is None:
self._marks = []
return []
diagnostics = await self.client.did_change(self.uri, content, version)
self._marks = [
DiagnosticMark(
line=d.line,
column=d.column,
end_column=d.end_column,
message=d.message,
severity=d.severity,
)
for d in diagnostics
]
return self._marks
def format_status(self) -> str:
"""Format diagnostics as status bar text."""
errors = self.get_error_count()
warnings = self.get_warning_count()
if errors == 0 and warnings == 0:
return ""
parts = []
if errors > 0:
parts.append(f"{errors} error{'s' if errors > 1 else ''}")
if warnings > 0:
parts.append(f"{warnings} warning{'s' if warnings > 1 else ''}")
return " | ".join(parts)
def format_messages(self, max_lines: int = 3) -> list[str]:
"""Format diagnostic messages for display."""
messages = []
for mark in self._marks[:max_lines]:
prefix = "E" if mark.is_error else "W"
messages.append(f"[{prefix}] Line {mark.line + 1}: {mark.message}")
remaining = len(self._marks) - max_lines
if remaining > 0:
messages.append(f"... and {remaining} more")
return messages
class HoverPopup:
"""
Manages hover information display.
Shows documentation when hovering over a field
or pressing F1 on a position.
"""
def __init__(self, client: Optional["YAMLLSPClient"], uri: str):
self.client = client
self.uri = uri
self._current_hover: Optional[str] = None
self._hover_position: Optional[tuple[int, int]] = None
async def get_hover(self, line: int, col: int) -> Optional[str]:
"""
Get hover information for a position.
Returns formatted hover text or None.
"""
if self.client is None:
return None
hover = await self.client.hover(self.uri, line, col)
if hover is None:
self._current_hover = None
self._hover_position = None
return None
self._current_hover = hover.contents
self._hover_position = (line, col)
return hover.contents
def clear(self) -> None:
"""Clear current hover."""
self._current_hover = None
self._hover_position = None
@property
def has_hover(self) -> bool:
"""Check if there's an active hover."""
return self._current_hover is not None
@property
def text(self) -> str:
"""Get current hover text."""
return self._current_hover or ""
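`LSPCompleter` passes `start_position=-len(word)` so that an accepted completion replaces the partially typed word rather than appending to it. A standalone sketch of the backward word-boundary scan it relies on (mirroring `_get_word_before_cursor`, with `-` and `_` treated as word characters since YAML keys commonly contain them); `word_before_cursor` here is illustrative, not part of the module:

```python
def word_before_cursor(text_before_cursor: str) -> str:
    """Walk back from the cursor while characters look like part of a YAML key."""
    text = text_before_cursor
    if not text:
        return ""
    i = len(text) - 1
    while i >= 0 and (text[i].isalnum() or text[i] in "_-"):
        i -= 1
    return text[i + 1:]

print(word_before_cursor("listeners:\n  - my-lis"))  # my-lis
```

The negative of this word's length is exactly the `start_position` a prompt_toolkit `Completion` needs to overwrite the fragment.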


@ -0,0 +1,538 @@
"""
YAML Language Server Protocol client.
Wraps communication with yaml-language-server for:
- Autocompletion
- Diagnostics (validation errors)
- Hover information
"""
from __future__ import annotations
import asyncio
import json
import subprocess
import shutil
from dataclasses import dataclass, field
from pathlib import Path
from typing import Optional, Any
import logging
logger = logging.getLogger(__name__)
# Check for lsp-client availability
def _check_lsp_client() -> bool:
"""Check if lsp-client package is available."""
try:
import lsp_client # noqa: F401
return True
except ImportError:
return False
def _check_yaml_language_server() -> bool:
"""Check if yaml-language-server is installed."""
return shutil.which("yaml-language-server") is not None
def is_lsp_available() -> tuple[bool, str]:
"""
Check if LSP support is available.
Returns (available, reason) tuple.
"""
if not _check_lsp_client():
return False, "lsp-client package not installed (pip install lsp-client)"
if not _check_yaml_language_server():
return False, "yaml-language-server not found (npm install -g yaml-language-server)"
return True, "LSP available"
@dataclass
class LSPCompletion:
"""Normalized completion item from LSP."""
label: str
kind: str = "text" # text, keyword, property, value, snippet
detail: str = ""
documentation: str = ""
insert_text: str = ""
sort_text: str = ""
@classmethod
def from_lsp(cls, item: dict[str, Any]) -> "LSPCompletion":
"""Create from LSP CompletionItem."""
kind_map = {
1: "text",
2: "method",
3: "function",
5: "field",
6: "variable",
9: "module",
10: "property",
12: "value",
14: "keyword",
15: "snippet",
}
return cls(
label=item.get("label", ""),
kind=kind_map.get(item.get("kind", 1), "text"),
detail=item.get("detail", ""),
documentation=_extract_documentation(item.get("documentation")),
insert_text=item.get("insertText", item.get("label", "")),
sort_text=item.get("sortText", item.get("label", "")),
)
@dataclass
class LSPDiagnostic:
"""Normalized diagnostic from LSP."""
line: int
column: int
end_line: int
end_column: int
message: str
severity: str = "error" # error, warning, info, hint
source: str = "yaml-language-server"
@classmethod
def from_lsp(cls, diag: dict[str, Any]) -> "LSPDiagnostic":
"""Create from LSP Diagnostic."""
severity_map = {1: "error", 2: "warning", 3: "info", 4: "hint"}
range_data = diag.get("range", {})
start = range_data.get("start", {})
end = range_data.get("end", {})
return cls(
line=start.get("line", 0),
column=start.get("character", 0),
end_line=end.get("line", 0),
end_column=end.get("character", 0),
message=diag.get("message", ""),
severity=severity_map.get(diag.get("severity", 1), "error"),
source=diag.get("source", "yaml-language-server"),
)
@dataclass
class LSPHover:
"""Normalized hover information from LSP."""
contents: str
range_start_line: Optional[int] = None
range_start_col: Optional[int] = None
@classmethod
def from_lsp(cls, hover: dict[str, Any]) -> Optional["LSPHover"]:
"""Create from LSP Hover response."""
if not hover:
return None
contents = hover.get("contents")
if isinstance(contents, str):
text = contents
elif isinstance(contents, dict):
text = contents.get("value", str(contents))
elif isinstance(contents, list):
text = "\n".join(
c.get("value", str(c)) if isinstance(c, dict) else str(c)
for c in contents
)
else:
return None
range_data = hover.get("range", {})
start = range_data.get("start", {})
return cls(
contents=text,
range_start_line=start.get("line"),
range_start_col=start.get("character"),
)
def _extract_documentation(doc: Any) -> str:
"""Extract documentation string from LSP documentation field."""
if doc is None:
return ""
if isinstance(doc, str):
return doc
if isinstance(doc, dict):
return doc.get("value", "")
return str(doc)
class YAMLLSPClient:
"""
Client for communicating with yaml-language-server.
Uses stdio for communication with the language server process.
"""
def __init__(self, schema_uri: Optional[str] = None):
"""
Initialize the LSP client.
Args:
schema_uri: Default schema URI for YAML files
"""
self.schema_uri = schema_uri
self._process: Optional[subprocess.Popen] = None
self._reader_task: Optional[asyncio.Task] = None
self._request_id = 0
self._pending_requests: dict[int, asyncio.Future] = {}
self._diagnostics: dict[str, list[LSPDiagnostic]] = {}
self._initialized = False
self._lock = asyncio.Lock()
async def start(self) -> bool:
"""
Start the language server.
Returns True if started successfully.
"""
available, reason = is_lsp_available()
if not available:
logger.warning(f"LSP not available: {reason}")
return False
try:
self._process = subprocess.Popen(
["yaml-language-server", "--stdio"],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
# Start reader task
self._reader_task = asyncio.create_task(self._read_messages())
# Initialize LSP
await self._initialize()
self._initialized = True
return True
except Exception as e:
logger.error(f"Failed to start yaml-language-server: {e}")
await self.stop()
return False
async def stop(self) -> None:
"""Stop the language server."""
self._initialized = False
if self._reader_task:
self._reader_task.cancel()
try:
await self._reader_task
except asyncio.CancelledError:
pass
self._reader_task = None
if self._process:
self._process.terminate()
try:
self._process.wait(timeout=2)
except subprocess.TimeoutExpired:
self._process.kill()
self._process = None
# Cancel pending requests
for future in self._pending_requests.values():
if not future.done():
future.cancel()
self._pending_requests.clear()
async def _initialize(self) -> None:
"""Send LSP initialize request."""
result = await self._request(
"initialize",
{
"processId": None,
"rootUri": None,
"capabilities": {
"textDocument": {
"completion": {
"completionItem": {
"snippetSupport": True,
"documentationFormat": ["markdown", "plaintext"],
}
},
"hover": {
"contentFormat": ["markdown", "plaintext"],
},
"publishDiagnostics": {},
}
},
"initializationOptions": {
"yaml": {
"validate": True,
"hover": True,
"completion": True,
"schemas": {},
}
},
},
)
logger.debug(f"LSP initialized: {result}")
# Send initialized notification
await self._notify("initialized", {})
async def did_open(self, uri: str, content: str) -> None:
"""Notify server that a document was opened."""
if not self._initialized:
return
await self._notify(
"textDocument/didOpen",
{
"textDocument": {
"uri": uri,
"languageId": "yaml",
"version": 1,
"text": content,
}
},
)
async def did_change(self, uri: str, content: str, version: int = 1) -> list[LSPDiagnostic]:
"""
Notify server of document change.
Returns current diagnostics for the document.
"""
if not self._initialized:
return []
await self._notify(
"textDocument/didChange",
{
"textDocument": {"uri": uri, "version": version},
"contentChanges": [{"text": content}],
},
)
# Wait briefly for diagnostics
await asyncio.sleep(0.1)
return self._diagnostics.get(uri, [])
async def did_close(self, uri: str) -> None:
"""Notify server that a document was closed."""
if not self._initialized:
return
await self._notify(
"textDocument/didClose",
{"textDocument": {"uri": uri}},
)
# Clear diagnostics
self._diagnostics.pop(uri, None)
async def completion(
self, uri: str, line: int, column: int
) -> list[LSPCompletion]:
"""
Request completions at a position.
Args:
uri: Document URI
line: 0-indexed line number
column: 0-indexed column number
Returns list of completion items.
"""
if not self._initialized:
return []
try:
result = await self._request(
"textDocument/completion",
{
"textDocument": {"uri": uri},
"position": {"line": line, "character": column},
},
)
if result is None:
return []
items = result.get("items", []) if isinstance(result, dict) else result
return [LSPCompletion.from_lsp(item) for item in items]
except Exception as e:
logger.debug(f"Completion request failed: {e}")
return []
async def hover(self, uri: str, line: int, column: int) -> Optional[LSPHover]:
"""
Request hover information at a position.
Args:
uri: Document URI
line: 0-indexed line number
column: 0-indexed column number
"""
if not self._initialized:
return None
try:
result = await self._request(
"textDocument/hover",
{
"textDocument": {"uri": uri},
"position": {"line": line, "character": column},
},
)
return LSPHover.from_lsp(result) if result else None
except Exception as e:
logger.debug(f"Hover request failed: {e}")
return None
def get_diagnostics(self, uri: str) -> list[LSPDiagnostic]:
"""Get current diagnostics for a document."""
return self._diagnostics.get(uri, [])
async def _request(self, method: str, params: dict[str, Any]) -> Any:
"""Send a request and wait for response."""
async with self._lock:
self._request_id += 1
req_id = self._request_id
message = {
"jsonrpc": "2.0",
"id": req_id,
"method": method,
"params": params,
}
future: asyncio.Future = asyncio.Future()
self._pending_requests[req_id] = future
try:
await self._send_message(message)
return await asyncio.wait_for(future, timeout=5.0)
except asyncio.TimeoutError:
logger.warning(f"LSP request timed out: {method}")
return None
finally:
self._pending_requests.pop(req_id, None)
async def _notify(self, method: str, params: dict[str, Any]) -> None:
"""Send a notification (no response expected)."""
message = {
"jsonrpc": "2.0",
"method": method,
"params": params,
}
await self._send_message(message)
async def _send_message(self, message: dict[str, Any]) -> None:
"""Send a JSON-RPC message to the server."""
if not self._process or not self._process.stdin:
return
content = json.dumps(message)
header = f"Content-Length: {len(content)}\r\n\r\n"
try:
self._process.stdin.write(header.encode())
self._process.stdin.write(content.encode())
self._process.stdin.flush()
except (BrokenPipeError, OSError) as e:
logger.error(f"Failed to send LSP message: {e}")
async def _read_messages(self) -> None:
"""Read messages from the server."""
if not self._process or not self._process.stdout:
return
loop = asyncio.get_event_loop()
try:
while True:
# Read header
header = b""
while b"\r\n\r\n" not in header:
chunk = await loop.run_in_executor(
None, self._process.stdout.read, 1
)
if not chunk:
return # EOF
header += chunk
# Parse content length
content_length = 0
for line in header.decode().split("\r\n"):
if line.startswith("Content-Length:"):
content_length = int(line.split(":")[1].strip())
break
if content_length == 0:
continue
# Read content
content = await loop.run_in_executor(
None, self._process.stdout.read, content_length
)
if not content:
return
# Parse and handle message
try:
message = json.loads(content.decode())
await self._handle_message(message)
except json.JSONDecodeError as e:
logger.error(f"Failed to parse LSP message: {e}")
except asyncio.CancelledError:
pass
except Exception as e:
logger.error(f"LSP reader error: {e}")
async def _handle_message(self, message: dict[str, Any]) -> None:
"""Handle an incoming LSP message."""
if "id" in message and "result" in message:
# Response to a request
req_id = message["id"]
if req_id in self._pending_requests:
future = self._pending_requests[req_id]
if not future.done():
future.set_result(message.get("result"))
elif "id" in message and "error" in message:
# Error response
req_id = message["id"]
if req_id in self._pending_requests:
future = self._pending_requests[req_id]
if not future.done():
error = message["error"]
future.set_exception(
Exception(f"LSP error: {error.get('message', error)}")
)
elif message.get("method") == "textDocument/publishDiagnostics":
# Diagnostics notification
params = message.get("params", {})
uri = params.get("uri", "")
diagnostics = [
LSPDiagnostic.from_lsp(d)
for d in params.get("diagnostics", [])
]
self._diagnostics[uri] = diagnostics
logger.debug(f"Received {len(diagnostics)} diagnostics for {uri}")
elif "method" in message:
# Other notification
logger.debug(f"LSP notification: {message.get('method')}")
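`_send_message` and `_read_messages` implement the LSP base protocol: every JSON-RPC payload is prefixed with a `Content-Length` header terminated by a blank line (CRLF CRLF). A self-contained sketch of that framing, with assumed helper names `frame`/`unframe`:

```python
import json

def frame(message: dict) -> bytes:
    """Wrap a JSON-RPC payload in an LSP base-protocol frame."""
    body = json.dumps(message).encode()
    return f"Content-Length: {len(body)}\r\n\r\n".encode() + body

def unframe(data: bytes) -> dict:
    """Parse one frame: split header from body, honor Content-Length."""
    header, _, body = data.partition(b"\r\n\r\n")
    length = int(header.decode().split(":")[1].strip())
    return json.loads(body[:length].decode())

msg = {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}}
assert unframe(frame(msg)) == msg
```

The reader loop in the client does the same thing incrementally: accumulate bytes until the blank line, parse `Content-Length`, then read exactly that many bytes of body.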


@ -0,0 +1,211 @@
"""
LSP Server lifecycle manager.
Manages language server instances that can be shared across
multiple editor sessions. Supports multiple language servers:
- yaml-language-server (for config files)
- asls (for AssemblyScript listener source)
"""
from __future__ import annotations
import asyncio
import logging
from enum import Enum
from typing import Optional, Union
from .client import YAMLLSPClient, is_lsp_available
from .asls_client import ASLSClient, ASLSConfig, is_asls_available
logger = logging.getLogger(__name__)
class LSPServerType(Enum):
"""Supported language server types."""
YAML = "yaml"
ASSEMBLYSCRIPT = "assemblyscript"
# Type alias for any LSP client
LSPClient = Union[YAMLLSPClient, ASLSClient]
class LSPServerManager:
"""
Manages the lifecycle of LSP servers.
Provides singleton client instances that start on first use
and stop when explicitly requested or when the process exits.
Supports multiple language servers running concurrently.
"""
def __init__(self):
self._clients: dict[LSPServerType, LSPClient] = {}
self._ref_counts: dict[LSPServerType, int] = {}
self._lock = asyncio.Lock()
def is_running(self, server_type: LSPServerType = LSPServerType.YAML) -> bool:
"""Check if a specific LSP server is running."""
client = self._clients.get(server_type)
return client is not None and client._initialized
async def get_client(
self,
server_type: LSPServerType = LSPServerType.YAML,
asls_config: Optional[ASLSConfig] = None,
) -> Optional[LSPClient]:
"""
Get an LSP client, starting the server if needed.
Args:
server_type: Which language server to get
asls_config: Configuration for ASLS (only used if server_type is ASSEMBLYSCRIPT)
Returns None if the requested LSP is not available.
"""
async with self._lock:
# Check if already running
if server_type in self._clients:
client = self._clients[server_type]
if client._initialized:
self._ref_counts[server_type] = self._ref_counts.get(server_type, 0) + 1
return client
# Start the appropriate server
if server_type == LSPServerType.YAML:
return await self._start_yaml_server()
elif server_type == LSPServerType.ASSEMBLYSCRIPT:
return await self._start_asls_server(asls_config)
else:
logger.error(f"Unknown LSP server type: {server_type}")
return None
async def _start_yaml_server(self) -> Optional[YAMLLSPClient]:
"""Start the YAML language server."""
available, reason = is_lsp_available()
if not available:
logger.info(f"YAML LSP not available: {reason}")
return None
client = YAMLLSPClient()
success = await client.start()
if success:
self._clients[LSPServerType.YAML] = client
self._ref_counts[LSPServerType.YAML] = 1
logger.info("yaml-language-server started")
return client
else:
return None
async def _start_asls_server(
self, config: Optional[ASLSConfig] = None
) -> Optional[ASLSClient]:
"""Start the AssemblyScript language server."""
available, reason = is_asls_available()
if not available:
logger.info(f"ASLS not available: {reason}")
return None
client = ASLSClient(config=config)
success = await client.start()
if success:
self._clients[LSPServerType.ASSEMBLYSCRIPT] = client
self._ref_counts[LSPServerType.ASSEMBLYSCRIPT] = 1
logger.info("AssemblyScript language server started")
return client
else:
return None
async def release_client(
self, server_type: LSPServerType = LSPServerType.YAML
) -> None:
"""
Release a reference to a client.
Stops the server when the last reference is released.
"""
async with self._lock:
if server_type not in self._ref_counts:
return
self._ref_counts[server_type] -= 1
if self._ref_counts[server_type] <= 0:
client = self._clients.pop(server_type, None)
self._ref_counts.pop(server_type, None)
if client is not None:
await client.stop()
logger.info(f"{server_type.value} language server stopped")
async def stop(self, server_type: Optional[LSPServerType] = None) -> None:
"""
Force stop LSP server(s).
Args:
server_type: Specific server to stop, or None to stop all
"""
async with self._lock:
if server_type is not None:
# Stop specific server
client = self._clients.pop(server_type, None)
self._ref_counts.pop(server_type, None)
if client is not None:
await client.stop()
logger.info(f"{server_type.value} language server stopped (forced)")
else:
# Stop all servers
for st, client in list(self._clients.items()):
await client.stop()
logger.info(f"{st.value} language server stopped (forced)")
self._clients.clear()
self._ref_counts.clear()
async def stop_all(self) -> None:
"""Force stop all LSP servers."""
await self.stop(None)
# Convenience methods for YAML (backwards compatible)
async def get_yaml_client(self) -> Optional[YAMLLSPClient]:
"""Get YAML LSP client (convenience method)."""
client = await self.get_client(LSPServerType.YAML)
return client if isinstance(client, YAMLLSPClient) else None
async def get_asls_client(
self, config: Optional[ASLSConfig] = None
) -> Optional[ASLSClient]:
"""Get AssemblyScript LSP client (convenience method)."""
client = await self.get_client(LSPServerType.ASSEMBLYSCRIPT, asls_config=config)
return client if isinstance(client, ASLSClient) else None
# Context manager for YAML (backwards compatible)
async def __aenter__(self) -> Optional[YAMLLSPClient]:
"""Context manager entry - get YAML client."""
return await self.get_yaml_client()
async def __aexit__(self, exc_type, exc_val, exc_tb) -> None:
"""Context manager exit - release YAML client."""
await self.release_client(LSPServerType.YAML)
# Global singleton
_manager: Optional[LSPServerManager] = None
def get_lsp_manager() -> LSPServerManager:
"""Get the global LSP server manager."""
global _manager
if _manager is None:
_manager = LSPServerManager()
return _manager
async def ensure_lsp_stopped() -> None:
"""Ensure all LSP servers are stopped. Call on application shutdown."""
if _manager is not None:
await _manager.stop_all()
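The manager's lifecycle is plain reference counting: the server starts on the first `get_client` and stops when the last holder calls `release_client`. A minimal sketch of that semantics with a stand-in server; `FakeServer` and `RefCountedManager` are illustrative, not part of the module:

```python
import asyncio

class FakeServer:
    """Stand-in for an LSP client process."""
    def __init__(self):
        self.running = False
    async def start(self):
        self.running = True
    async def stop(self):
        self.running = False

class RefCountedManager:
    """Start on first acquire, stop on last release."""
    def __init__(self, server: FakeServer):
        self.server = server
        self.refs = 0
    async def acquire(self) -> FakeServer:
        if self.refs == 0:
            await self.server.start()
        self.refs += 1
        return self.server
    async def release(self):
        self.refs -= 1
        if self.refs <= 0:
            await self.server.stop()

async def demo():
    mgr = RefCountedManager(FakeServer())
    s = await mgr.acquire()   # first acquire starts the server
    await mgr.acquire()       # second acquire reuses it
    await mgr.release()       # one holder left: keeps running
    assert s.running
    await mgr.release()       # last release stops it
    assert not s.running

asyncio.run(demo())
```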


@ -42,7 +42,7 @@ except ImportError:
PROMPT_TOOLKIT_AVAILABLE = False
if TYPE_CHECKING:
from agentserver.message_bus.stream_pump import StreamPump
from xml_pipeline.message_bus.stream_pump import StreamPump
# ============================================================================
@ -516,8 +516,8 @@ class SecureConsole:
async def _cmd_status(self, args: str) -> None:
"""Show organism status."""
from agentserver.memory import get_context_buffer
from agentserver.message_bus.thread_registry import get_registry
from xml_pipeline.memory import get_context_buffer
from xml_pipeline.message_bus.thread_registry import get_registry
buffer = get_context_buffer()
registry = get_registry()
@ -541,7 +541,7 @@ class SecureConsole:
async def _cmd_threads(self, args: str) -> None:
"""List active threads."""
from agentserver.memory import get_context_buffer
from xml_pipeline.memory import get_context_buffer
buffer = get_context_buffer()
stats = buffer.get_stats()
@ -574,7 +574,7 @@ class SecureConsole:
cprint("Usage: /buffer <thread-id>", Colors.DIM)
return
from agentserver.memory import get_context_buffer
from xml_pipeline.memory import get_context_buffer
buffer = get_context_buffer()
# Find thread by prefix
@ -608,7 +608,7 @@ class SecureConsole:
cprint(" /monitor * (show all threads)", Colors.DIM)
return
from agentserver.memory import get_context_buffer
from xml_pipeline.memory import get_context_buffer
buffer = get_context_buffer()
# Find thread by prefix (or * for all)
@ -719,7 +719,7 @@ class SecureConsole:
async def _config_list(self) -> None:
"""List available listener configs."""
from agentserver.config import get_listener_config_store
from xml_pipeline.config import get_listener_config_store
store = get_listener_config_store()
listeners = store.list_listeners()
@ -753,9 +753,9 @@ class SecureConsole:
async def _config_edit_organism(self) -> None:
"""Edit organism.yaml in the full-screen editor."""
from agentserver.console.editor import edit_text_async
from agentserver.config.schema import ensure_schemas
from agentserver.config.split_loader import (
from xml_pipeline.console.editor import edit_text_async
from xml_pipeline.config.schema import ensure_schemas
from xml_pipeline.config.split_loader import (
get_organism_yaml_path,
load_organism_yaml_content,
save_organism_yaml_content,
@ -809,9 +809,9 @@ class SecureConsole:
async def _config_edit_listener(self, name: str) -> None:
"""Edit a listener config in the full-screen editor."""
from agentserver.config import get_listener_config_store
from agentserver.console.editor import edit_text_async
from agentserver.config.schema import ensure_schemas
from xml_pipeline.config import get_listener_config_store
from xml_pipeline.console.editor import edit_text_async
from xml_pipeline.config.schema import ensure_schemas
# Ensure schemas are written for LSP
try:
@ -865,7 +865,7 @@ class SecureConsole:
await self.pump.shutdown()
# Re-bootstrap
from agentserver.message_bus.stream_pump import bootstrap
from xml_pipeline.message_bus.stream_pump import bootstrap
self.pump = await bootstrap()
# Start pump in background
@ -878,7 +878,7 @@ class SecureConsole:
cprint("Usage: /kill <thread-id>", Colors.DIM)
return
from agentserver.memory import get_context_buffer
from xml_pipeline.memory import get_context_buffer
buffer = get_context_buffer()
# Find thread by prefix


@ -40,7 +40,7 @@ except ImportError:
NoConsoleScreenBufferError = Exception
if TYPE_CHECKING:
from agentserver.message_bus.stream_pump import StreamPump
from xml_pipeline.message_bus.stream_pump import StreamPump
# ============================================================================
@ -388,7 +388,7 @@ class TUIConsole:
self.print_raw(" /status, /listeners, /threads, /monitor, /clear, /quit", "output.dim")
async def _cmd_status(self, args: str):
from agentserver.memory import get_context_buffer
from xml_pipeline.memory import get_context_buffer
buffer = get_context_buffer()
stats = buffer.get_stats()
self.print_raw(f"Organism: {self.pump.config.name}", "output.system")
@ -401,13 +401,13 @@ class TUIConsole:
self.print_raw(f" {name:15} {tag} {l.description}", "output.dim")
async def _cmd_threads(self, args: str):
from agentserver.memory import get_context_buffer
from xml_pipeline.memory import get_context_buffer
buffer = get_context_buffer()
for tid, ctx in buffer._threads.items():
self.print_raw(f" {tid[:8]}... slots: {len(ctx)}", "output.dim")
async def _cmd_monitor(self, args: str):
from agentserver.memory import get_context_buffer
from xml_pipeline.memory import get_context_buffer
buffer = get_context_buffer()
if args == "*":
for tid, ctx in buffer._threads.items():


@ -2,8 +2,8 @@
First real intelligent listener: classic Grok voice.
"""
from agentserver.listeners.llm_listener import LLMPersonality
from agentserver.prompts.grok_classic import GROK_CLASSIC_MESSAGE
from xml_pipeline.listeners.llm_listener import LLMPersonality
from xml_pipeline.prompts.grok_classic import GROK_CLASSIC_MESSAGE
class GrokPersonality(LLMPersonality):
"""


@ -5,8 +5,8 @@ The actual implementation lives in agentserver.llm.router.
This module re-exports the router as llm_pool for listeners.
"""
from agentserver.llm.router import get_router, configure_router, LLMRouter
from agentserver.llm.backend import (
from xml_pipeline.llm.router import get_router, configure_router, LLMRouter
from xml_pipeline.llm.backend import (
LLMRequest,
LLMResponse,
Backend,
@ -32,7 +32,7 @@ class LLMPool:
Wrapper around the LLM router that provides a simpler interface for listeners.
Usage:
from agentserver.listeners.llm_connection import llm_pool
from xml_pipeline.listeners.llm_connection import llm_pool
response = await llm_pool.complete(
model="grok-2",


@ -16,9 +16,9 @@ from typing import Dict, List
from lxml import etree
from agentserver.xml_listener import XMLListener
from agentserver.listeners.llm_connection import llm_pool
from agentserver.prompts.no_paperclippers import MANIFESTO_MESSAGE
from xml_pipeline.xml_listener import XMLListener
from xml_pipeline.listeners.llm_connection import llm_pool
from xml_pipeline.prompts.no_paperclippers import MANIFESTO_MESSAGE
logger = logging.getLogger(__name__)


@ -63,7 +63,7 @@ class WasmListenerRegistry:
Registry for WASM listeners (STUB).
Usage:
from agentserver.listeners.wasm_listener import wasm_registry
from xml_pipeline.listeners.wasm_listener import wasm_registry
wasm_registry.register(
name="calculator",


@ -2,7 +2,7 @@
LLM abstraction layer.
Usage:
from agentserver.llm import router
from xml_pipeline.llm import router
# Configure once at startup (or via organism.yaml)
router.configure_router({
@ -19,14 +19,14 @@ Usage:
)
"""
from agentserver.llm.router import (
from xml_pipeline.llm.router import (
LLMRouter,
get_router,
configure_router,
complete,
Strategy,
)
from agentserver.llm.backend import LLMRequest, LLMResponse, BackendError
from xml_pipeline.llm.backend import LLMRequest, LLMResponse, BackendError
__all__ = [
"LLMRouter",


@ -16,7 +16,7 @@ from typing import List, Dict, Any, Optional, AsyncIterator
import httpx
from agentserver.llm.token_bucket import TokenBucket
from xml_pipeline.llm.token_bucket import TokenBucket
logger = logging.getLogger(__name__)


@ -20,7 +20,7 @@ from dataclasses import dataclass, field
from enum import Enum
from typing import List, Dict, Any, Optional
from agentserver.llm.backend import (
from xml_pipeline.llm.backend import (
Backend,
LLMRequest,
LLMResponse,
@ -292,7 +292,7 @@ async def complete(
Convenience function - calls get_router().complete().
Usage:
from agentserver.llm import router
from xml_pipeline.llm import router
response = await router.complete("grok-4.1", messages)
"""
return await get_router().complete(model, messages, **kwargs)


@ -8,7 +8,7 @@ Provides thread-scoped, append-only context buffers with:
- GC and limits (prevent runaway memory usage)
"""
from agentserver.memory.context_buffer import (
from xml_pipeline.memory.context_buffer import (
ContextBuffer,
ThreadContext,
BufferSlot,


@ -288,7 +288,7 @@ def slot_to_handler_metadata(slot: BufferSlot) -> 'HandlerMetadata':
Handlers still receive HandlerMetadata, but it's derived from the slot.
"""
from agentserver.message_bus.message_state import HandlerMetadata
from xml_pipeline.message_bus.message_state import HandlerMetadata
return HandlerMetadata(
thread_id=slot.metadata.thread_id,


@@ -12,7 +12,7 @@ Key classes:
MessageState Message flowing through pipeline steps
Usage:
-    from agentserver.message_bus import StreamPump, SystemPipeline, bootstrap
+    from xml_pipeline.message_bus import StreamPump, SystemPipeline, bootstrap
pump = await bootstrap("config/organism.yaml")
system = SystemPipeline(pump)
@@ -23,7 +23,7 @@ Usage:
await pump.run()
"""
-from agentserver.message_bus.stream_pump import (
+from xml_pipeline.message_bus.stream_pump import (
StreamPump,
ConfigLoader,
Listener,
@@ -32,12 +32,12 @@ from agentserver.message_bus.stream_pump import (
bootstrap,
)
-from agentserver.message_bus.message_state import (
+from xml_pipeline.message_bus.message_state import (
MessageState,
HandlerMetadata,
)
-from agentserver.message_bus.system_pipeline import (
+from xml_pipeline.message_bus.system_pipeline import (
SystemPipeline,
ExternalMessage,
)


@@ -11,7 +11,7 @@ Part of AgentServer v2.1 message pump.
"""
from lxml import etree
-from agentserver.message_bus.message_state import MessageState
+from xml_pipeline.message_bus.message_state import MessageState
async def c14n_step(state: MessageState) -> MessageState:


@@ -9,7 +9,7 @@ Part of AgentServer v2.1 message pump.
"""
from lxml.etree import _Element
-from agentserver.message_bus.message_state import MessageState
+from xml_pipeline.message_bus.message_state import MessageState
# Import the customized parse_element from your forked xmlable
from third_party.xmlable import parse_element # adjust path if needed


@@ -13,11 +13,11 @@ Part of AgentServer v2.1 message pump.
"""
from lxml import etree
-from agentserver.message_bus.message_state import MessageState
+from xml_pipeline.message_bus.message_state import MessageState
# Load envelope.xsd once at module import (startup time)
# In real implementation, move this to a config loader or bus init
-_ENVELOPE_XSD = etree.XMLSchema(file="agentserver/schema/envelope.xsd")
+_ENVELOPE_XSD = etree.XMLSchema(file="xml_pipeline/schema/envelope.xsd")
async def envelope_validation_step(state: MessageState) -> MessageState:


@@ -12,7 +12,7 @@ Part of AgentServer v2.1 message pump.
"""
from lxml import etree
-from agentserver.message_bus.message_state import MessageState
+from xml_pipeline.message_bus.message_state import MessageState
# Envelope namespace for easy reference
_ENVELOPE_NS = "https://xml-pipeline.org/ns/envelope/v1"


@@ -1,5 +1,5 @@
from lxml import etree
-from agentserver.message_bus.message_state import MessageState
+from xml_pipeline.message_bus.message_state import MessageState
# lxml parser configured for maximum tolerance + recovery
_RECOVERY_PARSER = etree.XMLParser(


@@ -18,10 +18,10 @@ from __future__ import annotations
from typing import Dict, List, Callable, Awaitable, TYPE_CHECKING
-from agentserver.message_bus.message_state import MessageState
+from xml_pipeline.message_bus.message_state import MessageState
if TYPE_CHECKING:
-    from agentserver.message_bus.stream_pump import Listener
+    from xml_pipeline.message_bus.stream_pump import Listener
def make_routing_step(
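`make_routing_step` is a factory that returns a pipeline step closed over the listener table. A hypothetical reduction of that shape (the real signature, `MessageState`, and `Listener` types are not shown in this diff, so strings stand in for messages):

```python
import asyncio
from typing import Awaitable, Callable, Dict, List

Handler = Callable[[str], Awaitable[str]]

def make_routing_step(routes: Dict[str, List[Handler]]):
    # The returned coroutine is the actual pipeline step; the routing table
    # is captured by closure, so the step itself carries no mutable state.
    async def routing_step(name: str, message: str) -> List[str]:
        return [await handler(message) for handler in routes.get(name, [])]
    return routing_step
```

Unknown listener names simply fan out to nothing, which matches the bus-style "no subscriber, no delivery" behavior.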


@@ -17,7 +17,7 @@ Part of AgentServer v2.1 message pump.
"""
import uuid
-from agentserver.message_bus.message_state import MessageState
+from xml_pipeline.message_bus.message_state import MessageState
def _is_valid_uuid(val: str) -> bool:


@@ -15,7 +15,7 @@ Part of AgentServer v2.1 message pump.
"""
from lxml import etree
-from agentserver.message_bus.message_state import MessageState
+from xml_pipeline.message_bus.message_state import MessageState
async def xsd_validation_step(state: MessageState) -> MessageState:


@@ -24,15 +24,15 @@ from lxml import etree
from aiostream import stream, pipe, operator
# Import existing step implementations (we'll wrap them)
-from agentserver.message_bus.steps.repair import repair_step
-from agentserver.message_bus.steps.c14n import c14n_step
-from agentserver.message_bus.steps.envelope_validation import envelope_validation_step
-from agentserver.message_bus.steps.payload_extraction import payload_extraction_step
-from agentserver.message_bus.steps.thread_assignment import thread_assignment_step
-from agentserver.message_bus.message_state import MessageState, HandlerMetadata, HandlerResponse, SystemError, ROUTING_ERROR
-from agentserver.message_bus.thread_registry import get_registry
-from agentserver.message_bus.todo_registry import get_todo_registry
-from agentserver.memory import get_context_buffer
+from xml_pipeline.message_bus.steps.repair import repair_step
+from xml_pipeline.message_bus.steps.c14n import c14n_step
+from xml_pipeline.message_bus.steps.envelope_validation import envelope_validation_step
+from xml_pipeline.message_bus.steps.payload_extraction import payload_extraction_step
+from xml_pipeline.message_bus.steps.thread_assignment import thread_assignment_step
+from xml_pipeline.message_bus.message_state import MessageState, HandlerMetadata, HandlerResponse, SystemError, ROUTING_ERROR
+from xml_pipeline.message_bus.thread_registry import get_registry
+from xml_pipeline.message_bus.todo_registry import get_todo_registry
+from xml_pipeline.memory import get_context_buffer
# ============================================================================
@@ -406,7 +406,7 @@ class StreamPump:
# Derive metadata from slot (single source of truth)
# Fall back to manual construction if no slot (e.g., buffer overflow)
if slot:
-    from agentserver.memory import slot_to_handler_metadata
+    from xml_pipeline.memory import slot_to_handler_metadata
metadata = slot_to_handler_metadata(slot)
payload_ref = slot.payload # Same reference as in buffer
else:
@@ -781,12 +781,12 @@ async def bootstrap(config_path: str = "config/organism.yaml") -> StreamPump:
"""Load config, create pump, initialize root thread, and inject boot message."""
from datetime import datetime, timezone
from dotenv import load_dotenv
-    from agentserver.primitives import Boot, handle_boot
-    from agentserver.primitives import (
+    from xml_pipeline.primitives import Boot, handle_boot
+    from xml_pipeline.primitives import (
TodoUntil, TodoComplete,
handle_todo_until, handle_todo_complete,
)
-    from agentserver.platform import get_prompt_registry
+    from xml_pipeline.platform import get_prompt_registry
# Load .env file if present
load_dotenv()
@@ -800,8 +800,8 @@ async def bootstrap(config_path: str = "config/organism.yaml") -> StreamPump:
# Register system listeners first
boot_listener_config = ListenerConfig(
name="system.boot",
-    payload_class_path="agentserver.primitives.Boot",
-    handler_path="agentserver.primitives.handle_boot",
+    payload_class_path="xml_pipeline.primitives.Boot",
+    handler_path="xml_pipeline.primitives.handle_boot",
description="System boot handler - initializes organism",
is_agent=False,
payload_class=Boot,
@@ -812,8 +812,8 @@ async def bootstrap(config_path: str = "config/organism.yaml") -> StreamPump:
# Register TodoUntil handler (agents register watchers)
todo_until_config = ListenerConfig(
name="system.todo",
-    payload_class_path="agentserver.primitives.TodoUntil",
-    handler_path="agentserver.primitives.handle_todo_until",
+    payload_class_path="xml_pipeline.primitives.TodoUntil",
+    handler_path="xml_pipeline.primitives.handle_todo_until",
description="System todo handler - registers watchers",
is_agent=False,
payload_class=TodoUntil,
@@ -824,8 +824,8 @@ async def bootstrap(config_path: str = "config/organism.yaml") -> StreamPump:
# Register TodoComplete handler (agents close watchers)
todo_complete_config = ListenerConfig(
name="system.todo-complete",
-    payload_class_path="agentserver.primitives.TodoComplete",
-    handler_path="agentserver.primitives.handle_todo_complete",
+    payload_class_path="xml_pipeline.primitives.TodoComplete",
+    handler_path="xml_pipeline.primitives.handle_todo_complete",
description="System todo handler - closes watchers",
is_agent=False,
payload_class=TodoComplete,
@@ -859,7 +859,7 @@ async def bootstrap(config_path: str = "config/organism.yaml") -> StreamPump:
# Configure LLM router if llm section present
if config.llm_config:
-    from agentserver.llm import configure_router
+    from xml_pipeline.llm import configure_router
configure_router(config.llm_config)
print(f"LLM backends: {len(config.llm_config.get('backends', []))}")


@@ -30,7 +30,7 @@ from typing import TYPE_CHECKING, Optional, Callable, Any
if TYPE_CHECKING:
from .stream_pump import StreamPump
-from agentserver.primitives.text_input import TextInput, TextOutput
+from xml_pipeline.primitives.text_input import TextInput, TextOutput
logger = logging.getLogger(__name__)


@@ -10,13 +10,13 @@ Agents are sandboxed. They receive messages and return responses.
They cannot see or modify prompts, and cannot directly access the LLM.
"""
-from agentserver.platform.prompt_registry import (
+from xml_pipeline.platform.prompt_registry import (
PromptRegistry,
AgentPrompt,
get_prompt_registry,
)
-from agentserver.platform.llm_api import (
+from xml_pipeline.platform.llm_api import (
complete,
platform_complete,
)


@@ -12,7 +12,7 @@ Design principles:
- Rate-limited: platform controls costs
Usage (from handler):
-    from agentserver.platform import complete
+    from xml_pipeline.platform import complete
async def handle_greeting(payload, metadata):
response = await complete(
@@ -32,8 +32,8 @@ from __future__ import annotations
import logging
from typing import Any, Dict, List, Optional
-from agentserver.platform.prompt_registry import get_prompt_registry
-from agentserver.memory import get_context_buffer
+from xml_pipeline.platform.prompt_registry import get_prompt_registry
+from xml_pipeline.memory import get_context_buffer
logger = logging.getLogger(__name__)
@@ -118,7 +118,7 @@ async def complete(
# Make LLM call via router
try:
-        from agentserver.llm import complete as llm_complete
+        from xml_pipeline.llm import complete as llm_complete
# Use model from kwargs or default
model = kwargs.pop("model", "grok-3-mini-beta")


@@ -5,8 +5,8 @@ These are not user-defined listeners but system-level messages that
establish context, handle errors, and manage the organism lifecycle.
"""
-from agentserver.primitives.boot import Boot, handle_boot
-from agentserver.primitives.todo import (
+from xml_pipeline.primitives.boot import Boot, handle_boot
+from xml_pipeline.primitives.todo import (
TodoUntil,
TodoComplete,
TodoRegistered,
@@ -14,7 +14,7 @@ from agentserver.primitives.todo import (
handle_todo_until,
handle_todo_complete,
)
-from agentserver.primitives.text_input import TextInput, TextOutput
+from xml_pipeline.primitives.text_input import TextInput, TextOutput
__all__ = [
"Boot",

Some files were not shown because too many files have changed in this diff.