Archive obsolete docs and misc cleanup

- Move lsp-integration.md and secure-console-v3.md to docs/archive-obsolete/
  (these features are now in the Nextra SaaS product)
- Update CLAUDE.md with current project state
- Simplify run_organism.py
- Fix test fixtures for shared backend compatibility
- Minor handler and llm_api cleanups

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
dullfig 2026-01-20 20:20:10 -08:00
parent 6790c7a46c
commit c01428260c
8 changed files with 80 additions and 160 deletions


@@ -2,7 +2,7 @@
A tamper-proof nervous system for multi-agent AI systems using XML as the sovereign wire format. AgentServer provides a schema-driven, Turing-complete message bus where agents communicate through validated XML payloads, with automatic XSD generation, handler isolation, and built-in security guarantees against agent misbehavior.
**Version:** 0.2.0
**Version:** 0.4.0
## Tech Stack
@@ -14,10 +14,11 @@ A tamper-proof nervous system for multi-agent AI systems using XML as the sovere
| Serialization | xmlable | vendored | Dataclass ↔ XML round-trip with auto-XSD |
| Config | PyYAML | Latest | Organism configuration (organism.yaml) |
| Crypto | cryptography | Latest | Ed25519 identity keys for signing |
| Console | prompt_toolkit | 3.0+ | Interactive TUI console |
| HTTP | httpx | 0.27+ | LLM backend communication |
| Case conversion | pyhumps | Latest | Snake/camel case conversion |
> **Note:** TUI console, authentication, and WebSocket server are available in the Nextra SaaS product.
## Quick Start
```bash
@@ -38,19 +39,19 @@ pip install -e ".[all]"
# Or minimal install + specific features
pip install -e "." # Core only
pip install -e ".[anthropic]" # + Anthropic SDK
pip install -e ".[server]" # + WebSocket server
# Configure environment
cp .env.example .env
# Edit .env to add your API keys (XAI_API_KEY, ANTHROPIC_API_KEY, etc.)
# Run the organism
python run_organism.py config/organism.yaml
# Or use CLI
xml-pipeline run config/organism.yaml
xp run config/organism.yaml # Short alias
# Try the console example
pip install -e ".[console]"
python -m examples.console
# Run tests
pip install -e ".[test]"
pytest tests/ -v
@@ -61,9 +62,7 @@ pytest tests/ -v
```
xml-pipeline/
├── xml_pipeline/ # Main package
│ ├── auth/ # Authentication (TOTP, sessions, users)
│ ├── config/ # Config loading and templates
│ ├── console/ # TUI console and secure console
│ ├── listeners/ # Listener implementations and examples
│ ├── llm/ # LLM router, backends, token bucket
│ ├── memory/ # Context buffer for conversation history
@@ -77,21 +76,21 @@ xml-pipeline/
│ ├── primitives/ # System message types (Boot, TodoUntil, etc.)
│ ├── prompts/ # System prompts (no_paperclippers, etc.)
│ ├── schema/ # XSD schema files
│ ├── server/ # HTTP/WebSocket server
│ ├── tools/ # Native tools (files, shell, search, etc.)
│ └── utils/ # Shared utilities
├── config/ # Example organism configurations
├── docs/ # Architecture and design docs
├── examples/ # Example MCP servers and integrations
├── examples/ # Example console and integrations
├── handlers/ # Example message handlers
├── tests/ # pytest test suite
├── third_party/ # Vendored dependencies
│ └── xmlable/ # XML serialization library
├── pyproject.toml # Project metadata and dependencies
├── run_organism.py # Main entry point with TUI
└── organism.yaml # Default organism config (if present)
└── pyproject.toml # Project metadata and dependencies
```
> **Note:** Authentication (`auth/`), TUI console (`console/`), and WebSocket server (`server/`)
> are available in the Nextra SaaS product.
## Architecture Overview
AgentServer implements a stream-based message pump where all communication flows through validated XML envelopes. The architecture enforces strict isolation between handlers (untrusted code) and the system (trusted zone).
@@ -192,8 +191,7 @@ async def handle_greeting(payload: Greeting, metadata: HandlerMetadata) -> Handl
| `xml-pipeline check [config]` | Validate config without running |
| `xml-pipeline version` | Show version and installed features |
| `xp run [config]` | Short alias for xml-pipeline run |
| `python run_organism.py [config]` | Run with TUI console |
| `python run_organism.py --simple [config]` | Run with simple console |
| `python -m examples.console` | Run interactive console example |
| `pytest tests/ -v` | Run test suite |
| `pytest tests/test_pipeline_steps.py -v` | Run specific test file |
@@ -304,9 +302,8 @@ pip install xml-pipeline[openai] # OpenAI SDK
pip install xml-pipeline[redis] # Distributed key-value store
pip install xml-pipeline[search] # DuckDuckGo search
# Server features
pip install xml-pipeline[auth] # TOTP + Argon2 authentication
pip install xml-pipeline[server] # WebSocket server
# Console example
pip install xml-pipeline[console] # prompt_toolkit for examples
# Everything
pip install xml-pipeline[all]
@@ -315,6 +312,8 @@ pip install xml-pipeline[all]
pip install xml-pipeline[dev]
```
> **Note:** Authentication and WebSocket server features are available in the Nextra SaaS product.
## Native Tools
The project includes built-in tool implementations in `xml_pipeline/tools/`:
@@ -348,15 +347,15 @@ Built-in message types in `xml_pipeline/primitives/`:
- @docs/message-pump-v2.1.md — Message pump implementation details
- @docs/handler-contract-v2.1.md — Handler interface specification
- @docs/llm-router-v2.1.md — LLM backend abstraction
- @docs/secure-console-v3.md — Console and authentication
- @docs/platform-architecture.md — Platform-level APIs
- @docs/native_tools.md — Native tool implementations
- @docs/primitives.md — System primitives reference (includes thread lifecycle)
- @docs/configuration.md — Organism configuration reference
- @docs/lsp-integration.md — LSP editor support for YAML and AssemblyScript
- @docs/split-config.md — Split configuration architecture
- @docs/why-not-json.md — Rationale for XML over JSON
> **Note:** Console, authentication, and LSP integration documentation is in the Nextra project.
## Skill Usage Guide
@@ -370,7 +369,6 @@ When working on tasks involving these technologies, invoke the corresponding ski
| cryptography | Implements Ed25519 identity keys for signing and federation auth |
| httpx | Handles async HTTP requests for LLM backend communication |
| aiostream | Implements stream-based message pipeline with concurrent fan-out processing |
| prompt-toolkit | Builds interactive TUI console with password input and command history |
| lxml | Handles XML processing, XSD validation, C14N normalization, and repair |
| python | Manages async-first Python 3.11+ codebase with type hints and dataclasses |
| pytest | Runs async test suite with pytest-asyncio fixtures and markers |


@@ -124,14 +124,6 @@ async def handle_response_print(payload: ShoutedResponse, metadata: HandlerMetad
"""
Print the final response to the console.
Routes output to the TUI console if available, otherwise prints to stdout.
Note: TUI console is available in Nextra. This handler uses simple stdout.
"""
from xml_pipeline.console.console_registry import get_console
console = get_console()
if console is not None and hasattr(console, 'on_response'):
console.on_response("shouter", payload)
else:
# Fallback for simple mode or no console
print(f"\033[36m[response] {payload.message}\033[0m")
print(f"\033[36m[response] {payload.message}\033[0m")
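After the cleanup, only the stdout fallback branch survives. A minimal stand-alone sketch of the resulting handler, assuming `ShoutedResponse` carries at least a `message` field (the real dataclass and `HandlerMetadata` live in the project; this stand-in is for illustration only):

```python
from dataclasses import dataclass

@dataclass
class ShoutedResponse:
    # Assumption: the real ShoutedResponse has at least this field.
    message: str

async def handle_response_print(payload: ShoutedResponse, metadata=None) -> None:
    """Print the final response to stdout (the TUI console now lives in Nextra)."""
    # ANSI cyan, matching the fallback branch kept by this commit.
    print(f"\033[36m[response] {payload.message}\033[0m")
```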


@@ -1,93 +1,58 @@
#!/usr/bin/env python3
"""
run_organism.py - Start the organism with TUI console.
run_organism.py - Deprecated entry point.
Usage:
python run_organism.py [config.yaml]
python run_organism.py --simple [config.yaml] # Use simple console
The TUI console and server have been moved to the Nextra SaaS product.
This file is kept for backwards compatibility but will display a helpful message.
This boots the organism with a split-screen terminal UI:
- Scrolling output area above
- Status bar separator
- Input area below
For the open-source xml-pipeline, use the CLI or programmatic API:
Flow:
1. Bootstrap organism
2. Start pump in background
3. Run TUI console
4. /quit shuts down gracefully
# CLI
xml-pipeline run config/organism.yaml
# Programmatic
from xml_pipeline.message_bus import bootstrap
pump = await bootstrap("organism.yaml")
await pump.run()
# Interactive console example
pip install xml-pipeline[console]
python -m examples.console
For the full TUI console with authentication and WebSocket server,
see the Nextra project.
"""
import asyncio
import sys
from pathlib import Path
from xml_pipeline.message_bus import bootstrap
from xml_pipeline.console.console_registry import set_console
async def run_organism(config_path: str = "config/organism.yaml", use_simple: bool = False):
"""Boot organism with TUI console."""
def main() -> None:
"""Show deprecation message and exit."""
print("""
xml-pipeline: TUI Console Moved to Nextra
==========================================
# Bootstrap the pump
pump = await bootstrap(config_path)
The interactive TUI console with authentication and WebSocket server
has been moved to the Nextra SaaS product (v0.4.0).
if use_simple:
# Use old SecureConsole for compatibility
from xml_pipeline.console import SecureConsole
console = SecureConsole(pump)
if not await console.authenticate():
print("Authentication failed.")
return
set_console(None)
For the open-source xml-pipeline, use:
pump_task = asyncio.create_task(pump.run())
try:
await console.run_command_loop()
finally:
pump_task.cancel()
try:
await pump_task
except asyncio.CancelledError:
pass
await pump.shutdown()
print("Goodbye!")
else:
# Use new TUI console
from xml_pipeline.console.tui_console import TUIConsole
console = TUIConsole(pump)
set_console(console) # Register for handlers to find
1. CLI command:
xml-pipeline run config/organism.yaml
# Start pump in background
pump_task = asyncio.create_task(pump.run())
2. Programmatic API:
from xml_pipeline.message_bus import bootstrap
pump = await bootstrap("organism.yaml")
await pump.run()
try:
await console.run()
finally:
pump_task.cancel()
try:
await pump_task
except asyncio.CancelledError:
pass
await pump.shutdown()
3. Console example (for testing):
pip install xml-pipeline[console]
python -m examples.console
def main():
args = sys.argv[1:]
use_simple = "--simple" in args
if use_simple:
args.remove("--simple")
config_path = args[0] if args else "config/organism.yaml"
if not Path(config_path).exists():
print(f"Config not found: {config_path}")
sys.exit(1)
try:
asyncio.run(run_organism(config_path, use_simple=use_simple))
except KeyboardInterrupt:
print("\nInterrupted")
For full TUI console, authentication, and WebSocket server features,
see the Nextra project.
""")
sys.exit(1)
if __name__ == "__main__":
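The replacement run_organism.py reduces to a deprecation stub that prints guidance and exits non-zero. A minimal sketch of that pattern, with the message text abbreviated (the real stub's wording is in the diff above):

```python
#!/usr/bin/env python3
"""Deprecated entry point: the TUI console moved to the Nextra SaaS product."""
import sys

def main() -> None:
    """Show a deprecation message and exit with a non-zero status."""
    print(
        "run_organism.py is deprecated.\n"
        "Use the CLI instead: xml-pipeline run config/organism.yaml"
    )
    # Non-zero exit so scripts that still invoke this file fail loudly.
    sys.exit(1)

# A real script would end with:
#     if __name__ == "__main__":
#         main()
```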


@@ -147,15 +147,8 @@ class TestFullPipelineFlow:
handler_calls = []
original_handler = pump.listeners["greeter"].handler
# Mock the LLM call since we don't have a real API key in tests
from xml_pipeline.llm.backend import LLMResponse
mock_response = LLMResponse(
content="Hello, World!",
model="mock",
usage={"total_tokens": 10},
finish_reason="stop",
)
# Mock platform.complete since handle_greeting uses platform API
mock_response = "Hello, World!"
async def tracking_handler(payload, metadata):
handler_calls.append((payload, metadata))
@@ -164,7 +157,7 @@
pump.listeners["greeter"].handler = tracking_handler
with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_response)):
with patch('xml_pipeline.platform.complete', new=AsyncMock(return_value=mock_response)):
# Create and inject a Greeting message
thread_id = str(uuid.uuid4())
envelope = make_envelope(
@@ -235,17 +228,10 @@ class TestFullPipelineFlow:
pump._reinject_responses = capture_reinject
# Mock the LLM call since we don't have a real API key in tests
from xml_pipeline.llm.backend import LLMResponse
# Mock platform.complete since handle_greeting uses platform API (not llm directly)
mock_response = "Hello, Alice!"
mock_response = LLMResponse(
content="Hello, Alice!",
model="mock",
usage={"total_tokens": 10},
finish_reason="stop",
)
with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_response)):
with patch('xml_pipeline.platform.complete', new=AsyncMock(return_value=mock_response)):
# Inject a Greeting
thread_id = str(uuid.uuid4())
envelope = make_envelope(
@@ -479,13 +465,8 @@ class TestThreadRoutingFlow:
pump.listeners["shouter"].handler = trace_shouter
pump.listeners["response-handler"].handler = trace_response
# Mock LLM response
mock_llm = LLMResponse(
content="Hello there, friend!",
model="mock",
usage={"total_tokens": 10},
finish_reason="stop",
)
# Mock platform.complete since handle_greeting uses platform API
mock_response = "Hello there, friend!"
# Capture final output (response-handler sends to console, but console isn't registered)
final_outputs = []
@@ -498,7 +479,7 @@
pump._reinject_responses = capture_reinject
with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_llm)):
with patch('xml_pipeline.platform.complete', new=AsyncMock(return_value=mock_response)):
# Inject ConsoleInput (simulating: user typed "@greeter TestUser")
# Note: xmlify converts field names to PascalCase for XML elements
thread_id = str(uuid.uuid4())
@@ -642,15 +623,10 @@ class TestThreadRoutingFlow:
pass
pump._reinject_responses = noop_reinject
# Mock LLM
mock_llm = LLMResponse(
content="Hello!",
model="mock",
usage={"total_tokens": 5},
finish_reason="stop",
)
# Mock platform.complete since handle_greeting uses platform API
mock_response = "Hello!"
with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_llm)):
with patch('xml_pipeline.platform.complete', new=AsyncMock(return_value=mock_response)):
# Inject initial message
thread_id = str(uuid.uuid4())
envelope = make_envelope(
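The fixture change visible above swaps `llm.complete` mocks (which had to build an `LLMResponse` object) for `platform.complete` mocks that return a plain string. A self-contained sketch of the new pattern, using a stand-in module in place of the real `xml_pipeline.platform` (the dotted patch target in the actual tests is `xml_pipeline.platform.complete`):

```python
import asyncio
import sys
import types
from unittest.mock import AsyncMock, patch

# Stand-in module for xml_pipeline.platform; registered in sys.modules so
# that patch() can resolve it by name.
platform = types.ModuleType("fake_platform")

async def _unconfigured_complete(prompt: str) -> str:
    # Simulates "no real API key available in tests".
    raise RuntimeError("no API key available in tests")

platform.complete = _unconfigured_complete
sys.modules["fake_platform"] = platform

async def handle_greeting(name: str) -> str:
    # The handler resolves complete() through the module object at call
    # time, so patching the module attribute is picked up.
    return await sys.modules["fake_platform"].complete(f"Greet {name}")

def test_greeting_uses_mocked_platform() -> None:
    # New pattern: AsyncMock returning a plain string, no LLMResponse.
    with patch("fake_platform.complete", new=AsyncMock(return_value="Hello, World!")):
        result = asyncio.run(handle_greeting("World"))
    assert result == "Hello, World!"
```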


@@ -411,7 +411,6 @@ class TestGreeterTodoFlow:
"""
from handlers.hello import Greeting, GreetingResponse, handle_greeting
from handlers.console import ShoutedResponse
from xml_pipeline.llm.backend import LLMResponse
# Clear registry
todo_registry = get_todo_registry()
@@ -419,15 +418,10 @@ class TestGreeterTodoFlow:
thread_id = str(uuid.uuid4())
# Mock LLM
mock_llm = LLMResponse(
content="Hello there!",
model="mock",
usage={"total_tokens": 5},
finish_reason="stop",
)
# Mock platform.complete (not llm.complete) since handle_greeting uses platform API
mock_response = "Hello there!"
with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_llm)):
with patch('xml_pipeline.platform.complete', new=AsyncMock(return_value=mock_response)):
# Call greeter handler
metadata = HandlerMetadata(
thread_id=thread_id,
@@ -466,7 +460,6 @@ class TestGreeterTodoFlow:
When greeter is called again with raised todos, it should close them.
"""
from handlers.hello import Greeting, GreetingResponse, handle_greeting
from xml_pipeline.llm.backend import LLMResponse
# Clear registry
todo_registry = get_todo_registry()
@@ -485,19 +478,14 @@ class TestGreeterTodoFlow:
# Verify eyebrow is raised
assert todo_registry._by_id[watcher_id].eyebrow_raised is True
# Mock LLM
mock_llm = LLMResponse(
content="Hello again!",
model="mock",
usage={"total_tokens": 5},
finish_reason="stop",
)
# Mock platform.complete (not llm.complete) since handle_greeting uses platform API
mock_response = "Hello again!"
# Format the nudge as the pump would
raised = todo_registry.get_raised_for(thread_id, "greeter")
nudge = todo_registry.format_nudge(raised)
with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_llm)):
with patch('xml_pipeline.platform.complete', new=AsyncMock(return_value=mock_response)):
# Call greeter with the nudge
metadata = HandlerMetadata(
thread_id=thread_id,


@@ -99,7 +99,8 @@ async def complete(
context_buffer = get_context_buffer()
history = context_buffer.get_thread(thread_id)
for slot in history:
# get_thread returns None if thread doesn't exist yet
for slot in history or []:
# Determine role: assistant if from this agent, user otherwise
role = "assistant" if slot.from_id == agent_name else "user"
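The llm_api fix guards the history loop against `get_thread` returning `None` for a thread that has no history yet. A minimal stand-alone sketch of the pattern; `ContextBuffer` and `Slot` here are simplified stand-ins for the real classes:

```python
from dataclasses import dataclass, field

@dataclass
class Slot:
    # Assumption: real slots carry at least a sender id and content.
    from_id: str
    content: str

@dataclass
class ContextBuffer:
    threads: dict = field(default_factory=dict)

    def get_thread(self, thread_id: str):
        # Returns None if the thread doesn't exist yet.
        return self.threads.get(thread_id)

def build_messages(buffer: ContextBuffer, thread_id: str, agent_name: str) -> list:
    messages = []
    # `or []` makes a brand-new thread iterate zero times instead of
    # raising TypeError on None.
    for slot in buffer.get_thread(thread_id) or []:
        # Role: assistant if the slot came from this agent, user otherwise.
        role = "assistant" if slot.from_id == agent_name else "user"
        messages.append({"role": role, "content": slot.content})
    return messages
```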