Archive obsolete docs and misc cleanup

- Move lsp-integration.md and secure-console-v3.md to docs/archive-obsolete/
  (these features are now in the Nextra SaaS product)
- Update CLAUDE.md with current project state
- Simplify run_organism.py
- Fix test fixtures for shared backend compatibility
- Minor handler and llm_api cleanups

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
dullfig 2026-01-20 20:20:10 -08:00
parent 6790c7a46c
commit c01428260c
8 changed files with 80 additions and 160 deletions

View file

@@ -2,7 +2,7 @@
 A tamper-proof nervous system for multi-agent AI systems using XML as the sovereign wire format. AgentServer provides a schema-driven, Turing-complete message bus where agents communicate through validated XML payloads, with automatic XSD generation, handler isolation, and built-in security guarantees against agent misbehavior.
-**Version:** 0.2.0
+**Version:** 0.4.0
 ## Tech Stack
@@ -14,10 +14,11 @@ A tamper-proof nervous system for multi-agent AI systems using XML as the sovere
 | Serialization | xmlable | vendored | Dataclass ↔ XML round-trip with auto-XSD |
 | Config | PyYAML | Latest | Organism configuration (organism.yaml) |
 | Crypto | cryptography | Latest | Ed25519 identity keys for signing |
-| Console | prompt_toolkit | 3.0+ | Interactive TUI console |
 | HTTP | httpx | 0.27+ | LLM backend communication |
 | Case conversion | pyhumps | Latest | Snake/camel case conversion |
+> **Note:** TUI console, authentication, and WebSocket server are available in the Nextra SaaS product.
 ## Quick Start
 ```bash
@@ -38,19 +39,19 @@ pip install -e ".[all]"
 # Or minimal install + specific features
 pip install -e "." # Core only
 pip install -e ".[anthropic]" # + Anthropic SDK
-pip install -e ".[server]" # + WebSocket server
 # Configure environment
 cp .env.example .env
 # Edit .env to add your API keys (XAI_API_KEY, ANTHROPIC_API_KEY, etc.)
 # Run the organism
-python run_organism.py config/organism.yaml
-# Or use CLI
 xml-pipeline run config/organism.yaml
 xp run config/organism.yaml # Short alias
+# Try the console example
+pip install -e ".[console]"
+python -m examples.console
 # Run tests
 pip install -e ".[test]"
 pytest tests/ -v
@@ -61,9 +62,7 @@ pytest tests/ -v
 ```
 xml-pipeline/
 ├── xml_pipeline/ # Main package
-│ ├── auth/ # Authentication (TOTP, sessions, users)
 │ ├── config/ # Config loading and templates
-│ ├── console/ # TUI console and secure console
 │ ├── listeners/ # Listener implementations and examples
 │ ├── llm/ # LLM router, backends, token bucket
 │ ├── memory/ # Context buffer for conversation history
@@ -77,21 +76,21 @@ xml-pipeline/
 │ ├── primitives/ # System message types (Boot, TodoUntil, etc.)
 │ ├── prompts/ # System prompts (no_paperclippers, etc.)
 │ ├── schema/ # XSD schema files
-│ ├── server/ # HTTP/WebSocket server
 │ ├── tools/ # Native tools (files, shell, search, etc.)
 │ └── utils/ # Shared utilities
 ├── config/ # Example organism configurations
 ├── docs/ # Architecture and design docs
-├── examples/ # Example MCP servers and integrations
+├── examples/ # Example console and integrations
 ├── handlers/ # Example message handlers
 ├── tests/ # pytest test suite
 ├── third_party/ # Vendored dependencies
 │ └── xmlable/ # XML serialization library
-├── pyproject.toml # Project metadata and dependencies
-├── run_organism.py # Main entry point with TUI
-└── organism.yaml # Default organism config (if present)
+└── pyproject.toml # Project metadata and dependencies
 ```
+> **Note:** Authentication (`auth/`), TUI console (`console/`), and WebSocket server (`server/`)
+> are available in the Nextra SaaS product.
 ## Architecture Overview
 AgentServer implements a stream-based message pump where all communication flows through validated XML envelopes. The architecture enforces strict isolation between handlers (untrusted code) and the system (trusted zone).
@@ -192,8 +191,7 @@
 | `xml-pipeline check [config]` | Validate config without running |
 | `xml-pipeline version` | Show version and installed features |
 | `xp run [config]` | Short alias for xml-pipeline run |
-| `python run_organism.py [config]` | Run with TUI console |
-| `python run_organism.py --simple [config]` | Run with simple console |
+| `python -m examples.console` | Run interactive console example |
 | `pytest tests/ -v` | Run test suite |
 | `pytest tests/test_pipeline_steps.py -v` | Run specific test file |
@@ -304,9 +302,8 @@ pip install xml-pipeline[openai] # OpenAI SDK
 pip install xml-pipeline[redis] # Distributed key-value store
 pip install xml-pipeline[search] # DuckDuckGo search
-# Server features
-pip install xml-pipeline[auth] # TOTP + Argon2 authentication
-pip install xml-pipeline[server] # WebSocket server
+# Console example
+pip install xml-pipeline[console] # prompt_toolkit for examples
 # Everything
 pip install xml-pipeline[all]
@@ -315,6 +312,8 @@ pip install xml-pipeline[all]
 pip install xml-pipeline[dev]
 ```
+> **Note:** Authentication and WebSocket server features are available in the Nextra SaaS product.
 ## Native Tools
 The project includes built-in tool implementations in `xml_pipeline/tools/`:
@@ -348,15 +347,15 @@ Built-in message types in `xml_pipeline/primitives/`:
 - @docs/message-pump-v2.1.md — Message pump implementation details
 - @docs/handler-contract-v2.1.md — Handler interface specification
 - @docs/llm-router-v2.1.md — LLM backend abstraction
-- @docs/secure-console-v3.md — Console and authentication
 - @docs/platform-architecture.md — Platform-level APIs
 - @docs/native_tools.md — Native tool implementations
 - @docs/primitives.md — System primitives reference (includes thread lifecycle)
 - @docs/configuration.md — Organism configuration reference
-- @docs/lsp-integration.md — LSP editor support for YAML and AssemblyScript
 - @docs/split-config.md — Split configuration architecture
 - @docs/why-not-json.md — Rationale for XML over JSON
+> **Note:** Console, authentication, and LSP integration documentation is in the Nextra project.
 ## Skill Usage Guide
@@ -370,7 +369,6 @@ When working on tasks involving these technologies, invoke the corresponding ski
 | cryptography | Implements Ed25519 identity keys for signing and federation auth |
 | httpx | Handles async HTTP requests for LLM backend communication |
 | aiostream | Implements stream-based message pipeline with concurrent fan-out processing |
-| prompt-toolkit | Builds interactive TUI console with password input and command history |
 | lxml | Handles XML processing, XSD validation, C14N normalization, and repair |
 | python | Manages async-first Python 3.11+ codebase with type hints and dataclasses |
 | pytest | Runs async test suite with pytest-asyncio fixtures and markers |

View file

@@ -124,14 +124,6 @@ async def handle_response_print(payload: ShoutedResponse, metadata: HandlerMetad
     """
     Print the final response to the console.
-    Routes output to the TUI console if available, otherwise prints to stdout.
-    """
-    from xml_pipeline.console.console_registry import get_console
-    console = get_console()
-    if console is not None and hasattr(console, 'on_response'):
-        console.on_response("shouter", payload)
-    else:
-        # Fallback for simple mode or no console
-        print(f"\033[36m[response] {payload.message}\033[0m")
+    Note: TUI console is available in Nextra. This handler uses simple stdout.
+    """
+    print(f"\033[36m[response] {payload.message}\033[0m")
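The stdout fallback relies on ANSI SGR escape sequences: `\033[36m` switches the terminal to cyan and `\033[0m` resets it. A minimal standalone sketch of the same formatting (the `format_response` helper is illustrative, not part of the project):

```python
# ANSI SGR escape sequences: ESC[36m selects cyan foreground, ESC[0m resets.
CYAN = "\033[36m"
RESET = "\033[0m"

def format_response(message: str) -> str:
    # Mirrors the handler's output: a cyan [response] tag followed by the text.
    return f"{CYAN}[response] {message}{RESET}"

formatted = format_response("Hello, World!")
print(formatted)
```

On terminals without ANSI support the escape bytes print as-is, which is why richer output lived in the TUI console.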

View file

@@ -1,93 +1,58 @@
 #!/usr/bin/env python3
 """
-run_organism.py Start the organism with TUI console.
-Usage:
-    python run_organism.py [config.yaml]
-    python run_organism.py --simple [config.yaml] # Use simple console
-This boots the organism with a split-screen terminal UI:
-- Scrolling output area above
-- Status bar separator
-- Input area below
-Flow:
-1. Bootstrap organism
-2. Start pump in background
-3. Run TUI console
-4. /quit shuts down gracefully
+run_organism.py Deprecated entry point.
+The TUI console and server have been moved to the Nextra SaaS product.
+This file is kept for backwards compatibility but will display a helpful message.
+For the open-source xml-pipeline, use the CLI or programmatic API:
+    # CLI
+    xml-pipeline run config/organism.yaml
+    # Programmatic
+    from xml_pipeline.message_bus import bootstrap
+    pump = await bootstrap("organism.yaml")
+    await pump.run()
+    # Interactive console example
+    pip install xml-pipeline[console]
+    python -m examples.console
+For the full TUI console with authentication and WebSocket server,
+see the Nextra project.
 """
-import asyncio
 import sys
-from pathlib import Path
-from xml_pipeline.message_bus import bootstrap
-from xml_pipeline.console.console_registry import set_console
-async def run_organism(config_path: str = "config/organism.yaml", use_simple: bool = False):
-    """Boot organism with TUI console."""
-    # Bootstrap the pump
-    pump = await bootstrap(config_path)
-    if use_simple:
-        # Use old SecureConsole for compatibility
-        from xml_pipeline.console import SecureConsole
-        console = SecureConsole(pump)
-        if not await console.authenticate():
-            print("Authentication failed.")
-            return
-        set_console(None)
-        pump_task = asyncio.create_task(pump.run())
-        try:
-            await console.run_command_loop()
-        finally:
-            pump_task.cancel()
-            try:
-                await pump_task
-            except asyncio.CancelledError:
-                pass
-            await pump.shutdown()
-        print("Goodbye!")
-    else:
-        # Use new TUI console
-        from xml_pipeline.console.tui_console import TUIConsole
-        console = TUIConsole(pump)
-        set_console(console) # Register for handlers to find
-        # Start pump in background
-        pump_task = asyncio.create_task(pump.run())
-        try:
-            await console.run()
-        finally:
-            pump_task.cancel()
-            try:
-                await pump_task
-            except asyncio.CancelledError:
-                pass
-            await pump.shutdown()
-def main():
-    args = sys.argv[1:]
-    use_simple = "--simple" in args
-    if use_simple:
-        args.remove("--simple")
-    config_path = args[0] if args else "config/organism.yaml"
-    if not Path(config_path).exists():
-        print(f"Config not found: {config_path}")
-        sys.exit(1)
-    try:
-        asyncio.run(run_organism(config_path, use_simple=use_simple))
-    except KeyboardInterrupt:
-        print("\nInterrupted")
+def main() -> None:
+    """Show deprecation message and exit."""
+    print("""
+xml-pipeline: TUI Console Moved to Nextra
+==========================================
+The interactive TUI console with authentication and WebSocket server
+has been moved to the Nextra SaaS product (v0.4.0).
+For the open-source xml-pipeline, use:
+1. CLI command:
+   xml-pipeline run config/organism.yaml
+2. Programmatic API:
+   from xml_pipeline.message_bus import bootstrap
+   pump = await bootstrap("organism.yaml")
+   await pump.run()
+3. Console example (for testing):
+   pip install xml-pipeline[console]
+   python -m examples.console
+For full TUI console, authentication, and WebSocket server features,
+see the Nextra project.
+""")
+    sys.exit(1)
 if __name__ == "__main__":
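The replacement follows a common pattern for retiring an entry point: print migration guidance and exit non-zero so any script still invoking the old file fails loudly rather than silently doing nothing. A generic sketch of the pattern (the message text and tool name here are illustrative):

```python
import sys

def main() -> None:
    # Guidance goes to stderr; the non-zero exit makes automation notice.
    print("this-script is deprecated; use the `newtool run` CLI instead.",
          file=sys.stderr)
    sys.exit(1)

# sys.exit(1) raises SystemExit carrying the status code, which is what
# the shell (or a wrapper) observes as the process exit status.
try:
    main()
except SystemExit as exc:
    status = exc.code
print(f"exit status: {status}")
```

Exiting with status 1 instead of 0 is the key design choice: a clean exit would let CI jobs keep "passing" against a stub that no longer does anything.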

View file

@@ -147,15 +147,8 @@ class TestFullPipelineFlow:
         handler_calls = []
         original_handler = pump.listeners["greeter"].handler
-        # Mock the LLM call since we don't have a real API key in tests
-        from xml_pipeline.llm.backend import LLMResponse
-        mock_response = LLMResponse(
-            content="Hello, World!",
-            model="mock",
-            usage={"total_tokens": 10},
-            finish_reason="stop",
-        )
+        # Mock platform.complete since handle_greeting uses platform API
+        mock_response = "Hello, World!"
         async def tracking_handler(payload, metadata):
             handler_calls.append((payload, metadata))
@@ -164,7 +157,7 @@ class TestFullPipelineFlow:
         pump.listeners["greeter"].handler = tracking_handler
-        with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_response)):
+        with patch('xml_pipeline.platform.complete', new=AsyncMock(return_value=mock_response)):
             # Create and inject a Greeting message
             thread_id = str(uuid.uuid4())
             envelope = make_envelope(
@@ -235,17 +228,10 @@ class TestFullPipelineFlow:
         pump._reinject_responses = capture_reinject
-        # Mock the LLM call since we don't have a real API key in tests
-        from xml_pipeline.llm.backend import LLMResponse
-        mock_response = LLMResponse(
-            content="Hello, Alice!",
-            model="mock",
-            usage={"total_tokens": 10},
-            finish_reason="stop",
-        )
-        with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_response)):
+        # Mock platform.complete since handle_greeting uses platform API (not llm directly)
+        mock_response = "Hello, Alice!"
+        with patch('xml_pipeline.platform.complete', new=AsyncMock(return_value=mock_response)):
             # Inject a Greeting
             thread_id = str(uuid.uuid4())
             envelope = make_envelope(
@@ -479,13 +465,8 @@ class TestThreadRoutingFlow:
         pump.listeners["shouter"].handler = trace_shouter
         pump.listeners["response-handler"].handler = trace_response
-        # Mock LLM response
-        mock_llm = LLMResponse(
-            content="Hello there, friend!",
-            model="mock",
-            usage={"total_tokens": 10},
-            finish_reason="stop",
-        )
+        # Mock platform.complete since handle_greeting uses platform API
+        mock_response = "Hello there, friend!"
         # Capture final output (response-handler sends to console, but console isn't registered)
         final_outputs = []
@@ -498,7 +479,7 @@ class TestThreadRoutingFlow:
         pump._reinject_responses = capture_reinject
-        with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_llm)):
+        with patch('xml_pipeline.platform.complete', new=AsyncMock(return_value=mock_response)):
             # Inject ConsoleInput (simulating: user typed "@greeter TestUser")
             # Note: xmlify converts field names to PascalCase for XML elements
             thread_id = str(uuid.uuid4())
@@ -642,15 +623,10 @@ class TestThreadRoutingFlow:
             pass
         pump._reinject_responses = noop_reinject
-        # Mock LLM
-        mock_llm = LLMResponse(
-            content="Hello!",
-            model="mock",
-            usage={"total_tokens": 5},
-            finish_reason="stop",
-        )
-        with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_llm)):
+        # Mock platform.complete since handle_greeting uses platform API
+        mock_response = "Hello!"
+        with patch('xml_pipeline.platform.complete', new=AsyncMock(return_value=mock_response)):
             # Inject initial message
             thread_id = str(uuid.uuid4())
             envelope = make_envelope(
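The recurring fix in these test hunks is the patch target: the handler now resolves `complete` through the platform module, so patching `xml_pipeline.llm.complete` no longer intercepts the call, and the mock's return value becomes a plain string rather than an `LLMResponse` object. The general rule — patch the name where it is looked up, not where it is defined — can be sketched standalone (the stub module and handler below are illustrative, not the project's real API):

```python
import asyncio
from types import ModuleType
from unittest.mock import AsyncMock, patch

# Stand-in "platform" module: the handler resolves complete() through it
# at call time, so this attribute is the correct patch target.
platform_stub = ModuleType("platform_stub")
platform_stub.complete = None  # a real backend call would live here

async def handle_greeting(name: str) -> str:
    # Late attribute lookup: patching platform_stub.complete intercepts this.
    return await platform_stub.complete(f"Greet {name}")

async def run_test() -> str:
    # AsyncMock makes the patched attribute awaitable; return_value is the
    # plain string the platform API is assumed to return.
    with patch.object(platform_stub, "complete",
                      new=AsyncMock(return_value="Hello, World!")):
        return await handle_greeting("World")

result = asyncio.run(run_test())
print(result)
```

Had the test patched a `complete` imported directly into the handler's namespace (`from llm import complete`), the module-level patch would miss it — which is exactly the failure mode the diff's comments warn about.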

View file

@@ -411,7 +411,6 @@ class TestGreeterTodoFlow:
         """
         from handlers.hello import Greeting, GreetingResponse, handle_greeting
         from handlers.console import ShoutedResponse
-        from xml_pipeline.llm.backend import LLMResponse
         # Clear registry
         todo_registry = get_todo_registry()
@@ -419,15 +418,10 @@ class TestGreeterTodoFlow:
         thread_id = str(uuid.uuid4())
-        # Mock LLM
-        mock_llm = LLMResponse(
-            content="Hello there!",
-            model="mock",
-            usage={"total_tokens": 5},
-            finish_reason="stop",
-        )
-        with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_llm)):
+        # Mock platform.complete (not llm.complete) since handle_greeting uses platform API
+        mock_response = "Hello there!"
+        with patch('xml_pipeline.platform.complete', new=AsyncMock(return_value=mock_response)):
             # Call greeter handler
             metadata = HandlerMetadata(
                 thread_id=thread_id,
@@ -466,7 +460,6 @@ class TestGreeterTodoFlow:
         When greeter is called again with raised todos, it should close them.
         """
         from handlers.hello import Greeting, GreetingResponse, handle_greeting
-        from xml_pipeline.llm.backend import LLMResponse
         # Clear registry
         todo_registry = get_todo_registry()
@@ -485,19 +478,14 @@ class TestGreeterTodoFlow:
         # Verify eyebrow is raised
         assert todo_registry._by_id[watcher_id].eyebrow_raised is True
-        # Mock LLM
-        mock_llm = LLMResponse(
-            content="Hello again!",
-            model="mock",
-            usage={"total_tokens": 5},
-            finish_reason="stop",
-        )
+        # Mock platform.complete (not llm.complete) since handle_greeting uses platform API
+        mock_response = "Hello again!"
         # Format the nudge as the pump would
         raised = todo_registry.get_raised_for(thread_id, "greeter")
         nudge = todo_registry.format_nudge(raised)
-        with patch('xml_pipeline.llm.complete', new=AsyncMock(return_value=mock_llm)):
+        with patch('xml_pipeline.platform.complete', new=AsyncMock(return_value=mock_response)):
             # Call greeter with the nudge
             metadata = HandlerMetadata(
                 thread_id=thread_id,

View file

@@ -99,7 +99,8 @@ async def complete(
     context_buffer = get_context_buffer()
     history = context_buffer.get_thread(thread_id)
-    for slot in history:
+    # get_thread returns None if thread doesn't exist yet
+    for slot in history or []:
         # Determine role: assistant if from this agent, user otherwise
         role = "assistant" if slot.from_id == agent_name else "user"
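The `history or []` guard is the standard Python idiom for iterating over a lookup that may return `None` instead of an empty collection; without it, the first message on a fresh thread raises `TypeError: 'NoneType' object is not iterable`. A minimal illustration of the same shape (the `ContextBuffer` class here is a stand-in, not the project's implementation):

```python
class ContextBuffer:
    """Stand-in for a thread-keyed history store whose lookup may return None."""
    def __init__(self):
        self._threads = {}

    def get_thread(self, thread_id):
        # Returns None (not an empty list) when the thread doesn't exist yet.
        return self._threads.get(thread_id)

    def append(self, thread_id, slot):
        self._threads.setdefault(thread_id, []).append(slot)

buf = ContextBuffer()

# Fresh thread: get_thread returns None, and `history or []` substitutes
# an empty list so the loop safely does nothing.
history = buf.get_thread("t-1")
count = sum(1 for _ in (history or []))
print(count)

# Once a slot exists, the same expression iterates the real list.
buf.append("t-1", "slot-a")
history = buf.get_thread("t-1")
roles = [f"user:{slot}" for slot in (history or [])]
print(roles)
```

One caveat with the idiom: `or` also swallows any other falsy value (e.g. an empty string), so it fits here only because `get_thread` returns a list or `None`.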