- UsageStore with async SQLite persistence via aiosqlite
- Background batch writer for non-blocking event persistence
- Auto-subscribes to UsageTracker for transparent capture
- Query methods: query(), get_billing_summary(), get_daily_usage()
- REST API endpoints: /usage/history, /usage/billing, /usage/daily
- Filtering by org_id, agent_id, model, time range
- 18 new tests for persistence layer
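The non-blocking capture path can be sketched roughly like this (a hypothetical stand-in using stdlib sqlite3 instead of aiosqlite; class, method, and column names are illustrative, not the real API):

```python
import asyncio
import sqlite3

class UsageStore:
    """Sketch: events are queued, then flushed in one transaction."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS usage_events "
            "(org_id TEXT, agent_id TEXT, model TEXT, tokens INTEGER, ts REAL)"
        )
        self.queue: asyncio.Queue = asyncio.Queue()

    def record(self, event):
        # Non-blocking: just enqueue; the writer task persists later.
        self.queue.put_nowait(event)

    async def flush(self):
        # Drain everything currently queued, commit as one batch.
        batch = []
        while not self.queue.empty():
            batch.append(self.queue.get_nowait())
        if batch:
            self.db.executemany(
                "INSERT INTO usage_events VALUES (?, ?, ?, ?, ?)", batch
            )
            self.db.commit()
        return len(batch)

async def demo():
    store = UsageStore()
    store.record(("acme", "planner", "gpt-4o", 120, 0.0))
    store.record(("acme", "coder", "gpt-4o", 80, 1.0))
    written = await store.flush()
    total = store.db.execute("SELECT SUM(tokens) FROM usage_events").fetchone()[0]
    return written, total
```

In the real code the flush runs on a background task so callers never block on disk I/O.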
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Create BudgetWarning primitive payload (75%, 90%, 95% thresholds)
- Add threshold tracking to ThreadBudget with triggered_thresholds set
- Change consume() to return (budget, crossed_thresholds) tuple
- Wire warning injection in LLM router when thresholds crossed
- Add 15 new tests for threshold detection and warning injection
Agents now receive BudgetWarning messages when approaching their token limit,
allowing them to plan contingencies (summarize, escalate, save state).
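The threshold bookkeeping can be sketched as follows (a minimal illustration; names mirror the commit, but the real implementation differs):

```python
THRESHOLDS = (0.75, 0.90, 0.95)

class ThreadBudget:
    """Sketch: consume() reports each crossed threshold exactly once."""

    def __init__(self, limit):
        self.limit = limit
        self.used = 0
        self.triggered_thresholds = set()  # thresholds already reported

    def consume(self, tokens):
        self.used += tokens
        crossed = []
        for t in THRESHOLDS:
            if self.used / self.limit >= t and t not in self.triggered_thresholds:
                self.triggered_thresholds.add(t)
                crossed.append(t)
        return self, crossed
```

The router would inject a BudgetWarning for each entry in `crossed`; the `triggered_thresholds` set is what keeps warnings from repeating on every call.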
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When a thread terminates (its handler returns None or the chain is exhausted),
the pump now calls budget_registry.cleanup_thread() to:
- Free memory for completed threads
- Return final budget for logging/billing
- Log token usage at debug level
This ensures budgets don't accumulate for completed conversations.
Also adds:
- has_budget() method to check if thread exists without creating
- Tests for cleanup behavior
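The cleanup and existence-check pair can be sketched like this (illustrative names; budgets reduced to plain dicts):

```python
class ThreadBudgetRegistry:
    """Sketch of cleanup-on-termination plus a non-creating check."""

    def __init__(self):
        self._budgets = {}

    def get_or_create(self, thread_id, limit=10_000):
        return self._budgets.setdefault(thread_id, {"limit": limit, "used": 0})

    def has_budget(self, thread_id):
        # Existence check that never auto-creates an entry.
        return thread_id in self._budgets

    def cleanup_thread(self, thread_id):
        # Pop frees the memory; the final budget is returned so the
        # caller can log or bill it.
        return self._budgets.pop(thread_id, None)
```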
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Endpoints:
- GET /api/v1/usage - Overview with totals, per-agent, per-model breakdown
- GET /api/v1/usage/threads - List all thread budgets sorted by usage
- GET /api/v1/usage/threads/{id} - Single thread budget details
- GET /api/v1/usage/agents/{id} - Usage totals for specific agent
- GET /api/v1/usage/models/{model} - Usage totals for specific model
- POST /api/v1/usage/reset - Reset all usage tracking
Models:
- UsageTotals, UsageOverview, UsageResponse
- ThreadBudgetInfo, ThreadBudgetListResponse
- AgentUsageInfo, ModelUsageInfo
Also adds has_budget() method to ThreadBudgetRegistry for checking
if a thread exists without auto-creating it.
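The aggregation behind the overview endpoint might look roughly like this pure-Python sketch (field and key names are assumptions, not the actual response models):

```python
from collections import defaultdict

def build_overview(events):
    """Roll up totals per agent and per model from raw usage events.

    events: iterable of (agent_id, model, prompt_tokens, completion_tokens).
    """
    per_agent = defaultdict(int)
    per_model = defaultdict(int)
    total = 0
    for agent_id, model, prompt, completion in events:
        tokens = prompt + completion
        per_agent[agent_id] += tokens
        per_model[model] += tokens
        total += tokens
    return {
        "total": total,
        "per_agent": dict(per_agent),
        "per_model": dict(per_model),
    }
```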
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Adds operator-only endpoints for discovering organism capabilities:
- GET /api/v1/capabilities - list all listeners
- GET /api/v1/capabilities/{name} - detailed info with schema/example
These endpoints are REST-only, for operators. Agents cannot access them;
they only know their declared peers (peer constraint isolation).
10 new tests for introspection functionality.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implements runtime configuration reload via POST /api/v1/organism/reload:
- StreamPump.reload_config() re-reads organism.yaml
- Adds new listeners, removes old ones, updates changed ones
- System listeners (system.*) are protected from removal
- ReloadEvent emitted to notify WebSocket subscribers
- ServerState.reload_config() refreshes agent runtime state
14 new tests covering add/remove/update scenarios.
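The add/remove/update diff can be sketched as follows (listener specs reduced to plain dicts; system.* protection as described above):

```python
def diff_listeners(current: dict, desired: dict):
    """Compute the reload plan between two listener config maps."""
    # system.* listeners must survive a reload even if absent from
    # the new organism.yaml.
    protected = {name for name in current if name.startswith("system.")}
    added = set(desired) - set(current)
    removed = (set(current) - set(desired)) - protected
    updated = {
        name for name in set(current) & set(desired)
        if current[name] != desired[name]
    }
    return added, removed, updated
```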
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Update all documentation and code comments to reference OpenBlox
(https://openblox.ai) instead of Nextra.
Also updated references to reflect that WebSocket server is now
part of the OSS core (added in previous commit).
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implements the AgentServer API from docs/agentserver_api_spec.md:
REST API (/api/v1):
- Organism info and config endpoints
- Agent listing, details, config, schema
- Thread and message history with filtering
- Control endpoints (inject, pause, resume, kill, stop)
WebSocket:
- /ws: Main control channel with state snapshot + real-time events
- /ws/messages: Dedicated message stream with filtering
Infrastructure:
- Pydantic models with camelCase serialization
- ServerState bridges StreamPump to API
- Pump event hooks for real-time updates
- CLI 'serve' command: xml-pipeline serve [config] --port 8080
35 new tests for models, state, REST, and WebSocket.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
New table stores:
- Edge identification (from_node → to_node)
- Analysis results (confidence, level, method)
- Proposed mapping (AI-generated)
- User mapping (overrides)
- Confirmation status
Indexes:
- By flow_id for listing
- Unique on (flow_id, from_node, to_node) for upsert
This supports the edge analysis API for visual wiring in the canvas.
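Assuming SQLite, the unique index enables an upsert along these lines (schema trimmed to a few of the listed columns; anything beyond the names in the commit is illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE edge_analysis (
    flow_id   TEXT NOT NULL,
    from_node TEXT NOT NULL,
    to_node   TEXT NOT NULL,
    confidence REAL,
    confirmed  INTEGER DEFAULT 0
);
CREATE INDEX idx_edge_flow ON edge_analysis(flow_id);
CREATE UNIQUE INDEX idx_edge_unique
    ON edge_analysis(flow_id, from_node, to_node);
""")

def upsert_edge(flow_id, from_node, to_node, confidence):
    # ON CONFLICT targets the unique index, so re-analyzing an edge
    # updates the existing row instead of inserting a duplicate.
    db.execute(
        """INSERT INTO edge_analysis (flow_id, from_node, to_node, confidence)
           VALUES (?, ?, ?, ?)
           ON CONFLICT(flow_id, from_node, to_node)
           DO UPDATE SET confidence = excluded.confidence""",
        (flow_id, from_node, to_node, confidence),
    )
```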
Co-authored-by: Dan
- Runtime policy: green-only (no YOLO on yellow)
- LLM clarification flow when wiring fails
- Edge hints payload: map, constant, drop, expression
- Structured error response for LLM to resolve issues
Conservative but flexible: the LLM can provide explicit instructions
to turn yellow into green.
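The hints payload could be shaped roughly like this (the hint kinds follow the commit; the dataclass and apply logic are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class EdgeHint:
    kind: str           # one of: "map", "constant", "drop", "expression"
    field: str
    value: str = ""

def apply_hints(payload: dict, hints: list) -> dict:
    """Rewrite an edge payload according to the LLM's hints."""
    out = dict(payload)
    for h in hints:
        if h.kind == "map":
            out[h.value] = out.pop(h.field)   # rename a field
        elif h.kind == "constant":
            out[h.field] = h.value            # inject a fixed value
        elif h.kind == "drop":
            out.pop(h.field, None)            # discard a field
        # "expression" would evaluate a sandboxed expression; omitted here
    return out
```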
Co-authored-by: Dan
Implement two virtual node patterns for message flow orchestration:
- Sequence: Chains listeners in order (A→B→C), feeding each step's
output as input to the next. Uses ephemeral listeners to intercept
step results without modifying core pump behavior.
- Buffer: Fan-out to parallel worker threads with optional result
collection. Supports fire-and-forget mode (collect=False) for
non-blocking dispatch.
New files:
- sequence_registry.py / buffer_registry.py: State tracking
- sequence.py / buffer.py: Payloads and handlers
- test_sequence.py / test_buffer.py: 52 new tests
Pump additions:
- register_generic_listener(): Accept any payload type
- unregister_listener(): Cleanup ephemeral listeners
- Global singleton accessors for pump instance
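Stripped of the pump machinery, the Sequence pattern reduces to the loop below (plain callables stand in for the ephemeral listeners):

```python
def run_sequence(steps, message):
    """Chain steps A→B→C, feeding each step's output into the next.

    A step returning None terminates the chain, mirroring handler
    semantics in the pump.
    """
    for step in steps:
        message = step(message)
        if message is None:
            break
    return message
```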
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Invisible AI watchdog for every flow:
- Read-only access to context buffer
- Cannot emit messages to pipeline
- Agents have no way to detect or probe it
- Alerts via control plane (email, UI, auto-stop)
- Runs on cheap models (Mistral/Mixtral)
Watches for: endless loops, goal drift, prompt injection,
sandbox escape attempts, token budget exhaustion.
Added to Phase 2 (core safety feature).
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Rename all nextra-* files to bloxserver-*
- Replace all "Nextra" references with "BloxServer"
- Update copyright year to 2026
- Add domain: OpenBlox.ai
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Monaco editor with AI assistance for writing AssemblyScript:
- Inline completion (like Copilot)
- Chat panel (like Claude Code)
- Frontend calls LLM API directly
- Human reviews before building
Ships with Phase 3 (Monaco integration).
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The assistant is itself a Nextra flow (dogfooding):
- Builder agent with catalog, validator, examples tools
- Queries real available nodes dynamically
- Self-validates generated YAML before returning
- Uses marketplace flows as few-shot examples
- Same billing model (LLM tokens)
Added Phase 4.5 to implementation roadmap.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Simple two-state model: Stopped ↔ Running
- Edit only allowed when stopped
- No pause (simpler, matches Zapier/n8n/Make)
- No hot-edit (unsafe for mid-execution swarms)
- Future consideration: Graceful Stop for Pro users
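The guard implied by "edit only allowed when stopped" is deliberately tiny (illustrative names):

```python
from enum import Enum

class FlowState(Enum):
    STOPPED = "stopped"
    RUNNING = "running"

def can_edit(state: FlowState) -> bool:
    # The whole policy: no editing a running flow, nothing in between.
    return state is FlowState.STOPPED
```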
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Monaco's built-in TS language service provides IDE features
- AS type definitions loaded at startup for autocomplete
- Real errors come from `asc` compiler at build time
- No separate LSP server (asls) needed, so zero extra infra cost
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Defines shared API contract between frontend and backend:
- types.ts: TypeScript interfaces for Next.js frontend
- models.py: Matching Pydantic models for FastAPI backend
Covers: User, Flow, Trigger, Execution, WasmModule, Marketplace,
ProjectMemory, and pagination/error types.
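Keeping the two sides aligned hinges on a snake_case→camelCase mapping, for example (hand-rolled here; the Pydantic models presumably use an alias generator):

```python
def to_camel(name: str) -> str:
    """Convert a snake_case field name to camelCase for JSON wire format."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

def camelize(obj: dict) -> dict:
    """Rewrite top-level keys of a serialized model to camelCase."""
    return {to_camel(k): v for k, v in obj.items()}
```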
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Detailed prompt for generating the SaaS landing page with Vercel v0.
Includes: hero, features, pricing, testimonials, FAQ, and styling specs.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Move lsp-integration.md and secure-console-v3.md to docs/archive-obsolete/
(these features are now in the Nextra SaaS product)
- Update CLAUDE.md with current project state
- Simplify run_organism.py
- Fix test fixtures for shared backend compatibility
- Minor handler and llm_api cleanups
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Introduces SharedBackend Protocol for cross-process state sharing:
- InMemoryBackend: default single-process storage
- ManagerBackend: multiprocessing.Manager for local multi-process
- RedisBackend: distributed deployments with TTL auto-GC
Adds ProcessPoolExecutor support for CPU-bound handlers:
- worker.py: worker process entry point
- stream_pump.py: cpu_bound handler dispatch
- Config: backend and process_pool sections in organism.yaml
ContextBuffer and ThreadRegistry now accept optional backend
parameter while maintaining full backward compatibility.
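The Protocol plus its default backend might be sketched like this (method names are assumptions from the commit text, not the real interface):

```python
from typing import Any, Protocol

class SharedBackend(Protocol):
    """Structural interface the three backends satisfy."""
    def get(self, key: str) -> Any: ...
    def set(self, key: str, value: Any) -> None: ...
    def delete(self, key: str) -> None: ...

class InMemoryBackend:
    """Default single-process storage: a plain dict."""
    def __init__(self):
        self._data = {}
    def get(self, key): return self._data.get(key)
    def set(self, key, value): self._data[key] = value
    def delete(self, key): self._data.pop(key, None)
```

Because Protocol matching is structural, ManagerBackend and RedisBackend need only implement the same methods; ContextBuffer and ThreadRegistry never see which one they were given.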
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
These modules are now proprietary and live in the Nextra SaaS product.
xml-pipeline remains the OSS core with:
- Message pump and pipeline steps
- Handler contract and responses
- LLM router abstraction
- Native tools
- Config loading
- Memory/context buffer
Removed:
- xml_pipeline/console/ → nextra/console/
- xml_pipeline/auth/ → nextra/auth/
- xml_pipeline/server/ → nextra/server/
- Legacy files: agentserver.py, main.py, xml_listener.py
The simple console example remains in examples/console/.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Use __file__-based path resolution for envelope.xsd so the schema
loads correctly when xml-pipeline is installed via pip.
Also:
- Add build artifacts to .gitignore
- Bump version to 0.3.1
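The pattern, for reference (path segments other than the filename are illustrative):

```python
from pathlib import Path

# Resolve relative to the module file rather than the process working
# directory, so a pip-installed package still finds its bundled schema.
SCHEMA_PATH = Path(__file__).resolve().parent / "schemas" / "envelope.xsd"
```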
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Remove the Color type import from termcolor.termcolor, which no longer
exists in newer termcolor versions, and change type hints from Color to str.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
README.md:
- Rebrand from AgentServer to xml-pipeline
- Library-focused introduction with pip install
- Quick start guide with code examples
- Console example documentation
- Concise feature overview
pyproject.toml:
- Update authors to "xml-pipeline contributors"
- Update URLs to xml-pipeline.org
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
OSS restructuring for open-core model:
- Rename package from agentserver/ to xml_pipeline/
- Update all imports (44 Python files, 31 docs/configs)
- Update pyproject.toml for OSS distribution (v0.3.0)
- Move prompt_toolkit from core to optional [console] extra
- Remove auth/server/lsp from core optional deps (-> Nextra)
New console example in examples/console/:
- Self-contained demo with handlers and config
- Uses prompt_toolkit (optional, falls back to input())
- No password auth, no TUI, no LSP — just the basics
- Shows how to use xml-pipeline as a library
Import changes:
- from agentserver.* -> from xml_pipeline.*
- CLI entry points updated: xml_pipeline.cli:main
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- SystemPipeline: Entry point for console/webhook/API messages
- TextInput/TextOutput: Generic primitives for human text I/O
- Server: WebSocket "send" command routes through SystemPipeline
- Console: @target message now injects into pipeline
Flow: Console → WebSocket → SystemPipeline → XML envelope → pump
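The envelope-wrapping step can be illustrated as follows (tag and attribute names are assumptions, not the actual envelope schema):

```python
import xml.etree.ElementTree as ET

def wrap_text_input(target: str, text: str) -> str:
    """Wrap human console text in an XML envelope bound for the pump."""
    env = ET.Element("envelope", {"to": target})
    ET.SubElement(env, "text-input").text = text
    return ET.tostring(env, encoding="unicode")
```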
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- auth/users.py: User store with Argon2id password hashing
- auth/sessions.py: Token-based session management with expiry
- server/app.py: aiohttp server with auth middleware and WebSocket
- console/client.py: SSH-style login console client
Server endpoints: /auth/login, /auth/logout, /auth/me, /health, /ws
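The session half can be sketched as below (Argon2id hashing via argon2-cffi is omitted; class and method names are illustrative):

```python
import secrets
import time

class SessionStore:
    """Sketch: opaque tokens mapped to (username, expiry)."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._sessions = {}  # token -> (username, expires_at)

    def create(self, username, now=None):
        now = time.time() if now is None else now
        token = secrets.token_urlsafe(32)   # unguessable opaque token
        self._sessions[token] = (username, now + self.ttl)
        return token

    def validate(self, token, now=None):
        now = time.time() if now is None else now
        entry = self._sessions.get(token)
        if entry is None or now >= entry[1]:
            self._sessions.pop(token, None)  # purge expired sessions
            return None
        return entry[0]
```

The `now` parameter exists only to make expiry testable; production code would rely on the wall clock.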
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>