xml-pipeline/agentserver/llm/__init__.py
dullfig a5e2ab22da Add thread registry, LLM router, console handler, and docs updates
Thread Registry:
- Root thread initialization at boot
- Thread chain tracking for message flow
- register_thread() for external message UUIDs
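The thread-registry behavior described above (a root thread created at boot, parent chains tracked per message, and `register_thread()` accepting external message UUIDs) can be sketched roughly as follows. This is a hedged illustration, not the actual implementation: the class name `ThreadRegistry` and the `chain()` helper are assumptions for the example.

```python
import uuid

class ThreadRegistry:
    """Sketch: maps each thread/message UUID to its parent, forming a chain to the root."""

    def __init__(self):
        self._parents = {}            # thread_id -> parent thread_id (None for the root)
        # Root thread is initialized once at boot
        self.root_id = str(uuid.uuid4())
        self._parents[self.root_id] = None

    def register_thread(self, message_uuid, parent=None):
        # Attach an externally supplied message UUID to the chain;
        # default parent is the root thread
        self._parents[message_uuid] = parent or self.root_id
        return message_uuid

    def chain(self, thread_id):
        # Walk parent links back to the root to reconstruct the message flow
        out = []
        while thread_id is not None:
            out.append(thread_id)
            thread_id = self._parents[thread_id]
        return out
```

A registered message can then be traced back to the root via `chain()`.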

LLM Router:
- Multi-backend support with failover strategy
- Token bucket rate limiting per backend
- Async completion API with retries
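The router features listed above combine two standard techniques: a token bucket per backend for rate limiting, and ordered failover across backends. A minimal sketch of both, with hypothetical names (`TokenBucket`, `pick_backend`) not taken from the actual code:

```python
import time

class TokenBucket:
    """Per-backend token bucket: refills at `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def try_acquire(self, n=1):
        # Refill based on elapsed time, then spend if enough tokens remain
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

def pick_backend(order, buckets):
    """Failover strategy: the first backend in `order` with rate-limit headroom wins."""
    for name in order:
        if buckets[name].try_acquire():
            return name
    return None  # all backends exhausted; caller may retry later
```

When the primary backend's bucket is empty, requests fall through to the next backend in the configured order.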

Console Handler:
- Message-driven REPL (not separate async loop)
- ConsolePrompt/ConsoleInput payloads
- Handler returns None to disconnect
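The console design above (a message-driven handler rather than a separate async loop, with `ConsolePrompt`/`ConsoleInput` payloads and `None` signaling disconnect) might look roughly like this. The payload fields and the `handle_console` function are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsolePrompt:
    """Payload sent to the console asking the user for input."""
    text: str

@dataclass
class ConsoleInput:
    """Payload carrying one line typed by the user."""
    line: str

def handle_console(msg: ConsoleInput) -> Optional[ConsolePrompt]:
    # Returning None tells the dispatcher to disconnect the console;
    # returning a ConsolePrompt continues the REPL with a new prompt.
    if msg.line.strip() == "quit":
        return None
    return ConsolePrompt(text=f"echo: {msg.line}")
```

Because the REPL is driven by messages, the console shares the same dispatch path as every other handler instead of running its own loop.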

Boot System:
- System primitives module
- Boot message injected at startup
- Initializes root thread context
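The boot flow above (a synthetic message injected at startup that establishes the root thread context) can be sketched as below; `Message` and `boot()` are hypothetical names chosen for the example, not the module's real API:

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class Message:
    """Sketch of a dispatchable message carrying its thread context."""
    payload: str
    thread_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def boot(dispatch):
    # Inject a synthetic boot message so startup handlers run
    # inside a well-defined root thread context
    root = Message(payload="boot")
    dispatch(root)
    return root.thread_id
```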

Documentation:
- Updated v2.1 docs for new architecture
- LLM router documentation
- Gap analysis cross-check

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 16:53:38 -08:00

40 lines
818 B
Python

"""
LLM abstraction layer.
Usage:
from agentserver.llm import router
# Configure once at startup (or via organism.yaml)
router.configure_router({
"strategy": "failover",
"backends": [
{"provider": "xai", "api_key_env": "XAI_API_KEY"},
]
})
# Then anywhere in your code:
response = await router.complete(
model="grok-4.1",
messages=[{"role": "user", "content": "Hello"}],
)
"""
from agentserver.llm.router import (
    LLMRouter,
    get_router,
    configure_router,
    complete,
    Strategy,
)
from agentserver.llm.backend import LLMRequest, LLMResponse, BackendError
__all__ = [
    "LLMRouter",
    "get_router",
    "configure_router",
    "complete",
    "Strategy",
    "LLMRequest",
    "LLMResponse",
    "BackendError",
]
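The module-level `complete()` with failover and retries, as described in the commit message, amounts to trying each backend in order and retrying on backend errors. A self-contained sketch of that control flow, with stubbed backends standing in for real providers (the `complete_with_failover` helper and the stub names are assumptions for illustration):

```python
import asyncio

class BackendError(Exception):
    """Stand-in for agentserver.llm.backend.BackendError."""

async def complete_with_failover(backends, request, retries=1):
    # Try each backend in order; retry a failing backend before failing over
    last_exc = None
    for backend in backends:
        for _ in range(retries + 1):
            try:
                return await backend(request)
            except BackendError as exc:
                last_exc = exc
    raise last_exc

# Stub backends: the first always fails, the second succeeds
async def flaky(request):
    raise BackendError("rate limited")

async def healthy(request):
    return {"content": "Hello!"}

result = asyncio.run(
    complete_with_failover([flaky, healthy], {"model": "grok-4.1"})
)
```

With the failover strategy, callers never see `flaky`'s error as long as a later backend succeeds; the last exception is re-raised only when every backend is exhausted.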