one framework
What it does in practice

The framework, applied

one framework is not a product. It is infrastructure built out of real problems, across real collaborations. These are the use cases it currently serves — each one a live system, running now.

Media that runs on its own schedule

3 active systems

Pipelines that produce original media on defined cycles. The framework handles scheduling, generation, quality control, and publishing — the researcher defines the system, then steps back.

Autonomous generation pipeline
schedule generate validate archive publish
Dream narrative generation
14 hero's journey episodes produced daily on a defined schedule. Each cycle: a 300–500 word narrative written by the agent, a 1024x1024 pixel art image generated via ComfyUI (SpriteShaper SDXL), quality-controlled via median rerender, then committed to a public archive and published. 560+ dreams generated. Phase 15 underway.
ComfyUI SDXL cron pixel art
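In outline, one daily cycle can be sketched as below. Function names, the archive layout, and the git publish step are stand-ins for illustration, not the pipeline's real internals:

```python
# Sketch of one schedule -> generate -> validate -> archive -> publish pass.
# The generator, image, and rerender callables stand in for the real agent,
# ComfyUI (SpriteShaper SDXL), and the median quality gate.
import datetime
import subprocess
from pathlib import Path

ARCHIVE = Path("dreams")  # checkout of the public archive (assumed layout)

def run_cycle(generate_narrative, generate_image, median_rerender, publish=True):
    """Run one daily dream cycle and return the day's archive directory."""
    stamp = datetime.date.today().isoformat()
    day_dir = ARCHIVE / stamp
    day_dir.mkdir(parents=True, exist_ok=True)

    narrative = generate_narrative()                     # 300-500 word dream text
    image = median_rerender(generate_image(narrative))   # 1024x1024, then QC

    (day_dir / "dream.md").write_text(narrative)
    (day_dir / "dream.png").write_bytes(image)

    if publish:  # commit the new entry to the public archive
        subprocess.run(["git", "-C", str(ARCHIVE), "add", stamp], check=True)
        subprocess.run(["git", "-C", str(ARCHIVE), "commit", "-m", f"dream {stamp}"],
                       check=True)
    return day_dir
```

A cron entry fires this once per cycle; the researcher never prompts it by hand.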
Dream-to-actions pipeline
Twice daily, the last 12 dreams are analyzed for actionable signals. The system extracts concrete actions, project ideas, decisions, and recurring patterns — then creates labeled GitHub issues and updates a living planning document. Dreams become a structured project backlog automatically.
GitHub API analysis cron
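The signal-to-issue step can be sketched like this. The signal schema, repo name, and label set are assumptions for illustration; only the GitHub REST endpoint shape is standard:

```python
# Sketch: map one extracted dream signal to a labeled GitHub issue and POST
# it via the REST API. The signal dict shape is a hypothetical example.
import json
import urllib.request

API = "https://api.github.com/repos/{owner}/{repo}/issues"

def make_issue_payload(signal):
    """signal: {"kind": "action" | "idea" | "decision" | "pattern",
                "title": ..., "detail": ..., "source_dream": ...}"""
    return {
        "title": f"[{signal['kind']}] {signal['title']}",
        "body": f"{signal['detail']}\n\nExtracted from dream {signal['source_dream']}.",
        "labels": ["dream-pipeline", signal["kind"]],
    }

def create_issue(signal, token, owner, repo, opener=urllib.request.urlopen):
    req = urllib.request.Request(
        API.format(owner=owner, repo=repo),
        data=json.dumps(make_issue_payload(signal)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
        method="POST",
    )
    return opener(req)
```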
Batch pixel art dataset generation
Automated generation of large pixel art datasets for training and research. Checkpoint-per-image architecture survives interruptions. History API avoids redundant generation. Configurable prompts, styles, and post-processing — runs unattended overnight.
ComfyUI batch dataset
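The checkpoint-per-image idea reduces to: every finished image is its own checkpoint, so a restarted run skips what already exists. A minimal sketch, with `generate` standing in for the real ComfyUI call:

```python
# Sketch of checkpoint-per-image batch generation: interruption-safe because
# each output file is a completed checkpoint, and partial writes never land
# under the final name.
from pathlib import Path

def run_batch(prompts, out_dir, generate):
    """Return (skipped, generated) counts for one pass over the prompts."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    skipped = generated = 0
    for i, prompt in enumerate(prompts):
        target = out / f"{i:06d}.png"
        if target.exists():                  # checkpoint hit: already done
            skipped += 1
            continue
        tmp = target.with_suffix(".part")
        tmp.write_bytes(generate(prompt))    # never leave a half-written .png
        tmp.rename(target)                   # commit the checkpoint
        generated += 1
    return skipped, generated
```

Kill the process overnight, rerun it, and the loop resumes exactly where it stopped.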

Agents coordinating with agents

4 active patterns

The framework routes tasks between agents — local and remote, specialist and generalist — so that no single agent handles everything. Coordination is the product.

Cloud / local agent relay
A cloud agent coordinates with a local Claude Code instance running on a collaborator's machine. Requests are triaged and dispatched via tmux. The local agent executes, writes a handoff file, and notifies the cloud agent via SSH. Two agents, two machines, one conversation thread.
tmux SSH Claude Code handoff protocol
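The dispatch half of the relay can be sketched as follows. The session name, host, and handoff path are illustrative, not the real deployment:

```python
# Sketch of the cloud side of the relay: inject a request into the local
# Claude Code tmux session, then probe for the handoff file over SSH.
import shlex
import subprocess

def dispatch_cmd(session, request):
    """Build the tmux command that types `request` into the local session."""
    return ["tmux", "send-keys", "-t", session, request, "Enter"]

def handoff_ready(host, handoff_path, runner=subprocess.run):
    """Check the collaborator's machine for the handoff file via SSH."""
    probe = f"test -f {shlex.quote(handoff_path)}"
    return runner(["ssh", host, probe]).returncode == 0
```

In the live system the notification flows the other way too: the local agent pings the cloud agent over SSH when its handoff file is written.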
Sub-agent spawning
Long-running tasks are offloaded to isolated sub-agents. The parent agent assigns scope, the sub-agent executes with full tool access, outputs are persisted to disk with a timestamped directory, and the parent verifies completion. No task blocks the main session.
sessions_spawn isolation persistence
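The spawn contract is small enough to sketch. The directory layout and the completion marker name are assumptions; the point is the shape: timestamped output on disk, parent-side verification:

```python
# Sketch of the sub-agent spawn contract: the parent allocates a timestamped
# output directory, and accepts results only after a completion marker exists.
import datetime
from pathlib import Path

def spawn_dir(root, task_name):
    """Allocate an isolated, timestamped output directory for a sub-agent."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    d = Path(root) / f"{stamp}-{task_name}"
    d.mkdir(parents=True)
    return d

def verify(out_dir):
    """Parent-side check: the sub-agent writes DONE as its last act."""
    return (Path(out_dir) / "DONE").exists()
```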
Planner / specialist architecture
A planner agent decomposes tasks into a DAG, routes subtasks to specialist agents (image, code, audio, research), collects results, and assembles the final output. Built and tested in a dedicated dev sandbox. Architecture documented in the multi-agent orchestration library book.
orchestration task DAG multi-provider
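The planner's core loop is topological execution over the task DAG. A sketch, with the specialist registry and task fields as illustrative names:

```python
# Sketch of planner/specialist execution: run each subtask once its
# dependencies are done, routed to a specialist by kind, with upstream
# results passed along as inputs.
from graphlib import TopologicalSorter

def execute_plan(tasks, specialists):
    """tasks: {name: {"kind": ..., "deps": [...], "spec": ...}}."""
    order = TopologicalSorter({n: t["deps"] for n, t in tasks.items()})
    results = {}
    for name in order.static_order():        # dependencies always finish first
        task = tasks[name]
        inputs = {d: results[d] for d in task["deps"]}
        results[name] = specialists[task["kind"]](task["spec"], inputs)
    return results
```

Assembly of the final output is just the last node in the DAG reading everything upstream.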
Shared memory layer (MemOne)
All agents read from and write to a shared Mem0 store backed by Qdrant vector search and Ollama inference — no OpenAI dependency. An agent completing a task writes its outcome. An agent starting a similar task queries first. Collective knowledge, no redundant work.
Mem0 Qdrant Ollama local
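The query-first protocol is the important part, so here it is sketched against a toy stand-in store. The live layer uses Mem0 backed by Qdrant and Ollama; this substitutes substring search purely to show the shape:

```python
# Sketch of the query-first pattern: search shared memory before doing work,
# write the outcome back after. The in-memory store is a stand-in for Mem0.
class SharedMemory:
    def __init__(self):
        self._entries = []          # (agent, text) pairs

    def search(self, query):
        q = query.lower()
        return [text for _, text in self._entries if q in text.lower()]

    def add(self, agent, text):
        self._entries.append((agent, text))

def run_task(memory, agent, task, do_work):
    prior = memory.search(task)     # query first: has any agent done this?
    if prior:
        return prior[0]             # reuse collective knowledge
    outcome = do_work(task)         # otherwise do the work...
    memory.add(agent, f"{task}: {outcome}")   # ...and record it for everyone
    return outcome
```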

Making atmospheric data legible

5 active pipelines

Built in collaboration with a co-researcher based in Miami, Florida. Real-time ingestion and analysis of NOAA, ECMWF, and satellite data — rendered as composites, dashboards, and sonified climate signals.

Tropical cyclone tracker
Live TC tracking with MU-AIC2 composite overlays — multi-source satellite imagery fused with wind shear, SST, and intensity model data. Agent-generated composite frames, automatic updates on NHC advisories.
NOAA satellite composites
Heat dome analysis (DOROTHY)
Automated detection and analysis of atmospheric blocking events and heat dome formation. Integrates QPF, temperature anomaly fields, GLM lightning data, and ACHA cloud height — produces research-grade summaries without manual data wrangling.
QPF GLM ACHA ATLAS
Climate sonification
Atmospheric data mapped to sound using perceptually designed parameter mappings. ENSO states, temperature anomalies, and oceanic heat content rendered as audio — preserving the character of the data while making it perceptible in a new domain.
sonification ENSO perceptual mapping
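One example of a perceptual mapping: pitch perception is roughly logarithmic in frequency, so anomalies map onto a semitone scale rather than linearly onto Hz. The ranges below are illustrative, not the project's calibration:

```python
# Sketch of a perceptually designed mapping: temperature anomaly (deg C)
# to pitch on a log (semitone) scale. Constants are hypothetical.
def anomaly_to_freq(anomaly_c, base_hz=440.0, semitones_per_degree=4.0):
    """Equal degrees of anomaly sound like equal musical intervals."""
    return base_hz * 2 ** (anomaly_c * semitones_per_degree / 12.0)
```

With these constants a +3 C anomaly lands exactly one octave above the baseline.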
Automated weather rundowns
Daily contextual summaries generated from live meteorological data. Not a weather app — a research digest. Pulls from multiple sources, synthesizes into a narrative, and delivers to the research group. Runs on a cron schedule with no manual prompt.
cron multi-source Telegram

Infrastructure for making things

6 active tools

Audio, image, video, and 3D — built for practitioners who want to work with AI as a tool rather than a consumer product. Local inference by default. No subscriptions. No data extraction.

Voice cloning and synthesis
Three-mode voice generation via Qwen3 TTS: custom voice from a reference clip, voice design from natural language description, and 3-second voice cloning. All local inference on self-hosted ARM64 hardware. Approved collaborators have active voice clones.
Qwen3 TTS local cloning
Pixel art generation pipeline
SpriteShaper SDXL via local ComfyUI. Mandatory median rerender (8x8 block filter) enforced at every generation — eliminates anti-aliasing artifacts. Photo-to-pixel-art conversion with configurable denoise strategy. Used across 5 active workspaces.
ComfyUI SDXL SpriteShaper local
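The median rerender described above reduces to a block filter: each 8x8 block collapses to its per-channel median color, so stray anti-aliased in-between pixels vanish. A pure-Python sketch for clarity:

```python
# Sketch of an 8x8 median block filter: replace every 8x8 block with its
# per-channel median color to strip anti-aliasing artifacts.
from statistics import median

def median_rerender(pixels, block=8):
    """pixels: row-major list of rows of (r, g, b); width/height % block == 0."""
    h, w = len(pixels), len(pixels[0])
    out = [[None] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cells = [pixels[y][x] for y in range(by, by + block)
                                  for x in range(bx, bx + block)]
            color = tuple(int(median(c[i] for c in cells)) for i in range(3))
            for y in range(by, by + block):
                for x in range(bx, bx + block):
                    out[y][x] = color
    return out
```

In the live pipeline this runs after every generation, not as an optional cleanup.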
Vocoder and DSP research
Documentation and prototype work on vocoder signal architectures: modulator/carrier roles, filter bank analysis, LPC, mel-scale, formant structure, voiced/unvoiced detection, latency budget design. Built with a collaborator specialising in audio/DSP engineering.
DSP vocoder PureData
3D model generation
Single-image to 3D model via TripoSR with documented texture bug workarounds. Integrated into an arcade art pipeline — composite pixel art frames rendered into 3D animation sequences. Built with a collaborator working in 3D and motion.
TripoSR 3D composite pipeline
Voice memo extraction
Bidirectional audio-text processing with round-trip meaning preservation. Voice memos transcribed, structured, and returned as searchable text. Transcript can be re-synthesized to audio. Local Whisper inference — no cloud transcription.
Whisper local bidirectional
Elusis — night culture platform
Audio-reactive meditation visuals with parametric generators (silk, pulse, stars). Preset system, story integration, collective memory architecture for electronic music events. Built with a collaborator in night culture and event production. Designed for immersive environments — not screens.
audio-reactive generative visuals night culture

The layer below the use cases

4 systems

What makes the rest possible. 15 isolated workspaces, a shared knowledge library, a routing layer, and a knowledge base that grows as agents work.

15-workspace isolation architecture
Each collaborator has a fully isolated workspace with its own agent session, memory, tools, and security policy. Zero cross-contamination by design. Credentials, conversation history, and agent behavior stay scoped. Tested via an adversarial isolation test suite.
isolation security multi-tenant
26-book shared knowledge library
Group-agnostic skill books covering pixel art, voice synthesis, earth monitoring, multi-agent orchestration, dream systems, security testing, and 20 more domains. Each book documents real systems built in real workspaces. Any agent reads from it. Any workspace can contribute to it.
26 books shared growing
Provider-agnostic routing (one framework)
The routing layer underneath all of this. Local-first inference (Ollama, ComfyUI, Whisper, Qwen3 TTS) with cloud fallback (Anthropic, Cloudflare Workers). No provider lock-in at any layer. Swapping a model or provider is a config change, not a rebuild. This is what Stage IX funds.
Ollama Anthropic Cloudflare local-first provider-agnostic
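"A config change, not a rebuild" can be made concrete with a sketch like this. Provider names and the call interface are illustrative; the point is that priority order lives in data, not code:

```python
# Sketch of local-first routing with cloud fallback: each capability lists
# providers in priority order, and swapping one is an edit to ROUTES.
ROUTES = {
    "chat":  ["ollama", "anthropic"],    # local first, cloud fallback
    "image": ["comfyui"],
    "tts":   ["qwen3-tts"],
}

def route(capability, providers, routes=ROUTES):
    """Try each configured provider in order; return the first success."""
    last_err = None
    for name in routes[capability]:
        try:
            return name, providers[name]()
        except Exception as err:         # provider down: fall through
            last_err = err
    raise RuntimeError(f"all providers failed for {capability}") from last_err
```

If the local model is offline, the same request transparently lands on the cloud fallback; no caller changes.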