one framework Infrastructure

What this page is about

What we built: A fully self-hosted research stack running on locally-owned hardware in Amsterdam. Two compute nodes, a DGX Spark and an ASUS GX10 ("Roberto"); no cloud dependencies; an automated 14-cycle-per-day observation pipeline; 34 isolated workspaces for different collaborators; and a distributed compute model that contributes idle GPU cycles to Folding@Home.

Why it matters for Stage IX: Stage IX asks what the infrastructure of an ongoing practice looks like. This page answers that question with specifics: what runs, where it runs, what it costs, and how it stays operational without external services. The infrastructure is not peripheral to the research -- it is an implementation of the research's core principle: locally-controlled, transparent, and sustainable.

How Stage IX resources would be used: The current setup is one researcher, two compute nodes. Stage IX funding would allow additional compute nodes to be brought into the network, supporting parallel workloads and reducing single-point dependency. It would also fund the transition from a solo operation to a documented, onboardable infrastructure that other practitioners can replicate.

How it proves permanent beta: Permanent beta is not a design philosophy -- it is an operational condition. The pipeline runs continuously. Workspaces are added as collaborators join. The infrastructure changes as the research changes. Nothing in the stack is static. The 14-cycle-per-day schedule has run without interruption. The idle GPU contributes to science between cycles. The system does not stop between uses.

The technical infrastructure

Locally owned.
No cloud dependencies.

one framework runs on hardware owned outright in Amsterdam. Core processing, image generation, and data storage run entirely on locally-owned compute with no cloud dependencies. AI language processing currently uses cloud APIs as an interim measure — the research grant funds the transition to distributed local processing across the contributor network. The infrastructure is designed for that transition from the start.

What runs where

Network topology — Amsterdam
[Diagram: DGX Spark (primary compute) and GX10 (Roberto, secondary compute, on-demand) on a 192.168.1.x LAN, joined by a Tailscale mesh VPN; 34 isolated workspaces; idle GPU cycles contributed to Folding@Home]
DGX Spark

Primary compute. Amsterdam.

Core AI processing, ComfyUI image generation, observation pipeline, pattern extraction. Owned outright. Running continuously.

ASUS GX10 (Roberto)

Secondary compute node. Amsterdam.

Local GPU workloads, model training, and heavy processing tasks. Co-contributor infrastructure managed via Tailscale SSH.

Netherlands

Self-hosted. No cloud.

All data remains on locally-owned hardware. No data leaves the jurisdiction for processing. No external APIs handle core functions. Operating cost: under EUR 5 per month in electricity.


The observation pipeline

Each of the 14 daily observation cycles passes through eight stages. The pipeline is automated end-to-end; no human intervention is required for any stage of a standard cycle.

01

Schedule trigger. Cron fires 14 times per day at fixed intervals. Consistent timing makes cycles comparable across the archive.
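The fixed intervals of stage 01 can be sketched as follows. Since 14 cycles do not divide the 1,440 minutes of a day evenly, this sketch (a hypothetical helper, not the production scheduler) spaces triggers as evenly as whole minutes allow:

```python
def daily_slots(cycles: int = 14) -> list[str]:
    """Return cycle start times as HH:MM strings, spaced as evenly
    as whole minutes allow across a 1440-minute day."""
    minutes = [i * 1440 // cycles for i in range(cycles)]
    return [f"{m // 60:02d}:{m % 60:02d}" for m in minutes]

slots = daily_slots()
print(slots[:3])  # first three trigger times of the day
```

The resulting cadence is one cycle roughly every 102-103 minutes, which is what makes cycles comparable across the archive.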

02

Dream narrative generation. Local language model generates the field observation narrative. 300-500 words. Consistent world, consistent characters, linear timeline.

03

Structured extraction. A second model pass produces extract.json. Four fields: actions, patterns, ideas, decisions. Empty arrays are valid and meaningful outputs.
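The four-field contract of extract.json can be sketched as a small validator (a hypothetical helper, not the production code). Note that empty arrays pass, since an empty field is a meaningful output:

```python
import json

REQUIRED_FIELDS = ("actions", "patterns", "ideas", "decisions")

def validate_extract(raw: str) -> dict:
    """Parse an extract.json payload and enforce its four-field shape.
    Empty lists are valid: a cycle may legitimately surface nothing."""
    data = json.loads(raw)
    for field in REQUIRED_FIELDS:
        if not isinstance(data.get(field), list):
            raise ValueError(f"missing or non-list field: {field}")
    return data

cycle = validate_extract(
    '{"actions": [], "patterns": ["recurring door"], "ideas": [], "decisions": []}'
)
print(cycle["patterns"])  # ['recurring door']
```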

04

Pixel art generation. ComfyUI with the SpriteShaper model renders a 1024x1024 pixel art artifact from a prompt derived from the narrative.

05

Post-processing. A median rerender is applied to produce clean pixel art at the correct block scale. The raw ComfyUI output is discarded. The final artifact averages 74 KB.
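The median rerender of stage 05 can be sketched in miniature: collapse each block to its median value, then re-expand so every block is uniform. This is a grayscale, pure-Python illustration; the production pass presumably operates per channel on the 1024x1024 output:

```python
from statistics import median

def median_rerender(img, block):
    """Collapse each block x block tile to its median value, then
    fill the tile with that value (nearest-neighbour re-expansion)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = [img[y][x] for y in range(by, by + block)
                              for x in range(bx, bx + block)]
            m = int(median(tile))
            for y in range(by, by + block):
                for x in range(bx, bx + block):
                    out[y][x] = m
    return out
```

A stray bright pixel inside a dark block is voted out by the median, which is why the rerender yields clean blocks at the correct scale.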

06

Archive commit. Narrative, extract.json, and artwork committed to the public git repository with timestamp. Immutable record.

07

Pattern issue creation. Patterns and actions from extract.json generate tracked issues in the dreams-to-actions repository.

08

Public gallery update. The public archive at oneframework.to/dreams updates automatically. Every cycle is visible within minutes of generation.


34 collaborator workspaces


one framework operates 34 isolated workspaces for different collaborators and research threads.

Each workspace has independent memory, independent cron jobs, and independent behavior reviews. Memory isolation is architecture, not policy: workspaces cannot access each other's data. The isolation is enforced at the infrastructure level, not by convention.
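As an illustration of isolation enforced below the application layer, a minimal sketch (hypothetical paths; the actual mechanism is not detailed here) creates each workspace with owner-only filesystem permissions:

```python
import os
import stat
import tempfile

def create_workspace(root: str, name: str) -> str:
    """Create an isolated workspace directory readable only by its owner.
    Isolation here is a property of the filesystem, not of app logic."""
    path = os.path.join(root, name)
    os.makedirs(path)
    os.chmod(path, 0o700)  # rwx for owner, nothing for group/other
    return path

root = tempfile.mkdtemp()
ws = create_workspace(root, "workspace-01")
mode = stat.S_IMODE(os.stat(ws).st_mode)
print(oct(mode))  # 0o700
```

Running each workspace under a distinct OS user with this mode means cross-workspace reads fail at the kernel, regardless of what the application code does.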


Ethical AI Foundation

The infrastructure operates under the Ethical AI Foundation framework at ethicalaifoundation.cc. This implements the OECD AI principles: local-first operation, transparent decision-making, fair treatment of contributors.

The infrastructure is the implementation of these principles, not a separate commitment made alongside it. Local-first means the compute is local. Transparent means every cycle is in a public repository. Fair means every contribution is tracked and attributable.


Distributed compute model

When the observation pipeline is idle, GPU cycles are contributed to Folding@Home, a distributed protein structure research project. The compute does not sit unused between cycles.

The broader compute model draws on the BOINC contribution framework: contribution value logged for donated cycles, knowledge contributions tracked alongside compute, and attribution so that collaborators' contributions are recorded against the outputs they enabled. All contributions are tracked transparently, and there is no speculative token layer. The accounting is straightforward: contribution in, attribution out.
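"Contribution in, attribution out" can be sketched as a plain append-only log (names and fields hypothetical, for illustration only):

```python
from dataclasses import dataclass, field

@dataclass
class ContributionLedger:
    """Append-only log: contributions in, attribution queries out."""
    entries: list = field(default_factory=list)

    def record(self, contributor: str, kind: str, value: float, output: str):
        self.entries.append(
            {"contributor": contributor, "kind": kind,
             "value": value, "output": output}
        )

    def attribution(self, output: str) -> dict:
        """Total contribution value per contributor for a given output."""
        totals: dict = {}
        for e in self.entries:
            if e["output"] == output:
                totals[e["contributor"]] = totals.get(e["contributor"], 0.0) + e["value"]
        return totals

ledger = ContributionLedger()
ledger.record("roberto", "compute", 3.0, "cycle-0412")
ledger.record("ana", "knowledge", 1.5, "cycle-0412")
print(ledger.attribution("cycle-0412"))  # {'roberto': 3.0, 'ana': 1.5}
```

Compute and knowledge contributions flow through the same interface, which is what keeps the accounting uniform across contribution types.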

BOINC model precedent — 20 years of contribution tracking
[Diagram: idle compute cycles flow into a contribution ledger (BOINC-style contribution value) and out as science output; contribution in, attribution out, no speculative layer]
Record-keeping stack — how contributions are stored
IPFS (content-addressed storage), Ceramic (mutable data streams), public ledger (audit trail).

Every contribution is recorded at all three layers. Immutable, auditable, not dependent on trust in any single party.

The audit layer is designed to be tamper-evident and publicly verifiable. The current implementation uses content-addressed storage for immutability. The roadmap includes a public ledger layer — either Hyperledger (permissioned, suited to institutional partners) or an open public chain — so that contribution records can be independently verified by anyone, without needing to trust the network operator.
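Content addressing, the immutability mechanism in the current implementation, can be sketched as a store whose key is the hash of the stored bytes (IPFS uses multihash CIDs; plain SHA-256 here for illustration):

```python
import hashlib

class ContentStore:
    """Minimal content-addressed store: the key IS the hash of the bytes,
    so any tampering changes the address and is immediately detectable."""
    def __init__(self):
        self._blobs: dict = {}

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()
        self._blobs[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        data = self._blobs[cid]
        # Verify on read: content must still hash to its own address.
        assert hashlib.sha256(data).hexdigest() == cid
        return data

store = ContentStore()
record = b'{"cycle": 412, "contributor": "roberto"}'
cid = store.put(record)
print(store.get(cid) == record)
```

Because identical content always yields the same address, duplicates deduplicate for free, and a record cannot be silently altered after the fact.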

Operating costs are shared proportionally across contributors — the same principle that governs contribution accounting throughout the network.
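Proportional cost sharing is plain arithmetic; a sketch (hypothetical numbers) splitting a monthly bill by contributed compute hours:

```python
def share_costs(total: float, hours: dict) -> dict:
    """Split a total cost proportionally to each contributor's hours."""
    whole = sum(hours.values())
    return {name: round(total * h / whole, 2) for name, h in hours.items()}

print(share_costs(5.0, {"dgx-spark": 700.0, "roberto": 300.0}))
# {'dgx-spark': 3.5, 'roberto': 1.5}
```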


Current state / target state

The local compute infrastructure handles image generation, processing, and data storage. AI language processing currently runs on cloud APIs (Anthropic, Gemini, and others via the one routing layer — the system that dispatches tasks to whichever AI service is available). This is a deliberate interim position, not the end state.

The research grant funds the transition: replacing cloud-based processing with distributed local processing across the contributor network. Compute nodes contributed by researchers, artists, and institutions will handle language tasks directly. No external API dependency. No external jurisdiction. No ongoing subscription cost.

The architecture is designed for this transition from the start. Routing logic is provider-agnostic. Local endpoints are already supported. What the grant buys is the network scale required to make local processing reliable at research scale.
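Provider-agnostic routing with local endpoints as first-class targets can be sketched as a priority list of callables. All names here are hypothetical; the actual "one" routing layer is not published in this document:

```python
from typing import Callable

class Router:
    """Dispatch a task to the first available provider in priority order.
    Local endpoints and cloud APIs register through the same interface."""
    def __init__(self):
        self._providers: list = []

    def register(self, name: str, handler: Callable):
        self._providers.append((name, handler))

    def dispatch(self, task: str) -> str:
        for name, handler in self._providers:
            try:
                return handler(task)
            except ConnectionError:
                continue  # provider unavailable: fall through to the next
        raise RuntimeError("no provider available")

def local_node(task: str) -> str:
    raise ConnectionError("local node offline")  # simulate an outage

def cloud_api(task: str) -> str:
    return f"handled({task})"

router = Router()
router.register("local", local_node)  # preferred: local endpoint
router.register("cloud", cloud_api)   # interim fallback
print(router.dispatch("summarise cycle 412"))  # handled(summarise cycle 412)
```

Under this shape, the grant-funded transition is a registration change, not a rewrite: as contributed nodes come online, local handlers move up the priority list and cloud handlers fall away.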

Operating costs

The entire research infrastructure runs for under EUR 5 per month. This is not an estimate -- it is the actual cost based on electricity consumption of the local compute nodes in Amsterdam.

DGX Spark (continuous operation) ~3.50 EUR / month
ASUS GX10 / Roberto (on-demand, not continuous) ~1.20 EUR / month
Domain and DNS ~0.20 EUR / month
Total < 5 EUR / month

The pipeline generates 14 observation cycles per day. That is approximately 420 cycles per month, each producing a narrative, an extract.json, and a pixel art artifact. Cost per cycle: under EUR 0.012. No cloud API costs. No subscription fees. The hardware is owned outright.
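The per-cycle figure checks out with a 30-day month:

```python
cycles_per_month = 14 * 30                    # 420 observation cycles
cost_per_cycle = 5.00 / cycles_per_month      # EUR, monthly budget per cycle
print(cycles_per_month, round(cost_per_cycle, 4))  # 420 0.0119, under EUR 0.012
```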