CC-TR-2026-003
Coordination as Rendering
Applying computer graphics mathematics to multi-agent coordination visualization
Executive Summary
This report describes a novel rendering architecture in which multi-agent coordination state is not merely visualized but constitutes the geometry itself. The system described here—developed as part of the Cube Commons infrastructure—uses WebGPU compute shaders, Astro’s Islands Architecture, and Cloudflare’s edge-native SQLite (D1) to render the information-theoretic structure of agent coordination as navigable 3D and 4D graphics in the browser.
The core insight is architectural: the Ψ (Psi) fleet divergence metric derived from Partial Information Decomposition (PID) produces scalar and vector fields over agent-pair space and time. These fields are amenable to the same mathematical treatment as density fields in volume rendering, signed distance fields in implicit surface modeling, and topological manifolds in scientific visualization. No prior work has applied these rendering techniques to multi-agent coordination data.
Seven novel rendering contributions are described, each implementable as a precision-hydrated WebGPU island within Astro’s static-first documentation framework:
- Stigmergic Field Rendering — real-time volume rendering of agent coordination pheromone fields
- 4D Coordination Topology — temporal manifold rendering where time is a navigable spatial dimension
- CUBE-as-Pipeline — the six-face coordination model mapped to a literal WebGPU render pipeline
- Coordination SDFs — signed distance fields where agent influence IS the geometry
- Temporal Raymarching — a novel raymarching variant that steps through time rather than space
- GPU-Accelerated PID — the first WGSL implementation of Partial Information Decomposition
- CRT Rendering Pipeline — a physically-based post-processing architecture implementing the Instrument Panel design system
Together these contributions define a new category: coordination rendering—the practice of using the mathematical apparatus of computer graphics to make the invisible structure of multi-agent coordination visible, navigable, and interactive.
Context and Motivation
The Rendering Problem in Multi-Agent Systems
Multi-agent AI coordination systems produce rich, high-dimensional behavioral data that is fundamentally invisible. When a fleet of LLM agents coordinates through a shared SQLite substrate—depositing messages, reading state, evolving schemas—the coordination itself has no native visual form. Current practice reduces this data to charts, tables, and 2D time-series plots. This is analogous to reducing a CT scan to a spreadsheet of Hounsfield units: technically complete, perceptually useless.
The field of computer graphics spent four decades developing mathematical machinery for rendering complex phenomena: volume rendering for density fields, implicit surfaces for smooth geometry, topological visualization for high-dimensional manifolds, and physically-based illumination for materials. This machinery has never been applied to multi-agent coordination data because (a) the data did not exist at scale until the LLM era, and (b) no information-theoretic framework existed to convert raw coordination traces into renderable fields.
The Ψ Metric as a Renderable Quantity
The Cube Commons Ψ metric, derived from Partial Information Decomposition (PID) and validated across 79 sessions with 21 agents, produces a scalar value Ψ = Syn/(Syn+Red) that quantifies the ratio of synergistic to redundant information in agent communication channels. This value ranges from 0 (pure redundancy) to 1 (pure synergy) and can be computed for any pair of agents at any point in time.
Critically, Ψ is not a single number—it is a field. Computed over all agent pairs and all time steps, it produces a time-varying scalar field over agent-pair space. This field has gradients (indicating where coordination is changing), level sets (surfaces of constant coordination quality), critical points (local maxima and minima of coordination), and topological features (connected components, holes, voids). These are precisely the quantities that volume rendering, isosurface extraction, and topological visualization were designed to display.
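As a concrete sketch, assuming the PID stage yields per-pair, per-step synergy and redundancy estimates (array layout and names here are illustrative; the GPU implementation is described later in this report):

```typescript
// Minimal sketch: assemble the Ψ field from per-pair PID estimates.
// `syn` and `red` are illustrative names for synergy/redundancy arrays
// indexed as [pair * steps + t].
function psiField(
  syn: Float32Array,
  red: Float32Array,
  pairs: number,
  steps: number,
  eps = 1e-9, // guard against Syn + Red = 0
): Float32Array {
  const psi = new Float32Array(pairs * steps);
  for (let p = 0; p < pairs; p++) {
    for (let t = 0; t < steps; t++) {
      const i = p * steps + t;
      psi[i] = syn[i] / (syn[i] + red[i] + eps); // Ψ = Syn / (Syn + Red)
    }
  }
  return psi; // a time-varying scalar field over agent-pair space
}
```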
The Technology Convergence
Three independent developments converged in early 2026 to make this work tractable:
- WebGPU reached critical mass. All major browsers now ship WebGPU by default (Chrome, Firefox, Safari, Edge). Compute shaders—the essential capability for GPU-side PID computation—are universally available. Browser coverage exceeds 70%.
- Cloudflare acquired Astro. The Astro web framework is now backed by Cloudflare infrastructure. Astro 6’s redesigned dev server runs on the workerd runtime with first-class access to D1 (SQLite at the edge), Durable Objects, and R2 storage during local development. The dev/prod gap that plagued edge-first architectures is eliminated.
- The Cube Commons Ψ validation completed. The ANTS conference paper established that Ψ is a stable, computable, meaningful metric across heterogeneous agent populations. The 79-session dataset provides the ground truth for renderer validation.
The result is a stack that did not exist twelve months ago: SQLite coordination data at the edge, GPU compute in the browser, and a framework designed to embed GPU-computed interactive islands in otherwise static documentation.
Architecture Overview
The Split-Rendering Model
The architecture separates computation into three tiers, each executing where it is most efficient:
Edge tier (Cloudflare Workers + D1): Coordination data lives in SQLite databases at the edge. Astro SSR API routes query D1 for raw coordination traces, compute lightweight aggregations (per-session Ψ summaries, agent-pair adjacency matrices), and serve them as typed JSON payloads. This tier handles data that changes on the order of seconds to minutes.
GPU tier (WebGPU compute shaders): The browser’s GPU performs the heavy mathematical work: PID decomposition over agent pairs, field interpolation, SDF evaluation, volume integration, and surface extraction. This tier handles data that must update at 60fps for interactive exploration.
Presentation tier (WebGPU render pipeline + CRT post-processing): The final rendered image passes through a physically-based CRT simulation that implements the Cube Commons Instrument Panel design system. Phosphor persistence, beam scatter, and scanline effects are computed as a final render pass.
Astro Islands as GPU Compute Containers
Astro’s Islands Architecture provides the critical integration pattern. Each rendering contribution described in this report is implemented as a self-contained React island component that hydrates independently within an otherwise zero-JavaScript static page. The surrounding documentation, reference material, and explanatory text ship as pure HTML. Only when a reader scrolls to an interactive visualization does the WebGPU pipeline initialize.
This is not a performance optimization—it is an architectural principle. The documentation IS the product. The visualizations exist within the documentation, not as separate applications. The reader moves seamlessly from prose explanation to interactive exploration and back.
Astro’s client:visible directive enables lazy hydration: the GPU device is requested, pipelines are compiled, and data is fetched only when the island enters the viewport. On pages with multiple visualizations, each initializes independently, preventing GPU resource contention.
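A minimal sketch of such an island, hydrated in the page as <PsiVolumeIsland client:visible />; the component name, canvas size, and elided pipeline setup are all illustrative:

```tsx
// PsiVolumeIsland.tsx — hypothetical island component. Because the page
// uses client:visible, this effect runs only once the island scrolls
// into view and hydrates.
import { useEffect, useRef } from "react";

export default function PsiVolumeIsland() {
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useEffect(() => {
    let device: GPUDevice | undefined;
    (async () => {
      const adapter = await navigator.gpu?.requestAdapter();
      if (!adapter || !canvasRef.current) return; // WebGL fallback elsewhere
      device = await adapter.requestDevice();
      const ctx = canvasRef.current.getContext("webgpu")!;
      ctx.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });
      // ...compile pipelines, fetch coordination data, start render loop...
    })();
    return () => device?.destroy(); // release the GPU when the island unmounts
  }, []);

  return <canvas ref={canvasRef} width={800} height={600} />;
}
```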
Data Flow
The data pipeline follows a five-stage path from raw coordination traces to rendered pixels:
- Stage 1 — Ingestion: Agent coordination events are written to bus.db (local SQLite) within each CUBE enclave. Relevant summaries are replicated to D1 at the edge.
- Stage 2 — Query: Astro SSR API routes execute SQL against D1, returning typed coordination datasets (agent-pair matrices, time series, message traces) as JSON.
- Stage 3 — Decomposition: WebGPU compute shaders perform PID over the dataset, producing synergy, redundancy, and unique information fields. Output is written to GPU storage buffers (the handoff from Stage 2 is sketched after this list).
- Stage 4 — Rendering: The field data feeds into the appropriate renderer (volume, SDF, surface, temporal) via bind groups. The render pipeline produces an offscreen texture.
- Stage 5 — Post-processing: The CRT pipeline reads the offscreen texture and applies phosphor, bloom, scanline, and beam-deflection effects. The final image is presented to the canvas.
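The Stage 2 → Stage 3 handoff can be sketched as follows; the route path, payload shape, and helper name are assumptions, not the shipped API:

```typescript
// Sketch of the Stage 2 → Stage 3 handoff: typed JSON from an Astro API
// route is packed into a GPU storage buffer for the PID compute pass.
interface PairSeries {
  pairs: number;
  steps: number;
  values: number[]; // flattened pair-major coordination samples
}

async function uploadCoordinationData(device: GPUDevice, sessionId: string) {
  const res = await fetch(`/api/sessions/${sessionId}/pairs.json`);
  const data: PairSeries = await res.json();

  const values = new Float32Array(data.values);
  const buffer = device.createBuffer({
    size: values.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(buffer, 0, values); // no CPU readback after this
  return { buffer, pairs: data.pairs, steps: data.steps };
}
```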
Novel Rendering Contributions
Stigmergic Field Rendering
Prior art: Levoy (1988), Drebin/Carpenter/Hanrahan (1988), Engel et al. (2006), Real-Time Volume Graphics.
The CUBE bus.db is a shared substrate where agents deposit traces—messages, state mutations, schema proposals. This is stigmergy: indirect coordination through environmental modification. The mathematical structure is identical to a pheromone field in biological swarm systems.
In a stigmergic field renderer, the 3D volume represents the state space of the bus: one axis for schema dimensions, one for message channels, one for time. Agent activity at each point in this space is a deposition that accumulates and decays according to configurable dynamics. The Ψ metric evaluated at each voxel determines the transfer function: high synergy maps to one color/opacity profile, high redundancy to another.
The rendering itself uses standard direct volume rendering (DVR) via ray casting, implemented entirely in WGSL compute shaders. Each pixel casts a ray through the coordination volume, accumulating color and opacity according to the transfer function. The innovation is not in the rendering algorithm but in what is being rendered: the invisible substrate of multi-agent coordination made tangible as a luminous, navigable volume.
Implementation: A 128³ 3D texture stored in GPU memory, updated each frame from a compute shader that reads D1 coordination data and evaluates the Ψ field. A separate compute shader performs front-to-back ray casting with adaptive step size based on local gradient magnitude.
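A minimal WGSL sketch of the ray-casting kernel, with orthographic rays, a fixed step in place of the adaptive gradient-driven step, and an illustrative transfer function:

```typescript
// Front-to-back DVR kernel as a WGSL string, compiled by the host into
// a compute pipeline. A minimal sketch of the pass described above.
const volumeRaycastWGSL = /* wgsl */ `
  @group(0) @binding(0) var psiVolume : texture_3d<f32>;
  @group(0) @binding(1) var volSampler : sampler;
  @group(0) @binding(2) var outImage : texture_storage_2d<rgba8unorm, write>;

  // Illustrative transfer function: redundancy -> amber, synergy -> cyan.
  fn transfer(psi : f32) -> vec4f {
    let c = mix(vec3f(1.0, 0.6, 0.1), vec3f(0.2, 1.0, 0.9), psi);
    return vec4f(c, psi * 0.05); // opacity scales with Ψ
  }

  @compute @workgroup_size(8, 8)
  fn main(@builtin(global_invocation_id) id : vec3u) {
    let dims = textureDimensions(outImage);
    if (id.x >= dims.x || id.y >= dims.y) { return; }
    let uv = vec2f(id.xy) / vec2f(dims);
    // Orthographic rays along +z through the unit cube, for brevity.
    var pos = vec3f(uv, 0.0);
    let delta = vec3f(0.0, 0.0, 1.0 / 256.0);
    var acc = vec4f(0.0);
    for (var i = 0; i < 256 && acc.a < 0.99; i++) {
      let psi = textureSampleLevel(psiVolume, volSampler, pos, 0.0).r;
      let s = transfer(psi);
      acc += vec4f(s.rgb * s.a, s.a) * (1.0 - acc.a); // front-to-back
      pos += delta;
    }
    textureStore(outImage, id.xy, acc);
  }
`;
```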
4D Coordination Topology
Prior art: Topological data analysis (Edelsbrunner/Harer 2010), persistent homology, Cameron (1991) 4D raytracing.
The 79-session, 21-agent validation dataset is inherently four-dimensional: three coordination dimensions (agent identity, message channel, state facet) plus time. Conventional visualization either projects to 2D (losing structure) or animates through time (losing temporal context). Neither approach reveals the topological features of the coordination landscape.
The 4D Coordination Topology renderer constructs a navigable 3D manifold from this data. Agent-pair space (210 pairs from 21 agents) is reduced to two dimensions via UMAP, computed in a WebGPU compute shader, and the third spatial axis represents time (79 sessions). The rendered surface is extracted from the Ψ field over this (UMAP, time) volume, with surface shape and color at each point encoding Ψ.
The resulting landscape reveals coordination structure at a glance: convergence events appear as valleys that deepen over time; divergence events form ridges that split; phase transitions in coordination quality manifest as topological changes (genus changes, connected component merges/splits). The user navigates this landscape with standard 3D camera controls, “flying” over the coordination history of the entire fleet.
Novel contribution: No prior work has applied topological manifold rendering to multi-agent coordination data. The combination of GPU-computed dimensionality reduction, surface extraction via marching cubes (also in a compute shader), and interactive navigation constitutes a new visualization category.
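A sketch of the field-construction pass that precedes surface extraction: each invocation takes one (pair, session) sample, looks up that pair's GPU-computed UMAP coordinates, and writes Ψ into the (UMAP-x, UMAP-y, time) volume. The buffer layout, the [0,1]² normalization of UMAP output, and the last-write-wins splat (a production pass would accumulate atomically) are all assumptions:

```typescript
// WGSL field-construction pass: splat per-(pair, session) Ψ samples
// into the 3D volume that the marching-cubes extractor consumes.
const fieldSplatWGSL = /* wgsl */ `
  struct Params { pairs : u32, sessions : u32 }

  @group(0) @binding(0) var<storage, read> psi : array<f32>;    // pair-major
  @group(0) @binding(1) var<storage, read> umap : array<vec2f>; // per pair
  @group(0) @binding(2) var field : texture_storage_3d<r32float, write>;
  @group(0) @binding(3) var<uniform> params : Params;

  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) id : vec3u) {
    let n = params.pairs * params.sessions;
    if (id.x >= n) { return; }
    let pair = id.x / params.sessions;
    let session = id.x % params.sessions;

    let dims = textureDimensions(field); // vec3u
    // UMAP coords assumed normalized to [0,1]²; time fills the z axis.
    let xy = umap[pair] * vec2f(dims.xy);
    let z = (f32(session) + 0.5) / f32(params.sessions) * f32(dims.z);
    let voxel = min(vec3u(vec3f(xy, z)), dims - 1u);
    textureStore(field, voxel, vec4f(psi[id.x], 0.0, 0.0, 0.0));
  }
`;
```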
The CUBE Six-Face Model as a Render Pipeline
Prior art: The CUBE architecture (Cube Commons), Cook/Carpenter/Catmull REYES pipeline (1987), modern Vulkan/Metal/D3D12 pipeline models.
The CUBE coordination architecture defines six faces: Agents, Messages, State, Schema, Policy, and Observability. Each face has a direct analog in a GPU render pipeline:
- Schema → Geometry Definition: The schema face defines what shapes exist in the coordination space, analogous to vertex buffers and index buffers defining renderable geometry.
- Messages → Command Buffers: Messages flowing through the bus are command buffers being submitted to the GPU—instructions that will be executed in order.
- State → Framebuffer: The current coordination state is the framebuffer: the accumulated result of all rendering operations so far.
- Agents → Compute Shaders: Agents are parallel workers executing independently over shared data—precisely the execution model of compute shader workgroups.
- Policy → Pipeline State Object: Policies govern what operations are permitted, analogous to blend modes, depth testing, and stencil operations in a pipeline state object.
- Observability → Readback: The observability face reads computed results back for inspection, analogous to reading pixels from the framebuffer to the CPU.
This mapping is not metaphorical—it is implemented as a working WebGPU render pipeline where the coordination architecture literally renders itself. The system’s own coordination patterns are the scene being rendered. Each face of the CUBE maps to a render pass in a multi-pass pipeline, and the final output is a real-time visualization of the coordination system’s internal state.
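Under these assumptions, a frame reduces to six render passes in one command submission; pass construction is elided and the FacePass shape is illustrative:

```typescript
// Sketch: the six CUBE faces as successive render passes in a single
// command submission. One FacePass per face: Schema, Messages, State,
// Agents, Policy, Observability.
interface FacePass {
  descriptor: GPURenderPassDescriptor; // render target for this face
  pipeline: GPURenderPipeline;         // Policy -> pipeline state object
  bindGroup: GPUBindGroup;             // Schema/State inputs
  vertexCount: number;
}

function renderCubeFrame(device: GPUDevice, faces: FacePass[]) {
  const encoder = device.createCommandEncoder();
  for (const face of faces) {
    const pass = encoder.beginRenderPass(face.descriptor);
    pass.setPipeline(face.pipeline);
    pass.setBindGroup(0, face.bindGroup);
    pass.draw(face.vertexCount);
    pass.end();
  }
  // Messages -> command buffers: the frame is submitted as ordered work.
  device.queue.submit([encoder.finish()]);
}
```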
Astro integration: Six interconnected Astro pages, each an island rendering one face of the pipeline. Navigation between them uses the View Transitions API for smooth morphing between faces of the cube.
Coordination Signed Distance Fields
Prior art: Hart (1996) sphere tracing, Quilez (2008–present) SDF compositions, Media Molecule’s Dreams (2020).
Signed Distance Fields (SDFs) represent geometry implicitly: at every point in space, the field stores the signed distance to the nearest surface. Positive values are outside, negative values are inside, and the zero-level set is the surface. SDFs compose naturally via min() (union), max() (intersection), and smooth_min() (blending).
In the coordination SDF renderer, each agent’s influence is modeled as an SDF primitive whose shape is defined by the agent’s information-theoretic footprint. When agents coordinate, their fields blend via smooth union; when they diverge, the fields separate. The Ψ metric controls the blending parameter: high synergy produces smooth blending (agents merge into a unified influence region), while low synergy produces sharp separations (agents maintain distinct influence boundaries).
Agents do not have avatars moving through a scene. Agents ARE implicit surfaces whose shapes are defined by their coordination relationships. The geometry IS the coordination state. Deformation of the surface in real time reflects changes in the underlying coordination dynamics.
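A minimal WGSL sketch of the composition operator, using the well-known polynomial smooth minimum and spherical footprints for brevity; the Ψ-to-blend-radius mapping is illustrative:

```typescript
// Ψ-driven SDF composition: Quilez-style polynomial smooth minimum,
// with the blend radius k scaled by Ψ so high-synergy pairs merge and
// low-synergy pairs stay distinct.
const coordinationSDFWGSL = /* wgsl */ `
  fn sdSphere(p : vec3f, center : vec3f, radius : f32) -> f32 {
    return length(p - center) - radius;
  }

  // Polynomial smooth min (Quilez); k is the blend radius.
  fn smin(a : f32, b : f32, k : f32) -> f32 {
    let h = clamp(0.5 + 0.5 * (b - a) / k, 0.0, 1.0);
    return mix(b, a, h) - k * h * (1.0 - h);
  }

  // Two agents' influence fields, blended by their pairwise Ψ. Spheres
  // stand in for the information-theoretic footprints described above.
  fn coordinationField(p : vec3f, a : vec3f, b : vec3f, psi : f32) -> f32 {
    let k = max(psi * 0.5, 1e-4); // Ψ→0: near-hard union; Ψ→1: soft merge
    return smin(sdSphere(p, a, 1.0), sdSphere(p, b, 1.0), k);
  }
`;
```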
Edge compute angle: SDF evaluation can be split between Cloudflare Workers (computing field coefficients from D1 coordination data at the edge) and WebGPU (ray-marching the field in the browser). The Workers compute the field parameters; the browser renders them. This is a novel split-rendering architecture.
Temporal Raymarching
Prior art: Standard SDF raymarching (Hart 1996), 4D raytracing (Cameron 1991), temporal coherence in animation.
Standard raymarching steps through space: the algorithm marches along a ray and evaluates an SDF at each step, finding the first surface intersection. Temporal raymarching is a novel variant that steps through time: the algorithm marches along a timeline and evaluates the coordination state at each step, accumulating color and opacity the way volume rendering accumulates density.
The camera exists at a specific point in agent-pair space. Looking “forward” along the time axis reveals where coordination is heading; looking “backward” reveals history. Occlusion is temporal rather than spatial: recent events occlude older ones unless the viewer “looks through” them (by adjusting the opacity transfer function to make recent events transparent).
This gives the user an intuitive sense of coordination momentum. A coordination trajectory that is accelerating toward convergence is visually dense and opaque ahead; one that is decelerating becomes transparent. The viewer develops spatial intuition for temporal dynamics.
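A minimal WGSL sketch of the temporal march, assuming the Ψ field is stored with time as the third texture axis; the exponential opacity falloff stands in for the adjustable transfer function described above:

```typescript
// Temporal raymarch: the ray parameter is time, and the accumulation
// loop of volume rendering runs along the session axis of the Ψ field.
const temporalMarchWGSL = /* wgsl */ `
  @group(0) @binding(0) var psiField : texture_3d<f32>; // (pair-x, pair-y, time)
  @group(0) @binding(1) var fieldSampler : sampler;

  // March forward in time from 'now' at a fixed point in agent-pair space.
  fn marchTime(pairPos : vec2f, now : f32, horizon : f32) -> vec4f {
    let steps = 128;
    let dt = (horizon - now) / f32(steps);
    var acc = vec4f(0.0);
    var t = now;
    for (var i = 0; i < steps && acc.a < 0.99; i++) {
      let psi = textureSampleLevel(psiField, fieldSampler,
                                   vec3f(pairPos, t), 0.0).r;
      // Temporal occlusion: opacity falls off with distance in time from
      // the viewpoint, the analog of spatial attenuation.
      let alpha = psi * exp(-4.0 * (t - now)) * 0.1;
      let color = mix(vec3f(1.0, 0.5, 0.1), vec3f(0.3, 1.0, 0.8), psi);
      acc += vec4f(color * alpha, alpha) * (1.0 - acc.a); // front-to-back
      t += dt;
    }
    return acc;
  }
`;
```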
Novel contribution: Temporal raymarching has not been previously described in the rendering literature. It applies the mathematical framework of volume rendering (the volume rendering integral) with time substituted for the spatial integration variable, producing a rendering of “when” rather than “where.”
GPU-Accelerated Partial Information Decomposition
Prior art: Williams/Beer (2010) PID framework, Lizier (2014) JIDT, Wibral et al. (2017) Partial Information Decomposition.
Partial Information Decomposition (PID) decomposes the mutual information between source variables and a target into synergistic, redundant, and unique components. In the Cube Commons context, sources are agent communication channels and the target is the coordination outcome. The Ψ = Syn/(Syn+Red) metric is derived from PID.
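For the two-source case used here, the Williams/Beer decomposition and the derived metric read:

```latex
I(S_1, S_2; T) = \mathrm{Red} + \mathrm{Unq}_1 + \mathrm{Unq}_2 + \mathrm{Syn},
\qquad
\Psi = \frac{\mathrm{Syn}}{\mathrm{Syn} + \mathrm{Red}}
```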
PID has historically been computed in Python/NumPy on the CPU. Porting PID to WGSL compute shaders enables:
- 60fps computation: The decomposition runs in real time, enabling interactive exploration of how Ψ changes as the user adjusts parameters.
- Zero CPU roundtrip: PID output feeds directly into the rendering pipeline via GPU storage buffers. The data never crosses the PCIe bus.
- The visualization IS the computation: The shader that computes Ψ is the same shader that produces the renderable field. There is no separate “compute then visualize” pipeline.
The WGSL implementation exploits the embarrassingly parallel structure of PID: the decomposition for each agent pair is independent and can be assigned to a separate workgroup. Entropy calculations involve logarithms over joint and conditional probability tables, which map naturally onto workgroup shared memory: each workgroup loads its agent pair's probability table into shared memory, computes entropies locally, and writes the decomposition to a storage buffer.
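A sketch of that workgroup pattern for the joint-entropy term; the remaining entropy terms and the redundancy measure itself (e.g. Williams/Beer I_min) follow the same staging-and-reduction shape, and the table layout is an assumption:

```typescript
// One agent pair per workgroup: stage the joint probability table in
// shared memory, accumulate -p·log2(p) per lane, then tree-reduce.
const pidEntropyWGSL = /* wgsl */ `
  const BINS : u32 = 16u;                  // discretization per variable
  var<workgroup> table : array<f32, 256>;  // BINS * BINS joint p(s1, s2)
  var<workgroup> scratch : array<f32, 64>; // per-lane partial sums

  @group(0) @binding(0) var<storage, read> jointTables : array<f32>;
  @group(0) @binding(1) var<storage, read_write> entropies : array<f32>;

  @compute @workgroup_size(64)
  fn main(@builtin(workgroup_id) wg : vec3u,
          @builtin(local_invocation_index) lane : u32) {
    let base = wg.x * BINS * BINS; // one pair per workgroup
    for (var i = lane; i < BINS * BINS; i += 64u) {
      table[i] = jointTables[base + i]; // stage table in shared memory
    }
    workgroupBarrier();

    // Each lane accumulates -p log2 p over a strided slice of the table.
    var h = 0.0;
    for (var i = lane; i < BINS * BINS; i += 64u) {
      let p = table[i];
      if (p > 0.0) { h -= p * log2(p); }
    }
    scratch[lane] = h;
    workgroupBarrier();

    // Tree reduction to the joint entropy H(S1, S2).
    for (var s = 32u; s > 0u; s = s >> 1u) {
      if (lane < s) { scratch[lane] += scratch[lane + s]; }
      workgroupBarrier();
    }
    if (lane == 0u) { entropies[wg.x] = scratch[0]; }
  }
`;
```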
Novel contribution: This is the first GPU implementation of Partial Information Decomposition. The fusion of PID computation and rendering into a single shader pipeline eliminates the latency that makes current PID tools unsuitable for interactive exploration.
The CRT Rendering Pipeline
Prior art: CRT shader emulation in retro gaming (Lottes 2011), the Cube Commons Instrument Panel design system.
The Cube Commons Instrument Panel aesthetic—established in “The Rendering Lineage: A Design Map”—is not merely a visual style. It is a rendering architecture implemented as a WebGPU post-processing pipeline that all Cube Commons visualizations pass through.
The CRT pipeline consists of four compute passes applied to an offscreen texture:
- Pass 1 — Phosphor Persistence: A temporal blur that naturally encodes recent history. Bright events leave afterimages that decay exponentially, giving the viewer an intuitive sense of the coordination system’s recent trajectory without explicit animation controls (sketched in code after this list).
- Pass 2 — Beam Deflection: In vector mode, the coordination data IS the deflection signal—agent Ψ values drive the beam position, producing oscilloscope-like traces. In raster mode, the beam scans normally, rendering field data.
- Pass 3 — Electron Scatter and Bloom: A physically-motivated convolution kernel simulates electron scatter in the phosphor layer, producing a natural glow around bright features. This naturally de-aliases sharp data transitions and provides visual emphasis on active coordination regions.
- Pass 4 — Scanline and Aperture Grille: A final pass adds the characteristic CRT scanline pattern, providing the Instrument Panel’s visual identity while also serving as a subtle spatial frequency reference for the viewer.
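Pass 1, sketched as a single compute pass; the decay constant and texture formats are illustrative rather than the Instrument Panel's actual tokens:

```typescript
// Phosphor persistence: blend the current render against an
// exponentially decaying copy of last frame's phosphor state.
const phosphorWGSL = /* wgsl */ `
  @group(0) @binding(0) var frame : texture_2d<f32>;   // current render
  @group(0) @binding(1) var history : texture_2d<f32>; // previous phosphor
  @group(0) @binding(2) var outTex : texture_storage_2d<rgba16float, write>;

  @compute @workgroup_size(8, 8)
  fn main(@builtin(global_invocation_id) id : vec3u) {
    let dims = textureDimensions(frame);
    if (id.x >= dims.x || id.y >= dims.y) { return; }
    let current = textureLoad(frame, id.xy, 0).rgb;
    let faded = textureLoad(history, id.xy, 0).rgb * 0.90; // per-frame decay
    // Phosphor keeps whichever is brighter: new excitation or afterglow.
    textureStore(outTex, id.xy, vec4f(max(current, faded), 1.0));
  }
`;
```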
The pipeline is implemented as a reusable Astro island component: <CRTRenderTarget />. Any child component renders into an offscreen WebGPU texture that then passes through the four-stage CRT pipeline. This component is used across every documentation page, dashboard, and interactive demo on cubecommons.org, providing visual coherence across the entire platform.
Enabling Technology Assessment
WebGPU Readiness
As of Q1 2026, WebGPU has reached production readiness. Chrome, Firefox (Windows and macOS), Safari (including iOS), and Edge all ship WebGPU by default. Global browser coverage exceeds 70%. Three.js r171+ and Babylon.js 5.0+ provide WebGPU renderers with automatic WebGL 2 fallback. Compute shaders—the critical capability for this work—are supported across all implementations.
The primary limitation is the 256-invocation-per-workgroup ceiling, which constrains the parallelism available for PID computation. For a 21-agent fleet (210 pairs), this is not a bottleneck—each pair maps to its own workgroup, and the ceiling bounds only the parallelism within a pair’s decomposition. For larger fleets (100+ agents, 4,950+ pairs), hierarchical dispatch strategies will be required.
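Dispatch sizing for that one-workgroup-per-pair layout is then a one-liner; a sketch:

```typescript
// One workgroup per agent pair, matching the PID shader's layout.
// For 21 agents this dispatches 210 workgroups.
function dispatchPID(pass: GPUComputePassEncoder, agents: number) {
  const pairs = (agents * (agents - 1)) / 2; // n choose 2
  pass.dispatchWorkgroups(pairs);
}
```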
Astro 6 and Cloudflare Integration
Astro 6, released March 2026, is the first version with production-grade Cloudflare integration. The dev server runs on Cloudflare’s workerd runtime, providing access to D1, KV, R2, and Durable Objects during local development. This dev/prod parity eliminates the class of bugs where coordination data queries work locally but fail at the edge.
Astro’s Content Security Policy (CSP) API—stable in v6—automatically hashes all scripts and styles including dynamically loaded WebGPU shader modules. This is essential for deploying GPU compute in a security-conscious context. Astro’s Starlight documentation framework provides the surrounding structure: autogenerated sidebars, Pagefind search, i18n, and MDX support for embedding React islands in Markdown documentation.
D1 as the Coordination Data Layer
Cloudflare D1 provides managed SQLite at the edge with a 10GB per-database limit, per-tenant database scaling at no extra cost, FTS5 full-text search, JSON extensions, and 30-day point-in-time recovery (Time Travel). The per-tenant model maps directly to the CUBE architecture: each agent enclave or fleet workspace can have its own D1 instance.
D1 serves as the published state layer—the read replica that documentation sites, public dashboards, and interactive visualizations query. The local bus.db instances within agent enclaves remain the authoritative stigmergic substrate. D1 is where coordination summaries are surfaced for rendering; bus.db is where coordination happens.
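A sketch of the query tier, assuming a hypothetical psi_summaries table and the Cloudflare adapter's runtime bindings:

```typescript
// Astro SSR API route reading Ψ summaries from D1. Table and column
// names are hypothetical; `DB` is the D1 binding configured in wrangler
// and exposed via the Cloudflare adapter's locals.runtime.env.
import type { APIRoute } from "astro";

export const GET: APIRoute = async ({ params, locals }) => {
  const db = locals.runtime.env.DB;
  const { results } = await db
    .prepare(
      `SELECT pair_id, session_id, psi
         FROM psi_summaries
        WHERE session_id = ?1
        ORDER BY pair_id`,
    )
    .bind(params.session)
    .all();
  return new Response(JSON.stringify(results), {
    headers: { "content-type": "application/json" },
  });
};
```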
Implementation Roadmap
Phase 1: Foundation (Q2 2026)
- Stand up cubecommons.org on Starlight + Cloudflare Pages + D1.
- Port the ANTS poster Ψ fleet divergence visualization to a WebGPU compute island.
- Implement the CRT post-processing pipeline as a reusable <CRTRenderTarget /> island component.
- Deploy the Instrument Panel design token system as CSS custom properties across the Starlight theme.
- Deliverable: Live documentation site with one interactive Ψ visualization rendered through the CRT pipeline.
Phase 2: Core Renderers (Q3 2026)
- Implement the WGSL PID compute shader and validate it against a Python reference implementation.
- Build the stigmergic field volume renderer with the 79-session dataset.
- Build the 4D coordination topology surface renderer with GPU-computed UMAP.
- Implement the CUBE-as-pipeline six-face navigation with View Transitions.
- Deliverable: Four interactive renderers deployed on cubecommons.org, all processing live D1 data through the CRT pipeline.
Phase 3: Novel Contributions (Q4 2026)
- Implement the coordination SDF renderer with split edge/browser computation.
- Implement temporal raymarching as a standalone rendering mode.
- Publish the GPU-PID shader as an open-source WGSL module.
- Write the companion paper: “Coordination as Rendering: Applying Computer Graphics Mathematics to Multi-Agent Information-Theoretic Visualization.”
- Deliverable: Complete rendering suite, open-source GPU-PID module, and submitted paper.
Relationship to Prior Work
This work draws on four decades of computer graphics mathematics while applying it in a domain where it has not previously been used. The principal intellectual debts are:
Volume rendering: Levoy’s 1988 formulation of direct volume rendering via ray casting provides the mathematical foundation for stigmergic field rendering. The volume rendering integral—which accumulates color and opacity along a ray through a density field—is applied here with Ψ as the density and the transfer function mapping coordination quality to visual attributes.
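In its standard emission-absorption form, the integral accumulates emitted color c, weighted by extinction τ and the transmittance accrued so far along the ray; here both c and τ come from the transfer function applied to Ψ:

```latex
C = \int_{0}^{L} c(s)\,\tau(s)\,\exp\!\left(-\int_{0}^{s} \tau(t)\,dt\right) ds
```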
Signed distance fields: Hart’s 1996 sphere tracing algorithm and Quilez’s extensive work on SDF composition provide the framework for coordination SDFs. The novel element is using information-theoretic quantities as the field values and coordination relationships as the composition operators.
Topological visualization: Edelsbrunner and Harer’s computational topology and persistent homology provide the mathematical framework for 4D coordination topology. The contribution here is applying these methods to the specific structure of multi-agent coordination data.
The REYES architecture: Cook, Carpenter, and Catmull’s REYES pipeline—developed at Lucasfilm/Pixar and documented in their 1987 SIGGRAPH paper—provides the inspiration for mapping the CUBE six-face model to a render pipeline. Both systems decompose a complex rendering task into a sequence of independent stages operating on shared data.
The PBRT literate programming tradition: Pharr, Jakob, and Humphreys’s Physically Based Rendering demonstrated that a complete renderer can be developed as a textbook—interweaving code and mathematical exposition. The Cube Commons documentation site aims for the same integration: the interactive visualizations are not illustrations OF the documentation; they are the documentation.
Conclusion
The rendering of multi-agent coordination state is an unsolved problem not because it is technically impossible but because the question has not been properly asked. Computer graphics asks “how do we make pictures of things?” The coordination rendering problem asks “how do we make pictures of relationships between autonomous agents coordinating through shared state?”
The answer, it turns out, is that the mathematical machinery already exists. Volume rendering, signed distance fields, topological manifolds, and raymarching were developed to render physical phenomena—smoke, clouds, terrain, materials. But these techniques are not specific to physical phenomena. They are specific to scalar and vector fields, implicit surfaces, and high-dimensional manifolds. Multi-agent coordination, when characterized information-theoretically through PID and the Ψ metric, produces exactly these mathematical objects.
The convergence of WebGPU, Astro’s Islands Architecture, and Cloudflare’s edge SQLite infrastructure makes it possible to deliver these renderers as precision-hydrated interactive components within static documentation pages. The reader does not install software, configure environments, or download datasets. They read a paragraph of explanation, scroll to an interactive visualization, and explore the coordination landscape directly.
This is the Cube Commons vision: coordination made visible, navigable, and interactive—rendered with the same mathematical rigor that Pixar applied to light transport, but applied to the invisible structure of multi-agent AI coordination.