Browser app served from GH Pages. No server-side rendering. `bridge-sw.js` service worker sets COOP/COEP headers on all responses to enable `crossOriginIsolated` (required for SharedArrayBuffer/Atomics).
- `bridge-sw.js` intercepts all same-origin `.wasm` requests and caches them in the Cache Storage API (`wasm-chunks` cache)
- Cache-first serving: SW returns cached response on hit; on miss, fetches, stores, and returns
- Version stored under the `cache-version` key: concatenation of `nodejs.chunks` + `|` + `layers.json` contents; on SW activate, fetches both manifests and deletes all cache entries if the version changed
- `withCoi()` applied to all responses (cached and network) to set COOP/COEP/CORP headers
- Worker WASM fetches happen inside Web Workers — not interceptable via `page.on('request')`; verify via `page.evaluate(() => caches.open('wasm-chunks').then(c => c.keys()))`
- Validated: 64/64 WASM requests served from SW cache (0 network hits) on second page load; opencode layer chunks cached correctly
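The cache-first + COI-header pattern above can be sketched as follows. This is a minimal sketch, not the real `bridge-sw.js`: the cache is injected so the logic is testable outside a service-worker context, and the body of `withCoi()` is an assumption based on the notes (clone the response, add the three headers).

```javascript
// Assumed shape of the project's withCoi() helper: re-wrap a Response
// with COOP/COEP/CORP headers so crossOriginIsolated can become true.
function withCoi(response) {
  const headers = new Headers(response.headers);
  headers.set('Cross-Origin-Opener-Policy', 'same-origin');
  headers.set('Cross-Origin-Embedder-Policy', 'require-corp');
  headers.set('Cross-Origin-Resource-Policy', 'cross-origin');
  return new Response(response.body, { status: response.status, headers });
}

// Cache-first serving: return the cached response on hit; on miss,
// fetch, store a clone, and return. All responses pass through withCoi().
async function serveWasm(request, cache, fetchFn = fetch) {
  const hit = await cache.match(request);
  if (hit) return withCoi(hit);                 // cache hit: no network
  const net = await fetchFn(request);
  await cache.put(request, net.clone());        // store for the next load
  return withCoi(net);
}

// In the real service worker this is wired up roughly as:
// self.addEventListener('fetch', (e) => {
//   const url = new URL(e.request.url);
//   if (url.origin === location.origin && url.pathname.endsWith('.wasm'))
//     e.respondWith(caches.open('wasm-chunks').then((c) => serveWasm(e.request, c)));
// });
```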
- GH Pages throttles concurrent large requests: fetching 32×50MB chunks via `Promise.all` triggers `TypeError: Failed to fetch` around chunk 19. Fix: batch fetches to 4 concurrent per batch (sequential batches); max 4 concurrent = 200MB peak, which avoids GH Pages throttling. Commit f66fde2b3c.
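The batching fix can be sketched as below — a minimal sketch, not the actual worker code (the fetcher is injectable here for testability; the real chunk fetches happen inside the worker blob):

```javascript
// Fetch URLs at most `batchSize` at a time; each batch is a small
// Promise.all, and batches run sequentially. A single Promise.all over
// all 32 chunks is what triggered "TypeError: Failed to fetch".
async function fetchChunksBatched(urls, batchSize = 4, fetchFn = fetch) {
  const results = [];
  for (let i = 0; i < urls.length; i += batchSize) {
    const batch = urls.slice(i, i + batchSize);
    results.push(...await Promise.all(batch.map((u) => fetchFn(u))));
  }
  return results; // same order as `urls`
}
```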
- WASM chunks served from `/containers/nodejs*.wasm`, count from `/containers/nodejs.chunks`
- `wc.js` exports `createSystem(id, opts)` returning `{id, status, boot(), spawnShell(), destroy(), onStatus()}`; `getSystem(id)` retrieves from the registry or returns null; `bootAssets()` caches CDN fetches once across all systems
- Each system owns its own `worker`, `stackWorker`, `nwStack`, `status`, and `cbs` Set — two `createSystem` calls produce two independent WASM workers
- `_registry` (Map) keyed by id; `createSystem` re-uses the existing entry if the id is already present (resumable pattern)
- `opts.mode: 'ephemeral' | 'persistent' | 'resumable'` — stored on the system object for lifecycle management by callers
- Backward-compat `boot()`, `spawnShell()`, `wcStatus()`, `onWcStatus()` delegate to the `'default'` system created at module init
- Stack worker (networking proxy) lives in `wc-stack-worker.js` — served as a static file, fetched as text at boot, blobbed into a Worker
- Two workers per system: the main TTY worker (runs the container WASM) + the stack worker (runs `c2w-net-proxy.wasm` for HTTP proxy)
- Networking via virtual IP `192.168.127.253:80`, env vars `http_proxy`/`https_proxy`/`SSL_CERT_FILE` injected at boot
- `window.newStack`, `window.openpty`, `window.TtyServer` come from CDN UMD scripts loaded once via `bootAssets()`
- `crossOriginIsolated` is false on first visit — the service worker installs, reloads the page, then it's true
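The `createSystem`/`getSystem` registry described above can be sketched as follows. This is a minimal sketch: worker/boot wiring and status transitions are omitted, and the method bodies are assumptions — only the names and the re-use-on-existing-id behavior come from the notes.

```javascript
// Module-level registry: one entry per system id.
const _registry = new Map();

function createSystem(id, opts = {}) {
  // Resumable pattern: re-use the existing entry if the id is known.
  if (_registry.has(id)) return _registry.get(id);
  const sys = {
    id,
    mode: opts.mode ?? 'ephemeral',   // 'ephemeral' | 'persistent' | 'resumable'
    status: 'created',
    cbs: new Set(),                   // per-system status listeners
    onStatus(cb) { sys.cbs.add(cb); },
    destroy() { _registry.delete(id); },
    // boot() / spawnShell() would wire up the TTY worker and stack worker here.
  };
  _registry.set(id, sys);
  return sys;
}

function getSystem(id) {
  return _registry.get(id) ?? null;   // null when the id was never created
}
```

Two `createSystem` calls with different ids yield two fully independent objects, which is what gives each terminal its own WASM worker later on.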
- `.github/workflows/build-wasm.yml` — triggers on every push to master (skips if the actor is the github-actions bot, to avoid loops)
- Installs c2w v0.8.4 linux-amd64, runs `c2w --net=browser node:23-alpine`, splits at 50MB, names chunks `nodejs00.wasm` etc.
- Writes the chunk count integer to `containers/nodejs.chunks`
- Commits and pushes `containers/` to master (requires `contents: write` permission)
- CI push pattern (both build-wasm and build-layers write-manifest): `git fetch origin master`, then `git reset --soft origin/master`, then `git restore --staged .`, then `git add <files>`, then `git commit`, then `git push origin HEAD:master`; the `restore --staged` is critical — without it, workflow files from other commits get staged and GitHub rejects the bot push with "refusing to allow a GitHub App to create or update workflow"
Local git commit fails when text file changes are made AFTER CI pushes WASM blobs. Root cause: the repo uses partial clone (`filter=blob:none`), so the large WASM files (~1.5 GB) are not stored locally; when git writes a new tree object it must traverse all parent blob SHAs, and the remote-only blobs cause "fatal: could not fetch from promisor remote" even with `GIT_NO_LAZY_FETCH=1`.
- Workaround: GitHub API direct commit
- `GET /repos/AnEntrypoint/opencrabs/git/ref/heads/master` → `headSha`
- `GET /repos/AnEntrypoint/opencrabs/git/commits/<headSha>` → `treeSha`
- `POST /git/blobs` with file content → `blobSha`
- `POST /git/trees` with `base_tree: treeSha` + new blob entry → `newTreeSha`
- `POST /git/commits` with message, tree, parents → `newCommitSha`
- `PATCH /git/refs/heads/master` with `newCommitSha` → push to master
- Locally: `git fetch origin master` + `git reset --soft origin/master` to sync
- The API creates commits without needing local blob objects. Auth: `gh auth token` for the Bearer token. Use only for text files after WASM CI pushes; prefer local git for code-only changes.
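The API flow above can be sketched with `fetch` as below. This is a sketch, not the project's actual script: the helper names are illustrative, and the payload field shapes (`content`/`encoding` for blobs, `base_tree`/`tree` for trees, `sha` for the ref patch) follow the GitHub REST API Git-database endpoints.

```javascript
const API = 'https://api.github.com/repos/AnEntrypoint/opencrabs';

// Thin wrapper over fetch: Bearer auth, JSON in/out, throw on error status.
async function gh(method, path, token, body) {
  const res = await fetch(`${API}${path}`, {
    method,
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'application/vnd.github+json',
    },
    body: body ? JSON.stringify(body) : undefined,
  });
  if (!res.ok) throw new Error(`${method} ${path}: ${res.status}`);
  return res.json();
}

// One text file -> one commit on master, no local blobs needed.
async function commitTextFile(token, path, content, message) {
  const headSha = (await gh('GET', '/git/ref/heads/master', token)).object.sha;
  const treeSha = (await gh('GET', `/git/commits/${headSha}`, token)).tree.sha;
  const blobSha = (await gh('POST', '/git/blobs', token,
    { content, encoding: 'utf-8' })).sha;
  const newTreeSha = (await gh('POST', '/git/trees', token, {
    base_tree: treeSha,
    tree: [{ path, mode: '100644', type: 'blob', sha: blobSha }],
  })).sha;
  const newCommitSha = (await gh('POST', '/git/commits', token,
    { message, tree: newTreeSha, parents: [headSha] })).sha;
  await gh('PATCH', '/git/refs/heads/master', token, { sha: newCommitSha });
  return newCommitSha;
}
```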
- `wc-stack-worker.js` uses `importScripts` (not ES modules) — must be plain global JS, no `import`/`export`
- `serveIfInitMsg` in the stack worker must gate `onmessage = null` — it fires on every message otherwise
- xterm-pty `loadAddon(master)` uses duck-typing, not instanceof — compatible with the `@xterm/xterm` scoped package
- `window.newStack` second argument is `IMAGE_PREFIX` (string path prefix), third is the chunk count (integer) — not a full URL array
- Blob workers have no base URL — `IMAGE_PREFIX` must be resolved to an absolute URL (`new URL(IMAGE_PREFIX, location.href).href`) before passing to `makeWorkerBlob`
- VM boots to `/bin/sh` (busybox) via `-entrypoint /bin/sh -- -i`; the `--` separator overrides the container's baked-in CMD (node); `-i` makes sh interactive; `makeWorkerBlob` 5th param `cmd` (default `['-i']`) replaces what follows `--` — pass `['sh','-c','exec myapp']` to launch a specific process; `cmd` is `JSON.stringify`'d into the blob template string at call time, not at worker eval time
- `wasiHack` (TTY fd_read/fd_write/poll_oneoff patches) is defined inline in the `makeWorkerBlob` blob source in `wc-workers.js` — it is NOT in the shared CDN scripts
- Worker blob source lives in `wc-workers.js` (exported); `wc.js` handles boot orchestration only
- `appMachine` context field is `showSystems` (not `showShell`); the `SHOW_SHELL` event is a kept alias that sets `showSystems` — reading `ctx.showShell` will be `undefined`; always read `ctx.showSystems`
- `appMachine` context `systems[]` shape: `{id, name, mode:'ephemeral'|'persistent'|'resumable', status, layers:[], terminals:[{id,label,cmd,wcId}], selectedTerminalId}`; `createAgentConfig` gains `systemMode` (default `'ephemeral'`)
- `components/systems-panel.js` exports `mount(el, actor)` — replaces `shell-panel.js`; left sidebar = systems list, right = terminal tabs + xterm; each terminal gets its own independent WASM worker (keyed by `wcId` in the terminal record)
- `components/term-view.js` exports `mount(el, sys)` — mounts a single xterm Terminal with CanvasAddon + FitAddon into `el`, connects via `sys.spawnShell()`; returns `{dispose()}`
- `sys._onProgress` callback: set by callers (e.g. term-view.js) before boot completes; wc.js forwards `{type:'wasm-progress',loaded,total}` (per-chunk WASM fetch via a `_pi` counter in the worker `Promise.all`), `{type:'opfs-init',path,loaded,total}`, and `{type:'desktop-init',path,loaded,total}`; callers clear it after boot; term-view.js renders cyan `Loading WASM N/M` for wasm-progress, yellow `<path>: N/M` for opfs/desktop-init, then clears the line when the shell is ready; no progress is posted on a cache hit — the handler must tolerate zero calls
- Terminal `wcId` field: each terminal spawns its own `createSystem(wcId, { mode, layers })` worker (layers from the parent system record) so multiple terminals = multiple independent workers with the correct layer WASM loaded; `_termSystems` Map in systems-panel tracks wcId → system; `window.__debug.systems` exposes it
- Ephemeral mode: when the last terminal of a system is closed, all wcId workers for that system are destroyed
- `createSystem(id, {layers:['opencode','claude']})` passes layer ids; `layers.json` is the source of truth for each layer's `mountPath` and `tools`; layers are OPFS mount descriptors, not WASM chunk lists
- `makeWorkerBlob(chunks, env, scripts, imagePrefix, cmd, extraUrls=[], mounts=[])` — 6th param `extraUrls` is appended to the chunk URL array; 7th param `mounts=[]` is an array of `{vmPath, opfsPath}` (OPFS) or `{vmPath, type:'desktop'}` (desktop) mount descriptors baked into the blob
- `wc-workers-desktop.js` exports `desktopBlobSrc(mounts)` — returns the blob template string for desktop FS support (desktopWalk, DesktopOpenFile, DesktopPreopenDir, _desktopHandles, _desktopFiles); imported by wc-workers.js and inlined into the blob source before the OPFS code
- Desktop mount flow: `opts.mounts` entries with `desktopHandle: FileSystemDirectoryHandle` are stripped to `{vmPath, type:'desktop'}` for the blob (handles are not JSON-serializable); handles are posted via `worker.postMessage({type:'desktop-handles', handles:[{vmPath, handle}]})` immediately after `new Worker()`; the worker IIFE awaits this message before setting `onmessage`; write-back via `{type:'desktop-write', dh, name, data:[]}` flushed in wc.js via `dh.getFileHandle(name, {create:true}).then(fh => fh.createWritable())`
- `_desktopHandles` Map in `systems-panel.js` (sysId → mounts[]) persists FileSystemDirectoryHandle across terminal spawns; `window.__debug.systems.desktopHandles` exposes it; the showDirectoryPicker UI is rendered only when `window.showDirectoryPicker` exists (Chrome/Edge); AbortError on cancel is swallowed, other errors are re-thrown
- Worker pre-init rendezvous always fires: even with zero desktop mounts, wc.js posts `{type:'desktop-handles', handles:[]}` so the worker never deadlocks waiting for the message
- `containers/layers.json` format: `{id, label, mountPath, tools:[{name, url, installCmd}]}` — mount descriptors for OPFS-backed tool storage; no chunk counts; source of truth for layer config
- Shell heredocs (`<<EOF`) inside YAML `run:` blocks break GitHub Actions YAML parsing — the unquoted content lines become bare YAML tokens; use `printf 'line1\nline2\n' > file` instead of heredocs in any `run:` step
- xterm CPR escape code pollution: xterm sends ESC[6n (cursor position request) during initialization; the response (ESC[row;colR) flows PTY master → ldisc → slave → WASM stdin before the shell is ready, causing `^[[1;5R` to appear as visible text in the terminal. Fix: in `wc.js` `spawnShell()`, call `onData({xtermAddon: master})` FIRST to load the PTY addon and flush xterm's init sequences, await 50ms, then call `new window.TtyServer(slave).start(worker, nwStack)`
- xterm canvas text validation: pixel counting on the `.xterm-text-layer` canvas (via `getImageData`) works ONLY when CanvasAddon is NOT active; when CanvasAddon is active, text renders via WebGL offscreen and the 2D canvas remains empty. Count pixels with r/g/b > 60 to detect rendered output. Do NOT use `page.screenshot()` for terminal validation — WebGL compositing is not captured by CDP.
- Tab switches destroy terminal state: systems-panel.js `mountTerminal()` calls `el.innerHTML = ''`, recreating xterm, so historical output is lost; the WASM worker keeps running but the new xterm starts blank. Validate on ONE tab only.
- Worker `recv-is-readable` message type: high-frequency signals (300/s) from the xterm-pty TtyServer to the WASM worker indicating stdin availability. Data flows via SharedArrayBuffer/Atomics, NOT postMessage. Worker silence (no postMessage output) is NORMAL for TTY workers; do not interpret a lack of messages as a hung worker.
- `.github/workflows/build-layers.yml` — matrix workflow building layer WASMs (opencode, claude, kilo, codex); each job: checkout → install c2w v0.8.4 → write Dockerfile (node:23-alpine + bun + layer npm package) → docker build → c2w build → split at 50MB → commit `layer-{id}*.wasm` + `layer-{id}.chunks`; uses the same CI push pattern as build-wasm.yml; each job has `continue-on-error: true` and skips if the actor is github-actions[bot]
- AI coding tool npm packages (opencode-ai, @kilocode/cli, @anthropic-ai/claude-code, @openai/codex) ship prebuilt platform binaries as optional dependencies with bun-optional postinstall scripts (e.g., `"bun ./postinstall.mjs || node ./postinstall.mjs"`); this means `npm install -g <pkg>` works WITHOUT bun on Alpine. Remove bun from the build-layers.yml Dockerfile to save ~300 MB per WASM image (opencode: 1.511 GB → ~855 MB). This is critical for staying under Chrome's 1 GB WebAssembly module size hard limit (1,073,741,824 bytes). Chrome enforces this limit on both `WebAssembly.instantiate()` and `WebAssembly.compileStreaming()` — modules exceeding it fail with `"size > maximum module size (1073741824)"` regardless of API used.
- opencode WASM memory sizing: the c2w default linear memory is 200MB; the opencode musl binary is ~155MB → OOM on boot. Fix: add `--build-arg=WASM_MEM_SIZE=536870912` (512MB) to the c2w invocation in the build-layers.yml opencode job only. This allocates memory at runtime, not in the binary. Other layers have smaller binaries and don't need this flag.
- opencode binary selection on Alpine: `npm install -g opencode-ai` creates a wrapper at `/usr/local/bin/opencode` that caches the binary path in `.opencode`. It finds the glibc binary (167MB) instead of the musl binary (162MB) and fails with "/bin/sh: not found" (missing /lib64/ld-linux-x86-64.so.2). Fix: after npm install in the Dockerfile, run `cp /usr/local/lib/node_modules/opencode-ai/node_modules/opencode-linux-x64-musl/bin/opencode /usr/local/lib/node_modules/opencode-ai/bin/.opencode` to replace the glibc binary with the musl one, so the wrapper selects the correct binary at runtime.
- opencode TUI behavior in the WASM terminal: opencode is a full TUI app — `opencode --version` does NOT print a version string and exit like traditional CLI tools; instead it initializes a terminal TUI (alternate screen buffer), waits for network/config, and runs until explicitly exited. `timeout N opencode` returns Terminated after N seconds. Validation: use `which opencode` (returns /root/.local/bin/opencode) and `od -c /root/.local/bin/opencode | head -1` (shows the ELF magic `177 E L F`) to confirm the binary is present and is a valid ELF executable. Validated: the binary executes in the browser WASM Alpine terminal via OPFS install (wc-layer-install.js), GCC 15 libs baked into the WASM base image, 5 OPFS execute-permission patches applied.
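The xterm canvas pixel-count validation described earlier (scan `.xterm-text-layer` via `getImageData`, count pixels with r/g/b > 60) can be sketched as a pure function over the RGBA byte array. A minimal sketch — the real check runs in the page via `page.evaluate`; only the counting logic is shown here:

```javascript
// Count "bright" pixels in an ImageData-like object: any pixel whose
// r, g, or b channel exceeds the threshold counts as rendered text.
// Valid only when CanvasAddon is NOT active (2D layer is otherwise empty).
function countRenderedPixels(imageData, threshold = 60) {
  const { data } = imageData;                 // RGBA bytes, 4 per pixel
  let count = 0;
  for (let i = 0; i < data.length; i += 4) {
    if (data[i] > threshold || data[i + 1] > threshold || data[i + 2] > threshold) {
      count++;
    }
  }
  return count;
}

// In the page (illustrative):
// const canvas = document.querySelector('.xterm-text-layer');
// const img = canvas.getContext('2d').getImageData(0, 0, canvas.width, canvas.height);
// const hasOutput = countRenderedPixels(img) > 0;
```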