Bring ussycode substantially closer to the practical end-state of replicating exe.dev's product experience while preserving its self-hosted / Ussyverse identity.
This plan is based on:
- live CLI reconnaissance against exe.dev via `ssh exe.dev help` and `ssh exe.dev help <command>`
- a site crawl of https://exe.dev documentation
- direct comparison against the current `ussycode` repository state
- current verified issues in `ussycode` (API wiring, browser auth mismatch, failing gateway test, incomplete agent join path, partial cluster wiring)
- independent codebase verification (2026-03-20): all claims verified against 62 Go files, 80+ tests, 7 PROGRESS docs. See corrections log below.
This document is a plan only. It does not implement changes.
The following corrections were made after independent verification against the actual codebase:
- Blocker 6 revised: README is current (not stale) and uses correct `ussycode` naming. Changed to "needs parity roadmap restructuring."
- Blocker 1 expanded: API handler passes nil for executor AND KeyResolver AND Config (all three, not just executor).
- Section 3.1B upgraded: LLM gateway described as "substantially complete" (5 providers, BYOK, rate limiting, usage tracking, SSE) instead of "roughly correct."
- Section 3.1D upgraded: API token system (usy0/usy1) described as fully implemented with current limitations noted.
- Section 2.3 added: Development track completion status showing Tracks A, B, E, F, G all complete; Track C 1/5 done.
- Phase 0 expanded: Added 4 new tasks (wire KeyResolver/Config, audit_logs, proxy tests, NetworkManager interface).
- Phase 1.2 updated: Documented that `new` only supports `--name`/`--image`; VMStartOptions.Env exists but is disconnected.
- Phase 2.1 updated: Added current share model state, including the missing RemoveShareLink function and the proxy link-token redemption gap.
- Phase 3.2 updated: Token system described as implemented; focus shifted to permission enforcement and context injection.
- Phase 5.3 updated: LLM gateway described as substantially complete; scope narrowed to hardening/docs.
- Phase 6.1 updated: Quota system described as implemented; scope narrowed to team extension.
- Phase 6.2 updated: Noted that zero team infrastructure exists (greenfield).
- Milestone A expanded: Added 3 new prerequisite tasks.
exe.dev is not "just VMs over SSH." It is a tightly integrated product made of:
- SSH-first VM lifecycle management
- persistent, normal Linux machines
- automatic HTTPS proxying with auth-aware sharing
- human-usable and agent-usable APIs
- a default opinionated image (`exeuntu`) that includes agent tooling
- browser login / magic-link based web UX
- per-VM app sharing + public/private control + share links + email invites
- built-in metadata-side LLM and email gateways
- agent-centric workflows via Shelley, AGENTS.md, prompt-on-create, and direct agent hosting
- clear hosted-product operational constraints (resource pools, pricing, teams, SSO, burst semantics)
ussycode already has meaningful overlap with exe.dev:
- SSH gateway and REPL command surface
- SQLite-backed state model
- VM manager, image handling, networking, proxy/auth concepts
- metadata service
- admin panel package
- API package
- magic-token DB support
- LLM/email gateway packages
- templates, tutorial, arena, deployment assets, and docs
But it does not yet deliver exe.dev parity because several critical product flows are still incomplete, mismatched, or only partially wired.
ussycode has many of the components of exe.dev, but not yet the cohesive, verified product loop that makes exe.dev feel like one seamless system.
Top-level commands:

```
help
doc
ls
new
rm
restart
rename
tag
cp
share
whoami
ssh-key
shelley
browser
ssh
exit
```

Subcommands:

```
share show
share port
share set-public
share set-private
share add
share remove
share add-link
share remove-link
share receive-email
share access
ssh-key list
ssh-key add
ssh-key remove
ssh-key rename
shelley install
shelley prompt
```
- command names are short, memorable, and shell-native
- JSON mode is pervasive (`--json` on many commands)
- product UX is designed for both humans and automation
- shell help is documentation-quality, not just parser output
`new` supports:
- custom image
- env injection
- name override
- container command override
- prompt-on-create for agent bootstrapping
- `share` includes public/private, invite by email, link sharing, port selection, inbound email, and team access control
- browser login is treated as a first-class capability
- agent lifecycle is a core user story, not an afterthought
- live output from `ssh exe.dev help`
- live output from `ssh exe.dev help <command>`
From https://exe.dev/ and what-is-exe:
- persistent disks
- normal Linux machines
- fast provisioning
- HTTPS by default
- auth handled by the platform
- agent-friendly environments
- ability to clone environments quickly
- shared underlying resources across multiple VMs
From proxy, sharing, cnames, login-with-exe:
- platform terminates TLS and proxies to per-VM app ports
- private by default, public if explicitly changed
- alternate internal ports remain access-controlled unless the primary public port is selected
- users can share access via:
- public mode
- direct email invite
- share links
- custom domains support subdomain and apex-domain mapping
- identity can be forwarded to apps using HTTP headers like:
- `X-ExeDev-UserID`
- `X-ExeDev-Email`
- special auth routes exist at the VM domain level:
- `__exe.dev/login`
- `__exe.dev/logout`
From api and https-api:
- primary API is SSH itself
- HTTPS API is simply `POST /exec` with SSH-style command bodies
- JSON output is automatic in the HTTPS API path
- auth tokens are signed locally with SSH private keys
- auth model supports:
- bearer token to platform API
- bearer/basic auth to VM proxy
- context-carrying signed VM tokens
- short opaque tokens derived from long signed tokens
- permissions can be scoped by commands and expiry
From receive-email and send-email:
- inbound mail can be toggled per VM
- inbound mail goes to `~/Maildir/new/`
- mail is delivered to VM-name-based addresses
- backlog safety limits exist
- outbound mail is allowed to the VM owner via metadata-side gateway
- outbound is explicitly rate-limited
From shelley/*, use-case-agent, use-case-openclaw, guts, agents-md:
- default image includes agent-oriented tooling
- agent is web-facing and mobile-usable
- LLM gateway is built into the platform at metadata service addresses
- BYOK exists for agent/provider override
- AGENTS.md is part of product behavior
- users are expected to run OpenClaw / Codex / Claude-style agents on these VMs
- default image is opinionated, productive, and application-oriented
From faq/how-exedev-works:
- hosted product currently uses Cloud Hypervisor
- VM boot is container-image based rather than traditional long-lived VM image building
- HTTPS reverse proxy and SSH routing hide lack of public per-VM IPs
- there is not a built-in east-west private network between VMs by default
ussycode does not need to copy exe.dev's exact infrastructure choices (e.g. Cloud Hypervisor) to replicate its product semantics. It does need to replicate the user-facing capabilities and reliability guarantees.
Per PROGRESS-A.md through PROGRESS-G.md, the following tracks are complete:
| Track | Description | Status | Tests |
|---|---|---|---|
| A | Core Hardening (rename, config, ZFS, nftables) | ✅ COMPLETE | 14 ZFS + 10 nftables |
| B | Ussyverse Server Pool (gRPC proto, agent, PKI, heartbeat, WireGuard stub, scheduler, installer) | ✅ COMPLETE | 7 PKI + 10 scheduler |
| C | UX & Onboarding | Tutorial done; browser/doc/templates/MOTD pending | |
| D | Gateway Services | (merged into Track E) | - |
| E | API & Admin (HTTPS API, admin panel, trust/quotas, custom domains) | ✅ COMPLETE | 18 API + 27 admin + 7 quota + 8 domain |
| F | Ussyverse Integration (arena, templates, branding) | ✅ COMPLETE | Arena tests pass |
| G | Deployment & Ops (Ansible, installers, docs) | ✅ COMPLETE | No Go files modified |
Important: 5 of 6 tracks are complete. 80+ tests across 12 suites. The parity roadmap builds on top of this foundation - it does not need to re-implement these subsystems.
Relevant files:
`internal/ssh/gateway.go`, `internal/ssh/shell.go`, `internal/ssh/commands.go`, `internal/ssh/browser.go`, `internal/ssh/tutorial.go`, `internal/ssh/community.go`, `internal/ssh/arena.go`
Status:
- command-oriented shell exists
- many exe.dev-like verbs exist already (`new`, `ls`, `rm`, `restart`, `rename`, `tag`, `cp`, `share`, `ssh-key`, `browser`, `ssh`)
Relevant files:
`internal/gateway/metadata.go`, `internal/gateway/llm.go`, `internal/gateway/email.go`, `internal/gateway/email_send.go`
Status:
- fully aligned with exe.dev's metadata-side gateway pattern
- LLM gateway is substantially complete: 5 providers (Anthropic, OpenAI, Fireworks, Ollama, VLLM), BYOK with AES encryption, per-user token bucket rate limiting, usage tracking in DB, SSE streaming passthrough - all tests pass
- Inbound email is substantially complete: SMTP server with Maildir delivery, per-VM rate limiting (100/hr), unread quota enforcement (1000) - 8/9 tests pass (only DotStuffing test fails)
- Outbound email is substantially complete: owner-only restriction, SMTP relay, per-VM rate limiting (10/hr), RFC 2822 formatting - all tests pass
- metadata service serves AWS-style metadata, VM/user metadata, SSH keys, env, and proxies to LLM/email gateways
Relevant files:
`internal/proxy/caddy.go`, `internal/proxy/auth.go`, `internal/admin/admin.go`, `internal/ssh/browser.go`
Status:
- reverse proxy + auth proxy + browser access concepts exist
- custom-domain support exists in current progress docs / command layer
Relevant files:
`internal/api/handler.go`, `docs/api.md`
Status:
- ussycode mirrors exe.dev's "SSH commands over HTTP" model
- both token formats are implemented: `usy0.*` (stateless, SSH-signed with exp/nbf/perms/nonce/ctx) and `usy1.*` (DB-backed opaque)
- `POST /exec`, `GET /health`, `GET /version` endpoints exist
- per-fingerprint rate limiting (60 req/min) implemented
- 14+ API tests exist with mock executor
- runtime wiring is broken: executor, KeyResolver, and Config are all passed as nil in main.go
Relevant files:
`internal/gateway/llm.go`, `templates/*`, `internal/ssh/commands.go`, `internal/ssh/browser.go`, `internal/ssh/community.go`, `internal/ssh/arena.go`
Status:
- repo is pointed toward agent hosting and Ussyverse workflows
- Shelley-equivalent semantics are not yet cohesive
Relevant files:
`cmd/ussycode/main.go` (line 162), `internal/api/handler.go`
Problem:
- `api.NewHandler(database, nil, nil, logger, nil)` is wired with nil executor, nil KeyResolver, AND nil Config
- `handleExec` directly calls `h.exec.Execute(...)`, a nil-pointer panic at runtime
- rate-limiting config is not passed (nil Config)
- token verification for `usy0.*` tokens cannot resolve SSH public keys (nil KeyResolver)
- parity with exe.dev's HTTPS API is therefore blocked at runtime
Impact:
- ussycode cannot yet honestly claim exe.dev-like programmatic API behavior
- ALL three nil arguments must be wired, not just the executor
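One way to make this class of bug impossible is a fail-fast constructor: refuse nil dependencies at startup instead of panicking on the first request. The sketch below is illustrative; the interface and struct names are stand-ins for the repo's real `Executor`, `KeyResolver`, and `Config` types, whose actual shapes are not confirmed here.

```go
package main

import (
	"errors"
	"fmt"
)

// Executor and KeyResolver are illustrative stand-ins for the two
// dependencies the plan says are currently passed as nil in main.go.
type Executor interface {
	Execute(cmd string) (string, error)
}
type KeyResolver interface {
	ResolveKey(fingerprint string) ([]byte, error)
}
type Config struct {
	RateLimitPerMin int
}

type Handler struct {
	exec Executor
	keys KeyResolver
	cfg  *Config
}

// NewHandler rejects nil dependencies up front, so a miswired main.go
// fails at boot with a clear message rather than panicking per request.
func NewHandler(exec Executor, keys KeyResolver, cfg *Config) (*Handler, error) {
	if exec == nil || keys == nil || cfg == nil {
		return nil, errors.New("api: executor, key resolver, and config are all required")
	}
	return &Handler{exec: exec, keys: keys, cfg: cfg}, nil
}

func main() {
	if _, err := NewHandler(nil, nil, nil); err != nil {
		fmt.Println("startup rejected:", err)
	}
}
```

The same pattern extends naturally to a startup health check that exercises one `/exec` round trip before the gateway accepts traffic.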
Relevant files:
`internal/ssh/browser.go`, `internal/admin/admin.go`, `internal/db/queries.go`
Problem:
- browser command creates a token and emits `https://<domain>/__auth/magic/<token>`
- admin panel expects `/admin/login/callback?token=...`
- no verified bridge handler exists for the generated path
Impact:
- browser-based product parity is broken
- first-class web UX cannot be trusted end-to-end
Relevant files:
`internal/ssh/commands.go`, `internal/proxy/auth.go`, `internal/proxy/caddy.go`, `internal/db/models.go`, `internal/db/queries.go`
Problem:
- ussycode supports several share concepts, but current naming and semantics differ from exe.dev
- email invites, link shares, public/private, team access, port selection, and inbound email need to be normalized into one coherent model
Impact:
- web sharing parity is incomplete
- auth-aware app-hosting UX remains fragmented
Relevant files:
`internal/storage/zfs.go`, `internal/vm/manager.go`, `internal/vm/image.go`
Problem:
- the code contains a StorageBackend abstraction, but VM manager still uses direct ext4 file copying / disk creation flows
- current runtime behavior does not cleanly reflect the "persistent, cloneable disk" product story
Impact:
- parity with exe.dev's fast clone / persistent disk / predictable storage semantics remains uncertain
Relevant files:
`images/ussyuntu/*`, `templates/*`, `internal/gateway/llm.go`, `internal/ssh/commands.go`
Problem:
- ussycode has pieces of an agent platform, but not a single coherent default "this VM is ready for agents immediately" story comparable to exe.dev + Shelley
Impact:
- agent-hosting parity remains conceptual, not productized
Relevant files:
`README.md`, `docs/*`
Problem:
- README is current and uses `ussycode` naming (the rename from `exedevussy` was completed in Track A.1), but it does not surface the exe.dev-parity roadmap or product positioning
- 5 comprehensive docs exist (api, architecture, getting-started, self-hosting, contributing-compute) but none describe the parity target
- users and contributors cannot infer the parity roadmap from current docs
Impact:
- moderate onboarding friction for parity-aware contributors (docs themselves are fine for general use)
The implementation roadmap should be guided by these principles.
Prioritize parity in:
- provisioning UX
- persistent disk semantics
- clone/copy behavior
- share/auth flows
- HTTP proxy behavior
- browser login behavior
- API shape
- agent readiness
Do not block parity work on switching hypervisors unless Firecracker specifically prevents the product behavior.
exe.dev's strongest idea is that the product's human API and automation API are the same command model.
Implication for ussycode:
- every important operation should be available in the SSH CLI first
- HTTP API should remain a thin transport over that command model
- admin UI should use the same underlying policies and capabilities, not a parallel hidden control plane
exe.dev parity requires that "run app → get URL → share URL → control access" feel native.
Implication:
- proxy selection, auth, public/private state, port behavior, share links, and custom domains must become a single cohesive subsystem
exe.dev's default image is not empty infrastructure - it is a productive agent/developer environment.
Implication:
- `ussyuntu` needs to become clearly competitive with `exeuntu`
- initial prompt / agent support / app-server defaults should feel intentional
Per project instructions, all new work must include telemetry.
Implication:
- every new user-facing operation in this roadmap needs structured logs, latency metrics, and identifiable request/user/VM context
- `/exec`, proxy auth, share changes, token issuance, VM lifecycle, email delivery, and browser-login flows must all be instrumented
This phase is mandatory. Do not build more parity features on top of broken core flows.
- make current product claims true
- remove obvious runtime mismatches
- establish a reliable baseline for parity work
- Fix `POST /exec` runtime wiring by providing a real command executor to `internal/api/handler.go`
- Also wire KeyResolver and Config (both nil) into the API handler; rate limiting and `usy0.*` token verification depend on these
- Introduce a shared command execution layer used by SSH shell and HTTP API
- Fix browser magic-link flow so generated URLs land in a real, verified auth path (browser.go emits `/__auth/magic/<token>` but no handler exists; admin expects `/admin/login/callback?token=...`)
- Decide whether admin login and general browser login share the same token type or use separate paths
- Fix `internal/gateway` failing `TestSMTPServer_DotStuffing` (Maildir path issue in test setup)
- Restructure `README.md` to surface the parity roadmap (current README uses correct naming but doesn't describe parity goals)
- Re-run `go build ./...` and `go test ./...` until a green baseline is restored (currently only 1 test fails)
- Add telemetry foundation: OTEL/Maple ingest wiring; instrument `/exec`, browser token creation/consumption, and SMTP delivery failures (zero observability exists today)
- Wire the audit_logs table into admin operations (migration 007 created the table but no code writes to it)
- Add tests for the `internal/proxy` package (zero test coverage on critical auth path)
- Extract a NetworkManager interface to match the StorageBackend/FirewallManager pattern for testability
- `curl -X POST /exec` works with a real executor, KeyResolver, and Config
- browser-generated URL logs the user in successfully and predictably
- `go test ./...` passes (all packages)
- smoke test documented for API + browser + SMTP paths
- telemetry events visible in local dev for at least one instrumented path
This phase is about the minimum lovable parity loop:
SSH in → create VM → SSH into it → run app → get HTTPS URL → control access → script it
Relevant files:
`internal/ssh/commands.go`, `internal/ssh/browser.go`, `internal/ssh/tutorial.go`, `docs/getting-started.md`
Tasks:
- Audit command names, arguments, and help output against exe.dev
- Add missing parity behaviors for:
- `new --env`
- `new --command`
- `new --prompt`
- `ssh-key rename`
- `share show`
- `share add-link`
- `share remove-link`
- `share port`
- `share set-public`
- `share set-private`
- `share receive-email`
- `share access allow|disallow`
- Decide which exe.dev commands should map directly and which should intentionally remain Ussyverse-specific
- Ensure help text is polished and script-friendly
- Ensure `--json` behavior is available and consistent across all major commands
Validation:
- `help` output documents parity-grade commands and options
- command help pages mirror actual runtime behavior
- JSON mode contract is documented and stable
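One way to make the `--json` contract stable is a single envelope type shared by every command. The shape below (exactly one of `result`/`error` populated, `ok` always present) is a proposal for illustration, not the repo's current format.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envelope is one possible stable --json contract: ok is always
// present, and exactly one of result or error is populated.
type envelope struct {
	OK     bool   `json:"ok"`
	Result any    `json:"result,omitempty"`
	Error  string `json:"error,omitempty"`
}

func ok(v any) string {
	b, _ := json.Marshal(envelope{OK: true, Result: v})
	return string(b)
}

func fail(msg string) string {
	b, _ := json.Marshal(envelope{OK: false, Error: msg})
	return string(b)
}

func main() {
	fmt.Println(ok(map[string]string{"vm": "demo", "state": "running"}))
	// {"ok":true,"result":{"state":"running","vm":"demo"}}
	fmt.Println(fail("VM limit reached (3/3)"))
	// {"ok":false,"error":"VM limit reached (3/3)"}
}
```

Scripts can then branch on `.ok` uniformly (`ussycode ls --json | jq -e .ok`) instead of parsing per-command error text.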
Relevant files:
`internal/ssh/commands.go` (cmdNew at line 172), `internal/vm/manager.go`, `internal/vm/firecracker.go` (VMStartOptions.Env field exists at line 261 but is unused), `internal/vm/image.go`, `images/ussyuntu/*`
Current state (verified):
- `new` supports only `--name=` and `--image=` flags
- VM defaults: 1 vCPU, 512MB RAM, 5GB disk, image "ussyuntu"
- VMStartOptions struct has an `Env` field but Manager.CreateAndStart() doesn't accept or pass environment variables
- No `--command` or `--prompt` support exists at any layer
- Quota enforcement works (trust-level-based VM limits)
- Proxy route auto-registration on creation works
Tasks:
- Verify `new` defaults are fast and predictable
- Add env injection support: thread Env from cmdNew through Manager.CreateAndStart() to VMStartOptions (plumbing exists in the Firecracker backend but is disconnected)
- Add container-command override behavior if image-driven boot supports it
- Add prompt-on-create flow that bootstraps default agent environment or first-run automation
- Ensure created VM has a stable SSH destination and predictable public URL
- Expose provisioning failures with clear, user-readable messages
- Instrument create/start timing from command issue → usable VM
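The env-threading task reduces to passing one options struct through each layer instead of individual fields. A minimal sketch, with type and method names mirroring those cited in the plan but implemented as illustrative stand-ins:

```go
package main

import (
	"fmt"
	"strings"
)

// VMStartOptions mirrors the struct the plan says already carries an
// Env field in the Firecracker backend; this is a stand-in.
type VMStartOptions struct {
	Name  string
	Image string
	Env   map[string]string
}

type Manager struct{}

// CreateAndStart is the layer the plan identifies as dropping Env
// today; accepting the full options struct closes the gap.
func (m *Manager) CreateAndStart(opts VMStartOptions) string {
	return fmt.Sprintf("vm %s (image %s) with %d env vars",
		opts.Name, opts.Image, len(opts.Env))
}

// parseEnvFlags turns repeated --env KEY=VALUE values into the map
// cmdNew would hand to the manager.
func parseEnvFlags(flags []string) map[string]string {
	env := map[string]string{}
	for _, f := range flags {
		if k, v, found := strings.Cut(f, "="); found {
			env[k] = v
		}
	}
	return env
}

func main() {
	m := &Manager{}
	env := parseEnvFlags([]string{"PORT=8080", "MODE=dev"})
	fmt.Println(m.CreateAndStart(VMStartOptions{Name: "demo", Image: "ussyuntu", Env: env}))
	// vm demo (image ussyuntu) with 2 env vars
}
```

Using `strings.Cut` keeps values containing `=` intact (e.g. `--env TOKEN=a=b`), which matters for injected secrets.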
Validation:
- `new` creates a ready-to-use VM with minimal arguments
- prompt-on-create can bootstrap a sample app or agent task
- provisioning telemetry shows success/failure and latency
Relevant files:
`internal/proxy/caddy.go`, `internal/proxy/auth.go`, `internal/ssh/commands.go`, `internal/db/models.go`, `internal/db/queries.go`
Tasks:
- Define canonical per-VM URL scheme and document it
- Implement single primary proxied port semantics
- Implement alternate port forwarding behavior for allowed ranges if desired
- Add/verify automatic port discovery strategy
- Ensure auth proxy supports private-by-default web access
- Ensure public mode is explicit and reversible
- Ensure proxy passes correct forwarded headers
- Add request correlation logs for auth proxy decisions
Validation:
- app on VM becomes reachable over HTTPS without manual Caddy config
- public/private flips work correctly
- proxy headers are documented and tested
This is the highest product-value parity layer after basic VM creation.
Relevant files:
`internal/ssh/commands.go`, `internal/proxy/auth.go`, `internal/db/models.go`, `internal/db/queries.go`, `internal/admin/admin.go`
Current state (verified):
- Share model exists with VMID, SharedWith (user), LinkToken, IsPublic fields
- DB has: ShareVMWithUser, ShareVMWithLink, RemoveShare, SharesByVM, ShareByLinkToken, SetVMPublic, IsVMPublic, HasShareAccess
- Missing from DB: no RemoveShareLink function (links can be created but never revoked)
- Missing from proxy: the auth proxy checks ownership and share-by-user but has no flow for redeeming a share-link token and granting a session (`ShareByLinkToken` exists in the DB, but `internal/proxy/auth.go` never calls it)
- Custom domain support already exists (`share cname`, `cname-verify`, `cname-rm`) - this is a parity advantage
Tasks:
- Define a single share model covering:
- owner-only access
- public access
- invite-by-email access
- share-link access
- team access
- SSH/Shelley/team-access distinctions
- Ensure command names and DB schema align with this model
- Add link lifecycle operations and proper revocation semantics
- Define what access is granted by link redemption versus direct invite
- Decide whether accepted link access is durable or temporary
- Add audit logs for all share mutations
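The "single share model" task is essentially one access-decision function evaluated by both the proxy and the SSH layer. A hedged sketch, with struct fields mirroring the verified model (VMID, SharedWith-style user shares, LinkToken, IsPublic) plus the planned team dimension; the types and the "team" reason code are illustrative additions:

```go
package main

import "fmt"

type VM struct {
	ID      string
	OwnerID string
	Public  bool
}

// Share mirrors the verified model fields; TeamID anticipates the
// team extension described in the plan.
type Share struct {
	VMID   string
	UserID string // direct invite
	Link   string // redeemable link token
	TeamID string
}

type Requester struct {
	UserID string
	Teams  []string
	Link   string // link token presented, if any
}

// Decide returns an allow/deny plus a reason code, matching the
// proxy reason codes the telemetry section lists (public, owner,
// shared, token, denied); "team" is an assumed addition.
func Decide(vm VM, shares []Share, r Requester) (bool, string) {
	if r.UserID != "" && r.UserID == vm.OwnerID {
		return true, "owner"
	}
	if vm.Public {
		return true, "public"
	}
	for _, s := range shares {
		if s.VMID != vm.ID {
			continue
		}
		if s.UserID != "" && s.UserID == r.UserID {
			return true, "shared"
		}
		if s.Link != "" && s.Link == r.Link {
			return true, "token"
		}
		for _, t := range r.Teams {
			if s.TeamID != "" && s.TeamID == t {
				return true, "team"
			}
		}
	}
	return false, "denied"
}

func main() {
	vm := VM{ID: "vm1", OwnerID: "alice"}
	shares := []Share{{VMID: "vm1", UserID: "bob"}, {VMID: "vm1", Link: "tok123"}}
	fmt.Println(Decide(vm, shares, Requester{UserID: "bob"}))     // true shared
	fmt.Println(Decide(vm, shares, Requester{Link: "tok123"}))    // true token
	fmt.Println(Decide(vm, shares, Requester{UserID: "mallory"})) // false denied
}
```

Centralizing the decision also gives the audit log and the allow/deny telemetry a single place to record reason codes consistently.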
Validation:
- full share matrix is documented and testable
- invites, links, and public mode behave distinctly and correctly
- access rules are enforced consistently by proxy and SSH layer
Relevant files:
`internal/proxy/auth.go`, `internal/ssh/browser.go`, `internal/admin/admin.go`, `internal/db/queries.go`
Tasks:
- Add VM-domain auth endpoints analogous to exe.dev's `__exe.dev/login` and logout flows
- Decide namespacing for special auth routes (e.g. `__ussycode/login`)
- Inject stable user identity headers to proxied apps
- Document the header contract for applications
- Support both public proxies with optional login and private proxies with mandatory login
- Add local-dev testing helpers or sample middleware docs for app developers
Validation:
- a proxied app can reliably identify logged-in users from headers
- login/logout round-trip works on VM domains
- headers are absent when expected and present when expected
Relevant files:
`internal/ssh/browser.go`, `internal/admin/admin.go`, `internal/proxy/auth.go`
Tasks:
- Build one canonical browser-access flow from SSH command to session establishment
- Add QR output only if it becomes genuinely useful and reliable
- Ensure browser sessions are secure, short-lived where appropriate, and revocable
- Decide whether browser login lands on admin panel, user dashboard, or VM web surface first
- Add telemetry around token issuance, redemption, expiry, and invalid use
Validation:
- `browser` command works predictably for primary user journeys
- no dead-end or mismatched routes remain
Relevant files:
`internal/api/handler.go`, `internal/auth/token.go`, `docs/api.md`, `docs/self-hosting.md`
Tasks:
- Preserve the SSH command model as the canonical API contract
- Ensure HTTPS API request body semantics match actual SSH command parsing
- Make JSON output behavior consistent and default where appropriate for HTTP
- Provide documented response codes and stable error payloads
- Add exhaustive integration tests for `/exec`
Validation:
- shell commands and HTTP API produce meaningfully equivalent results
- docs match runtime behavior exactly
Relevant files:
`internal/auth/token.go`, `internal/api/handler.go` (lines 237-325: both usy0/usy1 fully implemented), `internal/proxy/auth.go`, `docs/api.md`
Current state (verified):
- `usy0.*` stateless tokens: SSH-signed, supports exp, nbf, perms (string array), nonce, ctx (user handle) - implemented in handler.go:246-291
- `usy1.*` DB-backed tokens: opaque ID lookup, last-used tracking, revocation - implemented in handler.go:322-340
- Token creation, verification, and handle generation all have tests in auth/token_test.go
- Permission field (perms) exists in TokenPayload but enforcement of command allowlists is not implemented - perms are stored but not checked against the requested command
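The missing enforcement step is a small allowlist check between token verification and execution. A hedged sketch: only the `Perms` field mirrors the plan; the prefix-matching rule and the "empty perms means unrestricted" default are assumptions that need to be settled against exe.dev's actual semantics.

```go
package main

import (
	"fmt"
	"strings"
)

// TokenPayload mirrors the perms field the plan says is stored but
// not yet checked; everything else here is illustrative.
type TokenPayload struct {
	Perms []string
}

// allowed treats each perm as a command prefix at a word boundary,
// so "share" covers "share show" but "ls" does not cover "lsof".
// An empty perms list means unrestricted (assumed default).
func allowed(tok TokenPayload, command string) bool {
	if len(tok.Perms) == 0 {
		return true
	}
	for _, p := range tok.Perms {
		if command == p || strings.HasPrefix(command, p+" ") {
			return true
		}
	}
	return false
}

func main() {
	tok := TokenPayload{Perms: []string{"ls", "share"}}
	fmt.Println(allowed(tok, "share show")) // true
	fmt.Println(allowed(tok, "rm demo"))    // false
}
```

Calling this from handleExec before `Execute` also gives the denied path a clean place to emit the "auth failures by reason" telemetry listed later in the plan.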
Tasks:
- Audit and compare `usy0`/`usy1` semantics against exe.dev's `exe0`/`exe1` for any missing capabilities
- Implement permission enforcement: check the token's `perms` against the requested command in handleExec
- Add context payload injection for downstream apps (the `ctx` field exists for the user handle but not for arbitrary app context)
- Add a VM-scoped token namespace for proxied HTTPS services
- Support bearer auth for VM proxy APIs
- Consider basic-auth token support for Git-over-HTTPS parity
- Decide whether short opaque handles are necessary now or later
- Add docs and helper scripts for token creation
Validation:
- `/exec` token flow works end-to-end
- VM-proxy bearer auth works end-to-end
- downstream app receives signed context header if configured
Relevant files:
`internal/gateway/email.go`, `internal/ssh/commands.go`, `docs/*`
Tasks:
- Make `share receive-email <vm> on|off` a canonical command path
- Ensure the delivery address format is explicit and documented
- Deliver to `~/Maildir/new/` exactly and reliably
- Add backlog safety disablement logic if not already equivalent
- Fix and expand SMTP tests
- Document limitations clearly
Validation:
- toggling receive-email changes actual delivery behavior
- mail lands in correct Maildir path
- overload behavior is safe and recoverable
Relevant files:
`internal/gateway/email_send.go`, `internal/gateway/metadata.go`
Tasks:
- Ensure metadata-side send-email endpoint is stable
- Restrict `to` to the VM owner's email (or an explicit safe policy)
- Add rate-limiting and owner-enforcement tests
- Add structured audit logs for sends, denials, and limits
Validation:
- VM can send owner email through metadata endpoint
- abuse controls are observable and effective
This is where ussycode should stop merely resembling exe.dev infrastructure and start resembling exe.dev's practical agent cloud.
Relevant files:
`images/ussyuntu/Dockerfile`, `images/ussyuntu/*`, `templates/*`, `docs/getting-started.md`
Tasks:
- Define the default image contract explicitly
- Preinstall and validate core agent tooling
- Make default services predictable and documented
- Add first-run UX for agent workflows
- Ensure app servers on common ports are easy to expose
- Add example templates for agent-centric workloads
Validation:
- fresh VM feels immediately useful for AI-assisted development
- agent examples run without manual platform spelunking
Options:
- Option A: build a first-party browser agent integrated into ussycode
- Option B: deeply support third-party agents (Pi/OpenClaw/Codex/Claude/etc.) and keep browser UI minimal
- Option C: do both, but in phases
Recommended near-term choice:
- Option B first, because the current repo already has agent-friendly substrate pieces and can reach useful parity faster by making external agents first-class
Tasks:
- Define a standard "agent-ready VM" capability checklist
- Add guidance-file support parity (`AGENTS.md` / equivalents) in docs and templates
- Add example workflows for running coding agents on ussycode VMs
- Consider a `prompt-on-create` bootstrapper for agents
- Ensure the LLM gateway works smoothly from the default image
Validation:
- sample "new + prompt + build app" flow works
- sample "run external agent on VM" guide works end-to-end
Relevant files:
`internal/gateway/llm.go`, `internal/ssh/commands.go` (llm-key command at line 1562), `docs/*`
Current state (verified):
- 5 providers supported: Anthropic, OpenAI, Fireworks, Ollama, VLLM
- BYOK with AES-GCM encryption - keys stored encrypted in DB per user/provider
- Per-user token bucket rate limiting (configurable)
- Usage tracking (requests + token estimates) stored in DB
- SSE streaming passthrough with -1 FlushInterval
- `llm-key` command registered (set/list/rm subcommands) - all functional
- All LLM gateway tests pass
- metadata service route `/gateway/llm/{provider}` proxies to the LLM gateway
Tasks:
- Decide whether the default gateway should be subscription-backed, BYOK-only, or hybrid
- Document endpoint usage from inside VMs clearly (curl examples for each provider)
- Add usage tracking telemetry (usage already tracked in DB - needs OTEL export)
- Harden key-management UX (key rotation, provider validation)
- Confirm gateway behavior from default image and from agent tooling
Validation:
- curl examples work from inside VMs
- BYOK and platform-key modes are both clearly defined
ussycode is self-hosted and Ussyverse-driven, but to replicate exe.dev's practical end-state it still needs a clearer product model for hosted usage.
Relevant files:
`internal/db/models.go`, `internal/db/queries.go`, `internal/ssh/commands.go`, `internal/admin/admin.go`
Current state (verified):
- 4-tier trust system already implemented: newbie/citizen/operator/admin
- Per-tier quotas: VM limit, CPU limit, RAM limit, disk limit - enforced in cmdNew
- SetUserTrustLevel correctly updates both trust_level AND all 4 quota columns
- Admin CLI command for trust level changes exists
- Over-quota error messages exist ("VM limit reached (X/Y). Upgrade trust level or remove a VM.")
- 7 quota tests pass
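The verified quota check reduces to a tier-to-limit lookup plus the quoted error shape. A sketch for orientation: the tier names match the verified 4-tier system, but the numeric limits are illustrative placeholders, not the repo's actual values.

```go
package main

import "fmt"

// tierQuotas mirrors the verified 4-tier trust system; the numbers
// are illustrative, not the repo's actual limits.
var tierQuotas = map[string]int{
	"newbie":   2,
	"citizen":  5,
	"operator": 20,
	"admin":    100,
}

// checkVMQuota reproduces the over-quota error shape the plan quotes
// from cmdNew's enforcement path.
func checkVMQuota(trustLevel string, currentVMs int) error {
	limit, ok := tierQuotas[trustLevel]
	if !ok {
		return fmt.Errorf("unknown trust level %q", trustLevel)
	}
	if currentVMs >= limit {
		return fmt.Errorf("VM limit reached (%d/%d). Upgrade trust level or remove a VM.", currentVMs, limit)
	}
	return nil
}

func main() {
	fmt.Println(checkVMQuota("newbie", 2))
	// VM limit reached (2/2). Upgrade trust level or remove a VM.
	fmt.Println(checkVMQuota("citizen", 1)) // <nil>
}
```

Extending this per-team would mean a second lookup keyed by team ID evaluated alongside the per-user check, with the stricter of the two winning.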
Tasks:
- Decide whether quotas should extend to per-team in addition to per-user
- Decide whether VMs share aggregate CPU/RAM/disk pools like exe.dev
- Add a visible quota-introspection command (e.g. `whoami --quotas` or `quota`)
- Add admin controls for per-user quota overrides beyond trust-level defaults
Validation:
- users can understand what resources they have and why creation fails
Current state: No team infrastructure exists. No team model, no migration, no queries, no DB table. This is greenfield work.
Tasks:
- define team entity and membership model in DB (new migration required)
- define share semantics for `team`
- define admin/operator abilities inside teams
- wire team access into proxy/auth decisions
- decide whether team burst capacity exists and how it's enforced
Validation:
- team sharing works consistently across SSH, web proxy, and admin views
Potential parity items to defer or adapt:
- full hosted billing
- SSO
- enterprise VPC integration
- usage-based enterprise plans
Plan guidance:
- self-hosted parity does not require billing parity immediately
- team/admin model should come before billing
- billing can be abstracted as future hosted mode if desired
This phase matters if the real end-goal is not only UX parity with exe.dev, but also scalable hosted or community-contributed compute.
Context:
- exe.dev uses Cloud Hypervisor today
- ussycode uses Firecracker abstractions already
Recommendation:
- keep Firecracker unless it materially blocks parity in:
- startup speed
- disk semantics
- container-image-backed rootfs behavior
- SSH/proxy model
- operator ergonomics
Tasks:
- benchmark create/start/clone flows against parity goals
- identify any Firecracker-specific blockers to exe.dev-like UX
- only consider hypervisor swap if UX parity demands it
Relevant files:
`internal/controlplane/nodemanager.go`, `internal/scheduler/scheduler.go`, `internal/mesh/*`, `internal/agent/*`, `cmd/ussyverse-agent/main.go`
Tasks:
- implement real control-plane client transport for agent join/run
- connect heartbeat command handling to VM management
- replace or finish stub WireGuard path
- define scheduler integration into actual placement flow
- define whether VMs can migrate, or only be rescheduled on recreate
- add end-to-end multi-node test harness
Validation:
- agent can join, heartbeat, receive commands, and host a VM
- scheduler chooses nodes for real VM placements
- node health transitions trigger usable operator behavior
exe.dev docs explicitly say no built-in private VM network by default.
Recommendation:
- do not make a private mesh the default user model just because cluster infrastructure exists
- keep VMs isolated by default
- provide optional overlays (e.g. Tailscale) for users who want private connectivity
Tasks:
- document default isolation model clearly
- decide whether current cluster networking is control-plane-only or user-facing
- avoid exposing unnecessary implicit connectivity between VMs
This is not polish. It is necessary to prevent ongoing cognitive debt.
- Rewrite `README.md` around the actual current product and the exe.dev-parity roadmap
- Add a dedicated `docs/parity-with-exe-dev.md` comparison doc
- Add a "hosted mode vs self-hosted mode" explanation if relevant
- Add docs for:
- browser login
- HTTP proxy and forwarded headers
- Login with ussycode header contract
- signed API token creation
- VM auth tokens for proxied apps
- inbound and outbound email
- agent-ready workflows
- Make docs executable with copy-paste examples
Validation:
- a new contributor can explain the parity roadmap without reading old progress docs
Per repo instructions, parity work must be telemetry-first.
- create requested
- create started
- image resolved
- rootfs/disk ready
- network allocated
- boot success/failure
- proxy registration success/failure
- stop/restart/remove events
- `/exec` request count, latency, success/failure
- command name distribution
- auth failures by reason
- token verification failures
- auth proxy allow/deny decisions
- reason codes (public, owner, shared, token, denied)
- share-link redemption events
- browser-login token creation/redemption/expiry
- provider selection
- request latency
- error class
- user/VM correlation
- quota usage
- inbound delivery attempts + failures
- backlog auto-disable events
- outbound send attempts + rate-limit hits
- agent join attempts
- heartbeat health transitions
- scheduler decisions
- node-drain / node-dead events
- instrument to Maple ingest / OTLP as required by project policy
- validate telemetry in local dev before marking each phase done
- Fix API executor + KeyResolver + Config wiring (all three nil)
- Fix browser login path (URL mismatch)
- Fix gateway test failure (DotStuffing)
- Restructure README around parity roadmap
- Wire audit_logs table into admin operations
- Add proxy auth tests (zero coverage)
- Add telemetry foundation (OTEL/Maple ingest)
- green build/test baseline
- normalize SSH command parity
- finish `new` env/command/prompt semantics
- finish HTTPS proxy + public/private + port flow
- finish browser login and identity headers
- full share subsystem
- signed token parity for `/exec` and VM APIs
- docs + examples + integration tests
- inbound/outbound email parity
- agent-ready default image
- LLM gateway hardening and docs
- team model
- admin controls
- quota/burst semantics
- real agent join/run
- scheduler integration
- mesh transport completion
- end-to-end cluster tests
- `ssh ussyco.de help` exposes polished command tree
- `ssh ussyco.de new --name demo` provisions usable VM
- `ssh ussyco.de ssh demo` connects successfully
- start app on VM, HTTPS URL works
- `share set-public demo` changes accessibility
- `share add demo user@example.com` grants private access
- `share add-link demo` creates login-gated share link
- `browser` command creates usable web login flow
- `curl` POST to `/exec` works with signed token
- VM-level bearer token reaches app with identity/context headers
- `share receive-email demo on` delivers Maildir message
- metadata-side send-email can notify owner
- create default VM and verify agent tooling exists
- prompt-on-create bootstraps simple app
- AGENTS.md guidance works in chosen agent workflow
- metadata-side LLM gateway works via curl and via agent
- admin panel login works from browser path
- trust level change affects quotas
- custom domain mapping works end-to-end
- proxy auth logs show clear allow/deny reason
- agent joins successfully
- node heartbeat visible
- scheduler places at least one VM to agent node
- node unhealthy path is observable and safe
- `cmd/ussycode/main.go`
- `internal/api/handler.go`
- `internal/auth/token.go`
- `internal/ssh/commands.go`
- `internal/ssh/browser.go`
- `internal/proxy/auth.go`
- `internal/proxy/caddy.go`
- `internal/admin/admin.go`
- `internal/db/models.go`
- `internal/db/queries.go`
- `internal/gateway/email.go`
- `internal/gateway/email_send.go`
- `internal/gateway/metadata.go`
- `README.md`
- `docs/api.md`
- `docs/getting-started.md`
- `docs/self-hosting.md`
- `images/ussyuntu/Dockerfile`
- `images/ussyuntu/init-ussycode.sh`
- `images/ussyuntu/*`
- `templates/*`
- additional docs under `docs/`
- `internal/controlplane/nodemanager.go`
- `internal/scheduler/scheduler.go`
- `internal/mesh/wireguard.go`
- `internal/mesh/allocator.go`
- `internal/agent/agent.go`
- `internal/agent/heartbeat.go`
- `cmd/ussyverse-agent/main.go`
- deployment assets under `deploy/`
These should be answered before or during Phase 1-3.
- Should `ussycode` keep its current command names where they differ from exe.dev, or prioritize command-level parity explicitly?
- Should browser auth primarily land in a general user web dashboard, the admin panel, or directly support VM domain auth first?
- Should ussycode implement a first-party Shelley-like browser coding agent, or focus on making third-party agents the canonical experience?
- Should the default image remain Ubuntu-oriented, or become more container/image-driven and opinionated like exe.dev's `exeuntu` story?
- Should VM proxy auth support both bearer and basic auth for git/tooling parity?
- Should team semantics be part of the near-term parity target, or follow after single-user parity is complete?
- Is the long-term product target a hosted service that competes directly with exe.dev, a self-hosted clone, or a Ussyverse-specific fork of the product idea?
The highest-leverage next move is:
- Execute Phase 0 as a single focused stabilization milestone
Because exe.dev parity work will be noisy and misleading until these current contradictions are fixed:
- broken API wiring
- broken browser-login path
- failing gateway test
- top-level product docs not yet structured around the parity roadmap
- incomplete agent join expectations
Once those are fixed, the repo will be in a state where parity work can be evaluated honestly.
- `README.md`
- `spec.md`
- `docs/getting-started.md`
- `docs/self-hosting.md`
- `docs/api.md`
- `docs/architecture.md`
- `cmd/ussycode/main.go`
- `cmd/ussyverse-agent/main.go`
- `internal/api/handler.go`
- `internal/admin/admin.go`
- `internal/ssh/commands.go`
- `internal/ssh/browser.go`
- `internal/proxy/auth.go`
- `internal/proxy/caddy.go`
- `internal/gateway/metadata.go`
- `internal/gateway/llm.go`
- `internal/gateway/email.go`
- `internal/gateway/email_send.go`
- `internal/storage/zfs.go`
- `internal/vm/manager.go`
- `internal/controlplane/nodemanager.go`
- `internal/scheduler/scheduler.go`
- `internal/mesh/wireguard.go`
- `PROGRESS-A.md`
- `PROGRESS-B.md`
- `PROGRESS-C.md`
- `PROGRESS-E.md`
- `PROGRESS-F.md`
- `PROGRESS-G.md`
- `handoff.md`
- `ssh exe.dev help`
- `ssh exe.dev help new`
- `ssh exe.dev help ls`
- `ssh exe.dev help rm`
- `ssh exe.dev help restart`
- `ssh exe.dev help rename`
- `ssh exe.dev help tag`
- `ssh exe.dev help cp`
- `ssh exe.dev help share`
- `ssh exe.dev help share show`
- `ssh exe.dev help share port`
- `ssh exe.dev help share set-public`
- `ssh exe.dev help share set-private`
- `ssh exe.dev help share add`
- `ssh exe.dev help share remove`
- `ssh exe.dev help share add-link`
- `ssh exe.dev help share remove-link`
- `ssh exe.dev help share receive-email`
- `ssh exe.dev help share access`
- `ssh exe.dev help whoami`
- `ssh exe.dev help ssh-key`
- `ssh exe.dev help ssh-key list`
- `ssh exe.dev help ssh-key add`
- `ssh exe.dev help ssh-key remove`
- `ssh exe.dev help ssh-key rename`
- `ssh exe.dev help shelley`
- `ssh exe.dev help shelley install`
- `ssh exe.dev help shelley prompt`
- `ssh exe.dev help browser`
- `ssh exe.dev help ssh`
- https://exe.dev/
- https://exe.dev/docs/what-is-exe
- https://exe.dev/docs/pricing
- https://exe.dev/docs/proxy
- https://exe.dev/docs/sharing
- https://exe.dev/docs/cnames
- https://exe.dev/docs/login-with-exe
- https://exe.dev/docs/api
- https://exe.dev/docs/https-api
- https://exe.dev/docs/receive-email
- https://exe.dev/docs/send-email
- https://exe.dev/docs/shelley/intro
- https://exe.dev/docs/shelley/byok
- https://exe.dev/docs/shelley/llm-gateway
- https://exe.dev/docs/shelley/agents-md
- https://exe.dev/docs/shelley/upgrading
- https://exe.dev/docs/faq/how-exedev-works
- https://exe.dev/docs/faq/cross-vm-networking
- https://exe.dev/docs/use-case-openclaw
- https://exe.dev/docs/use-case-agent
- https://exe.dev/docs/use-case-gh-action-runner
- https://exe.dev/docs/use-case-marimo
- https://exe.dev/docs/guts
- https://exe.dev/docs/why-exe
Do not try to "finish the whole spec" blindly.
Instead, use exe.dev as a product benchmark and execute in this order:
- stabilize truth
- ship the SSH → VM → HTTPS → share → API loop
- ship browser auth + identity headers + signed token parity
- ship email + agent-native image quality
- then decide how much hosted/team/cluster parity really matters
That path gets ussycode closer to feeling like exe.dev much faster than finishing every architectural ambition in parallel.