SLOP (Server-Local Operations for Physics) is a tool for analyzing latent biases in diffusion models by querying the model directly in latent space. It is designed to run in restricted HPC environments (Singularity containers, no root, no open ports) by tunneling a custom binary protocol over SSH.
- Server (`server/`): Runs inside a container on a GPU node. Listens on `stdin`, writes to `stdout`.
  - Zero-dependency protocol (JSON over pipe).
  - Stateful inference runner with support for Stable Diffusion and Flux.
  - Hooks into model execution to capture latents, noise predictions, and embeddings.
- Client (`client/`): Runs on your local machine.
  - Manages SSH tunnels transparently.
  - Provides a CLI for deployment and health checks.
  - Probes diffusion models at arbitrary latent positions, runs custom delta-driven rollouts, and saves latent-space artefacts for later analysis.
- Shared (`shared/`):
  - Protocol definitions and serialization.
  - Core physics math (numpy-only).
Requirements:

- Local: Python 3.9+
- Remote: Python 3.9+, SSH access, GPU
```bash
# Sync running vast.ai instances to registry
python -m client.vastai --sync

# Check connection
python -m client.manage check vast-32582479

# Run a test generation
python experiments/sample.py
```

Push the code to your HPC node. This copies the necessary server and shared code.
```bash
# Syntax: python -m client.deploy user@host --path /remote/path --name alias
python -m client.deploy user@login.cluster.edu --path /scratch/user/slop --name cluster-a
```

Verify the server is reachable and the GPU is detected.
```bash
python -m client.manage check cluster-a
```

To run a quick generation test (verifies CUDA and model loading):

```bash
python -m client.manage check cluster-a --verify
```

The client provides two methods:
- `client.sample()` - Generates latents (no image rendered, memory efficient)
- `client.render(latents)` - Decodes latents to PIL Images
```python
from client.config import registry
from client.interface import SlopClient

# Get provider
providers = registry.list()
client = SlopClient(providers[-1])
client.connect()

# Generate latents (no image)
result = client.sample(prompt="a cat", num_steps=20, seed=42)

# Render latents to images
images = client.render(result.points[-1])
images[0].save("cat.png")

client.close()
```

The current workflow does not reconstruct a global field from samples. Instead, it uses the model itself as the field oracle:
- pick a base prompt embedding `x`
- build a biased embedding `b = x + v_identity`
- evaluate `delta(z, t) = eps_b(z, t) - eps_x(z, t)` directly from the model
- save all sampled latent positions and all returned tensors for later analysis
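In NumPy terms, the delta evaluation is just two forward passes and a subtraction. A minimal sketch (here `eps` is a toy stand-in for the model's noise predictor, and all shapes are illustrative assumptions, not the project's real tensor layout):

```python
import numpy as np

def eps(z, t, embedding):
    """Toy stand-in for the model's noise predictor eps(z, t | embedding)."""
    # A real call would run the diffusion backbone; here we just mix inputs.
    return z * np.tanh(t) + embedding.mean() * np.ones_like(z)

rng = np.random.default_rng(0)
x = rng.normal(size=(77, 768))         # base prompt embedding (illustrative shape)
v_identity = rng.normal(size=x.shape)  # bias direction
b = x + v_identity                     # biased embedding

z = rng.normal(size=(4, 64, 64))       # one latent position
t = 0.5                                # one timestep

# delta(z, t) = eps_b(z, t) - eps_x(z, t)
delta = eps(z, t, b) - eps(z, t, x)
print(delta.shape)  # (4, 64, 64)
```

Because both evaluations share `z` and `t`, the subtraction isolates the effect of the embedding bias alone.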
Useful entry points:
```bash
python experiments/delta_map.py --base-prompt "in a city"
python experiments/delta_rollout.py --base-prompt "in a city"
```

`delta_map.py` samples many latent positions and saves:
- `latents.npy`
- `base_noise_preds.npy`
- `biased_noise_preds.npy`
- `delta_noise_preds.npy`
- `force.npy`
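The saved artifacts are plain `.npy` files, so downstream analysis needs nothing beyond NumPy. A sketch (the array shapes are assumptions, and the save step below only simulates a `delta_map.py` output directory):

```python
import os
import tempfile

import numpy as np

# Simulate a delta_map.py output directory with dummy arrays
# (shapes are illustrative: N samples of 4x64x64 tensors).
out = tempfile.mkdtemp()
N = 8
rng = np.random.default_rng(0)
np.save(os.path.join(out, "latents.npy"), rng.normal(size=(N, 4, 64, 64)))
np.save(os.path.join(out, "delta_noise_preds.npy"), rng.normal(size=(N, 4, 64, 64)))

# Load the deltas and compute a per-sample magnitude
delta = np.load(os.path.join(out, "delta_noise_preds.npy"))
magnitudes = np.linalg.norm(delta.reshape(N, -1), axis=1)
print(magnitudes.shape)  # (8,)
```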
`delta_rollout.py` runs an experimental denoising loop using the delta itself and saves:
- every visited latent position
- per-step base / biased / delta noise predictions
- the final decoded image
The rollout is intentionally experimental, especially in `delta_only` mode.
Treat it as a probe into what the delta field does, not as a standard sampling method.
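As a mental model, a `delta_only` rollout follows the delta field directly rather than a standard sampler update. A purely illustrative sketch (`delta_field` is a toy stand-in for the model-evaluated delta, and the step rule is not the project's actual scheduler):

```python
import numpy as np

def delta_field(z, t):
    """Toy stand-in for delta(z, t) = eps_b(z, t) - eps_x(z, t)."""
    return -0.1 * z * t  # illustrative: pulls latents toward the origin

rng = np.random.default_rng(42)
z = rng.normal(size=(4, 64, 64))
trajectory = [z.copy()]  # every visited latent position is kept

for t in np.linspace(1.0, 0.0, num=20, endpoint=False):
    d = delta_field(z, t)
    z = z + d            # delta_only: the delta itself drives the update
    trajectory.append(z.copy())

print(len(trajectory))  # 21 positions: the start plus one per step
```

Keeping the full trajectory mirrors what `delta_rollout.py` saves: the per-step positions are the data of interest, not just the endpoint.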
```
├── client/       # Local tools (CLI, visualization, transport)
├── server/       # Remote code (inference engine, daemon)
├── shared/       # Common protocol and physics logic
├── containers/   # Singularity/Apptainer definition files
├── notebooks/    # Jupyter notebooks for exploration
└── experiments/  # Experiment scripts
```
The communication uses a length-prefixed JSON protocol over standard I/O.
- Request: `[4-byte Len] { "kind": "inference", ... }`
- Response: `[4-byte Len] { "kind": "result", "payload": { ... } }`
- Binary Data: Numpy arrays and images are zlib-compressed and Base64-encoded within the JSON payload.
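A minimal sketch of this framing in the standard library (the helper names and the big-endian byte order are assumptions; only the wire format itself — a 4-byte length prefix, a JSON body, and zlib + Base64 for arrays — comes from the protocol description above):

```python
import base64
import json
import struct
import zlib

import numpy as np

def encode_array(arr):
    """Pack a NumPy array as zlib-compressed, Base64-encoded bytes plus metadata."""
    raw = zlib.compress(arr.tobytes())
    return {"dtype": str(arr.dtype), "shape": list(arr.shape),
            "data": base64.b64encode(raw).decode("ascii")}

def decode_array(obj):
    """Inverse of encode_array."""
    raw = zlib.decompress(base64.b64decode(obj["data"]))
    return np.frombuffer(raw, dtype=obj["dtype"]).reshape(obj["shape"])

def frame(message):
    """Length-prefixed JSON frame: [4-byte Len][JSON body]."""
    body = json.dumps(message).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def unframe(buf):
    """Read one frame back out of a byte buffer."""
    (length,) = struct.unpack(">I", buf[:4])
    return json.loads(buf[4:4 + length].decode("utf-8"))

# Round-trip a request carrying a latent tensor
latent = np.arange(12, dtype=np.float32).reshape(3, 4)
request = {"kind": "inference", "payload": {"latent": encode_array(latent)}}
received = unframe(frame(request))
restored = decode_array(received["payload"]["latent"])
print(np.array_equal(restored, latent))  # True
```

Framing over stdin/stdout like this is what lets the server run with zero dependencies beyond NumPy inside the container.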