Seamlessly transfer Latents, Images, and Tensors between separate ComfyUI instances over the internet.
ComfyUI-Tensor-Bridge allows you to break the VRAM barrier by distributing your workflow across multiple machines. Connect your local PC to a Cloud GPU (RunPod, Colab, Kaggle), or link two local GPUs together.
You have a PC with low VRAM, but you want to run massive models like Flux or Qwen Image Edit.
- Local: Run the lightweight text encoding, prompt logic, and simple previewing.
- Cloud: Send the lightweight Latent tensor to a Kaggle/RunPod instance to handle the heavy KSampler/Diffusion.
- Result: Get the image back on your local machine as soon as the remote step finishes.
Keep your prompts, LoRAs, and control logic on your private local machine. Only send the necessary tensors to the cloud for processing.
Run generation on Machine A and heavy Upscaling/Face-Restoration on Machine B simultaneously.
- Zero-Lag Polling: Receiver nodes wait intelligently for new data, consuming negligible resources while idle.
- Synchronization: A dedicated `trigger` pin ensures your local workflow waits exactly until the remote worker is finished.
- Auto-Cleaning: Data is consumed (deleted) upon reading, so you never accidentally process stale files.
- Ngrok Compatible: Works perfectly through Ngrok tunnels for easy Local-to-Cloud networking.
- Type Safety: Explicit nodes for Latents and Images to prevent workflow errors.
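The consume-on-read behavior can be pictured with a short sketch (the file-per-channel layout below is an illustrative assumption, not the node's actual storage format):

```python
import os
import tempfile
from typing import Optional

def consume_channel(channel_dir: str, channel: str) -> Optional[bytes]:
    """Read a channel's payload and delete it, so the same data is
    never processed twice (the Auto-Cleaning behavior)."""
    path = os.path.join(channel_dir, f"{channel}.bin")
    if not os.path.exists(path):
        return None  # nothing queued yet; a receiver would keep polling
    with open(path, "rb") as f:
        payload = f.read()
    os.remove(path)  # consumed: the file is gone after one read
    return payload

# Usage: write a payload, consume it once, then find the channel empty.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "latent_pipe.bin"), "wb") as f:
    f.write(b"fake tensor bytes")

first = consume_channel(tmp, "latent_pipe")
second = consume_channel(tmp, "latent_pipe")
```

The second read returning `None` is the point: downstream nodes can never pick up a result left over from a previous run.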
Navigate to your ComfyUI custom_nodes directory and run:
git clone https://github.com/YOUR_USERNAME/ComfyUI-Tensor-Bridge.git
cd ComfyUI-Tensor-Bridge
pip install -r requirements.txt

Coming soon to the ComfyUI Manager Registry. (For now, you can use "Install via Git URL" in the Manager.)
To create a continuous loop where Instance A generates and Instance B processes, follow this pattern.
This instance starts the generation, sends data away, and waits for the result.
The Workflow:
- Generate: Create a Latent (Empty Latent / KSampler).
- Send: Connect to `Bridge Sender`. Point the URL at Instance B.
- Wait: Connect the Sender to a `Bridge Receiver`.
  - Note: Connect the Sender output to the Receiver's `trigger` input. This forces ComfyUI to wait for the upload to finish before trying to download.
- Save: Save the final image.
Click to view Instance A (Controller) JSON
{
"id": "ddd225da-eb4c-4aca-9eab-60aef0a3cc0a",
"revision": 0,
"last_node_id": 6,
"last_link_id": 3,
"nodes": [
{
"id": 4,
"type": "EmptyLatentImage",
"pos": [
865.8529335937498,
42.672167968749264
],
"size": [
270,
106
],
"flags": {},
"order": 0,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
1
]
}
],
"properties": {
"Node name for S&R": "EmptyLatentImage"
},
"widgets_values": [
256,
256,
1
]
},
{
"id": 2,
"type": "BridgeSenderLatent",
"pos": [
1203.2094414062492,
40.30652343749932
],
"size": [
270,
82
],
"flags": {},
"order": 1,
"mode": 0,
"inputs": [
{
"name": "latent",
"type": "LATENT",
"link": 1
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
2
]
}
],
"properties": {
"Node name for S&R": "BridgeSenderLatent"
},
"widgets_values": [
"http://127.0.0.1:8189/",
"latent_pipe"
]
},
{
"id": 5,
"type": "BridgeReceiverImage",
"pos": [
1561.1340585937496,
37.51454296874929
],
"size": [
270,
82
],
"flags": {},
"order": 2,
"mode": 0,
"inputs": [
{
"name": "trigger",
"shape": 7,
"type": "*",
"link": 2
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"links": [
3
]
}
],
"properties": {
"Node name for S&R": "BridgeReceiverImage"
},
"widgets_values": [
"image_pipe",
60
]
},
{
"id": 6,
"type": "PreviewImage",
"pos": [
1905.9306484374993,
36.87503906249927
],
"size": [
140,
246
],
"flags": {},
"order": 3,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 3
}
],
"outputs": [],
"properties": {
"Node name for S&R": "PreviewImage"
},
"widgets_values": []
}
],
"links": [
[
1,
4,
0,
2,
0,
"LATENT"
],
[
2,
2,
0,
5,
0,
"LATENT"
],
[
3,
5,
0,
6,
0,
"IMAGE"
]
],
"groups": [],
"config": {},
"extra": {
"ds": {
"scale": 0.7513148009015778,
"offset": [
-447.25343359374904,
317.70128125000076
]
},
"workflowRendererVersion": "LG",
"frontendVersion": "1.35.9"
},
"version": 0.4
}

This instance sits in a loop, waiting for work to arrive.
The Workflow:
- Wait: Start with `Bridge Receiver`.
- Process: Do your heavy lifting (Upscaling, Sampling, etc.).
- Return: End with `Bridge Sender`. Point the URL back to Instance A.
Crucial Setup for Instance B: Set the run count (the number to the right of the Run button) to a very high value (e.g., 10,000). This ensures the workflow automatically re-runs after each execution, effectively acting as an auto-queue.
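If you prefer scripting over the UI's run counter, the same loop can be driven from outside. The sketch below is hypothetical glue code: `submit` stands in for a real call such as a POST to the worker instance's `/prompt` endpoint with your API-format workflow JSON.

```python
from typing import Callable

def keep_worker_queued(submit: Callable[[], None], runs: int) -> int:
    """Queue the worker workflow `runs` times, mimicking a high run
    count: after each execution it loops back to its Bridge Receiver."""
    queued = 0
    for _ in range(runs):
        submit()  # e.g. requests.post(f"{WORKER_URL}/prompt", json=...)
        queued += 1
    return queued

# Usage with a stub submitter that just records each call.
calls = []
total = keep_worker_queued(lambda: calls.append(1), runs=5)
```

Either way, the goal is the same: the worker must always have a queued run waiting so it immediately returns to polling its channel.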
Click to view Instance B (Worker) JSON
{
"id": "fa859a67-69e7-47d5-9993-33692ec4225e",
"revision": 0,
"last_node_id": 10,
"last_link_id": 10,
"nodes": [
{
"id": 1,
"type": "BridgeReceiverLatent",
"pos": [
689.5119570312503,
-184.52194492187488
],
"size": [
270.5328125,
82
],
"flags": {},
"order": 0,
"mode": 0,
"inputs": [
{
"name": "trigger",
"shape": 7,
"type": "*",
"link": null
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
1
]
}
],
"properties": {
"Node name for S&R": "BridgeReceiverLatent"
},
"widgets_values": [
"latent_pipe",
3600
]
},
{
"id": 4,
"type": "VAELoader",
"pos": [
693.0006328125003,
-30.694219531249892
],
"size": [
270,
58
],
"flags": {},
"order": 1,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "VAE",
"type": "VAE",
"links": [
2
]
}
],
"properties": {
"Node name for S&R": "VAELoader"
},
"widgets_values": [
"pixel_space"
]
},
{
"id": 6,
"type": "PreviewImage",
"pos": [
1876.7953479758548,
-145.3724285866478
],
"size": [
210,
258
],
"flags": {},
"order": 6,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 5
}
],
"outputs": [],
"properties": {
"Node name for S&R": "PreviewImage"
},
"widgets_values": []
},
{
"id": 3,
"type": "VAEDecode",
"pos": [
1038.4134822798299,
-117.34515337357945
],
"size": [
140,
46
],
"flags": {},
"order": 2,
"mode": 0,
"inputs": [
{
"name": "samples",
"type": "LATENT",
"link": 1
},
{
"name": "vae",
"type": "VAE",
"link": 2
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"links": [
3,
9
]
}
],
"properties": {
"Node name for S&R": "VAEDecode"
}
},
{
"id": 2,
"type": "ImageScale",
"pos": [
1231.9175151633515,
-40.11736693892045
],
"size": [
270,
130
],
"flags": {},
"order": 3,
"mode": 0,
"inputs": [
{
"name": "image",
"type": "IMAGE",
"link": 3
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"links": [
10
]
}
],
"properties": {
"Node name for S&R": "ImageScale"
},
"widgets_values": [
"nearest-exact",
512,
512,
"disabled"
]
},
{
"id": 5,
"type": "BridgeSenderImage",
"pos": [
1557.4281343039772,
-145.16733529829528
],
"size": [
270,
82
],
"flags": {},
"order": 5,
"mode": 0,
"inputs": [
{
"name": "image",
"type": "IMAGE",
"link": 10
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"links": [
5
]
}
],
"properties": {
"Node name for S&R": "BridgeSenderImage"
},
"widgets_values": [
"http://127.0.0.1:8188",
"image_pipe"
]
},
{
"id": 10,
"type": "PreviewImage",
"pos": [
1271.16041295519,
-383.7716443956613
],
"size": [
210,
258
],
"flags": {},
"order": 4,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 9
}
],
"outputs": [],
"properties": {
"Node name for S&R": "PreviewImage"
},
"widgets_values": []
}
],
"links": [
[
1,
1,
0,
3,
0,
"LATENT"
],
[
2,
4,
0,
3,
1,
"VAE"
],
[
3,
3,
0,
2,
0,
"IMAGE"
],
[
5,
5,
0,
6,
0,
"IMAGE"
],
[
9,
3,
0,
10,
0,
"IMAGE"
],
[
10,
2,
0,
5,
0,
"IMAGE"
]
],
"groups": [],
"config": {},
"extra": {
"ds": {
"scale": 0.9090909090909091,
"offset": [
-555.9283817051913,
601.8724256456611
]
},
"workflowRendererVersion": "LG",
"frontendVersion": "1.35.9"
},
"version": 0.4
}

Uploads data to a remote ComfyUI instance.
- Input: The Latent or Image to send.
- URL: The full address of the receiving instance (e.g., `https://my-node.ngrok-free.app`).
- Channel: A unique name (ID) for this data stream (e.g., `pipe_1`). Must match the Receiver.
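Conceptually, the Sender serializes its input and uploads it under the channel name, and the Receiver reverses the process. A minimal sketch of that framing using the standard library's pickle (the `{"channel": ..., "data": ...}` envelope is an illustrative assumption, not the node's actual wire format):

```python
import io
import pickle

def pack_payload(obj, channel: str) -> bytes:
    """Serialize an object (as torch.save does, via pickle) together
    with the channel name it should be delivered on."""
    buf = io.BytesIO()
    pickle.dump({"channel": channel, "data": obj}, buf)
    return buf.getvalue()

def unpack_payload(raw: bytes):
    """Inverse of pack_payload; a receiver would run this after download."""
    msg = pickle.load(io.BytesIO(raw))
    return msg["channel"], msg["data"]

# Usage: a stand-in latent survives the round trip intact.
fake_latent = {"samples_shape": [1, 4, 32, 32]}
raw = pack_payload(fake_latent, "pipe_1")
channel, data = unpack_payload(raw)
```

The channel name travelling with the payload is what lets one instance serve several independent streams without mixing them up.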
Waits for data to appear, loads it, and deletes the file.
- Channel: Must match the Sender.
- Timeout: How long to wait (in seconds) before failing.
- Trigger (Optional): Connect any output here. The Receiver will strictly wait for the connected node to finish before starting its polling loop. Essential for single-workflow loops.
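The timeout behavior boils down to a bounded polling loop. A sketch of that loop (the file path stands in for wherever the bridge actually stores incoming data):

```python
import os
import tempfile
import time

def wait_for_channel(path: str, timeout_sec: float, interval: float = 0.05) -> bytes:
    """Poll until data appears on the channel and consume it, or raise
    TimeoutError once timeout_sec has elapsed."""
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        if os.path.exists(path):
            with open(path, "rb") as f:
                data = f.read()
            os.remove(path)  # delete on read so stale data is never reused
            return data
        time.sleep(interval)  # idle cheaply between checks
    raise TimeoutError(f"no data arrived within {timeout_sec}s")

# Usage: pre-seed the channel so the first poll succeeds immediately.
path = os.path.join(tempfile.mkdtemp(), "image_pipe.bin")
with open(path, "wb") as f:
    f.write(b"decoded image bytes")
result = wait_for_channel(path, timeout_sec=2)
```

Connecting a trigger simply delays the start of this loop until the upstream node has finished, which is why it matters for single-workflow round trips.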
- Unique Channels: If you run multiple distinct workflows, use different channel names (e.g., `face_fix`, `background_gen`) to avoid cross-talk.
- Timeouts: If your Cloud GPU takes 5 minutes to generate, set the `timeout_sec` on your Local Receiver to `600` (10 minutes) to avoid premature timeout errors.
You are not limited to a simple Send/Receive loop, or to just two instances. Because the architecture uses plain HTTP channels, you can build distributed pipelines spanning three or more machines.
Imagine a setup where you have specialized hardware for different tasks:
- Local PC: Handles Prompting & ControlNet Preprocessing.
- Cloud Instance 1 (T4 GPU): Handles standard SDXL Generation.
- Cloud Instance 2 (A100 GPU): Handles massive Video Generation or Ultimate Upscaling.
How to set it up:
- Node A (Mac): Sends a Latent to `Instance 1` (Channel: `step_1`). Waits for `final_result`.
- Node B (Instance 1): Receives `step_1` -> Generates -> Sends to `Instance 2` (Channel: `step_2`).
- Node C (Instance 2): Receives `step_2` -> Upscales -> Sends to `Mac` (Channel: `final_result`).
All intermediate nodes (B and C) should be set to Auto Queue.
Your Local controller can dispatch tasks to multiple workers simultaneously.
- Sender 1: Sends to Worker B (Channel: `face_job`).
- Sender 2: Sends to Worker C (Channel: `background_job`).
- Receiver: Waits for both to return before compositing the final image.
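The fan-out/fan-in pattern can be sketched with plain threads; the workers below are ordinary callables standing in for real Send/Receive pairs on the `face_job` and `background_job` channels:

```python
from concurrent.futures import ThreadPoolExecutor

def run_job(worker, payload):
    """Stand-in for Send -> remote processing -> Receive on one channel."""
    return worker(payload)

def dispatch_parallel(jobs):
    """Fan out jobs to workers concurrently, then wait for every
    result before compositing (fan-in)."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(run_job, worker, payload)
                   for name, (worker, payload) in jobs.items()}
        # .result() blocks, so nothing proceeds until all workers return
        return {name: fut.result() for name, fut in futures.items()}

# Usage: two mock workers in place of real remote instances.
results = dispatch_parallel({
    "face_job": (lambda p: p + ":fixed", "face"),
    "background_job": (lambda p: p + ":generated", "bg"),
})
```

In an actual workflow the blocking step is simply the local Receiver nodes: the compositing node cannot run until every Receiver input has arrived.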
These nodes use `torch.save` and `torch.load` (pickle) to transfer data.
- Risk: Pickle files can execute arbitrary code.
- Rule: Only receive data from sources you trust. Do not expose a Receiver node on a public URL without authentication if you don't trust the sender.
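The risk is easy to demonstrate with the standard library alone: a crafted payload can make `pickle.loads` call an arbitrary function during deserialization (a harmless `eval` here, but it could just as easily be `os.system`):

```python
import pickle

class Malicious:
    # __reduce__ tells pickle how to rebuild an object; an attacker
    # can return any callable, which then runs inside loads().
    def __reduce__(self):
        return (eval, ("__import__('sys')",))

payload = pickle.dumps(Malicious())
obj = pickle.loads(payload)  # runs eval(...); no Malicious instance comes back
```

If your PyTorch version supports it, loading with `torch.load(..., weights_only=True)` refuses arbitrary pickled objects and is a sensible hardening step when only tensors need to cross the bridge.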


