Commit 1a88982

Add task queue guidance documentation

Explains the relationship between TemporalWorkerDeployment and Task Queues. Multiple task queues can be grouped in a single TWD by running multiple containers (each polling one queue) in the same pod. Splitting into separate TWDs is an optimization when queues have different scaling requirements or deployment cadences.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

1 parent 9e5a646 commit 1a88982

1 file changed: docs/task-queues.md (158 additions, 0 deletions)
# Task Queues and TemporalWorkerDeployment

This document explains how Task Queues relate to TemporalWorkerDeployment resources and provides guidance on structuring your deployments.

## Key Concept: Task Queue is Defined in Your Code

The Task Queue is **not** configured in the TemporalWorkerDeployment spec. Instead:

1. The controller injects environment variables into your pods:
   - `TEMPORAL_ADDRESS`
   - `TEMPORAL_NAMESPACE`
   - `TEMPORAL_DEPLOYMENT_NAME`
   - `TEMPORAL_WORKER_BUILD_ID`

2. Your worker code reads these variables and specifies which Task Queue to poll.

```
┌──────────────────────────────────────────────────────────────────┐
│                     TemporalWorkerDeployment                     │
│                                                                  │
│  Manages:                        Does NOT manage:                │
│  - Replicas                      - Task Queue name(s)            │
│  - Rollout strategy              - Workflows/Activities          │
│  - Version lifecycle             - Worker business logic         │
│  - K8s Deployments                                               │
│  - Env var injection                                             │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘
                      │ Creates pods with env vars
                      ▼
┌──────────────────────────────────────────────────────────────────┐
│                               Pod                                │
│                                                                  │
│   ┌────────────────────┐    ┌────────────────────┐               │
│   │    Container 1     │    │    Container 2     │               │
│   │  (orders worker)   │    │ (payments worker)  │  ...          │
│   │                    │    │                    │               │
│   │  Polls: "orders"   │    │ Polls: "payments"  │               │
│   └────────────────────┘    └────────────────────┘               │
│                                                                  │
│   Each container runs a worker process polling ONE Task Queue    │
│   All containers share the same TEMPORAL_WORKER_BUILD_ID         │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘
```

> **Note:** Each worker process polls exactly one Task Queue. To handle multiple queues, run multiple containers.

## Grouping Multiple Task Queues in a Single TWD

A single worker codebase can process tasks for multiple Task Queues. This can be done by bundling multiple containers (each running a worker process polling its own Task Queue) into the same pod, managed by a single TemporalWorkerDeployment:

```yaml
apiVersion: temporal.io/v1alpha1
kind: TemporalWorkerDeployment
metadata:
  name: order-service-workers
spec:
  replicas: 5
  template:
    spec:
      containers:
        - name: orders-worker
          image: order-service:v1.0
          env:
            - name: TASK_QUEUE
              value: "orders"
        - name: payments-worker
          image: order-service:v1.0
          env:
            - name: TASK_QUEUE
              value: "payments"
        - name: notifications-worker
          image: order-service:v1.0
          env:
            - name: TASK_QUEUE
              value: "notifications"
```

Each container reads `TASK_QUEUE` to determine which queue to poll:

```go
package main

import (
	"log"
	"os"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	taskQueue := os.Getenv("TASK_QUEUE")

	// TEMPORAL_ADDRESS and TEMPORAL_NAMESPACE are injected by the controller.
	c, err := client.Dial(client.Options{
		HostPort:  os.Getenv("TEMPORAL_ADDRESS"),
		Namespace: os.Getenv("TEMPORAL_NAMESPACE"),
	})
	if err != nil {
		log.Fatalln("unable to create client:", err)
	}
	defer c.Close()

	w := worker.New(c, taskQueue, worker.Options{
		DeploymentOptions: worker.DeploymentOptions{
			UseVersioning: true,
			Version: worker.Version{
				BuildId: os.Getenv("TEMPORAL_WORKER_BUILD_ID"),
			},
		},
	})
	// Register workflows and activities here, then run until interrupted.
	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalln("worker exited:", err)
	}
}
```

This approach can work well when:

- Task Queues are part of the same logical service
- You want to deploy and version workers for all queues together
- The queues have similar resource and scaling requirements

## When to Split Task Queues into Separate TWDs

Consider creating separate TemporalWorkerDeployment resources when Task Queues have:

**Different scaling requirements** - One queue may need 10 replicas while another needs 2:

```yaml
# High-volume order processing
apiVersion: temporal.io/v1alpha1
kind: TemporalWorkerDeployment
metadata:
  name: orders-worker
spec:
  replicas: 10
  template:
    spec:
      containers:
        - name: worker
          image: order-service:v1.0
          env:
            - name: TASK_QUEUE
              value: "orders"
---
# Low-volume notifications
apiVersion: temporal.io/v1alpha1
kind: TemporalWorkerDeployment
metadata:
  name: notifications-worker
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: worker
          image: order-service:v1.0
          env:
            - name: TASK_QUEUE
              value: "notifications"
```

**Different deployment cadences** - You want to roll out changes to one queue without affecting others, or to validate changes on a low-risk queue before rolling them out to critical queues.

**Different resource profiles** - One queue runs CPU-intensive activities while another is I/O-bound.
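
A split along resource profiles might be tuned like the fragment below. This assumes the TWD pod template accepts standard Kubernetes container fields; the CPU and memory values are illustrative only:

```yaml
# Excerpt of a pod template for a CPU-heavy queue (illustrative values)
containers:
  - name: worker
    image: order-service:v1.0
    env:
      - name: TASK_QUEUE
        value: "orders"
    resources:
      requests:
        cpu: "2"
        memory: 512Mi
      limits:
        cpu: "4"
        memory: 1Gi
```

The I/O-bound queue's TWD could then request far less CPU, which is impossible to express per queue when all containers live in one shared pod template sized for the worst case.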

### Trade-offs of Splitting

| Benefit | Cost |
|---------|------|
| Independent scaling per queue | More Kubernetes resources to manage |
| Independent rollouts | More TWD manifests to maintain |
| Isolated failures | Coordination overhead for shared changes |
| Queue-specific resource tuning | Potential duplication if queues are similar |
