The following implementation requires MongoDB v4.2 or higher.
ProxyPassReverse /fdsnws/availability/1 <HOST>:9001 timeout=600
```

## Performance Tuning

### Gunicorn Workers Configuration

The number of Gunicorn workers directly affects how many concurrent requests your service can handle. The default configuration uses **1 worker** for maximum stability on resource-constrained servers.

#### Current Configuration (docker-compose.yml)
```yaml
command: gunicorn --bind 0.0.0.0:9001 --workers 1 start:app
```

#### Adjusting Worker Count

**For servers with limited resources or thread creation issues:**
```yaml
# Minimum configuration (most stable)
command: gunicorn --bind 0.0.0.0:9001 --workers 1 --timeout 600 start:app
```

**For servers with moderate resources:**
```yaml
# 2-3 workers (recommended for most deployments)
command: gunicorn --bind 0.0.0.0:9001 --workers 2 --timeout 600 start:app
```

**For high-performance servers:**
```yaml
# Formula: (2 × CPU cores) + 1
# Example for a 4-core server: --workers 9
command: gunicorn --bind 0.0.0.0:9001 --workers 9 --timeout 600 start:app
```
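The rule of thumb above can be computed directly from the host's core count; a minimal sketch (the helper name is illustrative, not part of the codebase):

```python
import os

def suggested_workers(cpu_cores=None):
    """Gunicorn's common rule of thumb: (2 x CPU cores) + 1."""
    if cpu_cores is None:
        cpu_cores = os.cpu_count() or 1
    return 2 * cpu_cores + 1

print(suggested_workers(4))  # -> 9 for a 4-core server
```

Treat the result as a starting point, not a hard rule: on memory-constrained hosts fewer workers may be safer.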

#### Important Notes

1. **Each worker is a separate process** with its own memory footprint.
2. **More workers ≠ always better**: too many workers can exhaust system resources.
3. **Monitor for errors** after increasing workers:
   ```bash
   docker logs -f fdsnws-availability-api
   # Watch for "pthread_create failed" or similar errors
   ```
4. **Resource usage check:**
   ```bash
   docker stats fdsnws-availability-api
   # If CPU < 80% and memory is available, you can add more workers
   ```

### MongoDB Connection Pool

The MongoDB connection pool is configured in `apps/wfcatalog_client.py`:

```python
maxPoolSize=1  # Connections per worker
```

#### How It Works

- **Each Gunicorn worker** has its own MongoDB client
- **Total connections** = `workers × maxPoolSize`
- **Example:** 2 workers × 1 pool = 2 total MongoDB connections

#### When to Adjust

**Keep `maxPoolSize=1` if:**
- ✅ Using sync workers (default Gunicorn configuration)
- ✅ Each worker handles one request at a time
- ✅ Server has resource constraints

**Increase `maxPoolSize` only if:**
- Using async workers (gevent/eventlet)
- Using threading within workers
- MongoDB is a bottleneck (check with profiling)

#### Example Configurations

| Workers | maxPoolSize | Total Connections | Use Case |
|---------|-------------|-------------------|----------|
| 1 | 1 | 1 | Minimal (default) |
| 2 | 1 | 2 | Recommended |
| 4 | 1 | 4 | High performance |
| 2 | 5 | 10 | Async workers |
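The totals in the table are simply the product of the two settings; a minimal sketch of the arithmetic (helper name is illustrative):

```python
def total_connections(workers, max_pool_size):
    """Upper bound on MongoDB connections the service will open."""
    return workers * max_pool_size

# The table rows above:
print(total_connections(1, 1))  # -> 1
print(total_connections(2, 1))  # -> 2
print(total_connections(4, 1))  # -> 4
print(total_connections(2, 5))  # -> 10
```

Compare this total against your MongoDB server's connection limit before scaling workers up.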
### Thread Limiting (Important!)

The configuration includes thread limits to prevent `pthread_create failed` errors on restricted servers:

```yaml
environment:
  OPENBLAS_NUM_THREADS: 1
  MKL_NUM_THREADS: 1
  NUMEXPR_NUM_THREADS: 1
  OMP_NUM_THREADS: 1
```

**Do not remove these** unless you're certain your server can handle multiple threads per process. They prevent NumPy/ObsPy from spawning excessive threads.
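The same limits can also be pinned defensively at process start; a standalone sketch (not part of the repository) that sets the variables before any NumPy/ObsPy import, since BLAS thread pools are sized at import time:

```python
import os

# setdefault respects values already provided by the Docker
# "environment:" block and only fills in the missing ones.
for var in ("OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS",
            "NUMEXPR_NUM_THREADS", "OMP_NUM_THREADS"):
    os.environ.setdefault(var, "1")

print(sorted(v for v in os.environ if v.endswith("_NUM_THREADS")))
```

This must run before the scientific libraries are imported; setting the variables afterwards has no effect on already-created thread pools.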

## Troubleshooting

**Problem:** Service crashes with "pthread_create failed"
- **Solution:** Reduce workers to 1, keep thread limits in place

**Problem:** Slow response times under load
- **Solution:** Increase workers (if resources allow), monitor with `docker stats`

**Problem:** High memory usage
- **Solution:** Reduce workers, check for memory leaks with profiling

**Problem:** MongoDB connection errors
- **Solution:** Check total connections (workers × maxPoolSize) against MongoDB limits

## Performance Monitoring

See `tests/performance/` for profiling and benchmarking tools:

```bash
# Quick performance test
bash tests/performance/quick_test.sh

# Detailed profiling
python tests/performance/profiler.py

# Load testing
locust -f tests/performance/locustfile.py --host=http://localhost:9001
```

For more details, see [Performance Analysis Plan](tests/performance/README.md).
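For a quick latency number without the full Locust setup, a small timing helper can wrap any request callable; this is a hypothetical sketch, not part of `tests/performance/`:

```python
import time
from statistics import mean

def mean_seconds(fn, n=5):
    """Return the mean wall-clock time (seconds) over n calls to fn.

    fn stands in for an HTTP request against the service, e.g. a
    urllib.request.urlopen call to http://localhost:9001.
    """
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return mean(samples)

# Usage with a dummy CPU-bound workload:
print(f"{mean_seconds(lambda: sum(range(10_000))):.6f}")
```

Averaging over several calls smooths out one-off spikes from cold caches or scheduler noise.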

## Running in development environment

1. Go to the root directory.