Description
What version of gRPC-Java are you using?
Master branch (Source Code Analysis)
Current version in source: 1.79.0-SNAPSHOT
What is your environment?
N/A (This is a Static Analysis finding, independent of OS/JDK version)
What did you expect to see?
I expected the internal shared executor (SHARED_CHANNEL_EXECUTOR) to use a bounded thread pool or have a mechanism to reject tasks when under extreme load, to prevent the application from crashing due to resource exhaustion.
What did you see instead?
The SHARED_CHANNEL_EXECUTOR is initialized using Executors.newCachedThreadPool().
This creates a thread pool with maximumPoolSize set to Integer.MAX_VALUE.
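This claim can be checked directly against the JDK: `newCachedThreadPool()` is documented to use `corePoolSize = 0`, `maximumPoolSize = Integer.MAX_VALUE`, and a `SynchronousQueue`, so any task that finds no idle thread spawns a new one. A minimal check:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class SharedExecutorCheck {
    public static void main(String[] args) {
        // newCachedThreadPool is a ThreadPoolExecutor with
        // corePoolSize 0, maximumPoolSize Integer.MAX_VALUE,
        // and a SynchronousQueue as its work queue.
        ThreadPoolExecutor pool =
            (ThreadPoolExecutor) Executors.newCachedThreadPool();
        System.out.println(
            "unbounded: " + (pool.getMaximumPoolSize() == Integer.MAX_VALUE));
        pool.shutdown();
    }
}
```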
Steps to reproduce the bug
This is a design risk identified via Static Analysis. It can be verified by inspecting the source code:
- Open io/grpc/internal/GrpcUtil.java.
- Locate the field SHARED_CHANNEL_EXECUTOR (around line 433).
- Observe the implementation:
```java
public static final Resource<Executor> SHARED_CHANNEL_EXECUTOR =
    new Resource<Executor>() {
      // ...
      @Override
      public Executor create() {
        return Executors.newCachedThreadPool(getThreadFactory(NAME + "-%d", true));
      }
      // ...
    };
```
Risk Analysis: In a microservices environment, if this shared executor is in use (the default behavior when no executor is provided) and the service experiences a traffic spike combined with blocking operations, the CachedThreadPool will attempt to create a new thread for every request that finds no idle thread. This can rapidly lead to java.lang.OutOfMemoryError: unable to create new native thread, or hit the OS thread limit (ulimit), causing a service outage.
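The growth mode is easy to demonstrate in isolation: with a cached pool, N concurrently blocked tasks produce N live threads, because the SynchronousQueue never buffers work. The sketch below (a self-contained illustration, not gRPC code) blocks 50 tasks on a latch and observes the pool size:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class CachedPoolDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor cached =
            (ThreadPoolExecutor) Executors.newCachedThreadPool();
        CountDownLatch release = new CountDownLatch(1);
        int tasks = 50; // stand-in for a traffic spike of blocking requests
        for (int i = 0; i < tasks; i++) {
            // Each task blocks; no thread ever becomes idle for reuse,
            // so every execute() call spawns a fresh thread.
            cached.execute(() -> {
                try {
                    release.await();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        Thread.sleep(200); // let the pool finish spinning up threads
        System.out.println("cached pool threads: " + cached.getPoolSize());
        release.countDown();
        cached.shutdown();
    }
}
```

With 50 tasks the pool reports 50 threads; scale the task count to a spike of tens of thousands of blocked RPCs and the same mechanism exhausts native threads.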
Suggestion: Consider replacing newCachedThreadPool with a bounded ThreadPoolExecutor, or adding documentation/warnings about the risks of relying on the default shared executor in high-concurrency blocking scenarios.
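As a mitigation available to applications today, a bounded pool with a finite queue and an explicit saturation policy can be supplied instead of the default. The sizes below are illustrative only, and the commented channel setup shows where such a pool would be wired in via the existing ManagedChannelBuilder.executor() API:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedGrpcExecutor {
    public static void main(String[] args) {
        // Bounded pool: at most 16 threads, a finite queue, and
        // caller-runs back-pressure instead of unbounded thread creation.
        // All sizes here are illustrative assumptions, not recommendations.
        ThreadPoolExecutor bounded = new ThreadPoolExecutor(
            4, 16,                               // core / max threads
            60L, TimeUnit.SECONDS,               // idle keep-alive
            new ArrayBlockingQueue<>(1024),      // bounded work queue
            new ThreadPoolExecutor.CallerRunsPolicy());

        // Applications can opt out of the shared executor:
        // ManagedChannel channel = ManagedChannelBuilder
        //     .forAddress("example.com", 443)   // hypothetical target
        //     .executor(bounded)
        //     .build();

        System.out.println("max threads: " + bounded.getMaximumPoolSize());
        bounded.shutdown();
    }
}
```

CallerRunsPolicy is one reasonable saturation choice because it throttles the producer rather than dropping tasks; a rejecting policy with monitoring is another option depending on latency requirements.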