perf: optimize hot paths with caching and O(1) operations #1816
herniqeu wants to merge 1 commit into modelcontextprotocol:main
Conversation
- Replace list.pop(0) with deque.popleft() for O(1) queue dequeue
- Cache compiled regex patterns in ResourceTemplate for URI matching
- Cache field info mapping in FuncMetadata via lazy property
- Throttle expired task cleanup with interval-based execution

These optimizations target high-frequency operations in message queuing, resource lookups, tool calls, and task store access (a minimal sketch of the first two patterns follows below).
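The first two items follow well-known Python patterns. A minimal, illustrative sketch of the O(1) dequeue and cached URI-template regex ideas — not the SDK's actual classes; `_compiled_pattern` is a hypothetical helper name:

```python
import re
from collections import deque
from functools import lru_cache

# O(1) dequeue: list.pop(0) shifts every remaining element (O(n));
# deque.popleft() removes from the left end in constant time.
queue: deque[str] = deque()
queue.append("message-1")
queue.append("message-2")
first = queue.popleft()  # "message-1"

# Cached regex compilation: build the URI-template pattern once and
# reuse it for every lookup instead of recompiling on each match.
@lru_cache(maxsize=None)
def _compiled_pattern(uri_template: str) -> re.Pattern[str]:
    # Turn "{name}" placeholders into named capture groups; a real
    # implementation would also escape regex metacharacters.
    regex = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", uri_template)
    return re.compile(f"^{regex}$")

match = _compiled_pattern("resource://items/{item_id}").match("resource://items/42")
print(match.groupdict() if match else None)  # {'item_id': '42'}
```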
from mcp.shared.experimental.tasks.store import TaskStore
from mcp.types import Result, Task, TaskMetadata, TaskStatus


CLEANUP_INTERVAL_SECONDS = 1.0
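For the throttled cleanup, the idea is to run the expiry scan at most once per interval rather than on every store access. A rough sketch of that pattern, assuming a hypothetical store keyed by task id with expiry timestamps (not the SDK's actual task store):

```python
import time

CLEANUP_INTERVAL_SECONDS = 1.0


class ThrottledExpiry:
    """Illustrative only: run the expiry scan at most once per interval."""

    def __init__(self, interval: float = CLEANUP_INTERVAL_SECONDS) -> None:
        self._interval = interval
        self._last_cleanup = 0.0
        # task_id -> monotonic expiry time
        self._expires_at: dict[str, float] = {}

    def add(self, task_id: str, ttl: float) -> None:
        self._expires_at[task_id] = time.monotonic() + ttl

    def maybe_cleanup(self) -> None:
        now = time.monotonic()
        if now - self._last_cleanup < self._interval:
            return  # throttled: skip the O(n) scan on this access
        self._last_cleanup = now
        expired = [tid for tid, t in self._expires_at.items() if t <= now]
        for tid in expired:
            del self._expires_at[tid]
```

The trade-off raised in the review below is visible here: an expired task can linger for up to one interval after its TTL passes.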
I guess this would be a breaking change? Throttling the cleanup may leave some expired tasks around for a while and makes behavior timing-dependent, but it makes sense if the task store is accessed very frequently (though I'm not sure what the benchmarks look like). How often does this happen, and are there use cases that would be addressed by throttling the cleanup?
nit: iiuc, these changes optimize repeated calls to pre_parse_json. I'm not sure what the protocol is for optimization-related tests, but it could be useful to add a small benchmark to the description: call pre_parse_json repeatedly (50k–100k times) on a ~20-field pydantic model, before vs. after.
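Something along these lines would give the numbers being asked for; this is only the rough shape of such a benchmark (the model, field count, and call count are placeholders, and it uses model_validate as a stand-in rather than calling the SDK's pre_parse_json directly):

```python
import time

from pydantic import create_model

# A ~20-field model, roughly matching the suggested benchmark setup.
Wide = create_model("Wide", **{f"field_{i}": (int, 0) for i in range(20)})


def bench(n: int = 50_000) -> float:
    payload = {f"field_{i}": i for i in range(20)}
    start = time.perf_counter()
    for _ in range(n):
        Wide.model_validate(payload)  # stand-in for the repeated hot call
    return time.perf_counter() - start


if __name__ == "__main__":
    # Run once on main and once on the branch, then compare.
    print(f"{bench():.3f}s for 50k iterations")
```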
Thanks for the contribution, but closing this PR for now. Happy to reconsider if you open an issue with benchmarks showing the problem.
These optimizations target high-frequency operations in message queuing, resource lookups, tool calls, and task store access.
Motivation and Context
How Has This Been Tested?
Breaking Changes
Types of changes
Checklist
Additional context