Implements the Multi-Provider as specified in OpenFeature Appendix A.
The Multi-Provider wraps multiple underlying providers in a unified interface,
allowing a single client to interact with multiple flag sources simultaneously.
Key features implemented:
- MultiProvider class extending AbstractProvider
- FirstMatchStrategy (sequential evaluation, stops at first success)
- EvaluationStrategy protocol for custom strategies
- Provider name uniqueness (explicit, metadata-based, or auto-indexed)
- Parallel initialization of all providers with error aggregation
- Support for all flag types (boolean, string, integer, float, object)
- Hook aggregation from all providers
Use cases:
- Migration: Run old and new providers in parallel
- Multiple data sources: Combine env vars, files, and SaaS providers
- Fallback: Primary provider with backup sources
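The first-match fallback flow described above can be sketched in a few lines (the `ProviderEntry` shape, `resolve` callable, and `FlagNotFoundError` here are illustrative stand-ins, not the SDK's actual API):

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence


class FlagNotFoundError(Exception):
    """Signals that a provider has no value for this flag (illustrative)."""


@dataclass
class ProviderEntry:
    name: str
    resolve: Callable[[str], Any]  # stand-in for the provider's resolver


def first_match(entries: Sequence[ProviderEntry], flag_key: str, default: Any) -> Any:
    """Ask each provider in order; stop at the first one that knows the flag."""
    for entry in entries:
        try:
            return entry.resolve(flag_key)
        except FlagNotFoundError:
            continue  # only FLAG_NOT_FOUND falls through to the next provider
    return default


def primary(key: str) -> Any:
    raise FlagNotFoundError(key)  # primary has not defined this flag yet


entries = [ProviderEntry("primary", primary), ProviderEntry("fallback", lambda key: True)]
print(first_match(entries, "new-ui", False))  # → True
```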
Example usage:

```python
provider_a = SomeProvider()
provider_b = AnotherProvider()
multi = MultiProvider([
    ProviderEntry(provider_a, name="primary"),
    ProviderEntry(provider_b, name="fallback"),
])
api.set_provider(multi)
```
Closes #511
Signed-off-by: vikasrao23 <vikasrao23@users.noreply.github.com>
…hancements

Address Gemini code review feedback:
- Update initialize() docstring to reflect sequential (not parallel) initialization
- Add documentation notes to all async methods explaining they currently delegate to sync
- Clarify that parallel evaluation mode is planned but not yet implemented
- Update EvaluationStrategy protocol docs to set correct expectations

This brings documentation in line with the actual implementation. True async and parallel execution will be added in follow-up PRs.

Refs: #511
Signed-off-by: vikasrao23 <vikasrao23@users.noreply.github.com>
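The "currently delegate to sync" note above describes a common interim pattern; a minimal sketch (class and method names are invented for illustration, not the SDK's real signatures) uses `asyncio.to_thread` so the sync path runs without blocking the event loop:

```python
import asyncio


class MultiProviderSketch:
    """Illustrative only: async entry points that delegate to a sync core."""

    def resolve_boolean_details(self, flag_key: str, default: bool) -> bool:
        # The real resolution work happens synchronously here.
        return default

    async def resolve_boolean_details_async(self, flag_key: str, default: bool) -> bool:
        # Delegate to the sync path in a worker thread; the event loop stays free.
        return await asyncio.to_thread(self.resolve_boolean_details, flag_key, default)


print(asyncio.run(MultiProviderSketch().resolve_boolean_details_async("flag", True)))  # → True
```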
CRITICAL FIXES:
- Fix FlagResolutionDetails initialization - remove invalid flag_key parameter
- Add error_code (ErrorCode.GENERAL) to all error results per spec

HIGH PRIORITY:
- Implement true async evaluation using _evaluate_with_providers_async
- All async methods now properly await provider async methods (no blocking)
- Implement parallel provider initialization using ThreadPoolExecutor

IMPROVEMENTS:
- Remove unused imports (asyncio, ProviderEvent, ProviderEventDetails, ProviderStatus)
- Add ErrorCode import for proper error handling
- Cache provider hooks to avoid re-aggregating on every evaluation
- Update docstrings to clarify current implementation status
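The parallel-initialization-with-error-aggregation pattern mentioned above can be sketched standalone (this is not the PR's actual code; `initialize_all` and the callables are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor


def initialize_all(named_initializers):
    """Run every initializer in parallel; collect failures instead of stopping."""
    errors = []

    def init_one(item):
        name, initialize = item
        try:
            initialize()  # stand-in for provider.initialize(context)
        except Exception as exc:
            errors.append((name, exc))  # list.append is thread-safe under the GIL

    with ThreadPoolExecutor(max_workers=len(named_initializers)) as executor:
        list(executor.map(init_one, named_initializers))

    if errors:
        names = ", ".join(name for name, _ in errors)
        raise RuntimeError(f"provider(s) failed to initialize: {names}")


def ok():
    pass


def bad():
    raise ValueError("boom")


initialize_all([("a", ok)])  # all succeed: returns quietly
try:
    initialize_all([("a", ok), ("b", bad)])
except RuntimeError as exc:
    print(exc)  # → provider(s) failed to initialize: b
```

Sizing `max_workers` to the number of providers (as the commit above does) lets every initializer start immediately rather than queue behind a smaller pool.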
HIGH PRIORITY FIXES:
- Fix name resolution logic to prevent collisions between explicit and auto-generated names
  - Check used_names set for metadata names before using them
  - Use while loop to find next available indexed name if collision detected
- Implement event propagation (spec requirement)
  - Override attach() and detach() methods to forward events to all providers
  - Import ProviderEvent and ProviderEventDetails
  - Enables cache invalidation and other event-driven features

MEDIUM PRIORITY IMPROVEMENTS:
- Parallel shutdown with proper error logging
  - Use ThreadPoolExecutor for concurrent shutdown
  - Add logging for shutdown failures
- Optimize ThreadPoolExecutor max_workers
  - Set to len(providers) for both initialize() and shutdown()
  - Ensures all providers can start immediately
- Improve type hints for better type safety
  - Add generic type parameters to FlagResolutionDetails in resolve_fn signatures
  - Specify Awaitable return type for async resolve_fn
  - Add generic types to results list declarations

All critical and high-priority feedback addressed. Ready for re-review.

Refs: #511
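The collision-safe name resolution described in the first fix above (explicit name wins, then metadata name if unused, then an auto-indexed name that skips taken indices) can be sketched as follows; `assign_names` and the `provider-N` naming scheme are illustrative, not necessarily the SDK's exact format:

```python
def assign_names(entries):
    """entries: list of (explicit_name_or_None, metadata_name_or_None).

    Returns one unique name per entry: explicit wins, then the metadata
    name if not already taken, then an auto-indexed fallback that uses a
    while loop to skip any index that collides with an existing name.
    """
    used = set()
    names = []
    for i, (explicit, metadata) in enumerate(entries):
        if explicit is not None:
            name = explicit
        elif metadata is not None and metadata not in used:
            name = metadata
        else:
            idx = i
            name = f"provider-{idx}"
            while name in used:  # find the next free indexed name
                idx += 1
                name = f"provider-{idx}"
        used.add(name)
        names.append(name)
    return names


print(assign_names([("provider-1", None), (None, "flagd"), (None, "flagd")]))
# → ['provider-1', 'flagd', 'provider-2']
```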
This is more consistent with the other type imports in the file.
Co-authored-by: jonathan <jonathan@taplytics.com>
Summary of Changes

This pull request significantly enhances the OpenFeature Python SDK by introducing robust multi-provider support. It allows developers to configure and manage multiple feature flag providers simultaneously, offering flexible evaluation strategies and ensuring proper isolation of provider-specific logic. This change improves the SDK's adaptability to complex feature management architectures and provides more control over how flags are resolved and events are handled across diverse provider landscapes.
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff            @@
##             main     #569     +/-  ##
=========================================
- Coverage   98.18%   94.89%   -3.29%
=========================================
  Files          41       43       +2
  Lines        1982     2705     +723
=========================================
+ Hits         1946     2567     +621
- Misses         36      138     +102
```

Flags with carried forward coverage won't be shown.
Code Review
This pull request introduces significant new functionality for multi-provider evaluation, including different evaluation strategies, parallel execution, and robust status and event handling. The implementation is comprehensive and includes a good set of tests, with well-integrated changes to the client and registry for MultiProvider internal hooks. A critical security issue was identified in the parallel synchronous evaluation mode: contextvars are not propagated to worker threads in the ThreadPoolExecutor, which bypasses child provider hooks and requires remediation. Additionally, it is suggested to improve the robustness of the MultiProvider shutdown process by better parallelizing detach and shutdown calls for child providers.
```python
with ThreadPoolExecutor(max_workers=len(self._registeredProviders)) as executor:
    futures = [
        executor.submit(
            self._evaluate_provider_sync,
            provider_name,
            provider,
            flag_type,
            flag_key,
            default_value,
            evaluation_context,
            resolve_fn,
        )
        for provider_name, provider in self._registeredProviders
    ]
    evaluations = [future.result() for future in futures]
```
The MultiProvider implementation uses ThreadPoolExecutor for parallel evaluation in synchronous mode, but it does not propagate contextvars to the worker threads. The _evaluate_provider_sync method relies on self._hookRuntime, which is a contextvars.ContextVar, to manage hook execution. Since ThreadPoolExecutor does not automatically propagate context, self._hookRuntime.get() will return None in worker threads, causing child provider hooks to be bypassed when run_mode is set to "parallel". This could lead to security vulnerabilities if hooks are used for authorization, auditing, or other security-sensitive tasks.
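The missing propagation is easy to reproduce in isolation (a standalone sketch, not the SDK's code; `hook_runtime` here is an invented stand-in for `self._hookRuntime`):

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

hook_runtime = contextvars.ContextVar("hook_runtime", default=None)
hook_runtime.set("active-hooks")

with ThreadPoolExecutor(max_workers=1) as executor:
    # Worker threads start with a fresh context, so the var falls back to its default.
    unpropagated = executor.submit(hook_runtime.get).result()
    # Running inside a copy of the caller's context restores the value.
    ctx = contextvars.copy_context()
    propagated = executor.submit(ctx.run, hook_runtime.get).result()

print(unpropagated, propagated)  # → None active-hooks
```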
Suggested change (note: a single `Context` object cannot be entered by several threads at once, so each submitted task copies the caller's context for itself rather than sharing one copy):

```python
if self.strategy.run_mode == "parallel":
    with ThreadPoolExecutor(max_workers=len(self._registeredProviders)) as executor:
        futures = [
            executor.submit(
                # Copy per task: CPython raises RuntimeError if the same
                # Context is entered concurrently from multiple threads.
                contextvars.copy_context().run,
                self._evaluate_provider_sync,
                provider_name,
                provider,
                flag_type,
                flag_key,
                default_value,
                evaluation_context,
                resolve_fn,
            )
            for provider_name, provider in self._registeredProviders
        ]
        evaluations = [future.result() for future in futures]
```

The code under review for the next comment:

```python
for _, provider in self._registeredProviders:
    provider.detach()


def shutdown_provider(entry: tuple[str, FeatureProvider]) -> None:
    provider_name, provider = entry
    try:
        provider.shutdown()
    except Exception:
        logger.exception("Provider '%s' shutdown failed", provider_name)


with ThreadPoolExecutor(max_workers=len(self._registeredProviders)) as executor:
    list(executor.map(shutdown_provider, self._registeredProviders))
```
The detach calls for child providers are currently performed sequentially before their shutdown is called in parallel. If any provider's detach method is slow or blocking, it will delay the entire shutdown process. To improve robustness and fully parallelize the shutdown, consider moving the detach call inside the shutdown_provider function. This ensures that detach and shutdown for each provider run together in the thread pool.
```python
def shutdown_provider(entry: tuple[str, FeatureProvider]) -> None:
    provider_name, provider = entry
    try:
        provider.detach()
        provider.shutdown()
    except Exception:
        logger.exception("Provider '%s' shutdown failed", provider_name)


with ThreadPoolExecutor(max_workers=len(self._registeredProviders)) as executor:
    list(executor.map(shutdown_provider, self._registeredProviders))
```
Implement multi-provider functionality and close parity gaps outlined in #568.
This PR addresses several missing aspects of multi-provider functionality, including:
- `PROVIDER_CONFIGURATION_CHANGED` events.
- `FirstMatchStrategy` to fall through only on `FLAG_NOT_FOUND`.
- `FirstSuccessfulStrategy` and `ComparisonStrategy`.
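The distinction between the two fall-through strategies above can be sketched over a list of `(value, error_code)` results (a simplified model, not the SDK's actual types): `FirstMatchStrategy` treats only `FLAG_NOT_FOUND` as "keep looking" and stops on any other error, while `FirstSuccessfulStrategy` skips every error and takes the first clean result.

```python
FLAG_NOT_FOUND = "FLAG_NOT_FOUND"


def first_match(results, default):
    """Fall through only on FLAG_NOT_FOUND; any other error halts evaluation."""
    for value, error_code in results:
        if error_code is None:
            return value
        if error_code != FLAG_NOT_FOUND:
            return default  # real error: stop here rather than mask it
    return default


def first_successful(results, default):
    """Fall through on any error; the first clean result wins."""
    for value, error_code in results:
        if error_code is None:
            return value
    return default


results = [(None, "PARSE_ERROR"), (True, None)]
print(first_match(results, False))       # → False (stops at the parse error)
print(first_successful(results, False))  # → True  (skips past it)
```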