
Provider shutdown on exit#569

Draft
jonathannorris wants to merge 6 commits into main from cursor/provider-shutdown-on-exit-5d57

Conversation

@jonathannorris
Member

Implement multi-provider functionality and close parity gaps outlined in #568.

This PR addresses several missing aspects of multi-provider functionality, including:

  • Aggregate child-provider status tracking with worst-wins precedence and deduplicated aggregate event emission.
  • Forwarding of PROVIDER_CONFIGURATION_CHANGED events.
  • Isolation of child provider hooks to run only for the provider being evaluated, with full before/after/error/finally lifecycle.
  • Corrected FirstMatchStrategy to fall through only on FLAG_NOT_FOUND.
  • Introduction of FirstSuccessfulStrategy and ComparisonStrategy.
  • Implementation of actual parallel evaluation for sync and async multi-provider execution.
  • Updates to client/registry integration for correct internal multi-provider hooks and status reporting.
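
The corrected FirstMatchStrategy fall-through rule can be sketched as follows. This is a simplified illustration, not the PR's implementation: `ErrorCode` and `Resolution` here are stand-ins for the SDK's real types.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ErrorCode(Enum):
    # Simplified stand-in for openfeature's ErrorCode
    FLAG_NOT_FOUND = "FLAG_NOT_FOUND"
    GENERAL = "GENERAL"


@dataclass
class Resolution:
    # Simplified stand-in for a provider's resolution details
    value: object = None
    error_code: Optional[ErrorCode] = None


def first_match(evaluations):
    """Return the first result that is not FLAG_NOT_FOUND.

    Only FLAG_NOT_FOUND falls through to the next provider; any other
    error stops the chain and is returned as-is.
    """
    last = None
    for result in evaluations:
        last = result
        if result.error_code is ErrorCode.FLAG_NOT_FOUND:
            continue  # flag missing in this provider -- try the next one
        return result  # a success or a non-FLAG_NOT_FOUND error ends the chain
    return last  # every provider reported FLAG_NOT_FOUND
```

The key point is the middle branch: before the fix, any error fell through to the next provider, which could mask real failures behind a later provider's stale answer.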


vikasrao23 and others added 6 commits March 6, 2026 10:08
Implements the Multi-Provider as specified in OpenFeature Appendix A.

The Multi-Provider wraps multiple underlying providers in a unified interface,
allowing a single client to interact with multiple flag sources simultaneously.

Key features implemented:
- MultiProvider class extending AbstractProvider
- FirstMatchStrategy (sequential evaluation, stops at first success)
- EvaluationStrategy protocol for custom strategies
- Provider name uniqueness (explicit, metadata-based, or auto-indexed)
- Parallel initialization of all providers with error aggregation
- Support for all flag types (boolean, string, integer, float, object)
- Hook aggregation from all providers

Use cases:
- Migration: Run old and new providers in parallel
- Multiple data sources: Combine env vars, files, and SaaS providers
- Fallback: Primary provider with backup sources

Example usage:
    provider_a = SomeProvider()
    provider_b = AnotherProvider()

    multi = MultiProvider([
        ProviderEntry(provider_a, name="primary"),
        ProviderEntry(provider_b, name="fallback")
    ])

    api.set_provider(multi)

Closes #511

Signed-off-by: vikasrao23 <vikasrao23@users.noreply.github.com>
…hancements

Address Gemini code review feedback:
- Update initialize() docstring to reflect sequential (not parallel) initialization
- Add documentation notes to all async methods explaining they currently delegate to sync
- Clarify that parallel evaluation mode is planned but not yet implemented
- Update EvaluationStrategy protocol docs to set correct expectations

This brings documentation in line with actual implementation. True async and parallel
execution will be added in follow-up PRs.

Refs: #511
Signed-off-by: vikasrao23 <vikasrao23@users.noreply.github.com>
CRITICAL FIXES:
- Fix FlagResolutionDetails initialization - remove invalid flag_key parameter
- Add error_code (ErrorCode.GENERAL) to all error results per spec

HIGH PRIORITY:
- Implement true async evaluation using _evaluate_with_providers_async
- All async methods now properly await provider async methods (no blocking)
- Implement parallel provider initialization using ThreadPoolExecutor

IMPROVEMENTS:
- Remove unused imports (asyncio, ProviderEvent, ProviderEventDetails, ProviderStatus)
- Add ErrorCode import for proper error handling
- Cache provider hooks to avoid re-aggregating on every evaluation
- Update docstrings to clarify current implementation status
HIGH PRIORITY FIXES:
- Fix name resolution logic to prevent collisions between explicit and auto-generated names
  - Check used_names set for metadata names before using them
  - Use while loop to find next available indexed name if collision detected
- Implement event propagation (spec requirement)
  - Override attach() and detach() methods to forward events to all providers
  - Import ProviderEvent and ProviderEventDetails
  - Enables cache invalidation and other event-driven features
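
The collision-safe name assignment described above might look roughly like this. It is a sketch over simplified `(explicit, metadata)` name pairs rather than the PR's real ProviderEntry objects, and the `provider-{i}` fallback pattern is an assumption for illustration:

```python
def assign_unique_names(entries):
    """Assign a unique name to each provider entry.

    `entries` is a list of (explicit_name, metadata_name) pairs, either of
    which may be None. Explicit names take precedence, then metadata names,
    then an auto-indexed fallback; if a candidate collides with a name that
    is already taken, a while loop finds the next free indexed name.
    """
    used = set()
    names = []
    for index, (explicit, metadata) in enumerate(entries):
        candidate = explicit or metadata
        if candidate is None or candidate in used:
            # collision or no name: fall back to the next available index
            i = index
            candidate = f"provider-{i}"
            while candidate in used:
                i += 1
                candidate = f"provider-{i}"
        used.add(candidate)
        names.append(candidate)
    return names
```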

MEDIUM PRIORITY IMPROVEMENTS:
- Parallel shutdown with proper error logging
  - Use ThreadPoolExecutor for concurrent shutdown
  - Add logging for shutdown failures
- Optimize ThreadPoolExecutor max_workers
  - Set to len(providers) for both initialize() and shutdown()
  - Ensures all providers can start immediately
- Improve type hints for better type safety
  - Add generic type parameters to FlagResolutionDetails in resolve_fn signatures
  - Specify Awaitable return type for async resolve_fn
  - Add generic types to results list declarations

All critical and high-priority feedback addressed. Ready for re-review.

Refs: #511
This is more consistent with the other type imports in the file.
Co-authored-by: jonathan <jonathan@taplytics.com>

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the OpenFeature Python SDK by introducing robust multi-provider support. It allows developers to configure and manage multiple feature flag providers simultaneously, offering flexible evaluation strategies and ensuring proper isolation of provider-specific logic. This change improves the SDK's adaptability to complex feature management architectures and provides more control over how flags are resolved and events are handled across diverse provider landscapes.

Highlights

  • Multi-Provider Functionality: Implemented core multi-provider capabilities, allowing the OpenFeature client to manage and evaluate flags across multiple feature flag providers.
  • Evaluation Strategies: Introduced FirstMatchStrategy, FirstSuccessfulStrategy, and ComparisonStrategy to define how flags are resolved when multiple providers are configured, supporting both sequential and parallel execution.
  • Provider Hook Isolation: Ensured that provider-specific hooks are isolated and run only for the provider being evaluated, with a complete before/after/error/finally lifecycle.
  • Aggregate Status Tracking: Developed a mechanism for tracking and aggregating the status of child providers, using a 'worst-wins' precedence and deduplicating event emissions.
  • Event Forwarding: Enabled forwarding of PROVIDER_CONFIGURATION_CHANGED events from individual child providers through the MultiProvider.
  • Client/Registry Integration: Updated client and provider registry logic to correctly integrate with the new internal multi-provider hooks and status reporting.
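
A minimal sketch of the worst-wins aggregation with deduplicated emission described above. The status ordering here is illustrative, not the SDK's actual enum, and `StatusTracker` is a hypothetical helper rather than code from the PR:

```python
from enum import IntEnum


class Status(IntEnum):
    # Illustrative ordering: higher value = worse status, so max() is "worst wins"
    READY = 0
    STALE = 1
    NOT_READY = 2
    ERROR = 3
    FATAL = 4


def aggregate_status(child_statuses):
    """Worst-wins: the MultiProvider reports the worst of its children."""
    return max(child_statuses, default=Status.READY)


class StatusTracker:
    """Tracks child statuses and reports aggregate changes only once."""

    def __init__(self, names):
        self._statuses = {name: Status.READY for name in names}
        self._last_aggregate = aggregate_status(self._statuses.values())

    def update(self, name, status):
        """Record a child status change.

        Returns the new aggregate status if it changed, else None --
        so the caller emits exactly one event per aggregate transition.
        """
        self._statuses[name] = status
        agg = aggregate_status(self._statuses.values())
        if agg != self._last_aggregate:
            self._last_aggregate = agg
            return agg  # caller emits a single aggregate event
        return None  # no aggregate change -- suppress the duplicate event
```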
Changelog
  • openfeature/client.py
    • Added internal methods _provider_uses_internal_hooks, _set_internal_provider_hook_runtime, and _reset_internal_provider_hook_runtime to manage provider-specific hook contexts.
    • Modified flag evaluation methods to utilize these new internal hook runtime management functions, ensuring proper hook lifecycle for multi-providers.
  • openfeature/provider/__init__.py
    • Updated the __all__ export list to include new multi-provider related classes.
    • Imported new multi-provider strategy and core classes from the multi_provider module.
  • openfeature/provider/_registry.py
    • Modified provider initialization and error handling to prevent redundant event dispatches if a provider's status is not NOT_READY.
    • Updated get_provider_status to check for an internal get_status method on providers, allowing MultiProvider to report its aggregate status.
  • openfeature/provider/multi_provider.py
    • Added MultiProvider class to manage multiple feature flag providers.
    • Implemented EvaluationStrategy protocol with FirstMatchStrategy, FirstSuccessfulStrategy, and ComparisonStrategy for flag resolution.
    • Included logic for aggregating provider statuses with precedence and emitting deduplicated events.
    • Provided mechanisms for parallel and sequential flag evaluation across registered providers.
    • Integrated internal hook runtime management to ensure isolated hook execution for each child provider.
  • tests/test_multi_provider.py
    • Added new test cases to validate the functionality of MultiProvider.
    • Included tests for various evaluation strategies, hook isolation, parallel execution, and event aggregation.

@codecov

codecov bot commented Mar 6, 2026

Codecov Report

❌ Patch coverage is 85.85165% with 103 lines in your changes missing coverage. Please review.
✅ Project coverage is 94.89%. Comparing base (05382aa) to head (df940f3).

Files with missing lines Patch % Lines
openfeature/provider/multi_provider.py 78.88% 87 Missing ⚠️
tests/test_multi_provider.py 94.64% 15 Missing ⚠️
openfeature/client.py 96.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #569      +/-   ##
==========================================
- Coverage   98.18%   94.89%   -3.29%     
==========================================
  Files          41       43       +2     
  Lines        1982     2705     +723     
==========================================
+ Hits         1946     2567     +621     
- Misses         36      138     +102     
Flag Coverage Δ
unittests 94.89% <85.85%> (-3.29%) ⬇️


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces significant new functionality for multi-provider evaluation, including different evaluation strategies, parallel execution, and robust status and event handling. The implementation is comprehensive and includes a good set of tests, with well-integrated changes to the client and registry for MultiProvider internal hooks. A critical security issue was identified in the parallel synchronous evaluation mode: contextvars are not propagated to worker threads in the ThreadPoolExecutor, which bypasses child provider hooks and requires remediation. Additionally, it is suggested to improve the robustness of the MultiProvider shutdown process by better parallelizing detach and shutdown calls for child providers.

Comment on lines +854 to +868
        with ThreadPoolExecutor(max_workers=len(self._registeredProviders)) as executor:
            futures = [
                executor.submit(
                    self._evaluate_provider_sync,
                    provider_name,
                    provider,
                    flag_type,
                    flag_key,
                    default_value,
                    evaluation_context,
                    resolve_fn,
                )
                for provider_name, provider in self._registeredProviders
            ]
            evaluations = [future.result() for future in futures]


Severity: medium (security)

The MultiProvider implementation uses ThreadPoolExecutor for parallel evaluation in synchronous mode, but it does not propagate contextvars to the worker threads. The _evaluate_provider_sync method relies on self._hookRuntime, which is a contextvars.ContextVar, to manage hook execution. Since ThreadPoolExecutor does not automatically propagate context, self._hookRuntime.get() will return None in worker threads, causing child provider hooks to be bypassed when run_mode is set to "parallel". This could lead to security vulnerabilities if hooks are used for authorization, auditing, or other security-sensitive tasks.

        if self.strategy.run_mode == "parallel":
            ctx = contextvars.copy_context()
            with ThreadPoolExecutor(max_workers=len(self._registeredProviders)) as executor:
                futures = [
                    executor.submit(
                        ctx.run,
                        self._evaluate_provider_sync,
                        provider_name,
                        provider,
                        flag_type,
                        flag_key,
                        default_value,
                        evaluation_context,
                        resolve_fn,
                    )
                    for provider_name, provider in self._registeredProviders
                ]
                evaluations = [future.result() for future in futures]
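
The propagation gap is easy to reproduce outside the SDK. In this standalone sketch, `hook_runtime` stands in for the real `_hookRuntime` ContextVar: a `ThreadPoolExecutor` worker thread sees the variable's default until the task is wrapped in a copied context with `ctx.run`, which is exactly what the suggested fix does.

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the MultiProvider's _hookRuntime context variable.
hook_runtime: contextvars.ContextVar = contextvars.ContextVar("hook_runtime", default=None)


def read_runtime():
    """Worker task: reads the ContextVar, as _evaluate_provider_sync would."""
    return hook_runtime.get()


hook_runtime.set("child-hooks")

with ThreadPoolExecutor(max_workers=1) as executor:
    # Worker threads start with a fresh, empty context: the set() above
    # is invisible there, so the worker reads the default (None).
    unpropagated = executor.submit(read_runtime).result()

    # Copying the caller's context and running the task inside it
    # makes the value visible in the worker thread.
    ctx = contextvars.copy_context()
    propagated = executor.submit(ctx.run, read_runtime).result()

print(unpropagated, propagated)  # None child-hooks
```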

Comment on lines +473 to +484
        for _, provider in self._registeredProviders:
            provider.detach()

        def shutdown_provider(entry: tuple[str, FeatureProvider]) -> None:
            provider_name, provider = entry
            try:
                provider.shutdown()
            except Exception:
                logger.exception("Provider '%s' shutdown failed", provider_name)

        with ThreadPoolExecutor(max_workers=len(self._registeredProviders)) as executor:
            list(executor.map(shutdown_provider, self._registeredProviders))


Severity: medium

The detach calls for child providers are currently performed sequentially before their shutdown is called in parallel. If any provider's detach method is slow or blocking, it will delay the entire shutdown process. To improve robustness and fully parallelize the shutdown, consider moving the detach call inside the shutdown_provider function. This ensures that detach and shutdown for each provider run together in the thread pool.

        def shutdown_provider(entry: tuple[str, FeatureProvider]) -> None:
            provider_name, provider = entry
            try:
                provider.detach()
                provider.shutdown()
            except Exception:
                logger.exception("Provider '%s' shutdown failed", provider_name)

        with ThreadPoolExecutor(max_workers=len(self._registeredProviders)) as executor:
            list(executor.map(shutdown_provider, self._registeredProviders))
