Add @Flow.model functional API #206
Open
NeejWeej wants to merge 1 commit into Point72:main from
Codecov Report: coverage diff and impacted files:

```
@@ Coverage Diff @@
##             main     #206      +/-   ##
==========================================
+ Coverage   95.22%   95.41%   +0.19%
==========================================
  Files         140      145       +5
  Lines       10354    14430    +4076
  Branches      599      913     +314
==========================================
+ Hits         9860    13769    +3909
- Misses        369      516     +147
- Partials      125      145      +20
```
Signed-off-by: Nijat K <nijat.khanbabayev@gmail.com>
PR Summary:
Replaces #171. Reopened from a personal fork.
This PR adds `@Flow.model`, an authoring API that turns a typed Python function into a real `CallableModel` factory. The intent is to make common DAG stages easier to write while keeping execution inside the existing ccflow machinery. Generated models still use the existing `CallableModel`, evaluator, cache, dependency graph, registry, Hydra, and serialization paths.

### Core API
`@Flow.model` splits function parameters into two categories:

- Regular parameters, which configure the model, including `CallableModel` dependencies.
- Contextual parameters, annotated `FromContext[T]`. These are runtime inputs supplied by the context, `.flow.compute(...)`, construction-time contextual defaults, or `.flow.with_context(...)`.

Example:
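To illustrate the splitting idea only, here is a plain-Python sketch, not ccflow's implementation: `FromContext` is a hypothetical stand-in marker built on `typing.Annotated`, and `split_params` shows how a decorator could separate the two parameter categories from a function signature.

```python
# Hypothetical sketch of the two-category split (NOT the ccflow API).
import inspect
from typing import Annotated, get_args, get_origin, get_type_hints

class _FromContextMarker:
    """Stand-in marker for ccflow's FromContext[T]."""

def FromContext(tp):
    # FromContext(T) ~ Annotated[T, marker] for this illustration
    return Annotated[tp, _FromContextMarker]

def split_params(fn):
    """Return (regular, contextual) parameter-name lists for fn."""
    hints = get_type_hints(fn, include_extras=True)
    regular, contextual = [], []
    for name in inspect.signature(fn).parameters:
        ann = hints.get(name)
        is_ctx = get_origin(ann) is Annotated and any(
            m is _FromContextMarker for m in get_args(ann)[1:]
        )
        (contextual if is_ctx else regular).append(name)
    return regular, contextual

def load_prices(source: str, date: FromContext(str)) -> list:
    return []

# regular params configure the model; contextual ones come from the context
print(split_params(load_prices))  # (['source'], ['date'])
```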
When a function returns a non-`ResultBase` value, the generated model wraps it in `GenericResult[value]`. Explicit `ResultBase` returns are preserved.

### Dependency Wiring
Regular parameters can be bound to upstream models:
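A minimal stand-in sketch of the wiring idea (not the ccflow API; `Model` and `deps` here are hypothetical names): a regular parameter whose value is itself a model becomes a graph edge.

```python
# Hypothetical sketch (NOT ccflow) of binding a regular parameter to an
# upstream model so a graph evaluator can discover the dependency edge.
class Model:
    def __init__(self, name, **params):
        self.name = name
        self.params = params

    def deps(self):
        # expose non-lazy, Model-valued regular parameters to the evaluator
        return [v for v in self.params.values() if isinstance(v, Model)]

prices = Model("load_prices", source="db")
returns = Model("compute_returns", prices=prices, window=20)
assert returns.deps() == [prices]  # `prices` is an upstream dependency
```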
Generated `__deps__` methods expose non-lazy upstream model dependencies to the existing graph evaluator. `Lazy[T]` remains supported for dependency thunks when a dependency should only be evaluated if user code calls it.

### Context Rewrites
This PR adds `.flow.with_context(...)` plus `@Flow.context_transform`. `with_context` rewrites the runtime context for one dependency edge without mutating the wrapped model. This supports fanout patterns where the same model is evaluated against different contextual inputs in different branches.

Raw callables are intentionally rejected in `with_context`; reusable transforms should be defined with `@Flow.context_transform` so they can be validated and serialized.
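As a rough illustration of the edge-local rewrite semantics in plain Python (not the ccflow API; `Ctx` and `shift_region` are hypothetical names): a transform produces a new context for one edge while the original context, and the wrapped model, stay untouched.

```python
# Plain-Python stand-in for an edge-local context rewrite (NOT ccflow).
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Ctx:
    day: str
    region: str

def shift_region(ctx: Ctx, region: str) -> Ctx:
    # returns a NEW context; the original is untouched (frozen dataclass)
    return replace(ctx, region=region)

base = Ctx(day="2024-01-02", region="US")
eu_edge = shift_region(base, "EU")  # rewrite seen only by one edge
assert base.region == "US" and eu_edge.region == "EU"
```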
### Execution Helpers

Every `CallableModel` now exposes `model.flow`. For generated models, `model.flow` provides:

- `compute(...)`: ergonomic execution from a context object or contextual kwargs.
- `with_context(...)`: edge-local context rewrites.
- `context_inputs`: contextual fields the model may consume.
- `bound_inputs`: construction-time fields and static context bindings.
- `unbound_inputs`: required contextual fields not yet satisfied.

`compute()` deliberately does not bind regular parameters. If a kwarg matches a regular parameter, it raises instead of silently treating runtime context as model configuration.

The PR also adds `Flow.call(auto_context=...)` as a narrow opt-in for hand-written `CallableModel.__call__` methods that want to declare context fields as keyword-only parameters. It is not the main `@Flow.model` authoring path and does not add `FromContext[...]`, dependency wiring, or `.flow.with_context(...)` semantics by itself.

### Serialization
Importable module-level `@Flow.model` functions produce generated classes with stable module import paths, so JSON/config-style round trips can work across processes when the defining module is importable.

Only importable module-level `@Flow.model` functions are durable across JSON/config-style round trips. Local, nested, and `__main__` definitions are best-effort for pickle/cloudpickle object transport, not stable config artifacts.

### Cache And Graph Identity
Public `cache_key(...)` remains structural by default.

Generated and bound models also support effective identity for model evaluations. Effective identity describes the parts of an invocation that actually affect the result, so unused ambient `FlowContext` fields do not split built-in cache entries or graph nodes.

The built-in `MemoryCacheEvaluator` now uses effective identity for generated models. Custom evaluators can use the same public API if they want generated-model-aware keys:
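A plain-Python sketch of what a generated-model-aware key might look like (hypothetical `cache_key` helper, not ccflow's actual API): when a model declares the context fields it consumes, the key is the projection onto those fields; an opaque model falls back to the full, structural context.

```python
# Hypothetical evaluator-side key function (NOT the ccflow API).
def cache_key(model_consumes, context: dict):
    if model_consumes is None:  # opaque model: structural key over all fields
        items = context.items()
    else:                       # generated-model-aware: effective projection
        items = ((k, v) for k, v in context.items() if k in model_consumes)
    return frozenset(items)

a = {"day": "2024-01-02", "request_id": "a"}
b = {"day": "2024-01-02", "request_id": "b"}
assert cache_key({"day"}, a) == cache_key({"day"}, b)  # effective: reuse
assert cache_key(None, a) != cache_key(None, b)        # structural: split
```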
The default remains structural. Ordinary `CallableModel` classes continue to use structural identity unless they explicitly opt into the internal identity hook. This is intentional: arbitrary handwritten `CallableModel.__call__` implementations can inspect context in ways ccflow cannot infer safely. Opaque evaluators also use structural identity, since they could access arbitrary fields on the context that differ from the signature of a given `@Flow.model`-decorated function.

### Why Effective Identity Matters
The existing structural key can over-split cache entries for ordinary `CallableModel`s when callers pass a richer context than the model semantically uses. With structural context identity, adding or changing an ambient field for one branch of a DAG can invalidate cache reuse in another branch that does not use that field.

Minimal ordinary-`CallableModel` example:

Handwritten `CallableModel`s can opt into effective identity by overriding the internal identity hook and returning only the semantic fields that affect the result. With that opt-in, the built-in cache can reuse results across `DayRequestContext(day=..., request_id="a")` and `DayRequestContext(day=..., request_id="b")` because the model has explicitly declared that only `day` affects the result.

This is not the default for handwritten `CallableModel`s because ccflow cannot safely infer what arbitrary Python code uses. A normal `__call__` implementation might inspect `type(context)`, call `context.model_dump()`, read subclass-only fields, or otherwise depend on the full runtime context object. Automatically projecting context for every handwritten model would risk incorrect cache hits for existing users. `@Flow.model` improves this case because consumed contextual inputs are explicit via `FromContext[...]`, so generated models can safely ignore unused ambient fields in effective cache/graph identity:
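The over-splitting and the opt-in can be simulated in plain Python. This mirrors the `DayRequestContext` fields from the example above but is not ccflow code; `DayModel`, `structural_key`, and `effective_key` are hypothetical names.

```python
# Plain-Python simulation (NOT the ccflow API) of structural vs. effective
# identity: a structural key splits on every context field, while an
# opted-in effective key projects onto the fields the model consumes.
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DayRequestContext:
    day: str
    request_id: str

class DayModel:
    consumes = ("day",)  # the model declares that only `day` affects results

    def structural_key(self, ctx):
        return tuple(sorted(asdict(ctx).items()))

    def effective_key(self, ctx):
        d = asdict(ctx)
        return tuple(sorted((k, d[k]) for k in self.consumes))

m = DayModel()
a = DayRequestContext(day="2024-01-02", request_id="a")
b = DayRequestContext(day="2024-01-02", request_id="b")
assert m.structural_key(a) != m.structural_key(b)  # over-split: no reuse
assert m.effective_key(a) == m.effective_key(b)    # effective: cache reuse
```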
### Compatibility

The PR is additive:

- Existing `CallableModel` implementations continue to work.
- `Flow.call` behavior is preserved.
- `cache_key(...)` remains structural unless `effective=True` is explicitly requested.
- Ordinary `CallableModel` cache keys and graph keys remain structural.
- `FlowContext` is an open runtime carrier for generated models, but a declared `context_type=...` can still be used to validate `FromContext[...]` fields against an existing nominal context.

### Test Coverage
The test suite covers:

- `with_context` field and patch transforms,
- `cache_key(..., effective=True)` behavior,
- `CallableModel` compatibility.