Code review findings addressed:
- Move per-call imports of _openai_embedding_common to module level (they
  previously ran in the hot path of every embedding call).
- Extract build_embedding_step_kwargs into _openai_embedding_common so that
sync and async OpenAI handlers each become ~10 lines instead of ~50, and
LiteLLM reuses the same kwargs assembly.
- Drop LiteLLM's local _parse_embedding_response and
_get_embedding_model_parameters; both now delegate to the shared helpers
(LiteLLM-specific timeout/api_base/api_version/cost/metadata are layered
on top of the common kwargs).
- Type Bedrock _parse_embedding_output return as
Tuple[Union[List[float], List[List[float]]], int, int] instead of bare
tuple.
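A minimal sketch of the import hoist described in the first bullet. The stdlib `json` module stands in for `_openai_embedding_common`, and the `embed_*` functions are illustrative, not the actual handlers:

```python
# Before: the import statement executed on every call. Python caches modules
# in sys.modules, so re-import is cheap but not free -- the lookup and name
# binding still run each time, in the hot path.
def embed_before(texts):
    import json  # stand-in for the per-call _openai_embedding_common import
    return json.dumps(texts)

# After: the import is resolved once, at module load.
import json

def embed_after(texts):
    return json.dumps(texts)
```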
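The extraction and layering in the second and third bullets might look roughly like the following. `build_embedding_step_kwargs` is named in the commit, but this signature, the key names, and `build_litellm_kwargs` are hypothetical stand-ins:

```python
from typing import Any, Dict, List, Optional

def build_embedding_step_kwargs(
    model: str,
    inputs: List[str],
    dimensions: Optional[int] = None,
) -> Dict[str, Any]:
    """Assemble the request kwargs shared by the sync and async OpenAI
    handlers; each handler then shrinks to a thin wrapper around this."""
    kwargs: Dict[str, Any] = {"model": model, "input": inputs}
    if dimensions is not None:
        kwargs["dimensions"] = dimensions
    return kwargs

def build_litellm_kwargs(
    model: str,
    inputs: List[str],
    api_base: Optional[str] = None,
    timeout: Optional[float] = None,
) -> Dict[str, Any]:
    """LiteLLM reuses the common assembly and layers its provider-specific
    keys (timeout, api_base, etc.) on top, instead of duplicating it."""
    kwargs = build_embedding_step_kwargs(model, inputs)
    if api_base is not None:
        kwargs["api_base"] = api_base
    if timeout is not None:
        kwargs["timeout"] = timeout
    return kwargs
```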
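The Bedrock annotation change is only about the return type; the body below is a hypothetical placeholder showing why the precise `Tuple[...]` beats a bare `tuple` (the union covers single-input vs. batched embeddings, and the two ints are token counts):

```python
from typing import Dict, List, Tuple, Union

def _parse_embedding_output(
    payload: Dict,
) -> Tuple[Union[List[float], List[List[float]]], int, int]:
    # Returns (embedding or list of embeddings, input tokens, output tokens).
    # With a bare `tuple`, callers get no checking on element types or arity.
    return payload["embedding"], payload["input_tokens"], payload["output_tokens"]
```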
Net: -34 lines across the 5 touched source files. Tests unchanged, all
77 embedding tests + 448 lib tests still green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>