Fixed an inconsistency between compile() and forward() for PromptedTemplatedGenerator (issue #363).
Added resume_step, which lets pipelines resume execution from any operator, a major improvement for long-running pipelines.
Thanks to @SunnyHaze.
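A minimal sketch of the resume-from-step idea, not DataFlow's actual API: the Pipeline class, operator list, and resume_step signature below are illustrative assumptions, showing only how resuming skips operators that already completed in a previous run.

```python
# Illustrative sketch only; DataFlow's real resume_step API may differ.
class Pipeline:
    def __init__(self, operators):
        self.operators = operators  # ordered list of (name, fn) pairs

    def run(self, data, resume_step=0):
        """Run operators in order, skipping those before resume_step."""
        for index, (name, fn) in enumerate(self.operators):
            if index < resume_step:
                continue  # hypothetically already completed in a prior run
            data = fn(data)
        return data

pipeline = Pipeline([
    ("clean", lambda d: d.strip()),
    ("upper", lambda d: d.upper()),
    ("tag", lambda d: f"[OK] {d}"),
])

# A fresh run executes all three operators.
print(pipeline.run("  hello "))              # [OK] HELLO
# Resuming from step 2 skips "clean" and "upper".
print(pipeline.run("HELLO", resume_step=2))  # [OK] HELLO
```

For long pipelines, the payoff is that a failure late in the run does not force re-executing every earlier operator.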
🤖 Expanded Serving Capabilities: Local VLM Serving
Introduced local VLM (vision-language model) serving with several new features, enabling more complete local inference workflows.
Thanks to @fatty-belly.
🧪 Eval Pipeline Upgrade
Added a new EvalPipeline with more robust and flexible evaluation support.
Thanks to @YalinFeng01.
🌐 Google VertexAI Serving Debug & Improvements
Improved debugging logic and reliability for Google VertexAI model serving.
Thanks to @wongzhenhao.
🈶 Chinese Language Support for N-gram Filters
Both the reasoning N-gram Filter and general N-gram Filter now support Chinese, improving evaluation accuracy for Chinese LLM tasks.
Thanks to @zzy1127 and @scuuy.
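A hedged sketch of why Chinese needs special handling in an N-gram filter, not DataFlow's implementation: because Chinese is written without spaces, whitespace tokenization collapses a sentence into one "word", so the sketch falls back to character-level n-grams for CJK text. The function names and the 0.2 threshold are illustrative assumptions.

```python
# Illustrative sketch, not DataFlow's actual N-gram Filter.
def contains_cjk(text):
    """True if the text contains CJK Unified Ideographs (U+4E00-U+9FFF)."""
    return any('\u4e00' <= ch <= '\u9fff' for ch in text)

def ngram_repetition_ratio(text, n=3):
    """Fraction of n-grams that are duplicates (0.0 = no repetition)."""
    # Character-level units for Chinese, whitespace tokens otherwise.
    units = list(text) if contains_cjk(text) else text.split()
    ngrams = [tuple(units[i:i + n]) for i in range(len(units) - n + 1)]
    if not ngrams:
        return 0.0
    return 1.0 - len(set(ngrams)) / len(ngrams)

def passes_filter(text, n=3, max_ratio=0.2):
    """Keep a sample only if its n-gram repetition is below the threshold."""
    return ngram_repetition_ratio(text, n) <= max_ratio

print(passes_filter("the cat sat on the mat"))  # True
print(passes_filter("你好你好你好你好你好你好"))   # False: highly repetitive
```

Without the character-level fallback, the repetitive Chinese string above would split into a single token, yield zero trigrams, and pass the filter unchecked.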
🧩 Additional Improvements
🔧 Key Bug Fixes & Behavior Improvements
Fixed Linux compatibility issues for text2vecsql. Thanks to @yaodongwen.
Fixed incorrect invocation of GetFilterFinalScorerPrompt. Thanks to @DKAMX.
Removed redundant operators and improved description logic. Thanks to @SunnyHaze.
Fixed pipeline.compile() incorrectly skipping operators without llm_serving. Thanks to @fatty-belly.
📚 Prompt Registry Added
Added a prompt registry mechanism across core_text operators, making prompt management more systematic.
Thanks to @scuuy.
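A hypothetical sketch of the registry pattern the prompt registry describes; the names PROMPT_REGISTRY, register_prompt, and SummarizePrompt are illustrative, not DataFlow's actual identifiers. The point is that operators look prompts up by name instead of hard-coding templates.

```python
# Hypothetical registry pattern, not DataFlow's real API.
PROMPT_REGISTRY = {}

def register_prompt(name):
    """Decorator that registers a prompt template class under a name."""
    def decorator(cls):
        PROMPT_REGISTRY[name] = cls
        return cls
    return decorator

@register_prompt("summarize")
class SummarizePrompt:
    template = "Summarize the following text:\n{text}"

    def build(self, **kwargs):
        return self.template.format(**kwargs)

def get_prompt(name):
    """Resolve a prompt by name, decoupling operators from templates."""
    return PROMPT_REGISTRY[name]()

prompt = get_prompt("summarize")
print(prompt.build(text="DataFlow release notes"))
```

Centralizing templates this way makes prompts discoverable and swappable across operators, which is the "more systematic" management the entry refers to.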
🔍 Structural and Codebase Improvements
Minor fixes across Serving, VertexAI integration, and compile routines.