
feat: rehydrate image context in multi-turn conversations for vision models#334

Open
nap-liu wants to merge 1 commit into dataelement:main from nap-liu:pr/image-context-rehydration

Conversation


@nap-liu nap-liu commented Apr 8, 2026

Summary

  • New `image_context.py`: scans conversation history for `[file:xxx.jpg]` markers, reads the referenced files from disk, and injects them as base64 image attachments
  • Limits: at most 3 images per conversation, at most 5 MB each
  • Integrated into `websocket.py`, gated on `supports_vision`
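The rehydration step described above can be sketched roughly as follows. This is a hypothetical illustration of the approach, not the PR's actual code: the function name `rehydrate_images`, the `upload_dir` parameter, the marker regex, and the returned dict shape are all assumptions; only the marker format and the 3-image / 5 MB limits come from the summary.

```python
# Hypothetical sketch of the rehydration pass; names and shapes are assumed.
import base64
import os
import re

MAX_IMAGES = 3                 # per-conversation cap from the PR summary
MAX_BYTES = 5 * 1024 * 1024    # 5 MB per image
MARKER = re.compile(r"\[file:([\w.\-]+\.(?:jpg|jpeg|png|webp))\]")

def rehydrate_images(history, upload_dir):
    """Scan prior turns for [file:xxx.jpg] markers and return base64 payloads."""
    images = []
    for message in history:
        for name in MARKER.findall(message.get("content", "")):
            if len(images) >= MAX_IMAGES:
                return images
            path = os.path.join(upload_dir, os.path.basename(name))
            # Skip markers whose files are missing or over the size limit.
            if not os.path.isfile(path) or os.path.getsize(path) > MAX_BYTES:
                continue
            with open(path, "rb") as f:
                images.append({
                    "name": name,
                    "b64": base64.b64encode(f.read()).decode("ascii"),
                })
    return images
```

Dropping oversized or missing files silently (rather than erroring) keeps a stale marker in old history from breaking the whole turn.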

Test plan

  • DingTalk E2E: Re-hydrated 2-3 images confirmed in logs

feat: rehydrate image context in multi-turn conversations for vision models

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

nap-liu commented Apr 14, 2026

Now that #392 (DingTalk media support) is merged without image context rehydration, this PR is the remaining piece needed for multi-turn conversations with vision models — images sent in earlier turns are re-attached to subsequent LLM calls so the model retains visual context.

Verified a clean merge against current upstream/main. Could we get a review here?
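The `supports_vision` gate mentioned in the summary might look something like the sketch below. `prepare_llm_messages`, the `rehydrate` callable, and the OpenAI-style `image_url` content parts are all assumptions for illustration; the PR's real hook lives in `websocket.py` and its message shape may differ.

```python
# Hypothetical gating sketch; function names and message shape are assumed.
def prepare_llm_messages(model, history, rehydrate):
    """Re-attach earlier images to the outgoing call only for vision models."""
    messages = list(history)
    if getattr(model, "supports_vision", False):
        for img in rehydrate(history):
            messages.append({
                "role": "user",
                "content": [{
                    "type": "image_url",
                    "image_url": {"url": "data:image/jpeg;base64," + img["b64"]},
                }],
            })
    return messages
```

Gating on `supports_vision` avoids sending image parts to text-only models, which would otherwise reject or ignore them.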

