
Integrate litellm for multi-provider LLM support #168

Open

KylinMountain wants to merge 4 commits into VectifyAI:main from KylinMountain:feat/litellm

Conversation

@KylinMountain
Collaborator

Summary

  • Replace openai + tiktoken with litellm for unified multi-provider LLM support (OpenAI, Azure, Gemini, etc.)
  • Rename ChatGPT_API → llm_complete, ChatGPT_API_async → allm_complete (using litellm.acompletion for native async), ChatGPT_API_stream → llm_complete_stream
  • Add backward compatibility: CHATGPT_API_KEY auto-aliased to OPENAI_API_KEY
  • Remove tiktoken dependency; use litellm.token_counter instead

Test plan

  • Set OPENAI_API_KEY and run existing PDF indexing — should behave identically to before
  • Set CHATGPT_API_KEY (legacy) — should still work via alias
  • Try a non-OpenAI provider (e.g. gemini/gemini-pro) by setting the appropriate env var
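
The third test-plan bullet could be exercised along these lines; this is a sketch, assuming litellm's usual convention of provider-prefixed model strings and a GEMINI_API_KEY env var for the Gemini provider (the key value is a placeholder).

```shell
# Hypothetical setup for the non-OpenAI test case from the test plan.
# litellm routes by model prefix, so "gemini/gemini-pro" reads GEMINI_API_KEY.
export GEMINI_API_KEY="your-key-here"
python -c 'import litellm; r = litellm.completion(model="gemini/gemini-pro", messages=[{"role": "user", "content": "ping"}]); print(r.choices[0].message.content)'
```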

🤖 Generated with Claude Code

@claude

claude bot commented Mar 16, 2026

Claude Code Review

This repository is configured for manual code reviews. Comment @claude review to trigger a review.

@KylinMountain
Collaborator Author

@claude review

@claude

claude bot commented Mar 16, 2026

⚠️ Code review skipped — your organization's overage spend limit has been reached.

Code review is billed via overage credits. To resume reviews, an organization admin can raise the monthly limit in Settings → Usage.

Once credits are available, comment @claude review on this pull request to trigger a review.
