docs: add Atomic Chat to Local LLMs guide #504

Open

yanalialiuk wants to merge 1 commit into OpenHands:main from yanalialiuk:docs/atomic-chat-local-llms

Conversation

@yanalialiuk

Summary

Documents Atomic Chat as another OpenAI-compatible local backend for OpenHands, alongside LM Studio, Ollama, vLLM, and SGLang.

Atomic Chat runs a local inference stack on the desktop and exposes a single OpenAI-compatible HTTP API (default http://127.0.0.1:1337/v1). OpenHands already supports this pattern via its advanced LLM settings; this PR makes discovery and copy/paste configuration explicit in the Local LLMs guide.
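
As a quick sanity check of that pattern, a minimal sketch like the one below lists the model ids the endpoint reports. It assumes the default port 1337 from this PR and uses the `requests` library; the ids returned depend on which models your Atomic Chat install has loaded:

```python
import requests

# Default Atomic Chat local endpoint per this PR; adjust if you changed the port.
BASE_URL = "http://127.0.0.1:1337/v1"

# OpenAI-compatible servers expose their models at GET /v1/models.
resp = requests.get(f"{BASE_URL}/models", timeout=10)
resp.raise_for_status()

# The response mirrors the OpenAI schema: {"data": [{"id": ...}, ...]}.
for model in resp.json().get("data", []):
    print(model["id"])
```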

Changes

  • Mention Atomic Chat in the Advanced: Alternative LLM Backends intro.
  • Add a Create an OpenAI-Compatible Endpoint with Atomic Chat section: install steps, enabling the local API, large-context guidance, discovering the model id via GET /v1/models, Docker (host.docker.internal:1337/v1) vs. same-host base URLs, the placeholder API key, and troubleshooting.
  • Extend Configure OpenHands (Alternative Backends) with the Atomic Chat model string, port 1337, and an API key note (grouped with Ollama, which also uses a placeholder key); see the sketch after this list.
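
For the configuration side, a minimal sketch using the official `openai` Python client against the local endpoint might look like the following. The model id (`example-model`) is a hypothetical placeholder you would replace with an id reported by GET /v1/models, and the API key is the placeholder noted above, since local backends typically do not validate it:

```python
from openai import OpenAI

# Same-host setup; from inside a Docker container, per this PR,
# use "http://host.docker.internal:1337/v1" instead.
client = OpenAI(
    base_url="http://127.0.0.1:1337/v1",
    api_key="local-placeholder",  # placeholder key; the local server ignores it
)

# "example-model" is hypothetical: substitute an id from GET /v1/models.
completion = client.chat.completions.create(
    model="example-model",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(completion.choices[0].message.content)
```

The same base URL, model string, and placeholder key go into the OpenHands advanced LLM settings described in the guide.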

References

Notes for reviewers

  • No new screenshots (the Atomic Chat UI changes across versions); steps stay generic and link to the upstream README.
  • Anchors in the new section point to existing headings on the same page.

Document OpenAI-compatible local API (default port 1337), model discovery
via /v1/models, OpenHands UI settings for Docker and same-host setups,
and troubleshooting aligned with other backends on this page.
@yanalialiuk requested a review from mamoodi as a code owner on May 13, 2026 at 12:36.