I am trying to use models via OpenRouter with Android Studio. The current Android Studio beta supports local models, so I thought it might work through toolbridge.
Below are my configuration and the traffic log (excluding the actual prompt). Android Studio recognizes the setup and models can be selected, but responses come back empty even though tokens are consumed. Any ideas?
```
> openai-proxy@1.0.0 start
> node --no-deprecation index.js

Note: Using BACKEND_LLM_BASE_URL (https://openrouter.ai/api) for Ollama API URL
Backend Mode: OPENAI
Backend URL: https://openrouter.ai/api (used for both OpenAI and Ollama formats)
Ollama Default Context Length (for synthetic /api/show): 131072
Configuration loaded and validated successfully.
Backend Mode: OPENAI
Configured Host (PROXY_HOST): 0.0.0.0
Configured Port (PROXY_PORT): 11434
Backend Base URL: https://openrouter.ai/api
Chat Completions Endpoint: https://openrouter.ai/api/v1/chat/completions
OpenAI API Key: [CONFIGURED]
Ollama API URL: https://openrouter.ai/api
Max Stream Buffer Size: 1048576 bytes
Stream Connection Timeout: 120 seconds
Debug Mode: Enabled
HTTP Referer: Android Studio
X-Title: toolbridge

╭─────────────────────────────────────────────────╮
│ 🚀 OpenAI Tool Proxy Server Started             │
├─────────────────────────────────────────────────┤
│ ➤ Listening on: http://localhost:11434          │
│   Binding address: 0.0.0.0                      │
│ ➤ Proxying to: https://openrouter.ai/api        │
├─────────────────────────────────────────────────┤
│   Available at:                                 │
│   • http://localhost:11434                      │
│   • http://192.168.178.20:11434                 │
╰─────────────────────────────────────────────────╯
```
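
For reference, the proxy reads its settings from environment variables; the startup output names `BACKEND_LLM_BASE_URL`, `PROXY_HOST`, and `PROXY_PORT` explicitly. A `.env` along these lines reproduces the configuration above (the API-key variable name is a guess on my part, since the log only prints `OpenAI API Key: [CONFIGURED]`):

```
# Named explicitly in the startup log above
BACKEND_LLM_BASE_URL=https://openrouter.ai/api
PROXY_HOST=0.0.0.0
PROXY_PORT=11434

# Guess: the log only shows "OpenAI API Key: [CONFIGURED]",
# so the real variable name may differ
OPENAI_API_KEY=sk-or-v1-...
```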
```
➤ 2025-10-22T15:13:08.762Z POST /v1/chat/completions CHAT COMPLETIONS
  stream: enabled

--- New Chat Completions Request ---
[CLIENT REQUEST] Headers: {
  "authorization": "Bearer",
  "user-agent": "OpenAIClientAsyncImpl/Java unknown",
  "x-stainless-arch": "x64",
  "x-stainless-lang": "java",
  "x-stainless-os": "Windows",
  "x-stainless-os-version": "10.0",
  "x-stainless-package-version": "unknown",
  "x-stainless-retry-count": "0",
  "x-stainless-runtime": "JRE",
  "x-stainless-runtime-version": "21.0.8",
  "x-stainless-read-timeout": "600",
  "x-stainless-timeout": "600",
  "content-type": "application/json",
  "content-length": "280133",
  "host": "localhost:11434",
  "connection": "Keep-Alive",
  "accept-encoding": "gzip"
}
[CLIENT REQUEST] Body: {
  "messages": [
[..]
```
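
To take Android Studio out of the equation, the same request can be replayed directly against the proxy. A minimal sketch (Node 18+ with global `fetch`, run as an ES module; the model name is a placeholder):

```
// Sketch: replay a minimal streaming request against the proxy,
// independent of Android Studio. Run with: node repro.mjs
const res = await fetch("http://localhost:11434/v1/chat/completions", {
  method: "POST",
  headers: {
    "content-type": "application/json",
    // Same shape as the Android Studio request above: an empty Bearer token
    authorization: "Bearer",
  },
  body: JSON.stringify({
    model: "some/model", // placeholder; use the model selected in Android Studio
    stream: true,
    messages: [{ role: "user", content: "Say hi" }],
  }),
});

// Dump the raw SSE stream so keep-alive comment lines are visible too
for await (const chunk of res.body) {
  process.stdout.write(Buffer.from(chunk).toString("utf8"));
}
```

For the original Android Studio request, the stream processing then logged: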
```
[STREAM PROCESSOR] Processing chunk (546 bytes)
[STREAM PARSER] Incomplete JSON, waiting for more data
[STREAM PROCESSOR] Processing chunk (613 bytes)
[STREAM PARSER] Incomplete JSON, waiting for more data
[STREAM PROCESSOR] Processing chunk (611 bytes)
[STREAM PARSER] Incomplete JSON, waiting for more data
[STREAM PROCESSOR] Processing chunk (359 bytes)
[STREAM PARSER] Incomplete JSON, waiting for more data
[STREAM PROCESSOR] Processing chunk (25 bytes)
[STREAM PROCESSOR] Received non-SSE line: : OPENROUTER PROCESSING
[STREAM PARSER] Incomplete JSON, waiting for more data
[STREAM PROCESSOR] Processing chunk (573 bytes)
[STREAM PARSER] Incomplete JSON, waiting for more data
[STREAM PROCESSOR] Processing chunk (496 bytes)
[STREAM PARSER] Incomplete JSON, waiting for more data
[STREAM PROCESSOR] Processing chunk (14 bytes)
[STREAM PROCESSOR] Received [DONE] signal
[STREAM PROCESSOR] Processing [DONE] signal
[STREAM PARSER] Discarding incomplete JSON at end of stream: : OPENROUTER PROCESSING: OPENROUTER PROCESSING: OP...
[STREAM PROCESSOR] OpenAI backend stream ended normally.
⮑ 200 OK CHAT COMPLETIONS (stream) in 5468ms
[STREAM PROCESSOR] Client stream closed.
```
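
For what it's worth, the `: OPENROUTER PROCESSING` lines are SSE comments: the SSE spec says any line beginning with a colon is a keep-alive that carries no data. The `Discarding incomplete JSON at end of stream: : OPENROUTER PROCESSING: OPENROUTER PROCESSING...` message above suggests the parser is buffering those comment lines as if they were JSON. A line filter along these lines would drop them before parsing (a minimal sketch, not toolbridge's actual code):

```
// Sketch (not toolbridge's actual code): per the SSE spec, any line that
// starts with ":" is a comment / keep-alive and carries no data, so it
// should be dropped rather than fed to the JSON parser.
function extractDataPayloads(text) {
  return text
    .split(/\r?\n/)
    .filter((line) => line.startsWith("data:")) // skips ": OPENROUTER PROCESSING"
    .map((line) => line.slice("data:".length).trim())
    .filter((payload) => payload.length > 0); // caller still handles "[DONE]"
}

// The keep-alive from the log is filtered out, the real event survives:
console.log(extractDataPayloads(': OPENROUTER PROCESSING\ndata: {"id":"x"}\n\n'));
// => [ '{"id":"x"}' ]
```

If the proxy only forwarded `data:` payloads and ignored comment lines, the empty-response symptom might disappear, but that is just my guess from the log.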