```python
oneping.reply('Give me a ping, Vasily. One ping only, please.', provider='anthropic')
```
This is a Python library for querying LLM providers such as OpenAI or Anthropic, as well as local models. The main goal is to create an abstraction layer that makes switching between them seamless. Currently the following external providers are supported: `openai`, `anthropic`, `google`, `xai`, `fireworks`, `groq`, `deepseek`, and `azure`. You can also use local providers such as `llama-cpp` (llama.cpp), `tei` (text-embeddings-inference), `vllm` (vLLM), and `oneping` (oneping router).
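The "abstraction layer" idea can be sketched in plain Python, independent of oneping itself. The backends below are stand-ins, not oneping's internals; only the call shape `reply(prompt, provider=...)` mirrors the library's interface shown above.

```python
# Illustrative sketch only -- not oneping's actual implementation.
# The point: one call signature dispatches to interchangeable backends,
# so switching providers is just a keyword change.

def _echo_backend(name):
    # Stand-in for a real provider client (openai, anthropic, ...).
    def call(prompt):
        return f'[{name}] {prompt}'
    return call

PROVIDERS = {
    'openai': _echo_backend('openai'),
    'anthropic': _echo_backend('anthropic'),
    'local': _echo_backend('local'),
}

def reply(prompt, provider='local'):
    # Look up the backend by name and forward the prompt unchanged.
    return PROVIDERS[provider](prompt)

print(reply('One ping only.', provider='anthropic'))  # -> [anthropic] One ping only.
```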
There is a `Chat` interface that automatically tracks the message history. Kind of departing from the "one ping" notion, but oh well. Additionally, there is a `textual` powered console interface and a `fasthtml` powered web interface. Both are components that can be embedded in other applications.
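What "tracking the message history" amounts to can be sketched as follows. This is a minimal illustration of the pattern, assuming an OpenAI-style message format; the class and call style here are hypothetical, not oneping's actual `Chat` API.

```python
# Minimal sketch of a history-tracking chat wrapper (illustrative only).
class Chat:
    def __init__(self, system=None):
        self.history = []
        if system is not None:
            self.history.append({'role': 'system', 'content': system})

    def __call__(self, prompt):
        # Record the user turn, get a reply, record the assistant turn.
        self.history.append({'role': 'user', 'content': prompt})
        response = f'echo: {prompt}'  # a real Chat would query a provider here
        self.history.append({'role': 'assistant', 'content': response})
        return response

chat = Chat(system='Be brief.')
chat('Give me a ping.')
print(len(chat.history))  # -> 3 (system + user + assistant)
```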
Requesting the `default` provider will target `localhost` and use an OpenAI-compatible API, as served by `llama.cpp` or `llama-cpp-python`. The various native libraries are soft dependencies: the library can still function, at least partially, without any or all of them. The native packages for these providers are: `openai`, `anthropic`, `google`, `xai`, `fireworks-ai`, `groq`, and `deepseek`.
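For reference, an OpenAI-compatible server exposes an endpoint like `POST /v1/chat/completions`, and a request body has roughly the shape below. The port and model name are placeholders, not defaults guaranteed by oneping.

```python
import json

# Request shape for an OpenAI-compatible chat endpoint, as served by
# llama.cpp or llama-cpp-python on localhost (port/model are placeholders).
url = 'http://localhost:8000/v1/chat/completions'
payload = {
    'model': 'local-model',
    'messages': [
        {'role': 'user', 'content': 'Give me a ping, Vasily. One ping only, please.'},
    ],
    'max_tokens': 128,
}
print(json.dumps(payload, indent=2))
```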
## Installation
For standard usage, install with:

```bash
pip install oneping
```
To install the native dependencies for the major providers, add `"[native]"` after `oneping` in the command above. The same goes for the chat interface dependencies with `"[chat]"`.
The easiest way to handle authentication is to set an API key environment variable such as: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`, `XAI_API_KEY`, etc. You can also pass the `api_key` argument to any of the functions directly.
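A typical lookup order is: an explicit `api_key` argument wins, otherwise the provider's environment variable is consulted. The sketch below illustrates that pattern; it is an assumption about precedence, not oneping's verified logic, and note that real variable names do not always follow a uniform scheme (e.g. `google` uses `GEMINI_API_KEY`).

```python
import os

# Sketch of explicit-argument-over-environment key resolution
# (illustrative; oneping's exact precedence may differ).
def resolve_api_key(provider, api_key=None):
    if api_key is not None:
        return api_key
    env_var = f'{provider.upper()}_API_KEY'  # naive mapping, see caveat above
    key = os.environ.get(env_var)
    if key is None:
        raise KeyError(f'set {env_var} or pass api_key=')
    return key

os.environ['OPENAI_API_KEY'] = 'sk-test'  # for demonstration only
print(resolve_api_key('openai'))                        # -> sk-test
print(resolve_api_key('openai', api_key='override'))    # -> override
```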
You can add your own providers by creating a TOML file called `providers.toml` in the `~/.config/oneping` directory. Please consult the provider definitions in `oneping/providers.toml` from this repository for the available options.
## Server
The `server` module includes a simple function to start a `llama-cpp-python` server on the fly (`oneping.server.start` in Python or `oneping server` from the command line).