
Commit 79ef56c

pretty stable after config change

1 parent: b2363cc

2 files changed: 19 additions & 15 deletions

README.md (15 additions & 11 deletions)

@@ -6,11 +6,11 @@ oneping.reply('Give me a ping, Vasily. One ping only, please.', provider='anthropic')
 
 ![One ping only, please.](demo/oneping.png)
 
-This is a Python library for querying LLM providers such as OpenAI or Anthropic, as well as local models. The main goal is to create an abstraction layer that makes switching between them seamless. Currently the following providers are supported: `openai`, `anthropic`, `fireworks`, `groq`, `deepseek`, and `local` (local models).
+This is a Python library for querying LLM providers such as OpenAI or Anthropic, as well as local models. The main goal is to create an abstraction layer that makes switching between them seamless. Currently the following external providers are supported: `openai`, `anthropic`, `google`, `xai`, `fireworks`, `groq`, `deepseek`, and `azure`. You can also use local providers such as `llama-cpp` (llama.cpp), `tei` (text-embedding-inference), `vllm` (vLLM), and `oneping` (the oneping router).
 
-There is also a `Chat` interface that automatically tracks the message history. Kind of departing from the "one ping" notion, but oh well. Additionally, there is a `textual` powered console interface and a `fasthtml` powered web interface. Both are components that can be embedded in other applications.
+There is a `Chat` interface that automatically tracks the message history. Kind of departing from the "one ping" notion, but oh well. Additionally, there is a `textual` powered console interface and a `fasthtml` powered web interface. Both are components that can be embedded in other applications.
 
-Requesting the `local` provider will target `localhost` and use an OpenAI-compatible API as in `llama.cpp` or `llama-cpp-python`. The various native libraries are soft dependencies and the library can still partially function with or without any or all of them. The native packages for these providers are: `openai`, `anthropic`, `fireworks-ai`, `groq`, and `deepseek`.
+Requesting the `default` provider will target `localhost` and use an OpenAI-compatible API as in `llama.cpp` or `llama-cpp-python`. The various native libraries are soft dependencies, and the library can still partially function without any or all of them. The native packages for these providers are: `openai`, `anthropic`, `google`, `xai`, `fireworks-ai`, `groq`, and `deepseek`.
 
 ## Installation
 
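The expanded lineup can be exercised with the `reply` call already shown at the top of the README. A minimal sketch using only provider names from the hunk above (it assumes the corresponding API keys are set, that local backends are selected the same way via `provider=`, and that one such backend is running for the last call):

```python
import oneping

query = 'Give me a ping, Vasily. One ping only, please.'

# hosted providers from the updated list (API keys assumed to be set)
for provider in ['openai', 'anthropic', 'google', 'xai']:
    print(provider, oneping.reply(query, provider=provider))

# a local backend, e.g. llama.cpp serving an OpenAI-compatible API
print(oneping.reply(query, provider='llama-cpp'))
```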

@@ -20,9 +20,9 @@ For standard usage, install with:
 pip install oneping
 ```
 
-To install the native provider dependencies add `"[native]"` after `oneping` in the command above. The same goes for the chat interface dependencies with `"[chat]"`.
+To install the native dependencies for the major providers, add `"[native]"` after `oneping` in the command above. The same goes for the chat interface dependencies with `"[chat]"`.
 
-The easiest way to handle authentication is to set an API key environment variable such as: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `FIREWORKS_API_KEY`, etc. You can also pass the `api_key` argument to any of the functions directly.
+The easiest way to handle authentication is to set an API key environment variable such as `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`, `XAI_API_KEY`, etc. You can also pass the `api_key` argument to any of the functions directly.
 
 ## Library Usage
 
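Both authentication routes mentioned in this hunk, sketched below (the key values are placeholders, not real credentials):

```python
import os
import oneping

# route 1: an API key environment variable, normally set in your shell
os.environ['ANTHROPIC_API_KEY'] = '<your-key>'  # placeholder for illustration
print(oneping.reply('One ping only, please.', provider='anthropic'))

# route 2: pass the key directly to the call
print(oneping.reply('One ping only, please.', provider='openai', api_key='<your-key>'))
```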
@@ -34,15 +34,15 @@ response = oneping.reply(query, provider='anthropic')
 The `reply` function accepts a number of arguments, including the following (some have per-provider defaults):
 
 - `query` (required): The query to send to the LLM
-- `provider` = `local`: The provider to use: `openai`, `anthropic`, `fireworks`, or `local`
+- `provider` = `local`: The provider to use: `openai`, `anthropic`, `google`, etc.
 - `system` = `None`: The system prompt to use (not required, but recommended)
 - `prefill` = `None`: Start the "assistant" response with a string (Anthropic doesn't like newlines in this)
 - `model` = `None`: Indicate the desired model for the provider (provider default)
-- `max_tokens` = `1024`: The maximum number of tokens to return
-- `history` = `None`: List of prior messages or `True` to request full history as return value
-- `native` = `False`: Use the native provider libraries
-- `url` = `None`: Override the default URL for the provider (provider default)
-- `port` = `8000`: Which port to use for local or custom provider
+- `native` = `False`: Use the native provider libraries when available
+- `history` = `None`: List of prior messages in the conversation history
+- `max_tokens` = `None`: The maximum number of tokens to return (provider default)
+- `base_url` = `None`: Override the default base URL for the provider (provider default)
+- `path` = `None`: Override the default endpoint path for the provider (provider default)
 - `api_key` = `None`: The API key to use for non-local providers
 
 For example, to use the OpenAI API with a custom `system` prompt:
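The README's own OpenAI example follows that line in the file; as a supplementary sketch of the renamed options in this hunk (all values are illustrative):

```python
import oneping

# the renamed/reworked options: native, history, max_tokens, base_url, path
response = oneping.reply(
    'Give me a ping, Vasily. One ping only, please.',
    provider='anthropic',
    system='Respond with one ping only.',  # illustrative system prompt
    max_tokens=512,                        # otherwise falls back to the provider default
    native=True,                           # use the anthropic package if installed
)

# the same query against a custom OpenAI-compatible endpoint (URL is made up)
response = oneping.reply(
    'Give me a ping, Vasily. One ping only, please.',
    base_url='http://localhost:8080',
)
```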
@@ -93,6 +93,10 @@ There is also a `textual` powered console interface and a `fasthtml` powered web
 <img src="demo/fasthtml.png" alt="FastHTML Chat" width="49%">
 </p>
 
+## Custom Providers
+
+You can add your own providers by creating a TOML file called `providers.toml` in the `~/.config/oneping` directory. Consult the provider definitions in `oneping/providers.toml` in this repository for the available options.
+
 ## Server
 
 The `server` module includes a simple function to start a `llama-cpp-python` server on the fly (`oneping.server.start` in Python or `oneping server` from the command line).
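The server line reduces to a one-liner in practice. A sketch assuming `llama-cpp-python` is installed and the function's defaults choose a model (any further arguments are not documented in this diff, so none are invented here):

```python
import oneping.server

# start a llama-cpp-python server on the fly; equivalent to running
# `oneping server` from the command line (per the README text above)
oneping.server.start()
```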

pyproject.toml (4 additions & 4 deletions)

@@ -1,12 +1,11 @@
 [project]
 name = 'oneping'
-version = '0.6'
+version = '0.6.1'
 description = 'LLM provider abstraction layer.'
 readme = { file = 'README.md', content-type = 'text/markdown' }
 authors = [{ name = 'Doug Hanley', email = 'doug@compendiumlabs.ai' }]
-license = { text = 'MIT' }
+license = 'MIT'
 classifiers = [
-    'License :: OSI Approved :: MIT License',
     'Programming Language :: Python',
     'Programming Language :: Python :: 3',
 ]
@@ -18,7 +17,7 @@ requires-python = '>=3.7'
 oneping = 'oneping.__main__:main'
 
 [project.optional-dependencies]
-native = ['openai', 'anthropic', 'fireworks-ai']
+native = ['openai', 'anthropic', 'google', 'xai']
 chat = ['asyncstdlib', 'textual', 'python-fasthtml']
 
 [project.urls]
@@ -28,4 +27,5 @@ Homepage = 'http://github.com/CompendiumLabs/oneping'
 packages = ['oneping', 'oneping.native', 'oneping.interface']
 
 [tool.setuptools.package-data]
+"oneping" = ["providers.toml"]
 "oneping.interface" = ["web/*"]
