15 changes: 11 additions & 4 deletions docs/en_US/ai_tools.rst
@@ -48,15 +48,18 @@ button and select *AI*).
Select your preferred LLM provider from the dropdown:

**Anthropic**
Use Claude models from Anthropic. Requires an Anthropic API key.
Use Claude models from Anthropic, or any Anthropic-compatible API provider.

* **API Key File**: Path to a file containing your Anthropic API key (obtain from https://console.anthropic.com/).
* **API URL**: Custom API endpoint URL (leave empty for default: https://api.anthropic.com/v1).
* **API Key File**: Path to a file containing your Anthropic API key (obtain from https://console.anthropic.com/). Optional when using a custom URL with a provider that does not require authentication.
* **Model**: Select from available Claude models (e.g., claude-sonnet-4-20250514).

**OpenAI**
Use GPT models from OpenAI. Requires an OpenAI API key.
Use GPT models from OpenAI, or any OpenAI-compatible API provider (e.g.,
LiteLLM, LM Studio, EXO, or other local inference servers).

* **API Key File**: Path to a file containing your OpenAI API key (obtain from https://platform.openai.com/).
* **API URL**: Custom API endpoint URL (leave empty for default: https://api.openai.com/v1). Include the ``/v1`` path prefix if required by your provider.
* **API Key File**: Path to a file containing your OpenAI API key (obtain from https://platform.openai.com/). Optional when using a custom URL with a provider that does not require authentication.
* **Model**: Select from available GPT models (e.g., gpt-4).

**Ollama**
@@ -72,6 +75,10 @@ Select your preferred LLM provider from the dropdown:
* **API URL**: The URL of the Docker Model Runner API (default: http://localhost:12434).
* **Model**: Select from available models or enter a custom model name.

.. note:: You can also use the *OpenAI* provider with a custom API URL for any
OpenAI-compatible endpoint, including Docker Model Runner and other local
inference servers.

After configuring your provider, click *Save* to apply the changes.


27 changes: 23 additions & 4 deletions docs/en_US/preferences.rst
@@ -47,19 +47,33 @@ Use the fields on the *AI* panel to configure your LLM provider:

**Anthropic Settings:**

* Use the *API URL* field to set a custom API endpoint URL. Leave empty to use
the default Anthropic API (``https://api.anthropic.com/v1``). Set a custom URL
to use an Anthropic-compatible API provider.

* Use the *API Key File* field to specify the path to a file containing your
Anthropic API key.
Anthropic API key. The API key may be optional when using a custom API URL
with a provider that does not require authentication.

* Use the *Model* field to select from the available Claude models. Click the
refresh button to fetch the latest available models from Anthropic.
refresh button to fetch the latest available models from your configured
endpoint.

**OpenAI Settings:**

* Use the *API URL* field to set a custom API endpoint URL. Leave empty to use
the default OpenAI API (``https://api.openai.com/v1``). Set a custom URL to
use any OpenAI-compatible API provider (e.g., LiteLLM, LM Studio, EXO).
Include the ``/v1`` path prefix if required by your provider
(e.g., ``http://localhost:1234/v1``).

* Use the *API Key File* field to specify the path to a file containing your
OpenAI API key.
OpenAI API key. The API key may be optional when using a custom API URL
with a provider that does not require authentication.

* Use the *Model* field to select from the available GPT models. Click the
refresh button to fetch the latest available models from OpenAI.
refresh button to fetch the latest available models from your configured
endpoint.
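The ``/v1`` prefix handling described above can be sketched in Python. This is a hypothetical helper, not pgAdmin's actual code; it normalizes a configured base URL before requesting the server's model list:

```python
def models_endpoint(api_url: str) -> str:
    """Build the model-listing URL for an OpenAI-compatible server.

    Hypothetical helper (not pgAdmin's implementation): appends the
    /v1 prefix only when the configured URL lacks it, then adds the
    /models path used to fetch available models.
    """
    base = api_url.rstrip("/")
    if not base.endswith("/v1"):
        base += "/v1"
    return base + "/models"


# Both forms of a local LM Studio URL resolve to the same endpoint.
print(models_endpoint("http://localhost:1234"))     # http://localhost:1234/v1/models
print(models_endpoint("http://localhost:1234/v1"))  # http://localhost:1234/v1/models
```

If the refresh button fails to fetch models, querying the ``/models`` path built this way with ``curl`` is a quick check that the configured URL is correct.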

**Ollama Settings:**

@@ -79,6 +93,11 @@ Use the fields on the *AI* panel to configure your LLM provider:
model name. Click the refresh button to fetch the latest available models
from your Docker Model Runner.

.. note:: You can also use the *OpenAI* provider with a custom API URL for any
OpenAI-compatible endpoint, including Docker Model Runner, LM Studio, EXO,
and other local inference servers. This can be useful when you want to use
a provider that isn't explicitly listed but supports the OpenAI API format.

The Browser Node
****************

1 change: 1 addition & 0 deletions docs/en_US/release_notes_9_14.rst
@@ -21,6 +21,7 @@ New features
************

| `Issue #4011 <https://github.com/pgadmin-org/pgadmin4/issues/4011>`_ - Added support to download binary data from result grid.
| `Issue #9703 <https://github.com/pgadmin-org/pgadmin4/issues/9703>`_ - Added support for custom LLM provider URLs for OpenAI and Anthropic, allowing use of OpenAI-compatible providers such as LM Studio, EXO, and LiteLLM.

Housekeeping
************
18 changes: 18 additions & 0 deletions web/config.py
@@ -987,19 +987,35 @@
DEFAULT_LLM_PROVIDER = ''

# Anthropic Configuration
# URL for the Anthropic API endpoint. Leave empty to use the default
# (https://api.anthropic.com/v1). Set a custom URL to use an
# Anthropic-compatible API provider.
ANTHROPIC_API_URL = ''

# Path to a file containing the Anthropic API key. The file should contain
# only the API key with no additional whitespace or formatting.
# Default: ~/.anthropic-api-key
# Note: The API key may be optional when using a custom API URL with a
# provider that does not require authentication.
ANTHROPIC_API_KEY_FILE = '~/.anthropic-api-key'
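The key-file layout described in the comment above can be illustrated with a short sketch. This is a hypothetical reader, not pgAdmin's actual loader:

```python
import os


def read_api_key(path: str) -> str:
    """Read an API key from a file laid out as described above: the
    file should contain only the key itself.

    Hypothetical sketch, not pgAdmin's implementation; stripping
    surrounding whitespace makes a trailing newline left by an
    editor harmless.
    """
    with open(os.path.expanduser(path), encoding="utf-8") as f:
        return f.read().strip()
```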

# The Anthropic model to use for AI features.
# Examples: claude-sonnet-4-20250514, claude-3-5-haiku-20241022
ANTHROPIC_API_MODEL = ''

# OpenAI Configuration
# URL for the OpenAI API endpoint. Leave empty to use the default
# (https://api.openai.com/v1). Set a custom URL to use any
# OpenAI-compatible API provider (e.g., LiteLLM, LM Studio, EXO).
# Include the /v1 path prefix if required by your provider
# (e.g., http://localhost:1234/v1).
OPENAI_API_URL = ''

# Path to a file containing the OpenAI API key. The file should contain
# only the API key with no additional whitespace or formatting.
# Default: ~/.openai-api-key
# Note: The API key may be optional when using a custom API URL with a
# provider that does not require authentication.
OPENAI_API_KEY_FILE = '~/.openai-api-key'
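Putting the OpenAI settings above together, a hypothetical ``config.py`` fragment pointing pgAdmin at a local OpenAI-compatible server might look like this (the provider identifier, port, and server choice are assumptions; adjust them for your setup):

```python
# Hypothetical example: pgAdmin pointed at a local OpenAI-compatible
# server (here LM Studio on its default port 1234).
DEFAULT_LLM_PROVIDER = 'openai'

# Local inference servers usually require the /v1 path prefix.
OPENAI_API_URL = 'http://localhost:1234/v1'

# Many local servers accept any key, so the key file can be left
# unset when the server does not require authentication.
OPENAI_API_KEY_FILE = ''
```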

# The OpenAI model to use for AI features.
@@ -1020,6 +1036,8 @@
# OpenAI-compatible API. No API key is required.
# URL for the Docker Model Runner API endpoint. Leave empty to disable.
# Typical value: http://localhost:12434
# Tip: You can also use the OpenAI provider with a custom API URL for any
# OpenAI-compatible endpoint, including Docker Model Runner.
DOCKER_API_URL = ''

# The Docker Model Runner model to use for AI features.