Commit 157cece
feat: add openweb ui doc
1 parent 1d56f46
1 file changed, 142 additions & 0 deletions

---
products:
  - Alauda AI
kind:
  - Solution
ProductsVersion:
  - 4.x
---

# OpenWebUI

## Overview
OpenWebUI is an open-source AI web interface that connects to multiple OpenAI-protocol-compatible inference backends (such as vLLM, MLServer, and XInference) through a single entry point. It covers scenarios such as text generation, multimodal input, and voice input, and provides an extensible external tool mechanism for integrating retrieval, function calling, and third-party services. It can be deployed in containers locally or in the cloud, with support for data persistence and Ingress-based HTTPS access.

## Basic Features
- **Conversation & Text Generation**: Supports system prompts, adjustable parameters (temperature, length, etc.), and session management.
- **Multimodal & Voice**: Accepts images and documents as context and supports voice input/transcription (depending on backend capabilities).
- **External Tool Extension**: Can call retrieval, databases, HTTP APIs, etc., to build tool-augmented workflows.
- **Data & Security**: Sessions and configurations can be persisted; integrates with authentication, rate limiting, and logging/monitoring.

## Backend Integration
- **Protocol Compatibility**: Supports OpenAI-API-style backends (such as vLLM, MLServer, XInference, and TGI).
- **Connection Parameters**: Base URL (e.g., `http(s)://{backend}/v1`), API key, model name, and default inference parameters; an example request is sketched after this list.
- **Multiple Backends**: Multiple backends can be configured in the UI, allowing switching between different inference services.
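
As a rough illustration of the protocol a compatible backend must expose, the following sketches an OpenAI-style chat completion request. The base URL reuses the `http://example-predictor/v1` placeholder from the deployment manifest below; the model name `my-model` and the API key are likewise placeholders, not values defined in this document.

```bash
# Sketch of an OpenAI-compatible chat completion request.
# Base URL, model name, and API key are placeholders; substitute the
# values of your own inference backend.
curl -s http://example-predictor/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d '{
        "model": "my-model",
        "messages": [{"role": "user", "content": "Hello"}],
        "temperature": 0.7
      }'
```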

## Deployment Scheme
Create the following resources in order. This example uses a dedicated `open-webui-ns` namespace; substitute any other available namespace as needed.

### Namespace
```bash
kubectl create ns open-webui-ns
```

### Create the Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: open-webui
  name: open-webui
  namespace: open-webui-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: open-webui
  template:
    metadata:
      labels:
        app: open-webui
    spec:
      volumes:
        - name: webui-data
          emptyDir: {}
      containers:
        - image: ghcr.io/open-webui/open-webui
          name: open-webui
          ports:
            - containerPort: 8080
          env:
            - name: ENABLE_DIRECT_CONNECTIONS
              value: "true"
            - name: OPENAI_API_BASE_URL
              value: http://example-predictor/v1 # REPLACE with actual inference service URL
            - name: PORT
              value: "8080"
          volumeMounts:
            - name: webui-data
              mountPath: /app/backend/data
          resources:
            requests:
              cpu: 1000m
              memory: 128Mi
            limits:
              cpu: 2000m
              memory: 1Gi
```
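
Assuming the manifest above is saved to a local file (the file name `open-webui-deployment.yaml` is only an example), it can be applied with:

```bash
# Apply the Deployment manifest; the namespace is taken from its metadata.
kubectl apply -f open-webui-deployment.yaml
```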

## Important Environment Variables

The following environment variables need to be configured.

### ENABLE_DIRECT_CONNECTIONS
* Set to `true` to enable connections to external backends.
* Purpose: allows adding additional external inference service backends within OpenWebUI.

### OPENAI_API_BASE_URL
* Specifies the default inference service endpoint.
* If OpenWebUI and the inference service are deployed in the same cluster, use the service's internal cluster address.
* For the address details, refer to: **AML Business View / Inference Service / Inference Service Details / Access Method**.
* Value format: `{{Cluster Internal URL}}/v1`. A hypothetical example follows this list.
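
As an illustration only: for a hypothetical inference Service named `example-predictor` in namespace `ml-ns` exposing an OpenAI-compatible API on port 80, the internal base URL follows the usual Kubernetes Service DNS pattern and can be spot-checked from a temporary pod.

```bash
# Hypothetical example: Service "example-predictor" in namespace "ml-ns".
# The internal base URL would then be:
#   http://example-predictor.ml-ns.svc.cluster.local/v1
# Spot-check the /v1/models endpoint from a throwaway pod inside the cluster:
kubectl run curl-check --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -s http://example-predictor.ml-ns.svc.cluster.local/v1/models
```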

### Verification
```bash
kubectl get deployment open-webui -n open-webui-ns -w
```
Wait until the deployment shows `1/1` in the READY column.
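
If the deployment does not become ready, the container logs are the first place to look:

```bash
# Inspect the OpenWebUI container logs for startup errors.
kubectl logs deployment/open-webui -n open-webui-ns
```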

## Access OpenWebUI

### 1. Expose OpenWebUI via a NodePort Service
Create the following resource:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: open-webui
  name: svc-open-webui
  namespace: open-webui-ns
spec:
  type: NodePort
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: open-webui
```
The Service exposes the UI on an automatically assigned NodePort; look up that port and a node IP to access the page (see below).
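
One way to look up the assigned NodePort and a reachable node address (output may vary by cluster):

```bash
# Show the NodePort assigned to the Service.
kubectl get svc svc-open-webui -n open-webui-ns \
  -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'

# List node addresses; pick a node IP that is reachable from your network.
kubectl get nodes -o wide
```

The UI should then be reachable at `http://<node-ip>:<node-port>`.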

### 2. Initial Settings
When accessing OpenWebUI for the first time, you need to register. Choose a strong password for the administrator account.

### 3. Add Inference Service
Go to **Settings -> Connections -> Add Connection**.
Here you are asked for the inference service address. Obtain the cluster **external** access method via **AML Business View / Inference Service / Inference Service Details / Access Method**, then fill it into the **Add Connection** popup in the form:
`{{Cluster External URL}}/v1`

Click the icon on the right to verify connectivity (an optional command-line check is sketched below). After it succeeds, click save, then return to the chat page to select the inference service for use.
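
If you want to confirm beforehand that the external address answers OpenAI-style requests, you can run a quick check from any machine with network access; the URL is the same placeholder as above, and the API key header is only needed if the service requires one.

```bash
# "{{Cluster External URL}}" is a placeholder; replace it with the external
# access address shown in the inference service details page.
curl -s "{{Cluster External URL}}/v1/models" \
  -H "Authorization: Bearer ${API_KEY}"
```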

### 4. Use Inference Service
Enter the chat page, select the configured inference service, and explore further features, such as:
- Voice input
- Multimodal input
- External tools
