Commit b70e383

Merge pull request #1132 from Classic298/dev

2 parents 4fe6f3a + b3b3b80

13 files changed

Lines changed: 429 additions & 48 deletions


docs/features/chat-conversations/chat-features/code-execution/index.md

Lines changed: 25 additions & 7 deletions
@@ -25,17 +25,33 @@ Open WebUI supports multiple code execution backends, each suited to different u
 
 ### Pyodide (Default)
 
-Pyodide runs Python in the browser via WebAssembly. It is sandboxed and safe for multi-user environments, but comes with constraints:
+Pyodide runs Python in the browser via WebAssembly. It is sandboxed and safe for multi-user environments, but comes with some constraints:
 
-- **No persistent storage** — the filesystem resets between executions.
+- **Persistent file storage** — the virtual filesystem at `/mnt/uploads/` is backed by IndexedDB (IDBFS). Files persist across code executions within the same session and survive page reloads.
+- **Built-in file browser** — when Code Interpreter is enabled, a file browser panel appears in the chat controls sidebar. You can browse, preview, upload, download, and delete files in the Pyodide filesystem — no terminal needed.
+- **User file access** — files attached to messages are automatically placed in `/mnt/uploads/` before code execution, so the model (and your code) can read them directly.
 - **Limited library support** — only a subset of Python packages are available. Libraries that rely on C extensions or system calls may not work.
 - **No shell access** — cannot run shell commands, install packages, or interact with the OS.
 
 :::tip
-Pyodide works well for **text analysis, hash computation, chart generation**, and other self-contained tasks. Chart libraries like matplotlib produce base64-encoded images that Open WebUI automatically captures, uploads as files, and injects direct image links into the output — so models can display charts directly in chat without any extra setup.
+Pyodide works well for **text analysis, hash computation, chart generation, file processing**, and other self-contained tasks. Chart libraries like matplotlib produce base64-encoded images that Open WebUI automatically captures, uploads as files, and injects as direct image links into the output — so models can display charts directly in chat without any extra setup.
 :::
 
-### Jupyter
+:::warning Best for basic analysis only
+Pyodide runs Python via WebAssembly inside the browser. The AI **cannot install additional libraries** beyond the small fixed set listed below — any code that imports an unsupported package will fail. Execution is also **significantly slower** than native Python, and large datasets or CPU-intensive tasks may hit browser memory limits. Pyodide is best suited for **basic file analysis, simple calculations, text processing, and chart generation**. For anything more demanding, use **Open Terminal** instead, which provides full native performance and unrestricted package access inside a Docker container.
+
+Available libraries: micropip, requests, beautifulsoup4, numpy, pandas, matplotlib, seaborn, scikit-learn, scipy, regex, sympy, tiktoken, pytz, and the Python standard library. **Nothing else can be installed at runtime.**
+:::
+
+:::note Mutually exclusive with Open Terminal
+The Code Interpreter toggle and the Open Terminal toggle cannot be active at the same time. Activating one deactivates the other — they serve similar purposes but use different execution backends.
+:::
+
+### Jupyter (Legacy)
+
+:::caution Legacy Engine
+Jupyter is now considered a **legacy** code execution engine. The Pyodide engine is recommended for most use cases, and Open Terminal is recommended when you need full server-side execution. Jupyter support may be deprecated in a future release.
+:::
 
 Jupyter provides a full Python environment and can handle virtually any task — file creation, package installation, and complex library usage. However, it has significant drawbacks in shared deployments:
 
@@ -58,15 +74,17 @@ If you are running a multi-user or organizational deployment, **Jupyter is not r
 
 ### Comparison
 
-| Consideration | Pyodide | Jupyter | Open Terminal |
+| Consideration | Pyodide | Jupyter (Legacy) | Open Terminal |
 | :--- | :--- | :--- | :--- |
 | **Runs in** | Browser (WebAssembly) | Server (Python kernel) | Server (Docker container) |
 | **Library support** | Limited subset | Full Python ecosystem | Full OS — any language, any tool |
 | **Shell access** | ❌ None | ⚠️ Limited | ✅ Full shell |
-| **File persistence** | ❌ Resets each execution | ✅ Shared filesystem | ✅ Container filesystem (until removal) |
+| **File persistence** | ✅ IDBFS (persists across executions & reloads) | ✅ Shared filesystem | ✅ Container filesystem (until removal) |
+| **File browser** | ✅ Built-in sidebar panel | ❌ None | ✅ Built-in sidebar panel |
+| **User file access** | ✅ Attached files placed in `/mnt/uploads/` | ❌ Manual | ✅ Attached files available |
 | **Isolation** | ✅ Browser sandbox | ❌ Shared environment | ✅ Container-level (when using Docker) |
 | **Multi-user safety** | ✅ Per-user by design | ⚠️ Not isolated | ⚠️ Single instance (per-user containers planned) |
-| **File generation** | ❌ Very limited | ✅ Full support | ✅ Full support with upload/download |
+| **File generation** | ✅ Write to `/mnt/uploads/`, download via file browser | ✅ Full support | ✅ Full support with upload/download |
 | **Setup** | None (built-in) | Admin configures globally | Native integration via Settings → Integrations, or as a Tool Server |
 | **Recommended for orgs** | ✅ Safe default | ❌ Not without isolation | ✅ Per-user by design |
 | **Enterprise scalability** | ✅ Client-side, no server load | ❌ Single shared instance | ⚠️ Manual per-user instances |
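To make the "self-contained tasks" row concrete, here is a minimal, stdlib-only sketch of the kind of hash computation and text analysis the docs say Pyodide handles well; the input strings are invented for illustration:

```python
import hashlib
from collections import Counter

# Hash computation: SHA-256 digest of in-memory data
digest = hashlib.sha256(b"hello").hexdigest()
print(digest)

# Text analysis: word frequencies using only the standard library
text = "the quick brown fox jumps over the lazy dog the end"
counts = Counter(text.split())
print(counts.most_common(2))
```

Because it touches no network, filesystem, or native extensions, code like this runs identically in the browser sandbox and in native Python.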

docs/features/chat-conversations/chat-features/code-execution/python.md

Lines changed: 64 additions & 16 deletions
@@ -12,7 +12,7 @@ Open WebUI provides two ways to execute Python code:
 1. **Manual Code Execution**: Run Python code blocks generated by LLMs using a "Run" button in the browser (uses Pyodide/WebAssembly).
 2. **Code Interpreter**: An AI capability that allows models to automatically write and execute Python code as part of their response (uses Pyodide or Jupyter).
 
-Both methods support visual outputs like matplotlib charts that can be displayed inline in your chat.
+Both methods support visual outputs like matplotlib charts that can be displayed inline in your chat. When using the Pyodide engine, a **persistent virtual filesystem** at `/mnt/uploads/` is available — files survive across code executions and page reloads, and files attached to messages are automatically placed there for your code to access.
 
 ## Code Interpreter Capability
 
@@ -35,7 +35,7 @@ The Code Interpreter is a model capability that enables LLMs to write and execut
 
 These settings can be configured at **Admin Panel → Settings → Code Execution**:
 - Enable/disable code interpreter
-- Select engine (Pyodide or Jupyter)
+- Select engine: **Pyodide** (recommended) or **Jupyter (Legacy)**
 - Configure Jupyter connection settings
 - Set blocked modules
 
@@ -44,12 +44,16 @@ These settings can be configured at **Admin Panel → Settings → Code Executio
 | Variable | Default | Description |
 |----------|---------|-------------|
 | `ENABLE_CODE_INTERPRETER` | `true` | Enable/disable code interpreter globally |
-| `CODE_INTERPRETER_ENGINE` | `pyodide` | Engine to use: `pyodide` (browser) or `jupyter` (server) |
+| `CODE_INTERPRETER_ENGINE` | `pyodide` | Engine to use: `pyodide` (browser, recommended) or `jupyter` (server, legacy) |
 | `CODE_INTERPRETER_PROMPT_TEMPLATE` | (built-in) | Custom prompt template for code interpreter |
 | `CODE_INTERPRETER_BLACKLISTED_MODULES` | `""` | Comma-separated list of blocked Python modules |
 
 For Jupyter configuration, see the [Jupyter Notebook Integration](/tutorials/integrations/dev-tools/jupyter) tutorial.
 
+:::note Filesystem Prompt Injection
+When the Pyodide engine is selected, Open WebUI automatically appends a filesystem-awareness prompt to the code interpreter instructions. This tells the model about `/mnt/uploads/` and how to discover user-uploaded files. When using Jupyter, this filesystem prompt is not appended (since Jupyter has its own filesystem). You do not need to include filesystem instructions in your custom `CODE_INTERPRETER_PROMPT_TEMPLATE` — they are added automatically.
+:::
+
 ### Native Function Calling (Native Mode)
 
 When using **Native function calling mode** with a capable model (e.g., GPT-5, Claude 4.5, MiniMax M2.5), the code interpreter is available as a builtin tool called `execute_code`. This provides a more integrated experience:
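As an illustration of the `CODE_INTERPRETER_BLACKLISTED_MODULES` format (a plain comma-separated string), here is a hedged sketch of how such a value could be parsed; the helper below is hypothetical and not Open WebUI's actual parser:

```python
import os

def parse_blocked_modules(raw: str) -> set:
    # Hypothetical helper: split a comma-separated blocklist into module names,
    # ignoring surrounding whitespace and empty entries
    return {m.strip() for m in raw.split(",") if m.strip()}

# e.g. CODE_INTERPRETER_BLACKLISTED_MODULES="os, subprocess"
blocked = parse_blocked_modules(
    os.environ.get("CODE_INTERPRETER_BLACKLISTED_MODULES", "os, subprocess")
)
print(sorted(blocked))
```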
@@ -126,6 +130,8 @@ If you see raw base64 text appearing in chat responses, the model is incorrectly
 
 Open WebUI includes a browser-based Python environment using [Pyodide](https://pyodide.org/) (WebAssembly). This allows running Python scripts directly in your browser with no server-side setup.
 
+The Pyodide worker is **persistent** — it is created once and reused across code executions. This means variables, imported modules, and files written to the virtual filesystem are retained between executions within the same session.
+
 ### Running Code Manually
 
 1. Ask an LLM to write Python code
@@ -135,21 +141,63 @@ Open WebUI includes a browser-based Python environment using [Pyodide](https://p
 
 ### Supported Libraries
 
-Pyodide includes the following pre-configured packages:
+Pyodide includes the following packages, which are auto-detected from import statements and loaded on demand:
+
+| Package | Use case |
+|---------|----------|
+| micropip | Package installer (internal use) |
+| requests | HTTP requests |
+| beautifulsoup4 | HTML/XML parsing |
+| numpy | Numerical computing |
+| pandas | Data analysis and manipulation |
+| matplotlib | Chart and plot generation |
+| seaborn | Statistical data visualization |
+| scikit-learn | Machine learning |
+| scipy | Scientific computing |
+| regex | Advanced regular expressions |
+| sympy | Symbolic mathematics |
+| tiktoken | Token counting for LLMs |
+| pytz | Timezone handling |
+
+The Python standard library is also fully available (json, csv, math, datetime, os, io, etc.).
+
+:::warning No runtime installation
+The AI **cannot install additional libraries** beyond the list above. Any code that imports an unsupported package will fail with an import error. Packages that require C extensions, system calls, or native binaries (e.g., torch, tensorflow, opencv, psycopg2) are **not available** and cannot be made available in Pyodide. Pyodide is best suited for **basic file analysis, simple calculations, text processing, and chart generation**. For full Python package access, use **[Open Terminal](/features/chat-conversations/chat-features/code-execution#open-terminal)** instead.
+:::
+
+## Persistent File System
 
-- micropip
-- packaging
-- requests
-- beautifulsoup4
-- numpy
-- pandas
-- matplotlib
-- scikit-learn
-- scipy
-- regex
+When using the Pyodide engine, a persistent virtual filesystem is mounted at `/mnt/uploads/`. This filesystem is backed by the browser's IndexedDB via [IDBFS](https://emscripten.org/docs/api_reference/Filesystem-API.html#filesystem-api-idbfs) and provides:
 
-:::note
-Packages not pre-compiled in Pyodide cannot be installed at runtime. For additional packages, consider using the Jupyter integration or forking Pyodide to add custom packages.
+- **Cross-execution persistence** — files written by one code execution are accessible in subsequent executions.
+- **Cross-reload persistence** — files survive page reloads (stored in IndexedDB).
+- **Automatic upload mounting** — files attached to messages are fetched from the server and placed in `/mnt/uploads/` before code execution, so the model can read them directly.
+- **File browser panel** — when Code Interpreter is enabled, a file browser appears in the chat controls sidebar. You can browse, preview, upload, download, and delete files — no terminal needed.
+
+### Working with Files in Code
+
+```python
+import os
+
+# List uploaded files
+print(os.listdir('/mnt/uploads'))
+
+# Read a user-uploaded CSV
+import pandas as pd
+df = pd.read_csv('/mnt/uploads/data.csv')
+print(df.head())
+
+# Write output to the persistent filesystem (downloadable via file browser)
+df.to_csv('/mnt/uploads/result.csv', index=False)
+print('Saved result.csv to /mnt/uploads/')
+```
+
+:::tip
+The file browser panel lets you download any file the model creates. Ask the model to save its output to `/mnt/uploads/` and it will appear in the file browser for download.
+:::
+
+:::note Jupyter Engine
+The persistent filesystem prompt and `/mnt/uploads/` integration are **Pyodide-only**. When using the Jupyter engine, files are managed through Jupyter's own filesystem. The file browser panel is not available for Jupyter.
 :::
 
 ## Example: Creating a Chart
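The "Example: Creating a Chart" section is truncated in this diff. As a rough, hypothetical sketch of the kind of matplotlib code that works under Pyodide (the doc states figures are captured and uploaded automatically), rendering a plot to an in-memory PNG looks like this; the `/mnt/uploads/chart.png` path mentioned in the comment is an assumption based on the filesystem described above:

```python
import io
import matplotlib
matplotlib.use("Agg")  # headless backend; in Pyodide, Open WebUI captures figures automatically
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4], [1, 4, 9, 16], marker="o")
ax.set_title("Squares")
ax.set_xlabel("x")
ax.set_ylabel("x^2")

# Render to an in-memory PNG; under Pyodide you could instead save to a
# path such as /mnt/uploads/chart.png (hypothetical) to expose it in the file browser
buf = io.BytesIO()
fig.savefig(buf, format="png")
print(len(buf.getvalue()), "bytes")
```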

docs/features/chat-conversations/chat-features/conversation-organization.md

Lines changed: 4 additions & 2 deletions
@@ -36,9 +36,11 @@ Organize existing chats by moving them into folders:
 
 Folders can be nested within other folders to create hierarchical organization:
 
-- Drag a folder onto another folder to make it a subfolder.
-- Use the right-click menu to move folders between parent folders.
+- **Create subfolder from menu**: Right-click (or click the three-dot menu ⋯) on any folder and select **"Create Folder"** to create a new subfolder directly inside it.
+- **Drag and drop**: Drag a folder onto another folder to make it a subfolder.
+- **Move via context menu**: Right-click on a folder and use the move option to relocate it under a different parent.
 - Folders can be expanded or collapsed to show/hide their contents.
+- Subfolder names must be unique within the same parent folder. If a duplicate name is entered, a number is automatically appended (e.g., "Notes 1").
 
 ### Starting a Chat in a Folder
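The duplicate-name rule described above ("Notes" becoming "Notes 1") amounts to appending the first free number. A hedged sketch of that logic, using an invented helper rather than Open WebUI's actual implementation:

```python
def unique_folder_name(name: str, siblings: set) -> str:
    # Hypothetical sketch of the numbering rule: if the name is already taken
    # within the same parent folder, append the first unused number
    if name not in siblings:
        return name
    n = 1
    while f"{name} {n}" in siblings:
        n += 1
    return f"{name} {n}"

print(unique_folder_name("Notes", {"Notes"}))  # matches the doc's "Notes 1" example
```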

docs/features/chat-conversations/chat-features/reasoning-models.mdx

Lines changed: 5 additions & 5 deletions
@@ -38,7 +38,7 @@ If your model uses different tags, you can provide a list of tag pairs in the `r
 ## Configuration & Behavior
 
 - **Stripping from Payload**: The `reasoning_tags` parameter itself is an Open WebUI-specific control and is **stripped** from the payload before being sent to the LLM backend (OpenAI, Ollama, etc.). This ensures compatibility with providers that do not recognize this parameter.
-- **Chat History**: Thinking tags are **not** stripped from the chat history. If previous messages in a conversation contain thinking blocks, they are sent back to the model as part of the context, allowing the model to "remember" its previous reasoning steps.
+- **Chat History**: Reasoning content is preserved in chat history and **sent back to the model** across turns. When building messages for subsequent requests, Open WebUI serializes the reasoning content with its original tags (e.g., `<think>...</think>`) and includes it in the assistant message's `content` field. This allows the model to "remember" its previous reasoning steps across the entire conversation.
 - **UI Rendering**: Internally, reasoning blocks are processed and rendered using a specialized UI component. When saved or exported, they may be represented as HTML `<details type="reasoning">` tags.
 
 ---
@@ -153,8 +153,8 @@ Open WebUI follows the **OpenAI Chat Completions API standard**. Reasoning conte
 
 ### Important Notes
 
-- **Within-turn preservation**: Reasoning is preserved and sent back to the API only within the same turn (while tool calls are being processed)
-- **Cross-turn behavior**: Between separate user messages, reasoning is **not** sent back to the API. The thinking content is displayed in the UI but stripped from the message content that gets sent in subsequent requests.
+- **Within-turn preservation**: Reasoning is preserved and sent back to the API within the same turn (while tool calls are being processed).
+- **Cross-turn behavior**: Reasoning content **is** sent back to the API across turns. When building messages for subsequent requests, Open WebUI serializes the reasoning content with its original tags (e.g., `<think>...</think>`) and includes it in the assistant message's `content` field. This allows the model to maintain context of its previous reasoning throughout the conversation.
 - **Text-based serialization**: Reasoning is sent as text wrapped in tags (e.g., `<think>thinking content</think>`), not as structured content blocks. This works with most OpenAI-compatible APIs but may not align with provider-specific formats like Anthropic's extended thinking content blocks.
 
 ---
@@ -373,11 +373,11 @@ If the model uses tags that are not in the default list and have not been config
 
 ### Does the model see its own thinking?
 
-**It depends on the context:**
+**Yes.** Reasoning content is preserved and sent back to the model in both scenarios:
 
 - **Within the same turn (during tool calls)**: **Yes**. When a model makes tool calls, Open WebUI preserves the reasoning content and sends it back to the API as part of the assistant message. This enables the model to maintain context about what it was thinking when it made the tool call.
 
-- **Across different turns**: **No**. When a user message starts a fresh turn, the reasoning from previous turns is **not** sent back to the API. The thinking content is extracted and displayed in the UI but stripped from the message content before being sent in subsequent requests. This follows the design of reasoning models like OpenAI's `o1`, where the "chain of thought" is intended to be internal and ephemeral.
+- **Across different turns**: **Yes**. When building messages for subsequent requests, Open WebUI serializes reasoning content from previous turns with its original tags (e.g., `<think>...</think>`) and includes it in the assistant message's `content` field. This allows the model to reference its previous reasoning throughout the conversation.
 
 ### How is reasoning sent during tool calls?
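The cross-turn serialization this diff describes (reasoning re-wrapped in its original tags inside the assistant message's `content` field) can be sketched as follows. This is an illustrative reconstruction, not Open WebUI's actual code, and the function name is invented:

```python
def serialize_assistant_message(reasoning: str, answer: str, tag: str = "think") -> dict:
    # Illustrative sketch: re-wrap stored reasoning in its original tags and
    # prepend it to the visible answer inside the `content` field, so the
    # model sees its prior chain of thought on subsequent requests
    content = f"<{tag}>{reasoning}</{tag}>\n{answer}"
    return {"role": "assistant", "content": content}

msg = serialize_assistant_message("The user wants a sum; 2 + 2 = 4.", "The answer is 4.")
print(msg["content"])
```

Note this is plain text serialization, consistent with the "Text-based serialization" point above, rather than a structured content block.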
