**docs/features/chat-conversations/chat-features/code-execution/index.md** (+25 −7)
Open WebUI supports multiple code execution backends, each suited to different use cases.
### Pyodide (Default)
Pyodide runs Python in the browser via WebAssembly. It is sandboxed and safe for multi-user environments, but comes with some constraints:
- **Persistent file storage** — the virtual filesystem at `/mnt/uploads/` is backed by IndexedDB (IDBFS). Files persist across code executions within the same session and survive page reloads.
- **Built-in file browser** — when Code Interpreter is enabled, a file browser panel appears in the chat controls sidebar. You can browse, preview, upload, download, and delete files in the Pyodide filesystem — no terminal needed.
- **User file access** — files attached to messages are automatically placed in `/mnt/uploads/` before code execution, so the model (and your code) can read them directly.
- **Limited library support** — only a subset of Python packages is available. Libraries that rely on C extensions or system calls may not work.
- **No shell access** — cannot run shell commands, install packages, or interact with the OS.
:::tip
Pyodide works well for **text analysis, hash computation, chart generation, file processing**, and other self-contained tasks. Chart libraries like matplotlib produce base64-encoded images that Open WebUI automatically captures, uploads as files, and injects as direct image links into the output — so models can display charts directly in chat without any extra setup.
:::
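As a quick illustration of the chart workflow, a minimal matplotlib script like the following is enough; the base64 capture and upload described above happen on Open WebUI's side, and the filename here is just an example:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, similar to a browser/WASM environment
import matplotlib.pyplot as plt

# Plot a simple series; in Open WebUI the rendered figure is captured
# automatically and displayed inline in the chat.
xs = list(range(10))
plt.plot(xs, [x * x for x in xs])
plt.title("x squared")
plt.savefig("chart.png")  # illustrative filename
print("chart rendered")
```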
:::warning Best for basic analysis only
Pyodide runs Python via WebAssembly inside the browser. The AI **cannot install additional libraries** beyond the small fixed set listed below — any code that imports an unsupported package will fail. Execution is also **significantly slower** than native Python, and large datasets or CPU-intensive tasks may hit browser memory limits. Pyodide is best suited for **basic file analysis, simple calculations, text processing, and chart generation**. For anything more demanding, use **Open Terminal** instead, which provides full native performance and unrestricted package access inside a Docker container.

Available libraries: micropip, requests, beautifulsoup4, numpy, pandas, matplotlib, seaborn, scikit-learn, scipy, regex, sympy, tiktoken, pytz, and the Python standard library. **Nothing else can be installed at runtime.**
:::

:::note Mutually exclusive with Open Terminal
The Code Interpreter toggle and the Open Terminal toggle cannot be active at the same time. Activating one deactivates the other — they serve similar purposes but use different execution backends.
:::
### Jupyter (Legacy)

:::caution Legacy Engine
Jupyter is now considered a **legacy** code execution engine. The Pyodide engine is recommended for most use cases, and Open Terminal is recommended when you need full server-side execution. Jupyter support may be deprecated in a future release.
:::
Jupyter provides a full Python environment and can handle virtually any task — file creation, package installation, and complex library usage. However, it has significant drawbacks in shared deployments:
If you are running a multi-user or organizational deployment, **Jupyter is not recommended**…
### Comparison
| Consideration | Pyodide | Jupyter | Open Terminal |
|---------------|---------|---------|---------------|
**docs/features/chat-conversations/chat-features/code-execution/python.md** (+64 −16)
Open WebUI provides two ways to execute Python code:

1. **Manual Code Execution**: Run Python code blocks generated by LLMs using a "Run" button in the browser (uses Pyodide/WebAssembly).
2. **Code Interpreter**: An AI capability that allows models to automatically write and execute Python code as part of their response (uses Pyodide or Jupyter).

Both methods support visual outputs like matplotlib charts that can be displayed inline in your chat. When using the Pyodide engine, a **persistent virtual filesystem** at `/mnt/uploads/` is available — files survive across code executions and page reloads, and files attached to messages are automatically placed there for your code to access.
## Code Interpreter Capability
The Code Interpreter is a model capability that enables LLMs to write and execute code…

These settings can be configured at **Admin Panel → Settings → Code Execution**:

- Enable/disable code interpreter
- Select engine: **Pyodide** (recommended) or **Jupyter (Legacy)**
- Configure Jupyter connection settings
- Set blocked modules
| Environment Variable | Default | Description |
|----------------------|---------|-------------|
| `CODE_INTERPRETER_ENGINE` | `pyodide` | Engine to use: `pyodide` (browser, recommended) or `jupyter` (server, legacy) |
| `CODE_INTERPRETER_PROMPT_TEMPLATE` | (built-in) | Custom prompt template for code interpreter |
| `CODE_INTERPRETER_BLACKLISTED_MODULES` | `""` | Comma-separated list of blocked Python modules |

For Jupyter configuration, see the [Jupyter Notebook Integration](/tutorials/integrations/dev-tools/jupyter) tutorial.

:::note Filesystem Prompt Injection
When the Pyodide engine is selected, Open WebUI automatically appends a filesystem-awareness prompt to the code interpreter instructions. This tells the model about `/mnt/uploads/` and how to discover user-uploaded files. When using Jupyter, this filesystem prompt is not appended (since Jupyter has its own filesystem). You do not need to include filesystem instructions in your custom `CODE_INTERPRETER_PROMPT_TEMPLATE` — they are added automatically.
:::
### Native Function Calling (Native Mode)
When using **Native function calling mode** with a capable model (e.g., GPT-5, Claude 4.5, MiniMax M2.5), the code interpreter is available as a builtin tool called `execute_code`. This provides a more integrated experience:
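As a rough sketch of what this can look like on the wire, an OpenAI-style tool call invoking `execute_code` might take the following shape. This is an assumption for illustration — the field names follow the generic Chat Completions tool-call format, not a payload confirmed by this document:

```python
import json

# Hypothetical OpenAI-style tool call asking the built-in execute_code tool
# to run a snippet of Python; "arguments" is a JSON-encoded string.
tool_call = {
    "type": "function",
    "function": {
        "name": "execute_code",
        "arguments": json.dumps({"code": "print(2 + 2)"}),
    },
}

# The backend decodes the arguments and hands the code to the execution engine.
args = json.loads(tool_call["function"]["arguments"])
print(args["code"])  # → print(2 + 2)
```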
If you see raw base64 text appearing in chat responses, the model is incorrectly…
Open WebUI includes a browser-based Python environment using [Pyodide](https://pyodide.org/) (WebAssembly). This allows running Python scripts directly in your browser with no server-side setup.
The Pyodide worker is **persistent** — it is created once and reused across code executions. This means variables, imported modules, and files written to the virtual filesystem are retained between executions within the same session.
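For example, because the worker is reused, names defined in one run remain visible in later runs. A sketch — the two halves below would be executed as separate code blocks in the same session:

```python
# --- First execution ---
import math
radius = 3.0
area = math.pi * radius ** 2

# --- A later execution in the same session ---
# `math`, `radius`, and `area` are still defined because the worker persists.
print(f"area computed earlier: {area:.2f}")
```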
### Running Code Manually
1. Ask an LLM to write Python code
### Supported Libraries
Pyodide includes the following packages, which are auto-detected from import statements and loaded on demand:

| Package | Use case |
|---------|----------|
| micropip | Package installer (internal use) |
| requests | HTTP requests |
| beautifulsoup4 | HTML/XML parsing |
| numpy | Numerical computing |
| pandas | Data analysis and manipulation |
| matplotlib | Chart and plot generation |
| seaborn | Statistical data visualization |
| scikit-learn | Machine learning |
| scipy | Scientific computing |
| regex | Advanced regular expressions |
| sympy | Symbolic mathematics |
| tiktoken | Token counting for LLMs |
| pytz | Timezone handling |

The Python standard library is also fully available (json, csv, math, datetime, os, io, etc.).

:::warning No runtime installation
The AI **cannot install additional libraries** beyond the list above. Any code that imports an unsupported package will fail with an import error. Packages that require C extensions, system calls, or native binaries (e.g., torch, tensorflow, opencv, psycopg2) are **not available** and cannot be made available in Pyodide. Pyodide is best suited for **basic file analysis, simple calculations, text processing, and chart generation**. For full Python package access, use **[Open Terminal](/features/chat-conversations/chat-features/code-execution#open-terminal)** instead.
:::
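For instance, hash computation needs nothing beyond the standard library, so it runs unmodified in Pyodide:

```python
import hashlib

# SHA-256 of a short string using only the standard library
text = "hello open webui"
digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
print(digest)
```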
## Persistent File System
When using the Pyodide engine, a persistent virtual filesystem is mounted at `/mnt/uploads/`. This filesystem is backed by the browser's IndexedDB via [IDBFS](https://emscripten.org/docs/api_reference/Filesystem-API.html#filesystem-api-idbfs) and provides:

- **Cross-execution persistence** — files written by one code execution are accessible in subsequent executions.
- **Cross-reload persistence** — files survive page reloads (stored in IndexedDB).
- **Automatic upload mounting** — files attached to messages are fetched from the server and placed in `/mnt/uploads/` before code execution, so the model can read them directly.
- **File browser panel** — when Code Interpreter is enabled, a file browser appears in the chat controls sidebar. You can browse, preview, upload, download, and delete files — no terminal needed.
### Working with Files in Code
```python
import os

# List uploaded files
print(os.listdir('/mnt/uploads'))

# Read a user-uploaded CSV
import pandas as pd
df = pd.read_csv('/mnt/uploads/data.csv')
print(df.head())

# Write output to the persistent filesystem (downloadable via file browser)
df.to_csv('/mnt/uploads/result.csv', index=False)
print('Saved result.csv to /mnt/uploads/')
```
:::tip
The file browser panel lets you download any file the model creates. Ask the model to save its output to `/mnt/uploads/` and it will appear in the file browser for download.
:::

:::note Jupyter Engine
The persistent filesystem prompt and `/mnt/uploads/` integration are **Pyodide-only**. When using the Jupyter engine, files are managed through Jupyter's own filesystem. The file browser panel is not available for Jupyter.
:::
**docs/features/chat-conversations/chat-features/conversation-organization.md** (+4 −2)
Organize existing chats by moving them into folders:
Folders can be nested within other folders to create hierarchical organization:
- **Create subfolder from menu**: Right-click (or click the three-dot menu ⋯) on any folder and select **"Create Folder"** to create a new subfolder directly inside it.
- **Drag and drop**: Drag a folder onto another folder to make it a subfolder.
- **Move via context menu**: Right-click on a folder and use the move option to relocate it under a different parent.
- Folders can be expanded or collapsed to show/hide their contents.
- Subfolder names must be unique within the same parent folder. If a duplicate name is entered, a number is automatically appended (e.g., "Notes 1").
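The duplicate-name rule can be sketched in a few lines. This is illustrative only, not the actual Open WebUI implementation:

```python
def unique_name(name: str, existing: set[str]) -> str:
    """Append an incrementing number until the name is unique, as described above."""
    if name not in existing:
        return name
    n = 1
    while f"{name} {n}" in existing:
        n += 1
    return f"{name} {n}"

print(unique_name("Notes", {"Notes"}))             # → Notes 1
print(unique_name("Notes", {"Notes", "Notes 1"}))  # → Notes 2
```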
**docs/features/chat-conversations/chat-features/reasoning-models.mdx** (+5 −5)
If your model uses different tags, you can provide a list of tag pairs in the `reasoning_tags` parameter.
## Configuration & Behavior
- **Stripping from Payload**: The `reasoning_tags` parameter itself is an Open WebUI-specific control and is **stripped** from the payload before being sent to the LLM backend (OpenAI, Ollama, etc.). This ensures compatibility with providers that do not recognize this parameter.
- **Chat History**: Reasoning content is preserved in chat history and **sent back to the model** across turns. When building messages for subsequent requests, Open WebUI serializes the reasoning content with its original tags (e.g., `<think>...</think>`) and includes it in the assistant message's `content` field. This allows the model to "remember" its previous reasoning steps across the entire conversation.
- **UI Rendering**: Internally, reasoning blocks are processed and rendered using a specialized UI component. When saved or exported, they may be represented as HTML `<details type="reasoning">` tags.
---
Open WebUI follows the **OpenAI Chat Completions API standard**. Reasoning content…
### Important Notes
- **Within-turn preservation**: Reasoning is preserved and sent back to the API within the same turn (while tool calls are being processed).
- **Cross-turn behavior**: Reasoning content **is** sent back to the API across turns. When building messages for subsequent requests, Open WebUI serializes the reasoning content with its original tags (e.g., `<think>...</think>`) and includes it in the assistant message's `content` field. This allows the model to maintain context of its previous reasoning throughout the conversation.
- **Text-based serialization**: Reasoning is sent as text wrapped in tags (e.g., `<think>thinking content</think>`), not as structured content blocks. This works with most OpenAI-compatible APIs but may not align with provider-specific formats like Anthropic's extended thinking content blocks.
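Concretely, the text-based serialization described above can be sketched like this. The message shape is an assumption for illustration; the tag follows the `<think>` example used in this document:

```python
# Build an assistant message that carries prior reasoning back to the API.
reasoning = "The user asked for 3 + 4; add the two numbers."
answer = "The total is 7."

assistant_message = {
    "role": "assistant",
    # Reasoning is re-serialized as plain text inside its original tags,
    # then concatenated with the visible answer in the `content` field.
    "content": f"<think>{reasoning}</think>\n{answer}",
}
print(assistant_message["content"])
```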
---
If the model uses tags that are not in the default list and have not been configured…
### Does the model see its own thinking?
**Yes.** Reasoning content is preserved and sent back to the model in both scenarios:

- **Within the same turn (during tool calls)**: When a model makes tool calls, Open WebUI preserves the reasoning content and sends it back to the API as part of the assistant message. This enables the model to maintain context about what it was thinking when it made the tool call.
- **Across different turns**: When building messages for subsequent requests, Open WebUI serializes reasoning content from previous turns with its original tags (e.g., `<think>...</think>`) and includes it in the assistant message's `content` field. This allows the model to reference its previous reasoning throughout the conversation.
0 commit comments