Dev #1132 (Merged)

Open WebUI supports multiple code execution backends, each suited to different use cases.

### Pyodide (Default)

Pyodide runs Python in the browser via WebAssembly. It is sandboxed and safe for multi-user environments, with the following characteristics and constraints:

- **Persistent file storage** — the virtual filesystem at `/mnt/uploads/` is backed by IndexedDB (IDBFS). Files persist across code executions within the same session and survive page reloads.
- **Built-in file browser** — when Code Interpreter is enabled, a file browser panel appears in the chat controls sidebar. You can browse, preview, upload, download, and delete files in the Pyodide filesystem — no terminal needed.
- **User file access** — files attached to messages are automatically placed in `/mnt/uploads/` before code execution, so the model (and your code) can read them directly.
- **Limited library support** — only a subset of Python packages are available. Libraries that rely on C extensions or system calls may not work.
- **No shell access** — cannot run shell commands, install packages, or interact with the OS.

:::tip
Pyodide works well for **text analysis, hash computation, chart generation, file processing**, and other self-contained tasks. Chart libraries like matplotlib produce base64-encoded images that Open WebUI automatically captures, uploads as files, and injects direct image links into the output — so models can display charts directly in chat without any extra setup.
:::
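A minimal, self-contained example of the kind of task Pyodide handles well: computing a SHA-256 hash using only the standard library, with no package loading required.

```python
import hashlib

# Hash a string entirely client-side — no server round-trip needed
text = "Open WebUI"
digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
print(f"SHA-256 of {text!r}: {digest}")
```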

:::warning Best for basic analysis only
Pyodide runs Python via WebAssembly inside the browser. The AI **cannot install additional libraries** beyond the small fixed set listed below — any code that imports an unsupported package will fail. Execution is also **significantly slower** than native Python, and large datasets or CPU-intensive tasks may hit browser memory limits. Pyodide is best suited for **basic file analysis, simple calculations, text processing, and chart generation**. For anything more demanding, use **Open Terminal** instead, which provides full native performance and unrestricted package access inside a Docker container.

Available libraries: micropip, requests, beautifulsoup4, numpy, pandas, matplotlib, seaborn, scikit-learn, scipy, regex, sympy, tiktoken, pytz, and the Python standard library. **Nothing else can be installed at runtime.**
:::

:::note Mutually exclusive with Open Terminal
The Code Interpreter toggle and the Open Terminal toggle cannot be active at the same time. Activating one will deactivate the other — they serve similar purposes but use different execution backends.
:::

### Jupyter (Legacy)

:::caution Legacy Engine
Jupyter is now considered a **legacy** code execution engine. The Pyodide engine is recommended for most use cases, and Open Terminal is recommended when you need full server-side execution. Jupyter support may be deprecated in a future release.
:::

Jupyter provides a full Python environment and can handle virtually any task — file creation, package installation, and complex library usage. However, it has significant drawbacks in shared deployments.

If you are running a multi-user or organizational deployment, **Jupyter is not recommended**.

### Comparison

| Consideration | Pyodide | Jupyter (Legacy) | Open Terminal |
| :--- | :--- | :--- | :--- |
| **Runs in** | Browser (WebAssembly) | Server (Python kernel) | Server (Docker container) |
| **Library support** | Limited subset | Full Python ecosystem | Full OS — any language, any tool |
| **Shell access** | ❌ None | ⚠️ Limited | ✅ Full shell |
| **File persistence** | ✅ IDBFS (persists across executions & reloads) | ✅ Shared filesystem | ✅ Container filesystem (until removal) |
| **File browser** | ✅ Built-in sidebar panel | ❌ None | ✅ Built-in sidebar panel |
| **User file access** | ✅ Attached files placed in `/mnt/uploads/` | ❌ Manual | ✅ Attached files available |
| **Isolation** | ✅ Browser sandbox | ❌ Shared environment | ✅ Container-level (when using Docker) |
| **Multi-user safety** | ✅ Per-user by design | ⚠️ Not isolated | ⚠️ Single instance (per-user containers planned) |
| **File generation** | ❌ Very limited | ✅ Full support | ✅ Full support with upload/download |
| **File generation** | ✅ Write to `/mnt/uploads/`, download via file browser | ✅ Full support | ✅ Full support with upload/download |
| **Setup** | None (built-in) | Admin configures globally | Native integration via Settings → Integrations, or as a Tool Server |
| **Recommended for orgs** | ✅ Safe default | ❌ Not without isolation | ✅ Per-user by design |
| **Enterprise scalability** | ✅ Client-side, no server load | ❌ Single shared instance | ⚠️ Manual per-user instances |
---

Open WebUI provides two ways to execute Python code:
1. **Manual Code Execution**: Run Python code blocks generated by LLMs using a "Run" button in the browser (uses Pyodide/WebAssembly).
2. **Code Interpreter**: An AI capability that allows models to automatically write and execute Python code as part of their response (uses Pyodide or Jupyter).

Both methods support visual outputs like matplotlib charts that can be displayed inline in your chat. When using the Pyodide engine, a **persistent virtual filesystem** at `/mnt/uploads/` is available — files survive across code executions and page reloads, and files attached to messages are automatically placed there for your code to access.

## Code Interpreter Capability

The Code Interpreter is a model capability that enables LLMs to write and execute Python code as part of their responses.

These settings can be configured at **Admin Panel → Settings → Code Execution**:
- Enable/disable code interpreter
- Select engine: **Pyodide** (recommended) or **Jupyter (Legacy)**
- Configure Jupyter connection settings
- Set blocked modules

| Variable | Default | Description |
|----------|---------|-------------|
| `ENABLE_CODE_INTERPRETER` | `true` | Enable/disable code interpreter globally |
| `CODE_INTERPRETER_ENGINE` | `pyodide` | Engine to use: `pyodide` (browser, recommended) or `jupyter` (server, legacy) |
| `CODE_INTERPRETER_PROMPT_TEMPLATE` | (built-in) | Custom prompt template for code interpreter |
| `CODE_INTERPRETER_BLACKLISTED_MODULES` | `""` | Comma-separated list of blocked Python modules |

For Jupyter configuration, see the [Jupyter Notebook Integration](/tutorials/integrations/dev-tools/jupyter) tutorial.
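As an illustration, these variables can be set when launching the container. This is a hypothetical `docker run` sketch — adjust the image tag, port mapping, and the blocked-modules list (shown here with illustrative values) to your deployment:

```shell
# Start Open WebUI with the Pyodide engine pinned and two modules blocked
docker run -d -p 3000:8080 \
  -e ENABLE_CODE_INTERPRETER=true \
  -e CODE_INTERPRETER_ENGINE=pyodide \
  -e CODE_INTERPRETER_BLACKLISTED_MODULES="socket,subprocess" \
  ghcr.io/open-webui/open-webui:main
```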

:::note Filesystem Prompt Injection
When the Pyodide engine is selected, Open WebUI automatically appends a filesystem-awareness prompt to the code interpreter instructions. This tells the model about `/mnt/uploads/` and how to discover user-uploaded files. When using Jupyter, this filesystem prompt is not appended (since Jupyter has its own filesystem). You do not need to include filesystem instructions in your custom `CODE_INTERPRETER_PROMPT_TEMPLATE` — they are added automatically.
:::

### Native Function Calling (Native Mode)

When using **Native function calling mode** with a capable model (e.g., GPT-5, Claude 4.5, MiniMax M2.5), the code interpreter is available as a builtin tool called `execute_code`. This provides a more integrated experience:

Open WebUI includes a browser-based Python environment using [Pyodide](https://pyodide.org/) (WebAssembly). This allows running Python scripts directly in your browser with no server-side setup.

The Pyodide worker is **persistent** — it is created once and reused across code executions. This means variables, imported modules, and files written to the virtual filesystem are retained between executions within the same session.
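Because the worker persists, module-level state survives between runs. A small illustrative pattern (ordinary Python, not an Open WebUI API) that counts executions within a session:

```python
# On the very first run of the session, `counter` is undefined and the
# increment raises NameError; on later runs it survives in the worker's globals.
try:
    counter += 1
except NameError:
    counter = 1

print(f"Snippets executed this session: {counter}")
```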

### Running Code Manually

1. Ask an LLM to write Python code

### Supported Libraries

Pyodide includes the following packages, which are auto-detected from import statements and loaded on demand:

| Package | Use case |
|---------|----------|
| micropip | Package installer (internal use) |
| requests | HTTP requests |
| beautifulsoup4 | HTML/XML parsing |
| numpy | Numerical computing |
| pandas | Data analysis and manipulation |
| matplotlib | Chart and plot generation |
| seaborn | Statistical data visualization |
| scikit-learn | Machine learning |
| scipy | Scientific computing |
| regex | Advanced regular expressions |
| sympy | Symbolic mathematics |
| tiktoken | Token counting for LLMs |
| pytz | Timezone handling |

The Python standard library is also fully available (json, csv, math, datetime, os, io, etc.).

:::warning No runtime installation
The AI **cannot install additional libraries** beyond the list above. Any code that imports an unsupported package will fail with an import error. Packages that require C extensions, system calls, or native binaries (e.g., torch, tensorflow, opencv, psycopg2) are **not available** and cannot be made available in Pyodide. Pyodide is best suited for **basic file analysis, simple calculations, text processing, and chart generation**. For full Python package access, use **[Open Terminal](/features/chat-conversations/chat-features/code-execution#open-terminal)** instead.
:::
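Code that should degrade gracefully can probe the runtime before importing. This is a generic standard-library sketch, not a Pyodide-specific API:

```python
import importlib.util

def check_available(names):
    """Map each package name to whether it can be imported in this runtime."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

# Stdlib modules are always present in Pyodide; torch never is.
availability = check_available(["json", "hashlib", "torch"])
print(availability)
```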

## Persistent File System

When using the Pyodide engine, a persistent virtual filesystem is mounted at `/mnt/uploads/`. This filesystem is backed by the browser's IndexedDB via [IDBFS](https://emscripten.org/docs/api_reference/Filesystem-API.html#filesystem-api-idbfs) and provides:

- **Cross-execution persistence** — files written by one code execution are accessible in subsequent executions.
- **Cross-reload persistence** — files survive page reloads (stored in IndexedDB).
- **Automatic upload mounting** — files attached to messages are fetched from the server and placed in `/mnt/uploads/` before code execution, so the model can read them directly.
- **File browser panel** — when Code Interpreter is enabled, a file browser appears in the chat controls sidebar. You can browse, preview, upload, download, and delete files — no terminal needed.

### Working with Files in Code

```python
import os

# List uploaded files
print(os.listdir('/mnt/uploads'))

# Read a user-uploaded CSV
import pandas as pd
df = pd.read_csv('/mnt/uploads/data.csv')
print(df.head())

# Write output to the persistent filesystem (downloadable via file browser)
df.to_csv('/mnt/uploads/result.csv', index=False)
print('Saved result.csv to /mnt/uploads/')
```

:::tip
The file browser panel lets you download any file the model creates. Ask the model to save its output to `/mnt/uploads/` and it will appear in the file browser for download.
:::

:::note Jupyter Engine
The persistent filesystem prompt and `/mnt/uploads/` integration are **Pyodide-only**. When using the Jupyter engine, files are managed through Jupyter's own filesystem. The file browser panel is not available for Jupyter.
:::

## Example: Creating a Chart
---

Organize existing chats by moving them into folders.

Folders can be nested within other folders to create hierarchical organization:

- **Create subfolder from menu**: Right-click (or click the three-dot menu ⋯) on any folder and select **"Create Folder"** to create a new subfolder directly inside it.
- **Drag and drop**: Drag a folder onto another folder to make it a subfolder.
- **Move via context menu**: Right-click on a folder and use the move option to relocate it under a different parent.
- Folders can be expanded or collapsed to show/hide their contents.
- Subfolder names must be unique within the same parent folder. If a duplicate name is entered, a number is automatically appended (e.g., "Notes 1").
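The numbering rule can be pictured with a short sketch; `dedupe_folder_name` is a hypothetical helper, not Open WebUI's actual implementation:

```python
def dedupe_folder_name(name, siblings):
    """Append the first free number when `name` collides with a sibling folder."""
    if name not in siblings:
        return name
    n = 1
    while f"{name} {n}" in siblings:
        n += 1
    return f"{name} {n}"

print(dedupe_folder_name("Notes", {"Notes"}))             # "Notes 1"
print(dedupe_folder_name("Notes", {"Notes", "Notes 1"}))  # "Notes 2"
```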

### Starting a Chat in a Folder

---

If your model uses different tags, you can provide a list of tag pairs in the `reasoning_tags` parameter.
## Configuration & Behavior

- **Stripping from Payload**: The `reasoning_tags` parameter itself is an Open WebUI-specific control and is **stripped** from the payload before being sent to the LLM backend (OpenAI, Ollama, etc.). This ensures compatibility with providers that do not recognize this parameter.
- **Chat History**: Reasoning content is preserved in chat history and **sent back to the model** across turns. When building messages for subsequent requests, Open WebUI serializes the reasoning content with its original tags (e.g., `<think>...</think>`) and includes it in the assistant message's `content` field. This allows the model to "remember" its previous reasoning steps across the entire conversation.
- **UI Rendering**: Internally, reasoning blocks are processed and rendered using a specialized UI component. When saved or exported, they may be represented as HTML `<details type="reasoning">` tags.
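The stripping step can be sketched as a filter over request parameters; the names below are illustrative, not Open WebUI's internal code:

```python
# Parameters that only Open WebUI understands and that must not reach the backend
UI_ONLY_PARAMS = {"reasoning_tags"}

def build_backend_payload(params: dict) -> dict:
    """Drop UI-only controls before forwarding the request to the LLM backend."""
    return {k: v for k, v in params.items() if k not in UI_ONLY_PARAMS}

payload = build_backend_payload(
    {"model": "gpt-4o", "reasoning_tags": [["<think>", "</think>"]]}
)
print(payload)  # reasoning_tags removed
```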

---
Open WebUI follows the **OpenAI Chat Completions API standard**.

### Important Notes

- **Within-turn preservation**: Reasoning is preserved and sent back to the API within the same turn (while tool calls are being processed).
- **Cross-turn behavior**: Reasoning content **is** sent back to the API across turns. When building messages for subsequent requests, Open WebUI serializes the reasoning content with its original tags (e.g., `<think>...</think>`) and includes it in the assistant message's `content` field. This allows the model to maintain context of its previous reasoning throughout the conversation.
- **Text-based serialization**: Reasoning is sent as text wrapped in tags (e.g., `<think>thinking content</think>`), not as structured content blocks. This works with most OpenAI-compatible APIs but may not align with provider-specific formats like Anthropic's extended thinking content blocks.
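The text-based serialization can be sketched as follows; this is a simplified illustration, not the exact Open WebUI implementation:

```python
def serialize_assistant_content(reasoning, answer, tags=("<think>", "</think>")):
    """Re-wrap stored reasoning in its original tags and prepend it to the answer."""
    open_tag, close_tag = tags
    if reasoning:
        return f"{open_tag}{reasoning}{close_tag}\n{answer}"
    return answer

msg = {
    "role": "assistant",
    "content": serialize_assistant_content("Check units first.", "The result is 42."),
}
print(msg["content"])
```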

---

### Does the model see its own thinking?

**Yes.** Reasoning content is preserved and sent back to the model in both scenarios:

- **Within the same turn (during tool calls)**: **Yes**. When a model makes tool calls, Open WebUI preserves the reasoning content and sends it back to the API as part of the assistant message. This enables the model to maintain context about what it was thinking when it made the tool call.

- **Across different turns**: **Yes**. When building messages for subsequent requests, Open WebUI serializes reasoning content from previous turns with its original tags (e.g., `<think>...</think>`) and includes it in the assistant message's `content` field. This allows the model to reference its previous reasoning throughout the conversation.

### How is reasoning sent during tool calls?
