From 4fd81c8161cf16319635094dbb2e99f955d2f991 Mon Sep 17 00:00:00 2001 From: Classic298 <27028174+Classic298@users.noreply.github.com> Date: Sat, 9 May 2026 16:55:29 +0200 Subject: [PATCH 01/15] calendar --- docs/features/authentication-access/rbac/permissions.md | 1 + docs/features/calendar/index.md | 5 +++++ docs/reference/env-configuration.mdx | 7 +++++++ 3 files changed, 13 insertions(+) diff --git a/docs/features/authentication-access/rbac/permissions.md b/docs/features/authentication-access/rbac/permissions.md index 3c42803cd..6c9f07820 100644 --- a/docs/features/authentication-access/rbac/permissions.md +++ b/docs/features/authentication-access/rbac/permissions.md @@ -78,6 +78,7 @@ Controls what users can share with the community or make public. | **Share Notes** | **(Parent)** Ability to share Notes. | | **Public Notes** | *(Requires Share Notes)* Ability to make Notes public. | | **Chats Public Sharing** | *(Requires Share Chat)* Ability to make a chat share link reachable by anyone (including unauthenticated visitors). When disabled, users can still share chats with specific users or groups via the access-control selector, but the "Public" option is hidden for non-admins. Admins are always exempt. | +| **Calendars Public Sharing** | *(Requires Features > Calendar)* Ability to make a calendar publicly readable or writable by every user with the Calendar feature. When disabled, wildcard access grants are stripped from calendar create/update payloads — owners can still share with specific users or groups. Admins are always exempt. | ### 3. Chat Permissions Controls the features available to the user inside the chat interface. 
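The stripping behaviour described in the new permission row can be sketched in a few lines. This is an illustrative sketch only, not Open WebUI's actual implementation; the grant dictionary shape and the `public` principal name are assumptions made for the example:

```python
# Illustrative sketch (not Open WebUI's actual code): drop wildcard
# ("public") principals from a calendar's access-grant list when the
# owner lacks the Calendars Public Sharing permission. Per-user and
# per-group grants pass through untouched, and admins are exempt.

def filter_access_grants(grants, allow_public_sharing, is_admin):
    """Silently strip wildcard grants unless public sharing is permitted."""
    if allow_public_sharing or is_admin:
        return list(grants)
    return [g for g in grants if g.get("principal") != "public"]

grants = [
    {"principal": "public", "mode": "read"},
    {"principal": "user:alice", "mode": "write"},
    {"principal": "group:team-a", "mode": "read"},
]

# Non-admin owner without the permission: wildcard grant is dropped.
filtered = filter_access_grants(grants, allow_public_sharing=False, is_admin=False)
```

Note how the check is skipped entirely for admins, matching the "Admins are always exempt" rule in the row above.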
diff --git a/docs/features/calendar/index.md b/docs/features/calendar/index.md index 747c2e6f3..bd9d0bbe3 100644 --- a/docs/features/calendar/index.md +++ b/docs/features/calendar/index.md @@ -195,6 +195,10 @@ Calendars support the same access grant system used by knowledge bases, models, Only the calendar **owner** (or an admin) can manage access grants and delete the calendar itself. +:::info Public sharing is permission-gated +Wildcard access grants (calendar readable or writable by every user with the Calendar feature) are gated by the **Calendars Public Sharing** permission. When disabled for a non-admin owner, public principals are silently stripped from the access grant list on calendar create/update — per-user and per-group grants remain unaffected. Admins always retain the ability to share publicly. Configurable per-group in **Admin Panel → Users → Groups → Permissions** or via [`USER_PERMISSIONS_CALENDAR_ALLOW_PUBLIC_SHARING`](/reference/env-configuration#user_permissions_calendar_allow_public_sharing). 
+::: + --- ## Attendees and RSVP @@ -243,6 +247,7 @@ The global alert polling window is configurable via [`CALENDAR_ALERT_LOOKAHEAD_M |----------|---------|-------------| | [`ENABLE_CALENDAR`](/reference/env-configuration#enable_calendar) | `True` | Enable or disable the Calendar feature globally | | [`USER_PERMISSIONS_FEATURES_CALENDAR`](/reference/env-configuration#user_permissions_features_calendar) | `True` | Enable or disable Calendar access for non-admin users by default | +| [`USER_PERMISSIONS_CALENDAR_ALLOW_PUBLIC_SHARING`](/reference/env-configuration#user_permissions_calendar_allow_public_sharing) | `False` | Allow non-admin owners to attach wildcard read/write access grants to a calendar | | [`SCHEDULER_POLL_INTERVAL`](/reference/env-configuration#scheduler_poll_interval) | `10` | Seconds between scheduler ticks (shared with automations) | | [`CALENDAR_ALERT_LOOKAHEAD_MINUTES`](/reference/env-configuration#calendar_alert_lookahead_minutes) | `10` | Default alert window in minutes for upcoming events | diff --git a/docs/reference/env-configuration.mdx b/docs/reference/env-configuration.mdx index cded5657e..5ff5f459c 100644 --- a/docs/reference/env-configuration.mdx +++ b/docs/reference/env-configuration.mdx @@ -6105,6 +6105,13 @@ These settings control whether users can share workspace items **publicly**. - Description: Enables or disables **public sharing** of chat conversations. When disabled, the access-control selector in the chat share modal hides the "Public" option for non-admin users — they can still create share links scoped to specific users or groups, but cannot make a chat reachable by anyone with the link. Admins always retain the ability to share chats publicly. Requires `USER_PERMISSIONS_CHAT_SHARE` (Share Chat) to be enabled for the user. Configurable per-group in **Admin Panel → Users → Groups → Permissions → Chats Public Sharing**. - Persistence: This environment variable is a `PersistentConfig` variable. 
+#### `USER_PERMISSIONS_CALENDAR_ALLOW_PUBLIC_SHARING` + +- Type: `bool` +- Default: `False` +- Description: Enables or disables **public sharing** of calendars. When disabled, non-admin owners cannot attach a wildcard `read` or `write` access grant to a calendar on create or update — public principals are silently filtered out of the access grant list, so a calendar cannot be made readable or writable by every user with the Calendar feature without an admin-granted sharing permission. Per-user and per-group grants remain unaffected. Admins always retain the ability to share calendars publicly. Configurable per-group in **Admin Panel → Users → Groups → Permissions → Calendars Public Sharing**. +- Persistence: This environment variable is a `PersistentConfig` variable. + ### Access Grants #### `USER_PERMISSIONS_ACCESS_GRANTS_ALLOW_USERS` From a1aedd8b1e044af8d4777f103a8c194483f5a561 Mon Sep 17 00:00:00 2001 From: Classic298 <27028174+Classic298@users.noreply.github.com> Date: Sat, 9 May 2026 17:13:00 +0200 Subject: [PATCH 02/15] 0.9.5 --- docs/getting-started/quick-start/tab-docker/ManualDocker.md | 6 +++--- docs/getting-started/updating.mdx | 6 +++--- docs/reference/database-schema.md | 2 +- docs/reference/env-configuration.mdx | 2 +- 4 files changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/getting-started/quick-start/tab-docker/ManualDocker.md b/docs/getting-started/quick-start/tab-docker/ManualDocker.md index 80b79d7bb..b944625d4 100644 --- a/docs/getting-started/quick-start/tab-docker/ManualDocker.md +++ b/docs/getting-started/quick-start/tab-docker/ManualDocker.md @@ -49,9 +49,9 @@ Visit [http://localhost:3000](http://localhost:3000). 
For production environments, pin a specific version instead of using floating tags: ```bash -docker pull ghcr.io/open-webui/open-webui:v0.9.0 -docker pull ghcr.io/open-webui/open-webui:v0.9.0-cuda -docker pull ghcr.io/open-webui/open-webui:v0.9.0-ollama +docker pull ghcr.io/open-webui/open-webui:v0.9.5 +docker pull ghcr.io/open-webui/open-webui:v0.9.5-cuda +docker pull ghcr.io/open-webui/open-webui:v0.9.5-ollama ``` --- diff --git a/docs/getting-started/updating.mdx b/docs/getting-started/updating.mdx index 9796b610b..68a118ccd 100644 --- a/docs/getting-started/updating.mdx +++ b/docs/getting-started/updating.mdx @@ -31,9 +31,9 @@ The `:main` tag always points to the **latest build**. It's convenient but can i For stability, pin a specific release tag: ``` -ghcr.io/open-webui/open-webui:v0.9.0 -ghcr.io/open-webui/open-webui:v0.9.0-cuda -ghcr.io/open-webui/open-webui:v0.9.0-ollama +ghcr.io/open-webui/open-webui:v0.9.5 +ghcr.io/open-webui/open-webui:v0.9.5-cuda +ghcr.io/open-webui/open-webui:v0.9.5-ollama ``` Browse all available tags on the [GitHub releases page](https://github.com/open-webui/open-webui/releases). diff --git a/docs/reference/database-schema.md b/docs/reference/database-schema.md index 482e4a4f5..8b5ab256e 100644 --- a/docs/reference/database-schema.md +++ b/docs/reference/database-schema.md @@ -10,7 +10,7 @@ This tutorial is a community contribution and is not supported by the Open WebUI ::: > [!WARNING] -> This documentation reflects schema changes up to Open WebUI v0.9.0. +> This documentation reflects schema changes up to Open WebUI v0.9.5. 
## Open-WebUI Internal SQLite Database diff --git a/docs/reference/env-configuration.mdx b/docs/reference/env-configuration.mdx index 5ff5f459c..bf92d1fe6 100644 --- a/docs/reference/env-configuration.mdx +++ b/docs/reference/env-configuration.mdx @@ -12,7 +12,7 @@ As new variables are introduced, this page will be updated to reflect the growin :::info -This page is up-to-date with Open WebUI release version [v0.9.0](https://github.com/open-webui/open-webui/releases/tag/v0.9.0), but is still a work in progress to later include more accurate descriptions, listing out options available for environment variables, defaults, and improving descriptions. +This page is up-to-date with Open WebUI release version [v0.9.5](https://github.com/open-webui/open-webui/releases/tag/v0.9.5), but is still a work in progress to later include more accurate descriptions, listing out options available for environment variables, defaults, and improving descriptions. ::: From 1ed3734f7b7a09394802849b6fa55050c4c8f35c Mon Sep 17 00:00:00 2001 From: Classic298 <27028174+Classic298@users.noreply.github.com> Date: Sat, 9 May 2026 23:44:46 +0200 Subject: [PATCH 03/15] docs(env-configuration): clarify BYPASS_ADMIN_ACCESS_CONTROL is a UI/posture flag, not a tenant-isolation primitive MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The previous copy ("admins are treated like regular users for workspace access ... only see items they have explicit permission to access") read to multiple security-report submitters as a hard access-control enforcement at every API endpoint, including a tenant-isolation primitive between admins. It isn't, and was never designed to be. Rewrite the description to make the actual scope explicit: - Lists the three converging reasons the flag exists (performance, UI clutter, compliance posture for jurisdictions with stronger labour-protection law) — none of which is tenant isolation. 
- Calls out by name that per-id direct-access endpoints are intentionally not gated by this flag and were never designed to be, to pre-empt the recurring "missed migration" misreading. - Restates the architectural invariant that Open WebUI is single-tenant and admin is root-equivalent (DB / env / server / Functions / Tools), with the explicit note that for genuine cross-tenant isolation the supported pattern is separate instances. - Anchors the analogy to the analytics-page visibility toggle, which follows the same "hide from admin's UI surfaces, do not change the underlying data semantics" pattern. No code change, no behavioural change — only documentation copy. Closes the doc side of the recurring confusion that produced GHSA-8h93-446x-834j (and the earlier related reads). --- docs/reference/env-configuration.mdx | 25 ++++++++++++++++++++++++- 1 file changed, 24 insertions(+), 1 deletion(-) diff --git a/docs/reference/env-configuration.mdx b/docs/reference/env-configuration.mdx index bf92d1fe6..60673bcb5 100644 --- a/docs/reference/env-configuration.mdx +++ b/docs/reference/env-configuration.mdx @@ -384,7 +384,30 @@ is also being used and set to `True`. **Never disable this if OAUTH/SSO is not b - Type: `bool` - Default: `True` -- Description: When disabled, admin users are treated like regular users for workspace access (models, knowledge, prompts, tools, and notes) and only see items they have **explicit permission to access** through the existing access control system. This also applies to the visibility of models in the model selector - admins will be treated as regular users: base models and custom models they do not have **explicit permission to access**, will be hidden. If set to `True` (Default), admins have access to **all created items** in the workspace area (including other users' notes) and all models in the model selector, **regardless of access permissions**. This environment variable deprecates `ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS`. 
If you are still using `ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS` you should switch to `BYPASS_ADMIN_ACCESS_CONTROL`. +- Description: Controls whether admin users see other users' workspace items in **list and selector UI surfaces** (workspace tabs for models / knowledge / prompts / tools / notes / skills, the chat model selector, the file browser list, etc.). When set to `True` (default), those surfaces show every item from every user — convenient for single-admin / small-team deployments. When set to `False`, those same surfaces show only the admin's own items plus items explicitly shared with them, matching what a regular user would see. + + **Why this exists.** Three converging reasons, all UX/posture, none of them tenant isolation: + + 1. **Performance.** On large multi-user deployments (think 5,000 users, 50,000 user-created items), loading every user's items into the admin's selector and workspace lists is a hard performance problem before any UI even renders. Enterprise admins typically use their admin account as their day-to-day account — they don't have a separate user account to switch into — so this gate keeps their normal product surfaces fast. + 2. **UI clutter reduction.** Even if loading were free, an admin's selector and workspace lists become unusable when populated with everyone else's items. + 3. **Compliance posture.** Many enterprise deployments — especially in jurisdictions with stronger labour-protection law (Germany, Austria, the broader EU) — require that admins not have *easy* access to other users' data even when they technically *could* reach it via direct database access. This flag is part of that posture: the admin is not casually presented with other users' content in their normal product surfaces. + + **What this is *not*.** This flag is **not** a hard access-control enforcement across every API endpoint, and it is **not** a tenant-isolation primitive. 
Specifically: + + - Per-id direct-access endpoints (`GET /api/v1//id/{id}`, the corresponding update / delete / access-update routes) are **intentionally not gated** by this flag. They were never designed to be — gating them would protect against nothing (see below) while breaking legitimate flows where an admin operates on a known item ID. + - The flag controls the admin's **casual surface area** on the data, not the data's underlying access semantics. The same pattern applies to other admin-facing UI toggles in the product (e.g. the analytics-page visibility toggle hides the page from the admin's UI but does not stop analytics from being collected). + + **Admin is root-equivalent on the deployment.** Open WebUI is single-tenant by architecture. There is no cryptographic per-tenant isolation, no multi-database split, no per-admin scope on the underlying data store. Admins inherently retain full access to all data via: + + - Direct database access on the deployment. + - Environment-variable inspection and the ability to rotate this flag back on. + - Server access (filesystem, processes, secrets). + - Admin-only **Functions** (execute arbitrary Python at module-import time inside the application process). + - **Tools** with the `workspace.tools` permission, which the admin can grant to themselves (treat as root-equivalent — see [Plugin Security documentation](/features/extensibility/plugin)). + + This flag changes none of that, and it is not advertised to. **For genuine cross-tenant data isolation between admins, the supported deployment pattern is separate Open WebUI instances per tenant.** + + This environment variable deprecates `ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS`. If you are still using `ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS`, switch to `BYPASS_ADMIN_ACCESS_CONTROL`. 
#### `ENABLE_USER_WEBHOOKS` From cfdc5d5cfad153b52e3dc4c788424f2ed9163434 Mon Sep 17 00:00:00 2001 From: Classic298 <27028174+Classic298@users.noreply.github.com> Date: Sat, 9 May 2026 23:49:24 +0200 Subject: [PATCH 04/15] docs(env-configuration): consolidate admin posture toggle clarification across the cluster --- docs/reference/env-configuration.mdx | 48 +++++++++++++++------------- 1 file changed, 25 insertions(+), 23 deletions(-) diff --git a/docs/reference/env-configuration.mdx b/docs/reference/env-configuration.mdx index 60673bcb5..d2f2f3b1c 100644 --- a/docs/reference/env-configuration.mdx +++ b/docs/reference/env-configuration.mdx @@ -362,23 +362,44 @@ is also being used and set to `True`. **Never disable this if OAUTH/SSO is not b - Description: Sets a webhook for integration with Discord/Slack/Microsoft Teams. - Persistence: This environment variable is a `PersistentConfig` variable. +:::note Admin posture toggles vs. security boundaries + +The `ENABLE_ADMIN_*` and `BYPASS_ADMIN_ACCESS_CONTROL` toggles in this section control what the admin sees and does **through Open WebUI's admin product surfaces** (admin panel pages, the chat model selector, workspace lists, the export action). They do **not** establish a security boundary against the admin themselves. + +Open WebUI is single-tenant by architecture and the `admin` role is root-equivalent on the deployment by deliberate design. Admins inherently retain unconstrained access to all data via direct database access, environment-variable inspection, server access, admin-only **Functions** (which execute arbitrary Python at module-import inside the application process), and **Tools** with the `workspace.tools` permission (which the admin can grant to themselves and which run `exec()` on the server). + +These toggles are appropriate for: + +- **Performance** — keeping admin-facing list/selector surfaces fast on large multi-user deployments where loading every user's items is a hard performance problem. 
+- **UI clutter reduction** — keeping those same surfaces usable when populated with thousands of items from other users. +- **Compliance posture** — meeting requirements (especially in jurisdictions with stronger labour-protection law, e.g. DE / AT / EU) that admins not be *casually* presented with other users' data, even though they remain technically able to reach it via the routes above. + +These toggles are **not** appropriate for: + +- Cross-tenant data isolation between admins. There is no cryptographic per-tenant isolation, no multi-database split, and no per-admin scope on the underlying data store. For genuine tenant separation, the supported pattern is **separate Open WebUI instances per tenant**. +- A security boundary against an admin who is determined to read data they aren't shown in their UI surfaces. + +Treat anything in this cluster as *what the admin sees and does in the product UI and API*, not *what the admin technically can reach on the deployment*. + +::: + #### `ENABLE_ADMIN_EXPORT` - Type: `bool` - Default: `True` -- Description: Controls whether admins can export data, chats and the database in the admin panel. Database exports only work for SQLite databases for now. +- Description: Controls whether the admin-panel **export** action is available (data, chats, and database export). When disabled, the export endpoints reject requests. Database exports only work for SQLite databases for now. Note that admin retains the underlying ability to dump the database directly via deployment access — this toggle controls the in-product export surface, see the admin-posture-toggles note above. Requires a restart to take effect. #### `ENABLE_ADMIN_CHAT_ACCESS` - Type: `bool` - Default: `True` -- Description: Enables admin users to directly access the chats of other users. When disabled, admins can no longer accesss user's chats in the admin panel. 
If you disable this, consider disabling `ENABLE_ADMIN_EXPORT` too, if you are using SQLite, as the exports also contain user chats. +- Description: Controls whether the admin-panel **other-users-chats** access surface is available. When disabled, admins can no longer access other users' chats in the admin panel and the corresponding endpoints reject the request. If you disable this, consider also disabling `ENABLE_ADMIN_EXPORT` (especially on SQLite), since exports include user chats and would re-open the same data on a different surface. Note that admin retains underlying database access regardless — this toggle controls the in-product surface, see the admin-posture-toggles note above. #### `ENABLE_ADMIN_ANALYTICS` - Type: `bool` - Default: `True` -- Description: Controls whether the **Analytics** tab is visible and accessible in the admin panel. When set to `False`, the analytics API router is not mounted and the tab is hidden from the admin navigation. Useful for deployments where analytics data collection or display is not desired. Requires a restart to take effect. +- Description: Controls whether the admin-panel **Analytics** tab is visible and the analytics API router is mounted. When set to `False`, the tab is hidden and the corresponding endpoints are not registered. Disabling does not stop the underlying data being collected, and admin retains the ability to query that data directly from the database — this toggle controls the in-product surface, see the admin-posture-toggles note above. Requires a restart to take effect. #### `BYPASS_ADMIN_ACCESS_CONTROL` @@ -386,26 +407,7 @@ is also being used and set to `True`. **Never disable this if OAUTH/SSO is not b - Default: `True` - Description: Controls whether admin users see other users' workspace items in **list and selector UI surfaces** (workspace tabs for models / knowledge / prompts / tools / notes / skills, the chat model selector, the file browser list, etc.). 
When set to `True` (default), those surfaces show every item from every user — convenient for single-admin / small-team deployments. When set to `False`, those same surfaces show only the admin's own items plus items explicitly shared with them, matching what a regular user would see. - **Why this exists.** Three converging reasons, all UX/posture, none of them tenant isolation: - - 1. **Performance.** On large multi-user deployments (think 5,000 users, 50,000 user-created items), loading every user's items into the admin's selector and workspace lists is a hard performance problem before any UI even renders. Enterprise admins typically use their admin account as their day-to-day account — they don't have a separate user account to switch into — so this gate keeps their normal product surfaces fast. - 2. **UI clutter reduction.** Even if loading were free, an admin's selector and workspace lists become unusable when populated with everyone else's items. - 3. **Compliance posture.** Many enterprise deployments — especially in jurisdictions with stronger labour-protection law (Germany, Austria, the broader EU) — require that admins not have *easy* access to other users' data even when they technically *could* reach it via direct database access. This flag is part of that posture: the admin is not casually presented with other users' content in their normal product surfaces. - - **What this is *not*.** This flag is **not** a hard access-control enforcement across every API endpoint, and it is **not** a tenant-isolation primitive. Specifically: - - - Per-id direct-access endpoints (`GET /api/v1//id/{id}`, the corresponding update / delete / access-update routes) are **intentionally not gated** by this flag. They were never designed to be — gating them would protect against nothing (see below) while breaking legitimate flows where an admin operates on a known item ID. 
- - The flag controls the admin's **casual surface area** on the data, not the data's underlying access semantics. The same pattern applies to other admin-facing UI toggles in the product (e.g. the analytics-page visibility toggle hides the page from the admin's UI but does not stop analytics from being collected). - - **Admin is root-equivalent on the deployment.** Open WebUI is single-tenant by architecture. There is no cryptographic per-tenant isolation, no multi-database split, no per-admin scope on the underlying data store. Admins inherently retain full access to all data via: - - - Direct database access on the deployment. - - Environment-variable inspection and the ability to rotate this flag back on. - - Server access (filesystem, processes, secrets). - - Admin-only **Functions** (execute arbitrary Python at module-import time inside the application process). - - **Tools** with the `workspace.tools` permission, which the admin can grant to themselves (treat as root-equivalent — see [Plugin Security documentation](/features/extensibility/plugin)). - - This flag changes none of that, and it is not advertised to. **For genuine cross-tenant data isolation between admins, the supported deployment pattern is separate Open WebUI instances per tenant.** + Per-id direct-access endpoints (`GET /api/v1//id/{id}` and the corresponding update / delete / access-update routes) are **intentionally not gated** by this flag and were never designed to be — gating them would protect against nothing the admin couldn't trivially do via the database query they used to obtain the resource ID in the first place, while breaking legitimate flows where an admin operates on a known item ID. See the admin-posture-toggles note above for the full architectural reasoning. This environment variable deprecates `ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS`. If you are still using `ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS`, switch to `BYPASS_ADMIN_ACCESS_CONTROL`. 
From cb39cc90cb50e435eb85a18fc15802d028ec9891 Mon Sep 17 00:00:00 2001 From: Classic298 <27028174+Classic298@users.noreply.github.com> Date: Sat, 9 May 2026 23:51:52 +0200 Subject: [PATCH 05/15] Docs/clarify bypass admin access control (#8) --- docs/reference/env-configuration.mdx | 33 ++++++++++++++++++++++++---- 1 file changed, 29 insertions(+), 4 deletions(-) diff --git a/docs/reference/env-configuration.mdx b/docs/reference/env-configuration.mdx index bf92d1fe6..d2f2f3b1c 100644 --- a/docs/reference/env-configuration.mdx +++ b/docs/reference/env-configuration.mdx @@ -362,29 +362,54 @@ is also being used and set to `True`. **Never disable this if OAUTH/SSO is not b - Description: Sets a webhook for integration with Discord/Slack/Microsoft Teams. - Persistence: This environment variable is a `PersistentConfig` variable. +:::note Admin posture toggles vs. security boundaries + +The `ENABLE_ADMIN_*` and `BYPASS_ADMIN_ACCESS_CONTROL` toggles in this section control what the admin sees and does **through Open WebUI's admin product surfaces** (admin panel pages, the chat model selector, workspace lists, the export action). They do **not** establish a security boundary against the admin themselves. + +Open WebUI is single-tenant by architecture and the `admin` role is root-equivalent on the deployment by deliberate design. Admins inherently retain unconstrained access to all data via direct database access, environment-variable inspection, server access, admin-only **Functions** (which execute arbitrary Python at module-import inside the application process), and **Tools** with the `workspace.tools` permission (which the admin can grant to themselves and which run `exec()` on the server). + +These toggles are appropriate for: + +- **Performance** — keeping admin-facing list/selector surfaces fast on large multi-user deployments where loading every user's items is a hard performance problem. 
+- **UI clutter reduction** — keeping those same surfaces usable when populated with thousands of items from other users. +- **Compliance posture** — meeting requirements (especially in jurisdictions with stronger labour-protection law, e.g. DE / AT / EU) that admins not be *casually* presented with other users' data, even though they remain technically able to reach it via the routes above. + +These toggles are **not** appropriate for: + +- Cross-tenant data isolation between admins. There is no cryptographic per-tenant isolation, no multi-database split, and no per-admin scope on the underlying data store. For genuine tenant separation, the supported pattern is **separate Open WebUI instances per tenant**. +- A security boundary against an admin who is determined to read data they aren't shown in their UI surfaces. + +Treat anything in this cluster as *what the admin sees and does in the product UI and API*, not *what the admin technically can reach on the deployment*. + +::: + #### `ENABLE_ADMIN_EXPORT` - Type: `bool` - Default: `True` -- Description: Controls whether admins can export data, chats and the database in the admin panel. Database exports only work for SQLite databases for now. +- Description: Controls whether the admin-panel **export** action is available (data, chats, and database export). When disabled, the export endpoints reject requests. Database exports only work for SQLite databases for now. Note that admin retains the underlying ability to dump the database directly via deployment access — this toggle controls the in-product export surface, see the admin-posture-toggles note above. Requires a restart to take effect. #### `ENABLE_ADMIN_CHAT_ACCESS` - Type: `bool` - Default: `True` -- Description: Enables admin users to directly access the chats of other users. When disabled, admins can no longer accesss user's chats in the admin panel. 
If you disable this, consider disabling `ENABLE_ADMIN_EXPORT` too, if you are using SQLite, as the exports also contain user chats. +- Description: Controls whether the admin-panel **other-users-chats** access surface is available. When disabled, admins can no longer access other users' chats in the admin panel and the corresponding endpoints reject the request. If you disable this, consider also disabling `ENABLE_ADMIN_EXPORT` (especially on SQLite), since exports include user chats and would re-open the same data on a different surface. Note that admin retains underlying database access regardless — this toggle controls the in-product surface, see the admin-posture-toggles note above. #### `ENABLE_ADMIN_ANALYTICS` - Type: `bool` - Default: `True` -- Description: Controls whether the **Analytics** tab is visible and accessible in the admin panel. When set to `False`, the analytics API router is not mounted and the tab is hidden from the admin navigation. Useful for deployments where analytics data collection or display is not desired. Requires a restart to take effect. +- Description: Controls whether the admin-panel **Analytics** tab is visible and the analytics API router is mounted. When set to `False`, the tab is hidden and the corresponding endpoints are not registered. Disabling does not stop the underlying data being collected, and admin retains the ability to query that data directly from the database — this toggle controls the in-product surface, see the admin-posture-toggles note above. Requires a restart to take effect. #### `BYPASS_ADMIN_ACCESS_CONTROL` - Type: `bool` - Default: `True` -- Description: When disabled, admin users are treated like regular users for workspace access (models, knowledge, prompts, tools, and notes) and only see items they have **explicit permission to access** through the existing access control system. 
This also applies to the visibility of models in the model selector - admins will be treated as regular users: base models and custom models they do not have **explicit permission to access**, will be hidden. If set to `True` (Default), admins have access to **all created items** in the workspace area (including other users' notes) and all models in the model selector, **regardless of access permissions**. This environment variable deprecates `ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS`. If you are still using `ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS` you should switch to `BYPASS_ADMIN_ACCESS_CONTROL`. +- Description: Controls whether admin users see other users' workspace items in **list and selector UI surfaces** (workspace tabs for models / knowledge / prompts / tools / notes / skills, the chat model selector, the file browser list, etc.). When set to `True` (default), those surfaces show every item from every user — convenient for single-admin / small-team deployments. When set to `False`, those same surfaces show only the admin's own items plus items explicitly shared with them, matching what a regular user would see. + + Per-id direct-access endpoints (`GET /api/v1//id/{id}` and the corresponding update / delete / access-update routes) are **intentionally not gated** by this flag and were never designed to be — gating them would protect against nothing the admin couldn't trivially do via the database query they used to obtain the resource ID in the first place, while breaking legitimate flows where an admin operates on a known item ID. See the admin-posture-toggles note above for the full architectural reasoning. + + This environment variable deprecates `ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS`. If you are still using `ENABLE_ADMIN_WORKSPACE_CONTENT_ACCESS`, switch to `BYPASS_ADMIN_ACCESS_CONTROL`. 
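Where genuine tenant separation is required, the separate-instances pattern referenced above can be expressed directly in Compose. A minimal sketch, with illustrative service names, host ports, and volume names; each tenant gets its own data volume, so there is no shared database to isolate in the first place:

```yaml
# Illustrative only: one Open WebUI instance per tenant, each with
# its own data volume; nothing is shared between tenants.
services:
  openwebui-tenant-a:
    image: ghcr.io/open-webui/open-webui:main
    ports: ["3001:8080"]
    volumes: ["tenant-a-data:/app/backend/data"]
  openwebui-tenant-b:
    image: ghcr.io/open-webui/open-webui:main
    ports: ["3002:8080"]
    volumes: ["tenant-b-data:/app/backend/data"]

volumes:
  tenant-a-data:
  tenant-b-data:
```

Each instance has its own admin, its own database, and its own configuration; no in-product toggle is being relied on as a boundary.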
#### `ENABLE_USER_WEBHOOKS` From aa8a821f0f63a91da5b68db473a6d5ea4442066c Mon Sep 17 00:00:00 2001 From: Classic298 <27028174+Classic298@users.noreply.github.com> Date: Sun, 10 May 2026 20:29:19 +0200 Subject: [PATCH 06/15] 0.9.5 --- docs/features/channels/index.md | 14 +++++++++++ .../advanced-topics/hardening.md | 22 +++++++++++++++++ .../advanced-topics/scaling.md | 11 +++++++-- docs/reference/env-configuration.mdx | 24 +++++++++++++++++++ 4 files changed, 69 insertions(+), 2 deletions(-) diff --git a/docs/features/channels/index.md b/docs/features/channels/index.md index f206a96ee..0287854ee 100644 --- a/docs/features/channels/index.md +++ b/docs/features/channels/index.md @@ -70,6 +70,20 @@ Channels are **passive by default**. AI doesn't jump into every conversation. Wh This means your team can discuss freely without AI interrupting, and call on exactly the right model when it's needed. +### Full chat-completion pipeline + +Mentioning a model in a channel runs through the same chat-completion pipeline as a standard chat. The reply is **streamed in real time** as the model generates it, and the model has access to the full set of capabilities its configuration grants: + +| Capability | What it enables in a channel | +|------------|------------------------------| +| **Native and default function calling** | Tool calls resolve and execute mid-message | +| **Built-in tools** | Web search, image generation, code interpreter, calendar | +| **User tools and MCP tools** | Whatever the model is configured to call, it can call | +| **Filters** | Inlet/outlet/stream filters apply just like in chats | +| **Knowledge (RAG)** | Knowledge bases attached to the model are queried and injected | + +In other words, a channel-summoned model is a fully-equipped agent — not a one-shot completion. + ### Tagging people and linking channels Use `@username` to notify teammates. Use `#channel-name` to create clickable cross-references between conversations. 
diff --git a/docs/getting-started/advanced-topics/hardening.md b/docs/getting-started/advanced-topics/hardening.md index 59ac626b5..9dc6e3751 100644 --- a/docs/getting-started/advanced-topics/hardening.md +++ b/docs/getting-started/advanced-topics/hardening.md @@ -545,6 +545,12 @@ WEB_FETCH_FILTER_LIST=!internal.yourcompany.com,!10.0.0.0/8 Prefix entries with `!` to block them. +Outbound HTTP requests also do not follow `3xx` redirects by default. Without this gate, an attacker-supplied URL can pass the allowlist check on the originally-submitted host and then `302`-redirect to an internal address (RFC 1918, `127.0.0.1`, the cloud-metadata IP) that is reached without re-validation. The default closes that bypass across the RAG web loader, image loading, OAuth pre-flight, code-interpreter login, and tool-server execution. Keep the default unless you have a specific need (e.g. shortlink URLs) and other SSRF protections are in place: + +```bash +AIOHTTP_CLIENT_ALLOW_REDIRECTS=false +``` + ### Profile image URL forwarding The user and model profile-image endpoints can issue a `302 Found` redirect to whatever origin is stored in `profile_image_url` so that externally-hosted avatars (e.g. Gravatar via an upstream identity provider) display in the UI. That redirect causes the user's browser to make a request directly to the external origin, leaking client IP, User-Agent, and Referer headers — and an account whose `profile_image_url` was set to an attacker-controlled host can use that to deanonymize anyone who renders their avatar. @@ -557,6 +563,22 @@ ENABLE_PROFILE_IMAGE_URL_FORWARDING=false Default is `true` so existing deployments relying on external avatars keep working. Data URIs and same-origin/static images are unaffected by this flag — they continue to render normally. +Profile images stored as base64 `data:` URIs are also constrained to a MIME-type allowlist. 
The default is `image/png,image/jpeg,image/gif,image/webp`; SVG is intentionally excluded because it can carry inline ` + + + +# Open WebUI & AnythingLLM + +*Last updated: May 2026* + +[AnythingLLM](https://anythingllm.com/) by Mintplex Labs is one of our favorite projects in the local AI space. They've made private document Q&A genuinely accessible, the workspace-based approach to organizing knowledge is intuitive, and the team behind it is great. If you're looking for a straightforward way to chat with your documents locally, AnythingLLM is well worth a look. + +[GitHub](https://github.com/Mintplex-Labs/anything-llm) · [MIT License](https://github.com/Mintplex-Labs/anything-llm/blob/master/LICENSE) + +--- + +## What AnythingLLM Does Well + +- **Document Q&A made simple** so you can upload PDFs, code repos, and websites and start asking questions immediately +- **Workspace model** providing clean separation of different knowledge bases and conversations +- **Embedding customization** with control over chunking, overlap, and embedding model selection +- **Desktop app** for a standalone local experience without Docker or servers +- **Cloud deployment option** for teams that want hosted document Q&A +- **Privacy-first** with everything running locally so your documents stay on your machine +- **Multi-modal support** for handling images and other file types alongside text +- **Agent support** with built-in capabilities for tool use and web search +- **Active development** with a responsive team and frequent releases +- **MIT licensed** + +--- + +## What Open WebUI Does Well + +- **Platform breadth** including Chat, Notes, Channels, Automations, Open Terminal, voice/video calls, and image generation +- **Advanced RAG pipeline** with 9 vector databases, 5 extraction engines, hybrid BM25 + vector search with cross-encoder reranking, and agentic retrieval +- **Python extensibility** with custom tools, MCP servers, pipelines, OpenAPI integration, and a community 
marketplace +- **Team features** including Channels for real-time collaboration, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Model agents** that wrap any model with instructions, tools, knowledge, and parameters +- **Enterprise scale** with Kubernetes, horizontal scaling, Redis-backed sessions, OpenTelemetry, and analytics + +--- + +## At a Glance + +| | Open WebUI | AnythingLLM | +| :--- | :--- | :--- | +| **Focus** | Full AI platform with knowledge, tools, and team features | Document Q&A and workspace-based RAG | +| **RAG approach** | 9 vector DBs, 5 extraction engines, hybrid search, reranking | Built-in vector DB with straightforward document ingestion | +| **Organization** | Folders, tags, knowledge bases, notes, channels | Workspaces with dedicated knowledge | +| **Multi-provider** | Ollama, OpenAI, Anthropic, Google, Azure, Bedrock, and more | Ollama, OpenAI, Anthropic, and more | +| **Extensibility** | Python tools, MCP, OpenAPI, pipelines | Agent tools and web search | +| **Desktop app** | Yes | Yes | +| **Multi-user** | SSO/OIDC, LDAP, SCIM 2.0, RBAC, groups | Multi-user with permissions | +| **License** | Open WebUI License | MIT | + +--- + +## When to Use Each + +**Choose AnythingLLM if** you mostly want to chat with your documents. The workspace model keeps different projects cleanly separated, and the desktop app makes it easy to get started without any server setup. + +**Choose Open WebUI if** you need a broader platform with team collaboration, multi-provider support, extensibility, or enterprise features alongside document Q&A. + +**They solve different problems.** AnythingLLM focuses on making document Q&A as simple as possible. Open WebUI takes a wider view with chat, knowledge, collaboration, and tools. Both are good at what they do. + +--- + +*Two projects making private AI and document Q&A accessible. 
Different scope, same commitment to keeping your data under your control.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**How do AnythingLLM and Open WebUI compare?** +AnythingLLM leans into document Q&A with its workspace model. Open WebUI also has knowledge bases, team features, extensibility, and multi-provider support. They have different strengths. + +**Is AnythingLLM free?** +Yes. AnythingLLM is MIT licensed. There's a free desktop app and a self-hosted Docker version. + + +--- + +**Related:** [Open WebUI & LibreChat](./librechat) · [Open WebUI & Dify](./dify) · [Open WebUI & Ollama](./ollama) diff --git a/docs/alternatives/chatgpt.mdx b/docs/alternatives/chatgpt.mdx new file mode 100644 index 000000000..067488b25 --- /dev/null +++ b/docs/alternatives/chatgpt.mdx @@ -0,0 +1,116 @@ +--- +sidebar_position: 30 +title: "Self-Hosted ChatGPT Alternative" +sidebar_label: "Open WebUI & ChatGPT" +description: "Looking for a self-hosted ChatGPT alternative? Open WebUI connects to the OpenAI API and runs on your own infrastructure." +keywords: ["Open WebUI vs ChatGPT", "ChatGPT alternative", "self-hosted ChatGPT", "ChatGPT alternative open source", "ChatGPT self-hosted"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & ChatGPT + +*Last updated: May 2026* + +[ChatGPT](https://chat.openai.com/) by OpenAI introduced hundreds of millions of people to AI and set the bar for what a conversational AI experience should feel like. We use it daily. GPT-5.5, the o-series reasoning models, and the constant pace of innovation have kept it at the forefront, and honestly, it keeps pushing us to make Open WebUI better too. 
+ +Commercial · Free tier available + +--- + +## What ChatGPT Does Well + +- **Frontier models** including GPT-5.5, o3, and the o-series reasoning models +- **Canvas** for collaborative document and code editing inside conversations +- **Projects** for organizing conversations with persistent context and custom instructions +- **Deep research** that synthesizes information across multiple sources into comprehensive reports +- **Refined experience** from years of iteration on the interface +- **GPT Store** with an ecosystem of custom GPTs built by the community +- **Multimodal** with vision, voice, image generation (DALL-E), and code execution +- **Memory** that remembers context across conversations +- **Operator** for agentic web browsing and task automation +- **Enterprise tier** with SSO, admin controls, and data privacy commitments +- **Zero setup** where you sign up and start, no installation needed + +--- + +## What Open WebUI Does Well + +- **Self-hosted** so the platform itself runs on your hardware +- **Any model, any provider** so you can use GPT-5.5 *and* Claude *and* Gemini *and* local models all in one interface +- **Knowledge & RAG** for building knowledge bases from your documents with advanced retrieval +- **Python extensibility** with custom tools, MCP servers, pipelines, and community extensions +- **Team platform** with Channels, Notes, Automations, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full sandboxed computing environment +- **Free community edition** for unlimited users on your own infrastructure + +--- + +## At a Glance + +| | Open WebUI | ChatGPT | +| :--- | :--- | :--- | +| **Models** | Any model from any provider | OpenAI models (GPT-5.5, o-series) | +| **Data** | Self-hosted, your infrastructure | Cloud-hosted by OpenAI | +| **Knowledge & RAG** | 9 vector DBs, 5 extraction engines, hybrid search | File upload with in-chat context | +| **Custom agents** | Model agents with tools, knowledge, and parameters | 
Custom GPTs via GPT Store | +| **Code execution** | Python in-browser + Open Terminal | Built-in code interpreter | +| **Extensibility** | Python tools, MCP, OpenAPI, pipelines | GPT Actions and plugins | +| **Pricing** | Free community edition; Enterprise plans available | Free tier, Plus, Team, and Enterprise plans | + +--- + +## When to Use Each + +**Choose ChatGPT if** you want the simplest possible path to frontier AI. No installation, no configuration, just sign up and start. The native experience with Canvas, Projects, and deep research features is polished and constantly improving. + +**Choose Open WebUI if** you want to run on your own infrastructure, connect to multiple providers in one interface, build knowledge bases from your documents, or need team features like RBAC and SSO included in the free community edition. + +**Use both.** Many people do. Connect Open WebUI to the OpenAI API and use GPT-5.5 alongside Claude, Gemini, and local models, all in one place. Use ChatGPT directly when you want Canvas or deep research, and Open WebUI when you need your own knowledge bases or team workspace. + +--- + +## Use OpenAI Models Through Open WebUI + +Open WebUI connects to the OpenAI API, so all of OpenAI's models are available alongside Open WebUI's knowledge management, tools, and team features. + +**How to connect:** + +1. Get an API key from [platform.openai.com](https://platform.openai.com/) +2. In Open WebUI, go to **Admin → Settings → Connections** +3. Add a new OpenAI connection with your API key +4. OpenAI models will appear in your model selector + +Many users run OpenAI models for complex reasoning alongside local models via Ollama for privacy-sensitive tasks, all in the same interface. + +--- + +*ChatGPT brought AI to the world. 
Open WebUI is one way to use those same models on your own infrastructure.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + +--- + +## Frequently Asked Questions + +**Can I self-host ChatGPT?** +Not ChatGPT itself, but Open WebUI connects to the OpenAI API so you can use the same models. Open WebUI runs on your infrastructure, though API calls still go to OpenAI. + +**Can I use OpenAI models in Open WebUI?** +Yes. Add your OpenAI API key in Settings and all OpenAI models appear in the model selector. + +**Is Open WebUI a ChatGPT alternative?** +Open WebUI can connect to the OpenAI API, so you can use the same models. Open WebUI runs on your own infrastructure, though API calls still go to the provider. It also supports connecting to other providers. + +**Can I use ChatGPT and local models together?** +Yes. Many users run OpenAI for complex reasoning alongside local models via Ollama for privacy-sensitive tasks, all in the same Open WebUI interface. + + +--- + +**Related:** [Open WebUI & Claude](./claude) · [Open WebUI & Gemini](./gemini) · [Open WebUI & Ollama](./ollama) diff --git a/docs/alternatives/claude.mdx b/docs/alternatives/claude.mdx new file mode 100644 index 000000000..9d299248b --- /dev/null +++ b/docs/alternatives/claude.mdx @@ -0,0 +1,115 @@ +--- +sidebar_position: 31 +title: "Self-Hosted Claude Alternative" +sidebar_label: "Open WebUI & Claude" +description: "Looking for a self-hosted Claude alternative? Use Claude models through Open WebUI on your own infrastructure." +keywords: ["Open WebUI vs Claude", "Claude alternative", "self-hosted Claude", "Claude alternative self-hosted", "use Claude on own server"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & Claude + +*Last updated: May 2026* + +[Claude](https://claude.ai/) by Anthropic has earned a loyal following for its strength in writing, reasoning, and long-context analysis, and we count ourselves among those fans. We use Claude daily. 
The extended thinking capabilities, massive context windows (up to 200k tokens), and Anthropic's focus on safety and helpfulness make it genuinely one of the best AI experiences available. + +Commercial · Free tier available + +--- + +## What Claude Does Well + +- **Writing and reasoning** with thoughtful, nuanced responses and careful analysis +- **Extended thinking** that shows step-by-step reasoning for complex problems in real time +- **Large context windows** up to 200k tokens for working with large documents and codebases +- **Artifacts** for interactive outputs (code, documents, visualizations) alongside the conversation +- **Claude Code** for agentic coding directly in your terminal +- **Computer use** that lets Claude interact with desktop applications and web interfaces +- **MCP (Model Context Protocol)** which Anthropic created to standardize how AI tools connect to data sources +- **Safety-first design** through Anthropic's constitutional AI approach +- **Strong at code** with excellent code generation, review, and debugging +- **Projects** for organizing conversations with persistent context and instructions + +--- + +## What Open WebUI Does Well + +- **Any model, one interface** so you can use Claude *alongside* GPT-5.5, Gemini, Llama, and local models +- **Self-hosted** so the platform itself runs on your infrastructure +- **Knowledge & RAG** for persistent knowledge bases with advanced retrieval +- **Python extensibility** with custom tools, MCP servers, pipelines, and community extensions +- **Team platform** with Channels, Notes, Automations, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full sandboxed computing environment +- **Free community edition** for unlimited users on your own infrastructure + +--- + +## At a Glance + +| | Open WebUI | Claude | +| :--- | :--- | :--- | +| **Models** | Any model from any provider | Anthropic's Claude model family | +| **Extended thinking** | Supported for models that offer it (including 
Claude via API) | Native extended thinking | +| **Context window** | Depends on the model you connect | Up to 200k tokens | +| **Knowledge & RAG** | 9 vector DBs, 5 extraction engines, hybrid search | Projects with persistent context | +| **Code execution** | Python in-browser + Open Terminal | Artifacts with interactive code | +| **Data** | Self-hosted, your infrastructure | Cloud-hosted by Anthropic | +| **Pricing** | Free community edition; Enterprise plans available | Free tier, Pro, Team, and Enterprise plans | + +--- + +## When to Use Each + +**Choose Claude if** you want the best writing and reasoning experience available, especially for long-context work, code review, or nuanced analysis. The extended thinking mode is particularly strong for complex problems. Claude Code and computer use push the boundaries of what AI can do autonomously. + +**Choose Open WebUI if** you want to use Claude alongside other models in one interface, build persistent knowledge bases, or need team collaboration features. Open WebUI also supports Claude's extended thinking via the API. + +**Use both.** Connect Open WebUI to the Anthropic API and use Claude for deep analysis alongside GPT-5.5 for other tasks and local models for privacy-sensitive work. Use claude.ai directly when you want Artifacts, Projects, or computer use. + +--- + +## Use Claude Through Open WebUI + +Claude models are available through Open WebUI via the Anthropic API. Many Open WebUI users run Claude as their primary model, getting Claude's reasoning alongside Open WebUI's knowledge bases, tools, and team features. + +**How to connect:** + +1. Get an API key from [console.anthropic.com](https://console.anthropic.com/) +2. In Open WebUI, go to **Admin → Settings → Connections** +3. Add a new connection with your Anthropic API key and the base URL `https://api.anthropic.com/v1` +4. 
Claude models will appear in your model selector + +You can use Claude for complex analysis and writing while routing simpler tasks to local models via Ollama, all in the same interface. + +--- + +*Claude is exceptional AI. Open WebUI is one way to use those same models on your own infrastructure, alongside other models you rely on.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + +--- + +## Frequently Asked Questions + +**Can I self-host Claude?** +Not Claude itself, but you can use Claude models through Open WebUI via the Anthropic API. Open WebUI itself runs on your infrastructure, though API calls still go to Anthropic. + +**Can I use Claude in Open WebUI?** +Yes. Add your Anthropic API key in Settings and Claude models appear in the model selector. + +**Can I use Claude and ChatGPT together?** +Yes, Open WebUI supports connecting to multiple providers at once. + +**Does Open WebUI support Claude's extended thinking?** +Yes. Extended thinking is supported for models that offer it, including Claude via the Anthropic API. + + +--- + +**Related:** [Open WebUI & ChatGPT](./chatgpt) · [Open WebUI & Gemini](./gemini) · [Open WebUI & Ollama](./ollama) diff --git a/docs/alternatives/dify.mdx b/docs/alternatives/dify.mdx new file mode 100644 index 000000000..a085b1833 --- /dev/null +++ b/docs/alternatives/dify.mdx @@ -0,0 +1,114 @@ +--- +sidebar_position: 20 +title: "Open WebUI vs Dify" +sidebar_label: "Open WebUI & Dify" +description: "Open WebUI vs Dify compared. An AI chat platform and a visual workflow builder for different use cases." +keywords: ["Open WebUI vs Dify", "Dify alternative", "AI workflow builder"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & Dify + +*Last updated: May 2026* + +[Dify](https://dify.ai/) by LangGenius takes a fundamentally different approach to AI tooling. 
Where most tools on this page focus on conversation, Dify focuses on *building*: visual workflow design, agent orchestration, prompt engineering, and deploying AI-powered applications. If you think of AI as a platform for building things rather than just chatting, Dify is worth a serious look. + +[GitHub](https://github.com/langgenius/dify) · [Source Available (modified Apache 2.0)](https://github.com/langgenius/dify/blob/main/LICENSE) + +--- + +## What Dify Does Well + +- **Visual workflow builder** with drag-and-drop interface for designing complex AI pipelines and logic +- **Agent framework** for building autonomous agents that reason, use tools, and take actions +- **Prompt engineering IDE** for crafting, versioning, testing, and comparing prompts in a dedicated environment +- **Workflow marketplace** for sharing and importing community-built workflows and templates +- **Model routing** with smart routing across multiple providers for cost and capability optimization +- **RAG pipeline** with document ingestion, processing, and retrieval built in +- **Batch processing** for running prompts and workflows against large datasets +- **Annotation and feedback** for collecting human feedback to improve AI outputs over time +- **Observability** including integrated monitoring, logging, and cost tracking for production use +- **Backend-as-a-Service** for deploying AI apps as APIs instantly +- **Embeddable widget** for adding AI chat to any website or application +- **Strong community** with a large and active contributor and user base + +--- + +## What Open WebUI Does Well + +- **Conversational AI platform** with Chat, Notes, Channels, Automations, voice/video calls, and more +- **Any model, any provider** including Ollama, OpenAI, Anthropic, Google, Azure, and Bedrock in one interface +- **Knowledge & RAG** with 9 vector databases, 5 extraction engines, and hybrid search with reranking +- **Python extensibility** with custom tools, MCP servers, pipelines, and community 
extensions +- **Team collaboration** including Channels, model agents, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full sandboxed computing environment for code execution +- **Simpler deployment** with a single Docker container to get started + +--- + +## At a Glance + +| | Open WebUI | Dify | +| :--- | :--- | :--- | +| **Primary focus** | AI chat platform with knowledge, tools, and team features | AI application builder with visual workflows | +| **Approach** | Conversation-first | Build-first | +| **Workflow building** | Python tools and pipelines | Visual drag-and-drop workflow designer | +| **RAG** | 9 vector DBs, 5 extraction engines, hybrid search | Built-in RAG pipeline | +| **Agent capabilities** | Model agents with bound tools and knowledge | Agent framework with reasoning and tool use | +| **Multi-provider** | Any OpenAI-compatible API + Ollama | Multi-provider with model routing | +| **Observability** | OpenTelemetry, analytics dashboards | Built-in monitoring, logging, and cost tracking | +| **License** | Open WebUI License | Source Available (modified Apache 2.0 with commercial restrictions) | + +--- + +## When to Use Each + +**Choose Dify if** you want to build AI-powered applications with visual workflows. The drag-and-drop builder, prompt IDE, and agent framework are designed for developers and product teams who are creating AI features, not just chatting. + +**Choose Open WebUI if** your team needs a daily AI workspace for chat, knowledge management, and collaboration. Open WebUI focuses on using AI rather than building AI applications. + +**Use both.** Dify exposes an OpenAI-compatible API. Connect Open WebUI to Dify's API and your Dify workflows appear as models in Open WebUI. Build in Dify, use in Open WebUI. + +--- + +## Use Them Together + +Dify exposes an OpenAI-compatible API for any workflow or app you build. 
You can connect Open WebUI to Dify's API endpoint to use your Dify-built AI applications as models inside Open WebUI, combining Dify's workflow orchestration with Open WebUI's chat interface, knowledge management, and team features. + +**How to connect:** + +1. In Dify, publish your app and copy the API endpoint and key +2. In Open WebUI, go to **Admin → Settings → Connections** +3. Add a new OpenAI-compatible connection with Dify's API URL and key +4. Your Dify apps will appear as models in Open WebUI + +--- + +*Dify is for building AI applications. Open WebUI is for using AI daily. The AI ecosystem needs both builders and users.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**How do Dify and Open WebUI compare?** +Dify takes a visual, workflow-first approach to building AI applications. Open WebUI leans more toward conversation and daily AI use. They come at AI from different angles, and many teams could use both. + +**Can I use Dify with Open WebUI?** +Yes. Dify exposes an OpenAI-compatible API. Connect Open WebUI to Dify's API to use your Dify workflows as models inside Open WebUI. + +**Is Dify free?** +The community edition is free to self-host. Dify is source available under a modified Apache 2.0 license. + +--- + +**Related:** [Open WebUI & Onyx](./onyx) · [Open WebUI & LibreChat](./librechat) · [Open WebUI & AnythingLLM](./anythingllm) diff --git a/docs/alternatives/gemini.mdx b/docs/alternatives/gemini.mdx new file mode 100644 index 000000000..9ebcea4c0 --- /dev/null +++ b/docs/alternatives/gemini.mdx @@ -0,0 +1,109 @@ +--- +sidebar_position: 32 +title: "Self-Hosted Gemini Alternative" +sidebar_label: "Open WebUI & Gemini" +description: "Looking for a self-hosted Gemini alternative? Use Google AI models through Open WebUI on your own terms." 
+keywords: ["Open WebUI vs Gemini", "Gemini alternative", "self-hosted Gemini", "Gemini alternative self-hosted", "Google AI self-hosted"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & Gemini + +*Last updated: May 2026* + +[Gemini](https://gemini.google.com/) brings Google's AI research into a consumer product with strong multimodal capabilities (text, vision, audio, code), generous context windows, and natural integration with Google Workspace. Gemini 3.1 Pro is among the strongest models available for code and complex tasks. + +Commercial · Free tier available + +--- + +## What Gemini Does Well + +- **Multimodal strength** across text, images, audio, video, and code in a single model +- **Google Workspace integration** that works naturally with Gmail, Docs, Drive, and other Google tools +- **Gems** for creating custom AI personas with specific instructions and behavior +- **NotebookLM** for turning documents into interactive study guides and audio overviews +- **Deep Research** that conducts multi-step research and produces comprehensive reports +- **Generous context windows** with Gemini 3.1 Pro handling large documents and codebases +- **Competitive API pricing** as one of the most cost-effective APIs for high-quality models +- **Code generation** where Gemini 3.1 Pro is among the strongest for code tasks +- **Google Search grounding** with responses backed by Google's search infrastructure +- **Zero setup** and available immediately through your Google account + +--- + +## What Open WebUI Does Well + +- **Any model, one interface** so you can use Gemini *alongside* Claude, GPT-5.5, Llama, and local models +- **Self-hosted** so the platform itself runs on your infrastructure +- **Knowledge & RAG** for persistent knowledge bases with advanced retrieval +- **Python extensibility** with custom tools, MCP servers, pipelines, and community extensions +- **Team platform** with Channels, Notes, Automations, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 
+- **Open Terminal** providing a full sandboxed computing environment + +--- + +## At a Glance + +| | Open WebUI | Gemini | +| :--- | :--- | :--- | +| **Models** | Any model from any provider | Gemini model family | +| **Multimodal** | Depends on connected models | Native text, vision, audio, video, code | +| **Knowledge & RAG** | 9 vector DBs, 5 extraction engines, hybrid search | Google Search grounding, file uploads | +| **Ecosystem** | Connects to any API via MCP/OpenAPI | Deep Google Workspace integration | +| **Data** | Self-hosted, your infrastructure | Cloud-hosted by Google | +| **Pricing** | Free community edition; Enterprise plans available | Free tier, Advanced (Google One AI Premium) | + +--- + +## When to Use Each + +**Choose Gemini if** you live in the Google ecosystem and want AI that integrates naturally with Gmail, Docs, Drive, and Search. NotebookLM and Deep Research are standout features with no direct equivalent elsewhere. The API pricing is also among the most competitive. + +**Choose Open WebUI if** you want to use Gemini alongside other providers, need persistent knowledge bases, or want to self-host. Open WebUI connects to the Google AI API so you still get Gemini's models. + +**Use both.** Use Gemini directly for Google Workspace integration and NotebookLM. Connect Open WebUI to the Google AI API for Gemini models alongside Claude, OpenAI, and local models in one interface. + +--- + +## Use Gemini Through Open WebUI + +Gemini models are available through Open WebUI via the Google AI API. You can use Gemini's multimodal capabilities alongside other models you connect. + +**How to connect:** + +1. Get an API key from [aistudio.google.com](https://aistudio.google.com/) +2. In Open WebUI, go to **Admin → Settings → Connections** +3. Add a new connection with the base URL `https://generativelanguage.googleapis.com/v1beta/openai` and your Google AI API key +4. 
Gemini models will appear in your model selector + +--- + +*Gemini brings strong multimodal AI to the Google ecosystem. Open WebUI is one way to use those models alongside others, on your own infrastructure.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**Can I use Gemini in Open WebUI?** +Yes. Add your Google AI API key in Settings and Gemini models appear in the model selector. + +**Can I self-host Gemini?** +Not Gemini itself, but you can use Gemini models through Open WebUI via the Google AI API. Open WebUI runs on your infrastructure, though API calls still go to Google. + +**Can I use Gemini and Claude together?** +Yes, Open WebUI supports connecting to multiple providers at once. + +--- + +**Related:** [Open WebUI & ChatGPT](./chatgpt) · [Open WebUI & Claude](./claude) · [Open WebUI & Ollama](./ollama) diff --git a/docs/alternatives/index.mdx b/docs/alternatives/index.mdx new file mode 100644 index 000000000..6b67aa609 --- /dev/null +++ b/docs/alternatives/index.mdx @@ -0,0 +1,32 @@ +--- +sidebar_position: 1500 +title: "🌍 Alternatives to Open WebUI" +description: "Looking for an Open WebUI alternative? Here are the AI tools we actually use and love." +keywords: ["open webui alternatives", "open webui alternative", "best open webui alternatives", "open webui vs", "open webui comparison", "tools like open webui"] +--- + +# Alternatives to Open WebUI + +The AI space is full of great projects, and we're genuinely happy about that. More tools means more people get access to AI in a way that works for them. + +We get asked "what else is out there?" a lot, so we put this list together. If Open WebUI isn't quite the right fit for your use case, these are the tools we'd point you to. We've actually used them, built alongside them, or just think they do something really well. Everything listed here is free to get started with. 
+ +| Tool | What It's Great For | License | Works with Open WebUI | | +| :--- | :--- | :--- | :--- | :--- | +| **Ollama** | The most popular way to run local models | Open Source (MIT) | Native integration | [Learn more →](./ollama) | +| **llama.cpp** | The engine that made local AI possible | Open Source (MIT) | Via API | [Learn more →](./llama-cpp) | +| **LM Studio** | Beautiful desktop app for local model management | Proprietary (free) | Via API | [Learn more →](./lm-studio) | +| **Jan** | Simple, privacy-first local AI desktop app | Open Source (Apache 2.0) | Via API | [Learn more →](./jan) | +| **AnythingLLM** | Private document Q&A done right | Open Source (MIT) | | [Learn more →](./anythingllm) | +| **LibreChat** | Solid self-hosted multi-provider chat | Open Source (MIT) | | [Learn more →](./librechat) | +| **Msty** | Refined desktop hub for local and cloud models | Proprietary (free tier) | | [Learn more →](./msty) | +| **Onyx** | Enterprise search with 40+ connectors | Source Available (MIT core + Enterprise License for ee/) | | [Learn more →](./onyx) | +| **Dify** | Visual workflow builder for LLM applications | Source Available (modified Apache 2.0) | Via API | [Learn more →](./dify) | +| **ChatGPT / OpenAI** | The one that started it all, we use it daily | Commercial (free tier) | Via OpenAI API | [Learn more →](./chatgpt) | +| **Claude / Anthropic** | Exceptional writing and reasoning, we use it daily | Commercial (free tier) | Via Anthropic API | [Learn more →](./claude) | +| **Gemini / Google** | Multimodal AI with Google ecosystem integration | Commercial (free tier) | Via Google AI API | [Learn more →](./gemini) | + +Open WebUI itself is **source available** under the [Open WebUI License](/license). + +Every project on this list is built by people who care about making AI more accessible. That pushes all of us, Open WebUI included, to be better. We'd love to be your first choice, but we'd rather you have great options than no options. 
+ diff --git a/docs/alternatives/jan.mdx b/docs/alternatives/jan.mdx new file mode 100644 index 000000000..bb8e9ba8b --- /dev/null +++ b/docs/alternatives/jan.mdx @@ -0,0 +1,104 @@ +--- +sidebar_position: 4 +title: "Open WebUI & Jan" +sidebar_label: "Open WebUI & Jan" +description: "How Open WebUI and Jan work together. Two local-first AI tools with different strengths." +keywords: ["Open WebUI vs Jan", "Jan AI alternative", "local AI desktop app"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & Jan + +*Last updated: May 2026* + +[Jan](https://jan.ai/) by Homebrew (Menlo Research) is built on a clear vision: AI should run on your device, offline, completely under your control. The desktop app is clean, the model hub makes it easy to get started, and the commitment to privacy is genuine. + +[GitHub](https://github.com/janhq/jan) · [Apache 2.0 License](https://github.com/janhq/jan/blob/main/LICENSE) + +--- + +## What Jan Does Well + +- **Local-first** with everything running on your machine, 100% offline +- **Simple and focused** with a clean interface that avoids unnecessary complexity +- **Built-in model hub** for browsing and downloading models with one click +- **Cortex engine** powering the runtime with support for GGUF and TensorRT-LLM +- **Thread-based conversations** for organizing chats by topic +- **Extensions system** for adding capabilities through community plugins +- **Open source** under the Apache 2.0 license +- **Privacy by design** so your data never leaves your device +- **Lightweight** and runs well on modest hardware +- **Cross-platform** on macOS, Windows, and Linux + +--- + +## What Open WebUI Does Well + +- **Web-based platform** with multi-user access from any browser +- **Any model, any provider** using local models alongside OpenAI, Anthropic, Google, and others +- **Knowledge & RAG** with persistent knowledge bases and advanced retrieval +- **Python extensibility** with custom tools, MCP servers, pipelines, and 
community extensions +- **Team features** including Channels, Notes, Automations, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full computing environment for code execution +- **Scales up** from one person to thousands, Docker to Kubernetes + +--- + +## At a Glance + +| | Open WebUI | Jan | +| :--- | :--- | :--- | +| **Approach** | Self-hosted web platform for individuals and teams | Desktop app for private, local AI | +| **Model management** | Connects to model runners and APIs | Built-in model hub with one-click downloads | +| **Multi-provider** | Local + cloud models | Focused on local models | +| **Knowledge & RAG** | 9 vector DBs, 5 extraction engines, hybrid search | Focused on chat | +| **Multi-user** | SSO, RBAC, SCIM, teams | Personal desktop use | +| **Offline** | Fully offline with local models | 100% offline | +| **License** | Open WebUI License | Apache 2.0 | + +--- + +## When to Use Each + +**Choose Jan if** you want the simplest, most private way to run AI locally on your desktop. No servers, no configuration, no accounts. Just download, pick a model, and start chatting. + +**Choose Open WebUI if** you need web-based access, team collaboration, knowledge bases, or want to combine local models with cloud providers. Open WebUI runs as a web server that your whole team can use. + +**Use both.** Jan can serve models via its local API. Connect Open WebUI to Jan's API for web-based team access while keeping Jan as your model runner. + +--- + +## Works With Open WebUI + +Jan can serve models via a local API endpoint. If you're using Jan to manage your local models, you can connect Open WebUI to Jan's API for a web-based experience with multi-user support, knowledge bases, and tools. + +--- + +*Jan keeps local AI simple and private. Open WebUI adds a platform layer on top. 
Different approaches, same belief that AI should run on your hardware.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**Can I use Jan with Open WebUI?** +Yes. Jan can serve models via a local API endpoint. Connect Open WebUI to Jan's API for web-based access with team features. + +**How do Jan and Open WebUI work together?** +Jan handles running models locally on your desktop. Open WebUI can add web-based access, knowledge bases, and team features. You can connect Open WebUI to Jan's API and use them together. + +**Is Jan free?** +Yes. Jan is open source under the Apache 2.0 license. + +--- + +**Related:** [Open WebUI & Ollama](./ollama) · [Open WebUI & LM Studio](./lm-studio) · [Open WebUI & llama.cpp](./llama-cpp) diff --git a/docs/alternatives/librechat.mdx b/docs/alternatives/librechat.mdx new file mode 100644 index 000000000..f4227f580 --- /dev/null +++ b/docs/alternatives/librechat.mdx @@ -0,0 +1,104 @@ +--- +sidebar_position: 10 +title: "Open WebUI vs LibreChat" +sidebar_label: "Open WebUI & LibreChat" +description: "Open WebUI vs LibreChat compared. Two respected self-hosted AI chat platforms with different strengths." +keywords: ["Open WebUI vs LibreChat", "LibreChat alternative", "self-hosted AI chat", "LibreChat comparison", "best self-hosted AI"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & LibreChat + +*Last updated: May 2026* + +[LibreChat](https://www.librechat.ai/) is one of the projects we genuinely respect in this space. It offers a multi-provider chat experience with strong authentication support, side-by-side model comparison, and a focused feature set that does the fundamentals well. The project is MIT-licensed, actively maintained, and Danny and the community behind it have built something solid. 
+ +[GitHub](https://github.com/danny-avila/LibreChat) · [MIT License](https://github.com/danny-avila/LibreChat/blob/main/LICENSE) + +--- + +## What LibreChat Does Well + +- **Multi-provider chat** with a unified interface for OpenAI, Anthropic, Google, Azure, Ollama, and any OpenAI-compatible API +- **Model comparison** with side-by-side responses from different models in a single conversation +- **Presets system** for saving and quickly switching between model configurations and system prompts +- **Artifacts** for rendering code outputs, documents, and visualizations inline +- **Authentication** including LDAP, SSO, and social login support +- **Built-in code interpreter** for supported models +- **Prompt caching** for reducing API costs on repeated interactions +- **Focused scope** that does the chat interface well without overcomplicating things +- **Active development** with a responsive maintainer and engaged community +- **MIT licensed** + +--- + +## What Open WebUI Does Well + +- **Platform beyond chat** including Notes, Channels, Automations, Open Terminal, voice/video calls, image generation, and calendar +- **Knowledge & RAG** with 9 vector databases, 5 extraction engines, hybrid search with reranking, and agentic retrieval +- **Python extensibility** with custom tools, MCP servers, pipelines, OpenAPI integration, and a community marketplace +- **Model agents** that wrap any model with custom instructions, tools, knowledge, and parameters +- **Enterprise features** including RBAC, SSO/OIDC/LDAP, SCIM 2.0, analytics dashboards, and evaluation arena +- **Flexible deployment** via Docker, Kubernetes, pip, or desktop app, with horizontal scaling and OpenTelemetry + +--- + +## At a Glance + +| | Open WebUI | LibreChat | +| :--- | :--- | :--- | +| **Focus** | Full AI platform with knowledge, tools, and team features | Multi-provider AI chat interface | +| **Multi-provider** | Ollama, OpenAI, Anthropic, Google, Azure, Bedrock, and more | OpenAI, Anthropic, 
Google, Azure, Ollama, and more | +| **Model comparison** | Multi-model chats | Side-by-side comparison | +| **Knowledge & RAG** | 9 vector DBs, 5 extraction engines, hybrid search, agentic retrieval | File attachment support | +| **Extensibility** | Python tools, MCP, OpenAPI, pipelines | Plugin system with presets | +| **Code execution** | Python in-browser + Open Terminal | Built-in code interpreter | +| **Team collaboration** | Channels, Notes, RBAC, SSO, SCIM | Multi-user with auth | +| **License** | Open WebUI License | MIT | + +--- + +## When to Use Each + +**Choose LibreChat if** you want a clean, focused multi-provider chat interface with strong model comparison features. The presets system makes it easy to switch between configurations, and the MIT license gives you maximum flexibility. + +**Choose Open WebUI if** you need a broader platform with knowledge bases, team collaboration tools, Python extensibility, or enterprise features like SCIM and analytics. + +**Run both.** They connect to the same backends. Some teams use LibreChat for quick individual chats and Open WebUI for collaborative work with knowledge bases and tools. + +--- + +## Use Them Together + +Both projects connect to the same backends (Ollama, OpenAI, etc.), so you can run both side by side. Some teams use LibreChat for quick individual chats and Open WebUI for team collaboration and knowledge work. + +--- + +*Two actively maintained projects making self-hosted AI accessible. Different strengths, same ecosystem.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**How do LibreChat and Open WebUI compare?** +LibreChat does the multi-provider chat interface really well. Open WebUI also includes knowledge bases, team collaboration, extensibility, and enterprise features. Different scope, both worth looking at. + +**Is LibreChat free?** +Yes. LibreChat is MIT licensed and free to self-host. 
+ +**Can I use both LibreChat and Open WebUI?** +Yes. Both connect to the same backends (Ollama, OpenAI, etc.), so you can run both side by side. + +--- + +**Related:** [Open WebUI & AnythingLLM](./anythingllm) · [Open WebUI & Msty](./msty) · [Open WebUI & Ollama](./ollama) diff --git a/docs/alternatives/llama-cpp.mdx b/docs/alternatives/llama-cpp.mdx new file mode 100644 index 000000000..349b07689 --- /dev/null +++ b/docs/alternatives/llama-cpp.mdx @@ -0,0 +1,101 @@ +--- +sidebar_position: 2 +title: "Open WebUI & llama.cpp" +sidebar_label: "Open WebUI & llama.cpp" +description: "How to connect llama-server to Open WebUI. Integration guide for two essential local AI tools." +keywords: ["Open WebUI vs llama.cpp", "llama.cpp frontend", "llama-server alternative", "llama.cpp web UI", "llama-server web interface"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & llama.cpp + +*Last updated: May 2026* + +[llama.cpp](https://github.com/ggml-org/llama.cpp) by Georgi Gerganov is one of the most important projects in the AI ecosystem, and we mean that. Without llama.cpp, the local AI movement as we know it wouldn't exist. It proved that you could run serious models on consumer hardware, introduced the GGUF format that became the industry standard, and inspired an entire generation of tools. And with `llama-server`, it's not just an engine anymore: it has its own built-in web interface and OpenAI-compatible API ready to go. 
+ +[GitHub](https://github.com/ggml-org/llama.cpp) · [MIT License](https://github.com/ggml-org/llama.cpp/blob/main/LICENSE) + +--- + +## What llama.cpp Does Well + +- **State-of-the-art inference performance** on consumer hardware, consistently pushing what's possible +- **Built-in web interface** via `llama-server`, ready to use out of the box +- **Broad hardware support** including CPU, CUDA, Metal, Vulkan, and SYCL +- **GGUF format** that became the quantized model standard for the entire industry +- **Quantization options** from Q2 to Q8 with multiple strategies for different quality/speed tradeoffs +- **Speculative decoding** for faster generation using draft models +- **Flash Attention** and other advanced inference optimizations +- **Grammar-constrained generation** for structured outputs (JSON, code, etc.) +- **OpenAI-compatible API** via `llama-server` so any tool can connect to it +- **Multi-model router mode** for serving multiple models from one endpoint +- **One of the most actively developed projects in AI** with a pace of commits that's hard to match +- **MIT licensed** and genuinely community-driven + +--- + +## What Open WebUI Does Well + +- **Rich web platform** with full chat, conversations, history, organization, and search +- **Knowledge & RAG** with 9 vector databases, 5 extraction engines, and hybrid search with reranking +- **Python extensibility** including custom tools, MCP servers, pipelines, and community extensions +- **Multi-provider support** to use llama.cpp models alongside OpenAI, Anthropic, Google, and others +- **Team platform** with Channels, Notes, Automations, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full computing environment for code execution +- **Multi-user support** from one person to thousands + +--- + +## When to Use Each + +**Use llama.cpp directly if** you want maximum control over inference. 
It gives you fine-grained tuning of quantization, context sizes, batch processing, and hardware utilization that no wrapper can match. The built-in web UI works well for solo use. + +**Add Open WebUI if** you want a richer interface, knowledge bases, team access, or the ability to connect other providers alongside llama.cpp. Open WebUI talks to `llama-server` via its OpenAI-compatible API. + +**Use both.** llama.cpp handles inference with maximum performance. Open WebUI handles the platform layer with knowledge, tools, and collaboration. + +--- + +## Use Them Together + +llama.cpp's `llama-server` exposes an OpenAI-compatible API, which means Open WebUI can connect to it directly. Use llama.cpp for high-performance inference, Open WebUI for the platform layer. + +```bash +# Start llama-server +llama-server -m your-model.gguf --port 8081 + +# Point Open WebUI at it +# In Admin → Settings → Connections, add: +# URL: http://localhost:8081/v1 +``` + +--- + +*llama.cpp made local AI possible. Open WebUI builds a platform layer on top. They work well together.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**Can I connect llama-server to Open WebUI?** +Yes. llama-server exposes an OpenAI-compatible API. Add `http://localhost:8081/v1` as a connection in Open WebUI and your models appear automatically. + +**Does Open WebUI support llama-server's multi-model routing?** +Yes. If you're running llama-server in router mode with multiple models, Open WebUI will detect and list all available models through the API. + +**Is llama.cpp free?** +Yes. llama.cpp is MIT licensed and free for any use. 
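+
+If you want to sanity-check `llama-server` from the command line before adding the connection in Open WebUI, you can query its HTTP endpoints directly. This is a minimal sketch, assuming the same port 8081 used in the example above and llama-server's standard `/health` and OpenAI-compatible `/v1/models` routes:
+
+```shell
+# Confirm llama-server is up (port 8081 assumed from the example above)
+curl -s http://localhost:8081/health || echo "llama-server is not reachable on port 8081"
+
+# List the models the server exposes; this is the same OpenAI-compatible
+# endpoint Open WebUI reads after you add the connection
+curl -s http://localhost:8081/v1/models || echo "no response from /v1/models"
+```
+
+If both commands return JSON, adding the same base URL as a connection in Open WebUI should work.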
+ +--- + +**Related:** [Open WebUI & Ollama](./ollama) · [Open WebUI & LM Studio](./lm-studio) · [Open WebUI & Jan](./jan) diff --git a/docs/alternatives/lm-studio.mdx b/docs/alternatives/lm-studio.mdx new file mode 100644 index 000000000..f8758b0d0 --- /dev/null +++ b/docs/alternatives/lm-studio.mdx @@ -0,0 +1,112 @@ +--- +sidebar_position: 3 +title: "Open WebUI & LM Studio" +sidebar_label: "Open WebUI & LM Studio" +description: "How Open WebUI and LM Studio work together. Two approaches to local AI that pair well." +keywords: ["Open WebUI vs LM Studio", "LM Studio alternative", "local AI interface"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & LM Studio + +*Last updated: May 2026* + +[LM Studio](https://lmstudio.ai/) has nailed the desktop experience for local AI. The built-in model browser makes discovering and downloading models from Hugging Face effortless, the inference performance is solid, and the UI is clean and intuitive. For anyone who wants to run local models without touching a terminal, LM Studio is a strong option. 
+ +Proprietary · Free for personal and commercial use + +--- + +## What LM Studio Does Well + +- **Model browser** for discovering, downloading, and managing models from Hugging Face with a GUI +- **Model search and filtering** to find exactly the right model by size, architecture, or quantization +- **Quantization preview** so you can see how different quantization levels affect model quality before downloading +- **Strong performance** with solid hardware utilization (Metal, CUDA) for fast local inference +- **OpenAI-compatible API server** that serves your local models to any application that speaks the OpenAI API +- **MCP support** for connecting to Model Context Protocol servers for extended tool use +- **RAG capabilities** with built-in document-based chat for local files +- **Prompt templates** with a library of pre-configured prompts for common tasks +- **Free for everyone** for both personal and commercial use +- **Cross-platform** on macOS, Windows, and Linux +- **Developer-friendly** local API server for integrating local models into your projects + +--- + +## What Open WebUI Does Well + +- **Full web platform** with multi-user chat, Notes, Channels, Automations, Open Terminal, and more +- **Any provider** so you can use LM Studio's local models alongside OpenAI, Anthropic, Google, and others +- **Deep RAG & Knowledge** with 9 vector databases, 5 extraction engines, and hybrid search with reranking +- **Python extensibility** with custom tools, pipelines, MCP, and OpenAPI integration +- **Team features** including RBAC, SSO/OIDC/LDAP, SCIM 2.0, analytics, and evaluation arena +- **Scales from one to thousands** via Docker, Kubernetes, and pip + +--- + +## At a Glance + +| | Open WebUI | LM Studio | +| :--- | :--- | :--- | +| **Approach** | Self-hosted web platform for teams and individuals | Desktop app for local model management and chat | +| **Model management** | Connects to model runners (Ollama, etc.) 
| Built-in model browser with Hugging Face integration | +| **Multi-provider** | Local + cloud models in one interface | Focused on local models | +| **Knowledge & RAG** | 9 vector DBs, 5 extraction engines, hybrid search | Built-in document chat | +| **Multi-user** | SSO, RBAC, SCIM, teams | Personal desktop use | +| **Extensibility** | Python tools, MCP, OpenAPI, pipelines | MCP support | +| **API server** | Full API | OpenAI-compatible local server | +| **Pricing** | Free community edition; Enterprise plans available | Free for personal and commercial use | + +--- + +## When to Use Each + +**Choose LM Studio if** you want the best desktop experience for discovering and running local models. The model browser makes it easy to explore what's available on Hugging Face, compare quantizations, and get running quickly. + +**Choose Open WebUI if** you want a web-based platform with team access, persistent knowledge bases, or the ability to use local models alongside cloud providers like OpenAI, Anthropic, and Google. + +**Use both.** LM Studio's model browser and management are excellent for finding and running models. Open WebUI can connect to LM Studio's API server to add web access, knowledge bases, and team features on top. + +--- + +## Use Them Together + +LM Studio's OpenAI-compatible API server works well as a backend for Open WebUI. You can use LM Studio to manage and serve your local models, then connect Open WebUI to LM Studio's API. + +**How to connect:** + +1. In LM Studio, start the local API server (default port 1234) +2. In Open WebUI, go to **Admin → Settings → Connections** +3. Add a new OpenAI-compatible connection with URL `http://localhost:1234/v1` +4. Your LM Studio models will appear in the model selector + +--- + +*LM Studio makes local models accessible on the desktop. Open WebUI adds a web-based platform layer. 
Both are making local AI more useful.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**Can I use LM Studio with Open WebUI?** +Yes. Start LM Studio's local API server and add `http://localhost:1234/v1` as a connection in Open WebUI. + +**How do LM Studio and Open WebUI work together?** +LM Studio handles model management and local inference on your desktop. Open WebUI can add web-based multi-user access, knowledge bases, and team features. A lot of people use LM Studio as the backend and Open WebUI as the frontend. + +**Is LM Studio free?** +Yes. LM Studio is free for personal and commercial use, though it is proprietary software. + +--- + +**Related:** [Open WebUI & Ollama](./ollama) · [Open WebUI & llama.cpp](./llama-cpp) · [Open WebUI & Jan](./jan) diff --git a/docs/alternatives/msty.mdx b/docs/alternatives/msty.mdx new file mode 100644 index 000000000..21f7a97fa --- /dev/null +++ b/docs/alternatives/msty.mdx @@ -0,0 +1,100 @@ +--- +sidebar_position: 13 +title: "Open WebUI vs Msty" +sidebar_label: "Open WebUI & Msty" +description: "Open WebUI vs Msty compared. A web platform and a desktop app, two approaches to AI." +keywords: ["Open WebUI vs Msty", "Msty alternative", "AI desktop app"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & Msty + +*Last updated: May 2026* + +[Msty](https://msty.app/) has built a refined desktop experience for people who want one place to use both local and cloud-based models. The split-chat feature for running multiple models side-by-side to compare responses is genuinely useful, and the overall design feels thoughtful. 
+ +Proprietary · Free tier available + +--- + +## What Msty Does Well + +- **Split chat** for running multiple models side-by-side to compare responses in real time +- **Unified hub** for local models (via Ollama, llama.cpp, MLX) and cloud APIs (OpenAI, Anthropic, Google) +- **Knowledge Stacks** for uploading documents and chatting with them using built-in RAG +- **Offline mode** for fully air-gapped use with local models +- **Batch prompting** for sending the same prompt to multiple models simultaneously +- **Hardware optimization** with good performance across NVIDIA, AMD, and Apple Silicon +- **Persona & Prompt Studios** for creating reusable personas and prompt templates +- **Conversation export** in multiple formats for archiving and sharing +- **Web search integration** with real-time web search during conversations +- **Thoughtful experience** that feels refined and considered +- **Free tier** with core features available at no cost + +--- + +## What Open WebUI Does Well + +- **Web-based platform** with multi-user access from any browser +- **Any model, any provider** connecting to any OpenAI-compatible API, Ollama, or cloud provider +- **Deep RAG & Knowledge** with 9 vector databases, 5 extraction engines, and hybrid search with reranking +- **Python extensibility** with custom tools, MCP servers, pipelines, and community extensions +- **Team features** including Channels, Notes, Automations, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full computing environment for code execution +- **Source available** so you can read, audit, and modify the source code + +--- + +## At a Glance + +| | Open WebUI | Msty | +| :--- | :--- | :--- | +| **Approach** | Self-hosted web platform | Desktop app | +| **Multi-model comparison** | Multi-model chats | Split chat with side-by-side responses | +| **Multi-provider** | Any OpenAI-compatible API + Ollama | Local models + cloud APIs | +| **Knowledge & RAG** | 9 vector DBs, 5 extraction engines, hybrid 
search | Knowledge Stacks with document chat | +| **Extensibility** | Python tools, MCP, OpenAPI, pipelines | Persona & Prompt Studios | +| **Multi-user** | SSO, RBAC, SCIM, teams | Teams plan available | +| **Source availability** | Source available | Proprietary | +| **Pricing** | Free community edition; Enterprise plans available | Free tier, Aurum, and Teams plans | + +--- + +## When to Use Each + +**Choose Msty if** you want a polished desktop app for personal use, especially if you compare models frequently. The split-chat feature and batch prompting make it easy to evaluate different models side by side. + +**Choose Open WebUI if** you need a web-based platform, team access, deeper knowledge management, Python extensibility, or enterprise features. Open WebUI runs as a server that your whole team can reach from any browser. + +**Different form factors.** Msty excels as a desktop app for individual power users. Open WebUI works well as a team platform accessible from anywhere. + +--- + +*Msty brings polish to desktop AI. Open WebUI takes a web-based, team-oriented approach. Different tools, same goal of making AI more useful.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**How do Msty and Open WebUI compare?** +Msty has a polished desktop experience with a great split-chat feature for comparing models. Open WebUI takes a web-based approach with multi-user support, knowledge bases, and extensibility. Different tools for different preferences. + +**Is Msty free?** +Msty has a free tier. Premium features require a paid Aurum plan. Teams pricing is also available. + +**Is Msty open source?** +No. Msty is proprietary software with a free tier. 
+ +--- + +**Related:** [Open WebUI & LM Studio](./lm-studio) · [Open WebUI & LibreChat](./librechat) · [Open WebUI & Jan](./jan) diff --git a/docs/alternatives/ollama.mdx b/docs/alternatives/ollama.mdx new file mode 100644 index 000000000..223320590 --- /dev/null +++ b/docs/alternatives/ollama.mdx @@ -0,0 +1,118 @@ +--- +sidebar_position: 1 +title: "Open WebUI & Ollama" +sidebar_label: "Open WebUI & Ollama" +description: "How Open WebUI and Ollama work together. The most popular local AI pairing, with setup guide and honest comparison." +keywords: ["Open WebUI vs Ollama", "Ollama alternative", "Ollama frontend", "best Ollama UI", "Ollama web interface"] +--- + +import Head from '@docusaurus/Head'; + + + + + +# Open WebUI & Ollama + +*Last updated: May 2026* + +[Ollama](https://ollama.com/) is the project that made local AI click for millions of people, and Open WebUI wouldn't be where it is without them. One command to install, one command to run, and you're chatting with a model. The desktop app includes a built-in chat interface, the CLI is fast and intuitive, and the team behind it consistently ships. We're big fans. 
+ +[GitHub](https://github.com/ollama/ollama) · [MIT License](https://github.com/ollama/ollama/blob/main/LICENSE) + +--- + +## What Ollama Does Well + +- **Dead simple** to install and run a model in seconds +- **Desktop app with built-in chat** for a complete standalone experience +- **Huge model library** with hundreds of models ready to download from the Ollama registry +- **Modelfiles** for customizing models with system prompts, parameters, and adapters +- **Great performance** optimized for consumer hardware (Metal, CUDA, CPU) with automatic GPU layer splitting +- **OpenAI-compatible API** that works as a backend for many tools and applications +- **Concurrent model loading** for running multiple models simultaneously +- **Cross-platform** on macOS, Linux, Windows, and Docker +- **Actively developed** with fast iteration and a responsive team +- **MIT licensed** + +--- + +## What Open WebUI Does Well + +- **Rich web interface** with full chat, conversations, history, search, and organization +- **Knowledge & RAG** with 9 vector DBs, 5 extraction engines, and hybrid search +- **Python extensibility** including custom tools, MCP servers, pipelines, and OpenAPI integration +- **Multi-provider support** so you can use Ollama alongside OpenAI, Anthropic, Google, and others +- **Team platform** with Channels, Notes, Automations, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full sandboxed computing environment for code execution +- **Model agents** with custom instructions, bound tools, and knowledge per model + +--- + +## Better Together + +Ollama and Open WebUI are the most popular pairing in the local AI ecosystem. Ollama manages and serves your models; Open WebUI adds a web-based platform with knowledge management, team features, and extensibility on top. 
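+Under the hood, Open WebUI talks to Ollama's local HTTP API. You can query that API directly to see exactly what Open WebUI will detect; this sketch assumes Ollama's default port 11434 and its `/api/tags` endpoint, which lists the models you've pulled:
+
+```shell
+# List the models Ollama has pulled (default port 11434); Open WebUI reads
+# the same local API when it auto-detects Ollama on the machine
+curl -s http://localhost:11434/api/tags || echo "Ollama is not reachable on port 11434"
+```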
+ +```bash +# The most common Open WebUI setup +ollama pull llama3 +docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \ + -v open-webui:/app/backend/data --name open-webui \ + ghcr.io/open-webui/open-webui:main +``` + +Open WebUI auto-detects Ollama when running on the same machine. All your Ollama models show up in the model selector immediately, no configuration needed. + +--- + +## When to Use Each + +**Use Ollama if** you want the fastest path to running a model locally. The CLI and desktop app work great on their own for quick interactions, scripting, and development. + +**Add Open WebUI if** you want a web-based interface with knowledge bases, team features, persistent conversations, or the ability to connect cloud providers alongside your local models. + +**Most people use both.** Ollama handles the model layer. Open WebUI handles the platform layer. They auto-detect each other and just work. + +--- + +## Other Great Ollama Frontends + +Ollama's OpenAI-compatible API means it works with many tools. If Open WebUI isn't your style, other projects that pair well with Ollama include: + +- [**LibreChat**](./librechat) for multi-provider chat with model comparison +- [**AnythingLLM**](./anythingllm) for workspace-based document Q&A + +--- + +*Ollama made local AI simple. Open WebUI builds on that foundation. Together, they've helped millions of people run AI on their own hardware.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + +--- + +## Frequently Asked Questions + +**Can I use Ollama with Open WebUI?** +Yes. Open WebUI has native Ollama integration and auto-detects it when running on the same machine. No configuration needed. + +**Is Ollama free?** +Yes. Ollama is MIT licensed and free for personal and commercial use. + +**How do Ollama and Open WebUI work together?** +Ollama handles running and managing models. 
Open WebUI can serve as the web interface and also has things like knowledge bases, team features, and extensibility. Most people use them together. + +**Do I need Ollama to use Open WebUI?** +No. Open WebUI works with any OpenAI-compatible API, including llama.cpp, LM Studio, OpenAI, Anthropic, Google, and more. Ollama is a popular option, but not required. + +--- + +**Related:** [Open WebUI & llama.cpp](./llama-cpp) · [Open WebUI & LM Studio](./lm-studio) · [Open WebUI & Jan](./jan) diff --git a/docs/alternatives/onyx.mdx b/docs/alternatives/onyx.mdx new file mode 100644 index 000000000..32e74ff33 --- /dev/null +++ b/docs/alternatives/onyx.mdx @@ -0,0 +1,100 @@ +--- +sidebar_position: 15 +title: "Open WebUI vs Onyx" +sidebar_label: "Open WebUI & Onyx" +description: "Open WebUI vs Onyx compared. A general-purpose AI platform and an enterprise search tool." +keywords: ["Open WebUI vs Onyx", "Onyx alternative", "Danswer alternative", "Onyx vs Open WebUI", "enterprise AI platform"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & Onyx + +*Last updated: May 2026* + +[Onyx](https://onyx.app/) (formerly Danswer) focuses on a specific and important problem: connecting AI to your organization's internal knowledge across Slack, Google Drive, Confluence, Jira, GitHub, and dozens of other tools, with permission-aware retrieval. If your team's knowledge is scattered across 40+ tools and you need AI to search across all of them while respecting access controls, that's Onyx's sweet spot. 
+ +[GitHub](https://github.com/onyx-dot-app/onyx) · [Source Available](https://github.com/onyx-dot-app/onyx/blob/main/LICENSE) (MIT core + Onyx Enterprise License for `ee/` directories) · [Self-Host Terms](https://onyx.app/legal/self-host) + +--- + +## What Onyx Does Well + +- **40+ enterprise connectors** with native integrations for Slack, Google Drive, Confluence, Jira, GitHub, Notion, and more +- **Automatic syncing** that keeps connected sources up to date without manual re-ingestion +- **Permission-aware retrieval** that respects source system access controls when returning search results +- **Enterprise search** purpose-built for searching across your organization's internal knowledge +- **Multi-surface access** via web app, Slack bot, Discord bot, Chrome extension, and CLI +- **Managed cloud option** for teams that don't want to self-host +- **Custom agents with actions** for building AI assistants that can take actions across connected tools +- **Active development** with frequent releases and community responsiveness + +--- + +## What Open WebUI Does Well + +- **Full AI platform** with Chat, Notes, Channels, Automations, Open Terminal, voice/video calls, and image generation +- **Deploy anywhere** on your own infrastructure, fully air-gapped if needed +- **Free community edition** with unlimited users, SSO/OIDC/LDAP, RBAC, and SCIM 2.0 included +- **Any model, any provider** including Ollama, OpenAI, Anthropic, Google, Azure, Bedrock, and any OpenAI-compatible API +- **Knowledge & RAG** with 9 vector databases, 5 extraction engines, and hybrid BM25 + vector search with cross-encoder reranking +- **Python extensibility** with custom tools, MCP servers, OpenAPI integration, pipelines, and a community marketplace + +--- + +## At a Glance + +| | Open WebUI | Onyx | +| :--- | :--- | :--- | +| **Primary focus** | General-purpose AI platform | Enterprise search and knowledge discovery | +| **Knowledge approach** | Document upload, knowledge bases, 9 vector DBs, 
5 extraction engines | 40+ enterprise connectors with automatic syncing | +| **Permission handling** | RBAC, groups, per-resource access controls | Permission-aware retrieval from source systems | +| **Multi-provider** | Any OpenAI-compatible API + Ollama | Multiple LLM provider support | +| **Extensibility** | Python tools, MCP, OpenAPI, pipelines | Focused on connector and search ecosystem | +| **Collaboration** | Channels, Notes, shared conversations | AI-powered search and Q&A | +| **License** | Open WebUI License | Source Available (MIT core + Onyx Enterprise License for `ee/`); see [self-host terms](https://onyx.app/legal/self-host) | +| **Pricing** | Free community edition; Enterprise plans available | Free (self-hosted community), Cloud, and Enterprise plans | + +--- + +## When to Use Each + +**Choose Onyx if** you want to connect AI to your organization's existing tools. If your team's knowledge lives in Slack, Confluence, Jira, Google Drive, and GitHub, Onyx's 40+ connectors with automatic syncing and permission-aware retrieval were built for that. + +**Choose Open WebUI if** you need a general-purpose AI platform with chat, knowledge bases, team collaboration, Python extensibility, and support for any model provider. Open WebUI includes SSO, RBAC, and SCIM in the free community edition. + +**They solve different problems.** Onyx excels at enterprise search and connecting AI to your existing tools. Open WebUI excels as a general AI platform. Many organizations could use both. + +--- + +*Onyx connects AI to your enterprise knowledge. Open WebUI comes at it from a more general angle. They solve different problems, and many organizations could benefit from both.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + +--- + +## Frequently Asked Questions + +**How do Onyx and Open WebUI compare?** +Onyx leans into enterprise search with 40+ connectors and permission-aware retrieval. 
Open WebUI comes at it from a more general angle with chat, knowledge bases, team collaboration, and extensibility. Different tools for different needs.
+
+**Is Onyx open source?**
+Onyx's core is MIT licensed. Enterprise features are under a separate Onyx Enterprise License. Additional [self-host terms](https://onyx.app/legal/self-host) may also apply.
+
+**Is Onyx free?**
+The community edition is free to self-host. Additional [self-host terms](https://onyx.app/legal/self-host) may apply. Onyx Cloud and Enterprise plans are available for teams that want managed hosting or additional features.
+
+**Can I use both Onyx and Open WebUI?**
+Yes. They solve different problems. Onyx connects AI to your existing enterprise tools. Open WebUI serves as a general-purpose AI platform, with knowledge management, team features, and extensibility built in.
+
+**Which is better for enterprise AI deployment?**
+It depends on your needs. If your priority is searching across internal tools with permission-aware retrieval, Onyx was built for that. If you need a general-purpose AI platform you can deploy on your own infrastructure, with SSO, RBAC, and SCIM included in the free edition, Open WebUI is the better fit.
+ +--- + +**Related:** [Open WebUI & Dify](./dify) · [Open WebUI & AnythingLLM](./anythingllm) · [Open WebUI & LibreChat](./librechat) From a3740eb3b323042d2d3b8a35bd17570564de3f32 Mon Sep 17 00:00:00 2001 From: Timothy Jaeryang Baek Date: Wed, 13 May 2026 12:06:45 +0900 Subject: [PATCH 11/15] refac --- docs/alternatives/anythingllm.mdx | 2 +- docs/alternatives/chatgpt.mdx | 2 +- docs/alternatives/claude.mdx | 2 +- docs/alternatives/dify.mdx | 2 +- docs/alternatives/gemini.mdx | 2 +- docs/alternatives/jan.mdx | 2 +- docs/alternatives/librechat.mdx | 2 +- docs/alternatives/llama-cpp.mdx | 2 +- docs/alternatives/lm-studio.mdx | 2 +- docs/alternatives/msty.mdx | 2 +- docs/alternatives/ollama.mdx | 2 +- docs/alternatives/onyx.mdx | 2 +- 12 files changed, 12 insertions(+), 12 deletions(-) diff --git a/docs/alternatives/anythingllm.mdx b/docs/alternatives/anythingllm.mdx index e6643a003..2a6855ef0 100644 --- a/docs/alternatives/anythingllm.mdx +++ b/docs/alternatives/anythingllm.mdx @@ -3,7 +3,7 @@ sidebar_position: 9 title: "Open WebUI vs AnythingLLM" sidebar_label: "Open WebUI & AnythingLLM" description: "Open WebUI vs AnythingLLM compared. Two of our favorite projects for local AI and document Q&A." -keywords: ["Open WebUI vs AnythingLLM", "AnythingLLM alternative", "document QA", "local document AI", "private RAG"] +keywords: ["Open WebUI vs AnythingLLM", "AnythingLLM alternative", "document QA", "local document AI", "private RAG", "open webui alternative", "best document AI", "self-hosted RAG"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/chatgpt.mdx b/docs/alternatives/chatgpt.mdx index 067488b25..8f709e135 100644 --- a/docs/alternatives/chatgpt.mdx +++ b/docs/alternatives/chatgpt.mdx @@ -3,7 +3,7 @@ sidebar_position: 30 title: "Self-Hosted ChatGPT Alternative" sidebar_label: "Open WebUI & ChatGPT" description: "Looking for a self-hosted ChatGPT alternative? Open WebUI connects to the OpenAI API and runs on your own infrastructure." 
-keywords: ["Open WebUI vs ChatGPT", "ChatGPT alternative", "self-hosted ChatGPT", "ChatGPT alternative open source", "ChatGPT self-hosted"] +keywords: ["Open WebUI vs ChatGPT", "ChatGPT alternative", "self-hosted ChatGPT", "ChatGPT alternative open source", "ChatGPT self-hosted", "open webui alternative"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/claude.mdx b/docs/alternatives/claude.mdx index 9d299248b..7263b2ee9 100644 --- a/docs/alternatives/claude.mdx +++ b/docs/alternatives/claude.mdx @@ -3,7 +3,7 @@ sidebar_position: 31 title: "Self-Hosted Claude Alternative" sidebar_label: "Open WebUI & Claude" description: "Looking for a self-hosted Claude alternative? Use Claude models through Open WebUI on your own infrastructure." -keywords: ["Open WebUI vs Claude", "Claude alternative", "self-hosted Claude", "Claude alternative self-hosted", "use Claude on own server"] +keywords: ["Open WebUI vs Claude", "Claude alternative", "self-hosted Claude", "Claude alternative self-hosted", "use Claude on own server", "open webui alternative"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/dify.mdx b/docs/alternatives/dify.mdx index a085b1833..bf644102c 100644 --- a/docs/alternatives/dify.mdx +++ b/docs/alternatives/dify.mdx @@ -3,7 +3,7 @@ sidebar_position: 20 title: "Open WebUI vs Dify" sidebar_label: "Open WebUI & Dify" description: "Open WebUI vs Dify compared. An AI chat platform and a visual workflow builder for different use cases." 
-keywords: ["Open WebUI vs Dify", "Dify alternative", "AI workflow builder"] +keywords: ["Open WebUI vs Dify", "Dify alternative", "AI workflow builder", "AI workflow builder comparison", "open webui alternative"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/gemini.mdx b/docs/alternatives/gemini.mdx index 9ebcea4c0..67a005e83 100644 --- a/docs/alternatives/gemini.mdx +++ b/docs/alternatives/gemini.mdx @@ -3,7 +3,7 @@ sidebar_position: 32 title: "Self-Hosted Gemini Alternative" sidebar_label: "Open WebUI & Gemini" description: "Looking for a self-hosted Gemini alternative? Use Google AI models through Open WebUI on your own terms." -keywords: ["Open WebUI vs Gemini", "Gemini alternative", "self-hosted Gemini", "Gemini alternative self-hosted", "Google AI self-hosted"] +keywords: ["Open WebUI vs Gemini", "Gemini alternative", "self-hosted Gemini", "Gemini alternative self-hosted", "Google AI self-hosted", "open webui alternative"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/jan.mdx b/docs/alternatives/jan.mdx index bb8e9ba8b..abeba6da1 100644 --- a/docs/alternatives/jan.mdx +++ b/docs/alternatives/jan.mdx @@ -3,7 +3,7 @@ sidebar_position: 4 title: "Open WebUI & Jan" sidebar_label: "Open WebUI & Jan" description: "How Open WebUI and Jan work together. Two local-first AI tools with different strengths." -keywords: ["Open WebUI vs Jan", "Jan AI alternative", "local AI desktop app"] +keywords: ["Open WebUI vs Jan", "Jan AI alternative", "local AI desktop app", "open webui alternative"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/librechat.mdx b/docs/alternatives/librechat.mdx index f4227f580..3aa8a55e3 100644 --- a/docs/alternatives/librechat.mdx +++ b/docs/alternatives/librechat.mdx @@ -3,7 +3,7 @@ sidebar_position: 10 title: "Open WebUI vs LibreChat" sidebar_label: "Open WebUI & LibreChat" description: "Open WebUI vs LibreChat compared. 
Two respected self-hosted AI chat platforms with different strengths." -keywords: ["Open WebUI vs LibreChat", "LibreChat alternative", "self-hosted AI chat", "LibreChat comparison", "best self-hosted AI"] +keywords: ["Open WebUI vs LibreChat", "LibreChat alternative", "self-hosted AI chat", "LibreChat comparison", "best self-hosted AI", "open webui alternative"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/llama-cpp.mdx b/docs/alternatives/llama-cpp.mdx index 349b07689..1baafeff3 100644 --- a/docs/alternatives/llama-cpp.mdx +++ b/docs/alternatives/llama-cpp.mdx @@ -3,7 +3,7 @@ sidebar_position: 2 title: "Open WebUI & llama.cpp" sidebar_label: "Open WebUI & llama.cpp" description: "How to connect llama-server to Open WebUI. Integration guide for two essential local AI tools." -keywords: ["Open WebUI vs llama.cpp", "llama.cpp frontend", "llama-server alternative", "llama.cpp web UI", "llama-server web interface"] +keywords: ["Open WebUI vs llama.cpp", "llama.cpp frontend", "llama-server alternative", "llama.cpp web UI", "llama-server web interface", "open webui alternative"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/lm-studio.mdx b/docs/alternatives/lm-studio.mdx index f8758b0d0..654d90d0b 100644 --- a/docs/alternatives/lm-studio.mdx +++ b/docs/alternatives/lm-studio.mdx @@ -3,7 +3,7 @@ sidebar_position: 3 title: "Open WebUI & LM Studio" sidebar_label: "Open WebUI & LM Studio" description: "How Open WebUI and LM Studio work together. Two approaches to local AI that pair well." 
-keywords: ["Open WebUI vs LM Studio", "LM Studio alternative", "local AI interface"] +keywords: ["Open WebUI vs LM Studio", "LM Studio alternative", "local AI interface", "open webui alternative"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/msty.mdx b/docs/alternatives/msty.mdx index 21f7a97fa..8c195c174 100644 --- a/docs/alternatives/msty.mdx +++ b/docs/alternatives/msty.mdx @@ -3,7 +3,7 @@ sidebar_position: 13 title: "Open WebUI vs Msty" sidebar_label: "Open WebUI & Msty" description: "Open WebUI vs Msty compared. A web platform and a desktop app, two approaches to AI." -keywords: ["Open WebUI vs Msty", "Msty alternative", "AI desktop app"] +keywords: ["Open WebUI vs Msty", "Msty alternative", "AI desktop app", "AI desktop app comparison", "open webui alternative"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/ollama.mdx b/docs/alternatives/ollama.mdx index 223320590..5af11b829 100644 --- a/docs/alternatives/ollama.mdx +++ b/docs/alternatives/ollama.mdx @@ -3,7 +3,7 @@ sidebar_position: 1 title: "Open WebUI & Ollama" sidebar_label: "Open WebUI & Ollama" description: "How Open WebUI and Ollama work together. The most popular local AI pairing, with setup guide and honest comparison." -keywords: ["Open WebUI vs Ollama", "Ollama alternative", "Ollama frontend", "best Ollama UI", "Ollama web interface"] +keywords: ["Open WebUI vs Ollama", "Ollama alternative", "Ollama frontend", "best Ollama UI", "Ollama web interface", "open webui alternative"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/onyx.mdx b/docs/alternatives/onyx.mdx index 32e74ff33..acd780f23 100644 --- a/docs/alternatives/onyx.mdx +++ b/docs/alternatives/onyx.mdx @@ -3,7 +3,7 @@ sidebar_position: 15 title: "Open WebUI vs Onyx" sidebar_label: "Open WebUI & Onyx" description: "Open WebUI vs Onyx compared. A general-purpose AI platform and an enterprise search tool." 
-keywords: ["Open WebUI vs Onyx", "Onyx alternative", "Danswer alternative", "Onyx vs Open WebUI", "enterprise AI platform"] +keywords: ["Open WebUI vs Onyx", "Onyx alternative", "Danswer alternative", "Onyx vs Open WebUI", "enterprise AI platform", "open webui alternative"] --- import Head from '@docusaurus/Head'; From 50ce103c8f7f2f024c7d65640bcacefa8cdebdfc Mon Sep 17 00:00:00 2001 From: Timothy Jaeryang Baek Date: Wed, 13 May 2026 12:18:48 +0900 Subject: [PATCH 12/15] refac --- docs/alternatives/anythingllm.mdx | 2 +- docs/alternatives/chatgpt.mdx | 2 +- docs/alternatives/claude.mdx | 2 +- docs/alternatives/dify.mdx | 2 +- docs/alternatives/gemini.mdx | 2 +- docs/alternatives/index.mdx | 61 ++++++++++++++++++++++++------- docs/alternatives/jan.mdx | 2 +- docs/alternatives/librechat.mdx | 2 +- docs/alternatives/llama-cpp.mdx | 2 +- docs/alternatives/lm-studio.mdx | 2 +- docs/alternatives/msty.mdx | 2 +- docs/alternatives/ollama.mdx | 6 +-- docs/alternatives/onyx.mdx | 2 +- 13 files changed, 61 insertions(+), 28 deletions(-) diff --git a/docs/alternatives/anythingllm.mdx b/docs/alternatives/anythingllm.mdx index 2a6855ef0..20733c38a 100644 --- a/docs/alternatives/anythingllm.mdx +++ b/docs/alternatives/anythingllm.mdx @@ -93,4 +93,4 @@ Yes. AnythingLLM is MIT licensed. There's a free desktop app and a self-hosted D --- -**Related:** [Open WebUI & LibreChat](./librechat) · [Open WebUI & Dify](./dify) · [Open WebUI & Ollama](./ollama) +**Related:** [Open WebUI & LibreChat](/alternatives/librechat) · [Open WebUI & Dify](/alternatives/dify) · [Open WebUI & Ollama](/alternatives/ollama) diff --git a/docs/alternatives/chatgpt.mdx b/docs/alternatives/chatgpt.mdx index 8f709e135..545334c77 100644 --- a/docs/alternatives/chatgpt.mdx +++ b/docs/alternatives/chatgpt.mdx @@ -113,4 +113,4 @@ Yes. 
Many users run OpenAI for complex reasoning alongside local models via Olla --- -**Related:** [Open WebUI & Claude](./claude) · [Open WebUI & Gemini](./gemini) · [Open WebUI & Ollama](./ollama) +**Related:** [Open WebUI & Claude](/alternatives/claude) · [Open WebUI & Gemini](/alternatives/gemini) · [Open WebUI & Ollama](/alternatives/ollama) diff --git a/docs/alternatives/claude.mdx b/docs/alternatives/claude.mdx index 7263b2ee9..0a3f9e9ad 100644 --- a/docs/alternatives/claude.mdx +++ b/docs/alternatives/claude.mdx @@ -112,4 +112,4 @@ Yes. Extended thinking is supported for models that offer it, including Claude v --- -**Related:** [Open WebUI & ChatGPT](./chatgpt) · [Open WebUI & Gemini](./gemini) · [Open WebUI & Ollama](./ollama) +**Related:** [Open WebUI & ChatGPT](/alternatives/chatgpt) · [Open WebUI & Gemini](/alternatives/gemini) · [Open WebUI & Ollama](/alternatives/ollama) diff --git a/docs/alternatives/dify.mdx b/docs/alternatives/dify.mdx index bf644102c..efa6af7e3 100644 --- a/docs/alternatives/dify.mdx +++ b/docs/alternatives/dify.mdx @@ -111,4 +111,4 @@ The community edition is free to self-host. Dify is source available under a mod --- -**Related:** [Open WebUI & Onyx](./onyx) · [Open WebUI & LibreChat](./librechat) · [Open WebUI & AnythingLLM](./anythingllm) +**Related:** [Open WebUI & Onyx](/alternatives/onyx) · [Open WebUI & LibreChat](/alternatives/librechat) · [Open WebUI & AnythingLLM](/alternatives/anythingllm) diff --git a/docs/alternatives/gemini.mdx b/docs/alternatives/gemini.mdx index 67a005e83..32f682aaa 100644 --- a/docs/alternatives/gemini.mdx +++ b/docs/alternatives/gemini.mdx @@ -106,4 +106,4 @@ Yes, Open WebUI supports connecting to multiple providers at once. 
--- -**Related:** [Open WebUI & ChatGPT](./chatgpt) · [Open WebUI & Claude](./claude) · [Open WebUI & Ollama](./ollama) +**Related:** [Open WebUI & ChatGPT](/alternatives/chatgpt) · [Open WebUI & Claude](/alternatives/claude) · [Open WebUI & Ollama](/alternatives/ollama) diff --git a/docs/alternatives/index.mdx b/docs/alternatives/index.mdx index 6b67aa609..611259b19 100644 --- a/docs/alternatives/index.mdx +++ b/docs/alternatives/index.mdx @@ -1,32 +1,65 @@ --- sidebar_position: 1500 title: "🌍 Alternatives to Open WebUI" -description: "Looking for an Open WebUI alternative? Here are the AI tools we actually use and love." -keywords: ["open webui alternatives", "open webui alternative", "best open webui alternatives", "open webui vs", "open webui comparison", "tools like open webui"] +description: "Looking for an Open WebUI alternative? Honest comparisons of local runners, desktop apps, enterprise platforms, and commercial AI by the Open WebUI team." +keywords: ["open webui alternatives", "open webui alternative", "best open webui alternatives", "best open webui alternative", "open webui vs", "open webui comparison", "tools like open webui", "self-hosted AI chat", "self-hosted ChatGPT alternative"] --- +import Head from '@docusaurus/Head'; + + + + + # Alternatives to Open WebUI The AI space is full of great projects, and we're genuinely happy about that. More tools means more people get access to AI in a way that works for them. We get asked "what else is out there?" a lot, so we put this list together. If Open WebUI isn't quite the right fit for your use case, these are the tools we'd point you to. We've actually used them, built alongside them, or just think they do something really well. Everything listed here is free to get started with. 
+## How to Choose + +- **Running local models?** Start with [Ollama](/alternatives/ollama) or [llama.cpp](/alternatives/llama-cpp), both pair natively with Open WebUI +- **Want a desktop app?** [LM Studio](/alternatives/lm-studio) or [Jan](/alternatives/jan) are excellent standalone options +- **Need document Q&A?** [AnythingLLM](/alternatives/anythingllm) makes private RAG simple +- **Multi-provider chat?** [LibreChat](/alternatives/librechat) handles this well +- **Building AI workflows?** [Dify](/alternatives/dify) has a visual workflow designer +- **Just want frontier AI?** [ChatGPT](/alternatives/chatgpt), [Claude](/alternatives/claude), and [Gemini](/alternatives/gemini) are all available through Open WebUI via API + +--- + +## All Alternatives + | Tool | What It's Great For | License | Works with Open WebUI | | | :--- | :--- | :--- | :--- | :--- | -| **Ollama** | The most popular way to run local models | Open Source (MIT) | Native integration | [Learn more →](./ollama) | -| **llama.cpp** | The engine that made local AI possible | Open Source (MIT) | Via API | [Learn more →](./llama-cpp) | -| **LM Studio** | Beautiful desktop app for local model management | Proprietary (free) | Via API | [Learn more →](./lm-studio) | -| **Jan** | Simple, privacy-first local AI desktop app | Open Source (Apache 2.0) | Via API | [Learn more →](./jan) | -| **AnythingLLM** | Private document Q&A done right | Open Source (MIT) | | [Learn more →](./anythingllm) | -| **LibreChat** | Solid self-hosted multi-provider chat | Open Source (MIT) | | [Learn more →](./librechat) | -| **Msty** | Refined desktop hub for local and cloud models | Proprietary (free tier) | | [Learn more →](./msty) | -| **Onyx** | Enterprise search with 40+ connectors | Source Available (MIT core + Enterprise License for ee/) | | [Learn more →](./onyx) | -| **Dify** | Visual workflow builder for LLM applications | Source Available (modified Apache 2.0) | Via API | [Learn more →](./dify) | -| **ChatGPT / 
OpenAI** | The one that started it all, we use it daily | Commercial (free tier) | Via OpenAI API | [Learn more →](./chatgpt) | -| **Claude / Anthropic** | Exceptional writing and reasoning, we use it daily | Commercial (free tier) | Via Anthropic API | [Learn more →](./claude) | -| **Gemini / Google** | Multimodal AI with Google ecosystem integration | Commercial (free tier) | Via Google AI API | [Learn more →](./gemini) | +| **Ollama** | The most popular way to run local models | Open Source (MIT) | Native integration | [Learn more →](/alternatives/ollama) | +| **llama.cpp** | The engine that made local AI possible | Open Source (MIT) | Via API | [Learn more →](/alternatives/llama-cpp) | +| **LM Studio** | Beautiful desktop app for local model management | Proprietary (free) | Via API | [Learn more →](/alternatives/lm-studio) | +| **Jan** | Simple, privacy-first local AI desktop app | Open Source (Apache 2.0) | Via API | [Learn more →](/alternatives/jan) | +| **AnythingLLM** | Private document Q&A done right | Open Source (MIT) | | [Learn more →](/alternatives/anythingllm) | +| **LibreChat** | Solid self-hosted multi-provider chat | Open Source (MIT) | | [Learn more →](/alternatives/librechat) | +| **Msty** | Refined desktop hub for local and cloud models | Proprietary (free tier) | | [Learn more →](/alternatives/msty) | +| **Onyx** | Enterprise search with 40+ connectors | Source Available (MIT core + Enterprise License for ee/) | | [Learn more →](/alternatives/onyx) | +| **Dify** | Visual workflow builder for LLM applications | Source Available (modified Apache 2.0) | Via API | [Learn more →](/alternatives/dify) | +| **ChatGPT / OpenAI** | The one that started it all, we use it daily | Commercial (free tier) | Via OpenAI API | [Learn more →](/alternatives/chatgpt) | +| **Claude / Anthropic** | Exceptional writing and reasoning, we use it daily | Commercial (free tier) | Via Anthropic API | [Learn more →](/alternatives/claude) | +| **Gemini / Google** | 
Multimodal AI with Google ecosystem integration | Commercial (free tier) | Via Google AI API | [Learn more →](/alternatives/gemini) | Open WebUI itself is **source available** under the [Open WebUI License](/license). Every project on this list is built by people who care about making AI more accessible. That pushes all of us, Open WebUI included, to be better. We'd love to be your first choice, but we'd rather you have great options than no options. +--- + +## Frequently Asked Questions + +**Is Open WebUI free?** +Yes. The community edition is free for unlimited users. Enterprise plans are also available. + +**Can I self-host Open WebUI?** +Yes. Open WebUI runs on your own infrastructure via Docker, Kubernetes, pip, or the desktop app. Your data stays on your hardware. + +**What models does Open WebUI support?** +Open WebUI connects to any OpenAI-compatible API, plus native Ollama integration. This includes OpenAI, Anthropic, Google, Azure, AWS Bedrock, llama.cpp, LM Studio, and hundreds of other providers. + +*Last updated: May 2026* diff --git a/docs/alternatives/jan.mdx b/docs/alternatives/jan.mdx index abeba6da1..b15aae4df 100644 --- a/docs/alternatives/jan.mdx +++ b/docs/alternatives/jan.mdx @@ -101,4 +101,4 @@ Yes. Jan is open source under the Apache 2.0 license. --- -**Related:** [Open WebUI & Ollama](./ollama) · [Open WebUI & LM Studio](./lm-studio) · [Open WebUI & llama.cpp](./llama-cpp) +**Related:** [Open WebUI & Ollama](/alternatives/ollama) · [Open WebUI & LM Studio](/alternatives/lm-studio) · [Open WebUI & llama.cpp](/alternatives/llama-cpp) diff --git a/docs/alternatives/librechat.mdx b/docs/alternatives/librechat.mdx index 3aa8a55e3..8609f75e5 100644 --- a/docs/alternatives/librechat.mdx +++ b/docs/alternatives/librechat.mdx @@ -101,4 +101,4 @@ Yes. 
Both connect to the same backends (Ollama, OpenAI, etc.), so you can run bo --- -**Related:** [Open WebUI & AnythingLLM](./anythingllm) · [Open WebUI & Msty](./msty) · [Open WebUI & Ollama](./ollama) +**Related:** [Open WebUI & AnythingLLM](/alternatives/anythingllm) · [Open WebUI & Msty](/alternatives/msty) · [Open WebUI & Ollama](/alternatives/ollama) diff --git a/docs/alternatives/llama-cpp.mdx b/docs/alternatives/llama-cpp.mdx index 1baafeff3..c9dc4985f 100644 --- a/docs/alternatives/llama-cpp.mdx +++ b/docs/alternatives/llama-cpp.mdx @@ -98,4 +98,4 @@ Yes. llama.cpp is MIT licensed and free for any use. --- -**Related:** [Open WebUI & Ollama](./ollama) · [Open WebUI & LM Studio](./lm-studio) · [Open WebUI & Jan](./jan) +**Related:** [Open WebUI & Ollama](/alternatives/ollama) · [Open WebUI & LM Studio](/alternatives/lm-studio) · [Open WebUI & Jan](/alternatives/jan) diff --git a/docs/alternatives/lm-studio.mdx b/docs/alternatives/lm-studio.mdx index 654d90d0b..cdecae6cc 100644 --- a/docs/alternatives/lm-studio.mdx +++ b/docs/alternatives/lm-studio.mdx @@ -109,4 +109,4 @@ Yes. LM Studio is free for personal and commercial use, though it is proprietary --- -**Related:** [Open WebUI & Ollama](./ollama) · [Open WebUI & llama.cpp](./llama-cpp) · [Open WebUI & Jan](./jan) +**Related:** [Open WebUI & Ollama](/alternatives/ollama) · [Open WebUI & llama.cpp](/alternatives/llama-cpp) · [Open WebUI & Jan](/alternatives/jan) diff --git a/docs/alternatives/msty.mdx b/docs/alternatives/msty.mdx index 8c195c174..7cb7acbd5 100644 --- a/docs/alternatives/msty.mdx +++ b/docs/alternatives/msty.mdx @@ -97,4 +97,4 @@ No. Msty is proprietary software with a free tier. 
--- -**Related:** [Open WebUI & LM Studio](./lm-studio) · [Open WebUI & LibreChat](./librechat) · [Open WebUI & Jan](./jan) +**Related:** [Open WebUI & LM Studio](/alternatives/lm-studio) · [Open WebUI & LibreChat](/alternatives/librechat) · [Open WebUI & Jan](/alternatives/jan) diff --git a/docs/alternatives/ollama.mdx b/docs/alternatives/ollama.mdx index 5af11b829..4eaca4ba4 100644 --- a/docs/alternatives/ollama.mdx +++ b/docs/alternatives/ollama.mdx @@ -88,8 +88,8 @@ Open WebUI auto-detects Ollama when running on the same machine. All your Ollama Ollama's OpenAI-compatible API means it works with many tools. If Open WebUI isn't your style, other projects that pair well with Ollama include: -- [**LibreChat**](./librechat) for multi-provider chat with model comparison -- [**AnythingLLM**](./anythingllm) for workspace-based document Q&A +- [**LibreChat**](/alternatives/librechat) for multi-provider chat with model comparison +- [**AnythingLLM**](/alternatives/anythingllm) for workspace-based document Q&A --- @@ -115,4 +115,4 @@ No. Open WebUI works with any OpenAI-compatible API, including llama.cpp, LM Stu --- -**Related:** [Open WebUI & llama.cpp](./llama-cpp) · [Open WebUI & LM Studio](./lm-studio) · [Open WebUI & Jan](./jan) +**Related:** [Open WebUI & llama.cpp](/alternatives/llama-cpp) · [Open WebUI & LM Studio](/alternatives/lm-studio) · [Open WebUI & Jan](/alternatives/jan) diff --git a/docs/alternatives/onyx.mdx b/docs/alternatives/onyx.mdx index acd780f23..c944d3b88 100644 --- a/docs/alternatives/onyx.mdx +++ b/docs/alternatives/onyx.mdx @@ -97,4 +97,4 @@ It depends on your needs. 
If your priority is searching across internal tools wi --- -**Related:** [Open WebUI & Dify](./dify) · [Open WebUI & AnythingLLM](./anythingllm) · [Open WebUI & LibreChat](./librechat) +**Related:** [Open WebUI & Dify](/alternatives/dify) · [Open WebUI & AnythingLLM](/alternatives/anythingllm) · [Open WebUI & LibreChat](/alternatives/librechat) From 7e11229b9c94a25d06d16b337c0ec1f4dccf3f55 Mon Sep 17 00:00:00 2001 From: Timothy Jaeryang Baek Date: Wed, 13 May 2026 12:20:40 +0900 Subject: [PATCH 13/15] refac --- docs/alternatives/chatgpt.mdx | 2 +- docs/alternatives/claude.mdx | 2 +- docs/alternatives/dify.mdx | 2 +- docs/alternatives/gemini.mdx | 2 +- docs/alternatives/jan.mdx | 2 +- docs/alternatives/librechat.mdx | 2 +- docs/alternatives/llama-cpp.mdx | 2 +- docs/alternatives/lm-studio.mdx | 2 +- docs/alternatives/msty.mdx | 2 +- docs/alternatives/ollama.mdx | 2 +- docs/alternatives/onyx.mdx | 2 +- 11 files changed, 11 insertions(+), 11 deletions(-) diff --git a/docs/alternatives/chatgpt.mdx b/docs/alternatives/chatgpt.mdx index 545334c77..07e78df0e 100644 --- a/docs/alternatives/chatgpt.mdx +++ b/docs/alternatives/chatgpt.mdx @@ -3,7 +3,7 @@ sidebar_position: 30 title: "Self-Hosted ChatGPT Alternative" sidebar_label: "Open WebUI & ChatGPT" description: "Looking for a self-hosted ChatGPT alternative? Open WebUI connects to the OpenAI API and runs on your own infrastructure." 
-keywords: ["Open WebUI vs ChatGPT", "ChatGPT alternative", "self-hosted ChatGPT", "ChatGPT alternative open source", "ChatGPT self-hosted", "open webui alternative"] +keywords: ["Open WebUI vs ChatGPT", "ChatGPT alternative", "self-hosted ChatGPT", "ChatGPT alternative open source", "ChatGPT self-hosted", "open webui alternative", "open webui comparison"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/claude.mdx b/docs/alternatives/claude.mdx index 0a3f9e9ad..07456b829 100644 --- a/docs/alternatives/claude.mdx +++ b/docs/alternatives/claude.mdx @@ -3,7 +3,7 @@ sidebar_position: 31 title: "Self-Hosted Claude Alternative" sidebar_label: "Open WebUI & Claude" description: "Looking for a self-hosted Claude alternative? Use Claude models through Open WebUI on your own infrastructure." -keywords: ["Open WebUI vs Claude", "Claude alternative", "self-hosted Claude", "Claude alternative self-hosted", "use Claude on own server", "open webui alternative"] +keywords: ["Open WebUI vs Claude", "Claude alternative", "self-hosted Claude", "Claude alternative self-hosted", "use Claude on own server", "open webui alternative", "open webui comparison"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/dify.mdx b/docs/alternatives/dify.mdx index efa6af7e3..4945d914e 100644 --- a/docs/alternatives/dify.mdx +++ b/docs/alternatives/dify.mdx @@ -3,7 +3,7 @@ sidebar_position: 20 title: "Open WebUI vs Dify" sidebar_label: "Open WebUI & Dify" description: "Open WebUI vs Dify compared. An AI chat platform and a visual workflow builder for different use cases." 
-keywords: ["Open WebUI vs Dify", "Dify alternative", "AI workflow builder", "AI workflow builder comparison", "open webui alternative"] +keywords: ["Open WebUI vs Dify", "Dify alternative", "AI workflow builder", "AI workflow builder comparison", "open webui alternative", "open webui comparison"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/gemini.mdx b/docs/alternatives/gemini.mdx index 32f682aaa..39c21d889 100644 --- a/docs/alternatives/gemini.mdx +++ b/docs/alternatives/gemini.mdx @@ -3,7 +3,7 @@ sidebar_position: 32 title: "Self-Hosted Gemini Alternative" sidebar_label: "Open WebUI & Gemini" description: "Looking for a self-hosted Gemini alternative? Use Google AI models through Open WebUI on your own terms." -keywords: ["Open WebUI vs Gemini", "Gemini alternative", "self-hosted Gemini", "Gemini alternative self-hosted", "Google AI self-hosted", "open webui alternative"] +keywords: ["Open WebUI vs Gemini", "Gemini alternative", "self-hosted Gemini", "Gemini alternative self-hosted", "Google AI self-hosted", "open webui alternative", "open webui comparison"] --- import Head from '@docusaurus/Head'; diff --git a/docs/alternatives/jan.mdx b/docs/alternatives/jan.mdx index b15aae4df..6a8cb6923 100644 --- a/docs/alternatives/jan.mdx +++ b/docs/alternatives/jan.mdx @@ -3,7 +3,7 @@ sidebar_position: 4 title: "Open WebUI & Jan" sidebar_label: "Open WebUI & Jan" description: "How Open WebUI and Jan work together. Two local-first AI tools with different strengths." 
-keywords: ["Open WebUI vs Jan", "Jan AI alternative", "local AI desktop app", "open webui alternative"]
+keywords: ["Open WebUI vs Jan", "Jan AI alternative", "local AI desktop app", "open webui alternative", "open webui comparison"]
 ---
 
 import Head from '@docusaurus/Head';
diff --git a/docs/alternatives/librechat.mdx b/docs/alternatives/librechat.mdx
index 8609f75e5..75416720a 100644
--- a/docs/alternatives/librechat.mdx
+++ b/docs/alternatives/librechat.mdx
@@ -3,7 +3,7 @@ sidebar_position: 10
 title: "Open WebUI vs LibreChat"
 sidebar_label: "Open WebUI & LibreChat"
 description: "Open WebUI vs LibreChat compared. Two respected self-hosted AI chat platforms with different strengths."
-keywords: ["Open WebUI vs LibreChat", "LibreChat alternative", "self-hosted AI chat", "LibreChat comparison", "best self-hosted AI", "open webui alternative"]
+keywords: ["Open WebUI vs LibreChat", "LibreChat alternative", "self-hosted AI chat", "LibreChat comparison", "best self-hosted AI", "open webui alternative", "open webui comparison"]
 ---
 
 import Head from '@docusaurus/Head';
diff --git a/docs/alternatives/llama-cpp.mdx b/docs/alternatives/llama-cpp.mdx
index c9dc4985f..6c53448b3 100644
--- a/docs/alternatives/llama-cpp.mdx
+++ b/docs/alternatives/llama-cpp.mdx
@@ -3,7 +3,7 @@ sidebar_position: 2
 title: "Open WebUI & llama.cpp"
 sidebar_label: "Open WebUI & llama.cpp"
 description: "How to connect llama-server to Open WebUI. Integration guide for two essential local AI tools."
-keywords: ["Open WebUI vs llama.cpp", "llama.cpp frontend", "llama-server alternative", "llama.cpp web UI", "llama-server web interface", "open webui alternative"]
+keywords: ["Open WebUI vs llama.cpp", "llama.cpp frontend", "llama-server alternative", "llama.cpp web UI", "llama-server web interface", "open webui alternative", "open webui comparison"]
 ---
 
 import Head from '@docusaurus/Head';
diff --git a/docs/alternatives/lm-studio.mdx b/docs/alternatives/lm-studio.mdx
index cdecae6cc..56aa8ba10 100644
--- a/docs/alternatives/lm-studio.mdx
+++ b/docs/alternatives/lm-studio.mdx
@@ -3,7 +3,7 @@ sidebar_position: 3
 title: "Open WebUI & LM Studio"
 sidebar_label: "Open WebUI & LM Studio"
 description: "How Open WebUI and LM Studio work together. Two approaches to local AI that pair well."
-keywords: ["Open WebUI vs LM Studio", "LM Studio alternative", "local AI interface", "open webui alternative"]
+keywords: ["Open WebUI vs LM Studio", "LM Studio alternative", "local AI interface", "open webui alternative", "open webui comparison"]
 ---
 
 import Head from '@docusaurus/Head';
diff --git a/docs/alternatives/msty.mdx b/docs/alternatives/msty.mdx
index 7cb7acbd5..1af43d44f 100644
--- a/docs/alternatives/msty.mdx
+++ b/docs/alternatives/msty.mdx
@@ -3,7 +3,7 @@ sidebar_position: 13
 title: "Open WebUI vs Msty"
 sidebar_label: "Open WebUI & Msty"
 description: "Open WebUI vs Msty compared. A web platform and a desktop app, two approaches to AI."
-keywords: ["Open WebUI vs Msty", "Msty alternative", "AI desktop app", "AI desktop app comparison", "open webui alternative"]
+keywords: ["Open WebUI vs Msty", "Msty alternative", "AI desktop app", "AI desktop app comparison", "open webui alternative", "open webui comparison"]
 ---
 
 import Head from '@docusaurus/Head';
diff --git a/docs/alternatives/ollama.mdx b/docs/alternatives/ollama.mdx
index 4eaca4ba4..3b33604de 100644
--- a/docs/alternatives/ollama.mdx
+++ b/docs/alternatives/ollama.mdx
@@ -3,7 +3,7 @@ sidebar_position: 1
 title: "Open WebUI & Ollama"
 sidebar_label: "Open WebUI & Ollama"
 description: "How Open WebUI and Ollama work together. The most popular local AI pairing, with setup guide and honest comparison."
-keywords: ["Open WebUI vs Ollama", "Ollama alternative", "Ollama frontend", "best Ollama UI", "Ollama web interface", "open webui alternative"]
+keywords: ["Open WebUI vs Ollama", "Ollama alternative", "Ollama frontend", "best Ollama UI", "Ollama web interface", "open webui alternative", "open webui comparison"]
 ---
 
 import Head from '@docusaurus/Head';
diff --git a/docs/alternatives/onyx.mdx b/docs/alternatives/onyx.mdx
index c944d3b88..86c141542 100644
--- a/docs/alternatives/onyx.mdx
+++ b/docs/alternatives/onyx.mdx
@@ -3,7 +3,7 @@ sidebar_position: 15
 title: "Open WebUI vs Onyx"
 sidebar_label: "Open WebUI & Onyx"
 description: "Open WebUI vs Onyx compared. A general-purpose AI platform and an enterprise search tool."
-keywords: ["Open WebUI vs Onyx", "Onyx alternative", "Danswer alternative", "Onyx vs Open WebUI", "enterprise AI platform", "open webui alternative"]
+keywords: ["Open WebUI vs Onyx", "Onyx alternative", "Danswer alternative", "Onyx vs Open WebUI", "enterprise AI platform", "open webui alternative", "open webui comparison"]
 ---
 
 import Head from '@docusaurus/Head';

From 7dde7282d7b7f21587c3de015a0df0d5744b2447 Mon Sep 17 00:00:00 2001
From: Timothy Jaeryang Baek
Date: Wed, 13 May 2026 12:20:58 +0900
Subject: [PATCH 14/15] refac

---
 docs/alternatives/anythingllm.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/alternatives/anythingllm.mdx b/docs/alternatives/anythingllm.mdx
index 20733c38a..55ebf3f5d 100644
--- a/docs/alternatives/anythingllm.mdx
+++ b/docs/alternatives/anythingllm.mdx
@@ -3,7 +3,7 @@ sidebar_position: 9
 title: "Open WebUI vs AnythingLLM"
 sidebar_label: "Open WebUI & AnythingLLM"
 description: "Open WebUI vs AnythingLLM compared. Two of our favorite projects for local AI and document Q&A."
-keywords: ["Open WebUI vs AnythingLLM", "AnythingLLM alternative", "document QA", "local document AI", "private RAG", "open webui alternative", "best document AI", "self-hosted RAG"]
+keywords: ["Open WebUI vs AnythingLLM", "AnythingLLM alternative", "document QA", "local document AI", "private RAG", "open webui alternative", "open webui comparison", "best document AI", "self-hosted RAG"]
 ---
 
 import Head from '@docusaurus/Head';

From 2c80c489fb7cb426695dce76212d88ff0c381a9f Mon Sep 17 00:00:00 2001
From: Timothy Jaeryang Baek
Date: Wed, 13 May 2026 12:30:08 +0900
Subject: [PATCH 15/15] refac

---
 docs/alternatives/index.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/alternatives/index.mdx b/docs/alternatives/index.mdx
index 611259b19..c76664062 100644
--- a/docs/alternatives/index.mdx
+++ b/docs/alternatives/index.mdx
@@ -8,7 +8,7 @@ keywords: ["open webui alternatives", "open webui alternative", "best open webui
 
 import Head from '@docusaurus/Head';
 
-
+
 # Alternatives to Open WebUI