diff --git a/docs/alternatives/anythingllm.mdx b/docs/alternatives/anythingllm.mdx new file mode 100644 index 000000000..55ebf3f5d --- /dev/null +++ b/docs/alternatives/anythingllm.mdx @@ -0,0 +1,96 @@ +--- +sidebar_position: 9 +title: "Open WebUI vs AnythingLLM" +sidebar_label: "Open WebUI & AnythingLLM" +description: "Open WebUI vs AnythingLLM compared. Two of our favorite projects for local AI and document Q&A." +keywords: ["Open WebUI vs AnythingLLM", "AnythingLLM alternative", "document QA", "local document AI", "private RAG", "open webui alternative", "open webui comparison", "best document AI", "self-hosted RAG"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & AnythingLLM + +*Last updated: May 2026* + +[AnythingLLM](https://anythingllm.com/) by Mintplex Labs is one of our favorite projects in the local AI space. They've made private document Q&A genuinely accessible, the workspace-based approach to organizing knowledge is intuitive, and the team behind it is great. If you're looking for a straightforward way to chat with your documents locally, AnythingLLM is well worth a look. 
+ +[GitHub](https://github.com/Mintplex-Labs/anything-llm) · [MIT License](https://github.com/Mintplex-Labs/anything-llm/blob/master/LICENSE) + +--- + +## What AnythingLLM Does Well + +- **Document Q&A made simple** so you can upload PDFs, code repos, and websites and start asking questions immediately +- **Workspace model** providing clean separation of different knowledge bases and conversations +- **Embedding customization** with control over chunking, overlap, and embedding model selection +- **Desktop app** for a standalone local experience without Docker or servers +- **Cloud deployment option** for teams that want hosted document Q&A +- **Privacy-first** with everything running locally so your documents stay on your machine +- **Multi-modal support** for handling images and other file types alongside text +- **Agent support** with built-in capabilities for tool use and web search +- **Active development** with a responsive team and frequent releases +- **MIT licensed** + +--- + +## What Open WebUI Does Well + +- **Platform breadth** including Chat, Notes, Channels, Automations, Open Terminal, voice/video calls, and image generation +- **Advanced RAG pipeline** with 9 vector databases, 5 extraction engines, hybrid BM25 + vector search with cross-encoder reranking, and agentic retrieval +- **Python extensibility** with custom tools, MCP servers, pipelines, OpenAPI integration, and a community marketplace +- **Team features** including Channels for real-time collaboration, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Model agents** that wrap any model with instructions, tools, knowledge, and parameters +- **Enterprise scale** with Kubernetes, horizontal scaling, Redis-backed sessions, OpenTelemetry, and analytics + +--- + +## At a Glance + +| | Open WebUI | AnythingLLM | +| :--- | :--- | :--- | +| **Focus** | Full AI platform with knowledge, tools, and team features | Document Q&A and workspace-based RAG | +| **RAG approach** | 9 vector DBs, 5 extraction engines, 
hybrid search, reranking | Built-in vector DB with straightforward document ingestion | +| **Organization** | Folders, tags, knowledge bases, notes, channels | Workspaces with dedicated knowledge | +| **Multi-provider** | Ollama, OpenAI, Anthropic, Google, Azure, Bedrock, and more | Ollama, OpenAI, Anthropic, and more | +| **Extensibility** | Python tools, MCP, OpenAPI, pipelines | Agent tools and web search | +| **Desktop app** | Yes | Yes | +| **Multi-user** | SSO/OIDC, LDAP, SCIM 2.0, RBAC, groups | Multi-user with permissions | +| **License** | Open WebUI License | MIT | + +--- + +## When to Use Each + +**Choose AnythingLLM if** you mostly want to chat with your documents. The workspace model keeps different projects cleanly separated, and the desktop app makes it easy to get started without any server setup. + +**Choose Open WebUI if** you need a broader platform with team collaboration, multi-provider support, extensibility, or enterprise features alongside document Q&A. + +**They solve different problems.** AnythingLLM focuses on making document Q&A as simple as possible. Open WebUI takes a wider view with chat, knowledge, collaboration, and tools. Both are good at what they do. + +--- + +*Two projects making private AI and document Q&A accessible. Different scope, same commitment to keeping your data under your control.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**How do AnythingLLM and Open WebUI compare?** +AnythingLLM leans into document Q&A with its workspace model. Open WebUI also has knowledge bases, team features, extensibility, and multi-provider support. They have different strengths. + +**Is AnythingLLM free?** +Yes. AnythingLLM is MIT licensed. There's a free desktop app and a self-hosted Docker version. 
+ + +--- + +**Related:** [Open WebUI & LibreChat](/alternatives/librechat) · [Open WebUI & Dify](/alternatives/dify) · [Open WebUI & Ollama](/alternatives/ollama) diff --git a/docs/alternatives/chatgpt.mdx b/docs/alternatives/chatgpt.mdx new file mode 100644 index 000000000..07e78df0e --- /dev/null +++ b/docs/alternatives/chatgpt.mdx @@ -0,0 +1,116 @@ +--- +sidebar_position: 30 +title: "Self-Hosted ChatGPT Alternative" +sidebar_label: "Open WebUI & ChatGPT" +description: "Looking for a self-hosted ChatGPT alternative? Open WebUI connects to the OpenAI API and runs on your own infrastructure." +keywords: ["Open WebUI vs ChatGPT", "ChatGPT alternative", "self-hosted ChatGPT", "ChatGPT alternative open source", "ChatGPT self-hosted", "open webui alternative", "open webui comparison"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & ChatGPT + +*Last updated: May 2026* + +[ChatGPT](https://chat.openai.com/) by OpenAI introduced hundreds of millions of people to AI and set the bar for what a conversational AI experience should feel like. We use it daily. GPT-5.5, the o-series reasoning models, and the constant pace of innovation have kept it at the forefront, and honestly, it keeps pushing us to make Open WebUI better too. 
+ +Commercial · Free tier available + +--- + +## What ChatGPT Does Well + +- **Frontier models** including GPT-5.5, o3, and the o-series reasoning models +- **Canvas** for collaborative document and code editing inside conversations +- **Projects** for organizing conversations with persistent context and custom instructions +- **Deep research** that synthesizes information across multiple sources into comprehensive reports +- **Refined experience** from years of iteration on the interface +- **GPT Store** with an ecosystem of custom GPTs built by the community +- **Multimodal** with vision, voice, image generation (DALL-E), and code execution +- **Memory** that remembers context across conversations +- **Operator** for agentic web browsing and task automation +- **Enterprise tier** with SSO, admin controls, and data privacy commitments +- **Zero setup** where you sign up and start, no installation needed + +--- + +## What Open WebUI Does Well + +- **Self-hosted** so the platform itself runs on your hardware +- **Any model, any provider** so you can use GPT-5.5 *and* Claude *and* Gemini *and* local models all in one interface +- **Knowledge & RAG** for building knowledge bases from your documents with advanced retrieval +- **Python extensibility** with custom tools, MCP servers, pipelines, and community extensions +- **Team platform** with Channels, Notes, Automations, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full sandboxed computing environment +- **Free community edition** for unlimited users on your own infrastructure + +--- + +## At a Glance + +| | Open WebUI | ChatGPT | +| :--- | :--- | :--- | +| **Models** | Any model from any provider | OpenAI models (GPT-5.5, o-series) | +| **Data** | Self-hosted, your infrastructure | Cloud-hosted by OpenAI | +| **Knowledge & RAG** | 9 vector DBs, 5 extraction engines, hybrid search | File upload with in-chat context | +| **Custom agents** | Model agents with tools, knowledge, and parameters | 
Custom GPTs via GPT Store | +| **Code execution** | Python in-browser + Open Terminal | Built-in code interpreter | +| **Extensibility** | Python tools, MCP, OpenAPI, pipelines | GPT Actions and plugins | +| **Pricing** | Free community edition; Enterprise plans available | Free tier, Plus, Team, and Enterprise plans | + +--- + +## When to Use Each + +**Choose ChatGPT if** you want the simplest possible path to frontier AI. No installation, no configuration, just sign up and start. The native experience with Canvas, Projects, and deep research features is polished and constantly improving. + +**Choose Open WebUI if** you want to run on your own infrastructure, connect to multiple providers in one interface, build knowledge bases from your documents, or need team features like RBAC and SSO included in the free community edition. + +**Use both.** Many people do. Connect Open WebUI to the OpenAI API and use GPT-5.5 alongside Claude, Gemini, and local models, all in one place. Use ChatGPT directly when you want Canvas or deep research, and Open WebUI when you need your own knowledge bases or team workspace. + +--- + +## Use OpenAI Models Through Open WebUI + +Open WebUI connects to the OpenAI API, so all of OpenAI's models are available alongside Open WebUI's knowledge management, tools, and team features. + +**How to connect:** + +1. Get an API key from [platform.openai.com](https://platform.openai.com/) +2. In Open WebUI, go to **Admin → Settings → Connections** +3. Add a new OpenAI connection with your API key +4. OpenAI models will appear in your model selector + +Many users run OpenAI models for complex reasoning alongside local models via Ollama for privacy-sensitive tasks, all in the same interface. + +--- + +*ChatGPT brought AI to the world. 
Open WebUI is one way to use those same models on your own infrastructure.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + +--- + +## Frequently Asked Questions + +**Can I self-host ChatGPT?** +Not ChatGPT itself, but Open WebUI connects to the OpenAI API so you can use the same models. Open WebUI runs on your infrastructure, though API calls still go to OpenAI. + +**Can I use OpenAI models in Open WebUI?** +Yes. Add your OpenAI API key in Settings and all OpenAI models appear in the model selector. + +**Is Open WebUI a ChatGPT alternative?** +As a self-hosted interface, yes. Open WebUI connects to the OpenAI API so you can use the same models on your own infrastructure (API calls still go to the provider), and it also connects to other providers and local models. + +**Can I use ChatGPT and local models together?** +Yes. Many users run OpenAI for complex reasoning alongside local models via Ollama for privacy-sensitive tasks, all in the same Open WebUI interface. + + +--- + +**Related:** [Open WebUI & Claude](/alternatives/claude) · [Open WebUI & Gemini](/alternatives/gemini) · [Open WebUI & Ollama](/alternatives/ollama) diff --git a/docs/alternatives/claude.mdx b/docs/alternatives/claude.mdx new file mode 100644 index 000000000..07456b829 --- /dev/null +++ b/docs/alternatives/claude.mdx @@ -0,0 +1,115 @@ +--- +sidebar_position: 31 +title: "Self-Hosted Claude Alternative" +sidebar_label: "Open WebUI & Claude" +description: "Looking for a self-hosted Claude alternative? Use Claude models through Open WebUI on your own infrastructure."
+keywords: ["Open WebUI vs Claude", "Claude alternative", "self-hosted Claude", "Claude alternative self-hosted", "use Claude on own server", "open webui alternative", "open webui comparison"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & Claude + +*Last updated: May 2026* + +[Claude](https://claude.ai/) by Anthropic has earned a loyal following for its strength in writing, reasoning, and long-context analysis, and we count ourselves among those fans. We use Claude daily. The extended thinking capabilities, massive context windows (up to 200k tokens), and Anthropic's focus on safety and helpfulness make it genuinely one of the best AI experiences available. + +Commercial · Free tier available + +--- + +## What Claude Does Well + +- **Writing and reasoning** with thoughtful, nuanced responses and careful analysis +- **Extended thinking** that shows step-by-step reasoning for complex problems in real time +- **Large context windows** up to 200k tokens for working with large documents and codebases +- **Artifacts** for interactive outputs (code, documents, visualizations) alongside the conversation +- **Claude Code** for agentic coding directly in your terminal +- **Computer use** that lets Claude interact with desktop applications and web interfaces +- **MCP (Model Context Protocol)** which Anthropic created to standardize how AI tools connect to data sources +- **Safety-first design** through Anthropic's constitutional AI approach +- **Strong at code** with excellent code generation, review, and debugging +- **Projects** for organizing conversations with persistent context and instructions + +--- + +## What Open WebUI Does Well + +- **Any model, one interface** so you can use Claude *alongside* GPT-5.5, Gemini, Llama, and local models +- **Self-hosted** so the platform itself runs on your infrastructure +- **Knowledge & RAG** for persistent knowledge bases with advanced retrieval +- **Python extensibility** with custom tools, MCP servers, 
pipelines, and community extensions +- **Team platform** with Channels, Notes, Automations, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full sandboxed computing environment +- **Free community edition** for unlimited users on your own infrastructure + +--- + +## At a Glance + +| | Open WebUI | Claude | +| :--- | :--- | :--- | +| **Models** | Any model from any provider | Anthropic's Claude model family | +| **Extended thinking** | Supported for models that offer it (including Claude via API) | Native extended thinking | +| **Context window** | Depends on the model you connect | Up to 200k tokens | +| **Knowledge & RAG** | 9 vector DBs, 5 extraction engines, hybrid search | Projects with persistent context | +| **Code execution** | Python in-browser + Open Terminal | Artifacts with interactive code | +| **Data** | Self-hosted, your infrastructure | Cloud-hosted by Anthropic | +| **Pricing** | Free community edition; Enterprise plans available | Free tier, Pro, Team, and Enterprise plans | + +--- + +## When to Use Each + +**Choose Claude if** you want the best writing and reasoning experience available, especially for long-context work, code review, or nuanced analysis. The extended thinking mode is particularly strong for complex problems. Claude Code and computer use push the boundaries of what AI can do autonomously. + +**Choose Open WebUI if** you want to use Claude alongside other models in one interface, build persistent knowledge bases, or need team collaboration features. Open WebUI also supports Claude's extended thinking via the API. + +**Use both.** Connect Open WebUI to the Anthropic API and use Claude for deep analysis alongside GPT-5.5 for other tasks and local models for privacy-sensitive work. Use claude.ai directly when you want Artifacts, Projects, or computer use. + +--- + +## Use Claude Through Open WebUI + +Claude models are available through Open WebUI via the Anthropic API. 
Many Open WebUI users run Claude as their primary model, getting Claude's reasoning alongside Open WebUI's knowledge bases, tools, and team features. + +**How to connect:** + +1. Get an API key from [console.anthropic.com](https://console.anthropic.com/) +2. In Open WebUI, go to **Admin → Settings → Connections** +3. Add a new connection with your Anthropic API key and the base URL `https://api.anthropic.com/v1` +4. Claude models will appear in your model selector + +You can use Claude for complex analysis and writing while routing simpler tasks to local models via Ollama, all in the same interface. + +--- + +*Claude is exceptional AI. Open WebUI is one way to use those same models on your own infrastructure, alongside other models you rely on.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + +--- + +## Frequently Asked Questions + +**Can I self-host Claude?** +Not Claude itself, but you can use Claude models through Open WebUI via the Anthropic API. Open WebUI itself runs on your infrastructure, though API calls still go to Anthropic. + +**Can I use Claude in Open WebUI?** +Yes. Add your Anthropic API key in Settings and Claude models appear in the model selector. + +**Can I use Claude and ChatGPT together?** +Yes, Open WebUI supports connecting to multiple providers at once. + +**Does Open WebUI support Claude's extended thinking?** +Yes. Extended thinking is supported for models that offer it, including Claude via the Anthropic API. + + +--- + +**Related:** [Open WebUI & ChatGPT](/alternatives/chatgpt) · [Open WebUI & Gemini](/alternatives/gemini) · [Open WebUI & Ollama](/alternatives/ollama) diff --git a/docs/alternatives/dify.mdx b/docs/alternatives/dify.mdx new file mode 100644 index 000000000..4945d914e --- /dev/null +++ b/docs/alternatives/dify.mdx @@ -0,0 +1,114 @@ +--- +sidebar_position: 20 +title: "Open WebUI vs Dify" +sidebar_label: "Open WebUI & Dify" +description: "Open WebUI vs Dify compared. 
An AI chat platform and a visual workflow builder for different use cases." +keywords: ["Open WebUI vs Dify", "Dify alternative", "AI workflow builder", "AI workflow builder comparison", "open webui alternative", "open webui comparison"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & Dify + +*Last updated: May 2026* + +[Dify](https://dify.ai/) by LangGenius takes a fundamentally different approach to AI tooling. Where most tools on this page focus on conversation, Dify focuses on *building*: visual workflow design, agent orchestration, prompt engineering, and deploying AI-powered applications. If you think of AI as a platform for building things rather than just chatting, Dify is worth a serious look. + +[GitHub](https://github.com/langgenius/dify) · [Source Available (modified Apache 2.0)](https://github.com/langgenius/dify/blob/main/LICENSE) + +--- + +## What Dify Does Well + +- **Visual workflow builder** with drag-and-drop interface for designing complex AI pipelines and logic +- **Agent framework** for building autonomous agents that reason, use tools, and take actions +- **Prompt engineering IDE** for crafting, versioning, testing, and comparing prompts in a dedicated environment +- **Workflow marketplace** for sharing and importing community-built workflows and templates +- **Model routing** with smart routing across multiple providers for cost and capability optimization +- **RAG pipeline** with document ingestion, processing, and retrieval built in +- **Batch processing** for running prompts and workflows against large datasets +- **Annotation and feedback** for collecting human feedback to improve AI outputs over time +- **Observability** including integrated monitoring, logging, and cost tracking for production use +- **Backend-as-a-Service** for deploying AI apps as APIs instantly +- **Embeddable widget** for adding AI chat to any website or application +- **Strong community** with a large and active contributor and user base + 
+--- + +## What Open WebUI Does Well + +- **Conversational AI platform** with Chat, Notes, Channels, Automations, voice/video calls, and more +- **Any model, any provider** including Ollama, OpenAI, Anthropic, Google, Azure, and Bedrock in one interface +- **Knowledge & RAG** with 9 vector databases, 5 extraction engines, and hybrid search with reranking +- **Python extensibility** with custom tools, MCP servers, pipelines, and community extensions +- **Team collaboration** including Channels, model agents, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full sandboxed computing environment for code execution +- **Simpler deployment** with a single Docker container to get started + +--- + +## At a Glance + +| | Open WebUI | Dify | +| :--- | :--- | :--- | +| **Primary focus** | AI chat platform with knowledge, tools, and team features | AI application builder with visual workflows | +| **Approach** | Conversation-first | Build-first | +| **Workflow building** | Python tools and pipelines | Visual drag-and-drop workflow designer | +| **RAG** | 9 vector DBs, 5 extraction engines, hybrid search | Built-in RAG pipeline | +| **Agent capabilities** | Model agents with bound tools and knowledge | Agent framework with reasoning and tool use | +| **Multi-provider** | Any OpenAI-compatible API + Ollama | Multi-provider with model routing | +| **Observability** | OpenTelemetry, analytics dashboards | Built-in monitoring, logging, and cost tracking | +| **License** | Open WebUI License | Source Available (modified Apache 2.0 with commercial restrictions) | + +--- + +## When to Use Each + +**Choose Dify if** you want to build AI-powered applications with visual workflows. The drag-and-drop builder, prompt IDE, and agent framework are designed for developers and product teams who are creating AI features, not just chatting. + +**Choose Open WebUI if** your team needs a daily AI workspace for chat, knowledge management, and collaboration. 
Open WebUI focuses on using AI rather than building AI applications. + +**Use both.** Dify exposes an OpenAI-compatible API. Connect Open WebUI to Dify's API and your Dify workflows appear as models in Open WebUI. Build in Dify, use in Open WebUI. + +--- + +## Use Them Together + +Dify exposes an OpenAI-compatible API for any workflow or app you build. You can connect Open WebUI to Dify's API endpoint to use your Dify-built AI applications as models inside Open WebUI, combining Dify's workflow orchestration with Open WebUI's chat interface, knowledge management, and team features. + +**How to connect:** + +1. In Dify, publish your app and copy the API endpoint and key +2. In Open WebUI, go to **Admin → Settings → Connections** +3. Add a new OpenAI-compatible connection with Dify's API URL and key +4. Your Dify apps will appear as models in Open WebUI + +--- + +*Dify is for building AI applications. Open WebUI is for using AI daily. The AI ecosystem needs both builders and users.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**How do Dify and Open WebUI compare?** +Dify takes a visual, workflow-first approach to building AI applications. Open WebUI leans more toward conversation and daily AI use. They come at AI from different angles, and many teams could use both. + +**Can I use Dify with Open WebUI?** +Yes. Dify exposes an OpenAI-compatible API. Connect Open WebUI to Dify's API to use your Dify workflows as models inside Open WebUI. + +**Is Dify free?** +The community edition is free to self-host. Dify is source available under a modified Apache 2.0 license. 
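Since a published Dify app speaks the standard OpenAI-compatible chat completions protocol, the connection described above can also be sanity-checked outside Open WebUI. Here is a minimal sketch of the request shape; the host, API key, and app name are hypothetical placeholders, not real Dify values — substitute your own:

```python
import json
import urllib.request

# Hypothetical values -- substitute your own Dify endpoint and app API key.
base_url = "https://dify.example.com/v1"
api_key = "app-XXXXXXXX"

# A published Dify app is addressed like any OpenAI-compatible model.
payload = {
    "model": "my-dify-app",  # hypothetical app name
    "messages": [{"role": "user", "content": "Hello from Open WebUI"}],
}

req = urllib.request.Request(
    f"{base_url}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(req) would send the request; it is omitted here
# because the endpoint above is a placeholder.
print(req.full_url)  # https://dify.example.com/v1/chat/completions
```

This is the same shape of request Open WebUI issues once the connection is saved, so if a call like this works against your endpoint, the Dify app should show up in the model selector.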
+ +--- + +**Related:** [Open WebUI & Onyx](/alternatives/onyx) · [Open WebUI & LibreChat](/alternatives/librechat) · [Open WebUI & AnythingLLM](/alternatives/anythingllm) diff --git a/docs/alternatives/gemini.mdx b/docs/alternatives/gemini.mdx new file mode 100644 index 000000000..39c21d889 --- /dev/null +++ b/docs/alternatives/gemini.mdx @@ -0,0 +1,109 @@ +--- +sidebar_position: 32 +title: "Self-Hosted Gemini Alternative" +sidebar_label: "Open WebUI & Gemini" +description: "Looking for a self-hosted Gemini alternative? Use Google AI models through Open WebUI on your own terms." +keywords: ["Open WebUI vs Gemini", "Gemini alternative", "self-hosted Gemini", "Gemini alternative self-hosted", "Google AI self-hosted", "open webui alternative", "open webui comparison"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & Gemini + +*Last updated: May 2026* + +[Gemini](https://gemini.google.com/) brings Google's AI research into a consumer product with strong multimodal capabilities (text, vision, audio, code), generous context windows, and natural integration with Google Workspace. Gemini 3.1 Pro is among the strongest models available for code and complex tasks. 
+ +Commercial · Free tier available + +--- + +## What Gemini Does Well + +- **Multimodal strength** across text, images, audio, video, and code in a single model +- **Google Workspace integration** that works naturally with Gmail, Docs, Drive, and other Google tools +- **Gems** for creating custom AI personas with specific instructions and behavior +- **NotebookLM** for turning documents into interactive study guides and audio overviews +- **Deep Research** that conducts multi-step research and produces comprehensive reports +- **Generous context windows** with Gemini 3.1 Pro handling large documents and codebases +- **Competitive API pricing** as one of the most cost-effective APIs for high-quality models +- **Code generation** where Gemini 3.1 Pro is among the strongest for code tasks +- **Google Search grounding** with responses backed by Google's search infrastructure +- **Zero setup** and available immediately through your Google account + +--- + +## What Open WebUI Does Well + +- **Any model, one interface** so you can use Gemini *alongside* Claude, GPT-5.5, Llama, and local models +- **Self-hosted** so the platform itself runs on your infrastructure +- **Knowledge & RAG** for persistent knowledge bases with advanced retrieval +- **Python extensibility** with custom tools, MCP servers, pipelines, and community extensions +- **Team platform** with Channels, Notes, Automations, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full sandboxed computing environment + +--- + +## At a Glance + +| | Open WebUI | Gemini | +| :--- | :--- | :--- | +| **Models** | Any model from any provider | Gemini model family | +| **Multimodal** | Depends on connected models | Native text, vision, audio, video, code | +| **Knowledge & RAG** | 9 vector DBs, 5 extraction engines, hybrid search | Google Search grounding, file uploads | +| **Ecosystem** | Connects to any API via MCP/OpenAPI | Deep Google Workspace integration | +| **Data** | Self-hosted, your 
infrastructure | Cloud-hosted by Google | +| **Pricing** | Free community edition; Enterprise plans available | Free tier, Advanced (Google One AI Premium) | + +--- + +## When to Use Each + +**Choose Gemini if** you live in the Google ecosystem and want AI that integrates naturally with Gmail, Docs, Drive, and Search. NotebookLM and Deep Research are standout features with no direct equivalent elsewhere. The API pricing is also among the most competitive. + +**Choose Open WebUI if** you want to use Gemini alongside other providers, need persistent knowledge bases, or want to self-host. Open WebUI connects to the Google AI API so you still get Gemini's models. + +**Use both.** Use Gemini directly for Google Workspace integration and NotebookLM. Connect Open WebUI to the Google AI API for Gemini models alongside Claude, OpenAI, and local models in one interface. + +--- + +## Use Gemini Through Open WebUI + +Gemini models are available through Open WebUI via the Google AI API. You can use Gemini's multimodal capabilities alongside other models you connect. + +**How to connect:** + +1. Get an API key from [aistudio.google.com](https://aistudio.google.com/) +2. In Open WebUI, go to **Admin → Settings → Connections** +3. Add a new connection with the base URL `https://generativelanguage.googleapis.com/v1beta/openai` and your Google AI API key +4. Gemini models will appear in your model selector + +--- + +*Gemini brings strong multimodal AI to the Google ecosystem. Open WebUI is one way to use those models alongside others, on your own infrastructure.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**Can I use Gemini in Open WebUI?** +Yes. Add your Google AI API key in Settings and Gemini models appear in the model selector. + +**Can I self-host Gemini?** +Not Gemini itself, but you can use Gemini models through Open WebUI via the Google AI API. 
Open WebUI runs on your infrastructure, though API calls still go to Google. + +**Can I use Gemini and Claude together?** +Yes, Open WebUI supports connecting to multiple providers at once. + +--- + +**Related:** [Open WebUI & ChatGPT](/alternatives/chatgpt) · [Open WebUI & Claude](/alternatives/claude) · [Open WebUI & Ollama](/alternatives/ollama) diff --git a/docs/alternatives/index.mdx b/docs/alternatives/index.mdx new file mode 100644 index 000000000..c76664062 --- /dev/null +++ b/docs/alternatives/index.mdx @@ -0,0 +1,65 @@ +--- +sidebar_position: 1500 +title: "🌍 Alternatives to Open WebUI" +description: "Looking for an Open WebUI alternative? Honest comparisons of local runners, desktop apps, enterprise platforms, and commercial AI by the Open WebUI team." +keywords: ["open webui alternatives", "open webui alternative", "best open webui alternatives", "best open webui alternative", "open webui vs", "open webui comparison", "tools like open webui", "self-hosted AI chat", "self-hosted ChatGPT alternative"] +--- + +import Head from '@docusaurus/Head'; + + + + + +# Alternatives to Open WebUI + +The AI space is full of great projects, and we're genuinely happy about that. More tools means more people get access to AI in a way that works for them. + +We get asked "what else is out there?" a lot, so we put this list together. If Open WebUI isn't quite the right fit for your use case, these are the tools we'd point you to. We've actually used them, built alongside them, or just think they do something really well. Everything listed here is free to get started with. 
+ +## How to Choose + +- **Running local models?** Start with [Ollama](/alternatives/ollama), which pairs natively with Open WebUI, or [llama.cpp](/alternatives/llama-cpp), which connects via its API +- **Want a desktop app?** [LM Studio](/alternatives/lm-studio) or [Jan](/alternatives/jan) are excellent standalone options +- **Need document Q&A?** [AnythingLLM](/alternatives/anythingllm) makes private RAG simple +- **Multi-provider chat?** [LibreChat](/alternatives/librechat) handles this well +- **Building AI workflows?** [Dify](/alternatives/dify) has a visual workflow designer +- **Just want frontier AI?** [ChatGPT](/alternatives/chatgpt), [Claude](/alternatives/claude), and [Gemini](/alternatives/gemini) are all available through Open WebUI via API + +--- + +## All Alternatives + +| Tool | What It's Great For | License | Works with Open WebUI | | +| :--- | :--- | :--- | :--- | :--- | +| **Ollama** | The most popular way to run local models | Open Source (MIT) | Native integration | [Learn more →](/alternatives/ollama) | +| **llama.cpp** | The engine that made local AI possible | Open Source (MIT) | Via API | [Learn more →](/alternatives/llama-cpp) | +| **LM Studio** | Beautiful desktop app for local model management | Proprietary (free) | Via API | [Learn more →](/alternatives/lm-studio) | +| **Jan** | Simple, privacy-first local AI desktop app | Open Source (Apache 2.0) | Via API | [Learn more →](/alternatives/jan) | +| **AnythingLLM** | Private document Q&A done right | Open Source (MIT) | | [Learn more →](/alternatives/anythingllm) | +| **LibreChat** | Solid self-hosted multi-provider chat | Open Source (MIT) | | [Learn more →](/alternatives/librechat) | +| **Msty** | Refined desktop hub for local and cloud models | Proprietary (free tier) | | [Learn more →](/alternatives/msty) | +| **Onyx** | Enterprise search with 40+ connectors | Source Available (MIT core + Enterprise License for ee/) | | [Learn more →](/alternatives/onyx) | +| **Dify** | Visual workflow builder for LLM applications
| Source Available (modified Apache 2.0) | Via API | [Learn more →](/alternatives/dify) | +| **ChatGPT / OpenAI** | The one that started it all, we use it daily | Commercial (free tier) | Via OpenAI API | [Learn more →](/alternatives/chatgpt) | +| **Claude / Anthropic** | Exceptional writing and reasoning, we use it daily | Commercial (free tier) | Via Anthropic API | [Learn more →](/alternatives/claude) | +| **Gemini / Google** | Multimodal AI with Google ecosystem integration | Commercial (free tier) | Via Google AI API | [Learn more →](/alternatives/gemini) | + +Open WebUI itself is **source available** under the [Open WebUI License](/license). + +Every project on this list is built by people who care about making AI more accessible. That pushes all of us, Open WebUI included, to be better. We'd love to be your first choice, but we'd rather you have great options than no options. + +--- + +## Frequently Asked Questions + +**Is Open WebUI free?** +Yes. The community edition is free for unlimited users. Enterprise plans are also available. + +**Can I self-host Open WebUI?** +Yes. Open WebUI runs on your own infrastructure via Docker, Kubernetes, pip, or the desktop app. Your data stays on your hardware. + +**What models does Open WebUI support?** +Open WebUI connects to any OpenAI-compatible API, plus native Ollama integration. This includes OpenAI, Anthropic, Google, Azure, AWS Bedrock, llama.cpp, LM Studio, and hundreds of other providers. + +*Last updated: May 2026* diff --git a/docs/alternatives/jan.mdx b/docs/alternatives/jan.mdx new file mode 100644 index 000000000..6a8cb6923 --- /dev/null +++ b/docs/alternatives/jan.mdx @@ -0,0 +1,104 @@ +--- +sidebar_position: 4 +title: "Open WebUI & Jan" +sidebar_label: "Open WebUI & Jan" +description: "How Open WebUI and Jan work together. Two local-first AI tools with different strengths." 
+keywords: ["Open WebUI vs Jan", "Jan AI alternative", "local AI desktop app", "open webui alternative", "open webui comparison"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & Jan + +*Last updated: May 2026* + +[Jan](https://jan.ai/) by Homebrew (Menlo Research) is built on a clear vision: AI should run on your device, offline, completely under your control. The desktop app is clean, the model hub makes it easy to get started, and the commitment to privacy is genuine. + +[GitHub](https://github.com/janhq/jan) · [Apache 2.0 License](https://github.com/janhq/jan/blob/main/LICENSE) + +--- + +## What Jan Does Well + +- **Local-first** with everything running on your machine, 100% offline +- **Simple and focused** with a clean interface that avoids unnecessary complexity +- **Built-in model hub** for browsing and downloading models with one click +- **Cortex engine** powering the runtime with support for GGUF and TensorRT-LLM +- **Thread-based conversations** for organizing chats by topic +- **Extensions system** for adding capabilities through community plugins +- **Open source** under the Apache 2.0 license +- **Privacy by design** so your data never leaves your device +- **Lightweight** and runs well on modest hardware +- **Cross-platform** on macOS, Windows, and Linux + +--- + +## What Open WebUI Does Well + +- **Web-based platform** with multi-user access from any browser +- **Any model, any provider** using local models alongside OpenAI, Anthropic, Google, and others +- **Knowledge & RAG** with persistent knowledge bases and advanced retrieval +- **Python extensibility** with custom tools, MCP servers, pipelines, and community extensions +- **Team features** including Channels, Notes, Automations, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full computing environment for code execution +- **Scales up** from one person to thousands, Docker to Kubernetes + +--- + +## At a Glance + +| | Open WebUI | Jan | +| :--- | 
:--- | :--- | +| **Approach** | Self-hosted web platform for individuals and teams | Desktop app for private, local AI | +| **Model management** | Connects to model runners and APIs | Built-in model hub with one-click downloads | +| **Multi-provider** | Local + cloud models | Focused on local models | +| **Knowledge & RAG** | 9 vector DBs, 5 extraction engines, hybrid search | Focused on chat | +| **Multi-user** | SSO, RBAC, SCIM, teams | Personal desktop use | +| **Offline** | Fully offline with local models | 100% offline | +| **License** | Open WebUI License | Apache 2.0 | + +--- + +## When to Use Each + +**Choose Jan if** you want the simplest, most private way to run AI locally on your desktop. No servers, no configuration, no accounts. Just download, pick a model, and start chatting. + +**Choose Open WebUI if** you need web-based access, team collaboration, knowledge bases, or want to combine local models with cloud providers. Open WebUI runs as a web server that your whole team can use. + +**Use both.** Jan can serve models via its local API. Connect Open WebUI to Jan's API for web-based team access while keeping Jan as your model runner. + +--- + +## Works With Open WebUI + +Jan can serve models via a local API endpoint. If you're using Jan to manage your local models, you can connect Open WebUI to Jan's API for a web-based experience with multi-user support, knowledge bases, and tools. + +--- + +*Jan keeps local AI simple and private. Open WebUI adds a platform layer on top. Different approaches, same belief that AI should run on your hardware.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**Can I use Jan with Open WebUI?** +Yes. Jan can serve models via a local API endpoint. Connect Open WebUI to Jan's API for web-based access with team features. + +**How do Jan and Open WebUI work together?** +Jan handles running models locally on your desktop. 
Open WebUI can add web-based access, knowledge bases, and team features. You can connect Open WebUI to Jan's API and use them together. + +**Is Jan free?** +Yes. Jan is open source under the Apache 2.0 license. + +--- + +**Related:** [Open WebUI & Ollama](/alternatives/ollama) · [Open WebUI & LM Studio](/alternatives/lm-studio) · [Open WebUI & llama.cpp](/alternatives/llama-cpp) diff --git a/docs/alternatives/librechat.mdx b/docs/alternatives/librechat.mdx new file mode 100644 index 000000000..75416720a --- /dev/null +++ b/docs/alternatives/librechat.mdx @@ -0,0 +1,104 @@ +--- +sidebar_position: 10 +title: "Open WebUI vs LibreChat" +sidebar_label: "Open WebUI & LibreChat" +description: "Open WebUI vs LibreChat compared. Two respected self-hosted AI chat platforms with different strengths." +keywords: ["Open WebUI vs LibreChat", "LibreChat alternative", "self-hosted AI chat", "LibreChat comparison", "best self-hosted AI", "open webui alternative", "open webui comparison"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & LibreChat + +*Last updated: May 2026* + +[LibreChat](https://www.librechat.ai/) is one of the projects we genuinely respect in this space. It offers a multi-provider chat experience with strong authentication support, side-by-side model comparison, and a focused feature set that does the fundamentals well. The project is MIT-licensed, actively maintained, and Danny and the community behind it have built something solid. 
+ +[GitHub](https://github.com/danny-avila/LibreChat) · [MIT License](https://github.com/danny-avila/LibreChat/blob/main/LICENSE) + +--- + +## What LibreChat Does Well + +- **Multi-provider chat** with a unified interface for OpenAI, Anthropic, Google, Azure, Ollama, and any OpenAI-compatible API +- **Model comparison** with side-by-side responses from different models in a single conversation +- **Presets system** for saving and quickly switching between model configurations and system prompts +- **Artifacts** for rendering code outputs, documents, and visualizations inline +- **Authentication** including LDAP, SSO, and social login support +- **Built-in code interpreter** for supported models +- **Prompt caching** for reducing API costs on repeated interactions +- **Focused scope** that does the chat interface well without overcomplicating things +- **Active development** with a responsive maintainer and engaged community +- **MIT licensed** + +--- + +## What Open WebUI Does Well + +- **Platform beyond chat** including Notes, Channels, Automations, Open Terminal, voice/video calls, image generation, and calendar +- **Knowledge & RAG** with 9 vector databases, 5 extraction engines, hybrid search with reranking, and agentic retrieval +- **Python extensibility** with custom tools, MCP servers, pipelines, OpenAPI integration, and a community marketplace +- **Model agents** that wrap any model with custom instructions, tools, knowledge, and parameters +- **Enterprise features** including RBAC, SSO/OIDC/LDAP, SCIM 2.0, analytics dashboards, and evaluation arena +- **Flexible deployment** via Docker, Kubernetes, pip, or desktop app, with horizontal scaling and OpenTelemetry + +--- + +## At a Glance + +| | Open WebUI | LibreChat | +| :--- | :--- | :--- | +| **Focus** | Full AI platform with knowledge, tools, and team features | Multi-provider AI chat interface | +| **Multi-provider** | Ollama, OpenAI, Anthropic, Google, Azure, Bedrock, and more | OpenAI, Anthropic, 
Google, Azure, Ollama, and more | +| **Model comparison** | Multi-model chats | Side-by-side comparison | +| **Knowledge & RAG** | 9 vector DBs, 5 extraction engines, hybrid search, agentic retrieval | File attachment support | +| **Extensibility** | Python tools, MCP, OpenAPI, pipelines | Plugin system with presets | +| **Code execution** | Python in-browser + Open Terminal | Built-in code interpreter | +| **Team collaboration** | Channels, Notes, RBAC, SSO, SCIM | Multi-user with auth | +| **License** | Open WebUI License | MIT | + +--- + +## When to Use Each + +**Choose LibreChat if** you want a clean, focused multi-provider chat interface with strong model comparison features. The presets system makes it easy to switch between configurations, and the MIT license gives you maximum flexibility. + +**Choose Open WebUI if** you need a broader platform with knowledge bases, team collaboration tools, Python extensibility, or enterprise features like SCIM and analytics. + +**Run both.** They connect to the same backends. Some teams use LibreChat for quick individual chats and Open WebUI for collaborative work with knowledge bases and tools. + +--- + +## Use Them Together + +Both projects connect to the same backends (Ollama, OpenAI, etc.), so you can run both side by side. Some teams use LibreChat for quick individual chats and Open WebUI for team collaboration and knowledge work. + +--- + +*Two actively maintained projects making self-hosted AI accessible. Different strengths, same ecosystem.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**How do LibreChat and Open WebUI compare?** +LibreChat does the multi-provider chat interface really well. Open WebUI also includes knowledge bases, team collaboration, extensibility, and enterprise features. Different scope, both worth looking at. + +**Is LibreChat free?** +Yes. LibreChat is MIT licensed and free to self-host. 
+ +**Can I use both LibreChat and Open WebUI?** +Yes. Both connect to the same backends (Ollama, OpenAI, etc.), so you can run both side by side. + +--- + +**Related:** [Open WebUI & AnythingLLM](/alternatives/anythingllm) · [Open WebUI & Msty](/alternatives/msty) · [Open WebUI & Ollama](/alternatives/ollama) diff --git a/docs/alternatives/llama-cpp.mdx b/docs/alternatives/llama-cpp.mdx new file mode 100644 index 000000000..6c53448b3 --- /dev/null +++ b/docs/alternatives/llama-cpp.mdx @@ -0,0 +1,101 @@ +--- +sidebar_position: 2 +title: "Open WebUI & llama.cpp" +sidebar_label: "Open WebUI & llama.cpp" +description: "How to connect llama-server to Open WebUI. Integration guide for two essential local AI tools." +keywords: ["Open WebUI vs llama.cpp", "llama.cpp frontend", "llama-server alternative", "llama.cpp web UI", "llama-server web interface", "open webui alternative", "open webui comparison"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & llama.cpp + +*Last updated: May 2026* + +[llama.cpp](https://github.com/ggml-org/llama.cpp) by Georgi Gerganov is one of the most important projects in the AI ecosystem, and we mean that. Without llama.cpp, the local AI movement as we know it wouldn't exist. It proved that you could run serious models on consumer hardware, introduced the GGUF format that became the industry standard, and inspired an entire generation of tools. And with `llama-server`, it's not just an engine anymore: it has its own built-in web interface and OpenAI-compatible API ready to go. 
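As a quick sketch of what that OpenAI-compatible API means in practice, the snippet below builds the JSON body a client would POST to `llama-server`'s `/v1/chat/completions` endpoint. The port and model name are placeholders for your own setup, and nothing is actually sent here:

```python
import json

# OpenAI-style chat request accepted by llama-server's /v1/chat/completions.
# "your-model" is a placeholder; llama-server may ignore it when only one
# model is loaded.
payload = {
    "model": "your-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}

body = json.dumps(payload)
# A client would POST `body` to http://localhost:8081/v1/chat/completions
print(body)
```

Because the shape matches OpenAI's API, the same request body works against any compatible server, which is why so many tools can sit on top of `llama-server` without special-casing it.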
+ +[GitHub](https://github.com/ggml-org/llama.cpp) · [MIT License](https://github.com/ggml-org/llama.cpp/blob/main/LICENSE) + +--- + +## What llama.cpp Does Well + +- **State-of-the-art inference performance** on consumer hardware, consistently pushing what's possible +- **Built-in web interface** via `llama-server`, ready to use out of the box +- **Broad hardware support** including CPU, CUDA, Metal, Vulkan, and SYCL +- **GGUF format** that became the quantized model standard for the entire industry +- **Quantization options** from Q2 to Q8 with multiple strategies for different quality/speed tradeoffs +- **Speculative decoding** for faster generation using draft models +- **Flash Attention** and other advanced inference optimizations +- **Grammar-constrained generation** for structured outputs (JSON, code, etc.) +- **OpenAI-compatible API** via `llama-server` so any tool can connect to it +- **Multi-model router mode** for serving multiple models from one endpoint +- **One of the most actively developed projects in AI** with a pace of commits that's hard to match +- **MIT licensed** and genuinely community-driven + +--- + +## What Open WebUI Does Well + +- **Rich web platform** with full chat, conversations, history, organization, and search +- **Knowledge & RAG** with 9 vector databases, 5 extraction engines, and hybrid search with reranking +- **Python extensibility** including custom tools, MCP servers, pipelines, and community extensions +- **Multi-provider support** to use llama.cpp models alongside OpenAI, Anthropic, Google, and others +- **Team platform** with Channels, Notes, Automations, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full computing environment for code execution +- **Multi-user support** from one person to thousands + +--- + +## When to Use Each + +**Use llama.cpp directly if** you want maximum control over inference. 
It gives you fine-grained tuning of quantization, context sizes, batch processing, and hardware utilization that no wrapper can match. The built-in web UI works well for solo use. + +**Add Open WebUI if** you want a richer interface, knowledge bases, team access, or the ability to connect other providers alongside llama.cpp. Open WebUI talks to `llama-server` via its OpenAI-compatible API. + +**Use both.** llama.cpp handles inference with maximum performance. Open WebUI handles the platform layer with knowledge, tools, and collaboration. + +--- + +## Use Them Together + +llama.cpp's `llama-server` exposes an OpenAI-compatible API, which means Open WebUI can connect to it directly. Use llama.cpp for high-performance inference, Open WebUI for the platform layer. + +```bash +# Start llama-server +llama-server -m your-model.gguf --port 8081 + +# Point Open WebUI at it +# In Admin → Settings → Connections, add: +# URL: http://localhost:8081/v1 +``` + +--- + +*llama.cpp made local AI possible. Open WebUI builds a platform layer on top. They work well together.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**Can I connect llama-server to Open WebUI?** +Yes. llama-server exposes an OpenAI-compatible API. Add `http://localhost:8081/v1` as a connection in Open WebUI and your models appear automatically. + +**Does Open WebUI support llama-server's multi-model routing?** +Yes. If you're running llama-server in router mode with multiple models, Open WebUI will detect and list all available models through the API. + +**Is llama.cpp free?** +Yes. llama.cpp is MIT licensed and free for any use. 
+ +--- + +**Related:** [Open WebUI & Ollama](/alternatives/ollama) · [Open WebUI & LM Studio](/alternatives/lm-studio) · [Open WebUI & Jan](/alternatives/jan) diff --git a/docs/alternatives/lm-studio.mdx b/docs/alternatives/lm-studio.mdx new file mode 100644 index 000000000..56aa8ba10 --- /dev/null +++ b/docs/alternatives/lm-studio.mdx @@ -0,0 +1,112 @@ +--- +sidebar_position: 3 +title: "Open WebUI & LM Studio" +sidebar_label: "Open WebUI & LM Studio" +description: "How Open WebUI and LM Studio work together. Two approaches to local AI that pair well." +keywords: ["Open WebUI vs LM Studio", "LM Studio alternative", "local AI interface", "open webui alternative", "open webui comparison"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & LM Studio + +*Last updated: May 2026* + +[LM Studio](https://lmstudio.ai/) has nailed the desktop experience for local AI. The built-in model browser makes discovering and downloading models from Hugging Face effortless, the inference performance is solid, and the UI is clean and intuitive. For anyone who wants to run local models without touching a terminal, LM Studio is a strong option. 
+ +Proprietary · Free for personal and commercial use + +--- + +## What LM Studio Does Well + +- **Model browser** for discovering, downloading, and managing models from Hugging Face with a GUI +- **Model search and filtering** to find exactly the right model by size, architecture, or quantization +- **Quantization preview** so you can see how different quantization levels affect model quality before downloading +- **Strong performance** with solid hardware utilization (Metal, CUDA) for fast local inference +- **OpenAI-compatible API server** that serves your local models to any application that speaks the OpenAI API +- **MCP support** for connecting to Model Context Protocol servers for extended tool use +- **RAG capabilities** with built-in document-based chat for local files +- **Prompt templates** with a library of pre-configured prompts for common tasks +- **Free for everyone** for both personal and commercial use +- **Cross-platform** on macOS, Windows, and Linux +- **Developer-friendly** local API server for integrating local models into your projects + +--- + +## What Open WebUI Does Well + +- **Full web platform** with multi-user chat, Notes, Channels, Automations, Open Terminal, and more +- **Any provider** so you can use LM Studio's local models alongside OpenAI, Anthropic, Google, and others +- **Deep RAG & Knowledge** with 9 vector databases, 5 extraction engines, and hybrid search with reranking +- **Python extensibility** with custom tools, pipelines, MCP, and OpenAPI integration +- **Team features** including RBAC, SSO/OIDC/LDAP, SCIM 2.0, analytics, and evaluation arena +- **Scales from one to thousands** via Docker, Kubernetes, and pip + +--- + +## At a Glance + +| | Open WebUI | LM Studio | +| :--- | :--- | :--- | +| **Approach** | Self-hosted web platform for teams and individuals | Desktop app for local model management and chat | +| **Model management** | Connects to model runners (Ollama, etc.) 
| Built-in model browser with Hugging Face integration | +| **Multi-provider** | Local + cloud models in one interface | Focused on local models | +| **Knowledge & RAG** | 9 vector DBs, 5 extraction engines, hybrid search | Built-in document chat | +| **Multi-user** | SSO, RBAC, SCIM, teams | Personal desktop use | +| **Extensibility** | Python tools, MCP, OpenAPI, pipelines | MCP support | +| **API server** | Full API | OpenAI-compatible local server | +| **Pricing** | Free community edition; Enterprise plans available | Free for personal and commercial use | + +--- + +## When to Use Each + +**Choose LM Studio if** you want the best desktop experience for discovering and running local models. The model browser makes it easy to explore what's available on Hugging Face, compare quantizations, and get running quickly. + +**Choose Open WebUI if** you want a web-based platform with team access, persistent knowledge bases, or the ability to use local models alongside cloud providers like OpenAI, Anthropic, and Google. + +**Use both.** LM Studio's model browser and management are excellent for finding and running models. Open WebUI can connect to LM Studio's API server to add web access, knowledge bases, and team features on top. + +--- + +## Use Them Together + +LM Studio's OpenAI-compatible API server works well as a backend for Open WebUI. You can use LM Studio to manage and serve your local models, then connect Open WebUI to LM Studio's API. + +**How to connect:** + +1. In LM Studio, start the local API server (default port 1234) +2. In Open WebUI, go to **Admin → Settings → Connections** +3. Add a new OpenAI-compatible connection with URL `http://localhost:1234/v1` +4. Your LM Studio models will appear in the model selector + +--- + +*LM Studio makes local models accessible on the desktop. Open WebUI adds a web-based platform layer. 
Both are making local AI more useful.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**Can I use LM Studio with Open WebUI?** +Yes. Start LM Studio's local API server and add `http://localhost:1234/v1` as a connection in Open WebUI. + +**How do LM Studio and Open WebUI work together?** +LM Studio handles model management and local inference on your desktop. Open WebUI can add web-based multi-user access, knowledge bases, and team features. A lot of people use LM Studio as the backend and Open WebUI as the frontend. + +**Is LM Studio free?** +Yes. LM Studio is free for personal and commercial use, though it is proprietary software. + +--- + +**Related:** [Open WebUI & Ollama](/alternatives/ollama) · [Open WebUI & llama.cpp](/alternatives/llama-cpp) · [Open WebUI & Jan](/alternatives/jan) diff --git a/docs/alternatives/msty.mdx b/docs/alternatives/msty.mdx new file mode 100644 index 000000000..1af43d44f --- /dev/null +++ b/docs/alternatives/msty.mdx @@ -0,0 +1,100 @@ +--- +sidebar_position: 13 +title: "Open WebUI vs Msty" +sidebar_label: "Open WebUI & Msty" +description: "Open WebUI vs Msty compared. A web platform and a desktop app, two approaches to AI." +keywords: ["Open WebUI vs Msty", "Msty alternative", "AI desktop app", "AI desktop app comparison", "open webui alternative", "open webui comparison"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & Msty + +*Last updated: May 2026* + +[Msty](https://msty.app/) has built a refined desktop experience for people who want one place to use both local and cloud-based models. The split-chat feature for running multiple models side-by-side to compare responses is genuinely useful, and the overall design feels thoughtful. 
+ +Proprietary · Free tier available + +--- + +## What Msty Does Well + +- **Split chat** for running multiple models side-by-side to compare responses in real time +- **Unified hub** for local models (via Ollama, llama.cpp, MLX) and cloud APIs (OpenAI, Anthropic, Google) +- **Knowledge Stacks** for uploading documents and chatting with them using built-in RAG +- **Offline mode** for fully air-gapped use with local models +- **Batch prompting** for sending the same prompt to multiple models simultaneously +- **Hardware optimization** with good performance across NVIDIA, AMD, and Apple Silicon +- **Persona & Prompt Studios** for creating reusable personas and prompt templates +- **Conversation export** in multiple formats for archiving and sharing +- **Web search integration** with real-time web search during conversations +- **Thoughtful experience** that feels refined and considered +- **Free tier** with core features available at no cost + +--- + +## What Open WebUI Does Well + +- **Web-based platform** with multi-user access from any browser +- **Any model, any provider** connecting to any OpenAI-compatible API, Ollama, or cloud provider +- **Deep RAG & Knowledge** with 9 vector databases, 5 extraction engines, and hybrid search with reranking +- **Python extensibility** with custom tools, MCP servers, pipelines, and community extensions +- **Team features** including Channels, Notes, Automations, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full computing environment for code execution +- **Source available** so you can read, audit, and modify the source code + +--- + +## At a Glance + +| | Open WebUI | Msty | +| :--- | :--- | :--- | +| **Approach** | Self-hosted web platform | Desktop app | +| **Multi-model comparison** | Multi-model chats | Split chat with side-by-side responses | +| **Multi-provider** | Any OpenAI-compatible API + Ollama | Local models + cloud APIs | +| **Knowledge & RAG** | 9 vector DBs, 5 extraction engines, hybrid 
search | Knowledge Stacks with document chat | +| **Extensibility** | Python tools, MCP, OpenAPI, pipelines | Persona & Prompt Studios | +| **Multi-user** | SSO, RBAC, SCIM, teams | Teams plan available | +| **Source availability** | Source available | Proprietary | +| **Pricing** | Free community edition; Enterprise plans available | Free tier, Aurum, and Teams plans | + +--- + +## When to Use Each + +**Choose Msty if** you want a polished desktop app for personal use, especially if you compare models frequently. The split-chat feature and batch prompting make it easy to evaluate different models side by side. + +**Choose Open WebUI if** you need a web-based platform, team access, deeper knowledge management, Python extensibility, or enterprise features. Open WebUI runs as a server that your whole team can reach from any browser. + +**Different form factors.** Msty excels as a desktop app for individual power users. Open WebUI works well as a team platform accessible from anywhere. + +--- + +*Msty brings polish to desktop AI. Open WebUI takes a web-based, team-oriented approach. Different tools, same goal of making AI more useful.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + + + +--- + +## Frequently Asked Questions + +**How do Msty and Open WebUI compare?** +Msty has a polished desktop experience with a great split-chat feature for comparing models. Open WebUI takes a web-based approach with multi-user support, knowledge bases, and extensibility. Different tools for different preferences. + +**Is Msty free?** +Msty has a free tier. Premium features require a paid Aurum plan. Teams pricing is also available. + +**Is Msty open source?** +No. Msty is proprietary software with a free tier. 
+ +--- + +**Related:** [Open WebUI & LM Studio](/alternatives/lm-studio) · [Open WebUI & LibreChat](/alternatives/librechat) · [Open WebUI & Jan](/alternatives/jan) diff --git a/docs/alternatives/ollama.mdx b/docs/alternatives/ollama.mdx new file mode 100644 index 000000000..3b33604de --- /dev/null +++ b/docs/alternatives/ollama.mdx @@ -0,0 +1,118 @@ +--- +sidebar_position: 1 +title: "Open WebUI & Ollama" +sidebar_label: "Open WebUI & Ollama" +description: "How Open WebUI and Ollama work together. The most popular local AI pairing, with setup guide and honest comparison." +keywords: ["Open WebUI vs Ollama", "Ollama alternative", "Ollama frontend", "best Ollama UI", "Ollama web interface", "open webui alternative", "open webui comparison"] +--- + +import Head from '@docusaurus/Head'; + + + + + +# Open WebUI & Ollama + +*Last updated: May 2026* + +[Ollama](https://ollama.com/) is the project that made local AI click for millions of people, and Open WebUI wouldn't be where it is without them. One command to install, one command to run, and you're chatting with a model. The desktop app includes a built-in chat interface, the CLI is fast and intuitive, and the team behind it consistently ships. We're big fans. 
+ +[GitHub](https://github.com/ollama/ollama) · [MIT License](https://github.com/ollama/ollama/blob/main/LICENSE) + +--- + +## What Ollama Does Well + +- **Dead simple** to install and run a model in seconds +- **Desktop app with built-in chat** for a complete standalone experience +- **Huge model library** with hundreds of models ready to download from the Ollama registry +- **Modelfiles** for customizing models with system prompts, parameters, and adapters +- **Great performance** optimized for consumer hardware (Metal, CUDA, CPU) with automatic GPU layer splitting +- **OpenAI-compatible API** that works as a backend for many tools and applications +- **Concurrent model loading** for running multiple models simultaneously +- **Cross-platform** on macOS, Linux, Windows, and Docker +- **Actively developed** with fast iteration and a responsive team +- **MIT licensed** + +--- + +## What Open WebUI Does Well + +- **Rich web interface** with full chat, conversations, history, search, and organization +- **Knowledge & RAG** with 9 vector DBs, 5 extraction engines, and hybrid search +- **Python extensibility** including custom tools, MCP servers, pipelines, and OpenAPI integration +- **Multi-provider support** so you can use Ollama alongside OpenAI, Anthropic, Google, and others +- **Team platform** with Channels, Notes, Automations, RBAC, SSO/OIDC/LDAP, and SCIM 2.0 +- **Open Terminal** providing a full sandboxed computing environment for code execution +- **Model agents** with custom instructions, bound tools, and knowledge per model + +--- + +## Better Together + +Ollama and Open WebUI are the most popular pairing in the local AI ecosystem. Ollama manages and serves your models; Open WebUI adds a web-based platform with knowledge management, team features, and extensibility on top. 
+ +```bash +# The most common Open WebUI setup +ollama pull llama3 +docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \ + -v open-webui:/app/backend/data --name open-webui \ + ghcr.io/open-webui/open-webui:main +``` + +Open WebUI auto-detects Ollama when running on the same machine. All your Ollama models show up in the model selector immediately, no configuration needed. + +--- + +## When to Use Each + +**Use Ollama if** you want the fastest path to running a model locally. The CLI and desktop app work great on their own for quick interactions, scripting, and development. + +**Add Open WebUI if** you want a web-based interface with knowledge bases, team features, persistent conversations, or the ability to connect cloud providers alongside your local models. + +**Most people use both.** Ollama handles the model layer. Open WebUI handles the platform layer. They auto-detect each other and just work. + +--- + +## Other Great Ollama Frontends + +Ollama's OpenAI-compatible API means it works with many tools. If Open WebUI isn't your style, other projects that pair well with Ollama include: + +- [**LibreChat**](/alternatives/librechat) for multi-provider chat with model comparison +- [**AnythingLLM**](/alternatives/anythingllm) for workspace-based document Q&A + +--- + +*Ollama made local AI simple. Open WebUI builds on that foundation. Together, they've helped millions of people run AI on their own hardware.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + +--- + +## Frequently Asked Questions + +**Can I use Ollama with Open WebUI?** +Yes. Open WebUI has native Ollama integration and auto-detects it when running on the same machine. No configuration needed. + +**Is Ollama free?** +Yes. Ollama is MIT licensed and free for personal and commercial use. + +**How do Ollama and Open WebUI work together?** +Ollama handles running and managing models. 
Open WebUI can serve as the web interface and also has things like knowledge bases, team features, and extensibility. Most people use them together. + +**Do I need Ollama to use Open WebUI?** +No. Open WebUI works with any OpenAI-compatible API, including llama.cpp, LM Studio, OpenAI, Anthropic, Google, and more. Ollama is a popular option, but not required. + +--- + +**Related:** [Open WebUI & llama.cpp](/alternatives/llama-cpp) · [Open WebUI & LM Studio](/alternatives/lm-studio) · [Open WebUI & Jan](/alternatives/jan) diff --git a/docs/alternatives/onyx.mdx b/docs/alternatives/onyx.mdx new file mode 100644 index 000000000..86c141542 --- /dev/null +++ b/docs/alternatives/onyx.mdx @@ -0,0 +1,100 @@ +--- +sidebar_position: 15 +title: "Open WebUI vs Onyx" +sidebar_label: "Open WebUI & Onyx" +description: "Open WebUI vs Onyx compared. A general-purpose AI platform and an enterprise search tool." +keywords: ["Open WebUI vs Onyx", "Onyx alternative", "Danswer alternative", "Onyx vs Open WebUI", "enterprise AI platform", "open webui alternative", "open webui comparison"] +--- + +import Head from '@docusaurus/Head'; + + + + + + +# Open WebUI & Onyx + +*Last updated: May 2026* + +[Onyx](https://onyx.app/) (formerly Danswer) focuses on a specific and important problem: connecting AI to your organization's internal knowledge across Slack, Google Drive, Confluence, Jira, GitHub, and dozens of other tools, with permission-aware retrieval. If your team's knowledge is scattered across 40+ tools and you need AI to search across all of them while respecting access controls, that's Onyx's sweet spot. 
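To make "permission-aware retrieval" concrete, here is a toy Python sketch of the idea: every indexed document carries the access list synced from its source system, and search results are filtered against the querying user's groups. This illustrates the concept only, not Onyx's actual implementation; all names and data are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    title: str
    source: str
    allowed_groups: set = field(default_factory=set)  # ACL synced from the source system

# Hypothetical synced corpus: each doc keeps the permissions it had at the source.
corpus = [
    Doc("Q3 roadmap", "confluence", {"eng", "pm"}),
    Doc("Payroll runbook", "drive", {"finance"}),
    Doc("Incident postmortem", "jira", {"eng"}),
]

def search(query: str, user_groups: set) -> list:
    """Return only documents the user could already see in the source tool."""
    hits = [d for d in corpus if query.lower() in d.title.lower()]
    return [d for d in hits if d.allowed_groups & user_groups]

# An engineer finds engineering docs, but never the finance runbook.
print([d.title for d in search("runbook", {"eng"})])   # []
print([d.title for d in search("roadmap", {"eng"})])   # ['Q3 roadmap']
```

The key design point is that filtering happens at retrieval time against permissions mirrored from each connector, so a search can span every tool without leaking documents a user couldn't open in the original system.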
+ +[GitHub](https://github.com/onyx-dot-app/onyx) · [Source Available](https://github.com/onyx-dot-app/onyx/blob/main/LICENSE) (MIT core + Onyx Enterprise License for `ee/` directories) · [Self-Host Terms](https://onyx.app/legal/self-host) + +--- + +## What Onyx Does Well + +- **40+ enterprise connectors** with native integrations for Slack, Google Drive, Confluence, Jira, GitHub, Notion, and more +- **Automatic syncing** that keeps connected sources up to date without manual re-ingestion +- **Permission-aware retrieval** that respects source system access controls when returning search results +- **Enterprise search** purpose-built for searching across your organization's internal knowledge +- **Multi-surface access** via web app, Slack bot, Discord bot, Chrome extension, and CLI +- **Managed cloud option** for teams that don't want to self-host +- **Custom agents with actions** for building AI assistants that can take actions across connected tools +- **Active development** with frequent releases and community responsiveness + +--- + +## What Open WebUI Does Well + +- **Full AI platform** with Chat, Notes, Channels, Automations, Open Terminal, voice/video calls, and image generation +- **Deploy anywhere** on your own infrastructure, fully air-gapped if needed +- **Free community edition** with unlimited users, SSO/OIDC/LDAP, RBAC, and SCIM 2.0 included +- **Any model, any provider** including Ollama, OpenAI, Anthropic, Google, Azure, Bedrock, and any OpenAI-compatible API +- **Knowledge & RAG** with 9 vector databases, 5 extraction engines, and hybrid BM25 + vector search with cross-encoder reranking +- **Python extensibility** with custom tools, MCP servers, OpenAPI integration, pipelines, and a community marketplace + +--- + +## At a Glance + +| | Open WebUI | Onyx | +| :--- | :--- | :--- | +| **Primary focus** | General-purpose AI platform | Enterprise search and knowledge discovery | +| **Knowledge approach** | Document upload, knowledge bases, 9 vector DBs, 
5 extraction engines | 40+ enterprise connectors with automatic syncing | +| **Permission handling** | RBAC, groups, per-resource access controls | Permission-aware retrieval from source systems | +| **Multi-provider** | Any OpenAI-compatible API + Ollama | Multiple LLM provider support | +| **Extensibility** | Python tools, MCP, OpenAPI, pipelines | Focused on connector and search ecosystem | +| **Collaboration** | Channels, Notes, shared conversations | AI-powered search and Q&A | +| **License** | Open WebUI License | Source Available (MIT core + Onyx Enterprise License for `ee/`); see [self-host terms](https://onyx.app/legal/self-host) | +| **Pricing** | Free community edition; Enterprise plans available | Free (self-hosted community), Cloud, and Enterprise plans | + +--- + +## When to Use Each + +**Choose Onyx if** you want to connect AI to your organization's existing tools. If your team's knowledge lives in Slack, Confluence, Jira, Google Drive, and GitHub, Onyx's 40+ connectors with automatic syncing and permission-aware retrieval were built for that. + +**Choose Open WebUI if** you need a general-purpose AI platform with chat, knowledge bases, team collaboration, Python extensibility, and support for any model provider. Open WebUI includes SSO, RBAC, and SCIM in the free community edition. + +**They solve different problems.** Onyx excels at enterprise search and connecting AI to your existing tools. Open WebUI excels as a general AI platform. Many organizations could use both. + +--- + +*Onyx connects AI to your enterprise knowledge. Open WebUI comes at it from a more general angle. They solve different problems, and many organizations could benefit from both.* + +**Ready to try Open WebUI?** [Get started →](/getting-started) + +--- + +## Frequently Asked Questions + +**How do Onyx and Open WebUI compare?** +Onyx leans into enterprise search with 40+ connectors and permission-aware retrieval. 
Open WebUI comes at it from a more general angle with chat, knowledge bases, team collaboration, and extensibility. Different tools for different needs. + +**Is Onyx open source?** +Onyx's core is MIT licensed. Enterprise features are under a separate Onyx Enterprise License. Additional [self-host terms](https://onyx.app/legal/self-host) may also apply. + +**Is Onyx free?** +The community edition is free to self-host. Additional [self-host terms](https://onyx.app/legal/self-host) may apply. Onyx Cloud and Enterprise plans are available for teams that want managed hosting or additional features. + +**Can I use both Onyx and Open WebUI?** +Yes. They solve different problems. Onyx connects AI to your existing enterprise tools. Open WebUI also has knowledge management, team features, and extensibility built in. + +**Which is better for enterprise AI deployment?** +It depends on your needs. If your priority is searching across internal tools with permission-aware retrieval, Onyx was built for that. If you need more of a general-purpose AI platform that you can deploy on your own infrastructure, with SSO, RBAC, and SCIM included in the free edition, that is more where Open WebUI fits. + +--- + +**Related:** [Open WebUI & Dify](/alternatives/dify) · [Open WebUI & AnythingLLM](/alternatives/anythingllm) · [Open WebUI & LibreChat](/alternatives/librechat) diff --git a/docs/features/authentication-access/rbac/permissions.md b/docs/features/authentication-access/rbac/permissions.md index 3c42803cd..6c9f07820 100644 --- a/docs/features/authentication-access/rbac/permissions.md +++ b/docs/features/authentication-access/rbac/permissions.md @@ -78,6 +78,7 @@ Controls what users can share with the community or make public. | **Share Notes** | **(Parent)** Ability to share Notes. | | **Public Notes** | *(Requires Share Notes)* Ability to make Notes public. 
| | **Chats Public Sharing** | *(Requires Share Chat)* Ability to make a chat share link reachable by anyone (including unauthenticated visitors). When disabled, users can still share chats with specific users or groups via the access-control selector, but the "Public" option is hidden for non-admins. Admins are always exempt. | +| **Calendars Public Sharing** | *(Requires Features > Calendar)* Ability to make a calendar publicly readable or writable by every user with the Calendar feature. When disabled, wildcard access grants are stripped from calendar create/update payloads — owners can still share with specific users or groups. Admins are always exempt. | ### 3. Chat Permissions Controls the features available to the user inside the chat interface. diff --git a/docs/features/calendar/index.md b/docs/features/calendar/index.md index 747c2e6f3..bd9d0bbe3 100644 --- a/docs/features/calendar/index.md +++ b/docs/features/calendar/index.md @@ -195,6 +195,10 @@ Calendars support the same access grant system used by knowledge bases, models, Only the calendar **owner** (or an admin) can manage access grants and delete the calendar itself. +:::info Public sharing is permission-gated +Wildcard access grants (calendar readable or writable by every user with the Calendar feature) are gated by the **Calendars Public Sharing** permission. When disabled for a non-admin owner, public principals are silently stripped from the access grant list on calendar create/update — per-user and per-group grants remain unaffected. Admins always retain the ability to share publicly. Configurable per-group in **Admin Panel → Users → Groups → Permissions** or via [`USER_PERMISSIONS_CALENDAR_ALLOW_PUBLIC_SHARING`](/reference/env-configuration#user_permissions_calendar_allow_public_sharing). 
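To opt in instance-wide, set the environment variable before starting Open WebUI (a `.env`-style sketch; how you inject it depends on your deployment):

```bash
# Allow non-admin calendar owners to attach wildcard (public) access grants.
# Defaults to False, which strips public principals for non-admin owners.
USER_PERMISSIONS_CALENDAR_ALLOW_PUBLIC_SHARING=true
```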
+::: + --- ## Attendees and RSVP @@ -243,6 +247,7 @@ The global alert polling window is configurable via [`CALENDAR_ALERT_LOOKAHEAD_M |----------|---------|-------------| | [`ENABLE_CALENDAR`](/reference/env-configuration#enable_calendar) | `True` | Enable or disable the Calendar feature globally | | [`USER_PERMISSIONS_FEATURES_CALENDAR`](/reference/env-configuration#user_permissions_features_calendar) | `True` | Enable or disable Calendar access for non-admin users by default | +| [`USER_PERMISSIONS_CALENDAR_ALLOW_PUBLIC_SHARING`](/reference/env-configuration#user_permissions_calendar_allow_public_sharing) | `False` | Allow non-admin owners to attach wildcard read/write access grants to a calendar | | [`SCHEDULER_POLL_INTERVAL`](/reference/env-configuration#scheduler_poll_interval) | `10` | Seconds between scheduler ticks (shared with automations) | | [`CALENDAR_ALERT_LOOKAHEAD_MINUTES`](/reference/env-configuration#calendar_alert_lookahead_minutes) | `10` | Default alert window in minutes for upcoming events | diff --git a/docs/features/channels/index.md b/docs/features/channels/index.md index f206a96ee..0287854ee 100644 --- a/docs/features/channels/index.md +++ b/docs/features/channels/index.md @@ -70,6 +70,20 @@ Channels are **passive by default**. AI doesn't jump into every conversation. Wh This means your team can discuss freely without AI interrupting, and call on exactly the right model when it's needed. +### Full chat-completion pipeline + +Mentioning a model in a channel runs through the same chat-completion pipeline as a standard chat. 
The reply is **streamed in real time** as the model generates it, and the model has access to the full set of capabilities its configuration grants: + +| Capability | What it enables in a channel | +|------------|------------------------------| +| **Native and default function calling** | Tool calls resolve and execute mid-message | +| **Built-in tools** | Web search, image generation, code interpreter, calendar | +| **User tools and MCP tools** | Whatever the model is configured to call, it can call | +| **Filters** | Inlet/outlet/stream filters apply just like in chats | +| **Knowledge (RAG)** | Knowledge bases attached to the model are queried and injected | + +In other words, a channel-summoned model is a fully-equipped agent — not a one-shot completion. + ### Tagging people and linking channels Use `@username` to notify teammates. Use `#channel-name` to create clickable cross-references between conversations. diff --git a/docs/features/open-terminal/advanced/multi-user.md b/docs/features/open-terminal/advanced/multi-user.md index d5edc544f..b70ce4939 100644 --- a/docs/features/open-terminal/advanced/multi-user.md +++ b/docs/features/open-terminal/advanced/multi-user.md @@ -15,6 +15,12 @@ When multiple people on your team need terminal access through Open WebUI, you h | **Best for** | Small teams you trust | Production, larger teams, untrusted users | | **Included in** | Open Terminal (free) | [Terminals](https://github.com/open-webui/terminals) (enterprise) | +:::danger Required for multi-user Open WebUI deployments +If your Open WebUI instance has **more than one user account** and the same terminal-server connection is shared across users, you **must** use one of the two isolation modes below. 
A single Open Terminal container without `OPEN_TERMINAL_MULTI_USER=true` (or without per-user containers via Terminals) places every user inside the same shell, the same filesystem, and the same network namespace — which means any user can read, modify, or replace any other user's files, run commands as the shared user, and bind shared ports. This is not a supported configuration for multi-user Open WebUI. + +For deployments with **untrusted users** (open signup, public-facing portals, mixed-tenant setups), Option 1 is also insufficient on its own — file isolation does not extend to network namespace, so users can still reach each other through bound ports on the shared container. Use **Option 2 (per-user containers via Terminals)** for these deployments, or layer `TERMINAL_PROXY_HEADERS` on top of Option 1 to restrict what proxied responses can do in the user's browser. +::: + --- ## Option 1: Built-in multi-user mode @@ -55,7 +61,9 @@ Each user sees only their own files in the file browser. | Network access | | ✔ | :::warning Good for small teams, not production -This mode gives everyone their own workspace, but they're all running inside the same container. If one user runs a script that uses too much memory, it can slow things down for everyone. Use this for small, trusted groups — not for wide-open deployments. +This mode gives everyone their own workspace, but they're all running inside the same container. Resource pressure (memory, CPU) is shared, **and so is the network namespace** — a user who binds a port (e.g. `python -m http.server 8080`) is reachable from any other user's terminal-server proxy URL on that port. Per-user file isolation does **not** extend to per-user network isolation in this mode. + +Use this for small, trusted groups — not for wide-open deployments. 
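To see why, here is what the shared network namespace looks like from two different users' sessions (a hypothetical sketch; port `8080` and the served directory are illustrative):

```shell
# User A's session: start a throwaway web server in their workspace.
python3 -m http.server 8080 --bind 127.0.0.1 &

# User B's session, same container: user A's server (and whatever
# directory it is serving) is reachable over the shared loopback.
curl -s http://127.0.0.1:8080/
```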
For untrusted multi-user deployments, use **Option 2 (per-user containers)** below, or layer the [`TERMINAL_PROXY_HEADERS`](/reference/env-configuration#terminal_proxy_headers) configuration on top to lock proxied responses into a sandbox CSP. ::: ```mermaid diff --git a/docs/getting-started/advanced-topics/hardening.md b/docs/getting-started/advanced-topics/hardening.md index 59ac626b5..9dc6e3751 100644 --- a/docs/getting-started/advanced-topics/hardening.md +++ b/docs/getting-started/advanced-topics/hardening.md @@ -545,6 +545,12 @@ WEB_FETCH_FILTER_LIST=!internal.yourcompany.com,!10.0.0.0/8 Prefix entries with `!` to block them. +Outbound HTTP requests also do not follow `3xx` redirects by default. Without this gate, an attacker-supplied URL can pass the allowlist check on the originally-submitted host and then `302`-redirect to an internal address (RFC 1918, `127.0.0.1`, the cloud-metadata IP) that is reached without re-validation. The default closes that bypass across the RAG web loader, image loading, OAuth pre-flight, code-interpreter login, and tool-server execution. Keep the default unless you have a specific need (e.g. shortlink URLs) and other SSRF protections are in place: + +```bash +AIOHTTP_CLIENT_ALLOW_REDIRECTS=false +``` + ### Profile image URL forwarding The user and model profile-image endpoints can issue a `302 Found` redirect to whatever origin is stored in `profile_image_url` so that externally-hosted avatars (e.g. Gravatar via an upstream identity provider) display in the UI. That redirect causes the user's browser to make a request directly to the external origin, leaking client IP, User-Agent, and Referer headers — and an account whose `profile_image_url` was set to an attacker-controlled host can use that to deanonymize anyone who renders their avatar. @@ -557,6 +563,22 @@ ENABLE_PROFILE_IMAGE_URL_FORWARDING=false Default is `true` so existing deployments relying on external avatars keep working. 
Data URIs and same-origin/static images are unaffected by this flag — they continue to render normally. +Profile images stored as base64 `data:` URIs are also constrained to a MIME-type allowlist. The default is `image/png,image/jpeg,image/gif,image/webp`; SVG is intentionally excluded because it can carry inline `