improvement(models): derive provider colors/resellers from definitions, reorient FAQs to agent builder
Dynamic data:
- Add `color` and `isReseller` fields to ProviderDefinition interface
- Move brand colors for all 10 providers into their definitions
- Mark 6 reseller providers (Azure, Bedrock, Vertex, OpenRouter, Fireworks)
- consts.ts now derives color map from MODEL_CATALOG_PROVIDERS
- model-comparison-charts derives RESELLER_PROVIDERS from catalog
- Fix deepseek name: Deepseek → DeepSeek; remove now-redundant
PROVIDER_NAME_OVERRIDES and getProviderDisplayName from utils
- Add color/isReseller fields to CatalogProvider; clean up duplicate
providerDisplayName in searchText array
FAQs:
- Replace all 4 main-page FAQs with 5 agent-builder-oriented ones
covering model selection, context windows, pricing, tool use, and
how to use models in a Sim agent workflow
- buildProviderFaqs: add conditional tool use FAQ per provider
- buildModelFaqs: add bestFor FAQ (conditional on field presence);
improve context window answer to explain agent implications;
tighten capabilities answer wording
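The derivation the "Dynamic data" bullets describe can be sketched as follows. This is a minimal illustration, not the actual Sim source: the `ProviderDefinition` fields, `MODEL_CATALOG_PROVIDERS` name, and the derived-map pattern come from the commit message, but the exact shapes, the example providers, and the hex values here are assumptions.

```typescript
// Illustrative sketch: provider definitions carry `color` and `isReseller`,
// and downstream maps are derived from the catalog instead of being
// hand-maintained parallel literals. Field shapes and values are assumed.
interface ProviderDefinition {
  id: string
  name: string
  color: string // brand color hex (hypothetical values below)
  isReseller: boolean // true for aggregators such as OpenRouter or Bedrock
}

const MODEL_CATALOG_PROVIDERS: ProviderDefinition[] = [
  { id: 'openai', name: 'OpenAI', color: '#10a37f', isReseller: false },
  { id: 'openrouter', name: 'OpenRouter', color: '#6466e9', isReseller: true },
]

// consts.ts-style derivation: one source of truth for colors.
const PROVIDER_COLORS: Record<string, string> = Object.fromEntries(
  MODEL_CATALOG_PROVIDERS.map((p) => [p.id, p.color] as const)
)

// model-comparison-charts-style derivation of the reseller set.
const RESELLER_PROVIDERS = new Set(
  MODEL_CATALOG_PROVIDERS.filter((p) => p.isReseller).map((p) => p.id)
)
```

With this shape, adding a provider means adding one definition; the color map and reseller set update automatically, which is the point of removing the hand-written overrides.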
apps/sim/app/(landing)/models/page.tsx (13 additions, 8 deletions)
@@ -22,24 +22,29 @@ const baseUrl = getBaseUrl()

 const faqItems = [
   {
-    question: 'What is the Sim AI models directory?',
+    question: 'Which AI models are best for building agents and automated workflows?',
     answer:
-      'The Sim AI models directory is a public catalog of the language models and providers tracked inside Sim. It shows provider coverage, model IDs, pricing per one million tokens, context windows, and supported capabilities such as reasoning controls, structured outputs, and deep research.',
+      'The most important factors for agent tasks are reliable tool use (function calling), a large enough context window to track conversation history and tool outputs, and consistent instruction following. In Sim, OpenAI GPT-4.1, Anthropic Claude Sonnet, and Google Gemini 2.5 Pro are popular choices — each supports tool use, structured outputs, and context windows of 128K tokens or more. For cost-sensitive or high-throughput agents, Groq and Cerebras offer significantly faster inference at lower cost.',
   },
   {
-    question: 'Can I compare models from multiple providers in one place?',
+    question: 'What does context window size mean when running an AI agent?',
     answer:
-      'Yes. This page organizes every tracked model by provider and lets you search across providers, model names, and capabilities. You can quickly compare OpenAI, Anthropic, Google, xAI, Mistral, Groq, Cerebras, Fireworks, Bedrock, and more from a single directory.',
+      'The context window is the total number of tokens a model can process in a single call, including your system prompt, conversation history, tool call results, and any documents you pass in. For agents running multi-step tasks, context fills up quickly — each tool result and each retrieved document adds tokens. A 128K-token context window fits roughly 300 pages of text; models like Gemini 2.5 Pro support up to 1M tokens, enough to hold an entire codebase in a single pass.',
   },
   {
-    question: 'Are these model prices shown per million tokens?',
+    question: 'Are model prices shown per million tokens?',
     answer:
-      'Yes. Input, cached input, and output prices on this page are shown per one million tokens based on the provider metadata tracked in Sim.',
+      'Yes. Input, cached input, and output prices are all listed per one million tokens, matching how providers bill through their APIs. For agents that chain multiple calls, costs compound quickly — an agent completing 100 turns at 10K tokens each consumes roughly 1M tokens per session. Cached input pricing applies when a provider supports prompt caching, where a repeated prefix like a system prompt is billed at a reduced rate.',
   },
   {
-    question: 'Does Sim support providers with dynamic model catalogs too?',
+    question: 'Which AI models support tool use and function calling?',
     answer:
-      'Yes. Some providers such as OpenRouter, Fireworks, Ollama, and vLLM load their model lists dynamically at runtime. Those providers are still shown here even when their full public model list is not hard-coded into the catalog.',
+      'Tool use — also called function calling — lets an agent invoke external APIs, query databases, run code, or take any action you define. In Sim, all first-party models from OpenAI, Anthropic, Google, Mistral, Groq, Cerebras, and xAI support tool use. Look for the Tool Use capability tag on any model card in this directory to confirm support.',
+  },
+  {
+    question: 'How do I add a model to a Sim agent workflow?',
+    answer:
+      'Open any workflow in Sim, add an Agent block, and select your provider and model from the model picker inside that block. Every model listed in this directory is available in the Agent block. Swapping models takes one click and does not affect the rest of your workflow, making it straightforward to test different models on the same task without rebuilding anything.',
       question: `What ${provider.name} models are available in Sim?`,
       answer: `Sim currently tracks ${provider.modelCount} ${provider.name} model${provider.modelCount === 1 ? '' : 's'} including ${provider.models
@@ -664,10 +663,27 @@ export function buildProviderFaqs(provider: CatalogProvider): CatalogFaq[] {
         : `Context window details are not fully available for every ${provider.name} model in the public catalog.`,
     },
   ]
+
+  if (toolUseModels.length > 0) {
+    faqs.push({
+      question: `Which ${provider.name} models support tool use and function calling in Sim?`,
+      answer:
+        toolUseModels.length === provider.modelCount
+          ? `All ${provider.name} models in Sim support tool use and function calling, allowing agents to invoke external APIs, query databases, and run custom actions.`
+          : `${toolUseModels
+              .slice(0, 5)
+              .map((m) => m.displayName)
+              .join(', ')}${toolUseModels.length > 5 ? ', and others' : ''} support tool use and function calling in Sim, enabling agents to invoke external APIs and run custom actions.`,
       answer: `${model.displayName} is a ${provider.name} model available in Sim. ${model.summary}`,
@@ -679,17 +695,26 @@ export function buildModelFaqs(provider: CatalogProvider, model: CatalogModel):
     {
       question: `What is the context window for ${model.displayName}?`,
       answer: model.contextWindow
-        ? `${model.displayName} supports a listed context window of ${formatTokenCount(model.contextWindow)} tokens in Sim.`
+        ? `${model.displayName} supports a context window of ${formatTokenCount(model.contextWindow)} tokens in Sim. In an agent workflow, this determines how much conversation history, tool outputs, and retrieved documents the model can hold in a single call.`
         : `A public context window value is not currently tracked for ${model.displayName}.`,
     },
     {
       question: `What capabilities does ${model.displayName} support?`,
-        : `${model.displayName} is available in Sim, but no extra public capability flags are currently tracked for this model.`,
+        ? `${model.displayName} supports the following capabilities in Sim: ${model.capabilityTags.join(', ')}.`
+        : `${model.displayName} supports standard text generation in Sim. No additional capability flags such as tool use or structured outputs are currently tracked for this model.`,
     },
   ]
+
+  if (model.bestFor) {
+    faqs.push({
+      question: `What is ${model.displayName} best used for?`,
+      answer: `${model.bestFor} When used in a Sim workflow, it can be selected in any Agent block from the model picker.`,