RelayHub gives your team access to the latest AI models from four leading providers. Models are available through the platform's shared keys, your own BYOK (bring-your-own-key) keys, or custom endpoints.

OpenAI

| Model | Best For | Context Window |
| --- | --- | --- |
| GPT-5.3 | Conversational tasks, nuanced dialogue | 256K tokens |
| GPT-5.2 | Coding, agentic workflows, tool use | 256K tokens |
| GPT-5.1 | Fast reasoning, analytical tasks | 256K tokens |
| GPT-5-mini | General-purpose, balanced speed and quality | 128K tokens |
| GPT-5-nano | Fast, cost-efficient responses | 128K tokens |
RelayHub uses the OpenAI Responses API for all GPT-5 series models, ensuring access to the latest capabilities, including native tool use and structured outputs.
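For orientation, a minimal Responses API request body looks roughly like this. The payload shape (`model` plus an `input` list) follows the public OpenAI Responses API; the helper function and the lowercase model identifier are illustrative, not RelayHub's internal client:

```python
# Sketch of a Responses API request body for a GPT-5 series model.
# The "input" field replaces the older Chat Completions "messages" field.
# build_responses_request and the "gpt-5.2" identifier are illustrative.

def build_responses_request(model: str, prompt: str) -> dict:
    """Assemble a minimal Responses API request body."""
    return {
        "model": model,
        "input": [
            {"role": "user", "content": prompt},
        ],
    }

request = build_responses_request("gpt-5.2", "Refactor this function for clarity.")
print(request["model"])  # gpt-5.2
```

In practice this body would be sent by an SDK or HTTP client with your API key; RelayHub handles that step for you when you use platform keys.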

Anthropic

| Model | Best For | Context Window |
| --- | --- | --- |
| Claude Sonnet 4.5 | Most tasks — strong balance of speed and intelligence | 200K tokens |
| Claude Opus 4.5 | Complex reasoning, deep analysis, long documents | 200K tokens |

Google

| Model | Best For | Context Window |
| --- | --- | --- |
| Gemini 3.1 Pro | Complex, multi-step tasks | 1M tokens |
| Gemini 3 Flash | Fast responses, everyday tasks | 1M tokens |
| Gemini 3.1 Flash Lite | Budget-friendly, high-volume tasks | 1M tokens |
Google Gemini models support the longest context windows, making them well-suited for analyzing large documents or lengthy conversation histories.

xAI

| Model | Best For | Context Window |
| --- | --- | --- |
| Grok 4 | Complex tasks with reasoning and vision | 128K tokens |
| Grok 3 | General-purpose, standard tasks | 128K tokens |
| Grok 3 Mini | Fast, lightweight tasks | 128K tokens |

Model Selection in Chat

Users select their preferred model from the model picker at the top of any chat session. In Dual Chat mode, users select two different models (from the same or different providers) to compare responses side by side. The available models in the picker depend on your organization’s configuration:
  • Platform keys: All models from all four providers are available
  • BYOK keys: Only models from providers where you have added a key
  • Custom endpoints: Only the models discovered or configured on your endpoint
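The three configuration modes above can be sketched as a simple filter over a model catalog. Everything here is illustrative (the `CATALOG` dict, the `available_models` function, and the mode names are assumptions for the sketch, not RelayHub's actual implementation):

```python
# Illustrative sketch of how the model picker could be populated for
# each of the three org configurations described above.
# CATALOG, available_models, and the mode strings are hypothetical names.

CATALOG = {
    "openai": ["GPT-5.3", "GPT-5.2", "GPT-5-mini"],
    "anthropic": ["Claude Sonnet 4.5", "Claude Opus 4.5"],
    "google": ["Gemini 3.1 Pro", "Gemini 3 Flash"],
    "xai": ["Grok 4", "Grok 3"],
}

def available_models(mode: str, byok_providers=(), endpoint_models=()):
    """Return the models shown in the picker for a given org configuration."""
    if mode == "platform":
        # Platform keys: every model from all four providers.
        return [m for models in CATALOG.values() for m in models]
    if mode == "byok":
        # BYOK: only providers where the org has added a key.
        return [m for p in byok_providers for m in CATALOG.get(p, [])]
    if mode == "custom":
        # Custom endpoint: only models discovered or configured there.
        return list(endpoint_models)
    raise ValueError(f"unknown mode: {mode}")

print(available_models("byok", byok_providers=["anthropic"]))
# ['Claude Sonnet 4.5', 'Claude Opus 4.5']
```

In Dual Chat mode, both pickers draw from the same filtered list, so comparisons are limited to whatever your configuration exposes.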

Feature Support by Provider

Not all providers support every RelayHub feature equally. Here is a summary of feature availability:
| Feature | OpenAI | Anthropic | Google | xAI |
| --- | --- | --- | --- | --- |
| Standard Chat | Yes | Yes | Yes | Yes |
| Dual Chat | Yes | Yes | Yes | Yes |
| File Analysis | Yes | Yes | Yes | Yes |
| Web Search | Yes | Yes | Yes | Yes |
| Reasoning/Thinking | Yes | Yes | Yes | Yes |
| Vision (Image Input) | Yes | Yes | Yes | Yes |
| Embeddings | Yes | No | Yes | No |
Embedding generation (used for document indexing and semantic search) is handled by OpenAI or Google embedding models. If you use Anthropic or xAI as your primary chat provider, RelayHub will use a separate embedding provider for indexing tasks.
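That fallback rule can be sketched in a few lines. The function name and the `preferred` default are assumptions for illustration, not RelayHub's actual routing code:

```python
# Sketch of the embedding-provider fallback described above: only OpenAI
# and Google offer embedding models, so orgs whose primary chat provider
# is Anthropic or xAI delegate indexing to a separate embedding provider.
# embedding_provider and its "preferred" default are hypothetical names.

EMBEDDING_CAPABLE = {"openai", "google"}

def embedding_provider(primary_chat_provider: str, preferred: str = "openai") -> str:
    """Pick the provider used for document indexing and semantic search."""
    if primary_chat_provider in EMBEDDING_CAPABLE:
        return primary_chat_provider
    # Anthropic/xAI chat orgs fall back to an embedding-capable provider.
    return preferred

print(embedding_provider("anthropic"))  # openai
print(embedding_provider("google"))     # google
```

This is transparent to end users: chat responses still come from the primary provider, and only indexing requests are routed elsewhere.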

Model Updates

RelayHub’s model catalog is updated regularly as providers release new models. Updates happen on the platform side — no action is needed on your part. New models automatically appear in the model picker after a platform update. If you use a custom endpoint, new models become available as soon as you deploy them and re-run model discovery.