RelayHub gives your team access to the latest AI models from four leading providers. Models are available through the platform’s shared keys, your own bring-your-own-key (BYOK) credentials, or custom endpoints.
OpenAI
| Model | Best For | Context Window |
|---|---|---|
| GPT-5.3 | Conversational tasks, nuanced dialogue | 256K tokens |
| GPT-5.2 | Coding, agentic workflows, tool use | 256K tokens |
| GPT-5.1 | Fast reasoning, analytical tasks | 256K tokens |
| GPT-5-mini | General-purpose, balanced speed and quality | 128K tokens |
| GPT-5-nano | Fast, cost-efficient responses | 128K tokens |
RelayHub uses the OpenAI Responses API for all GPT-5 series models, ensuring access to the latest capabilities, including native tool use and structured outputs.
Anthropic
| Model | Best For | Context Window |
|---|---|---|
| Claude Sonnet 4.5 | Most tasks — strong balance of speed and intelligence | 200K tokens |
| Claude Opus 4.5 | Complex reasoning, deep analysis, long documents | 200K tokens |
Google
| Model | Best For | Context Window |
|---|---|---|
| Gemini 3.1 Pro | Complex, multi-step tasks | 1M tokens |
| Gemini 3 Flash | Fast responses, everyday tasks | 1M tokens |
| Gemini 3.1 Flash Lite | Budget-friendly, high-volume tasks | 1M tokens |
Google Gemini models support the longest context windows, making them well-suited for analyzing large documents or lengthy conversation histories.
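Because context limits vary so widely across the catalog, it can help to estimate whether a document will fit before sending it. A minimal sketch, assuming the common rough heuristic of about 4 characters per token; the model-name keys and the output reserve are illustrative, and you should use the provider's real tokenizer for exact counts:

```python
# Rough pre-flight check: will this text fit in a model's context window?
# Token counts are estimated at ~4 characters per token, a common heuristic;
# the model-name keys below are illustrative placeholders, not API identifiers.

CONTEXT_WINDOWS = {
    "gemini-3.1-pro": 1_000_000,
    "claude-sonnet-4.5": 200_000,
    "gpt-5-mini": 128_000,
}

def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_in_context(model: str, text: str, reserve_for_output: int = 4_000) -> bool:
    """True if estimated prompt tokens plus an output reserve fit the window."""
    window = CONTEXT_WINDOWS[model]
    return estimate_tokens(text) + reserve_for_output <= window
```

For example, a short prompt easily fits a 128K window, while a 500K-character document does not, but would fit Gemini's 1M-token window.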
xAI
| Model | Best For | Context Window |
|---|---|---|
| Grok 4 | Complex tasks with reasoning and vision | 128K tokens |
| Grok 3 | General-purpose, standard tasks | 128K tokens |
| Grok 3 Mini | Fast, lightweight tasks | 128K tokens |
Model Selection in Chat
Users select their preferred model from the model picker at the top of any chat session. In Dual Chat mode, users select two different models (from the same or different providers) to compare responses side by side.
The available models in the picker depend on your organization’s configuration:
- Platform keys: All models from all four providers are available
- BYOK keys: Only models from providers where you have added a key
- Custom endpoints: Only the models discovered or configured on your endpoint
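The three rules above can be sketched as a small filter. Everything here is illustrative: the catalog contents, provider slugs, and configuration shape are assumptions for the example, not RelayHub's actual API.

```python
# Illustrative sketch of the model-picker filtering rules described above.
# The catalog and configuration shapes are hypothetical, not RelayHub's API.

CATALOG = {
    "openai": ["GPT-5.3", "GPT-5.2", "GPT-5-mini"],
    "anthropic": ["Claude Sonnet 4.5", "Claude Opus 4.5"],
    "google": ["Gemini 3.1 Pro", "Gemini 3 Flash"],
    "xai": ["Grok 4", "Grok 3"],
}

def available_models(mode: str, byok_providers=(), endpoint_models=()):
    """Return the models a user should see, per the org's key configuration."""
    if mode == "platform":
        # Platform keys: every model from every provider.
        return [m for models in CATALOG.values() for m in models]
    if mode == "byok":
        # BYOK: only providers where a key has been added.
        return [m for p in byok_providers for m in CATALOG.get(p, [])]
    if mode == "custom":
        # Custom endpoint: only what discovery (or manual config) returned.
        return list(endpoint_models)
    raise ValueError(f"unknown mode: {mode}")
```

An org with only an Anthropic BYOK key would see just the two Claude models, while a custom endpoint shows exactly its discovered list.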
Feature Support by Provider
Not all providers support every RelayHub feature equally. Here is a summary of feature availability:
| Feature | OpenAI | Anthropic | Google | xAI |
|---|---|---|---|---|
| Standard Chat | Yes | Yes | Yes | Yes |
| Dual Chat | Yes | Yes | Yes | Yes |
| File Analysis | Yes | Yes | Yes | Yes |
| Web Search | Yes | Yes | Yes | Yes |
| Reasoning/Thinking | Yes | Yes | Yes | Yes |
| Vision (Image Input) | Yes | Yes | Yes | Yes |
| Embeddings | Yes | No | Yes | No |
Embedding generation (used for document indexing and semantic search) is handled by OpenAI or Google embedding models. If you use Anthropic or xAI as your primary chat provider, RelayHub will use a separate embedding provider for indexing tasks.
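That routing rule can be sketched as follows. The provider names match the feature table, but the function itself and the choice of OpenAI as the default fallback are illustrative assumptions:

```python
# Sketch of the embedding-provider routing described above: Anthropic and xAI
# have no embedding models, so indexing falls back to OpenAI or Google.
# The default preference order is an assumption for illustration.

EMBEDDING_CAPABLE = {"openai", "google"}

def embedding_provider(chat_provider: str, preferred: str = "openai") -> str:
    """Pick which provider generates embeddings for document indexing."""
    if chat_provider in EMBEDDING_CAPABLE:
        return chat_provider   # reuse the chat provider's own embeddings
    return preferred           # anthropic/xai: use a separate provider
```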
Model Updates
RelayHub’s model catalog is updated regularly as providers release new models. Updates happen on the platform side — no action is needed on your part. New models automatically appear in the model picker after a platform update.
If you use a custom endpoint, new models become available as soon as you deploy them and re-run model discovery.
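If your custom endpoint is OpenAI-compatible, model discovery typically amounts to listing its `/v1/models` route. A minimal sketch of parsing that response shape; the payload structure follows the OpenAI convention, and whether your endpoint (or RelayHub's own discovery mechanism) matches it exactly is an assumption:

```python
import json

# Sketch: extract model IDs from an OpenAI-compatible GET /v1/models response.
# The payload shape ({"data": [{"id": ...}]}) follows the OpenAI convention;
# your custom endpoint may differ.

def parse_model_list(payload: str) -> list[str]:
    """Return the sorted model IDs found in a /v1/models JSON response."""
    body = json.loads(payload)
    return sorted(item["id"] for item in body.get("data", []))

sample = '{"object": "list", "data": [{"id": "my-finetune-b"}, {"id": "my-finetune-a"}]}'
```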