Supported Endpoint Types
- Azure AI Foundry
- OpenAI-Compatible
Connect to models deployed through Azure AI Foundry (formerly Azure AI Studio). RelayHub communicates with Azure’s OpenAI-compatible API surface, so any model available through your Azure deployment works seamlessly.

Requirements:
- Azure AI Foundry endpoint URL (e.g., https://your-resource.openai.azure.com/)
- API key or Azure AD credentials
- Deployment name(s) for your models
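To illustrate how these three pieces fit together, the sketch below composes a request URL in the standard Azure OpenAI deployment-based format. The resource name, deployment name, and API version are placeholders, and the helper itself is illustrative, not part of RelayHub:

```python
# Sketch: composing an Azure OpenAI-style request URL from the endpoint,
# deployment name, and API version. All values below are placeholders.

def azure_chat_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Build the chat-completions URL for an Azure OpenAI-compatible deployment."""
    return (
        f"{endpoint.rstrip('/')}/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

url = azure_chat_url(
    "https://your-resource.openai.azure.com/",
    "gpt-4o-prod",   # deployment name, not the model family name
    "2024-02-01",    # example API version
)
print(url)
```

Note that in this URL scheme the deployment name, not the underlying model name, identifies what you are calling; this is why RelayHub asks for deployment name(s) separately.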
Adding a Custom Endpoint
Configure the connection
Fill in the following fields:
- Provider Name — a label for this endpoint (e.g., “Azure Production”)
- Base URL — the full URL to your API endpoint
- API Key — your authentication credential (encrypted at rest)
- Provider Type — select Azure AI Foundry or OpenAI-Compatible
Discover models
Click Discover Models. RelayHub queries your endpoint’s model listing API and displays all available models. Select which models your team should have access to.
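A model-listing response from an OpenAI-compatible endpoint typically has the shape shown below. This is a minimal sketch of how such a response can be reduced to a list of model IDs; the payload is an illustrative sample, not real discovery output:

```python
import json

# Sketch: extracting model IDs from an OpenAI-style model-listing response.
# The sample payload below is illustrative only.
sample_response = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "gpt-4o", "object": "model"},
    {"id": "text-embedding-3-small", "object": "model"}
  ]
}
""")

model_ids = [m["id"] for m in sample_response["data"]]
print(model_ids)  # -> ['gpt-4o', 'text-embedding-3-small']
```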
Zero Leakage Guarantee
When a custom provider is active with platform fallback disabled (the default), every LLM call in the system routes through your endpoint:

- Chat conversations (standard and dual chat)
- Embedding generation for document indexing
- Background workers (memory crystals, knowledge extraction)
- Vision and image analysis tasks
- Utility tasks (summarization, classification)
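The guarantee above can be sketched as a routing rule: when a custom provider is active and fallback is off, every task category resolves to the same endpoint. This is a hypothetical helper, not RelayHub’s actual code:

```python
# Sketch (hypothetical, not RelayHub's implementation): with platform
# fallback disabled, every task category resolves to the custom endpoint.
TASKS = ["chat", "embedding", "background", "vision", "utility"]

def resolve_endpoint(task: str, custom_active: bool, fallback_enabled: bool) -> str:
    if custom_active and not fallback_enabled:
        return "custom"  # zero-leakage path: no call leaves your endpoint
    return "platform"

routes = {t: resolve_endpoint(t, custom_active=True, fallback_enabled=False)
          for t in TASKS}
print(routes)
```

With the default settings, every entry in `routes` is `"custom"`; enabling fallback is the only way a call can reach the platform provider.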
Model Discovery
RelayHub queries your endpoint’s /v1/models route to discover available models. Discovered models appear in a selection list where you can:
- Enable or disable individual models for your team
- Set a default model that is pre-selected in new chat sessions
- Label models with friendly names (e.g., “Fast” or “High Quality”)
If your endpoint does not support the /v1/models listing route, you can manually add models by name. This is common with some Azure deployments, where the model ID matches the deployment name.

Resolution Order
Custom endpoints sit at the top of RelayHub’s provider resolution chain:

- Custom Provider (highest priority)
- BYOK Key
- Platform Key (lowest priority)
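The chain behaves as a first-match lookup: the highest-priority provider that is configured wins. A minimal sketch, using a hypothetical helper rather than RelayHub’s actual implementation:

```python
# Sketch of first-match provider resolution (hypothetical helper).
from typing import Optional

def resolve_provider(custom: Optional[str],
                     byok: Optional[str],
                     platform: Optional[str]) -> str:
    """Return the highest-priority provider that is configured."""
    for provider in (custom, byok, platform):
        if provider is not None:
            return provider
    raise RuntimeError("no provider configured")

# With no custom endpoint set, the BYOK key wins over the platform key.
print(resolve_provider(None, "byok-key", "platform-key"))  # -> byok-key
```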