Providers, Models, and Credentials
WRM organises AI model access through a three-tier stack: providers, models, and availabilities. This page explains how the tiers relate, how credentials are managed, and how worker definitions select which models to use.
The three-tier model stack
Providers
A provider is a company or service that hosts AI models. Examples include Anthropic, OpenAI, and Amazon Bedrock. Each provider record includes:
- Name — the provider's display name
- Provider type — a lookup value identifying the provider category
- Merchant link — a reference to the provider's merchant record in XRM, where credentials are stored
Raytio ships a starter set of providers (Anthropic, OpenAI, and Amazon Bedrock) in each tenant.
Models
A model is a specific AI model offered by a provider. Each model record includes:
- Name — the model's display name (e.g. "Claude Opus 4")
- Model identifier — the API identifier used when calling the provider (e.g. claude-opus-4-20250514)
- Provider link — which provider offers this model
Models are defined at the provider level — each model belongs to exactly one provider.
Availabilities
An availability represents a specific deployment endpoint where a model can be accessed. A single model may be available through multiple endpoints — for example, Claude Opus 4 might be accessible both through Anthropic's direct API and through Amazon Bedrock, each in different regions.
Each availability record includes:
- Model link — which model this availability is for
- Region — the deployment region (e.g. us-east-1)
- Endpoint URL — the API endpoint to call
- Pricing information — input and output token costs
Availabilities give you fine-grained control over where and how models are accessed, which matters for latency, cost, and data residency requirements.
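As a rough sketch, the three tiers map to linked records like these. The field names and pricing values are illustrative only, not the actual WRM schema:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str              # display name, e.g. "Anthropic"
    provider_type: str     # lookup value for the provider category
    merchant_id: str       # link to the merchant record holding credentials

@dataclass
class Model:
    name: str              # display name, e.g. "Claude Opus 4"
    model_identifier: str  # API identifier, e.g. "claude-opus-4-20250514"
    provider: Provider     # each model belongs to exactly one provider

@dataclass
class Availability:
    model: Model           # which model this deployment exposes
    region: str            # deployment region, e.g. "us-east-1"
    endpoint_url: str      # API endpoint to call
    input_cost: float      # cost per input token (illustrative)
    output_cost: float     # cost per output token (illustrative)

# One model can have several availabilities — e.g. the direct API
# and a Bedrock deployment — each with its own region and pricing.
anthropic = Provider("Anthropic", "ai-provider", "merchant-anthropic")
opus = Model("Claude Opus 4", "claude-opus-4-20250514", anthropic)
direct = Availability(opus, "us-east-1",
                      "https://api.anthropic.com/v1/messages", 15e-6, 75e-6)
```

Because the provider link is on the model and the model link is on the availability, a worker that holds only an availability can still reach the merchant record for credentials.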
Model parameters
Each model availability can have associated model parameters that define default configuration values for that model — settings like temperature, max tokens, top-p, and other provider-specific options.
When a worker definition adds a model preference, it can optionally override these defaults for its specific use case. For example, a creative writing worker might use a higher temperature than the default, while a code review worker might use a lower one.
Managing provider credentials
Provider API keys and authentication details are managed through the platform's existing merchant authentication system, not stored directly in worker configuration. This means:
- Credentials are managed through the same interface used for all merchant authentication in Raytio
- API keys are encrypted and never exposed in worker configuration or starter data
- Each tenant manages its own provider credentials independently
- Credential rotation uses the existing merchant authentication workflow
Starter provider data does not include API keys. After the starter providers are created in your tenant, you must configure credentials for each provider through the merchant authentication settings before workers can use those providers.
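The separation can be sketched as follows: worker configuration carries only a provider reference, and the API key is resolved from the merchant record at call time. The resolver and in-memory store below are hypothetical stand-ins for Raytio's merchant authentication system:

```python
class CredentialsNotConfigured(Exception):
    pass

# Stands in for the merchant authentication store. Starter data ships
# the provider records, but the keys are added by each tenant.
merchant_credentials: dict[str, str] = {
    "merchant-anthropic": "sk-ant-placeholder",  # configured by the tenant admin
}

def resolve_api_key(merchant_id: str) -> str:
    # Worker configuration never holds the key itself, only the
    # merchant link; the key is looked up when the call is made.
    try:
        return merchant_credentials[merchant_id]
    except KeyError:
        raise CredentialsNotConfigured(
            f"No credentials configured for {merchant_id}; "
            "add them via merchant authentication settings."
        ) from None
```

A provider whose credentials have not been configured fails fast with a clear error rather than making an unauthenticated call.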
How worker definitions select models
Worker definitions specify model preferences — an ordered list that controls which models a worker should use:
| Field | Purpose |
|---|---|
| Availability | Which specific model deployment to use |
| Priority | Ordering — lower numbers are tried first |
| Parameter overrides | Optional overrides for default model parameters (temperature, etc.) |
The priority ordering enables fallback behaviour: if the first-choice model is unavailable or rate-limited, the system can fall back to the next preference in the list. This ensures workers remain operational even when a specific provider or endpoint has issues.
Example configuration
A worker definition for code review might specify:
- Priority 1 — Claude Opus 4 via Anthropic direct API (us-east-1)
- Priority 2 — Claude Opus 4 via Bedrock (us-east-1)
- Priority 3 — GPT-4o via OpenAI (default endpoint)
If the Anthropic API is down, the worker falls back to Bedrock; if Bedrock is also unavailable, it falls back to GPT-4o.
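The fallback walk over that preference list can be sketched like this (the availability labels and the is_available check are illustrative):

```python
example_preferences = [
    # (priority, availability) — lower priority numbers are tried first
    (1, "Claude Opus 4 via Anthropic direct API (us-east-1)"),
    (2, "Claude Opus 4 via Bedrock (us-east-1)"),
    (3, "GPT-4o via OpenAI (default endpoint)"),
]

def select_availability(preferences, is_available):
    # Walk the preferences in priority order and take the first
    # deployment that is currently reachable.
    for _priority, name in sorted(preferences):
        if is_available(name):
            return name
    raise RuntimeError("No configured model availability is reachable")

# Simulate the Anthropic direct API being down: the worker
# falls back to the priority-2 Bedrock deployment.
down = {"Claude Opus 4 via Anthropic direct API (us-east-1)"}
chosen = select_availability(example_preferences, lambda n: n not in down)
```

With nothing down, priority 1 wins; with both Claude deployments down, the walk continues to GPT-4o.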