Switchboard runs on a provider-agnostic AI architecture with automatic failover across OpenAI and Anthropic. When a provider has a bad hour, your contact centre doesn’t.
What AI resilience means here
Every critical AI component in Switchboard sits behind a provider abstraction. Nothing is welded to a single vendor’s roadmap, pricing page, or status page.
Switchboard routes language model calls across OpenAI and Anthropic with automatic fallback. If a provider returns an error, times out, or starts degrading, the next provider picks up the conversation mid-flow.
Deepgram handles real-time speech recognition today. The integration sits behind an abstraction so the transcription provider can be swapped without touching the conversation logic or the case management side.
ElevenLabs powers voice output today, with the same provider-agnostic wiring. Voice quality, pricing, and latency are benchmarked continuously — and we can move when the market does.
Provider failover happens inside the SwiftCase platform. Your case data, audit logs, and conversation history still live in UK data centres regardless of which AI provider is active in the moment.
Why it matters
Most AI platforms are built against one LLM vendor, one speech vendor, one voice vendor. That is a convenient story on day one and a hostage situation on day four hundred.
Every major AI provider — OpenAI, Anthropic, Google, and the voice stack vendors — has had public incidents in the last year. Single-provider platforms go dark for the duration. A contact centre or claims line that can't answer the phone is not a small problem.
Prompts tuned for one model version don't always survive the next. When a provider retires a model, single-provider platforms face a forced migration on the provider's timetable, not yours. Multi-provider means you can choose when to move.
Token prices, rate limits, and tier requirements change on provider timelines. A platform locked to one vendor has no leverage. Provider-agnostic architecture turns those changes into a config update, not a business problem.
FCA operational resilience, NHS DSPT, and SRA business continuity expectations all ask how your critical tech fails over. “Our AI supplier went down” is not an answer. Multi-provider failover is something you can write into a BCP and evidence in an audit.
The architecture
Every Switchboard conversation flows through the same stack. Providers sit behind an abstraction layer, not in the conversation code itself.
Voice, chat, WhatsApp, SMS, and email all share the same conversation engine. No per-channel AI silos.
A single interface in front of every AI provider. LLMs, speech-to-text, and voice synthesis are all swappable without rewriting conversation logic.
Primary and fallback LLMs with automatic failover on error, timeout, or degraded response quality. Retries are invisible to the caller.
The AI reads and writes SwiftCase cases during the conversation. Tool calls are provider-independent, so a failover mid-call doesn't lose context.
Case data, workflow engine, audit logs, and customer records — hosted in UK data centres, unaffected by which AI provider is active upstream.
The layers you care about — conversation logic, tool use, case data — live on the SwiftCase side. Providers plug in behind them. Swap a provider and the upper layers don’t notice.
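The pattern above can be sketched in a few lines of Python. Everything here is illustrative: Switchboard's internal interfaces are not public, and the names (`Provider`, `ProviderError`, `complete_with_failover`) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

class ProviderError(Exception):
    """A provider errored, timed out, or returned a degraded response."""

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]  # prompt -> completion

def complete_with_failover(providers: list[Provider], prompt: str) -> str:
    """Try providers in priority order. Conversation state lives outside
    the providers, so switching mid-call loses no context."""
    errors = []
    for provider in providers:
        try:
            return provider.complete(prompt)
        except ProviderError as exc:
            errors.append((provider.name, exc))  # record, fall through
    raise RuntimeError(f"all providers failed: {errors}")

# Usage: a healthy fallback picks up when the primary fails.
def flaky_primary(prompt: str) -> str:
    raise ProviderError("upstream timeout")

def healthy_fallback(prompt: str) -> str:
    return f"ok: {prompt}"

providers = [Provider("primary", flaky_primary),
             Provider("fallback", healthy_fallback)]
print(complete_with_failover(providers, "hello"))  # ok: hello
```

The detail that matters is what is not in the loop: no per-provider conversation state, so the fallback call is just another call rather than a restart.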
Scenarios
Five real scenarios, showing how SwiftCase handles each one versus a platform built against a single AI vendor.
Based on how single-vendor AI platforms typically describe failure modes in their own documentation. Always verify specifics with any vendor you are evaluating.
What multi-provider doesn’t protect you from
The LLM layer has mature alternatives and is genuinely provider-agnostic. Some components — specific voice styles, certain real-time features — have fewer production-ready alternatives. Where that's true, we say so rather than claim a failover that wouldn't help in practice.
If a workflow calls out to your CRM, payment gateway, or carrier API, that system's availability is its own concern. AI resilience doesn't protect you from an outage in a third-party system you chose to integrate.
We monitor primary provider health continuously. Failover is a backstop, not a business model. If a provider consistently underperforms, we move the primary rather than leaving it broken and relying on the fallback.
Questions we get asked
OpenAI and Anthropic for language models with automatic failover between them. Deepgram for speech-to-text. ElevenLabs for voice synthesis. The list evolves as the market evolves — the point of the architecture is that it can.
Failover happens within the timeout window of a single provider call, typically measured in seconds, not minutes. The caller experiences a short pause, not a dropped call. Conversation context, tool-use state, and case data carry over because they live in SwiftCase, not in the provider.
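One way to bound each provider call to a single timeout window, as described above, is a per-call budget. The helper, the budget value, and the provider functions below are all hypothetical, not Switchboard's actual configuration.

```python
import concurrent.futures
import time

CALL_TIMEOUT_S = 5.0  # hypothetical per-call budget; real values are not published

def call_with_timeout(fn, prompt, timeout=CALL_TIMEOUT_S):
    """Bound a single provider call: a hung provider costs at most
    `timeout` seconds before the next provider can be tried."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, prompt)
    try:
        return future.result(timeout=timeout)
    except concurrent.futures.TimeoutError:
        raise TimeoutError(f"provider call exceeded {timeout}s") from None
    finally:
        pool.shutdown(wait=False)  # don't block on the hung call

def slow_provider(prompt):
    time.sleep(1.0)  # stands in for a degraded upstream
    return "late answer"

def fast_provider(prompt):
    return "answer"

# The caller sees a short pause, then the next provider's response.
try:
    result = call_with_timeout(slow_provider, "hi", timeout=0.1)
except TimeoutError:
    result = call_with_timeout(fast_provider, "hi")
print(result)  # answer
```

The pause the caller experiences is at most one timeout budget, because the failover decision is made as soon as the budget expires rather than waiting for the provider to give up.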
Yes. Some customers have contractual or compliance reasons to prefer one provider. Routing is configurable per deployment. The default is multi-provider with failover; pinning is an option, not the norm.
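As an illustration of what per-deployment pinning could look like, here is a minimal routing-policy sketch. The schema, field names, and provider identifiers are hypothetical, not Switchboard's real configuration.

```python
# Default policy: multi-provider with failover, in priority order.
DEFAULT_POLICY = {"mode": "failover", "order": ["openai", "anthropic"]}

def resolve_providers(deployment: dict) -> list[str]:
    """Pinned deployments use exactly one provider; everyone else
    gets the default failover order."""
    policy = deployment.get("routing", DEFAULT_POLICY)
    if policy["mode"] == "pinned":
        return [policy["provider"]]
    return list(policy["order"])

# A customer with a compliance reason to pin one provider:
pinned = {"routing": {"mode": "pinned", "provider": "anthropic"}}
print(resolve_providers(pinned))  # ['anthropic']

# A deployment with no override falls back to the default:
print(resolve_providers({}))      # ['openai', 'anthropic']
```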
No. Multi-provider routing is part of the platform architecture, not a premium tier. You are billed for the usage that happens, regardless of which provider handled it.
The conversation continues. The calling customer does not need to restart. Tool-use state, conversation history, and case context are held in SwiftCase, so the incoming provider picks up with full context rather than starting from scratch.
We can provide an architecture summary, recent failover telemetry, and the relevant sections of our business continuity documentation under NDA. This is designed to land cleanly in an FCA, NHS DSPT, or SRA review.
We can walk your technical team through the provider abstraction, share failover telemetry, and provide the sections your BCP needs — under NDA.