SwiftCase

Workflow automation for UK service businesses. Created in the UK.

A Livepoint Solution


Cyber Essentials Certified · GDPR Compliant · UK Data Centres · ISO 27001 Standards

© 2026 SwiftCase. All rights reserved.


Your AI keeps answering,
even when a provider doesn’t.

Switchboard runs on a provider-agnostic AI architecture with automatic failover across OpenAI and Anthropic. When a provider has a bad hour, your contact centre doesn’t.

Talk to Us About Resilience
See Switchboard

What AI resilience means here

Four layers, each designed to fail over.

Every critical AI component in Switchboard sits behind a provider abstraction. Nothing is welded to a single vendor’s roadmap, pricing page, or status page.

Multi-LLM failover

Switchboard routes language model calls across OpenAI and Anthropic with automatic fallback. If a provider returns an error, times out, or starts degrading, the next provider picks up the conversation mid-flow.
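The failover pattern described above can be sketched in a few lines. This is an illustrative, hypothetical sketch only — the provider functions, names, and error handling are assumptions, not SwiftCase's actual implementation (the OpenAI stub is hard-wired to fail so the fallback path runs):

```python
# Hypothetical multi-LLM failover sketch — not SwiftCase's real code.
# Provider call signatures here are illustrative stand-ins.

class ProviderError(Exception):
    """Raised when a provider call errors out or times out."""

def call_openai(prompt: str) -> str:
    # Stand-in for a real OpenAI API call; fails on purpose in this sketch.
    raise ProviderError("503 from primary")

def call_anthropic(prompt: str) -> str:
    # Stand-in for a real Anthropic API call.
    return f"answer to: {prompt}"

# Priority-ordered provider list: primary first, fallback second.
PROVIDERS = [("openai", call_openai), ("anthropic", call_anthropic)]

def complete_with_failover(prompt: str) -> tuple[str, str]:
    """Try each provider in order; return (provider_name, reply)."""
    last_error = None
    for name, call in PROVIDERS:
        try:
            return name, call(prompt)
        except ProviderError as exc:
            last_error = exc  # log and fall through to the next provider
    raise RuntimeError(f"all providers failed: {last_error}")

provider, reply = complete_with_failover("When is my renewal due?")
# The primary stub raised, so the reply comes from the fallback provider.
```

Because the loop returns as soon as any provider succeeds, the caller never sees which vendor answered — which is the "picks up the conversation mid-flow" behaviour the page describes.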

Speech-to-text as a swappable layer

Deepgram handles real-time speech recognition today. The integration sits behind an abstraction so the transcription provider can be swapped without touching the conversation logic or the case management side.
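A "swappable layer" typically means the conversation logic depends on an interface, not a vendor SDK. A minimal sketch, assuming a hypothetical `Transcriber` protocol (the class and method names are illustrative, and the Deepgram specifics are not reproduced):

```python
# Illustrative swappable speech-to-text layer — names are assumptions,
# not SwiftCase's actual abstraction.
from typing import Protocol

class Transcriber(Protocol):
    def transcribe(self, audio: bytes) -> str: ...

class DeepgramTranscriber:
    """Stand-in for a real Deepgram integration."""
    def transcribe(self, audio: bytes) -> str:
        return "<deepgram transcript>"

class AlternativeTranscriber:
    """Any other vendor can slot in behind the same interface."""
    def transcribe(self, audio: bytes) -> str:
        return "<alternative transcript>"

def handle_turn(transcriber: Transcriber, audio: bytes) -> str:
    # Conversation logic only sees the interface, never the vendor SDK,
    # so swapping providers never touches this function.
    text = transcriber.transcribe(audio)
    return f"heard: {text}"

print(handle_turn(DeepgramTranscriber(), b"..."))
print(handle_turn(AlternativeTranscriber(), b"..."))
```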

Voice synthesis as a swappable layer

ElevenLabs powers voice output today, with the same provider-agnostic wiring. Voice quality, pricing, and latency are benchmarked continuously — and we can move when the market does.

UK hosting doesn't move

Provider failover happens inside the SwiftCase platform. Your case data, audit logs, and conversation history still live in UK data centres regardless of which AI provider is active in the moment.

Why it matters

The AI layer is the new single point of failure.

Most AI platforms are built against one LLM vendor, one speech vendor, one voice vendor. That is a convenient story on day one and a hostage situation on day four hundred.

Provider outages are not hypothetical

Every major AI provider — OpenAI, Anthropic, Google, and the voice stack vendors — has had public incidents in the last year. Single-provider platforms go dark for the duration. A contact centre or claims line that can't answer the phone is not a small problem.

Models get deprecated

Prompts tuned for one model version don't always survive the next. When a provider retires a model, single-provider platforms face a forced migration on the provider's timetable, not yours. Multi-provider means you can choose when to move.

Pricing and rate limits shift

Token prices, rate limits, and tier requirements change on provider timelines. A platform locked to one vendor has no leverage. Provider-agnostic architecture turns those changes into a config update, not a business problem.
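"A config update, not a business problem" might look something like the fragment below. This is a purely hypothetical schema sketched for illustration — the keys, model names, and structure are assumptions, not SwiftCase's actual configuration format:

```yaml
# Hypothetical routing config — illustrative shape only.
llm:
  providers:
    - name: anthropic    # promoted to primary after a pricing change
    - name: openai       # demoted to fallback
  failover:
    timeout_seconds: 8
    max_retries: 1
  pinned: false          # set true to pin a deployment to one provider
```

Swapping the order of the `providers` list is the whole migration; the conversation logic never changes.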

Regulated operations need a DR story

FCA operational resilience, NHS DSPT, and SRA business continuity expectations all ask how your critical tech fails over. “Our AI supplier went down” is not an answer. Multi-provider failover is something you can write into a BCP and evidence in an audit.

The architecture

Five layers, one conversation.

Every Switchboard conversation flows through the same stack. Providers sit behind an abstraction layer, not in the conversation code itself.

1. Channel layer

Voice, chat, WhatsApp, SMS, and email all share the same conversation engine. No per-channel AI silos.

2. Provider abstraction

A single interface in front of every AI provider. LLMs, speech-to-text, and voice synthesis are all swappable without rewriting conversation logic.

3. LLM routing

Primary and fallback LLMs with automatic failover on error, timeout, or degraded response quality. Retries are invisible to the caller.

4. Tool use

The AI reads and writes SwiftCase cases during the conversation. Tool calls are provider-independent, so a failover mid-call doesn't lose context.

5. SwiftCase core

Case data, workflow engine, audit logs, and customer records — hosted in UK data centres, unaffected by which AI provider is active upstream.

The layers you care about — conversation logic, tool use, case data — live on the SwiftCase side. Providers plug in behind them. Swap a provider and the upper layers don’t notice.
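The claim that "a failover mid-call doesn't lose context" rests on the conversation state living on the platform side, not inside any provider. A minimal sketch under that assumption — the dataclass fields and turn logic are illustrative, not SwiftCase's data model:

```python
# Sketch of provider-independent conversation state. Field names and
# the turn-handling logic are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class ConversationState:
    case_id: str
    history: list[str] = field(default_factory=list)
    tool_results: dict[str, str] = field(default_factory=dict)

def run_turn(state: ConversationState, provider: str, user_text: str) -> ConversationState:
    # State lives on the platform side; any provider can resume from it.
    state.history.append(f"user: {user_text}")
    state.tool_results["case_lookup"] = f"case {state.case_id} loaded"
    state.history.append(f"{provider}: acknowledged")
    return state

state = ConversationState(case_id="C-1042")
state = run_turn(state, "openai", "Where is my claim?")
# Mid-call failover: the next provider resumes from the same state object.
state = run_turn(state, "anthropic", "And the renewal date?")
assert len(state.history) == 4  # full context survived the provider swap
```

Because the state object never leaves the platform, the incoming provider sees the full history and tool results, not an empty conversation.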

Scenarios

What happens when the AI layer wobbles.

Five real scenarios, with how SwiftCase handles them versus a platform built against a single AI vendor.

Scenario: Primary LLM returns 500 errors for 30 minutes
  • SwiftCase: Automatic failover to the secondary LLM. Conversations continue.
  • Single-provider platform: Service degraded or offline for the duration.

Scenario: Provider deprecates the model you rely on
  • SwiftCase: Test the alternative provider in parallel, cut over on your timetable.
  • Single-provider platform: Forced migration on the provider's timetable.

Scenario: Provider raises prices or tightens rate limits
  • SwiftCase: Re-route traffic to the better-priced provider; renegotiate from a position of leverage.
  • Single-provider platform: Absorb the cost or start a multi-month platform migration.

Scenario: A new model releases with materially better performance
  • SwiftCase: A/B test against the current provider, switch when the numbers land.
  • Single-provider platform: Wait for your vendor to integrate it — if they ever do.

Scenario: Auditor asks how the AI layer fails over
  • SwiftCase: Documented multi-provider architecture, runtime telemetry, and a written DR story.
  • Single-provider platform: “We rely on our vendor’s SLA.”

Based on how single-vendor AI platforms typically describe failure modes in their own documentation. Always verify specifics with any vendor you are evaluating.

Honest limits.

What multi-provider doesn’t protect you from.

Not every component has a drop-in alternative today

The LLM layer has mature alternatives and is genuinely provider-agnostic. Some components — specific voice styles, certain real-time features — have fewer production-ready alternatives. Where that's true, we say so rather than claim a failover that wouldn't help in practice.

Third-party integrations you enable stay on their own uptime

If a workflow calls out to your CRM, payment gateway, or carrier API, that system's availability is its own. AI resilience doesn't protect you from a third-party system you chose to integrate with going down.

Failover is not a licence to ignore the primary

We monitor primary provider health continuously. Failover is a backstop, not a business model. If a provider is consistently below grade, we move the primary — not leave it broken and rely on the fallback.
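Continuous health monitoring of a primary provider is commonly done with a rolling error rate over recent calls. A minimal sketch under that assumption — the window size and threshold are illustrative, not SwiftCase's actual tuning:

```python
# Hypothetical primary-health monitor: a rolling error rate over the
# last N calls. Window and threshold values are illustrative.
from collections import deque

class HealthMonitor:
    def __init__(self, window: int = 20, max_error_rate: float = 0.2):
        self.results: deque = deque(maxlen=window)  # True = call succeeded
        self.max_error_rate = max_error_rate

    def record(self, ok: bool) -> None:
        self.results.append(ok)

    def healthy(self) -> bool:
        if not self.results:
            return True  # no data yet: assume healthy
        errors = list(self.results).count(False)
        return errors / len(self.results) <= self.max_error_rate

monitor = HealthMonitor()
for ok in [True] * 15 + [False] * 5:  # 5 failures in a 20-call window
    monitor.record(ok)
# 5/20 = 0.25 > 0.20, so this primary would be flagged for demotion
print(monitor.healthy())  # False
```

A monitor like this is what turns "failover is a backstop" into policy: a consistently unhealthy primary gets demoted rather than silently papered over by the fallback.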

Questions we get asked

Multi-provider AI, without the hand-waving.

Which AI providers does Switchboard use today?

OpenAI and Anthropic for language models with automatic failover between them. Deepgram for speech-to-text. ElevenLabs for voice synthesis. The list evolves as the market evolves — the point of the architecture is that it can.

How fast is the failover?

Failover happens within the timeout window of a single provider call, typically measured in seconds, not minutes. The caller experiences a short pause, not a dropped call. Conversation context, tool-use state, and case data carry over because they live in SwiftCase, not in the provider.

Can I pin to a specific provider if I want to?

Yes. Some customers have contractual or compliance reasons to prefer one provider. Routing is configurable per deployment. The default is multi-provider with failover; pinning is an option, not the norm.

Does failover cost extra?

No. Multi-provider routing is part of the platform architecture, not a premium tier. You are billed for the usage that happens, regardless of which provider handled it.

What happens to a conversation that is mid-call during a failover?

The conversation continues. The calling customer does not need to restart. Tool-use state, conversation history, and case context are held in SwiftCase, so the incoming provider picks up with full context rather than starting from scratch.

How do I evidence this for an audit or DR review?

We can provide an architecture summary, recent failover telemetry, and the relevant sections of our business continuity documentation under NDA. This is designed to land cleanly in an FCA, NHS DSPT, or SRA review.

Want the architecture detail for a DR review?

We can walk your technical team through the provider abstraction, share failover telemetry, and provide the sections your BCP needs — under NDA.

Request Architecture Briefing
Explore Switchboard