Switchboard AI does not generate, negotiate, or commit to a customer-facing value or decision. Humans configure what is offered. Humans approve what is sent. Humans own every outcome. Built for FCA Consumer Duty — not retro-fitted to it.
How We Build It
Switchboard is tool-using AI in regulated work. These are the design constraints that make those tools safe.
Values, offers, and decisions are configured by named humans before the AI ever speaks. The engineer enters the total-loss valuation. The handler authors the email. The manager signs off the threshold. The AI presents what humans chose — it does not generate, decide, or negotiate.
When the AI shares comparable evidence — market valuations, similar cases, supporting data — it shares it both ways. Comps below the offer get shown. Comps above the offer get shown. The customer sees the full picture. The AI does not haggle, convince, or selectively withhold.
AI-drafted customer emails land in a handler approval queue before sending. pending_approval → approved → sent (or rejected). Never auto-sent. Always traceable to a named human who clicked approve.
Hand off to a named human on low confidence, frustration detection, explicit customer request, or error states. The receiving handler gets the full conversation summary and any actions already taken — no cold transfers, no repeated information.
Every AI action and every human action is logged to Timeline with actor, timestamp, resource, outcome, and severity. Filterable. Exportable as a regulatory evidence pack. Per case. Per agent. Per second.
There is no “the AI did it” entry in Timeline. Every value the AI presented is one a named engineer set. Every email it drafted was approved by a named handler. Every workflow change was made by a named manager. Personal accountability mapped to actions, by design.
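A sketch of the record shape this implies, using the fields named above (actor, timestamp, resource, outcome, severity). The field names and validation are illustrative assumptions, not Switchboard's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class TimelineEntry:
    actor: str      # always a named person -- the engineer, handler, or manager
    action: str     # e.g. "set_valuation", "approved_email"
    resource: str   # e.g. "case/123"
    outcome: str
    severity: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def __post_init__(self) -> None:
        # No anonymous entries: an entry without a named actor is invalid.
        if not self.actor.strip():
            raise ValueError("Timeline entries require a named actor")
```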
Worked Example
The conversation pattern Switchboard runs in production. The AI presents the engineer’s value, captures the customer’s response, and routes disagreement back to the named human.
Before the AI calls the customer, a named engineer reviews the vehicle assessment and enters the total-loss valuation as a structured field on the case. This is the value the AI will present. The AI cannot change it.
The AI calls the customer, presents the engineer's value, and walks through the comparable market evidence symmetrically — comps below the value and comps above. The same evidence pack a human handler would use.
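Symmetric disclosure can be stated as a simple invariant: every comparable appears in the pack, partitioned around the value, with nothing filtered out. A minimal sketch under that assumption (not Switchboard's actual evidence format):

```python
def evidence_pack(value: float, comps: list[float]) -> dict:
    """Partition every comparable around the engineer's value.

    Nothing is dropped: len(below) + len(above) == len(comps), always.
    """
    return {
        "value": value,
        "below": sorted(c for c in comps if c < value),
        "above": sorted(c for c in comps if c >= value),
    }
```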
Whatever the customer says is captured as a structured event on the case. Agreement is logged. Disagreement is logged. Tone, hesitation, repeated questions — all written into the Timeline. The AI does not push back, haggle, or attempt to convince.
If the customer keeps disagreeing, the conversation escalates back to the named engineer. The AI never settles a dispute. Resolution is a human-to-human conversation with the AI's complete record of the call as evidence.
In production at Laird Assessors. Switchboard runs this exact flow on real total-loss conversations: out-of-hours intake, outbound bodyshop chasing, and TL valuation chats. Every conversation is logged in Timeline. Every value the AI presents was set by a named engineer. Every disagreement is escalated to that engineer.
The Hard Limits
These are not features that could be turned on with a config flag. They are design constraints baked into the platform.
No AI-generated total-loss valuations. No haggling. No convincing.
Indemnity, liability, repudiation, FOS-relevant calls — humans only.
Drafts go to the approval queue. A human clicks send.
Every action logged in Timeline names a human. No anonymous AI actions.
Symmetric disclosure: comps both directions, every time.
Low confidence, frustration, explicit request, or error — handed to a human, with full context.
Escalation
Four configurable triggers. The receiving handler picks up with the full conversation summary and any actions the AI has already taken.
Low confidence: configurable threshold per agent. Below it, transfer.
Frustration: tone, repetition, escalating language — handed off.
Explicit request: “speak to a human” — immediate transfer with full context.
Error states: tool failure, integration timeout, ambiguous input — handed off.
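The four triggers above could be expressed as a per-agent config plus one predicate. This is a hypothetical sketch — the key names and heuristics are assumptions, not Switchboard's configuration format:

```python
# Hypothetical per-agent escalation config covering the four triggers.
ESCALATION_CONFIG = {
    "confidence_threshold": 0.75,          # below this, transfer
    "frustration_sensitivity": "medium",   # tone/repetition heuristics
    "handoff_phrases": ["speak to a human", "talk to a person"],
    "escalate_on_tool_failure": True,      # timeouts, errors, ambiguous input
}

def should_escalate(confidence: float, frustrated: bool,
                    utterance: str, tool_failed: bool,
                    cfg: dict = ESCALATION_CONFIG) -> bool:
    """True if any of the four configured triggers fires."""
    return (
        confidence < cfg["confidence_threshold"]
        or frustrated
        or any(p in utterance.lower() for p in cfg["handoff_phrases"])
        or (tool_failed and cfg["escalate_on_tool_failure"])
    )
```

Any single trigger is sufficient; the handoff then carries the full conversation summary with it.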
Why This Design
Consumer Duty, SMCR, and audit defensibility are not bolt-ons. They shape what the AI can do — and, more importantly, what it cannot.
Fair customer outcomes, by design. Symmetric disclosure removes selective-information risk. Continued disagreement always reaches a named human. The evidence trail is a side-effect of normal operation — not a separate compliance project.
Every AI action is attributed to a named human owner: the engineer who set the value, the handler who approved the email, the manager who signed off the threshold. Senior Manager Conduct Rules map cleanly onto Timeline entries.
The AI does not haggle, convince, or attempt to lower an offer. It presents and listens. Customer disagreement is a routing signal, not an objection to overcome.
Timeline produces regulatory evidence packs on demand. Every conversation has a transcript. Every decision has an attributed human. Every approval has a signature. No retro-fitted documentation projects.
Buyer Questions
Why doesn't the AI just generate the valuation?
Because that's not what the AI is for. A total-loss valuation is a regulated decision with a real customer impact and a named human accountable for getting it right. The AI's job is to present the engineer's value with the same evidence pack a human handler would use, capture the customer's response cleanly, and route disagreement back to the engineer. Letting the AI generate or negotiate the value would put a regulatory decision in the hands of a system that cannot hold an SMCR responsibility.
What happens when a customer disagrees with the value?
The AI does not push back. It captures their position, logs it as a structured event in the case, and escalates the conversation to the named engineer who set the value. The engineer takes over with a complete transcript and all the context the AI gathered. There is no haggling layer between the customer and the human accountable for the decision.
How is confidence measured?
Confidence is measured per-turn against an explicit threshold configured per agent. The agent definition specifies what the AI is allowed to do, what tools it can call, and the confidence threshold below which it must transfer. Thresholds are per-task: clarifying a name has a different threshold to confirming a settlement value.
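Per-task thresholds could look like the sketch below — the task names and values are illustrative assumptions, chosen to show a low-stakes clarification versus a high-stakes confirmation:

```python
# Illustrative per-task confidence thresholds (not Switchboard's real values).
TASK_THRESHOLDS = {
    "clarify_name": 0.60,              # low stakes: ask again if unsure
    "confirm_settlement_value": 0.95,  # high stakes: transfer unless near-certain
}
DEFAULT_THRESHOLD = 0.80

def must_transfer(task: str, confidence: float) -> bool:
    """True when this turn's confidence falls below the task's threshold."""
    return confidence < TASK_THRESHOLDS.get(task, DEFAULT_THRESHOLD)
```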
Can we tune the escalation triggers?
Yes. Confidence thresholds, frustration sensitivity, explicit-handoff phrases, and tool-failure handling are all configurable per agent definition. Most teams start with our defaults during a 30-day pilot and tune from there based on real call data.
Isn't this just a chatbot with extra steps?
No. Switchboard is tool-using AI — it makes real API calls during conversations: looking up cases, verifying identity, updating fields, creating new cases. The “extra steps” are the design constraints that make those tools safe in regulated work: human-set values, symmetric disclosure, approval queues, configurable escalation, and Timeline audit trails. A chatbot does none of that.
Is this in production anywhere?
Switchboard runs in production at Laird Assessors — out-of-hours inbound calls, outbound bodyshop chasing, and total-loss valuation conversations. Every TL conversation follows the engineer-sets-value flow described on this page. See the case study for context.
How do we start?
Pick one workflow — overflow FNOL, out-of-hours intake, or your bodyshop chase queue. Measured against your own SLA and cost-per-claim. No platform migration. No long lock-in.