15 years of building workflow automation for UK businesses has taught us what actually works. This is how we think about architecture, code quality, and shipping software that real people depend on.
These aren't aspirational. They're how we actually build. 11.8M+ cases processed, 40,000+ users served, 15 years running.
Every decision starts with data. Before adding AI or automation, we ensure data is integrated, standardised, and governed. Bad data in, bad results out, at scale.
Modular design with clear domain boundaries. You can understand one part of the system without loading the entire codebase into your head.
JWT authentication, role-based access control, audit logging on every action, data encryption at rest and in transit. Security isn't a feature; it's foundational.
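As a minimal sketch of the access-control and audit half of that list (the role names, actions, and policy table here are invented for illustration, not our actual implementation, and JWT verification is assumed to have already happened upstream):

```typescript
// Hypothetical roles and actions, for illustration only.
type Role = "admin" | "caseworker" | "viewer";

interface AuditEntry {
  user: string;
  action: string;
  allowed: boolean;
  at: string; // ISO timestamp
}

const auditLog: AuditEntry[] = [];

// Which roles may perform which actions (assumed policy, not the real one).
const policy: Record<string, Role[]> = {
  "case:read": ["admin", "caseworker", "viewer"],
  "case:write": ["admin", "caseworker"],
  "user:manage": ["admin"],
};

// Every check is recorded whether it succeeds or not,
// mirroring "audit logging on every action".
function authorise(user: string, role: Role, action: string): boolean {
  const allowed = policy[action]?.includes(role) ?? false;
  auditLog.push({ user, action, allowed, at: new Date().toISOString() });
  return allowed;
}
```

The point of the shape is that authorisation and auditing are one call, so no code path can check permissions without leaving a trail.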
Workflows trigger events. Events trigger actions. Scheduled jobs handle background processing. The system reacts rather than polls.
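The pattern above can be sketched with Node's built-in EventEmitter; the event name and the actions it triggers are invented for illustration:

```typescript
import { EventEmitter } from "node:events";

const bus = new EventEmitter();
const actions: string[] = [];

// A workflow step completing triggers an event...
bus.on("case.approved", (caseId: string) => {
  // ...and the event triggers actions. Nothing polls; the system reacts.
  actions.push(`notify-owner:${caseId}`);
  actions.push(`schedule-archive:${caseId}`);
});

// Hypothetical case identifier.
bus.emit("case.approved", "case-42");
```

Because subscribers are decoupled from the emitter, a new action can be attached to an existing event without touching the workflow that raises it.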
No agile theatre. No story points for their own sake. Practices that help us ship reliable software, not rituals that make us feel productive.
Short-lived feature branches. Frequent merges to main. No month-long branches that become impossible to merge.
Tests exist to catch regressions and document behaviour, not to hit arbitrary coverage metrics.
Readability matters more than cleverness. Code is read far more than it's written.
Deploying should be boring. If it's exciting, something is wrong.
Technology decisions should be explainable. Here's the reasoning behind our major choices: no “because it's cool” justifications.
Mature, battle-tested, excellent for complex business logic. We've processed 11.8M+ cases on this foundation. Stability matters more than hype.
Real-time AI communication needs type safety and modern async patterns. Node.js handles concurrent connections well. TypeScript catches errors before production.
Our data is inherently relational. Workflows have relationships. Cases connect to users, documents, events. Relational databases handle this well, and have for decades.
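To make the shape of those relationships concrete (the entities and fields below are hypothetical, not our actual schema), here is the kind of foreign-key traversal a SQL JOIN does for free, written out by hand:

```typescript
// Hypothetical entities; ownerId and caseId play the role of foreign keys.
interface Case { id: number; ownerId: number }      // → User.id
interface Doc  { id: number; caseId: number }       // → Case.id

// "All documents on cases owned by a given user" —
// in SQL this is a two-table JOIN; in memory it's two passes.
function documentsForUser(userId: number, cases: Case[], docs: Doc[]): Doc[] {
  const owned = new Set(
    cases.filter(c => c.ownerId === userId).map(c => c.id),
  );
  return docs.filter(d => owned.has(d.caseId));
}
```

A relational database answers queries like this declaratively and enforces the foreign keys, which is the argument for reaching for one here.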
We build custom models for domain-specific tasks and use external providers (OpenAI, Anthropic, Deepgram, ElevenLabs) where they excel. No vendor lock-in: if one provider degrades, we switch.
Our customers are UK businesses handling sensitive data. Keeping data in the UK isn't just compliance; it's the right thing to do.
Voice AI needs millisecond latency. HTTP polling doesn't cut it. WebSockets give us bidirectional, real-time communication.
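Once a socket is open, frames arrive whenever the server has something to say, with no request/response round-trip. A sketch of the receiving side, as a dispatcher routed by message type (the message types here, like "transcript.partial", are invented for illustration):

```typescript
type Handler = (payload: unknown) => void;

const handlers = new Map<string, Handler>();

// Register a handler for one message type.
function on(type: string, fn: Handler): void {
  handlers.set(type, fn);
}

// Called for every frame the socket delivers. Server-pushed events
// are handled the moment they arrive — the polling delay disappears.
function dispatch(raw: string): boolean {
  const msg = JSON.parse(raw) as { type: string; payload?: unknown };
  const fn = handlers.get(msg.type);
  if (!fn) return false; // unknown types are ignored, not fatal
  fn(msg.payload);
  return true;
}
```

The same dispatcher works for sending: both directions are just frames on one long-lived connection, which is what "bidirectional" buys over HTTP polling.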
Big decisions aren't made in meetings by the loudest person. They're made through clear thinking, documented reasoning, and evidence from real usage.
We start with the problem, not the technology. What user pain are we solving? What business outcome matters?
For genuinely new territory, we build throwaway prototypes. Learn fast, then build properly.
Significant architectural changes get written up. Anyone can comment. Decisions are documented, not tribal knowledge.
Perfect is the enemy of good. Ship something useful, get feedback, improve. Real usage beats theoretical design.
Every codebase has debt. Pretending otherwise is delusional. We track it explicitly.
Not big-bang rewrites. Small improvements every sprint. Refactor as you go.
Intentional shortcuts for speed are fine, if documented. Accidental mess is not.
Solving tomorrow's problems today creates its own debt. Build for now, refactor when needed.
Every system has incidents. What matters is how you respond. Our approach is simple: fix fast, learn thoroughly, prevent recurrence.
Automated monitoring catches issues. Alerts go to the right people, not everyone.
Clear escalation paths. The person who can fix it gets notified. No waiting for approval.
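Routing like this can be as small as a lookup table; the services, severities, and names below are assumptions, purely to illustrate "the right people, not everyone":

```typescript
type Severity = "page" | "warn";

// Hypothetical on-call ownership per service.
const onCall: Record<string, string> = {
  "voice-ai": "asha",
  "workflow-core": "ben",
};

// Page the person who can fix it; everything less urgent
// goes to a shared channel instead of waking anyone up.
function route(service: string, severity: Severity): string {
  const owner = onCall[service] ?? "engineering-duty"; // fallback rota
  return severity === "page" ? `page:${owner}` : "channel:#alerts";
}
```

Keeping the escalation path in data rather than tribal knowledge also means the rota can change without anyone relearning who to call.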
Fix first, understand later. Get the system working, then dig into root cause.
Blameless analysis. What broke? Why? How do we prevent it? Document and share.
If this approach resonates with how you like to work, we'd love to hear from you. We're always looking for engineers who care about doing things properly.