The Right Question Is Not Build vs Buy

For the past decade, core banking migration conversations have started with the wrong question. Banks and their advisors ask: should we build or should we buy? They debate Temenos versus Thought Machine, Mambu versus Fiserv, a bespoke build versus a vendor platform. The vendor shortlists get longer, the RFPs get more detailed, and the decision still takes eighteen months.

The question I have started asking instead: what is the right architecture for your bank?

This is a different question because it starts from the bank's product strategy and works backward to the technology — not the other way around. A challenger bank launching savings and SME lending products from scratch has a different architecture requirement to a regional building society migrating twelve product lines from a COBOL core. Treating those as the same migration decision is where most strategies fail.

In 2026, the architecture that is winning — and by winning I mean producing the best operational outcomes, the fastest time-to-product, and the most sustainable long-term maintainability — is composable banking architecture. I want to show you what that looks like in practice, because most commentary on this topic stays theoretical.


Allica Bank: The Reference Architecture

Allica Bank is a UK challenger bank focused on SME lending. It is also the clearest example of modern composable core banking architecture that I have encountered in advisory work. Understanding their stack is the fastest way to understand what composable banking actually means.

Allica's architecture runs from the front end through a microservices and orchestration layer to a data layer structured around product-specific ledgers, not a single monolithic core. Each product line uses a best-of-breed ledger for its domain; deposits, for example, run on Saascada's specialist deposit engine.

The orchestration layer is where the architecture earns its keep. It ties everything together, manages cross-product logic (a loan drawdown affects a deposit account, an asset purchase triggers a GL posting), and exposes a unified API surface to the front end. The front end does not know — and does not care — that deposits and lending run on different ledger systems.
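To make the orchestration idea concrete, here is a minimal sketch in Python. The ledger classes, account names, and drawdown logic are illustrative assumptions, not Allica's actual implementation; the point is that the front end calls one facade, and the cross-product logic lives in one place.

```python
# Illustrative sketch of an orchestration layer: one API surface,
# product-specific ledgers behind it. All names are hypothetical.

class DepositLedger:
    def __init__(self):
        self.balances = {}

    def credit(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount


class LendingLedger:
    def __init__(self):
        self.drawn = {}

    def draw_down(self, loan_id, amount):
        self.drawn[loan_id] = self.drawn.get(loan_id, 0) + amount


class GeneralLedger:
    def __init__(self):
        self.postings = []

    def post(self, debit, credit, amount):
        self.postings.append((debit, credit, amount))


class Orchestrator:
    """Unified API surface. The front end never sees the individual ledgers."""

    def __init__(self):
        self.deposits = DepositLedger()
        self.lending = LendingLedger()
        self.gl = GeneralLedger()

    def loan_drawdown(self, loan_id, deposit_account, amount):
        # Cross-product logic managed centrally: one drawdown touches
        # the lending ledger, the customer's deposit account, and the GL.
        self.lending.draw_down(loan_id, amount)
        self.deposits.credit(deposit_account, amount)
        self.gl.post(debit="loans_receivable",
                     credit="customer_deposits", amount=amount)


bank = Orchestrator()
bank.loan_drawdown("LN-001", "ACC-42", 50_000)
```

Swapping the deposit engine in this model means replacing `DepositLedger` behind the facade; nothing above the `Orchestrator` changes.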

This is composable architecture done correctly. No single platform is trying to do everything. Each component does one thing well. The integration complexity is real, but it is managed centrally in the orchestration layer rather than distributed across the product surface.


Why Saascada Specifically Matters

I want to call out Saascada because it illustrates a specific point that is widely misunderstood in core banking procurement decisions.

A specialist deposit engine is not a lesser option than a platform that does deposits, lending, cards, and payments. In many architectures, it is a better option.

A deposit ledger has a specific, well-defined job: account opening, deposit management, interest calculation, reporting for FSCS protection, integration with payment rails for inbound and outbound transfers. Those requirements are stable and well-understood. When you build a deposit engine that focuses exclusively on that problem domain — and nothing else — you get a system where product changes are fast, regulatory updates are scoped to the relevant module, and the engineering surface area is contained.
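As a flavour of how contained that problem domain is, a daily interest accrual, the bread and butter of a deposit engine, fits in a few lines. This is a generic sketch under an assumed simple-interest ACT/365 day-count convention, not Saascada's actual calculation:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def daily_interest(balance: Decimal, annual_rate: Decimal,
                   days_in_year: int = 365) -> Decimal:
    """One day's simple-interest accrual on a deposit balance.

    Assumes an ACT/365 day-count convention and banker's rounding
    to the penny; real deposit engines make both configurable.
    """
    accrual = balance * annual_rate / days_in_year
    return accrual.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

# £10,000 at 4.5% accrues roughly £1.23 per day under ACT/365.
accrued = daily_interest(Decimal("10000.00"), Decimal("0.045"))
```

The narrow scope is the point: the configuration surface is day-count conventions and rounding rules, not loan risk-weights.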

A monolithic core banking platform that tries to handle deposit accounting alongside complex lending structures, multi-currency exposure, and asset finance introduces coupling between domains that have nothing to do with each other. Change the FSCS reporting requirements for deposits and you are running change management through a system that also manages loan book risk-weights. That is not a hypothetical risk — it is a predictable source of implementation delay, testing overhead, and upgrade risk.

Saascada is cloud-native, API-first, and designed to be the deposit ledger in a composable stack. For banks that have separated their product domains architecturally, this is exactly the right component to slot in.


Three Principles That Define Modern Banking Architecture

If I were advising a bank on the architecture they should be targeting — regardless of where they are starting from — these are the three non-negotiable principles:

1. Composable

Assemble your banking stack from best-of-breed components connected via APIs, not from a monolithic suite that makes trade-offs across every product domain simultaneously. The monolithic approach (Temenos, FIS Profile, Finastra) has a legitimate track record and large install bases. The trade-off is that you are accepting the vendor's product roadmap and their prioritisation of engineering investment. If your bank's competitive differentiation is in SME lending, you do not want your core banking vendor's product team optimising for consumer current accounts.

Composable architecture means you can swap a component — change your deposit engine, migrate your lending ledger — without touching the rest of the stack. That flexibility is worth the integration cost.

2. Cloud-native

This is not a marketing term. Cloud-native architecture means the system was designed for cloud infrastructure from the ground up: built for horizontal scaling, managed database services, containerised deployment, and API-first interfaces. Lifted-and-shifted legacy systems that run in AWS do not get those benefits. The architectural decisions that make a system cloud-native are made at design time, not at deployment time.

The practical consequence: a cloud-native deposit ledger like Saascada will update its compliance modules faster, deploy product changes with less ceremony, and scale without the capacity planning cycles that legacy systems require. When you are building a challenger bank, that operational velocity matters.

3. Event-driven

Synchronous point-to-point integrations between banking systems are the architecture that produces the highest failure rate during migrations. System A calls System B synchronously. System B is slow. System A times out. Customer gets an error.

Event-driven architecture — asynchronous event streaming between services, typically via a message bus — decouples producers from consumers. When a loan drawdown event is published, every service that needs to react to it (update the deposit account, post to the GL, trigger fraud checks, update the CRM) subscribes to that event and reacts at its own pace. The services do not need to know about each other's availability.
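The decoupling can be sketched with an in-memory event bus. A production system would use Kafka or a similar message bus rather than a Python dict, and the topic name and subscribers here are illustrative:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-memory pub/sub standing in for a real message bus.
    Producers publish to a topic; they never call consumers directly."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)  # a real bus would deliver asynchronously


bus = EventBus()
reactions = []

# Each downstream service subscribes independently; the lending
# service that publishes the event knows nothing about them.
bus.subscribe("loan.drawdown", lambda e: reactions.append(("deposits", e["amount"])))
bus.subscribe("loan.drawdown", lambda e: reactions.append(("gl", e["amount"])))
bus.subscribe("loan.drawdown", lambda e: reactions.append(("fraud", e["amount"])))

bus.publish("loan.drawdown", {"loan_id": "LN-001", "amount": 50_000})
```

Adding a new consumer, say a CRM update, is one more `subscribe` call; the producer and the other consumers are untouched.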

Starling Bank's proprietary core banking platform — the Starling Engine, now licensed externally — is a well-documented example of event-driven architecture at scale. Monument Bank runs a similar model. These are the reference implementations for how event-driven core banking operates in production at millions of accounts.


Applying This to a Migration Decision

Most banks I advise are not greenfield. They are migrating from a legacy system that has accumulated twenty years of product logic, integrations, and data. The composable architecture model I have described is the target state — getting there from a legacy monolith requires a migration strategy that does not bet everything on a single cutover.

The pattern I recommend is the strangler fig: migrate product lines one at a time from the legacy core to a new architecture, running in parallel until each product line has stabilised in the new environment. During the transition period, you have both systems running simultaneously — which is operationally complex and requires careful design of the transition orchestration.

This is where the orchestration layer concept from Allica's architecture is useful even in a migration context. You do not need to migrate everything at once. You can build the orchestration layer first, connect it to your legacy core (as a transitional component), and migrate product lines one at a time. The orchestration layer grows to absorb the new product ledgers as they come online, and shrinks away from the legacy core as products migrate off it.
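One way to picture the transitional orchestration is a per-product routing table: every product line resolves to the legacy core until it is migrated, and a cutover is a routing update rather than a big-bang switch. The core and product names below are hypothetical:

```python
class MigrationRouter:
    """Strangler-fig routing: each product line resolves to whichever
    core currently owns it. Migrating a product is a table update,
    not a system-wide cutover. Names here are illustrative."""

    def __init__(self, legacy_core):
        self.legacy = legacy_core
        self.routes = {}  # product line -> new ledger

    def migrate(self, product, new_ledger):
        self.routes[product] = new_ledger

    def ledger_for(self, product):
        # Unmigrated products fall through to the legacy core.
        return self.routes.get(product, self.legacy)


router = MigrationRouter(legacy_core="cobol_core")
router.migrate("deposits", "saascada")  # deposits cut over first

deposits_target = router.ledger_for("deposits")      # new ledger
lending_target = router.ledger_for("sme_lending")    # still on legacy
```

As product lines migrate, the routing table fills up and the legacy core's traffic shrinks toward zero, which is the strangler fig pattern in miniature.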

The upside is dramatically lower cutover risk. The downside is a longer programme and more operational complexity during the transition. That is a trade-off worth making for any bank where customer-facing system availability is a regulatory and commercial requirement.


The Old Debate Is Over

Build vs buy was the right question in 2015, when the alternatives were either a bespoke COBOL system or a monolithic Temenos deployment. The technology landscape has moved. In 2026, the relevant question is whether your architecture is composable enough to evolve as your product strategy evolves — and whether it is built on a platform that was designed for cloud, not one that was moved to cloud.

The banks that are winning in the challenger and regional segment are not the ones that picked the right monolithic vendor. They are the ones that understood their product strategy first, designed the architecture around it, and picked components that fit.

Allica's stack is not the only way to do this. Starling Engine, Monument, and a growing number of UK challengers have all arrived at similar architecture patterns independently. That convergence is a signal — not noise.


Raj leads Core Banking Technology advisory at Aicura Consulting, specialising in composable architecture design, migration strategy, and technical due diligence for financial institutions undergoing core system transformation.