FINMA Guidance 05/2025 in Practice: Operational Resilience for the 2026 Supervisory Cycle
The Swiss Financial Market Supervisory Authority (FINMA) published Guidance 05/2025 on 10 November 2025, restating what it expects from operational resilience at supervised institutions and giving the market a hard date of 1 January 2026 to comply. As of today, every bank, securities firm, and financial market infrastructure under FINMA's supervision, regardless of size or category, is expected to demonstrate that it can withstand and recover from disruptions to the activities its clients and counterparties depend on. The picture the guidance paints of the Swiss market, even four months into the new regime, is heterogeneous, uneven, and in places markedly under-prepared.
Operational resilience for banks, securities firms, and financial market infrastructures
Restating the supervisory expectations in FINMA Circular 2023/1 against a 267-institution data survey, with concrete measures expected from 1 January 2026.
- Step 1 (31 December 2024) Survey snapshot: 267 institutions assessed across critical functions, tolerances, testing, and framework integration.
- Step 2 (10 November 2025) Guidance published: FINMA Guidance 05/2025, an interpretation of the existing supervisory text against the survey findings.
- Step 3 (1 January 2026) In force: concrete measures expected from every institution under FINMA supervision, regardless of size or category.
- Step 4 (2026) Horizontal Reviews: institution-specific supervisory reviews and Horizontal Reviews of Operational Resilience continue across the year.
The guidance does not introduce new rules. The rules already exist, principally in FINMA Circular 2023/1, "Operational Risks and Resilience – Banks" — the supervisory text that defines critical functions, disruption tolerances, and the testing regime — and in the underlying Banking Ordinance. What the guidance does is interpret those rules against what institutions actually built. It tells the market, with the survey numbers attached, where the gaps are and how to close them. We read the document as a practitioner text for the second line of defence: a clear restatement of supervisory expectations, plus a diagnostic of where the typical institution is not yet meeting them.
Overview
- Where the market actually stands
- Critical functions: top-down, not exhaustive
- Disruption tolerances start with the executive board
- Testing: severe but plausible, beyond cyber
- Framework integration: stop running resilience as a parallel programme
- Where to focus through the 2026 supervisory cycle
- Where we sit
Where the market actually stands
The data survey behind Guidance 05/2025 covered 267 banks, securities firms, financial groups, and financial market infrastructures, as of 31 December 2024. The figures are easier to absorb together than separately:
- Eighty-five per cent of institutions in supervisory categories 1 to 3 — the largest and most systemically relevant — had not yet conducted operational resilience testing at the date of the survey. A further ten per cent had testing planned, with most of those institutions planning annual cycles. Five per cent had neither tested nor included testing in their future plans.
- Only twelve to fifteen per cent of institutions in those same categories had genuinely integrated the components of operational resilience — business continuity management, ICT and cyber risk, third-party risk, crisis management, recovery planning — into a single coherent framework. The remainder were running them in parallel.
- Around sixty per cent of institutions defined their own resilience metrics rather than relying on a common reference, with wide variation in how those metrics were constructed and how they were used.
- The arithmetic mean number of identified critical functions was 3.5; the highest count in the dataset was 36. The disruption tolerances institutions defined for those functions ranged from one hour to over a year, with a median of 48 hours and a middle half between 24 and 72 hours.
These are not edge cases. They are the central tendencies of a market most Swiss institutions sit inside. The bar FINMA set for 1 January 2026 sits well above what the typical institution had built by the survey date, and the supervisory work scheduled across 2026 — institution-specific reviews and the Horizontal Reviews of Operational Resilience 2025 — will be the vehicle through which the gap closes in practice.
Critical functions: top-down, not exhaustive
A critical function, in the supervisory sense, is a service, activity, or operation whose disruption would cause material harm to clients, to the institution itself, or to the orderly functioning of the financial markets. The definition is deliberately tight. The failure mode FINMA flags is institutions that drift away from that tight definition, either fragmenting one critical function across many lines or sweeping in a broader inventory of processes (back office, IT operations) and resources (the core banking system) that support critical functions but are not critical functions themselves.
The numbers in the guidance show what that drift looks like at the extremes. The mean is 3.5 critical functions per institution. Some institutions report up to 36. A list of 36 lines is almost always an inventory of processes and resources rebadged as critical functions, and the next time that institution runs a tolerance assessment, every one of the 36 lines becomes harder to make meaningful, because the unit of analysis is wrong.
The corrective is a top-down view. The institution starts from its strategic mandate and the activities that meet the supervisory definition of criticality, then derives the underlying processes, resources, and dependencies from there. The inventory has to capture not only the functions themselves, but the supply chain behind each one — the systems, vendors, premises, people, and data the function depends on. As FINMA puts it in the guidance: a table without internal dependencies does not constitute an inventory of critical functions within the meaning of the supervisory text.
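To make the top-down shape concrete, here is a minimal sketch of what one inventory entry could look like, assuming a simple in-house data model in Python; the field names and the example payments function are our own illustration, not taken from the guidance or the circular.

```python
from dataclasses import dataclass, field


@dataclass
class Dependency:
    """A resource a critical function relies on: a system, vendor, site, team, or dataset."""
    kind: str   # e.g. "system", "vendor", "premises", "people", "data"
    name: str
    notes: str = ""


@dataclass
class CriticalFunction:
    """One entry in the inventory, derived top-down from the institution's mandate."""
    name: str
    harm_rationale: str  # why disruption harms clients, the institution, or the markets
    processes: list[str] = field(default_factory=list)
    dependencies: list[Dependency] = field(default_factory=list)


# Hypothetical example: a payments function and the supply chain behind it.
payments = CriticalFunction(
    name="Outgoing client payments",
    harm_rationale="Clients cannot meet obligations; failed settlements propagate to counterparties.",
    processes=["payment capture", "sanctions screening", "settlement"],
    dependencies=[
        Dependency("system", "core banking platform"),
        Dependency("vendor", "payments connectivity provider"),
        Dependency("people", "payments operations team"),
        Dependency("data", "standing settlement instructions"),
    ],
)
```

The point of the structure is that the dependencies travel with the function, so every tolerance assessment and every test sees the supply chain behind the entry rather than a bare table of names.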
Disruption tolerances start with the executive board
A disruption tolerance is the maximum acceptable level of disruption to a critical function. It is most often expressed as a duration of unavailability, but it can equally be expressed in financial loss, client loss, error rates, or thresholds on liquid assets. The supervisory provisions are explicit on a single point: the tolerance reflects the executive board's tolerance for shocks, not the institution's current ability to recover from them.
Working backwards from capability:

- Start from current recovery capability: look up the existing RTO/RPO numbers in the BCM register.
- Round and re-badge: re-label them as the disruption tolerance for the critical function.
- Tell the board what we already do: the tolerance becomes a description of capability, not a constraint on it.
- No gap, no programme: the institution is "in tolerance" by definition, until a real event tests the assumption.

Working forwards from appetite:

- Start from board appetite: how much harm to clients, the institution, and the markets is the executive board willing to accept?
- Express in usable dimensions: time, plus financial loss, client loss, error rate, whichever the board can actually reason about.
- Take an end-to-end view: include third parties, vendors, and the supply chain; a tolerance tighter than your supplier SLA is not a tolerance.
- Compare to recovery capability: if appetite is tighter than capability, document the gap and run a remediation programme.
In practice, the survey shows that many institutions arrived at their tolerances backwards. They started from the institution's existing recovery capability — the recovery time objectives (RTOs) and recovery point objectives (RPOs) that business continuity management (BCM) has long expressed — and declared the result as the tolerance. FINMA refers to this as reverse engineering, and treats it as misaligned with the supervisory text. The point of a tolerance is to express the level of harm the board is willing to accept; if that appetite is tighter than the institution's current ability to recover, the institution is out of tolerance, and closing the gap becomes the resilience programme's mandate rather than something the BCM register reasons away.
Two practical signals emerge from the data. Tolerances of less than 24 hours often indicate that what is being measured is an RTO for a process, not a disruption tolerance for a critical function — the resilience inventory has been confused with the BCM register. Tolerances longer than 120 hours, and especially longer than 240 hours (10 days), invite the opposite question: if clients, the institution, and the financial markets can do without this function for that long, is it actually a critical function? The supervisory circular itself requires functions with a tolerance over 240 hours to be re-checked for criticality.
Tolerances also need to be expressed in dimensions the executive board can actually use. Time alone is rarely sufficient. The guidance notes that effects on the client relationship, on the continuation of the institution, and on the proper functioning of the financial markets should all factor in. End-to-end views of the supply chain matter especially when tolerances are tight: a four-hour tolerance on a function that depends on an outsourced provider with a twenty-four-hour service-level agreement is not a tolerance — it is a breach waiting to be reported.
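The checks described above lend themselves to a simple second-line review routine. A minimal sketch, assuming tolerances, recovery capability, and supplier SLAs are all expressed in hours; the 24-, 120-, and 240-hour signals come from the text above, while the function and field names are our own illustration.

```python
from dataclasses import dataclass


@dataclass
class ToleranceAssessment:
    function_name: str
    tolerance_hours: float                             # board appetite, not current capability
    recovery_capability_hours: float                   # what BCM/ICT can actually deliver today
    tightest_supplier_sla_hours: float | None = None   # slowest link in the supply chain, if outsourced


def review(a: ToleranceAssessment) -> list[str]:
    """Return the questions a reviewer should raise for one critical function."""
    findings = []
    t = a.tolerance_hours

    # Signals from the survey data: very short tolerances usually mean an RTO was
    # re-badged; very long ones call the criticality of the function into question.
    if t < 24:
        findings.append("Tolerance under 24h: is this an RTO for a process rather than a tolerance?")
    if t > 240:
        findings.append("Tolerance over 240h: re-check whether this is a critical function at all.")
    elif t > 120:
        findings.append("Tolerance over 120h: confirm clients and markets can really wait this long.")

    # Appetite tighter than capability is not an error; it defines the remediation programme.
    if a.recovery_capability_hours > t:
        findings.append("Out of tolerance: document the gap and the remediation plan.")

    # A tolerance tighter than a supplier SLA is a breach waiting to be reported.
    if a.tightest_supplier_sla_hours is not None and a.tightest_supplier_sla_hours > t:
        findings.append("Tolerance is tighter than a supplier SLA in the dependency chain.")

    return findings
```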
Testing: severe but plausible, beyond cyber
The supervisory provisions require institutions in categories 1 to 3 to test the resilience of their critical functions against severe but plausible scenarios. Both words are doing real work. A scenario is severe if it would push the institution close to or past its tolerance. It is plausible if it can be argued from the institution's specific threat landscape — its geography, its technology, its supply chain, its client base.
Where institutions are testing, they are heavily weighted toward cyber. Eighty-four per cent of respondents listed a successful cyber attack on the institution itself as a tested or to-be-tested scenario; successful supply-chain cyber attacks (54%) and third-party outsourcing disruptions (60%) are also common. Non-cyber threats — natural disasters, pandemics, geopolitical disruption to market access, internal fraud or theft — appear less consistently. FINMA explicitly flags that the threat-and-vulnerability analysis for non-cyber scenarios is, in places, not yet at the required level of maturity.
The other point worth surfacing is that testing means more than a tabletop walkthrough of one scenario per year. It means an end-to-end view that includes third parties and supply chains. It means evidence that the executive board can independently assess the framework that produces the test outcomes. And, in time, it will mean cross-institution and sector-wide exercises: FINMA notes in its conclusion that scenario analyses will deepen and that the conditions for sector-wide testing are being prepared.
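As a rough illustration of how the two words can be operationalised, the sketch below flags a scenario as severe when its estimated disruption approaches or exceeds the function's tolerance, and flags a test plan that never leaves the cyber category; the 80 per cent severity threshold and the category labels are assumptions for illustration, not figures from the guidance.

```python
from dataclasses import dataclass


@dataclass
class Scenario:
    name: str
    category: str                       # e.g. "cyber", "third-party", "natural event", "geopolitical", "internal fraud"
    estimated_disruption_hours: float   # from the threat-and-vulnerability analysis
    plausibility_rationale: str         # argued from the institution's own threat landscape


def is_severe(scenario: Scenario, tolerance_hours: float, threshold: float = 0.8) -> bool:
    """Severe: the scenario would push the function close to or past its disruption tolerance."""
    return scenario.estimated_disruption_hours >= threshold * tolerance_hours


NON_CYBER = {"third-party", "natural event", "geopolitical", "internal fraud", "pandemic"}


def coverage_gaps(scenarios: list[Scenario]) -> list[str]:
    """Flag a test plan whose scenario coverage never leaves the cyber category."""
    categories = {s.category for s in scenarios}
    gaps = []
    if categories == {"cyber"}:
        gaps.append("All tested scenarios are cyber attacks on the institution itself.")
    missing = NON_CYBER - categories
    if missing:
        gaps.append("No scenario yet for: " + ", ".join(sorted(missing)) + ".")
    return gaps
```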
Framework integration: stop running resilience as a parallel programme
The third gap the survey flags is structural. Operational resilience is composed of activities — business continuity, crisis management, third-party risk, information security, ICT continuity, recovery planning — that already exist at most institutions, often with their own owners, their own committees, and their own annual cycles. The supervisory expectation is that these are coordinated as a coherent framework rather than run as parallel programmes that converge once a year in a board pack.
According to the survey, that coordination is in place at only twelve to fifteen per cent of category 1 to 3 institutions. The rest are running parallel work. The cost is hard to see in good times and obvious in bad ones: when a real disruption hits, the institution discovers which assumptions in the BCM playbook contradict the assumptions in the third-party register, and which controls in information security were never wired into the resilience tolerance.
Coordinating the framework does not require a new owner. It requires a clear definition of how the components interlock — which inventories are authoritative, which metrics roll up into which board reports, which incidents trigger which playbooks, and how testing in one component informs the next. It also requires that the framework itself be observable: the executive board needs an independent view of the institution's resilience that does not rely on the operational owners marking their own homework. About sixty per cent of institutions are currently using metrics they developed themselves, with substantial variation in how those metrics are constructed. That variation is part of why FINMA cannot yet make horizontal comparisons across the industry; closing it is a stated supervisory priority.
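One way to make that interlock observable is to write it down as data rather than as prose in a board pack. A minimal sketch, with hypothetical component, register, and playbook names:

```python
# Hypothetical wiring of the framework components: which inventory is authoritative,
# which report the metrics roll up into, and which playbook an incident class triggers.
FRAMEWORK = {
    "critical_functions": {
        "authoritative_inventory": "operational resilience register",
        "rolls_up_to": "quarterly board resilience report",
    },
    "bcm": {
        "authoritative_inventory": "BCM register (RTO/RPO per process)",
        "rolls_up_to": "quarterly board resilience report",
    },
    "third_party_risk": {
        "authoritative_inventory": "vendor and outsourcing register",
        "rolls_up_to": "quarterly board resilience report",
    },
    "incident_triggers": {
        "tolerance breach on a critical function": "crisis management playbook",
        "material vendor outage": "third-party contingency playbook",
    },
}
```

However it is recorded, the test is the same: someone outside the operational owners should be able to read the wiring and say which document wins when two registers disagree.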
Where to focus through the 2026 supervisory cycle
The 1 January 2026 deadline has passed. The realistic question now is what an institution should be working on through the rest of the supervisory cycle, with FINMA's Horizontal Reviews of Operational Resilience 2025 still partly underway and institution-specific reviews intensifying.
First, fix the inventory of critical functions. Top-down from strategy, tightly scoped, with the underlying processes, resources, vendors, premises, and data dependencies attached to each entry. For most institutions, the right number is closer to 3.5 than to 36, and every entry should be defensible against the supervisory definition.
Second, run the tolerance conversation properly with the executive board. Not as a sign-off on what the BCM register already says, but as a discussion of what level of harm to clients, to the institution, and to the markets the board is genuinely willing to accept. Express the answer in time and, where they apply, in non-time dimensions. Where the answer is tighter than current recovery capability, document the gap and the remediation plan.
Third, design tests against severe but plausible scenarios that are not exclusively cyber. Build out the threat-and-vulnerability analysis for natural events, geopolitical disruption, internal fraud, and pandemic-class events. Where third-party dependencies are material, design the test to include them; an end-to-end exercise that stops at the perimeter is missing the part FINMA is asking about.
Fourth, integrate the framework. Make the existing components — BCM, ICT continuity, third-party risk, information security, crisis management, recovery planning — talk to each other through a single set of authoritative inventories and a single reporting line into the board. The framework should be assessable independently of the people who run its constituent parts.
Fifth, prepare for the supervisory conversation. FINMA has stated explicitly that institution-specific supervisory activity in this area will continue and intensify, including deeper scenario analyses and the groundwork for sector-wide testing. The institution should know, before the conversation, where its weaknesses are, what it has done about them, and what is still on the roadmap.
Where we sit
We build Bollwerk Frontier for the second line of defence — risk, compliance, and financial-crime teams whose job it is to surface signals their institutions cannot afford to miss. Operational resilience is adjacent to that work in obvious ways: the same governing bodies, the same tolerance conversations, the same supply-chain dependencies, the same supervisory counterparties. We are not going to claim that a surveillance platform implements a resilience framework. It does not.
What we do think matters, and what we are paying attention to, is the part of the resilience workload that lives in the second line — the framework integration, the tolerance reasoning, and the executive board's independent view of how the components fit together. That work is hard precisely because it spans multiple operational owners, several years of accumulated controls, and the regulatory translation between supervisory text and institutional practice. It is also exactly the kind of work where an analyst with the right tooling can move noticeably faster than one without.
The Swiss Risk Association is hosting an SRA chapter event on Guidance 05/2025 in Zurich on 11 May 2026, with Dusko Ignjic-Gawot from FINMA, Dominik Henn from Julius Baer, Martin Hofer from PostFinance, and Emanuel Hierl from zeb-Switzerland. The event is sold out, but the SRA generally makes recordings of chapter events available to members. If your institution is working through any of this and would find it useful to compare notes, write to hello@bollwerk.ai.