Cybersecurity in the Age of AI: Supply Chains, Phishing, Impersonation, and the Infrastructure Underneath
The cybersecurity perimeter has shifted. Three years into mainstream large-language-model (LLM) deployment, four parts of the threat landscape are under measurable new pressure: software supply chains, phishing, impersonation, and the cyber infrastructure underneath the financial system. The same technology that is changing how attackers work is also changing how defenders work, but the two adoption curves are not symmetric — attackers tend to integrate the cheap, productivity-style applications first, and defenders carry more friction from regulation, change management, and integration cost. The first five months of 2026 have given us fresh evidence on each of the four fronts, alongside a clearer picture of how Swiss organisations are responding. This post sets out what we are seeing, with named incidents where they help, and where AI cuts the other way for security and compliance teams that build it into their stack.
- Supply chains: LLM-aided maintainer-impersonation and account-takeover campaigns lower the labour cost of compromising critical open-source dependencies, and compromises now propagate within hours.
- Phishing: targeted pretexts in the victim's language and corporate voice, paired with second-channel reinforcement — grammar errors and odd urgency are no longer reliable tells.
- Impersonation: real-time synthetic video and cloned voice defeat controls designed against text-based BEC and voice-only vishing.
- Infrastructure: concentrated dependencies — payment networks, EDR vendors, telecom carriers — produce cross-sector blast radius when they fail or are persistently compromised.
We write this from the second line of defence — risk, compliance, and financial-crime teams. Most of these pressures land first as a security problem, but they reach the second line as fraud cases, sanctions exposure, third-party risk findings, and operational resilience failures. The MROS 2025 Annual Report named manipulated business communication — the umbrella under which CEO fraud and Business Email Compromise (BEC) sit — as the first of its three trend themes for Switzerland. FINMA's Guidance 05/2025 on operational resilience requires institutions to test severe-but-plausible scenarios that include cyber outages. The threats are converging on a single workload. The teams that handle them have to converge in step.
Overview
- The shape of the shift
- Supply chains: from rare events to baseline risk
- Phishing: from generic spam to bespoke deception
- Impersonation and synthetic identity
- Infrastructure under merciless test
- The Swiss picture
- Where AI helps defenders
- What the second line should be building
The shape of the shift
What changes when attackers have access to capable language and voice models is not primarily the types of attacks they run. Phishing is decades old. Supply-chain compromises predate the internet. Impersonation is the oldest fraud category there is. The shift is in unit economics and reach. Work that previously required attacker time — researching a target, writing a convincing pretext in the victim's language, mimicking the voice of a senior, automating the layered laundering of stolen funds — now runs at near-marginal cost. An attacker who could prepare three high-quality lures a week can now prepare three hundred, in any language, at any hour.
The defender's economics have also improved, but the gradient is shallower. Security operations teams adopt LLM tooling against budget cycles, regulatory constraints, model-risk approval, and the integration cost of stitching new tools into legacy SIEM, EDR, and ticketing systems. A multinational bank that decides today to put LLMs into its SOC analyst workflow is, optimistically, ninety days from a controlled pilot. An attacker is fifteen minutes from a working prompt. This asymmetry is not permanent — defenders have data, scale, and architectural advantages the attackers do not — but it shapes the next two to three years.
Supply chains: from rare events to baseline risk
The first five months of 2026 have produced more high-impact open-source supply-chain incidents than any equivalent period on record. On 30–31 March, attackers used stolen credentials to take over the npm account of Jason Saayman, one of the maintainers of axios — a JavaScript HTTP client with more than 100 million weekly downloads. They published two backdoored versions tagged as latest and legacy within a 39-minute window, pulling in a freshly seeded helper package, plain-crypto-js, whose post-install hook silently downloaded a remote-access trojan from a command-and-control server. Trend Micro and Elastic Security Labs detected the compromise within hours, but the operation had been pre-staged across roughly 18 hours, with the decoy dependency uploaded ahead of the malicious release to avoid the "brand-new package" alarms that scanners use as a first-pass signal.
- May 2023 · MOVEit Transfer: zero-day in a widely deployed managed file-transfer product (CVE-2023-34362). Blast radius · mass exploitation against government, financial-services, and education customers.
- Mar 2024 · XZ Utils: backdoor in xz/liblzma 5.6.0–5.6.1 inserted by a long-running maintainer-impersonation persona. Blast radius · caught by chance — would have given unauthenticated SSH access on most Linux hosts.
- Jun 2024 · Polyfill.io: acquired JavaScript CDN repurposed to inject malicious code into sites embedding the script. Blast radius · estimated >100,000 sites affected before major CDNs blocked the domain.
- Late 2025 · Shai-Hulud worm (npm): self-spreading worm reached @ctrl/tinycolor and hundreds of other packages; initial vector was a phishing campaign spoofing npm MFA enrolment. Blast radius · credentials exfiltrated via public GitHub repos under victim accounts; LLM-assisted authoring per Unit 42.
- Jan–Apr 2026 · BlueNoroff / UNC1069 multi-registry: ~1,700 malicious packages across npm, PyPI, Go, Rust, and PHP attributed to North Korea by Socket.dev, Checkmarx, and Phylum. Blast radius · largest single multi-registry campaign reported to date; broad developer-machine exposure.
- Mar 2026 · Axios maintainer takeover: stolen npm credentials used to publish backdoored axios@1.14.1 and 0.30.4; staged via plain-crypto-js helper with post-install RAT. Blast radius · axios has 100M+ weekly downloads; latest and legacy dist-tags both poisoned within 39 minutes.
- Apr–May 2026 · Mini Shai-Hulud: pre-install hooks in SAP @cap-js, PyTorch Lightning, intercom-client, TanStack, Mistral AI, UiPath, and OpenSearch JS SDK. Blast radius · ~1,800 victims and ~10M monthly downloads of affected packages; payload triggers silently in CI/CD.
The Axios takeover was bracketed by two related campaigns. Through the second half of 2025 and into early 2026, a self-spreading npm worm dubbed Shai-Hulud compromised hundreds of packages including the widely used @ctrl/tinycolor library, harvested credentials from the developer machines on which they were installed, and exfiltrated them by creating public GitHub repositories under the victim's own account — a technique that evades data-loss-prevention tools watching for unfamiliar outbound destinations. Unit 42 assessed with moderate confidence that the malware was at least partly written with LLM assistance, based on the unusually conversational comments and emoji left in the bash payload. Initial access came from a credential-harvesting phishing campaign that spoofed npm and asked maintainers to "update" their multi-factor authentication. That is a supply-chain attack whose first stage was a well-crafted phish — and a clean example of how the categories in this post overlap in practice.
Then on 29 April and again on 11 May 2026, a Mini Shai-Hulud variant hit a different set of trust anchors: SAP's official @cap-js Cloud Application Programming Model packages on npm, PyTorch Lightning on PyPI, the intercom-client npm package, TanStack, Mistral AI's TypeScript client, UiPath, and OpenSearch's JavaScript SDK. The campaign exploited malicious pre-install hooks rather than post-install — meaning the payload triggered the instant a developer ran npm install in a CI/CD pipeline, before any human inspection of the package contents. SecurityWeek puts the immediate victim count at around 1,800 across roughly ten million monthly downloads of the affected packages. Separately, Socket.dev, Checkmarx, and Phylum jointly attributed a ~1,700-package campaign across npm, PyPI, Go, Rust, and PHP between January and April 2026 to North Korea's BlueNoroff / UNC1069 cluster — the largest single multi-registry campaign reported to date.
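The pre-install vector is checkable before it fires. As a rough sketch (not a substitute for a registry-side scanner), the following Python script walks an installed node_modules tree and flags every package that declares an npm install-time lifecycle script, the hook both Shai-Hulud variants abused. Paths and policy are illustrative.

```python
import json
from pathlib import Path

# Lifecycle scripts that run automatically during `npm install`.
# `preinstall` fires before any human or scanner sees the unpacked contents.
INSTALL_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def find_install_hooks(node_modules: Path) -> list[dict]:
    """Flag every installed package that declares an install-time script."""
    findings = []
    for manifest in node_modules.rglob("package.json"):
        try:
            pkg = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, UnicodeDecodeError):
            continue  # vendored fixtures often contain deliberately broken JSON
        hooks = INSTALL_HOOKS & set(pkg.get("scripts", {}))
        if hooks:
            findings.append({
                "package": pkg.get("name", manifest.parent.name),
                "version": pkg.get("version", "?"),
                "hooks": sorted(hooks),
                "path": str(manifest.parent),
            })
    return findings
```

A pipeline can install with scripts disabled (`npm ci --ignore-scripts`), run a check like this, and only re-enable scripts for an allow-list of packages that legitimately need build hooks.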
The 2024 attempt on XZ Utils is still the clearest case study of the slow version of this threat. A pseudonymous contributor calling themselves Jia Tan spent more than two years building credibility as a maintainer of the xz/liblzma compression library, eventually inserting a backdoor into versions 5.6.0 and 5.6.1 that would have given remote attackers unauthenticated access to any sshd process loading the compromised library. The patch was caught by a Postgres performance engineer, Andres Freund, who noticed an unexplained half-second latency in a benchmark and traced it back. With LLMs, the cost of synthesising a credible technical persona has fallen far enough that the Jia-Tan strategy is repeatable — and the 2026 incident record shows that the fast version, where a single account takeover or pipeline compromise reaches millions of downstream installs in hours, has become routine.
For second-line teams, the implication is a counterparty-risk problem in a new wrapper. Third-party cyber-risk programmes that audit vendors against SIG, SOC 2, or ISO 27001 reports give little signal on whether a vendor's source dependencies are well-curated. The signals worth integrating are at the artefact level: signed builds, software bills of materials (SBOMs), reproducible-build verification, lock-file enforcement that blocks unreviewed transitive updates, and continuous monitoring against published vulnerability and compromise indicators. Treating SBOMs as a compliance artefact rather than a live monitoring input misses the point of the exercise — and in a year where the time from maintainer-account compromise to widespread propagation has been measured in hours, the live monitoring is where the avoided losses sit.
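Lock-file enforcement can likewise be made mechanical. A minimal sketch, assuming the npm lockfile v2/v3 layout (a `packages` map with `version` and `resolved` fields); the registry allow-list and the pin map are illustrative, and a production check would also verify integrity hashes:

```python
import json
from urllib.parse import urlparse

ALLOWED_REGISTRIES = {"registry.npmjs.org"}  # illustrative allow-list

def lockfile_drift(lock_json: str, pinned: dict[str, str]) -> list[str]:
    """Flag lockfile entries that resolve outside the allowed registry,
    or whose version differs from the reviewed pin for that package."""
    lock = json.loads(lock_json)
    issues = []
    for path, entry in lock.get("packages", {}).items():
        if not path.startswith("node_modules/"):
            continue  # skip the root project entry
        name = path[len("node_modules/"):]
        resolved = entry.get("resolved", "")
        if resolved and urlparse(resolved).hostname not in ALLOWED_REGISTRIES:
            issues.append(f"{name}: resolved from {resolved}")
        if name in pinned and entry.get("version") != pinned[name]:
            issues.append(f"{name}: {entry.get('version')} != pinned {pinned[name]}")
    return issues
```

Run in CI on every lockfile change, this turns an unreviewed transitive update or an off-registry download into a build failure rather than a silent install.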
Phishing: from generic spam to bespoke deception
Phishing volumes have grown every year of the last decade. What has changed since 2023 is composition. The generic, grammatically mangled "Your package has been delayed" SMS still arrives, but the steep growth is in targeted, well-written pretexts that are very hard to distinguish from legitimate correspondence. Switzerland's Federal Office for Cybersecurity (BACS / NCSC) recorded 6,299 phishing reports in the second half of 2025, a 17 per cent increase year-on-year, and 970 reports of CEO fraud over the full year — up from 719 in 2024. BEC reports rose from 49 to 73 cases in the second half alone, a 49 per cent jump. One case reported in that window cost a single Swiss company CHF 1.5 million.
The mechanics on the attacker side are mundane. A scraper pulls a target firm's executive team and reporting structure from LinkedIn. A generator drafts a contextually appropriate email in the relevant language — German with regional idiom, French with the right Madame/Monsieur tone, English with the corporate voice of the target firm — referencing recent press, board changes, or supplier renegotiations. A second pass tunes the email against open-source examples of the target firm's outgoing correspondence. The end product reads like an internal memo. It costs the attacker nothing meaningful per send. In early 2026, Computer Weekly reported that "dual-channel" BEC — where the email pretext is reinforced by a follow-up phone call or WhatsApp message from the same purported sender — has become the dominant pattern, supplanting the single-channel email lure that defined the BEC category through 2023.
The infrastructure layer has also evolved. In summer 2025, Swiss carriers identified the first domestic use of an SMS blaster — a portable device that masquerades as a mobile base station and broadcasts fraudulent text messages to every phone within roughly a kilometre, bypassing the operator-side filters that catch traditional SMS spam. NCSC has flagged it as a particular concern for two-stage attacks, in which the SMS asks the victim to confirm an innocuous-looking detail on a phishing page and a follow-up call from a "fraud team" then exploits the established context to extract e-banking credentials. The Shai-Hulud worm's initial vector — a phishing campaign that spoofed npm and asked maintainers to update their MFA — is the developer-targeted counterpart. The phishing problem has stopped sitting at the inbox boundary alone.
Two operational consequences follow. The first is that the long-standing user-training assumption — "look for grammar errors, urgency, suspicious sender domains" — has lost most of its discriminative power. Grammar errors are gone. Urgency is contextually justified. Sender domains can be carefully spoofed or routed through a compromised legitimate mailbox in the same supply chain. Awareness training is still useful for raising the floor, but the ceiling on what a careful employee can distinguish has dropped.
The second is that channel-level controls — DMARC, BIMI, domain reputation, anomaly detection on email metadata — matter more than they did, because the human-level controls are degraded. Multi-factor authentication is necessary but not sufficient against pretexts that ask for a wire transfer, a contract addendum, or a change of vendor payment instructions rather than a credential. The strongest controls now sit at the transaction layer, not the message layer: out-of-band verification of new payment instructions, four-eyes approval on supplier changes, and segregation between the inbox that receives the request and the system that executes the payment.
Impersonation and synthetic identity
The cases that defined the category through 2024 — the Arup HK$200m (US$25m) deepfake Zoom call against the firm's Hong Kong finance team, the cluster of voice-clone family-emergency scams that produced ransom-style payments in North America — are no longer the cutting edge. In March 2025, a finance director at a Singapore multinational authorised US$499,000 against what looked like a routine Zoom call with the CFO and several colleagues. The attackers had updated their playbook against Arup-style awareness training: rather than push for an email transfer that might trigger suspicion, they proactively suggested the video call, betting that the apparent willingness to verify face-to-face would create the very confidence the call was designed to manufacture. The Singapore Police Force described the case as "one of the most convincing examples of AI-powered impersonation seen to date."
In February 2026, the AI Incident Database published a study — covered by The Guardian — concluding that deepfake fraud has gone "industrial." More than a dozen recent campaigns documented in the database use commodity tooling against a varied roster of targets: deepfake videos of Robert Cook, the premier of Western Australia, fronting an investment scheme; synthetic doctors promoting skin creams; a deepfake of the president of Cyprus; cloned voices of Swedish journalists. UK consumer-fraud body CIFAS reports that British consumers lost £9.4 billion to fraud in the nine months to November 2025, a substantial and growing share of which is attributed to AI-augmented social engineering.
The technical bar to produce a passable real-time deepfake has dropped to the level of commodity SaaS tools. The bar to produce a convincing one — registering with the cadence, mannerisms, and contextual specificity of the impersonated person — is higher, but well within the reach of motivated attackers willing to invest a week of reconnaissance. The implication for transaction monitoring is awkward: synthetic voice and video defeat the controls that were designed against text-based BEC, which had themselves been hardened against voice-only vishing. Layered controls have to be redesigned for an environment in which any single modality can be faked.
MROS reads this through the BEC lens. The 2025 Annual Report's chapter 4.1 indicators — last-minute change of payment instructions, pressure for confidentiality, new beneficiary in a different jurisdiction, lookalike domains — translate directly into deepfake-augmented attacks; the only difference is that the trigger event may now be a video call rather than an email. Second-line teams looking at fraud SAR pipelines should expect the deepfake-augmented share of manipulated business communication to grow through 2026 and adjust their typology coverage accordingly. The control redesign sits in two places: a stronger out-of-band confirmation regime for any payment-instruction change that is touched by a video or voice channel, and a clear playbook for staff who suspect a synthetic interlocutor — including a defined escalation path that does not depend on the channel under suspicion.
Infrastructure under merciless test
The class of cyber incident that turns into a board-level story is no longer primarily about data exfiltration. It is about availability and persistence. The 2024 anchors — Change Healthcare's February ransomware that disrupted U.S. pharmacy and insurance payment processing for weeks, the CDK Global ransomware that took thousands of car dealerships offline in June, the CrowdStrike kernel-driver update in July that grounded flights and shut down brokerage front ends for the better part of a day, the Salt Typhoon intrusion that sat undetected inside U.S. telecom lawful-intercept infrastructure — are still the canonical examples. What the first quarter of 2026 has shown is that the same pattern keeps producing variants.
In February, Singapore's Cyber Security Agency disclosed that the China-linked group UNC3886 had breached all four of the country's major telecommunications providers in a months-long espionage campaign, using zero-day exploits and rootkits for persistent access. The Singapore government had spent eleven months on a counter-operation, dubbed CYBER GUARDIAN — its largest ever — to evict the attackers and harden the carriers. That is Salt Typhoon's playbook against a smaller national footprint, and a reminder that the concentration risk in telecoms is structural rather than incidental.
In March, several incidents landed in close succession. The European Commission disclosed a cyberattack on 24 March that targeted the cloud infrastructure hosting its Europa platform, with early findings suggesting data exfiltration before the Commission contained the incident. Medical-device maker Stryker disclosed a 12 March attack by Handala, an Iranian-linked group, that triggered simultaneous factor resets on more than 200,000 corporate devices across 79 countries — a configuration-level outage rather than classical encryption ransomware. Canadian carrier Telus disclosed an intrusion that the ShinyHunters group claimed had exfiltrated 700 terabytes of data, including personally identifiable information, call records, background checks, and source code. Romania's national oil pipeline operator Conpet confirmed a Qilin ransomware attack in February, with around a terabyte of internal data stolen. And in April, Microsoft documented a Forest Blizzard campaign in which the Russian-military-linked group is compromising small-office and home-office routers, modifying DNS settings to pivot upstream into enterprise networks via less-monitored edge devices.
These incidents do not share threat actors or techniques. What they share is blast radius. The systems that financial institutions depend on — payment networks, identity providers, telecommunications carriers, cloud control planes, endpoint-protection vendors — are concentrated enough that a single compromise or operational failure can cascade across the financial sector and adjacent industries. AI sits inside this picture in two ways. On the attacker side, it accelerates target selection, multilingual social engineering, and post-exploitation movement; on the defender side, it is itself becoming a concentrated dependency through model providers, vector databases, and inference platforms whose own concentration risk is still poorly mapped.
FINMA's Guidance 05/2025 on operational resilience is the relevant Swiss frame here. The expectation is that institutions identify critical functions, define quantitative disruption tolerances against severe-but-plausible scenarios, and test against them. The data the regulator published in November 2025 — drawn from a 267-institution survey — showed that the market is unevenly prepared, with the largest gaps in the severity of the scenarios institutions consider. A 24-hour SWIFT outage is a useful baseline; a multi-week ransomware-driven outage of a core software vendor, a simultaneous mass factor-reset on managed corporate endpoints, or a national-carrier-level telecom intrusion are the harder scenarios, and the ones closer to what the recent incident record suggests. The 2026 supervisory cycle is the window in which institutions should be testing against the harder scenarios, not the easier ones.
The Swiss picture
The Swiss data tells a consistent story. The NCSC's annual report for 2025, published in February 2026, recorded 64,733 voluntary cyber-incident reports — about 2,000 more than 2024 — alongside 222 reports filed under the new mandatory reporting obligation for critical-infrastructure operators that took effect on 1 April 2025. Reports of malware-infected devices in Switzerland doubled to 2.35 million. Ethical-hacker vulnerability disclosures rose 41 per cent. Ransomware reports rose 59 per cent year-on-year in the second half, with Akira and LockBit the dominant strains. The growth is broad, not narrow.
The Mobiliar / digitalswitzerland / FHNW KMU Cybersicherheit 2025 survey of 515 Swiss SMEs and 336 IT service providers tells the more uncomfortable half of the same story. Only 42 per cent of SMEs feel well protected against cyber-attacks, down from 55 per cent a year earlier. Only 31 per cent run regular awareness training. Only 30 per cent have an incident-response plan. Among the SMEs that have already been attacked, 73 per cent suffered direct financial damage and 27 per cent lost customer data. PwC's 2026 Global Digital Trust Insights Switzerland edition reaches a similar conclusion from the large-enterprise end of the market: 52 per cent of Swiss firms are increasing cyber-risk investment in response to geopolitical volatility, but only one in three has implemented data controls across the entire data lifecycle, only 11 per cent have implemented quantum-readiness measures, and one in five reports no plans at all for responsible-AI practices — compared with only one in twenty globally. Urs Küderli, who leads PwC Switzerland's cybersecurity practice, frames the gap directly: "Only half of Swiss firms are stepping up cyber spend — far too few, given that criminals never stop investing."
The recent Swiss incident record fills in the texture. On 8 May 2026, the Akira ransomware group claimed Réseau Radiologique Romand, a medical-imaging network in French-speaking Switzerland, threatening to publish 48 GB of staff identity documents, patient data, payment details, and NDAs. Other Swiss ransomware victims in the past six months span the country's industrial base: Kühne + Nagel, Habib Bank AG Zurich, the precision-engineering firm MicroPrecision, and a long tail of construction, accounting, and wine-trade SMEs against whom the same groups — Qilin, Akira, RansomHouse, Dragonforce — recycle the same playbook. The common pattern is not novel exploits. It is unpatched edge devices, remote-access without two-factor authentication, and weak segregation between IT and OT — the same controls the NCSC has been highlighting since 2020.
For second-line teams in Switzerland the practical reading is straightforward: the volume and severity of incidents are rising on a base that is, by the regulators' own surveys, still under-invested. The compliance question is no longer whether cyber risk belongs in the same workload as financial-crime risk. It is how fast the typologies, the controls, and the reporting can converge.
Where AI helps defenders
The same technology cuts both ways. The applications that move the needle for defenders fall into three categories that we see paying off in operational use, and a fourth that is mostly hype.
The first is analyst leverage. LLMs turn a senior SOC analyst into a small team. They draft incident timelines, triage and de-duplicate alerts, write indicator-of-compromise (IOC) summaries against unstructured threat feeds, and produce first-pass remediation plans that the analyst can review and adjust rather than write from scratch. The same pattern applies to second-line investigations: a financial-crime investigator who used to spend three hours stitching together a customer's transaction history, KYC documents, adverse-media coverage, and counterparty data can have a structured first-draft case file ready in minutes.
- Analyst leverage: LLMs draft incident timelines, triage alerts, and produce first-pass remediation plans for human review. Highest immediate ROI — the senior analyst becomes a small team.
- Anomaly detection: surfacing off-tone messages, mismatched tickets, and pretext-family phrasings that classical detection misses. Layered with structured log analytics, not a replacement for it.
- Adversary simulation: internal red teams use LLMs to draft, vary, and operationalise phishing and social-engineering campaigns. The quality bar on blue-team detection and awareness training rises in step.
- Autonomous defence: fully agentic threat hunting and response with the model in the decision loop. Hallucination on edge cases, fragile under adversarial input, hard to evaluate when the right action is rare.
The second is anomaly detection on unstructured signals. Classical detection works on structured logs, network captures, and authentication events. The hard cases live in the unstructured tail: a Slack message that reads slightly off, an HR ticket that should not have come from this employee, a procurement request whose phrasing matches a known phishing-pretext family, a maintainer email asking developers to refresh their MFA. LLMs are useful for surfacing these signals, particularly when paired with classical detection in a layered architecture.
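One way to wire that layering, sketched in Python with a stubbed model call (the llm_scorer callable stands in for whatever provider client you actually use, and the thresholds are illustrative):

```python
from typing import Callable

def layered_score(message: dict,
                  rules: list[Callable[[dict], float]],
                  llm_scorer: Callable[[str], float]) -> float:
    """Layered scoring: deterministic rules first, LLM second.
    The model only sees messages the cheap rules consider borderline,
    which bounds cost and keeps the rule layer auditable."""
    rule_score = max((r(message) for r in rules), default=0.0)
    if rule_score >= 0.9:   # clear rule hit; no model call needed
        return rule_score
    if rule_score < 0.3:    # clearly benign by rules; skip the model
        return rule_score
    # Borderline band: ask the model for a second opinion and take the max,
    # so the LLM can only escalate, never suppress, a rule-based signal.
    return max(rule_score, llm_scorer(message["text"]))
```

The escalate-only design choice matters under adversarial input: a prompt-injected model output can waste an analyst's time, but it cannot silence a deterministic detection.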
The third is adversary simulation. Internal red teams now use LLMs to draft, vary, and operationalise phishing campaigns, social-engineering scripts, and post-exploitation chains against their own organisations. Done well, this raises the quality bar on both blue-team detection and security-awareness training. Done badly, it produces realistic attack content with no controlled deployment plan, which is a different problem.
The fourth category — fully autonomous AI SOC, autonomous threat hunting, agentic defence — is where we are most sceptical. The challenges are familiar to anyone who has tried to put an LLM into a high-stakes production loop without humans in the path: hallucination on edge cases, fragility under adversarial input, evaluation difficulty when the right action is rare, and the absence of a feedback signal that does not itself depend on the model. Constrained agency — LLMs at specific decision points inside a deterministic structure, with explicit verification and human review — works. Open-ended agency does not work yet, and the gap is widest in security, where the cost of being subtly wrong is high.
What the second line should be building
Five practical lines of work follow.
First, integrate cyber and AML telemetry. Manipulated business communication, deepfake-augmented impersonation, and ransomware-driven extortion all produce fraud SAR candidates. The transaction-monitoring team and the security-operations team should share a working view of incidents that have a financial-crime tail, with a clear handover protocol when the cyber side closes its incident and the SAR pipeline takes over.
Second, harden the transaction layer against the message layer. Out-of-band verification of any change in payment instructions, four-eyes approval on supplier and beneficiary changes, and segregation of duty between the inbox that receives a request and the system that executes a payment are the controls that work in a world where the message itself can no longer be trusted. The work is unglamorous and process-heavy. It is also where the avoided losses are largest, as the CHF 1.5 million BEC case the NCSC reported in the second half of 2025 illustrates.
Third, treat supply-chain risk as monitoring, not procurement. The third-party risk register should be a live system that ingests SBOMs, signed-build attestations, vulnerability disclosures, and breach notifications continuously, not a spreadsheet refreshed at the annual due-diligence cycle. In a year where Axios, Shai-Hulud, and Mini Shai-Hulud have each moved from disclosure to widespread propagation inside a single day, anything slower than continuous monitoring is too slow. This is the kind of work that benefits most from LLM-assisted analyst leverage — the volume of signal is the constraint, not the depth of analysis at any single point.
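A minimal sketch of SBOM-as-monitoring-input, assuming CycloneDX JSON and a toy advisory feed keyed by package name; a production version would consume OSV or vendor feeds and match version ranges rather than exact versions:

```python
import json

def flag_compromised(sbom_json: str, advisories: dict[str, set[str]]) -> list[str]:
    """Check a CycloneDX SBOM against a feed of known-compromised
    package versions, keyed by package name."""
    sbom = json.loads(sbom_json)
    hits = []
    for comp in sbom.get("components", []):
        name, version = comp.get("name"), comp.get("version")
        if version in advisories.get(name, set()):
            hits.append(f"{name}@{version}")
    return hits
```

The value is in the loop, not the function: re-running this check every time the advisory feed updates is what turns the SBOM from a due-diligence artefact into a monitoring input.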
Fourth, raise the severity ceiling of operational-resilience testing. FINMA 05/2025 institutions have the regulatory anchor for this. Test against scenarios that include a multi-week outage of a concentrated software supplier, a mass factor-reset on managed corporate endpoints of the kind Stryker absorbed, a sustained distributed-denial-of-service (DDoS) attack that exhausts inbound bandwidth, a deepfake-driven mass-impersonation campaign against payment-instruction approvals, and an LLM-provider outage that takes down the team's own AI-assisted workflows. The point of the exercise is not to model the threat exactly. It is to identify which critical functions degrade gracefully and which ones do not.
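The dependency-mapping half of that exercise can be made mechanical before any tabletop starts. A toy sketch, with all function, dependency, and fallback names invented for illustration:

```python
def impacted_functions(functions: dict[str, set[str]],
                       failed: set[str],
                       fallbacks: dict[str, set[str]]) -> list[str]:
    """Given critical functions mapped to the dependencies they need,
    a scenario's failed dependencies, and per-dependency fallbacks,
    return the functions left with no working path."""
    down = []
    for fn, deps in functions.items():
        for dep in deps & failed:
            # A failed dependency is survivable only if at least one
            # of its fallbacks is itself still up in this scenario.
            if not (fallbacks.get(dep, set()) - failed):
                down.append(fn)
                break
    return sorted(down)
```

Running every severe-but-plausible scenario through a map like this is a cheap way to find the critical functions whose only fallback fails in the same scenario — exactly the gap the FINMA survey data points at.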
Fifth, deploy AI inside the second line with the same discipline you would expect of any other model. Where LLMs draft case files, surface anomalies, or augment investigations, log every prompt and output, evaluate the model against held-out test cases that include adversarial inputs, and keep a human in the path for any decision that affects a customer, a transaction, or a SAR. The same discipline that applies to a transaction-monitoring rule applies to a generative-AI assistant; the model just produces output of a different shape.
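The logging requirement reduces to a thin wrapper around every model call, sketched here in Python; in production the log would be an append-only store rather than a list, and the model argument would be your provider client:

```python
import time
import uuid
from typing import Callable

def audited_call(model: Callable[[str], str],
                 prompt: str,
                 log: list[dict],
                 case_id: str) -> str:
    """Wrap a model call so prompt, output, timing, and case linkage
    all land in a log the auditor can replay."""
    record = {
        "id": str(uuid.uuid4()),
        "case_id": case_id,
        "ts": time.time(),
        "prompt": prompt,
    }
    record["output"] = model(prompt)
    record["reviewed_by_human"] = False  # flipped only by the review step
    log.append(record)
    return record["output"]
```

The reviewed_by_human flag is the human-in-the-path guarantee in data form: a downstream gate can refuse to act on any output whose record still carries False.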
Where we sit
We build Bollwerk Frontier for the second line of defence. The cyber-enabled fraud that touches our customers' AML pipelines, the supply-chain risk that turns into counterparty exposure on their books, and the operational-resilience scenarios their regulators are now asking about are all part of the same workload. We pay attention to where compliance operations can be augmented by AI without losing the audit trail or the human-in-the-loop guarantees the regulator expects, and to where the defender side of the AI curve is real rather than aspirational. If your team is working through any of this and would find it useful to compare notes, write to hello@bollwerk.ai.