Why Bridge Audits Must Start with Threat Models, Not Code

Conventional smart contract audits fail for cross-chain bridges. The real attack surface is systemic: validator collusion, governance capture, and incentive misalignment. This is a framework for threat-first security.

Audits start with code, not threats. This is backwards. A perfect Solidity review is useless if the systemic trust model is broken. The 2022 Ronin and Nomad exploits were failures of trust assumptions and architectural logic, not simple smart contract bugs.
Introduction
Bridge security fails because audits prioritize code review over systemic risk analysis.
Threat modeling defines the attack surface. It forces you to enumerate every trust assumption in your bridging architecture, from oracles like Chainlink to relayers in Axelar. Code review only verifies the implementation of a potentially flawed design.
The industry standard is wrong. Projects like Across and LayerZero embed security in their intent-based and omnichain models, which are architectural choices. An audit that doesn't first validate these models is checking the paint on a collapsing bridge.
The Core Argument: Code is a Distraction
Auditing bridge code before establishing a threat model is a fundamental error that guarantees security gaps.
Code is a symptom of the underlying security model, not the source of truth. An audit that starts with Solidity or Rust reviews the implementation of assumptions, not the assumptions themselves. This misses systemic risks in the protocol's economic and cryptographic design.
Threat models define the battlefield. A proper model catalogs every actor (users, relayers, sequencers, L1 validators), their capabilities, and their incentives. Without this map, auditors for protocols like Across or Stargate cannot distinguish a critical flaw from a benign bug.
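That catalog can be a literal artifact, not a whiteboard exercise. A minimal sketch in Python (the actors, capabilities, and severity scores below are illustrative placeholders, not a standard taxonomy) of what an auditor should receive before opening the codebase:

```python
from dataclasses import dataclass

@dataclass
class Actor:
    """One participant in the bridge's trust model."""
    name: str
    capabilities: list[str]   # what the actor can do
    incentive: str            # why it behaves honestly (or not)
    blast_radius: str         # worst case if this actor is compromised
    severity: int             # 1 (nuisance) .. 5 (total fund loss)

# Illustrative catalog for a generic lock-and-mint bridge.
ACTORS = [
    Actor("user", ["submit deposits and withdrawals"],
          "self-interest", "loses only their own funds", 1),
    Actor("relayer", ["deliver messages", "censor or delay them"],
          "fee revenue", "liveness failure, selective censorship", 3),
    Actor("oracle / guardian set", ["attest to source-chain state"],
          "stake and reputation", "mint unbacked wrapped assets", 5),
    Actor("upgrade multisig", ["replace contract logic"],
          "protocol equity", "arbitrary code, total fund loss", 5),
]

# Rank by blast radius *before* reading any code: audit effort should
# follow severity, not file order.
for a in sorted(ACTORS, key=lambda a: a.severity, reverse=True):
    print(f"[{a.severity}] {a.name}: {a.blast_radius}")
```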
The industry's biggest hacks were failures of model, not syntax. The Ronin and Nomad exploits bypassed contract logic by attacking the key-custody and initialization assumptions the code blindly trusted. The code was 'correct', but the system was broken.
Evidence: Over $2.5B has been stolen from bridges. Post-mortems consistently reveal design-level oversights—invalid state transitions, weak crypto-economic security—that a pure code review would never catch.
The Systemic Attack Surface: Three Unaudited Layers
Traditional smart contract audits are insufficient for bridges, which fail at the architectural level. The real vulnerabilities are in the unmodeled assumptions.
The Problem: Auditing Code, Not Assumptions
Auditors check Solidity, not the off-chain logic that powers the bridge. This misses the core attack vector: oracle manipulation and relayer collusion.
- Ronin ($625M) and Multichain ($130M) were compromised via off-chain key theft, not contract bugs.
- A perfect smart contract is useless if its underlying message-passing layer is corruptible.
Layer 1: The Economic Model
Bridge security is defined by its cryptoeconomic design, which is rarely stress-tested. Slashing conditions, bond sizes, and withdrawal delay games are the real protocol; the arithmetic is sketched below.
- A $10B TVL bridge secured by $50M in bonds has a 200x mismatch.
- Attackers game the economic delay (e.g., Nomad), not the cryptographic proof.
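The bond/TVL mismatch is checkable with one line of arithmetic. A back-of-the-envelope sketch, using the invented figures from the bullet above:

```python
def attack_premium(tvl_usd: float, bonded_usd: float) -> float:
    """Dollars of value-at-risk sitting on each slashable dollar.
    Anything far above 1.0 means corruption is profitable in principle:
    a colluding quorum forfeits its bonds but keeps the TVL."""
    return tvl_usd / bonded_usd

ratio = attack_premium(tvl_usd=10_000_000_000, bonded_usd=50_000_000)
print(f"value-at-risk per bonded dollar: {ratio:.0f}x")  # -> 200x
```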
Layer 2: The Relayer Network
The liveness and honesty of the relayer set is a centralized fault line. Permissioned sets (e.g., Multichain) and permissionless-but-lazy networks (e.g., early Across) create systemic risk; a simple collusion bound is sketched below.
- Single-operator control led to the Multichain $130M exploit.
- Solutions like Chainlink CCIP and LayerZero's Oracle/Relayer separation attempt to model this.
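One way to make "honesty of the relayer set" auditable is to model it as a binomial collusion bound. A sketch with invented parameters (a real analysis must also account for shared hosting, jurisdiction, and common ownership, which destroy the independence assumption):

```python
from math import comb

def collusion_probability(n: int, m: int, p: float) -> float:
    """P(at least m of n independently operated attestors are compromised),
    assuming each falls with probability p. Independence is the *optimistic*
    case; correlated operators make this strictly worse."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(m, n + 1))

# A 13-of-19 attestation quorum, 5% per-operator annual compromise risk:
print(f"{collusion_probability(19, 13, 0.05):.2e}")
# A "decentralized" set that is really one operator (the Multichain lesson):
print(collusion_probability(1, 1, 0.05))  # 0.05 per year
```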
Layer 3: The Upgrade Mechanism
Bridge admin keys or DAO governance are backdoors that invalidate all other security. A timelock is not a threat model.
- The Wormhole exploit was fixed via a centralized upgrade, revealing the system's true trust model.
- Nomad's upgrade introduced the fatal bug. Audits must map the upgrade-privilege attack tree, sketched below.
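Mapping that attack tree can itself be a small program. A hypothetical sketch (the leaves are generic; replace them with the bridge's actual admin paths):

```python
from itertools import product

# ("OR", children) = any child suffices; ("AND", children) = all required;
# ("LEAF", step) = a concrete attacker capability.
UPGRADE_TREE = ("OR", [
    ("LEAF", "steal m-of-n upgrade multisig keys"),
    ("AND", [
        ("LEAF", "pass a malicious governance proposal"),
        ("LEAF", "survive the timelock window unnoticed"),
    ]),
    ("LEAF", "compromise the proxy-admin EOA"),
])

def paths(node):
    """Enumerate every minimal set of steps that yields upgrade control."""
    kind, value = node
    if kind == "LEAF":
        return [(value,)]
    if kind == "OR":
        return [p for child in value for p in paths(child)]
    # AND: one path from each child, concatenated (cartesian product).
    return [sum(combo, ()) for combo in product(*(paths(c) for c in value))]

for p in paths(UPGRADE_TREE):
    print(" AND ".join(p))
```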
Post-Mortem Analysis: Code vs. System Failures
Comparing the failure modes and detection efficacy of code-first audits versus threat model-first audits for cross-chain bridges.
| Failure Mode / Metric | Code-First Audit (Traditional) | Threat Model-First Audit (Proposed) | Real-World Example (e.g., Wormhole, Ronin) |
|---|---|---|---|
| Primary Focus | Syntactic correctness, logic bugs | Economic & systemic trust assumptions | Oracle manipulation, multisig compromise |
| Catches Logic Bugs (e.g., reentrancy) | Yes | Yes (code review still follows) | Poly Network ($611M exploit) |
| Catches Systemic Risks (e.g., validator cartels) | No | Yes | Ronin Bridge ($625M, 5/9 multisig) |
| Audit Scope Definition | Given codebase | Derived from trust boundaries & data flows | M-of-N signers, relayers, upgrade paths |
| Time to First Critical Finding | Days 3-7 of engagement | Day 1 of engagement | N/A |
| False Sense of Security | High ("code is verified") | Low (residual risks made explicit) | Post-audit exploits common |
| Requires Protocol Design Docs | No | Yes | N/A |
| Mitigation Strategy Output | Code patches | Architectural changes & failure simulations | Time-locks, fraud proofs, circuit breakers |
Implementing a Threat-First Audit Framework
Auditing a cross-chain bridge without a threat model is like testing a bank vault by checking the door hinges.
Start with the attack surface, not the Solidity. A threat-first framework maps the entire system boundary—relayers, oracles, governance, and off-chain components—before a single line of code is reviewed. This prevents auditors from missing systemic risks in protocols like LayerZero or Wormhole that live beyond the smart contract.
Code audits validate implementation, not design. They check whether the written logic matches the spec. A threat model questions the spec itself, identifying flawed assumptions in economic incentives or trust models that code review cannot see. This is why Across Protocol explicitly models validator collusion.
The framework must be adversarial by default. It forces the team to document and rank threats—from frontrunning and censorship to total private key compromise. This creates a prioritized test plan, moving beyond generic checklists to target the unique failure modes of arbitrary message bridges versus liquidity networks.
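The ranking step can be as lightweight as a scored list. A sketch (threats and 1-5 scores are placeholders the team must argue over; the point is that the likelihood-times-impact product becomes the test plan's ordering):

```python
# (threat, likelihood 1-5, impact 1-5) -- all judgment calls, revisited
# whenever the architecture changes.
THREATS = [
    ("frontrunning of cross-chain fills",     4, 2),
    ("relayer censorship / stalling",         3, 3),
    ("oracle attests to false source state",  2, 5),
    ("total upgrade-key compromise",          2, 5),
    ("message replay across chains",          3, 4),
]

# Highest-risk threats get audited and fuzzed first.
for name, likelihood, impact in sorted(
        THREATS, key=lambda t: t[1] * t[2], reverse=True):
    print(f"risk={likelihood * impact:>2}  {name}")
```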
Evidence: 80% of major bridge exploits stem from design-level flaws, not simple coding bugs. The Poly Network and Nomad hacks were failures of system architecture and verification logic, vulnerabilities a pure code audit would likely miss.
Case Study: Deconstructing a Hypothetical Bridge
A post-mortem on a fictional bridge hack reveals why security reviews must begin with adversarial assumptions, not Solidity.
The Problem: The Multi-Sig Mirage
The bridge used a 9-of-12 multi-sig for upgrades, creating a false sense of security. Auditors checked signature logic but not the signer set.
- Key Flaw: 7 signers were controlled by a single VC fund.
- Result: A single legal or technical compromise could execute a malicious upgrade.
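The signer-set flaw is mechanically detectable once key custody is documented. A sketch with hypothetical addresses, mirroring the 9-of-12 case above:

```python
from collections import Counter

THRESHOLD, TOTAL = 9, 12  # the case study's 9-of-12 multisig

# Who actually controls each key: a hypothetical custody map an
# auditor should demand in writing.
SIGNER_CUSTODY = {
    "0xA1": "VC Fund X", "0xA2": "VC Fund X", "0xA3": "VC Fund X",
    "0xA4": "VC Fund X", "0xA5": "VC Fund X", "0xA6": "VC Fund X",
    "0xA7": "VC Fund X", "0xB1": "Core Team", "0xB2": "Core Team",
    "0xC1": "Foundation", "0xD1": "Indep. 1",  "0xE1": "Indep. 2",
}

entity, keys_held = Counter(SIGNER_CUSTODY.values()).most_common(1)[0]
extra_needed = max(0, THRESHOLD - keys_held)
print(f"{entity} holds {keys_held}/{TOTAL} keys; "
      f"compromising it leaves only {extra_needed} more signatures "
      f"to reach the {THRESHOLD}-of-{TOTAL} threshold")
```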
The Solution: Formalize Trust Assumptions
Before a single line of code, define the trust surface. This forces explicit acknowledgment of custodial, oracle, and upgrade risks.
- Action: Map all trusted entities (e.g., relayers, oracles, governance).
- Outcome: Auditors focus on the weakest link, not just the smart contract logic.
The Problem: The Oracle Black Box
The bridge relied on a proprietary off-chain relayer for state attestations. The audit scope was limited to the on-chain verifier.
- Key Flaw: The relayer's code and infrastructure were unaudited.
- Result: A bug in the off-chain prover could mint unlimited wrapped assets.
The Solution: Holistic System Review
Treat the entire stack—off-chain provers, sequencers, and data availability—as part of the protocol. This mirrors the approach needed for optimistic and zk-rollup bridges.
- Action: Require audits for all system components, not just on-chain contracts.
- Outcome: Eliminates blind spots between the blockchain and external dependencies.
The Problem: The Liquidity Silos
The bridge held $200M+ in a single vault on the destination chain. The audit verified vault security but not the economic assumptions.
- Key Flaw: No circuit breaker for mass withdrawals or oracle failure.
- Result: A successful attack drains the entire vault, creating systemic risk.
The Solution: Model Economic Attacks
Stress-test the protocol's financial assumptions. This includes liquidity runway, withdrawal limits, and incentive misalignment for validators/relayers.
- Action: Run simulations for bank runs, oracle manipulation, and governance attacks.
- Outcome: Protocols like Across and LayerZero design for these scenarios, making them more resilient.
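A back-of-the-envelope version of such a simulation, with invented parameters matching the $200M vault above: compare drain time with and without a percentage-based withdrawal breaker.

```python
import math

VAULT_USD     = 200_000_000   # single destination-chain vault
DAILY_CAP_PCT = 0.10          # breaker: at most 10% of remaining vault per day

# Uncapped: the whole vault leaves in one transaction.
# Capped: remaining value decays geometrically,
#   vault_n = VAULT_USD * (1 - DAILY_CAP_PCT) ** n,
# so losing 90% takes n = ln(0.1) / ln(1 - 0.10) days.
days_to_90pct = math.log(0.10) / math.log(1 - DAILY_CAP_PCT)
print("uncapped: total loss in one block")
print(f"capped:   ~{days_to_90pct:.0f} days to lose 90% -- "
      "time for pausers and governance to react")
```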
Objection: "But Our Auditors Check Everything"
Traditional smart contract audits are insufficient for bridge security because they focus on code correctness, not system-level threat models.
Audits verify code, not architecture. They confirm the written code matches its specification. A bridge like Stargate or Across is a complex system of off-chain components, economic incentives, and upgrade mechanisms. An audit that only reads Solidity misses the attack surface in the relayer network, oracle design, and governance.
Threat modeling precedes code review. The process must first define adversaries, assets, and trust assumptions. Without this, auditors test for generic bugs but miss protocol-specific failure modes like validator collusion or liquidity manipulation. The Nomad hack exploited exactly such a flawed initialization assumption, not a coding error.
Evidence: The $2.5B Tally. Over $2.5 billion has been stolen from bridges. The root cause of major exploits—Wormhole, Ronin, Poly Network—was flawed system design, not a missed require() statement. Audits existed for these protocols. The checklist approach fails for systems where the attacker defines the rules.
FAQ: Threat Modeling for Builders & Auditors
Common questions about why bridge security reviews must begin with threat modeling, not code analysis.
What is threat modeling, and why must it precede a bridge audit?

Threat modeling is a systematic process to identify and prioritize security risks before reviewing code. It maps the entire system, including relayers, oracles, and governance, to find attack vectors that code audits alone miss. This proactive approach is essential for complex systems like LayerZero or Wormhole, where the failure of a single component can compromise the entire bridge.
TL;DR: The Threat-First Audit Checklist
Traditional code-first audits fail for bridges. You must model the adversary first, because the attack surface is the protocol's design, not just its implementation.
The Oracle Manipulation Attack
Code reviews miss systemic risk. The real threat is a validator cartel or a single oracle feeding bad data to the bridge's off-chain component. Compromised attestors killed Ronin ($625M); a blindly trusted root killed Nomad ($190M).
- Identifies trust assumptions in external data feeds (Chainlink, Pyth).
- Forces design of slashing conditions and economic security for relayers.
The Upgrade Key Compromise
Auditing a single smart contract is pointless if a 3-of-5 multisig can replace it tomorrow. This is a governance and key-management failure, not a Solidity bug. See Poly Network ($611M), where a crafted message reassigned the privileged "keeper" role.
- Maps all privileged roles (admin, pauser, upgrader) and their attack vectors.
- Mandates time-locks, multisig thresholds, and on-chain governance checks.
The Liquidity Drain
Bridges like Stargate and Synapse pool liquidity. A threat model asks: what if an attacker drains the pool via a crafted cross-chain message, whether a Wormhole VAA, a LayerZero packet, or a CCIP message? A receiver-side guard is sketched below.
- Tests message ordering, rate-limiting, and asset caps per transaction.
- Validates isolation between liquidity pools on different chains.
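As a sketch of the receiver-side guard such testing should demand (the class name, caps, and window size are hypothetical, not any bridge's actual API):

```python
class TransferGuard:
    """Per-message cap plus a rolling hourly budget on released value."""

    def __init__(self, per_tx_cap: float, hourly_cap: float):
        self.per_tx_cap = per_tx_cap
        self.hourly_cap = hourly_cap
        self._window: list[tuple[float, float]] = []  # (timestamp, amount)

    def allow(self, amount: float, now: float) -> bool:
        # Drop entries older than one hour, then apply both limits.
        self._window = [(t, a) for t, a in self._window if now - t < 3600]
        if amount > self.per_tx_cap:
            return False  # one crafted message asking for too much
        if sum(a for _, a in self._window) + amount > self.hourly_cap:
            return False  # a burst of valid-looking messages draining the pool
        self._window.append((now, amount))
        return True

guard = TransferGuard(per_tx_cap=1_000_000, hourly_cap=5_000_000)
print(guard.allow(900_000, now=0.0))     # True
print(guard.allow(2_000_000, now=1.0))   # False: over the per-tx cap
print(all(guard.allow(900_000, now=float(t)) for t in range(1, 6)))
# False: the rolling hourly budget eventually rejects the burst
```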
The Economic Finality Assumption
Assuming a chain is "secure" after X blocks is naive. A threat model forces you to audit the underlying chain's consensus: a 51% attack on a minority chain can reverse a bridge transaction. The sketch below quantifies this per chain.
- Quantifies settlement risk per connected chain (vs. Ethereum).
- Drives implementation of optimistic verification periods or zero-knowledge proofs.
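A hedged sketch of that quantification (all dollar figures are invented placeholders; substitute live bridged-flow data and per-chain attack-cost estimates):

```python
# Compare daily bridged value against a rough cost to 51%-attack the
# source chain long enough to double-spend a deposit.
CHAINS = {
    #                 (bridged USD/day, est. cost of a 1-hour 51% attack)
    "ethereum":       (50_000_000, 1_000_000_000),
    "midcap-pos-l1":  (10_000_000,     2_000_000),
    "minority-chain": ( 5_000_000,       300_000),
}

for name, (daily_flow, attack_cost) in CHAINS.items():
    verdict = ("OK" if attack_cost > daily_flow
               else "raise confirmations, add an optimistic window, or require ZK proofs")
    print(f"{name:14}  flow=${daily_flow:>11,}/day  "
          f"attack=${attack_cost:>13,}  -> {verdict}")
```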
The Relayer Censorship Vector
If your bridge depends on permissioned relayers (e.g., Axelar, Wormhole Guardians), a threat model asks: what if they collude or are forced to censor? This is a liveness failure.
- Evaluates decentralization of the relayer set and incentive alignment.
- Proposes fallback mechanisms like permissionless verification or forced inclusion.
The Signature Verification Bypass
LayerZero's much-debated potential vulnerability was not in the core contracts but in the default Oracle and Relayer libraries. A threat model scrutinizes all dependencies and default settings.
- Audits the entire software supply chain, not just the core contracts.
- Creates a security matrix for all configurable modules (Oracles, Relayers, Adapters).
Get In Touch
Contact us today. Our experts will offer a free quote and a 30-minute call to discuss your project.