
Why the Two-Thirds Honest Assumption is Broken for Bridges

The Byzantine fault tolerance model fails for L2 bridges. Small, identified validator sets create a low-cost attack surface where collusion is economically rational, unlike in large PoS networks like Ethereum.

THE FLAWED FOUNDATION

Introduction

The fundamental security model for most cross-chain bridges relies on an assumption that is demonstrably false in practice.

The two-thirds honest assumption is broken. It requires a supermajority of validators to be honest, but bridges like Multichain and Wormhole have suffered catastrophic failures where this threshold was breached by a single entity or colluding minority.

Economic security is not computational security. A Proof-of-Stake chain's security derives from its token's market cap and slashing. A bridge's validator set often has negligible stake, creating a trivial cost-of-corruption versus the value secured.

Evidence: The Nomad bridge hack lost $190M because a single flawed upgrade initialized the trusted root to zero, so unproven messages passed verification. This wasn't a cryptographic break; it was a failure of upgrade governance and operational process, exactly the kind of risk the two-thirds model pretends to secure.

THE VALIDATOR FLAW

The Core Argument: Small Sets, Big Problems

The two-thirds honest assumption fails for bridges because their small, concentrated validator sets create systemic, non-statistical risks.

Small validator sets are the root vulnerability. Unlike a chain like Ethereum with ~1M validators, a bridge like Multichain or Wormhole relies on a few dozen entities at most. With 10 validators, a corrupted one-third is a live, targetable possibility; with tens of thousands of independent validators, it is statistically negligible. Small sets cannot approach the Byzantine fault tolerance guarantees of large networks.
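A rough way to see the difference, as a minimal sketch: assume (generously) that each validator can be independently bribed or compromised with some fixed probability, and compute the chance that more than a third of the set ends up corrupted. The 10% corruption probability below is purely illustrative, and for a small, identified set the independence assumption itself is the first thing an attacker breaks.

```python
import math

def log_binom_pmf(n: int, k: int, p: float) -> float:
    """Natural log of the Binomial(n, p) probability mass at k."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log1p(-p))

def log10_prob_threshold_breached(n: int, p: float) -> float:
    """log10 of P(strictly more than n/3 validators are corrupted),
    i.e. the probability that the 2/3-honest guarantee fails, assuming
    each validator is independently corrupted with probability p."""
    threshold = n // 3 + 1
    log_terms = [log_binom_pmf(n, k, p) for k in range(threshold, n + 1)]
    m = max(log_terms)
    log_sum = m + math.log(sum(math.exp(t - m) for t in log_terms))
    return log_sum / math.log(10)

for n in (10, 10_000):
    print(n, log10_prob_threshold_breached(n, p=0.10))
# 10 validators     -> about -1.9, i.e. roughly a 1-in-80 chance
# 10,000 validators -> hundreds of orders of magnitude smaller; effectively never
```

The point is not the exact numbers but the shape: the large-set guarantee is statistical, while a ten-entity set fails at rates an attacker can simply plan around.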

Concentration risk is deterministic. In a small set, the identities and incentives of each validator are known and targetable. A state-level actor doesn't need to corrupt a random statistical majority; it needs to compromise a specific, finite list of companies or individuals. This shifts the threat model from probability to a tractable attack plan.

The multisig is a single point of failure. Most third-party bridges, including Stargate and Across, are ultimately governed by an admin multisig. The security of billions in TVL depends on the key management of ~8 entities. This architecture inverts decentralization: the chain is trustless, but the bridge's off-chain governance creates a centralized bottleneck vulnerable to coercion, collusion, or technical failure.

Evidence: The Nomad hack exploited a single faulty initialization pushed through by a small upgrade committee. The Poly Network breach exploited a privileged keeper role that a crafted cross-chain call was able to reassign. These are not statistical failures; they are engineering and operational failures inherent to small, opaque validator sets.

WHY THE 2/3 HONEST ASSUMPTION IS BROKEN

Attack Surface Analysis: Major L2 Bridges

Comparing the economic and operational realities that invalidate simple majority-trust models for bridge security.

| Attack Vector / Metric | Optimistic Rollup Bridge (e.g., Arbitrum, Optimism) | ZK-Rollup Bridge (e.g., zkSync Era, Starknet) | Third-Party Bridge (e.g., Across, LayerZero) |
| --- | --- | --- | --- |
| Validator/Prover Set Size | 7-14 entities (whitelisted) | 1 entity (single sequencer/prover) | Variable (e.g., 8-12 relayers) |
| Cost to Corrupt 2/3 (Est.) | $2B - $4B (based on staked ETH) | N/A (centralized control point) | $50M - $200M (based on token incentives) |
| Time to Finality for Withdrawal | 7 days (challenge period) | < 1 hour (ZK proof verification) | 3 min - 1 hour (optimistic verification) |
| Liveness Assumption Required | At least 1 honest validator | Single sequencer/prover honesty | Majority of relayers honest |
| Sovereign Escape Hatch | ✅ (Force via L1) | ❌ (Relies on L2 operator) | ❌ (No native L1 escape) |
| Max Capital at Risk per TX | Full bridge TVL | Full bridge TVL | Relayer bond amount (~$2M) |
| Primary Failure Mode | Validator cartel censorship | Prover key compromise | Sybil attack on relayer set |

THE INCENTIVE MISMATCH

From Theory to Theft: The Economic Attack Vector

The 2/3 honest assumption fails because bridge security is a financial, not a social, problem.

The honest majority assumption is a social coordination model for a financial game. It assumes validators prioritize network integrity over profit, which is false for capital-efficient attackers. A rational actor with a $200M exploit opportunity will bribe or acquire a $100M validator set.

Bridge security is priced. The cost to attack a bridge like Stargate or Across equals the cost to corrupt its economic security, not its validator count. This creates a direct arbitrage between staking rewards and potential loot, making large-scale collusion inevitable.

Bridge economics invert Proof-of-Stake security. In L1s like Ethereum, validators are financially aligned with the chain's long-term value. Bridge validators earn fees on the next cross-chain message and carry little long-term exposure to what they secure. This misalignment turns the 2/3 threshold into a price target, not a security guarantee.
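A minimal sketch of that arbitrage, with purely hypothetical numbers (the stake, premium, and TVL figures below are illustrative, not estimates for any named bridge):

```python
def attack_profit(tvl_usd: float,
                  validator_stake_usd: float,
                  threshold_fraction: float = 2 / 3,
                  bribe_premium: float = 1.5) -> float:
    """Profit from bribing or acquiring enough validators to reach the
    signing threshold. Assumes the bribe must exceed the corrupted
    validators' slashable stake by `bribe_premium`, and that the full
    TVL is extractable once the threshold signs."""
    cost_to_corrupt = threshold_fraction * validator_stake_usd * bribe_premium
    return tvl_usd - cost_to_corrupt

# Hypothetical bridge: $500M TVL secured by ~$60M of total validator stake.
print(attack_profit(tvl_usd=500e6, validator_stake_usd=60e6))    # +$440M: the attack pays
# Hypothetical L1-scale system: $50B at stake behind ~$100B of staked value.
print(attack_profit(tvl_usd=50e9, validator_stake_usd=100e9))    # -$50B: the attack does not pay
```

Whenever the first number is positive, the 2/3 threshold is a price, not a guarantee.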

Evidence: The Wormhole exploit. The attacker never had to corrupt a distributed validator set; a flaw in the Solana contract's signature verification let forged guardian approvals pass. The real security boundary was a handful of trusted components, which renders Byzantine fault tolerance theory irrelevant for most bridges.

WHY THE 2/3 HONEST ASSUMPTION IS BROKEN

Case Studies in Centralized Trust

Theoretical security models fail when confronted with the concentrated, real-world capital and governance of major bridges.

01

The Wormhole Hack: A Single Point of Failure in Verification

The $326M exploit wasn't a break of the guardians' cryptography; a deprecated signature-verification path in the Solana contract let the attacker spoof guardian approval entirely, so the 13-of-19 guardian threshold never came into play.
- Security Model: 13-of-19 guardian multisig.
- Failure Mode: Centralized verification code and operational security behind a decentralized façade.
- Outcome: Jump Crypto made users whole, proving the trust was in the VC's balance sheet, not the protocol.

$326M
Exploit Value
0
Guardian Keys Needed
02

Polygon's Plasma Bridge: The 5-of-8 Multisig Reality

The $850M+ TVL bridge relies on a 5-of-8 multisig controlled by the Polygon Foundation and early team members. This isn't a decentralized federation; it's a corporate signing ceremony.
- Security Model: 5-of-8 foundation multisig over contract upgrades.
- Failure Mode: Collusion or legal coercion of a known, KYC'd set of signers.
- Outcome: Users implicitly trust the Polygon team's integrity over any cryptographic guarantee.

$850M+
TVL at Risk
5/8
To Compromise
03

The Arbitrum Token Bridge: Optimistic, But Centrally Secured

Arbitrum's canonical bridge sits behind a 9-of-12 Security Council multisig with emergency upgrade powers. While the rollup's fraud-proof system is trust-minimized, the upgrade keys and emergency functions rest with a small, known set of signers.
- Security Model: 9-of-12 Security Council multisig plus a ~7-day challenge period.
- Failure Mode: The council controls the upgrade path and could in principle censor or redirect withdrawals.
- Outcome: The "optimistic" security ultimately depends on the honesty of a short list of identified entities.

9/12
Multisig Threshold
7 Days
Challenge Window
04

Ronin Bridge: The Social Engineering Endpoint

The $625M exploit bypassed cryptography entirely. A spear-phishing campaign built around a forged job offer compromised Sky Mavis infrastructure, yielding signatures from five of the nine Ronin validators: four run by Sky Mavis plus a whitelisted Axie DAO signer. The 5-of-9 threshold was met through human error, not a cryptographic break.
- Security Model: 9 validator nodes, 5 required to sign.
- Failure Mode: Centralized operational control of validator keys.
- Outcome: Sky Mavis (Axie Infinity) covered the loss, again transferring risk to a corporate entity.

$625M
Exploit Value
5/9
Socially Hacked
05

LayerZero & OFT: The Oracle-Attester Duopoly

LayerZero's original security model depends on an Oracle (operated by default by Google Cloud; Chainlink was an early option) and a Relayer (run by LayerZero Labs) remaining independent. In practice, both are centralized services, and forging a message requires only that these two known parties collude or are simultaneously compromised.
- Security Model: Independent Oracle and Relayer that must agree on every message.
- Failure Mode: Collusion between two known corporate entities, or state-level coercion of both.
- Outcome: Trust is placed in the brand reputation of a cloud provider and a middleware company, not decentralized consensus.

2/2
Must Collude to Forge
~$0
Slashable Stake
06

The Multichain Collapse: The Ultimate Keyholder Risk

The $1.5B+ TVL bridge evaporated when CEO Zhaojun disappeared, taking the MPC private keys with him. This wasn't a hack; it was a catastrophic failure of key custodian concentration. The "decentralized" MPC was controlled by a single individual.
- Security Model: Multi-Party Computation (MPC) with centralized operation.
- Failure Mode: Single point of human failure for key shards.
- Outcome: Irrecoverable funds, proving MPC is only as strong as its operational governance.

$1.5B+
TVL Lost
1
Critical Person
THE INCENTIVE MISMATCH

The Rebuttal: "But They're Reputable Entities!"

Reputation is a weak security model that fails against the economic incentives of a compromised multisig.

Reputation is not cryptoeconomic security. A bridge's multisig signers are safe only because they are trusted; nothing in the protocol makes them trustless. Every signer is therefore an attack surface for the entire validator set.

A single compromised entity breaks the model. The security of Across, Stargate, or Wormhole collapses if one key custodian (e.g., a VC fund or foundation) suffers a phishing attack or legal seizure.

Incentives shift under duress. A signer facing bankruptcy or a state actor's coercion has zero economic stake in the bridge's TVL. Their reputation is a worthless bond compared to the millions at stake.

Evidence: The Poly Network and Wormhole hacks were not 51% attacks on a decentralized chain. They were exploits of privileged control paths, the keeper role in Poly Network's case and the signature-verification logic in Wormhole's, maintained by 'reputable' entities controlling the bridge's core logic.

FREQUENTLY ASKED QUESTIONS

FAQ: Bridge Security for Builders

Common questions about why the two-thirds honest assumption is broken for cross-chain bridges.

What is the two-thirds honest assumption?

The two-thirds honest assumption is a security model in which a bridge is considered safe as long as at least two-thirds of its validators are honest. It underpins many multi-signature and proof-of-authority bridges, including Multichain and older versions of Wormhole, and it assumes that a supermajority of validators will never collude to steal funds or approve fraudulent transactions.
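In code, the assumption usually reduces to a signature count against a whitelisted set. The sketch below is illustrative, not any specific bridge's implementation, and it elides signature checking itself:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Attestation:
    validator: str      # known, whitelisted identity
    message_hash: str
    signature: str      # assume signature verification happens elsewhere

def is_message_approved(attestations: list[Attestation],
                        validator_set: set[str],
                        message_hash: str) -> bool:
    """A message is released on the destination chain once more than
    two-thirds of the whitelisted validator set has attested to it.
    If more than a third of these identities collude, fraud passes."""
    signers = {a.validator for a in attestations
               if a.validator in validator_set and a.message_hash == message_hash}
    required = (2 * len(validator_set)) // 3 + 1   # strict 2/3 supermajority
    return len(signers) >= required
```

The entire security argument lives in that final comparison, which is why a short, known signer list is the whole attack surface.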

THE FLAWED FOUNDATION

The Path Forward: Minimizing Trust, Maximizing Security

The industry-standard two-thirds honest assumption for bridge security is a systemic vulnerability, not a guarantee.

The 2/3 assumption is broken because it conflates economic security with social consensus. A bridge like Multichain or Wormhole relies on a small committee of validators, and reaching the signing threshold, however it is framed, is both catastrophic and cheap. The model fails under sophisticated, low-cost bribery attacks that target validator identities, not just their stake.

Trust minimization is non-negotiable. The security of a LayerZero or Celer bridge is only as strong as its weakest oracle or relayer. The path forward requires cryptographic verification over social consensus, moving from multi-sigs to light client proofs that inherit security from the underlying chains like Ethereum or Solana.
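What "cryptographic verification over social consensus" looks like in the simplest case is a Merkle inclusion proof checked against a source-chain state root that the destination chain has already verified through a light client (or a ZK proof of consensus). A minimal, illustrative sketch, with SHA-256 standing in for whatever hash the real chain uses:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes,
                     proof: list[tuple[bytes, str]],
                     root: bytes) -> bool:
    """Recompute the Merkle root from a leaf and its sibling path.
    Each proof step is (sibling_hash, side), where side is 'L' or 'R'
    depending on which side the sibling sits."""
    node = h(leaf)
    for sibling, side in proof:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root
```

The committee disappears from the safety argument: signers can still censor or stall, but they can no longer mint a message that the source chain never produced.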

Evidence: The $326M Wormhole hack exploited a flaw in signature verification, not mass validator collusion. This proves that security surface area, not just validator count, determines real-world risk. Protocols like Across and Chainlink CCIP are now architecting for this by leveraging on-chain data and optimistic verification.

BRIDGE SECURITY

TL;DR: Key Takeaways for Protocol Architects

The 'two-thirds honest' model is a dangerous anachronism for cross-chain systems, creating systemic risk and misaligned incentives.

01

The Problem: Honest Majority ≠ Economic Security

A 2/3 honest validator set is meaningless if the economic value at stake dwarfs the cost of corruption. Bridges like Multichain and Wormhole have collectively suffered well over $1B in exploits and losses, proving that a simple majority of small validators is insufficient.
- Attack Cost vs. Profit: Corrupting a few small validators is trivial compared to stealing $100M in TVL.
- No Skin in the Game: Validator slashing is often negligible, creating a 'rent-a-majority' attack vector.

$1B+
Exploit Value
0
Effective Slashing
02

The Solution: Intent-Based Routing (UniswapX, CowSwap)

Shift from trusted verification to competitive execution. Users express an intent (e.g., 'swap 100 ETH for USDC on Arbitrum'), and a decentralized network of solvers competes to fulfill it optimally; a minimal sketch follows below.
- No Central Custody: Assets never sit in a bridge contract; they move via atomic swaps or optimistic relays.
- Economic Security: Solvers are bonded and slashed for misbehavior, aligning incentives directly with correct execution.

100%
Non-Custodial
Competitive
Execution
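A minimal sketch of the intent/solver pattern, with hypothetical types and field names rather than UniswapX's or CowSwap's actual interfaces:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    user: str
    sell_token: str
    sell_amount: int
    buy_token: str
    min_amount_out: int
    deadline: int            # unix timestamp

@dataclass
class SolverQuote:
    solver: str
    amount_out: int
    bond: int                # slashable if the fill is not proven in time

def select_winner(intent: Intent,
                  quotes: list[SolverQuote],
                  min_bond: int) -> SolverQuote | None:
    """Pick the best quote that clears the user's minimum and posts enough bond."""
    valid = [q for q in quotes
             if q.amount_out >= intent.min_amount_out and q.bond >= min_bond]
    return max(valid, key=lambda q: q.amount_out, default=None)
```

The user's downside is bounded by a failed fill and the solver's downside is its own bond, so no pooled TVL sits behind a committee's signatures.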
03

The Solution: Optimistic & Light Client Verification (Across, Nomad)

Replace live consensus verification with fraud proofs or cryptographic proofs from the source chain. This moves the security assumption back to the underlying L1 (e.g., Ethereum); see the sketch after this item.
- L1 Security Inheritance: A watchtower (anyone) can submit fraud proofs during a challenge window, slashing the bonded relayers.
- Cost Efficiency: No need to pay for 2/3 of a validator set to be online; pay only for proven fraud.

L1
Security Root
-90%
Relayer Cost
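A minimal sketch of the optimistic pattern, in the spirit of these designs but not a copy of either protocol's contracts; the 30-minute window and bond handling are illustrative:

```python
from dataclasses import dataclass
import time

CHALLENGE_WINDOW = 30 * 60   # seconds; purely illustrative

@dataclass
class Claim:
    relayer: str
    message_root: str
    bond: int
    posted_at: float
    challenged: bool = False

class OptimisticInbox:
    def __init__(self) -> None:
        self.claims: list[Claim] = []

    def post(self, relayer: str, message_root: str, bond: int) -> Claim:
        """A bonded relayer asserts a message root or fill optimistically."""
        claim = Claim(relayer, message_root, bond, time.time())
        self.claims.append(claim)
        return claim

    def challenge(self, claim: Claim, fraud_proof_valid: bool) -> bool:
        """If any watcher proves the asserted root wrong inside the window,
        the claim is voided and the relayer's bond is forfeited."""
        if fraud_proof_valid and time.time() - claim.posted_at < CHALLENGE_WINDOW:
            claim.challenged = True
        return claim.challenged

    def finalized(self, claim: Claim) -> bool:
        """Only unchallenged claims older than the window are acted on."""
        return not claim.challenged and time.time() - claim.posted_at >= CHALLENGE_WINDOW
```

Safety needs only one honest watcher with a valid fraud proof, which is a far cheaper assumption than keeping two-thirds of a committee honest.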
04

The Problem: Validator Set Collusion is Inevitable

As bridge TVL grows into the billions, the validator set becomes a high-value cartel target. Projects like LayerZero rely on an Oracle/Relayer duo, which merely reduces the problem to collusion between two known parties.
- Cartel Formation: A small group can easily acquire stakes or positions in a decentralized-looking set.
- Cross-Chain MEV: Validators can front-run large cross-chain transactions, a systemic incentive to collude.

Cartel
Incentive
Billions
TVL at Risk
05

The Solution: Zero-Knowledge Light Clients (zkBridge)

Use succinct cryptographic proofs (ZK-SNARKs/STARKs) to verify state transitions from another chain. The security question becomes 'is the proof valid?' rather than 'are 2/3 of these entities honest?'; a sketch of the resulting trust statement follows below.
- Trustless Verification: A single honest prover can generate a proof verifiable by anyone.
- Future-Proof: Aligns with the endgame of Ethereum's roadmap and modular rollup security.

Trustless
Verification
~5 min
Proof Time
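A minimal sketch of the resulting trust statement; the public-input fields are a plausible shape, not zkBridge's actual circuit interface, and the verifier is passed in as a stand-in for an on-chain SNARK/STARK verifier:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class PublicInputs:
    source_chain_id: int
    block_number: int
    state_root: bytes          # what downstream Merkle proofs are checked against
    sync_committee_root: bytes # binds the proof to the source chain's consensus

def accept_header(proof: bytes,
                  inputs: PublicInputs,
                  verify_snark: Callable[[bytes, PublicInputs], bool]) -> bool:
    """The honesty question collapses to 'is this proof valid?'. A single
    honest prover suffices for liveness; safety rests on the proof system
    and the source chain's consensus, not on an m-of-n committee."""
    return verify_snark(proof, inputs)
```

Liveness still needs someone to generate proofs, but safety no longer depends on counting honest committee members.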
06

Actionable Audit Checklist for Architects

When evaluating a bridge design, demand answers to these first-principle questions (a mechanical version of this screen is sketched below):
- Economic Finality: What is the exact cost to corrupt the system vs. the TVL it secures?
- Liveness Assumption: Does security fail if 1/3 of validators go offline?
- Incentive Alignment: Are operators' bonds meaningfully at risk for the value they secure?
- Fallback: Is there a trust-minimized, user-initiated escape hatch?

4
Key Questions
First Principles
Audit
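As a closing illustration, the four questions can be turned into a mechanical first-pass screen. The thresholds below are hypothetical judgment calls, not industry standards:

```python
from dataclasses import dataclass

@dataclass
class BridgeProfile:
    tvl_usd: float
    cost_to_corrupt_usd: float         # bribe/acquire the signing threshold
    tolerates_one_third_offline: bool  # liveness question
    slashable_bond_usd: float          # operators' value actually at risk
    user_escape_hatch: bool            # trust-minimized, user-initiated exit

def audit_flags(b: BridgeProfile) -> list[str]:
    """Return the checklist items a bridge design fails outright."""
    flags = []
    if b.cost_to_corrupt_usd < b.tvl_usd:
        flags.append("economic finality: corruption is cheaper than the TVL it secures")
    if not b.tolerates_one_third_offline:
        flags.append("liveness: 1/3 of validators offline halts or breaks the bridge")
    if b.slashable_bond_usd < 0.1 * b.tvl_usd:   # hypothetical 10% floor
        flags.append("incentives: operator bonds are small relative to value secured")
    if not b.user_escape_hatch:
        flags.append("fallback: no trust-minimized, user-initiated exit")
    return flags
```

Any flag this screen raises is an answer the protocol's documentation should already contain.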