
Full Danksharding Is Not Infinite Throughput

A technical deconstruction of the fundamental limits of Ethereum's scaling roadmap. Full Danksharding increases data availability by orders of magnitude, but hardware constraints, economic incentives, and rollup design create a hard ceiling far below 'infinite'.

THE BOTTLENECK

The Infinite Scaling Mirage

Full Danksharding's theoretical throughput is bounded by physical and economic constraints, not by protocol design.

Full Danksharding's advertised throughput is a theoretical maximum, not a sustainable operational target. The system's capacity is gated by the data bandwidth of the consensus layer and the cost of data availability sampling for nodes.

The final bottleneck is physical hardware. The protocol's parameters must stay within what an ordinary home validator's internet connection and storage I/O can sustain, or the validator set centralizes. This creates a practical ceiling far below petabyte-scale theory.

Economic security imposes a hard cap. Increasing blob supply lowers the price of data inclusion, which erodes fee revenue and with it the cryptoeconomic security budget of the data availability layer. Protocols like Celestia and EigenDA compete on this exact trade-off frontier.

Evidence: Ethereum's roadmap targets ~1.3 MB/s of sustained data availability. That is roughly two orders of magnitude above pre-blob calldata capacity, but it is a finite, engineered constant, not an open-ended scaling solution.
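To make the ceiling concrete, here is a minimal back-of-the-envelope sketch in Python. The 128 KB blob size and 12-second slot are protocol constants; the 128-blob figure is the roadmap target cited above.

```python
# Sustained DA throughput from protocol constants.
BLOB_SIZE_BYTES = 128 * 1024   # 4096 field elements x 32 bytes per blob
SLOT_SECONDS = 12

def sustained_da_rate_mib_s(blobs_per_slot: int) -> float:
    """Sustained data-availability rate in MiB/s for a given blob count."""
    return blobs_per_slot * BLOB_SIZE_BYTES / SLOT_SECONDS / 2**20

for label, blobs in [("EIP-4844 maximum (6 blobs)", 6),
                     ("Full Danksharding target (128 blobs)", 128)]:
    rate = sustained_da_rate_mib_s(blobs)
    print(f"{label}: {rate:.2f} MiB/s (~{rate * 86_400 / 1024:.0f} GiB/day)")
```

Even at the full target, a day of blobs is on the order of 100 GB: a large number, but a bounded one.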

THE REALITY CHECK

Deconstructing the Bottlenecks: From Blobs to Finality

Full Danksharding solves data availability, but finality and state growth remain fundamental constraints on Ethereum's throughput.

Blobs are not bandwidth. EIP-4844's proto-danksharding provides cheap data for L2s like Arbitrum and Optimism, but the network's consensus layer still processes and attests to every blob. The 16 MB per slot target for full Danksharding is a data availability limit, not a transaction processing guarantee.

Finality is the ultimate bottleneck. Even with infinite blobs, Ethereum's 12-second slot cadence and ~13-minute (two-epoch) finality are hard protocol constants. High-frequency applications requiring sub-second finality, like those built on dYdX's Cosmos app-chain, will never run directly on L1.
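The constant is easy to verify from consensus parameters. A minimal sketch, using the Gasper rule that a block is finalized only after two full epochs of attestations:

```python
# Best-case finality latency under Gasper (Casper FFG + LMD GHOST).
SLOT_SECONDS = 12        # block production cadence
SLOTS_PER_EPOCH = 32
EPOCHS_TO_FINALITY = 2   # one epoch to justify, one more to finalize

finality_seconds = EPOCHS_TO_FINALITY * SLOTS_PER_EPOCH * SLOT_SECONDS
print(f"Best-case finality: {finality_seconds} s (~{finality_seconds / 60:.1f} min)")
# -> 768 s (~12.8 min); extra blob capacity does not change this.
```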

State growth is the silent killer. Blobs expire, but L1 state is permanent. High throughput from rollups like zkSync and Starknet forces constant state expansion, increasing node hardware requirements and centralizing the validator set over time.
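A rough model shows why state, not data, becomes the binding constraint. The bytes-per-item figure below is an illustrative assumption (an account or storage leaf plus trie overhead is on the order of a couple hundred bytes), not a protocol constant:

```python
# Rough L1 state-growth model under rollup-driven demand.
SECONDS_PER_YEAR = 365 * 24 * 3600
BYTES_PER_STATE_ITEM = 200   # assumed: leaf + trie overhead, order of magnitude

def state_growth_gb_per_year(new_items_per_second: float) -> float:
    return new_items_per_second * BYTES_PER_STATE_ITEM * SECONDS_PER_YEAR / 1e9

print(f"{state_growth_gb_per_year(8):.0f} GB/year at 8 new items/s (roughly today)")
print(f"{state_growth_gb_per_year(80):.0f} GB/year at 80 new items/s (10x demand)")
```

At eight new state items per second the model lands near the ~50 GB/year shown in the table below; a 10x rollup-driven demand surge would not stay there.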

Evidence: Today's blob count is capped at 6 per block (~0.75 MB). Even at full Danksharding's 16 MB-per-slot target, raw throughput is ~1.3 MB/s, not the 'infinite' scaling often misrepresented.

FULL DANKSHARDING

The Scaling Stack: Bottleneck Analysis

Comparing the fundamental throughput bottlenecks of Ethereum's scaling roadmap, highlighting that data availability is not the only constraint.

| Bottleneck | Current Rollup (Base Case) | Proto-Danksharding (EIP-4844) | Full Danksharding (Post-4844) |
|---|---|---|---|
| Data Availability (DA) Throughput | ~80 KB/block (Calldata) | ~0.75 MB/block (Blobs) | ~16 MB/block (Blobs) |
| State Growth Rate | ~50 GB/year | ~50 GB/year | ~50 GB/year |
| State Witness Size (Per Block) | ~1-10 MB | ~1-10 MB | ~1-10 MB |
| Execution Layer Compute (Gas) | 30M gas/block | 30M gas/block | 30M gas/block |
| Settlement Throughput (Proof Verification) | ~300-500 TPS (ZK) / ~100 TPS (OP) | ~300-500 TPS (ZK) / ~100 TPS (OP) | ~300-500 TPS (ZK) / ~100 TPS (OP) |
| Cross-Rollup Messaging Latency | 12-20 min (L1 Finality) | 12-20 min (L1 Finality) | 12-20 min (L1 Finality) |
| Primary Constraint Post-Upgrade | Expensive DA (Calldata) | State Growth & Execution | State Growth & Execution |

THE SCALING CEILING

Steelman: "But It's Enough for Global Scale"

Full Danksharding's theoretical throughput is immense but fundamentally capped, creating a predictable economic and architectural ceiling.

Full Danksharding is not infinite. Its design caps blobs at 128 KB each and scales to ~128 blobs per slot (~16 MB). This creates a hard, predictable throughput ceiling of roughly 115 GB of data per day. This is a feature, not a bug, establishing a known scaling limit for infrastructure planning.

This ceiling defines the market. A finite DA capacity creates a fee market for blobspace, similar to Ethereum's market for execution gas. Protocols like EigenDA and Celestia compete against blobspace from outside the protocol, but the supply of Ethereum-native DA is bounded by its consensus.
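That fee market is not hypothetical: EIP-4844 already prices blob gas with an EIP-1559-style exponential controller. The constants and the fake_exponential helper below follow the EIP-4844 specification:

```python
# EIP-4844 blob base-fee controller (constants from the EIP).
MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477
GAS_PER_BLOB = 2**17            # 131,072 blob gas per blob
TARGET_BLOBS_PER_BLOCK = 3      # pre-full-Danksharding target (max is 6)

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer Taylor-series approximation of factor * e^(numerator/denominator)."""
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = (accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# 50 consecutive full blocks (6 blobs used vs. a 3-blob target) each add
# 3 blobs' worth of excess, compounding the fee by ~12.5% per block.
excess = 50 * TARGET_BLOBS_PER_BLOCK * GAS_PER_BLOB
print(f"Blob base fee after 50 full blocks: {blob_base_fee(excess)} wei per blob gas")
```

Sustained demand above target compounds the price quickly, which is exactly how a bounded supply gets rationed.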

Global scale requires off-chain execution. The ~115 GB/day DA layer is a data backbone for high-throughput L2s like Arbitrum and Optimism. It can support aggregate execution in the tens of thousands of TPS, and far more for validiums that keep data off-chain, but only by pushing computation off-chain and settling proofs on-chain.
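How far off-chain execution stretches the DA budget depends entirely on bytes posted per transaction. A sketch, where the 16-byte compressed transfer is an optimistic assumption from rollup-compression discussions, not a guarantee:

```python
# Aggregate rollup TPS ceiling implied by the DA budget.
DA_BYTES_PER_SECOND = 128 * 128 * 1024 / 12   # 128 blobs x 128 KiB per 12 s slot

for label, tx_bytes in [("optimistic compression", 16),
                        ("conservative compression", 100)]:
    ceiling = DA_BYTES_PER_SECOND / tx_bytes
    print(f"{label} ({tx_bytes} B/tx): ~{ceiling:,.0f} TPS across all rollups")
```

Tens of thousands of TPS fit in on-chain DA; 'millions' only arrives once data leaves Ethereum entirely.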

Evidence: Ethereum's current maximum is ~0.75 MB of blob data per slot. Full Danksharding's 128-blob target (~16 MB) is roughly a 21x increase, and this final multiplier is fixed by the protocol constants.

THE REALITY CHECK

TL;DR for Builders and Investors

Full Danksharding is a massive scaling leap, but it's not a magic bullet for infinite, free transactions.

01

The Bottleneck Shifts to Consensus

Full Danksharding scales data availability (DA) to ~16 MB per slot, but the consensus layer (Beacon Chain) must still attest to the availability of this data. This creates a new, softer bottleneck.

  • Throughput is gated by validator bandwidth and voting latency; the sampling sketch after this card shows why per-validator load stays small.
  • The system is designed for ~1.33 MB/s of sustained data, not infinite blobs.
  • Builders must design for realistic finality windows, not theoretical peak throughput.
~16 MB
Per Slot DA
~1.33 MB/s
Sustained Rate
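Why per-validator load stays manageable even at 16 MB per slot: data availability sampling. A minimal sketch of the confidence math, assuming 2x erasure coding, under which a block is reconstructible if and only if at least half its chunks are available:

```python
import math

# If a block is NOT reconstructible, fewer than half of its chunks are
# available, so each uniformly random sample succeeds with p < 0.5.
# k consecutive successful samples bound the chance of being fooled by 2^-k.

def samples_needed(confidence: float) -> int:
    """Samples needed so P(accepting an unavailable block) <= 1 - confidence."""
    return math.ceil(-math.log2(1 - confidence))

for confidence in (0.99, 0.999999):
    print(f"{confidence} confidence -> {samples_needed(confidence)} samples per slot")
# 7 and 20 samples respectively: kilobytes of bandwidth, not 16 MB.
```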
02

Data is Cheap, Execution is Not

While blob storage costs plummet, executing transactions (EVM ops) on Layer 2 rollups like Arbitrum, Optimism, and zkSync remains the dominant cost.

  • Blob fee markets will emerge, creating variable costs for high-demand blocks.
  • L2 economics shift from paying for DA on L1 to optimizing execution and proving costs.
  • Investors should evaluate L2s on proof efficiency (Validity vs. Fraud) and sequencer design.
>100x
Cheaper DA
Dominant Cost
Execution
03

The L2 Aggregation War

With abundant DA, the competitive edge for rollups moves to proving cost, interoperability, and user experience. This fuels projects like EigenDA, Celestia, and Near DA competing on cost, while Polygon, StarkWare, and zkSync compete on proof systems.

  • Shared sequencers (like Espresso, Astria) will become critical infrastructure.
  • Interoperability stacks (LayerZero, Chainlink CCIP, Wormhole) are essential for cross-L2 liquidity.
  • Build: Focus on vertical integration (app-specific L3) or horizontal aggregation (shared sequencer).
Multi-DA
Ecosystem
Key Battleground
Shared Sequencers
04

The Verkle Proof Challenge

Full Danksharding requires Verkle Trees for statelessness, allowing validators to verify blocks without storing full state. This is a massive, complex upgrade.

  • State expiry may be necessary, complicating contract design and UX.
  • Builders must prepare for new RPC patterns and witness data handling.
  • This is the final, critical dependency before maximal scaling is realized.
Critical Path
Dependency
~2025+
Timeline
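The witness-size arithmetic explains why Verkle trees sit on the critical path. The per-proof sizes below are illustrative orders of magnitude, not client measurements:

```python
# Why stateless verification needs Verkle trees: witness-size arithmetic.
ACCESSES_PER_BLOCK = 5_000      # assumed state reads/writes in a busy block

MPT_PROOF_BYTES = 3_000         # hexary Merkle Patricia: ~8 levels, ~15 x 32 B siblings each
VERKLE_BYTES_PER_ACCESS = 150   # wide nodes + aggregated polynomial openings

mpt_mb = ACCESSES_PER_BLOCK * MPT_PROOF_BYTES / 1e6
verkle_mb = ACCESSES_PER_BLOCK * VERKLE_BYTES_PER_ACCESS / 1e6
print(f"Merkle Patricia witness: ~{mpt_mb:.0f} MB per block")    # too large to gossip every slot
print(f"Verkle witness:          ~{verkle_mb:.2f} MB per block") # small enough to ship with the block
```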