
Full Danksharding and Ethereum’s Throughput Ceiling

Ethereum's Surge ends with Full Danksharding. This is the technical deep dive on how data availability sampling, KZG commitments, and a new fee market will finally break the L1 bottleneck for rollups like Arbitrum, Optimism, and StarkNet.

THE THROUGHPUT CEILING

The Scaling Lie We Keep Telling Ourselves

Full Danksharding's theoretical 100k TPS is a misleading metric that obscures the real, hard constraints on user experience.

The advertised throughput is a lie. Full Danksharding's 100k TPS figure is a data availability (DA) bandwidth metric, not a transaction execution promise. It measures how many blobs the network can store, not how fast L2s like Arbitrum or Optimism can process them.

The bottleneck shifts to L2s. The scaling ceiling becomes the compute and proving capacity of rollup sequencers and verifiers. A ZK-rollup's proving time, not Ethereum's blob space, dictates finality for users on zkSync or Starknet.

Execution is the new scarce resource. Even with infinite blobs, each rollup is a single-threaded execution engine. Parallel EVMs from Monad or Sei, not Ethereum's base layer, are the real path to scaling state execution.

Evidence: Arbitrum One currently processes ~40 TPS. To reach even 1% of Danksharding's theoretical DA capacity (~1,000 TPS), its sequencer would need a 25x performance increase, a hardware and proving challenge the base layer does nothing to solve.
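As a sanity check on that gap, a few lines of arithmetic, using the figures from the paragraph above (real sequencer throughput varies over time):

```python
# Back-of-envelope check on the sequencer gap (figures from the text).
DANKSHARDING_TPS = 100_000   # theoretical DA-limited throughput
ARBITRUM_TPS = 40            # approximate observed sequencer throughput

one_percent_target = DANKSHARDING_TPS * 0.01        # 1,000 TPS
speedup_needed = one_percent_target / ARBITRUM_TPS  # 25.0
print(f"Required sequencer speedup: {speedup_needed:.0f}x")
```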

THE DATA BOTTLENECK

Thesis: Throughput is a Data Problem, Not an Execution Problem

Ethereum's scaling ceiling is defined by data availability, not computational speed.

Full Danksharding is the endgame. It transforms Ethereum into a data availability layer for rollups, decoupling throughput from mainnet execution. The blob-carrying capacity of the consensus layer becomes the primary scaling variable.

Rollups are execution engines. Chains like Arbitrum and Optimism already process thousands of transactions per second (TPS) off-chain. Their bottleneck is the cost and speed of posting compressed transaction data back to Ethereum for security.

The 1.3 MB/s target is the metric. Full Danksharding aims for roughly 16 MB of blob data per 12-second slot, about 1.3 MB/s. This data bandwidth directly determines the aggregate TPS all rollups can sustain while remaining trust-minimized.

Evidence: Proto-Danksharding (EIP-4844) proves the model. Blobs reduced L2 transaction costs by over 90%. This empirical data cost reduction validated the core thesis before full implementation.
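The bandwidth-to-TPS conversion can be sketched directly. The per-transaction byte count below is an assumption, not a protocol constant; highly compressed rollup transactions are often estimated near 16 bytes, which is roughly what the headline 100k figure presumes:

```python
# Rough aggregate-TPS estimate from DA bandwidth (illustrative tx size).
BLOB_BYTES = 131_072        # 128 KB per blob (EIP-4844)
BLOBS_PER_SLOT = 128        # full Danksharding target discussed in the text
SLOT_SECONDS = 12
COMPRESSED_TX_BYTES = 16    # assumed best-case compressed rollup transaction

bandwidth = BLOB_BYTES * BLOBS_PER_SLOT / SLOT_SECONDS  # ~1.4 MB/s
tps = bandwidth / COMPRESSED_TX_BYTES                   # ~87k TPS
print(f"DA bandwidth: {bandwidth/1e6:.2f} MB/s, aggregate rollup TPS: {tps:,.0f}")
```

At a more conservative 150 bytes per transaction the same bandwidth supports under 10k TPS, which is why compression, not blob count alone, decides the headline number.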

ETHEREUM'S ROAD TO 100K TPS

The Data Availability Bottleneck: EIP-4844 vs. Full Danksharding

A technical comparison of Ethereum's interim and final data availability scaling solutions, detailing the path from ~0.1 MB to ~16 MB of data per block.

| Core Metric / Feature | Pre-4844 (Calldata) | Proto-Danksharding (EIP-4844) | Full Danksharding |
| --- | --- | --- | --- |
| Data Capacity per Block | ~0.09 MB (90 KB) | ~0.375 MB target / ~0.75 MB max | ~16 MB (128 blobs * 0.125 MB) |
| Target Throughput (TPS) | ~15-30 | ~100-200 | ~100,000+ |
| Data Storage Duration | Permanent (on-chain) | ~18 days (ephemeral) | ~18 days (ephemeral) |
| Cost Reduction vs. Calldata | 1x (baseline) | ~10-100x | ~100-1000x |
| Consensus Layer Bloat | High (linear growth) | Low (pruned after ~18 days) | Negligible (pruned after ~18 days) |
| Key Dependencies | None | Blob-carrying transactions | Data Availability Sampling (DAS), Proposer-Builder Separation (PBS), full shard implementation |

THE SCALING ENGINE

Deconstructing the Danksharding Machine

Full Danksharding redefines Ethereum's scaling ceiling by separating data availability from execution, enabling a new class of high-throughput Layer 2s.

Data Availability Sampling (DAS) is the cryptographic breakthrough that makes Danksharding viable. It allows nodes to verify that data exists by randomly sampling tiny chunks, eliminating the need for any single node to download the full ~16 MB of blob data in each block. This creates a trust-minimized data layer.
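The sampling argument can be made concrete. A minimal sketch, assuming 2x erasure coding so that an adversary must withhold at least half of the extended data to make any of it unreconstructable:

```python
# Why a handful of random samples suffices: with 2x erasure coding, withheld
# data means >= 50% of extended chunks are missing, so each uniform random
# sample independently hits a missing chunk with probability >= 1/2.
def detection_probability(samples: int, withheld_fraction: float = 0.5) -> float:
    """Probability that at least one of `samples` random chunks is missing."""
    return 1 - (1 - withheld_fraction) ** samples

for k in (10, 20, 30):
    print(f"{k} samples -> detection probability {detection_probability(k):.10f}")
# 30 samples already push the miss probability below one in a billion.
```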

Proto-Danksharding (EIP-4844) is the production testbed for this new architecture. It introduces blob-carrying transactions, providing L2s like Arbitrum and Optimism with a dedicated, low-cost data lane. This separates the economic model of data posting from gas fees, a critical prerequisite for the full system.

The throughput ceiling shifts from compute to bandwidth. Full Danksharding targets ~16 MB of blob data per slot (~1.3 MB/s), enough data bandwidth to support on the order of 100,000 TPS across ZK-rollups. The bottleneck is no longer Ethereum's execution layer but the network's ability to propagate and sample this data globally.

Evidence: The current target of 3 blobs per slot in EIP-4844 is ~0.375 MB per slot, or ~0.03 MB/s. Full Danksharding's ~16 MB per slot (~1.3 MB/s) represents a ~43x increase in raw data bandwidth available to rollups, fundamentally changing the scaling calculus for protocols like StarkNet and zkSync.

FULL DANKSHARDING

The Bear Case: What Could Derail The Surge?

Full Danksharding is the endgame for Ethereum's scaling, but its multi-year roadmap is fraught with technical and economic risks that could cap throughput.

01

The Data Availability Bottleneck: Even 128 Blobs Aren't Enough

Full Danksharding targets 128 data blobs per slot, a ~43x increase over proto-danksharding's 3-blob target. However, global demand for cheap, secure block space is nearly infinite.
- Competing Ecosystems: Solana, Monad, and high-throughput L2s like zkSync Hyperchains will continue to siphon demand, but also set a competitive ceiling on acceptable fees.
- Exponential Demand: A single viral app (e.g., a fully on-chain game or social feed) could saturate blobs, recreating the fee market dynamics EIP-4844 aimed to solve.

128
Blobs/Slot
~43x
DA Capacity
02

The Validator Hardware Crisis

Processing and propagating ~16 MB of data every 12 seconds (128 blobs) demands a radical shift in validator infrastructure.
- Minimum Specs Spike: Requirements will leap from consumer-grade hardware to professional, high-bandwidth setups, potentially centralizing consensus among large operators.
- P2P Network Strain: The existing gossip layer may buckle under the load, requiring new distribution protocols like PeerDAS on top of libp2p, a multi-year engineering challenge in itself.

~16 MB
Data/Slot
12s
Propagation Window
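The bandwidth demand is easy to quantify. In the sketch below, the gossip amplification factor is an illustrative assumption (nodes retransmit duplicate chunks to peers), not a measured value:

```python
# Sustained bandwidth for a node receiving all blob data (illustrative).
BLOB_BYTES = 131_072
BLOBS_PER_SLOT = 128
SLOT_SECONDS = 12
GOSSIP_AMPLIFICATION = 8  # assumed duplicate-transmission factor on the p2p layer

raw_mbps = BLOB_BYTES * BLOBS_PER_SLOT * 8 / SLOT_SECONDS / 1e6  # ~11.2 Mbit/s
print(f"Raw: {raw_mbps:.1f} Mbit/s, "
      f"with gossip overhead: {raw_mbps * GOSSIP_AMPLIFICATION:.0f} Mbit/s")
```

Even the raw figure is sustained, not burst, and it is precisely what DAS is designed to avoid: sampling nodes download only a few chunks per slot instead of the full 16 MB.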
03

The L2 Centralization Trap

Danksharding's success is predicated on a vibrant, decentralized L2 ecosystem. The current reality is trending in the opposite direction.
- Sequencer Oligopoly: Major rollups like Arbitrum, Optimism, and Base run single, centralized sequencers. Full Danksharding makes them bigger, not more distributed.
- Proposer-Builder Separation (PBS) for L2s: Without enforceable decentralized sequencing (e.g., built on EigenLayer restaking), the economic benefits of cheap blobs accrue to L2 treasuries, not end-users.

>80%
Sequencer Centralization
Oligopoly
L2 Market Structure
04

The Cross-Rollup Liquidity Fragmentation Endgame

While Danksharding enables thousands of rollups, it does nothing to solve liquidity fragmentation. An ecosystem of 10,000 sovereign L2s is a UX and capital efficiency nightmare.
- Interop Lag: Bridges like LayerZero, Axelar, and Wormhole add latency and trust assumptions, breaking composability.
- Intent-Based Band-Aids: Systems like UniswapX and CowSwap abstract fragmentation away but rely on centralized solvers, trading decentralization for UX.

10k+
Potential L2s
High Latency
Interop Cost
05

The Cryptoeconomic Security Dilution

Ethereum's security is a function of ETH staked versus value secured. Danksharding's primary goal is to reduce L2 costs, which will drive massive value onto L2s.
- Security Liability Grows: If L2 TVL grows 100x but ETH staked grows only 10x, the economic security ratio deteriorates tenfold.
- Restaking Overload: Projects like EigenLayer attempt to re-hypothecate security, but they create systemic risk and may not scale to secure all L2s and AVSs.

TVL 100x
Value at Risk
Stake 10x
Security Backing
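The dilution argument in numbers, using the hypothetical growth multiples from the panel above:

```python
# Security ratio = stake backing / value secured (normalized, hypothetical).
tvl_now, stake_now = 1.0, 1.0
tvl_later = tvl_now * 100    # L2 value secured grows 100x
stake_later = stake_now * 10  # ETH staked grows 10x

ratio_now = stake_now / tvl_now      # 1.0
ratio_later = stake_later / tvl_later  # 0.1
print(f"Security ratio falls from {ratio_now:.2f} to {ratio_later:.2f} "
      f"(a {ratio_now / ratio_later:.0f}x deterioration)")
```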
06

The Timeline Risk: Competitors Move Faster

Full Danksharding is a 5+ year roadmap. The market may not wait.
- Solana's Execution: Already delivers ~5k TPS with synchronous composability, a benchmark Ethereum's rollup-centric model cannot match directly.
- Modular Alternative Maturity: By the time Danksharding ships, Celestia, EigenDA, and Avail may have cemented themselves as the standard data availability layers, making Ethereum's integrated DA a costly premium option.

5+ Years
Roadmap
~5k TPS
Solana Today
THE THROUGHPUT CEILING

The Post-Danksharding Landscape: An L2 Superhighway

Full Danksharding redefines Ethereum's capacity, not as a direct scaling solution for L1, but as the foundational data layer for a new class of high-throughput L2s.

Full Danksharding is a data availability (DA) upgrade. It transforms Ethereum into a hyper-scalable data layer by introducing data availability sampling (DAS) and blob-carrying transactions. This separates data publishing from execution, allowing L2s to post massive amounts of data cheaply without congesting the main chain.

The throughput ceiling shifts to L2s. Ethereum L1 execution remains limited to ~15-45 TPS. The new bottleneck becomes the proving capacity of L2 sequencers and the bandwidth of validity/zk-proof systems. The competition moves to L2s like Arbitrum, Optimism, and zkSync to process and prove transactions derived from abundant, cheap blob data.

This creates a superhighway, not a faster car. The paradigm shifts from scaling a single chain to optimizing a modular stack. L2s become specialized execution lanes, while Ethereum provides unified security and settlement. This architecture enables massively parallel execution across hundreds of chains, with finality anchored to L1.

Evidence: Blob capacity is the new metric. Post-Danksharding, the key constraint is the number of 128 KB data blobs per slot (target 64, maximum 128). At the maximum this provides ~1.3 MB/s of raw data bandwidth for L2s, a >100x increase over pre-Danksharding calldata capacity, directly enabling sub-cent transaction fees on high-volume L2s.

THE ROAD TO 100K TPS

TL;DR for the Time-Poor CTO

Full Danksharding is Ethereum's endgame scaling architecture, moving from monolithic to modular execution and data availability.

01

The Problem: Monolithic Data Bloat

Today, every Ethereum node must store all transaction data forever, creating a ~1 TB chain that grows by ~100 GB/year. This is the primary bottleneck, capping throughput at ~15-30 TPS and keeping fees volatile.

~100 GB/yr
Chain Growth
~30 TPS
Current Ceiling
02

The Solution: Data Availability Sampling (DAS)

Full Danksharding turns data storage into a probabilistic sampling problem. Light clients verify data availability by randomly sampling small chunks, enabling secure scaling without requiring full nodes.
- Enables 100K+ TPS for L2s like Arbitrum, Optimism, zkSync
- Reduces L2 fees by >100x by decoupling execution cost from L1 storage cost

100K+
Effective TPS
>100x
Cheaper L2 Fees
03

The Bridge: Proto-Danksharding (EIP-4844)

The critical interim step, introducing blob-carrying transactions. This creates a dedicated, cheap data market for rollups, separate from mainnet execution.
- Blobs are ephemeral, deleted after ~18 days
- Targets ~$0.001 per transaction for L2s post-full implementation
- Directly enables validium and optimistic rollup scaling

~$0.001
Target L2 Tx Cost
~18 days
Blob Lifetime
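The ~18-day figure falls out of the consensus-layer retention parameter (MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS = 4096 in the Deneb specs):

```python
# Deriving the "~18 days" blob retention window from consensus parameters.
EPOCHS_RETAINED = 4096   # MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
SLOTS_PER_EPOCH = 32
SLOT_SECONDS = 12

days = EPOCHS_RETAINED * SLOTS_PER_EPOCH * SLOT_SECONDS / 86_400  # ~18.2
print(f"Blob retention: {days:.1f} days")
```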
04

The New Economic Model: Blob Gas

Introduces a separate blob gas market, decongesting the EVM execution gas market. This creates predictable, low-cost data availability for rollups while preserving Ethereum's fee market for settlement and consensus.
- Prevents L2 spam from affecting L1 apps
- Incentivizes professional blob data providers

Decoupled
Gas Markets
Predictable
L2 DA Pricing
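The blob gas market can be sketched from the EIP-4844 specification: the blob base fee is an exponential function of accumulated "excess blob gas" (demand above the per-block target), evaluated with the spec's integer approximation:

```python
# Sketch of the EIP-4844 blob base fee rule (constants from the EIP).
MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477
GAS_PER_BLOB = 131_072
TARGET_BLOB_GAS_PER_BLOCK = 393_216  # 3 blobs

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator), per the EIP."""
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    """Blob base fee in wei per blob gas, given accumulated excess blob gas."""
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS, excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# Sustained demand above the 3-blob target accumulates excess gas and
# raises the fee exponentially; at the target, excess stays flat.
for excess_blobs in (0, 100, 1000):
    print(f"excess of {excess_blobs} blobs -> fee {blob_base_fee(excess_blobs * GAS_PER_BLOB)}")
```

This is what "decoupled gas markets" means mechanically: blob pricing responds only to blob demand, so an NFT mint on L1 cannot spike L2 data costs, and vice versa.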
05

The Security Trade-off: Data Availability Committees vs. DAS

Current validiums and sovereign rollups rely on trusted Data Availability Committees (DACs). Full Danksharding's cryptographic DAS makes these chains trust-minimized, removing a key security assumption for DAC-based systems like Immutable X and dYdX v3.

Trust-Minimized
DA Security
Eliminates DACs
Key Benefit
06

The Timeline & Dependency Chain

This is a multi-year rollout. Proto-Danksharding (2024) is live. Full Danksharding requires:
- EVM Object Format (EOF) for more efficient contract bytecode
- PeerDAS for robust peer-to-peer blob distribution
- Full implementation unlikely before 2026

Live
EIP-4844
2026+
Full Target
Full Danksharding: Ethereum's Final Throughput Solution | ChainScore Blog