
Full Danksharding: The Block Size Reality Check

A cynical but optimistic breakdown of how Full Danksharding redefines block space, why it's not about raw throughput, and what it actually means for rollups like Arbitrum, Optimism, and zkSync.

Introduction: The Block Size Reality

The Biggest Misconception in Ethereum Scaling

Full Danksharding does not increase Ethereum's base layer block size; it creates a marketplace for data availability.

Full Danksharding is not a block size increase. The core misconception is that Danksharding will make mainnet blocks larger. It will not. It creates a separate data availability layer where rollups post data blobs, while the execution layer's 30M gas limit remains the bottleneck for smart contract logic.

The scaling target is data bandwidth, not execution. The upgrade's goal is to lower data costs for rollups like Arbitrum and Optimism, not to make Ethereum L1 process more transactions. The metric that matters is blob bandwidth, targeting ~1.3 MB/s, which decouples data capacity from execution gas.

Evidence: Post-Dencun, proto-danksharding introduced blob space as a separate resource. The current target is 3 blobs/block (~0.375 MB), but the design scales to 64 blobs/block (~8 MB) in Full Danksharding. This is a 20x increase in data capacity, while L1 gas limits see only marginal adjustments.
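
For concreteness, here is the arithmetic behind those figures: a minimal sketch using the 128 KiB blob size from EIP-4844. The 64-blob figure is the Full Danksharding target assumed in this article, not a finalized protocol parameter.

```python
# Back-of-the-envelope check of the blob figures quoted above.
# Blob size (128 KiB) follows EIP-4844; the 64-blob target is this article's
# assumed Full Danksharding parameter, not a finalized spec value.

BLOB_SIZE_BYTES = 128 * 1024  # 4096 field elements * 32 bytes each

def capacity_mb(blobs_per_block: int) -> float:
    """Blob data capacity per block, in (binary) megabytes."""
    return blobs_per_block * BLOB_SIZE_BYTES / (1024 ** 2)

proto = capacity_mb(3)    # Dencun / proto-danksharding target: 3 blobs per block
full = capacity_mb(64)    # assumed Full Danksharding target: 64 blobs per block

print(f"Proto-danksharding target: {proto:.3f} MB/block")  # ~0.375 MB
print(f"Full Danksharding target:  {full:.1f} MB/block")   # ~8.0 MB
print(f"Capacity increase:         ~{full / proto:.0f}x")  # ~21x
```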

Deep Dive: The DA Reality

From Execution to Attestation: The Data Availability Engine

Full Danksharding redefines block size by separating data attestation from execution, making scalability a function of network consensus.

Full Danksharding decouples execution and data. The protocol scales block size by distributing data across a Data Availability Sampling (DAS) network. Execution layers like Arbitrum or Optimism pull only the data they need, eliminating the requirement for every node to download the entire block.

The real limit is attestation bandwidth. The Ethereum consensus layer does not process transactions; it attests to the availability of data blobs. Scalability is capped by the bandwidth of the ~1 million validators performing DAS, not by any single node's capacity.

This creates a new economic model for rollups. Projects like Celestia and EigenDA pioneered external DA, but native Danksharding integrates the market. Rollups pay for blob space in EIP-4844 fee markets, competing for a resource now bounded by cryptographic attestation, not hardware.
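
Since the EIP-4844 fee market is doing the economic work here, a compact sketch of its update rule may help. The constants and the fake_exponential helper follow the EIP-4844 specification as deployed at Dencun; Full Danksharding is expected to reuse the mechanism with a higher blob target, so treat the target values as adjustable assumptions.

```python
# Sketch of the EIP-4844 blob fee market referenced above. Constants and the
# fake_exponential helper follow the EIP-4844 spec (Dencun parameters); Full
# Danksharding is expected to reuse the mechanism with a higher blob target.

GAS_PER_BLOB = 2 ** 17                      # 131,072 blob gas per blob
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB
MIN_BASE_FEE_PER_BLOB_GAS = 1               # wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator), per EIP-4844."""
    i, output, numerator_accum = 1, 0, factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def next_excess_blob_gas(parent_excess: int, parent_blob_gas_used: int) -> int:
    """Blob demand above the per-block target accumulates as 'excess blob gas'."""
    if parent_excess + parent_blob_gas_used < TARGET_BLOB_GAS_PER_BLOCK:
        return 0
    return parent_excess + parent_blob_gas_used - TARGET_BLOB_GAS_PER_BLOCK

def blob_base_fee(excess_blob_gas: int) -> int:
    """Price per unit of blob gas (wei) rises exponentially with sustained excess."""
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS, excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# Example: rollups fill every block to the 6-blob maximum for 100 blocks,
# so demand sits persistently above the 3-blob target and the fee ratchets up.
excess = 0
for _ in range(100):
    excess = next_excess_blob_gas(excess, 6 * GAS_PER_BLOB)
print(blob_base_fee(excess))  # grows roughly as e^(excess / update_fraction)
```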

Finality Layer Comparison

Block Space Evolution: From Monolith to Modular

Comparing block space architectures by core scaling metrics and trade-offs.

| Core Metric / Feature | Monolithic (e.g., Solana, BNB Chain) | Rollup-Centric (e.g., Arbitrum, Optimism) | Full Danksharding (Ethereum Roadmap) |
| --- | --- | --- | --- |
| Theoretical Max Block Size | ~80 MB (Solana) | ~2 MB (Arbitrum Nitro) | ~1.3 MB (current) → ~128 MB (target) |
| Data Availability (DA) Source | Integrated into L1 consensus | Off-chain (e.g., EigenLayer, Celestia) or calldata | Integrated via Proto-Danksharding (EIP-4844) blobs |
| State Growth Management | State rent, aggressive pruning | Sequencer-level compression, fraud proofs | Statelessness, Verkle trees, historical expiry |
| Validator/Proposer Hardware Requirement | High (consumer-grade server, 128+ GB RAM) | Low (rollup sequencer) / High (L1 DA consensus) | Moderate (Beacon Chain validator) / High (block builder) |
| Time to Data Finality (for L2s) | N/A (single-layer finality) | ~1 hour (via fraud/validity proof challenge window) | < 1 min (via blob confirmation + proof submission) |
| Cross-Domain Composability | Native, synchronous | Asynchronous via bridging (e.g., Across, LayerZero) | Native via shared consensus and synchronous cross-rollup proofs |
| Primary Scaling Bottleneck | Node hardware & network bandwidth | DA layer throughput & proof verification cost | Blob propagation bandwidth & proof aggregation latency |

Counter-Argument: The Hardware Reality

The Validator's Burden: Steelmanning the Centralization Critique

Full Danksharding's data availability scaling creates a non-linear hardware burden that risks centralizing validator sets.

The 1.3 MB/s Baseline is the new minimum. A validator storing all blob data for 18 days must handle a constant 1.3 MB/s data stream. This is a fixed cost of participation that eliminates commodity hardware.

Proposer-Builder Separation (PBS) becomes mandatory, not optional. The computational load of constructing a valid data block carrying 64 blobs (~8 MB) centralizes block production to specialized builder entities like Flashbots.

Data Availability Sampling (DAS) shifts the burden to light clients. While validators sample, Ethereum light clients and Layer 2 sequencers like Arbitrum and Optimism must perform thousands of queries per block to verify availability.

Evidence: The current testnet requirement for a full Danksharding node is 2 TB of NVMe SSD and a 1 Gbps connection. This is a 10x increase over today's mainnet requirements, pricing out home validators.
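
A quick check of that evidence against figures this article already quotes: the sustained 1.3 MB/s blob stream, retained for the ~18-day pruning window. No inputs beyond the article's own numbers.

```python
# Quick check of the storage claim, using only figures already quoted in this
# article: a sustained 1.3 MB/s blob stream retained for the ~18-day window.

SUSTAINED_DA_MB_PER_S = 1.3
RETENTION_DAYS = 18
SECONDS_PER_DAY = 86_400

retained_tb = SUSTAINED_DA_MB_PER_S * SECONDS_PER_DAY * RETENTION_DAYS / 1e6
print(f"Blob data held at any one time: ~{retained_tb:.2f} TB")  # ~2.0 TB,
# consistent with the ~2 TB NVMe requirement cited for a full Danksharding node.
```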

Takeaways: Block Size Reality Check

TL;DR for Protocol Architects

Full Danksharding isn't about bigger blocks; it's about re-architecting data availability to make them irrelevant.

01. The Problem: Monolithic Chains Hit a Wall

Scaling by simply increasing block size is a dead end. It centralizes validation, requiring nodes with terabytes of RAM and multi-gigabit connections, killing decentralization. This is the scalability trilemma in action: raw throughput bought at the direct expense of decentralization.

>1 TB Node RAM · 10+ Gbps Bandwidth
02. The Solution: Data Availability Sampling (DAS)

Instead of downloading the whole block, nodes sample small, random chunks. With enough samples, they can probabilistically guarantee the data is available, which decouples security from block size (see the sampling sketch below).

  • Enables exponential scaling: Blocks can be ~1.3 MB → ~128 MB (blobs) without burdening nodes.
  • Preserves light clients: Enables trust-minimized bridges like Across and LayerZero.
~128 MB Blob Capacity · 30 Samples For Security
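
The sketch below makes that guarantee concrete under the standard DAS assumptions: blob data is erasure-coded with a 2x extension, so a block that cannot be reconstructed has fewer than half of its extended chunks available, and each uniformly random sample therefore passes with probability below 1/2. The 30-sample figure is the one used in this article.

```python
# Probabilistic core of DAS, under the standard assumptions: blob data is
# erasure-coded with a 2x extension, so any block that cannot be reconstructed
# has fewer than half of its extended chunks available, and each uniformly
# random sample therefore hits an available chunk with probability < 1/2.

def false_availability_bound(num_samples: int, available_fraction: float = 0.5) -> float:
    """Upper bound on the chance that withheld data passes every random sample."""
    return available_fraction ** num_samples

for k in (15, 30, 75):
    print(f"{k:>3} samples -> P(withheld data looks available) <= "
          f"{false_availability_bound(k):.2e}")
# 30 samples -> <= 9.31e-10: roughly one in a billion per sampling node, per block.
```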
03. The Reality: Blobs, Not Bigger Blocks

Full Danksharding introduces blob-carrying transactions. Data (blobs) is separate from execution and is only needed for a short window (~18 days). This creates a scalable data layer for rollups like Arbitrum and Optimism.

  • Radical fee reduction: L2s post cheap data blobs, not expensive calldata.
  • Modular future: Ethereum becomes a robust settlement + DA layer.
-99% L2 Costs · 18 Days Data Window
04. The Architect's Mandate: Design for Blobs

Protocols must architect for a blob-native environment. This means batching transactions, optimizing for periodic data posting, and leveraging EIP-4844 (Proto-Danksharding) as the on-ramp. Think Celestia-inspired modular design on Ethereum.

  • Batch aggressively: Amortize fixed posting costs across thousands of user ops (see the amortization sketch below).
  • Assume cheap DA: Enable micro-transactions and complex app logic.
EIP-4844 First Step · 100k TPS Rollup Potential
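
A minimal amortization sketch for the batching point above. Every numeric input (batch size, blob gas price, settlement gas) is hypothetical; only the blob-gas-per-blob constant comes from EIP-4844. The point is structural: the fixed cost of posting a batch divides across every user operation in it.

```python
# Illustrative amortization for a blob-native batch. All inputs below (batch
# size, blob gas price, settlement gas) are hypothetical; only GAS_PER_BLOB is
# an EIP-4844 constant. The fixed cost of posting divides across every user op.

GAS_PER_BLOB = 131_072  # blob gas per blob (EIP-4844)

def cost_per_user_op_gwei(ops_in_batch: int,
                          blobs_used: int,
                          blob_gas_price_gwei: float,
                          settlement_gas: int,
                          l1_gas_price_gwei: float) -> float:
    """Fixed batch cost (blob data + L1 settlement tx) amortized per user operation."""
    blob_cost = blobs_used * GAS_PER_BLOB * blob_gas_price_gwei
    settlement_cost = settlement_gas * l1_gas_price_gwei
    return (blob_cost + settlement_cost) / ops_in_batch

# Hypothetical batch: 5,000 user ops sharing 2 blobs and a 200k-gas settlement tx.
print(f"{cost_per_user_op_gwei(5_000, 2, 0.001, 200_000, 20.0):.1f} gwei per op")
```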
05. The Hidden Challenge: Proposer-Builder Separation (PBS)

Massive blocks create an MEV centralization risk: builders need significant hardware and capital to construct them. PBS (via MEV-Boost today, enshrined later) is a non-negotiable prerequisite. It separates block building (resource-intensive) from proposing (decentralized).

  • Prevents validator cartels: No single entity controls the full block.
  • Enables crLists: Censorship resistance lists for fair inclusion.
PBS Prerequisite · crLists Censorship Fix
06. The Bottom Line: It's About Throughput, Not Size

The metric that matters is data bandwidth (MB/s), not static block size. Full Danksharding targets ~1.3 MB/s of sustained data availability. Combined with rollup execution, this enables ~100,000 TPS across the Ethereum ecosystem; the back-of-the-envelope math follows below. The chain's security budget scales with validator count, not blob size.

1.3 MB/s Sustained DA · 100k+ TPS Ecosystem Scale
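
As a sanity check on that bottom line, the arithmetic is one division. The 1.3 MB/s figure is the article's; the ~13 bytes per compressed rollup transaction is an assumption about typical rollup compression, so read the result as an order-of-magnitude estimate rather than a measurement.

```python
# Order-of-magnitude throughput estimate from the DA bandwidth figure above.
# 1.3 MB/s is the article's number; ~13 bytes per compressed rollup transaction
# is an assumed average, so the result is a rough ceiling, not a measurement.

SUSTAINED_DA_BYTES_PER_S = 1.3e6
AVG_COMPRESSED_TX_BYTES = 13   # assumption about rollup compression

ecosystem_tps = SUSTAINED_DA_BYTES_PER_S / AVG_COMPRESSED_TX_BYTES
print(f"~{ecosystem_tps:,.0f} TPS across all rollups")  # ~100,000 TPS
```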