
What Full Danksharding Assumes About Network Bandwidth

Full Danksharding promises 100k+ TPS, but its architecture makes a critical, often overlooked assumption: ubiquitous, high-bandwidth, low-latency global networking. We dissect the data propagation requirements, the reality of global internet infrastructure, and the implications for validators, builders, and the end-user experience.

introduction
THE BANDWIDTH FICTION

The Unspoken Bottleneck

Full Danksharding's scaling promise assumes a global network bandwidth leap that current infrastructure cannot support.

Full Danksharding's core assumption is that every validator can download 128 MB of data every 12 seconds. This requires a sustained bandwidth of ~85 Mbps, a threshold that excludes most residential and even professional staking setups.
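
As a quick back-of-envelope check, the ~85 Mbps threshold follows directly from the figure above; the 128 MB-per-slot volume is this article's stated assumption, not a protocol constant:

```python
# Back-of-envelope check of the figures above. The 128 MB-per-slot volume is
# this article's stated assumption, not a protocol constant.
DATA_PER_SLOT_MB = 128   # blob data a validator is assumed to download per slot
SLOT_TIME_S = 12         # Ethereum slot time in seconds

mb_per_second = DATA_PER_SLOT_MB / SLOT_TIME_S   # ~10.7 MB/s sustained
required_mbps = mb_per_second * 8                # megabytes/s -> megabits/s

print(f"Sustained download: {mb_per_second:.1f} MB/s (~{required_mbps:.0f} Mbps)")
# -> Sustained download: 10.7 MB/s (~85 Mbps)
```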

The validator decentralization trade-off becomes stark. To meet the spec, stakers must migrate to centralized data centers, creating a systemic reliance on AWS/GCP. This centralizes physical infrastructure, contradicting Ethereum's distributed security model.

Current network reality is insufficient. The average global fixed broadband speed is ~100 Mbps, but this is a mean, not a guaranteed minimum. The requirement for consistent, low-latency throughput is what breaks the model for a globally distributed node set.

Evidence from existing rollups. Today, running an Arbitrum or Optimism full node already demands significant bandwidth and storage, leading to operator centralization. Danksharding multiplies this data load by 64, making the bottleneck architectural, not just protocol-level.

deep-dive
THE BANDWIDTH BOTTLENECK

Anatomy of a Blob: From Proto to Full

Full Danksharding's viability is a direct function of global network capacity, not just protocol design.

Full Danksharding requires 1.3 MB/s per node. This is the non-negotiable data bandwidth needed to download all blob data in a 12-second slot, a 100x increase over today's proto-danksharding. The assumption is that consumer-grade internet will universally support this throughput by the late 2020s.
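
As a rough sketch, the ~1.3 MB/s figure can be reproduced from the blob parameters cited in this article (128 blobs per slot at ~128 KB each, 12-second slots); final Full Danksharding parameters may differ:

```python
# Reproducing the ~1.3 MB/s figure from this article's own parameters
# (128 blobs per slot at ~128 KB each). These are the article's assumptions;
# final Full Danksharding parameters may differ.
BLOBS_PER_SLOT = 128
BLOB_SIZE_KB = 128
SLOT_TIME_S = 12

blob_data_mb = BLOBS_PER_SLOT * BLOB_SIZE_KB / 1024   # 16 MB of blob data per slot
per_node_mb_s = blob_data_mb / SLOT_TIME_S            # ~1.33 MB/s per node

print(f"{blob_data_mb:.0f} MB per slot -> {per_node_mb_s:.2f} MB/s per node")
# -> 16 MB per slot -> 1.33 MB/s per node
```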

The bottleneck shifts from compute to I/O. Current scaling limits are CPU-bound by signature verification and state updates. Full Danksharding makes data availability sampling the primary constraint, demanding a new class of high-bandwidth, low-latency p2p networks.

This creates a tiered node hierarchy. Light clients sample data, full nodes validate headers, and a smaller set of builder-suggested relays must ingest the full 1.3 MB/s stream. This mirrors the specialized data availability layer separation seen in Celestia and EigenDA.

Evidence: The current Ethereum p2p layer handles ~0.01 MB/s. Scaling to 1.3 MB/s requires fundamental upgrades to libp2p's gossip protocols, a challenge parallel to the one Solana faced and solved with its Turbine block propagation.
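
To see why the wire-level load is heavier than the raw payload rate, here is an illustrative sketch of gossip amplification; the mesh degree below is a hypothetical parameter, not a measured gossipsub setting:

```python
# Illustrative sketch: in a gossipsub-style mesh, each node re-forwards data to
# several peers, so wire-level bandwidth is a multiple of the raw payload rate.
# MESH_DEGREE is a hypothetical parameter, not a measured gossipsub setting.
RAW_PAYLOAD_MB_S = 1.3   # blob data rate from the paragraph above
MESH_DEGREE = 8          # hypothetical number of mesh peers each node forwards to

wire_load_mb_s = RAW_PAYLOAD_MB_S * MESH_DEGREE
print(f"Naive wire-level load: ~{wire_load_mb_s:.0f} MB/s "
      f"(~{wire_load_mb_s * 8:.0f} Mbps) before deduplication and lazy-pull optimizations")
```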

NETWORK REQUIREMENTS

The Bandwidth Progression: From Today to Full Danksharding

A comparison of the bandwidth assumptions and requirements for Ethereum's scaling roadmap, from the current state to the final Danksharding vision.

| Network Metric | Current (Post-Dencun) | Proto-Danksharding (EIP-4844) | Full Danksharding |
| --- | --- | --- | --- |
| Blob Data Bandwidth per Node | ~0.8 MB/min (Base Layer) | ~1.6 MB/min (Target) | ~78 MB/min (~1.3 MB/s) |
| Blob Count per Block | 3 (Target) | 6 (Target) | 64 (Target) |
| Blob Size | ~128 KB | ~128 KB | ~128 KB |
| Data Availability Sampling (DAS) Required | No | No | Yes |
| Minimum Node Bandwidth (Download) | ~50 Mbps | ~100 Mbps | ~1 Gbps |
| Historical Data Storage (Blobs) | ~20 GB/year | ~40 GB/year | ~2.5 TB/year |
| Full Data Availability (DA) Guarantee | Full Nodes (100% data) | Full Nodes (100% data) | Light Clients (via DAS) |
| Time to Finalize Blob Data | ~2 Weeks (Epochs) | ~2 Weeks (Epochs) | < 1 Hour (Proposed) |

risk-analysis
THE BANDWIDTH BOTTLENECK

The Validator's Dilemma and the L2 Mirage

Full Danksharding's scalability is predicated on a network bandwidth assumption that misaligns with global infrastructure reality.

Full Danksharding assumes universal gigabit bandwidth. The protocol design requires every validator to download 128 data blobs per slot. This mandates a baseline of ~1 Gbps download speed, a threshold that excludes most global node operators and centralizes validation.

The L2 scaling narrative is a mirage. Rollups like Arbitrum and Optimism depend on this data availability layer. If validators cannot keep up, the security model for all L2s fails, making their advertised 100k TPS a theoretical maximum, not a practical one.

This creates a validator's dilemma. Operators must choose between expensive infrastructure upgrades and delegating to professional staking pools like Lido and Rocket Pool, accelerating the very centralization that Ethereum's move to Proof-of-Stake was meant to avoid.

Evidence: Current Ethereum nodes average 50-100 Mbps. The leap to consistent 1 Gbps for 128 blobs represents a 10-20x increase in baseline demand, a requirement not met by major cloud providers' standard instances.

takeaways
NETWORK BANDWIDTH ASSUMPTIONS

TL;DR for Protocol Architects

Full Danksharding's scaling promise is predicated on a massive, non-linear increase in available network bandwidth. Here's what your protocol must assume.

01

The Bandwidth Bottleneck is Solved Off-Chain

Full Danksharding assumes the network can handle ~16 MB of blob data per slot (~1.3 MB/s), while requiring individual nodes to download only a tiny fraction of it. This shifts the scaling bottleneck from on-chain consensus to the peer-to-peer (P2P) data availability network, akin to BitTorrent for blobs.
- Assumption: The P2P layer can propagate 128 MB of data per slot globally without degrading block times.
- Implication: Protocol designs must be robust to variable data retrieval times from this external network.

~1.3 MB/s
Data Target
128 MB/slot
P2P Throughput
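
The "without degrading block times" assumption in this takeaway can be stress-tested with a naive store-and-forward model. The hop count, per-hop latency, and node bandwidth below are illustrative assumptions, and real gossip networks chunk and pipeline data, so this deliberately overstates the delay:

```python
# Naive store-and-forward model of global blob propagation. HOPS, latency, and
# node bandwidth are illustrative assumptions; real gossip chunks and pipelines
# data, so this deliberately overstates the delay.
DATA_MB = 128                # per-slot data volume assumed in this article
NODE_BANDWIDTH_MB_S = 12.5   # a ~100 Mbps node, expressed in MB/s
HOPS = 5                     # hypothetical number of gossip hops to cover the network
PER_HOP_LATENCY_S = 0.1      # hypothetical one-way latency per hop

# Each hop must fully receive the data before relaying it onward.
propagation_s = HOPS * (DATA_MB / NODE_BANDWIDTH_MB_S + PER_HOP_LATENCY_S)
print(f"Naive propagation time: ~{propagation_s:.0f} s vs a ~4 s attestation deadline")
# -> Naive propagation time: ~52 s vs a ~4 s attestation deadline
```

Even allowing for chunking and pipelining, the gap between tens of seconds and a ~4-second attestation deadline is why specialized propagation (erasure-coded chunking, Turbine-style fan-out) is assumed rather than optional.
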
02

Data Sampling Enables Light Client Viability

The core innovation is Data Availability Sampling (DAS), where light clients verify data availability by randomly sampling small chunks. This assumes a high-fidelity, low-latency network for sampling requests.
- Assumption: The network can serve thousands of concurrent, random sampling requests with sub-second latency.
- Implication: Architectures relying on ultra-light clients (e.g., for wallets, oracles) become feasible, reducing reliance on centralized RPCs like Infura or Alchemy.

>99%
DA Security
Sub-Second
Sample Latency
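
The ">99% DA security" figure above falls out of the standard sampling argument. The sketch below assumes a 2x erasure-coding extension in which blob data is recoverable from any 50% of chunks; the sample counts are illustrative:

```python
# Standard DAS confidence argument, assuming a 2x erasure-coding extension in
# which blob data is recoverable from any 50% of chunks. If the data is NOT
# available, over half the chunks are missing, so each uniform random sample
# hits a missing chunk with probability > 1/2.
def das_confidence(successful_samples: int) -> float:
    """Lower bound on confidence that the data is available, given that
    `successful_samples` independent random samples all returned a chunk."""
    return 1 - 0.5 ** successful_samples

for k in (10, 20, 30):
    print(f"{k} samples -> confidence > {das_confidence(k):.10f}")
# 30 successful samples already bound the failure probability below ~1e-9.
```
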
03

Proposer-Builder Separation (PBS) is Non-Negotiable

To prevent centralization from mega-blocks, Full Danksharding requires an enforced PBS ecosystem (e.g., mev-boost, SUAVE). It assumes a competitive market of specialized builders who can assemble large blocks and propagate blob data efficiently.
- Assumption: A robust builder market exists to handle the complexity and capital requirements of blocks carrying ~16 MB of blob data.
- Implication: Protocol economic models must account for builder/relayer markets and potential MEV extraction vectors at this new scale.

Enforced
PBS
Specialized
Builder Market
04

Rollups Become Trivial Data Publishers

For L2s like Arbitrum, Optimism, and zkSync, the model flips. Their primary job shifts from costly calldata publishing to cheap blob publishing. This assumes rollup sequencers have the bandwidth to post massive batches every Ethereum slot.
- Assumption: Rollup nodes can ingest and process data at rates matching Ethereum's ~1.3 MB/s of blob output.
- Implication: L2 throughput will be gated by their own execution environments, not Ethereum's data layer, making execution parallelization critical.

>100x
Cheaper Data
Execution Bound
New Bottleneck
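
The ">100x cheaper data" claim above can be illustrated with a simple cost comparison. The fee levels below are hypothetical assumptions, not live market data, and the resulting ratio moves with both fee markets:

```python
# Illustrative calldata-vs-blob cost comparison for a 128 KB rollup batch.
# Fee levels are hypothetical assumptions, not live market data.
BYTES_PER_BLOB = 131_072         # one blob, ~128 KB
CALLDATA_GAS_PER_BYTE = 16       # L1 cost per non-zero calldata byte
BLOB_GAS_PER_BLOB = 131_072      # EIP-4844 blob gas consumed per blob

exec_base_fee_gwei = 20.0        # hypothetical execution-layer base fee (gwei per gas)
blob_base_fee_gwei = 0.1         # hypothetical blob base fee (gwei per blob gas)

calldata_cost_eth = BYTES_PER_BLOB * CALLDATA_GAS_PER_BYTE * exec_base_fee_gwei / 1e9
blob_cost_eth = BLOB_GAS_PER_BLOB * blob_base_fee_gwei / 1e9

print(f"128 KB batch: ~{calldata_cost_eth:.4f} ETH as calldata vs "
      f"~{blob_cost_eth:.6f} ETH as a blob "
      f"(~{calldata_cost_eth / blob_cost_eth:.0f}x cheaper under these assumptions)")
```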