
Danksharding

Danksharding is a sharding architecture proposed for Ethereum, designed to scale data availability specifically for Layer 2 rollups by introducing large, temporary data blobs that validators attest to.
definition
ETHEREUM SCALING

What is Danksharding?

Danksharding is a proposed, multi-phase upgrade to Ethereum's architecture designed to massively increase network throughput and reduce transaction costs by implementing a novel data availability sampling (DAS) scheme.

Danksharding is a sharding design for Ethereum that fundamentally re-architects how transaction data is posted and verified, with the primary goal of providing cheap, abundant data availability (DA) for Layer 2 rollups. Unlike earlier sharding proposals that would have created multiple execution chains, Danksharding, named after researcher Dankrad Feist, introduces a single, unified block builder role and treats data blobs as a separate resource from execution. This design simplifies consensus and paves the way for rollups to post their data to Ethereum at a fraction of the current cost, which is the primary bottleneck for scaling.

The core innovation enabling Danksharding is data availability sampling (DAS). In this model, the network does not require every node to download all the data in a block. Instead, light clients and validators perform multiple random samplings of small pieces of the data blobs; if enough samples are successfully retrieved, the entire dataset is available with overwhelming statistical confidence. This allows the network to securely scale its data capacity (proposals target up to 128 data blobs per slot, about 16 MB, or roughly 1.3 MB per second) without forcing all participants to process the full dataset, maintaining decentralization.

Implementation occurs in distinct phases. Proto-Danksharding (EIP-4844), implemented with the Dencun upgrade, introduced blob-carrying transactions and a separate fee market for blobs, establishing the foundational architecture. Full Danksharding will later expand this by implementing the full DAS protocol, increasing the number of blobs per block, and distributing blob data across a committee of validators. This phased approach allows critical infrastructure like blob explorers and peer-to-peer networks for blob propagation to mature alongside the protocol changes.
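
To make the "separate fee market" concrete, here is the blob base fee rule as specified in EIP-4844. The constants and the integer-exponential helper are taken from the EIP; this is an illustrative extract, not a full client implementation.

```python
# Blob base fee calculation per EIP-4844 (Dencun). fake_exponential
# approximates factor * e^(numerator/denominator) using integer-only
# arithmetic, so every client computes bit-identical fees.
MIN_BASE_FEE_PER_BLOB_GAS = 1          # wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def base_fee_per_blob_gas(excess_blob_gas: int) -> int:
    """Fee per unit of blob gas; the EIP derives excess_blob_gas from the
    parent header, rising when blocks exceed the blob target and decaying
    when they fall below it."""
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS, excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

print(base_fee_per_blob_gas(0))           # 1 wei at equilibrium
print(base_fee_per_blob_gas(10_000_000))  # ~e^3, about 20 wei after sustained overuse
```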

The ultimate impact of Danksharding is to transform Ethereum into a robust data availability layer for Layer 2s. By providing a high-throughput, low-cost DA guarantee, it enables rollups to settle transactions more efficiently and offer users significantly lower fees. This scaling model, often called "rollup-centric scaling," keeps execution complexity on L2s while leveraging Ethereum's consensus for security and data availability, creating a scalable and modular blockchain ecosystem.

etymology
ORIGIN OF THE TERM

Etymology

The name 'Danksharding' is a portmanteau that combines the first name of its primary researcher with a core blockchain scaling concept, reflecting its collaborative and evolutionary nature within Ethereum's development roadmap.

The term Danksharding is a compound word fusing the first name of Dankrad Feist, a prominent Ethereum researcher at the Ethereum Foundation, with sharding, the database partitioning technique it evolved from. It was coined informally within the Ethereum research community to distinguish this new, simplified design from the more complex data sharding plans that preceded it. The name stuck due to its specificity and the central role of Feist's proposals, notably EIP-4844 (Proto-Danksharding) and the full Danksharding specification.

This nomenclature follows a tradition in Ethereum of naming major upgrades after their key contributors or conceptual themes, such as the Merge or Surge. The 'Dank-' prefix specifically credits the architectural insights that reimagined sharding not as a consensus-layer change for execution, but as a data availability layer for rollups. The shift from traditional sharding to Danksharding represents a pivotal strategic simplification, focusing on providing cheap, abundant data blobs rather than attempting to fragment the state and execution across many chains.

The etymology underscores the proposal's practical intent: to solve the core scalability bottleneck—data availability for Layer 2s—with maximal efficiency. By moving the complexity to a specialized data availability sampling protocol and a new transaction type (blob-carrying transactions), Danksharding maintains the simplicity and security of the existing Ethereum consensus model. The name, therefore, is not just an attribution but a marker of a fundamental conceptual pivot in Ethereum's scaling strategy.

how-it-works
ETHEREUM SCALING

How Danksharding Works

Danksharding is a proposed data availability solution for Ethereum, designed to massively increase network throughput by separating block production from data availability sampling.

Danksharding is a sharding architecture for Ethereum that fundamentally rethinks the relationship between block producers and validators. Unlike earlier sharding proposals, it relies on proposer-builder separation (PBS): a single specialized block builder constructs a complete block containing all transactions and data blobs, while the proposer merely selects and signs the winning header. This centralization of block construction simplifies the consensus process, as validators no longer need to coordinate across multiple shard chains. Instead, their primary role shifts to verifying and attesting to the availability of the data within the proposed block through a process called Data Availability Sampling (DAS).

The core innovation enabling Danksharding is the use of data blobs: large packets of data (up to ~128 KB each) that are committed to the beacon chain but not executed by the Ethereum Virtual Machine (EVM). These blobs are temporarily stored by the network and are essential for Layer 2 rollups, which post their transaction data to them. The block builder publishes a KZG polynomial commitment to each blob, giving validators a compact cryptographic anchor against which individual pieces of the data can be verified. Validators then perform DAS by randomly sampling small portions of the data; if all samples can be retrieved, they can be statistically confident the entire dataset is available, without any single validator needing to download it all.
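
As a rough sanity check on the ~128 KB figure, the blob geometry from EIP-4844 works out as follows (the per-block limits shown are the Dencun launch parameters, which later upgrades adjust upward):

```python
# Blob geometry per EIP-4844: a blob is 4096 elements of the BLS12-381
# scalar field, each serialized as 32 bytes.
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32

blob_size = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT
print(blob_size)  # 131072 bytes = 128 KiB

# Dencun launched with a target of 3 and a maximum of 6 blobs per block,
# i.e. up to 6 * 128 KiB = 768 KiB of blob data per ~12 s slot. Full
# Danksharding targets roughly two orders of magnitude more.
SECONDS_PER_SLOT = 12
max_blob_bandwidth = 6 * blob_size / SECONDS_PER_SLOT
print(f"{max_blob_bandwidth / 1024:.0f} KiB/s")  # ~64 KiB/s at Dencun limits
```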

This design achieves data availability scaling by making the cost of verifying data availability independent of its size. The security model assumes that if a sufficient number of validators perform random sampling, a malicious proposer cannot hide a significant portion of the data without being detected. The implementation is phased, beginning with Proto-Danksharding (EIP-4844), which introduces the blob transaction format and a separate fee market for data, laying the groundwork for the full Danksharding specification where block size can expand to 16 MB or more per slot.

key-features
DANKSHARDING

Key Features

Danksharding is a proposed upgrade to Ethereum's data availability layer, designed to massively scale the network's capacity for rollups by separating data publication from block building.

02

Data Availability Sampling (DAS)

The core scaling mechanism. Instead of every node downloading all data, light clients and validators perform random sampling of small pieces of the data blob. If enough samples are successfully retrieved, the entire blob is statistically guaranteed to be available. This allows the network to securely handle data orders of magnitude larger than a single node could store or process.

03

Separated Block Proposer-Builder Roles

Danksharding introduces a clear separation of duties to prevent centralization and censorship:

  • Block Builders: Compete to create blocks with the most valuable blob transactions.
  • Proposers (Validators): Simply choose the most profitable block header, without seeing the transaction details inside. Censorship resistance is preserved through crLists (censorship resistance lists), which oblige builders to include eligible transactions. A minimal sketch of this header auction follows below.
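
A minimal sketch of that header auction, using hypothetical types rather than any client's actual structures:

```python
# Illustrative sketch (hypothetical types, not a client implementation) of
# the proposer's role under PBS: pick the highest-bidding sealed header.
from dataclasses import dataclass

@dataclass
class BuilderBid:
    builder_id: str
    header_root: bytes   # commitment to the full block; contents unseen
    bid_wei: int         # payment offered to the proposer

def choose_header(bids: list[BuilderBid]) -> BuilderBid:
    """The proposer never inspects transactions or blobs; it signs the most
    profitable header and learns the block body only after committing."""
    return max(bids, key=lambda b: b.bid_wei)

bids = [
    BuilderBid("builder-a", b"\x01" * 32, 42_000_000_000_000_000),
    BuilderBid("builder-b", b"\x02" * 32, 55_000_000_000_000_000),
]
print(choose_header(bids).builder_id)  # builder-b
```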
04

Blob Data vs. Execution Data

A key architectural separation. Blob data is high-volume data for rollup proofs and state commitments, stored outside EVM-accessible state but with guaranteed availability during its retention window. Execution data is the traditional transaction data processed by the EVM. This separation allows the execution layer to remain lean and efficient while the data availability layer scales independently to support hundreds of rollups.

05

Commitment Schemes (KZG & Beyond)

To enable Data Availability Sampling, the system needs a way to commit to blob data compactly. KZG commitments (cryptographic polynomial commitments) create a short proof that can be used to verify any sample of the blob. Future upgrades may explore alternatives like Verkle trees or IPA for different security and efficiency trade-offs.
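
To show the algebra that makes per-sample verification possible, here is a toy version of the KZG opening identity over a plain prime field. Real implementations hide the trusted-setup secret s inside elliptic-curve points and check the same equation with a pairing, so this sketch is illustrative only and not cryptographically secure:

```python
# Toy KZG: commit to a polynomial, open it at one point, verify the opening
# against only the constant-size commitment. Real KZG never reveals s.
P = 2**255 - 19  # any large prime field works for the demo

def poly_eval(coeffs, x):
    """Evaluate a polynomial (coefficients, lowest degree first) at x."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def poly_divide_by_linear(coeffs, z):
    """Synthetic division: return q with p(x) - p(z) = q(x) * (x - z)."""
    q = [0] * (len(coeffs) - 1)
    carry = 0
    for i in range(len(coeffs) - 1, 0, -1):
        carry = (coeffs[i] + carry * z) % P
        q[i - 1] = carry
    return q

s = 123456789                    # trusted-setup secret (public here only for illustration)
blob = [7, 13, 42, 99]           # "blob" = polynomial coefficients
commitment = poly_eval(blob, s)  # real KZG: the curve point [P(s)]_1

z = 5                            # sample point
y = poly_eval(blob, z)           # claimed value of the sample
proof = poly_eval(poly_divide_by_linear(blob, z), s)  # real KZG: [Q(s)]_1

# Verifier checks P(s) - y == Q(s) * (s - z): the content of the pairing check.
assert (commitment - y) % P == (proof * (s - z)) % P
print("sample verified against constant-size commitment")
```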

06

Impact on Rollups & L2 Scaling

Danksharding's primary goal is to become a high-throughput data layer for Layer 2 rollups (Optimistic and ZK). By providing cheap, abundant data availability, it drastically reduces the cost for rollups to post their data to Ethereum, which is their main expense. This enables rollups to scale aggregate throughput by orders of magnitude while inheriting Ethereum's security.
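
A back-of-envelope comparison of posting the same 128 KiB of rollup data as calldata versus as a blob; the gas rules are from EIP-2028 and EIP-4844, but the fee levels are assumptions for illustration:

```python
# Cost of posting 128 KiB of rollup data. Gas rules: 16 gas per non-zero
# calldata byte (EIP-2028) vs. 1 blob gas per byte (EIP-4844). Fee levels
# below are assumed, not live values.
DATA_BYTES = 128 * 1024

calldata_gas = DATA_BYTES * 16        # worst case: every byte non-zero
blob_gas = DATA_BYTES                 # blobs cost 1 blob gas per byte

exec_base_fee_gwei = 20.0             # assumed execution base fee
blob_base_fee_gwei = 20.0             # assumed equal blob fee (upper bound)

calldata_eth = calldata_gas * exec_base_fee_gwei * 1e-9
blob_eth = blob_gas * blob_base_fee_gwei * 1e-9
print(f"calldata: ~{calldata_eth:.4f} ETH, blob: ~{blob_eth:.4f} ETH "
      f"({calldata_eth / blob_eth:.0f}x)")

# Even at identical per-gas prices the blob is 16x cheaper. Because the blob
# fee market reprices independently and usually sits far below the execution
# base fee, realized savings often land in the 10-100x range cited above.
```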

evolution
ETHEREUM ROADMAP

Evolution: From Proto-Danksharding to Full Danksharding

Danksharding is a multi-phase upgrade to Ethereum's architecture, designed to dramatically increase network capacity and reduce transaction costs through a novel data availability sampling approach.

Proto-Danksharding, implemented as EIP-4844 in the Dencun upgrade, is the critical first step. It introduces a new transaction type that carries large "blobs" of data, which are significantly cheaper per byte than calldata. These blobs are not accessible to the Ethereum Virtual Machine (EVM) and are automatically pruned after a short retention period (~18 days). The primary goal is to establish the infrastructure for rollups to post data cheaply, enabling lower L2 fees, without yet implementing the full sharding logic or data availability sampling. A sketch of the new transaction fields follows below.
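
A simplified sketch of what EIP-4844 adds to a transaction, plus the retention arithmetic (the dataclass is illustrative; the field names follow the EIP):

```python
# Sketch of the fields EIP-4844 adds to the new transaction type (0x03),
# alongside the standard EIP-1559 fields, shown as a simplified dataclass.
from dataclasses import dataclass

@dataclass
class BlobTransaction:
    # ... standard fields: chain_id, nonce, gas fees, to, value, data ...
    max_fee_per_blob_gas: int           # bid in the separate blob fee market
    blob_versioned_hashes: list[bytes]  # 32-byte hashes binding the tx to its blobs
    # The blobs themselves travel in a sidecar with their KZG commitments
    # and proofs; only the versioned hashes enter the execution payload.

# Retention: consensus nodes prune blob sidecars after
# MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS (Deneb consensus spec).
EPOCHS = 4096
SECONDS = EPOCHS * 32 * 12   # 32 slots/epoch, 12 s/slot
print(SECONDS / 86400)       # ~18.2 days
```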

The core innovation paving the way for Full Danksharding is data availability sampling (DAS). This allows light nodes to verify that all data in a block is available by randomly sampling small pieces. This is made possible by extending the data with Reed-Solomon erasure coding, so the full dataset can be reconstructed even if a large fraction of the pieces goes missing, and by KZG polynomial commitments, which let a sampler check that each retrieved piece is consistent with the committed data. This lightweight verification is key to scaling without requiring every node to download the entire dataset; the sketch below illustrates the erasure-coding idea.
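
A minimal sketch of that erasure-coding idea, using Lagrange interpolation over a toy prime field (the real protocol works over the BLS12-381 scalar field):

```python
# Treat k data chunks as evaluations of a degree-(k-1) polynomial, publish
# n > k evaluations, and reconstruct from ANY k of them.
P = 257  # small prime field for the demo

def interpolate(points, x):
    """Evaluate at x the unique degree-(k-1) polynomial through `points`."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

data = [10, 20, 30, 40]                  # k = 4 original chunks
points = list(enumerate(data))           # P(0..3) = data
extended = [interpolate(points, x) for x in range(8)]  # n = 8, rate 1/2

# Drop any 4 of the 8 evaluations (50% loss) and recover the original data:
survivors = [(x, extended[x]) for x in (1, 3, 4, 6)]
recovered = [interpolate(survivors, x) for x in range(4)]
assert recovered == data
print("reconstructed from half the samples:", recovered)
```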

Full Danksharding builds upon this foundation by fully distributing the data load. In this final stage, the block builder role is separated from the block proposer (a concept from proposer-builder separation). A single proposer selects a block from competing builders, but the large blob payload (on the order of 16 MB per slot, larger still after erasure-coding expansion) is split across the validator set. Each validator only stores and attests to a small, randomly assigned portion of the total data, relying on DAS for security. This creates a scalable data layer where throughput is limited only by the aggregate bandwidth of the network; an illustrative assignment sketch follows below.
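
Purely as an illustration of the shape of such an assignment (the actual logic in the consensus specs differs), a validator could derive its custody columns deterministically from its index and the slot:

```python
# Hypothetical sketch of deterministic sample assignment. NUM_COLUMNS and
# CUSTODY_COUNT are assumed values, not spec constants.
import hashlib

NUM_COLUMNS = 128    # columns of the extended 2D blob matrix (assumed)
CUSTODY_COUNT = 4    # columns each validator stores and serves (assumed)

def assigned_columns(validator_index: int, slot: int) -> list[int]:
    """Hash (index, slot, counter) until CUSTODY_COUNT distinct columns
    are drawn; every node can recompute anyone's assignment."""
    cols: list[int] = []
    counter = 0
    while len(cols) < CUSTODY_COUNT:
        seed = hashlib.sha256(f"{validator_index}:{slot}:{counter}".encode()).digest()
        col = int.from_bytes(seed[:8], "big") % NUM_COLUMNS
        if col not in cols:
            cols.append(col)
        counter += 1
    return sorted(cols)

print(assigned_columns(validator_index=42, slot=9_000_000))
```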

The transition is designed for minimal disruption. Proto-Danksharding's blob-carrying transactions use the same format planned for the full system, allowing rollups and infrastructure to develop immediately. Later upgrades simply increase the number of blobs per block and fully deploy the distributed validation architecture. This evolutionary path ensures Ethereum can scale its data capacity by orders of magnitude, targeting a goal where data availability is no longer a bottleneck for rollup-centric scaling.

examples
IMPLEMENTATION & IMPACT

Examples & Ecosystem Usage

Danksharding is a multi-phase Ethereum upgrade designed to massively scale data availability. Its components are being deployed incrementally, with significant ecosystem-wide implications.

02

Full Danksharding (The Endgame)

The final vision expands on EIP-4844 to enable data availability sampling (DAS). This allows the network to securely handle ~16 MB of data per slot (~1.3 MB per second). Core mechanics:

  • Sampling: Light clients and validators download small, random samples of blob data to probabilistically verify its availability.
  • 2D Reed-Solomon Erasure Coding: Data is encoded so the full blob can be reconstructed even if 50% of samples are missing, enhancing robustness.
  • Specialized Block Building: Under proposer-builder separation (PBS), a single block builder assembles the entire block, keeping consensus simple for validators.
03

Impact on Layer 2 Rollups

Danksharding is the foundational scalability solution for optimistic rollups (like Arbitrum, Optimism) and ZK-rollups (like zkSync, StarkNet).

  • Cost Reduction: By providing a dedicated, high-volume data layer, blob transaction fees are expected to be 10-100x cheaper than calldata, drastically lowering L2 transaction costs.
  • Throughput: Enables rollups to post more data per block, increasing their transaction capacity without compromising Ethereum's security.
  • Security: Maintains the critical property that rollup data is available for verification and fraud proofs on Ethereum.
04

Client & Infrastructure Changes

Implementing Danksharding requires significant upgrades across Ethereum's client software and node infrastructure.

  • Execution Clients (e.g., Geth, Nethermind): Must process blob transactions and their commitments.
  • Consensus Clients (e.g., Prysm, Lighthouse): Must validate blob data availability and manage the new blob sidecar network messages.
  • Node Operations: Nodes may need to upgrade bandwidth and storage to handle the initial blob data, though DAS in full Danksharding will reduce long-term storage burdens.
05

Data Availability Sampling (DAS) in Practice

The cryptographic primitive that makes full Danksharding secure for light clients. How it works:

  • A blob is expanded using erasure coding into a matrix of data chunks.
  • A validator or light client randomly selects a small set of coordinates (e.g., 30 samples) and requests those specific chunks from the network.
  • If all samples are received, the probability that the full data is available is extremely high (with 30 samples against rate-1/2 coding, well above 99.9999%). This allows trust-minimized verification without downloading gigabytes of data; the short sketch below works through the arithmetic.
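
A minimal sketch of that arithmetic, assuming rate-1/2 erasure coding so that an attacker must withhold at least half of the extended data to prevent reconstruction:

```python
# With rate-1/2 erasure coding, unavailable data means at least 50% of
# chunks are withheld, so each uniformly random sample succeeds with
# probability <= 0.5. After k independent samples:
k = 30
p_fooled = 0.5 ** k   # every sample happens to hit an available chunk
print(p_fooled)       # ~9.3e-10
print(1 - p_fooled)   # confidence the data really is available
```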
06

The Proposer-Builder Separation Model

Danksharding's architecture assumes a robust proposer-builder separation (PBS) ecosystem is in place. This is critical because:

  • Centralized Block Production: A single builder is responsible for constructing the entire large block (including all blobs), which is a complex optimization problem.
  • Decentralized Validation: Validators (proposers) simply choose the most profitable header from builders, preventing them from being forced to build massive blocks themselves.
  • crList: To prevent censorship, builders may be required to include transactions from a censorship resistance list (crList) provided by validators.
ARCHITECTURAL APPROACHES

Comparison: Danksharding vs. Execution Sharding

A technical comparison of two primary sharding paradigms for scaling blockchain data availability and execution.

| Architectural Feature | Danksharding (Proto-Danksharding / EIP-4844) | Traditional Execution Sharding |
| --- | --- | --- |
| Core Function | Data availability scaling | Transaction execution & state scaling |
| Consensus Layer Changes | Minimal (new blob-carrying transaction type) | Extensive (shard coordination, cross-links) |
| Node Resource Requirements | Low (availability verified by random sampling, not full downloads) | High (complex shard coordination; each node validates only a single shard) |
| Cross-Shard Communication | Not required for core function | Complex, requires messaging protocols |
| Data Persistence | Temporary (blobs expire after ~18 days) | Permanent (state is persisted indefinitely) |
| Developer Experience | Simplified (single execution layer) | Complex (must account for sharded state) |
| State of Implementation | Partially deployed (EIP-4844) | Largely theoretical for Ethereum |

DANKSHARDING

Common Misconceptions

Danksharding is a complex Ethereum scaling proposal, often leading to confusion about its purpose, timeline, and relationship to existing technologies. This section clarifies the most frequent misunderstandings.

No, Danksharding is not live; it is a future upgrade planned for Ethereum, following the completion of Proto-Danksharding (EIP-4844). Proto-Danksharding, which introduced blob-carrying transactions, was a critical prerequisite deployed to lay the groundwork. Full Danksharding, which distributes blob data across the validator set and verifies it via data availability sampling (DAS) rather than partitioning the network into separate execution shards, is a later-phase upgrade that requires significant further research and development. It is part of Ethereum's long-term rollup-centric roadmap to scale data availability for Layer 2 solutions.

DANKSHARDING

Frequently Asked Questions

Danksharding is a major evolution in Ethereum's scaling roadmap, designed to drastically increase network capacity. These questions address its core concepts, timeline, and impact on users and developers.

Danksharding is a proposed data availability and scaling design for Ethereum that separates block production from block validation, using blob-carrying transactions to provide cheap, high-volume data for Layer 2 rollups. It works by having a specialized block builder create a block containing both regular transactions and large data blobs, which a single proposer then selects and signs. A decentralized committee of validators attests to the data availability of these blobs using Data Availability Sampling (DAS), ensuring the data can be reconstructed without any single validator downloading it entirely. This architecture allows Layer 2s to post transaction data cheaply while keeping Ethereum's consensus layer lightweight and secure.

Key components include:

  • Proto-Danksharding (EIP-4844): The initial implementation introducing blob transactions.
  • Blobs: Large (~128 KB) packets of data that are not accessible to the EVM but are guaranteed available.
  • KZG Commitments: Cryptographic proofs that allow validators to verify blob data efficiently; the sketch below shows how a commitment is bound into a transaction as a versioned hash.
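
A small sketch of that binding, following the kzg_to_versioned_hash rule in EIP-4844 (the commitment bytes here are a dummy placeholder, not a real commitment):

```python
# A blob transaction references its blobs by versioned hash: version byte
# 0x01 followed by the tail of the SHA-256 hash of the KZG commitment.
import hashlib

VERSIONED_HASH_VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    assert len(commitment) == 48  # a KZG commitment is a 48-byte G1 point
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]

dummy_commitment = b"\xc0" + b"\x00" * 47  # placeholder bytes for the demo
vh = kzg_to_versioned_hash(dummy_commitment)
print(vh.hex())  # 32 bytes, starting with 01
```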