
Danksharding

Danksharding is a sharding design for Ethereum that separates data availability from execution, using a single proposer to order all transactions and data blobs, with validators performing data availability sampling.
ETHEREUM SCALING

What is Danksharding?

Danksharding is a proposed, multi-phase upgrade to Ethereum's architecture designed to massively increase network throughput and reduce transaction fees by implementing a novel data availability sampling (DAS) scheme.

Danksharding is a sharding design for Ethereum that fundamentally re-architects how transaction data is posted and verified, prioritizing data availability over execution. Proposed by Ethereum researcher Dankrad Feist, its core innovation is the blob-carrying transaction (introduced in EIP-4844, or "Proto-Danksharding"), which attaches large data "blobs" to blocks. These blobs are cheap to post and are automatically deleted after a short period, but their cryptographic commitments ensure the data was available, enabling Layer 2 rollups to settle transactions cheaply and securely.

The architecture separates the roles of builders and proposers through a mechanism called proposer-builder separation (PBS). Builders compete to construct the most valuable block, including ordering transactions and attaching data blobs. A single block proposer then selects the best block. This design mitigates centralization and MEV-related risks while enabling the efficient handling of massive data payloads. The system's security relies on validators performing data availability sampling (DAS): each validator randomly checks small pieces of the blob data, which probabilistically guarantees that the entire blob has been published.

The full vision, full Danksharding, will expand this model to many blobs per block, with proposals targeting on the order of 64 to 128 blobs of ~128 KB each, or roughly 8 to 16 MB of data per slot. This creates a dedicated data layer upon which rollups and other scaling solutions can be built. Unlike traditional execution sharding, Danksharding does not shard Ethereum's execution or state; it shards only the data, keeping consensus simple and secure while delegating complex computation to Layer 2 networks.
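As a rough, back-of-the-envelope sketch (assuming the commonly cited parameters of ~128 KiB per blob and a 12-second slot; the blob counts for full Danksharding are still under discussion and used here only for illustration):

```python
# Back-of-the-envelope data-availability capacity, using commonly cited parameters.
# Blob size and slot time are protocol constants; blob counts for full
# Danksharding are illustrative and subject to change.

BYTES_PER_BLOB = 4096 * 32          # 4096 field elements x 32 bytes = 131,072 bytes (~128 KiB)
SLOT_SECONDS = 12

def capacity(blobs_per_slot: int) -> tuple[float, float]:
    """Return (MB per slot, MB per second) for a given blob count."""
    mb_per_slot = blobs_per_slot * BYTES_PER_BLOB / 1_000_000
    return mb_per_slot, mb_per_slot / SLOT_SECONDS

for label, blobs in [("Proto-Danksharding target", 3),
                     ("Proto-Danksharding max", 6),
                     ("Full Danksharding (illustrative)", 128)]:
    per_slot, per_sec = capacity(blobs)
    print(f"{label}: {blobs} blobs -> {per_slot:.2f} MB/slot ({per_sec:.2f} MB/s)")
```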

The implementation is happening in stages. Proto-Danksharding (EIP-4844) deployed the core blob transaction format. Future upgrades will introduce the full PBS, a decentralized builder market, and the scaling of the number of blobs. This incremental approach allows the network to test components like DAS in production while maintaining Ethereum's core security properties, ensuring a smooth transition to a scalable, rollup-centric ecosystem.

TERM ORIGIN

Etymology and Origin

The name 'Danksharding' is a portmanteau that reveals its technical lineage and primary architect. It is not a generic term but a specific proposal tied to Ethereum's scaling roadmap.

The term Danksharding is a compound of 'Dankrad' and 'sharding.' It is named directly after Dankrad Feist, a researcher at the Ethereum Foundation who authored the original proposal. The 'sharding' component refers to the foundational blockchain scaling technique of partitioning the network's data and computational load into smaller, parallel pieces called shards. This naming convention follows a tradition in Ethereum development of crediting core researchers: 'Proto-Danksharding' (EIP-4844) likewise takes its name from its proposers, protolambda and Dankrad Feist.

The concept evolved from earlier, more complex sharding models. Initial Ethereum sharding designs involved multiple execution shards with distinct block producers and complex cross-shard communication. Danksharding, spanning Proto-Danksharding (EIP-4844) and full Danksharding, represents a paradigm shift built around a data availability sampling (DAS) model. Instead of numerous execution shards, it proposes a single, unified block builder and focuses sharding purely on data availability, making blob-carrying transactions the primary vehicle for scalable data availability for Layer 2 rollups.

The 'Dank' prefix has become a recognizable namespace within Ethereum's technical lexicon. Proto-Danksharding (the initial, partial implementation) and the envisioned final stage of Full Danksharding are both subsumed under this title. This distinguishes the Feist-inspired data-availability-centric approach from the older execution sharding paradigms. The name's adoption underscores how a researcher's specific proposal can redefine an entire technical trajectory for a major blockchain.

ETHEREUM SCALING

How Danksharding Works

Danksharding is a major evolution in Ethereum's data availability architecture, designed to dramatically increase network throughput for rollups.

Danksharding is a proposed data availability scheme for Ethereum that fundamentally restructures block production to massively scale data capacity for Layer 2 rollups. Unlike earlier sharding designs, it introduces a proposer-builder separation (PBS) model where a single block proposer selects a complete block from competing block builders, who assemble transactions and the critical data blobs. This consolidates the task of creating a data-available block with specialized builders, simplifying consensus and enabling the network to support dozens of data blobs per slot (proposals have discussed targets of 64 or more), each carrying ~128 KB of data for rollups.

The core innovation is the separation of consensus from data availability sampling (DAS). Validators do not download the entire block; instead, they perform DAS by randomly sampling small chunks of the data blobs over a dedicated peer-to-peer distribution network (designs have explored both distributed hash tables and gossip subnets). Using erasure coding (specifically, Reed-Solomon codes), the data is redundantly encoded so that the full dataset can be reconstructed from any sufficiently large subset of samples. This allows light clients and validators to verify data availability with minimal bandwidth, ensuring that data promised by the block builder was actually published and is accessible.
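The erasure-coding idea can be illustrated with a minimal toy example (using simple Lagrange interpolation over a small prime field, not the production Reed-Solomon code over the BLS12-381 scalar field): the original chunks are treated as evaluations of a polynomial, extra evaluations are appended, and any half of the extended set is enough to reconstruct everything.

```python
# Toy erasure-coding demo over a small prime field (illustrative only; the real
# scheme uses Reed-Solomon codes over the BLS12-381 scalar field with KZG commitments).
P = 65537  # small prime modulus for the toy field

def lagrange_eval(points, x):
    """Evaluate, at x, the unique polynomial through `points` (list of (xi, yi)), mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# Original data: k chunks, interpreted as evaluations of a degree < k polynomial.
data = [12, 345, 6789, 10111]                 # k = 4 chunks
k = len(data)
points = list(enumerate(data))                # evaluations at x = 0..k-1

# Extend to 2k evaluations (the "erasure-coded" blob).
extended = [(x, lagrange_eval(points, x)) for x in range(2 * k)]

# Pretend half of the extended data is withheld; any k surviving points suffice.
survivors = [extended[1], extended[3], extended[5], extended[6]]
recovered = [lagrange_eval(survivors, x) for x in range(k)]
assert recovered == data
print("original data recovered from", len(survivors), "of", 2 * k, "extended chunks")
```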

A key component is the blob-carrying transaction type, introduced with EIP-4844 (Proto-Danksharding). These transactions carry large data blobs that are not accessible to the Ethereum Virtual Machine (EVM) and are stored only for a short period (~18 days). This separates expensive, permanent calldata storage from cheap, temporary data availability, drastically reducing costs for rollups. The blob gas market independently prices this ephemeral data space, preventing it from competing with standard transaction gas.
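The blob gas market can be sketched from EIP-4844's own pseudocode: the blob base fee is an exponential function of "excess blob gas", which accumulates when blocks use more than the target number of blobs and drains when they use fewer. The constants below are the values given in the EIP; function names are lightly adapted.

```python
# Sketch of the EIP-4844 blob base fee calculation (constants from the EIP).
MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477
GAS_PER_BLOB = 2**17  # 131,072 blob gas per blob

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator), per EIP-4844."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = numerator_accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def get_base_fee_per_blob_gas(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# The fee roughly doubles for every ~2.3M blob gas of sustained excess.
for excess in (0, 10 * GAS_PER_BLOB, 100 * GAS_PER_BLOB):
    print(excess, "->", get_base_fee_per_blob_gas(excess), "wei per blob gas")
```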

The full Danksharding vision involves several technical upgrades. The per-shard committees of earlier designs are replaced by a cryptoeconomic guarantee enforced by the entire validator set through DAS. KZG polynomial commitments (or potentially other commitment schemes) provide compact proofs that sampled chunks are consistent with the committed blob and that the erasure coding was applied correctly. The system also requires an efficient peer-to-peer network for propagating blob samples and fork-choice and attestation rules under which a block is only accepted if its data passes availability sampling.

The rollout is phased, with Proto-Danksharding implementing the core transaction format at a scaled-down capacity (a target of three blobs and a maximum of six per slot at launch) as a crucial first step. Full Danksharding will follow, incrementally increasing the blob count and implementing the full sampling and proof systems. This evolutionary approach de-risks development and allows the ecosystem, especially optimistic rollups and ZK-rollups, to adapt. The end goal is to position Ethereum as a secure data availability layer, enabling rollups to process tens of thousands of transactions per second at minimal cost.

ETHEREUM SCALING

Key Features of Danksharding

Danksharding is a proposed data availability sampling (DAS) scheme for Ethereum that separates block building from block proposal to massively scale data capacity for rollups.

01

Proposer-Builder Separation (PBS)

A core architectural change where the role of the block proposer (selecting and proposing the winning block) is separated from that of the block builder (ordering transactions and constructing the block contents, including blobs). This limits proposers' ability to censor or manipulate the contents of data blobs, which are crucial for rollups, while builders compete in a marketplace to create the most valuable block.
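A highly simplified sketch of the proposer's role under PBS, with hypothetical builder bids (real designs add commit-reveal schemes, relays or an enshrined auction, and censorship-resistance lists):

```python
# Minimal PBS sketch: builders submit (header, bid) pairs; the proposer signs the
# header with the highest bid without ever seeing the block body. Data structures
# are hypothetical and for illustration only.
from dataclasses import dataclass

@dataclass
class BuilderBid:
    builder: str
    header_root: str   # commitment to the full block (txs + blobs); body withheld
    bid_wei: int       # payment offered to the proposer

def select_header(bids: list[BuilderBid]) -> BuilderBid:
    """Proposer behavior: pick the most valuable header; block contents stay opaque."""
    return max(bids, key=lambda b: b.bid_wei)

bids = [
    BuilderBid("builder-a", "0xaaa...", 120_000_000_000_000_000),
    BuilderBid("builder-b", "0xbbb...", 95_000_000_000_000_000),
]
winner = select_header(bids)
print("proposer signs header from", winner.builder)
```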

02

Data Availability Sampling (DAS)

A technique that allows light nodes to verify data availability without downloading an entire block. Nodes perform multiple random samplings of small pieces of the data. If all samples are available, they can be statistically confident the entire dataset exists. This is the security foundation that enables blocks carrying large amounts of blob data (~128 KB per blob, many blobs per block) without requiring every node to download and store it all.
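The statistics behind this confidence are straightforward: if so much data is withheld that the blob cannot be reconstructed (less than half of the erasure-coded data is published), each uniformly random sample succeeds with probability below one half, so k successful samples can mislead a node with probability at most (1/2)^k. A minimal calculation under that simplified model:

```python
# Probability bound for data availability sampling: if an adversary withholds
# enough data that reconstruction is impossible (< 50% of the extended data
# available), each independent random sample succeeds with probability < 0.5.
def max_false_availability_prob(num_samples: int) -> float:
    return 0.5 ** num_samples

for k in (8, 16, 30, 75):
    print(f"{k} samples -> false 'available' verdict with prob <= "
          f"{max_false_availability_prob(k):.2e}")
```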

03

Blob-Carrying Transactions

Special transactions (type 0x03, introduced under the EIP title 'Shard Blob Transactions') that carry large data blobs. These blobs are (see the sketch after this list):

  • Inexpensive: Priced in a separate blob gas market rather than regular execution gas, optimized for bulk data.
  • Ephemeral: Stored by consensus nodes for only ~18 days, as rollups only need short-term data availability for fraud/validity proofs.
  • Inaccessible to the EVM: The EVM can only reference a blob via a versioned hash of its KZG commitment; it cannot read the data directly, keeping the execution layer simple.
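As a concrete sketch of the "inaccessible to the EVM" point, EIP-4844 exposes blobs to the execution layer only through versioned hashes of their KZG commitments; the version byte and hashing scheme below follow the EIP.

```python
# Sketch of how EIP-4844 binds a blob to a transaction: the execution layer sees
# only a "versioned hash" of the blob's KZG commitment, never the blob itself.
from hashlib import sha256

VERSIONED_HASH_VERSION_KZG = 0x01

def kzg_to_versioned_hash(kzg_commitment: bytes) -> bytes:
    """versioned_hash = version byte || sha256(commitment)[1:]  (per EIP-4844)."""
    return bytes([VERSIONED_HASH_VERSION_KZG]) + sha256(kzg_commitment).digest()[1:]

# A type-3 (blob-carrying) transaction lists these hashes in blob_versioned_hashes;
# consensus nodes check that the attached blobs and commitments match them.
example_commitment = bytes(48)  # placeholder 48-byte G1 point, for illustration
print(kzg_to_versioned_hash(example_commitment).hex())
```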
04

KZG Polynomial Commitments

A cryptographic primitive used to create a short, binding commitment to a blob's data. This commitment allows for (a toy illustration of the underlying algebra follows this list):

  • Efficient Proofs: Enabling cheap verification that a specific piece of data is part of the original blob.
  • Data Availability Sampling: The mathematical structure of the commitment is essential for the erasure coding and sampling process that underlies DAS security.
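The toy illustration below shows the algebra a KZG opening relies on: claiming that the blob polynomial p satisfies p(z) = y is equivalent to (p(X) - y) being divisible by (X - z). Real KZG checks this identity with elliptic-curve pairings at a secret point; here it is checked at a random field element instead, which is insecure and purely illustrative.

```python
# Toy illustration of the KZG opening identity over a small prime field.
# Real KZG commits to p with elliptic-curve points and verifies the identity via
# a pairing at a hidden point; checking it at a random public point is insecure
# but shows the algebra: p(X) - y = q(X) * (X - z) exactly when p(z) = y.
import random

P = 2**31 - 1  # toy prime modulus (the real field is the BLS12-381 scalar field)

def poly_eval(coeffs, x):
    """Evaluate a polynomial given low-to-high coefficients, mod P (Horner's rule)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def quotient_by_linear(coeffs, z, y):
    """Return q(X) = (p(X) - y) / (X - z) via synthetic division (exact when p(z) = y)."""
    shifted = list(coeffs)
    shifted[0] = (shifted[0] - y) % P
    q = [0] * (len(shifted) - 1)
    carry = 0
    for i in range(len(shifted) - 1, 0, -1):
        q[i - 1] = (shifted[i] + carry) % P
        carry = q[i - 1] * z % P
    return q

blob_poly = [7, 13, 5, 11]            # "blob" as polynomial coefficients (toy)
z = 1234                              # evaluation point the rollup wants opened
y = poly_eval(blob_poly, z)           # claimed value p(z)
q = quotient_by_linear(blob_poly, z, y)

r = random.randrange(P)               # stand-in for the pairing check at the secret point
lhs = (poly_eval(blob_poly, r) - y) % P
rhs = poly_eval(q, r) * (r - z) % P
assert lhs == rhs
print("opening proof verifies: p(z) =", y)
```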
05

Proto-Danksharding (EIP-4844)

The initial, partial implementation of Danksharding, deployed in the Dencun hard fork in March 2024. It introduces blob-carrying transactions and the core blob data market, but does not yet implement full Data Availability Sampling or 2D erasure coding. It lays the foundational transaction format and consensus rules, scaling data availability to a target of ~0.375 MB per block (three blobs) as a stepping stone to full Danksharding.

06

Impact on Rollups (L2s)

Danksharding's primary goal is to provide cheap, abundant data availability for Layer 2 rollups. By moving rollup data from calldata to dedicated blobs, it drastically reduces transaction costs for end-users. This transforms Ethereum into a secure data availability layer, allowing rollups to scale to 100,000+ transactions per second while inheriting Ethereum's security.
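A rough cost comparison under stated assumptions (hypothetical base fees in both markets; 16 gas per non-zero calldata byte and 131,072 blob gas per blob are protocol constants) shows why moving data from calldata into blobs matters:

```python
# Rough comparison of posting ~128 KB of rollup data as calldata vs. as one blob.
# The two base fees below are hypothetical; the point is that blob gas is priced
# in an independent, usually far cheaper market.
DATA_BYTES = 131_072
CALLDATA_GAS_PER_NONZERO_BYTE = 16      # execution-layer rule (worst case, all non-zero bytes)
BLOB_GAS_PER_BLOB = 131_072             # EIP-4844: GAS_PER_BLOB = 2**17

base_fee_wei = 20 * 10**9               # assumed 20 gwei execution base fee
blob_base_fee_wei = 1 * 10**7           # assumed 0.01 gwei blob base fee

calldata_cost = DATA_BYTES * CALLDATA_GAS_PER_NONZERO_BYTE * base_fee_wei
blob_cost = BLOB_GAS_PER_BLOB * blob_base_fee_wei

print(f"calldata: {calldata_cost / 1e18:.4f} ETH, blob: {blob_cost / 1e18:.6f} ETH "
      f"({calldata_cost / blob_cost:.0f}x cheaper under these assumptions)")
```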

ETHEREUM SCALING

Proto-Danksharding (EIP-4844)

An interim scaling upgrade for Ethereum that introduces a new transaction type and data structure to significantly reduce Layer 2 rollup costs.

Proto-danksharding, implemented via Ethereum Improvement Proposal EIP-4844, is a precursor to full danksharding designed to drastically lower data costs for Layer 2 rollups. It achieves this by introducing blob-carrying transactions, which temporarily store large batches of data—known as blobs—in the Beacon Chain for approximately 18 days. Unlike calldata, this blob data is not accessible to the Ethereum Virtual Machine (EVM) and is pruned after this period, which keeps the historical data burden on nodes manageable while providing the data availability that rollups require for security.
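The ~18-day figure follows directly from the consensus parameter MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS = 4096: consensus clients must serve blobs for 4096 epochs of 32 twelve-second slots.

```python
# Where the "~18 days" blob retention window comes from.
EPOCHS = 4096        # MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
SLOTS_PER_EPOCH = 32
SECONDS_PER_SLOT = 12

retention_days = EPOCHS * SLOTS_PER_EPOCH * SECONDS_PER_SLOT / 86_400
print(f"blob retention: ~{retention_days:.1f} days")   # ~18.2 days
```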

The core innovation is the blob, a ~128 KB packet of data attached to a transaction. Rollups use these blobs to post their compressed transaction data, which validators and full nodes must download and make available. The separation of blob data from executable EVM data is key: it allows for a dedicated blob gas market distinct from the standard gas for computation, enabling more predictable and cheaper data posting fees for rollups. This mechanism directly addresses the primary cost driver for end-users on networks like Optimism and Arbitrum.

Proto-danksharding establishes the foundational architecture for the future danksharding vision. It does not yet implement data availability sampling (DAS); every consensus node still downloads every blob in full. What it does provide are the blob transaction format, the KZG commitment scheme, and the blob fee market that full danksharding's sampling network will build on. While full danksharding will expand capacity to many blobs per block and add a decentralized sampling network, EIP-4844 delivers the immediate scaling benefits and a real-world testing ground for these critical components, marking a major step toward Ethereum's scalable, rollup-centric roadmap.

DANKSHARDING

Benefits and Impact

Danksharding is a proposed scaling architecture for Ethereum that fundamentally rethinks data availability to massively increase network throughput. Its primary benefits are centered on enabling cheaper, higher-volume Layer 2 rollup transactions.

01

Exponential Scalability for Rollups

Danksharding's core benefit is providing massive, dedicated data capacity for Layer 2 rollups (like Optimism, Arbitrum, zkSync). By guaranteeing cheap, abundant data availability for rollup transaction batches, it allows rollups to post their data to Ethereum at minimal cost, which directly translates to cheaper fees for end-users. This creates a modular scaling paradigm where execution happens off-chain and Ethereum acts as a secure, high-throughput data layer.

02

Dramatic Reduction in Transaction Fees

The primary driver of high gas fees on Ethereum is competition for limited block space. Danksharding expands this space by orders of magnitude for data. By separating data publication from execution, it drastically lowers the cost for rollups to settle on Ethereum. This cost reduction is passed to users, making activities like DeFi swaps, NFT minting, and gaming transactions economically feasible for a global audience.

03

Enhanced Security Through Data Availability Sampling

Danksharding introduces Data Availability Sampling (DAS), a cryptographic technique that allows light nodes to verify data availability with minimal resources. A node only needs to download a few random samples of the data blob to be confident the entire dataset is available. This maintains Ethereum's decentralized security model while supporting massive data blobs, preventing malicious validators from hiding transaction data.

04

Simplified Consensus with Proposer-Builder Separation

The architecture is built around Proposer-Builder Separation (PBS), where specialized block builders assemble blocks with data blobs and block proposers (validators) simply choose the most profitable one. This separation streamlines consensus, reduces MEV-related complexities, and is essential for efficiently handling the large data loads of Danksharding. Today PBS exists only out-of-protocol (via MEV-Boost); enshrining it in the protocol is a prerequisite for full Danksharding rather than something EIP-4844 (proto-danksharding) already implemented.

05

Paving the Way for Statelessness

By providing robust, verifiable data availability, Danksharding is a critical enabler for Verkle Trees and stateless clients. Future clients will not need to store the entire state; they can rely on the guaranteed availability of data in shard blobs to reconstruct state proofs on-demand. This drastically reduces hardware requirements for node operators, furthering decentralization and network resilience.

ARCHITECTURAL COMPARISON

Danksharding vs. Traditional Sharding

A technical comparison of the two primary sharding paradigms for scaling blockchain data availability and execution.

| Architectural Feature | Danksharding (Proto-Danksharding / EIP-4844) | Traditional Sharding (Execution Sharding) |
| --- | --- | --- |
| Core Scaling Focus | Data Availability (DA) | Execution & Data Availability |
| Shard Data Structure | Blobs (Binary Large Objects) | Independent Blockchains (Shard Chains) |
| Consensus & Finality | Managed by Beacon Chain consensus | Cross-shard consensus required |
| Validator Complexity | Low (all validators verify the Beacon Chain) | High (validators assigned to specific shards) |
| Cross-Shard Communication | Not required for DA; execution is monolithic | Complex, requires messaging protocols |
| Developer Experience | Simplified (single execution layer) | Complex (must design for sharded state) |
| Data Availability Sampling (DAS) | Enabled via blob propagation | Enabled per shard |
| State Management | Unified state on Layer 1 | Fragmented state across shards |

DANKSHARDING

Technical Deep Dive

Danksharding is a proposed, multi-phase scaling architecture for Ethereum designed to massively increase data availability for rollups. This section breaks down its core components, mechanics, and evolutionary path.

Danksharding is a data availability scaling solution for Ethereum that separates block building from block proposal to create high-capacity data blobs for rollups. It works by introducing a new transaction type, blob-carrying transactions, where validators do not execute the data but only attest to its availability. A specialized actor, the block builder, assembles a block containing these blobs, while a separate block proposer (selected via proposer-builder separation, PBS) chooses the most profitable header. The core innovation is data availability sampling (DAS), where light clients and validators sample small, random pieces of the blob to probabilistically verify the entire dataset is published without downloading it all.

DANKSHARDING

Common Misconceptions

Danksharding is a complex, multi-phase upgrade to Ethereum's data availability layer, often misunderstood due to its technical depth and evolving roadmap. This section clarifies the most frequent points of confusion.

Is Danksharding the same as traditional sharding?

No, Danksharding is a specific, simplified form of sharding focused solely on data availability, not on executing transactions. Traditional sharding, as originally proposed for Ethereum, involved splitting the network into multiple chains (shards) that would each process transactions and smart contracts. Danksharding, named after researcher Dankrad Feist, abandons the concept of execution shards. Instead, it proposes a single, high-throughput block builder that produces a block containing blobs of data. The network's validators are then only responsible for confirming the availability of this data via data availability sampling (DAS), making the scaling solution much simpler to implement and secure.

DANKSHARDING

Frequently Asked Questions

Danksharding is a major upgrade to Ethereum's data availability layer, designed to dramatically increase network scalability for rollups. These questions address its core concepts, timeline, and impact.

What is Danksharding and how does it work?

Danksharding is a data availability scheme for Ethereum that provides massive, cheap data capacity for Layer 2 rollups by packaging the network's data load into blobs. It works by introducing a new transaction type, blob-carrying transactions, whose blobs validators attest to without having to download in full or execute. A key component is proposer-builder separation (PBS), where specialized block builders construct blocks with blobs and a single block proposer selects the most profitable one. Validators then only need to sample small, random pieces of each blob to probabilistically guarantee its availability, using data availability sampling (DAS) built on erasure coding.

DANKSHARDING

Further Reading

Danksharding is a proposed scaling architecture for Ethereum that fundamentally rethinks how data is made available for Layer 2 rollups. Explore its core components and the evolutionary path from its predecessor, Proto-Danksharding (EIP-4844).

05

Comparison to Traditional Sharding

Danksharding represents a paradigm shift from earlier Ethereum sharding plans. Key differences:

  • Traditional Sharding: Split the chain into multiple shards, each with its own execution and state. Complex cross-shard communication required.
  • Danksharding (Data-Only Shards): All execution remains on Ethereum's single execution layer. The sharded capacity exists purely as a data availability layer. Rollups post their data there and execute transactions off-chain, which simplifies the consensus model while Layer 2 systems inherit Ethereum's security for scaling.
06

The Path to Full Danksharding

The rollout is a multi-phase process to ensure security and stability:

  1. Proto-Danksharding (EIP-4844): Live. Introduces blobs and the basic framework.
  2. Full Danksharding: Future upgrade. Expands the blob count from today's target of 3-6 toward 64 or more per block (on the order of 8-16 MB of blob data per slot). Implements full Data Availability Sampling and requires Proposer-Builder Separation. The goal is to reduce Layer 2 transaction costs by orders of magnitude by providing massively scalable, cheap data availability, making rollups the primary scaling solution.