Danksharding

Danksharding is Ethereum's full, long-term design for data sharding, which uses data availability sampling and a distributed validator network to massively scale data capacity for rollups.
ETHEREUM SCALING

What is Danksharding?

Danksharding is a proposed, multi-phase upgrade to Ethereum's architecture designed to massively increase network throughput and reduce transaction costs by implementing a novel data availability sampling scheme.

Danksharding is a data sharding design for Ethereum that fundamentally separates data availability from transaction execution to scale the network. Proposed by Ethereum researcher Dankrad Feist, its core innovation is the use of blob-carrying transactions (introduced in EIP-4844, "Proto-Danksharding") and a data availability sampling (DAS) scheme. This allows the network to securely confirm that large amounts of data are available without requiring any single node to download it all, enabling high-throughput layer 2 rollups like Optimism and Arbitrum to post data cheaply.

The architecture relies on a proposer-builder separation (PBS) model, where specialized block builders assemble blocks containing blobs of data, and validators then attest to the availability of that data through random sampling. This design ensures that even validators with modest hardware can participate in securing a chain with vastly increased data capacity. The full vision, sometimes called "full Danksharding," targets roughly 16 MB of blob data per slot, equivalent to about 1.3 MB per second of dedicated data space for rollups.
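As a rough sanity check on those figures, the per-second bandwidth follows directly from the per-slot target and Ethereum's 12-second slot time. The sketch below uses the commonly cited long-term numbers (128 KB blobs, ~16 MB of blob space per slot); they are design targets, not final protocol constants.

```python
# Back-of-the-envelope data throughput for full Danksharding.
# Figures are commonly cited long-term targets, not finalized parameters.
BLOB_SIZE_BYTES = 128 * 1024             # 128 KB per blob (4096 field elements x 32 bytes)
TARGET_DATA_PER_SLOT = 16 * 1024 * 1024  # ~16 MB of blob space per slot
SLOT_TIME_SECONDS = 12

blobs_per_slot = TARGET_DATA_PER_SLOT // BLOB_SIZE_BYTES
throughput_mb_per_s = TARGET_DATA_PER_SLOT / (1024 * 1024) / SLOT_TIME_SECONDS

print(f"{blobs_per_slot} blobs/slot -> {throughput_mb_per_s:.2f} MB/s of rollup data")
# -> 128 blobs/slot -> 1.33 MB/s
```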

Danksharding's primary goal is not to execute more transactions directly on Ethereum's base layer but to become a high-throughput data availability layer for rollups. By providing cheap and abundant space for rollup data—such as Optimistic rollup fraud proofs or Zero-Knowledge rollup validity proofs—it allows these layer 2 solutions to offer users extremely low fees while inheriting Ethereum's security. This modular scaling approach is central to Ethereum's roadmap, often summarized as the "rollup-centric" vision.

The implementation is happening in key phases. The first, Proto-Danksharding (EIP-4844), introduced the foundational blob transaction type and a rudimentary fee market for data. The next phase involves scaling the number of blobs per block and fully implementing data availability sampling and proposer-builder separation. This incremental rollout allows the core protocol and client software to evolve and stabilize while delivering tangible scaling benefits at each step.

Compared to earlier sharding designs that proposed multiple execution shards, Danksharding is considered simpler and more aligned with the rise of rollups. It avoids the complexity of cross-shard communication for execution by focusing solely on scalable data availability. This makes Ethereum's scaling trajectory more straightforward, concentrating execution innovation in the rollup layer while the base layer provides global settlement and data guarantees.

TERM ORIGINS

Etymology and Origin

The name 'Danksharding' is a portmanteau that combines the surname of its principal researcher with a core blockchain scaling concept, reflecting its specific architectural approach to data availability.

The term Danksharding is a compound derived from Dankrad Feist, a prominent researcher at the Ethereum Foundation, and sharding, the database partitioning technique adapted for blockchain scaling. This follows an informal convention in Ethereum research of naming proposals after the people who authored them: the precursor design, proto-danksharding (EIP-4844), is likewise named after the researcher known as protolambda. The name specifically denotes the particular sharding architecture Feist proposed, which diverged significantly from earlier, more complex multi-shard execution models.

The conceptual origin of Danksharding lies in the long-standing Ethereum roadmap goal to scale the network through sharding. Initial plans involved splitting the chain into multiple shard chains, each processing its own transactions and smart contracts. However, this introduced immense complexity around cross-shard communication and composability. Danksharding emerged as a streamlined alternative, proposing that shards should not execute transactions but should instead function purely as data availability layers. This pivot was heavily influenced by earlier work on data availability sampling (DAS) and proto-danksharding, which laid the technical groundwork.

The evolution from proto-danksharding (EIP-4844) to full Danksharding represents a phased deployment strategy. Proto-danksharding, implemented as part of the Dencun upgrade, introduced blob-carrying transactions and temporary data blobs, establishing the fee market and client infrastructure for blob data. Full Danksharding will expand this system by raising the blob count per block from the initial target of 3 (maximum 6) toward a target of around 64, and by implementing data availability sampling across the network of nodes. This origin story highlights Ethereum's iterative, research-driven development process, where complex visions are broken down into practical, incremental upgrades.

DANKSHARDING

Key Features and Design Principles

Danksharding is a proposed data availability and scaling architecture for Ethereum, designed to massively increase network throughput by separating block production from data availability sampling.

01

Data Availability Sampling (DAS)

The core mechanism enabling secure scaling. Validators download only small, random samples of the large data block, allowing them to probabilistically verify its availability without downloading it entirely. This enables blocks to be orders of magnitude larger while keeping node hardware requirements low.

  • Key Innovation: Breaks the linear relationship between block size and node workload.
  • Security Guarantee: With enough samples, validators can detect data withholding attacks with near-certainty.
02

Proposer-Builder Separation (PBS)

A prerequisite architectural separation where block builders (specialized entities) construct full blocks, and block proposers (validators) simply choose the most profitable header. This is critical for Danksharding's security model.

  • Builder Role: Competes to create the most valuable block, including ordering transactions and committing to a large data blob.
  • Proposer Role: Selects the header with the highest bid, without needing to process the full block data.
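The split can be illustrated with a toy auction. This is a conceptual sketch only, not protocol code: real PBS involves relays or in-protocol commitments, signed bids, and payment guarantees, none of which are modeled here.

```python
from dataclasses import dataclass

@dataclass
class BuilderBid:
    builder: str
    header_root: str   # commitment to the full block; the body stays with the builder
    bid_wei: int       # payment offered to the proposer

def propose(bids: list[BuilderBid]) -> BuilderBid:
    """The proposer picks purely on price; it never sees the block body or blobs."""
    return max(bids, key=lambda b: b.bid_wei)

winner = propose([
    BuilderBid("builder-a", "0xaaa...", 42_000_000_000_000_000),
    BuilderBid("builder-b", "0xbbb...", 57_000_000_000_000_000),
])
print(f"proposer signs header {winner.header_root} from {winner.builder}")
```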
03

Blob-Carrying Transactions

The new transaction type that carries large data 'blobs' (e.g., for Layer 2 rollups). These blobs are stored separately from the main execution payload and are subject to different gas pricing and retention rules.

  • Purpose: Dedicated, low-cost data space for rollup proofs and calldata.
  • Ephemeral Storage: Blobs are pruned after ~18 days, as their primary purpose is short-term data availability for Layer 2 state verification.
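For reference, the blob-carrying transaction type introduced by EIP-4844 (type 0x03) adds two fields on top of an EIP-1559 transaction. The sketch below lists them in simplified form: field names follow the EIP, but the access list, signature fields, and canonical encoding are omitted.

```python
from dataclasses import dataclass, field

@dataclass
class BlobTransaction:
    # Standard EIP-1559 fields (abbreviated; access list and signature omitted)
    chain_id: int
    nonce: int
    max_priority_fee_per_gas: int
    max_fee_per_gas: int
    gas_limit: int
    to: bytes                     # blob txs must have a destination (no contract creation)
    value: int
    data: bytes
    # Fields added by EIP-4844
    max_fee_per_blob_gas: int     # bid in the separate blob-gas fee market
    blob_versioned_hashes: list[bytes] = field(default_factory=list)
    # The blobs themselves, plus KZG commitments and proofs, travel in a sidecar
    # on the network; only the 32-byte versioned hashes enter the execution payload.
```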
04

Two-Dimensional Fee Market

Separates the pricing of execution (computation) from data availability (blob space). This prevents congestion in one resource from spilling over and inflating costs for the other.

  • Execution Gas: Pays for EVM computation and storage, as in Ethereum today.
  • Blob Gas: A new fee market specifically for the data blobs in blob-carrying transactions, with its own base fee and priority fee.
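The blob base fee adjusts independently of execution gas, using the same exponential EIP-1559-style mechanism. The sketch below follows the pseudocode in EIP-4844 with the Dencun-era constants (target of 3 blobs per block); these parameters are expected to change as blob capacity grows.

```python
# Blob base-fee calculation, following the EIP-4844 pseudocode
# (Dencun-era constants; subject to change in later upgrades).
MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477
GAS_PER_BLOB = 2**17                       # 131,072 blob gas per blob
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

def next_excess_blob_gas(parent_excess: int, parent_blob_gas_used: int) -> int:
    """Excess blob gas accumulates when blocks run above target and drains below it."""
    return max(parent_excess + parent_blob_gas_used - TARGET_BLOB_GAS_PER_BLOCK, 0)

print(blob_base_fee(0))           # 1 wei per blob gas when usage is at/below target
print(blob_base_fee(10_000_000))  # rises exponentially as excess blob gas accumulates
```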
05

Cryptographic Commitments (KZG)

Uses KZG polynomial commitments (or potentially other schemes) to create a compact cryptographic proof that a large data blob is available and consistent. This allows the data to be verified as 'available' once the commitment is posted, enabling efficient sampling.

  • Function: Creates a short 'fingerprint' (commitment) of the entire data blob.
  • Sampling Foundation: Validators use this commitment to verify their random data samples are correct.
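The polynomial identity behind KZG openings can be shown without any elliptic-curve machinery. The toy sketch below works directly over a prime field with a public evaluation point s, so it is neither hiding nor binding and is emphatically not secure; real KZG keeps s hidden inside group elements produced by the trusted setup and verifies with a pairing. It only illustrates the check q(s)·(s − z) = p(s) − p(z) that lets a verifier confirm one sampled value against a constant-size commitment.

```python
# Toy illustration of the KZG opening identity over a prime field.
# NOT cryptographically secure: the "secret" point s is public here.
P = 2**255 - 19          # a convenient large prime (illustrative choice only)

def poly_eval(coeffs, x):
    """Evaluate a polynomial (coefficients lowest-degree first) at x, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def quotient(coeffs, z):
    """Synthetic division: return q(x) with p(x) - p(z) = q(x) * (x - z)."""
    q = [0] * (len(coeffs) - 1)
    carry = 0
    for i in range(len(coeffs) - 1, 0, -1):
        carry = (coeffs[i] + carry * z) % P
        q[i - 1] = carry
    return q

blob_poly = [7, 3, 0, 9, 1]          # stand-in for a blob encoded as polynomial coefficients
s = 123456789                        # toy "trusted setup" point (kept secret in real KZG)
commitment = poly_eval(blob_poly, s) # real KZG: an elliptic-curve point, not a field element

# Prover opens the polynomial at a sampled position z.
z = 42
y = poly_eval(blob_poly, z)
proof = poly_eval(quotient(blob_poly, z), s)   # q(s); again a curve point in real KZG

# Verifier checks q(s) * (s - z) == p(s) - p(z); done via a pairing in real KZG.
assert proof * (s - z) % P == (commitment - y) % P
print("opening verified:", y)
```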
06

Evolution from Proto-Danksharding (EIP-4844)

Danksharding is the full realization of a multi-phase roadmap. Proto-Danksharding (EIP-4844) was the first major step, introducing the core architecture—blob-carrying transactions and a separate fee market—without yet implementing data availability sampling.

  • Phase 1 (EIP-4844): Lay the groundwork with blobs and a new transaction type.
  • Full Danksharding: Later phases will increase blob count and implement DAS, enabling the full scaling vision.
ETHEREUM SCALING

How Danksharding Works

Danksharding is a proposed data availability architecture for Ethereum designed to massively increase network throughput and reduce transaction costs by separating block production from data attestation.

Danksharding is a data availability-focused scaling solution for Ethereum that fundamentally restructures the relationship between block builders and validators. Its core innovation, proposer-builder separation (PBS), creates a specialized role: the block builder who assembles a block with transactions and a large "blob" of data. A separate block proposer (a validator) then simply selects the most profitable block from a builder's auction, attests to the availability of its data, and publishes it to the network. This separation allows builders to handle the computationally intensive task of constructing massive blocks without requiring validators to process them fully.

The system's scalability stems from its use of data availability sampling (DAS). Instead of downloading and verifying an entire large data blob, validators and light clients randomly sample small chunks of the blob's erasure-coded data. Through cryptographic proofs such as KZG commitments, they gain a statistical guarantee, with high probability, that the entire dataset is available for reconstruction. This allows the network to securely support on the order of 16 MB of blob data per slot, a massive increase over the roughly 0.1 MB of calldata in typical pre-Danksharding blocks, without requiring any single node to process the full dataset.

A critical component is the blob-carrying transaction. Users pay for "blob gas" to post data, which the network stores only for a short data availability window (currently ~18 days) but whose availability is verified at the time of inclusion. This temporary storage, combined with the separation of data from execution, lets layer-2 rollups like Optimism and Arbitrum post their transaction data at a fraction of what calldata previously cost. The full implementation occurs in phases: Proto-Danksharding (EIP-4844) introduced the core blob framework, while full Danksharding will later expand blob capacity and fully decentralize the builder role.

DANKSHARDING CORE MECHANISM

Visual Explainer: The Data Availability Sampling Process

A step-by-step breakdown of how validators in a Danksharding system verify the availability of large data blobs without downloading them in full, ensuring the security and scalability of Ethereum's rollup-centric roadmap.

Data Availability Sampling (DAS) is a cryptographic technique where a network of validators probabilistically verifies that all data for a block is published and accessible by downloading only small, random chunks. This process is the security backbone of Danksharding, enabling the blockchain to scale data capacity far beyond what any single node could store or process. Instead of requiring every node to download the entire multi-megabyte data blob, each validator performs dozens of lightweight sampling requests. If the data is fully available, all samples will be successfully returned; if not, the missing data is detected with high probability, and the block is rejected.

The process begins when a block builder creates a block containing a large data blob, which is erasure-coded and dispersed across the network. Erasure coding, specifically using a Reed-Solomon code, expands the original data with redundancy, allowing the full data to be reconstructed even if a significant portion of the coded chunks are missing. The encoded data is then committed to via a KZG polynomial commitment or a similar scheme, creating a concise cryptographic fingerprint that validators use to verify the correctness of any sampled chunk without needing the whole dataset.
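A minimal sketch of the Reed-Solomon idea: interpret n data chunks as evaluations of a polynomial of degree less than n, evaluate it at 2n points, and any n of the 2n coded chunks suffice to reconstruct the original via interpolation. Real implementations work over the BLS12-381 scalar field with FFT-friendly evaluation domains; this toy uses a small prime and naive Lagrange interpolation.

```python
# Toy rate-1/2 Reed-Solomon extension and recovery over a prime field.
P = 65537  # small prime for illustration; real code uses the BLS12-381 scalar field

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` [(xi, yi), ...] at x, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

data = [17, 88, 255, 3]                      # n = 4 original chunks
n = len(data)
base = list(enumerate(data))                 # chunks = evaluations at x = 0..n-1
extended = [(x, lagrange_eval(base, x)) for x in range(2 * n)]   # 2n coded chunks

# Drop any n of the 2n chunks (here: all of the originals) ...
surviving = extended[n:]

# ... and the original data is still fully recoverable from what remains.
recovered = [lagrange_eval(surviving, x) for x in range(n)]
assert recovered == data
print("recovered:", recovered)
```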

Validators then begin the sampling phase. Each validator randomly selects multiple unique positions within the data blob and requests the corresponding coded chunks from the peer-to-peer network, cryptographically verifying that each received chunk is consistent with the block's data commitment. This random sampling is repeated across many rounds by a large number of validators, building high statistical confidence that the data is fully available. The approach replaces the trusted data availability committee (DAC) model with a decentralized, trust-minimized sampling process.

A key security property is that the probability of failing to detect missing data decreases exponentially with the number of samples taken. Because of erasure coding, an attacker must withhold a substantial fraction of the coded data to make the blob unrecoverable, and withholding that much is almost certain to be caught by at least one sampler. This robust detection mechanism ensures that data availability failures are caught before a block is confirmed, preventing scenarios in which transaction data is lost and rollups can no longer verify or reconstruct their state.

For the end-user or rollup, this process is entirely abstracted away. Rollups simply post their compressed transaction data (blobs) to the consensus layer, relying on the validator set's sampling to guarantee its availability. This secure, scalable data layer allows rollups to offer extremely low transaction fees while maintaining Ethereum's security guarantees, fulfilling the vision of a rollup-centric roadmap where execution is decentralized to L2s and the L1 provides robust consensus and data availability.

ETHEREUM SCALING ROADMAP

Evolution: From Proto-Danksharding to Full Danksharding

Danksharding is a multi-phase upgrade to Ethereum's data availability layer, designed to massively increase network capacity for rollups. This evolution occurs in distinct, incremental stages.

02

Data Availability Sampling (DAS)

The core innovation enabling full Danksharding. Light nodes and validators can probabilistically verify that all data in a large block is available by sampling small, random pieces. This allows the network to safely scale block size to ~16 MB per slot (or more) because no single participant needs to download the entire block to trust its contents.

03

Full Danksharding

The final stage, building upon Proto-Danksharding and DAS. It fully implements the Danksharding design, characterized by:

  • Massive Data Capacity: A target on the order of 64 blobs of 128 KB each per slot, scaling toward roughly 16 MB of blob data per slot.
  • Decoupled Validation & Builder Roles: The Proposer-Builder Separation (PBS) model is essential, separating the entity that proposes a block from the one that builds it, ensuring efficient block construction.
  • Universal Scalability: Aims to provide abundant, cheap data availability for all Layer 2 rollups.
04

The Role of Consensus & Execution Clients

Danksharding changes the responsibilities of Ethereum's client software. The consensus client (e.g., Prysm, Lighthouse) becomes responsible for propagating and validating blob data and KZG commitments. The execution client (e.g., Geth, Nethermind) only receives a small reference to the blob data, keeping its resource requirements manageable. This separation is key to maintaining node decentralization.
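The "small reference" the execution client keeps is the blob's versioned hash: a 32-byte value derived from the KZG commitment, as defined in EIP-4844. A sketch of the derivation:

```python
import hashlib

VERSIONED_HASH_VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(kzg_commitment: bytes) -> bytes:
    """EIP-4844: version byte 0x01 followed by the last 31 bytes of sha256(commitment).

    The 48-byte KZG commitment and the blob itself live on the consensus layer;
    the execution payload only carries this 32-byte hash per blob.
    """
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(kzg_commitment).digest()[1:]

# Example with a placeholder commitment (real commitments are 48-byte BLS12-381 points).
print(kzg_to_versioned_hash(b"\x00" * 48).hex())
```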

06

Impact on Rollups & L2 Economics

The primary beneficiary of Danksharding is the Layer 2 rollup ecosystem. By providing orders of magnitude more dedicated data bandwidth at lower cost, it directly reduces the largest cost component for optimistic and ZK rollups. This translates to significantly lower transaction fees for end-users and enables new high-throughput decentralized applications (dApps) that were previously cost-prohibitive on Ethereum.

ARCHITECTURAL COMPARISON

Danksharding vs. Traditional Execution Sharding

A technical comparison of the two primary sharding paradigms for scaling blockchain data availability and execution.

Architectural Feature | Danksharding (Proto-Danksharding & Full Danksharding) | Traditional Execution Sharding
Core Scaling Focus | Data availability (DA) | Transaction execution
Consensus & Finality Layer | Single unified Beacon Chain | Multiple shard chains with separate consensus
Validator Operation | All validators attest to the availability of the full block's data (via sampling) | Validators are assigned to specific shards, fragmenting security
Cross-Shard Communication | Not required for core scaling; execution remains monolithic | Complex; requires asynchronous messaging and receipts
Developer Experience | Simplified; apps interact with a single execution layer | Complex; developers must manage shard-aware smart contracts and state
Data Sampling | Enabled natively via Data Availability Sampling (DAS) by light clients | Not a native primitive; relies on full nodes per shard
State Management | Unified execution state on a single chain (Ethereum L1) | Fragmented state across multiple independent shard chains
Implementation Complexity | High (novel cryptographic constructs such as KZG commitments) | High (complex cross-shard consensus and state sync)

DANKSHARDING

Impact on the Ethereum Ecosystem

Danksharding is a major Ethereum scaling upgrade that fundamentally re-architects how data is processed, dramatically increasing network capacity and reducing costs for Layer 2 rollups.

02

Radical Cost Reduction for Rollups

By separating data (blobs) from execution, Danksharding decouples the cost of data availability from the cost of EVM computation. This makes posting transaction data from Optimistic Rollups and ZK-Rollups orders of magnitude cheaper.

  • Direct Impact: Significantly lowers the fixed cost for Layer 2 sequencers, enabling cheaper transaction fees for end-users.
  • Economic Shift: Transforms Ethereum's primary scaling model to a rollup-centric roadmap, where execution happens off-chain and settlement/data availability happen on-chain.
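A rough illustration of why blobs are cheaper (a sketch with hypothetical prices, not a fee model): calldata costs 16 execution gas per non-zero byte and competes with all other EVM activity, while a blob costs a flat 131,072 blob gas in a separate market whose base fee is typically far lower.

```python
# Rough cost comparison for posting 128 KB of rollup data.
# Prices below are purely illustrative, not live market values.
DATA_BYTES = 128 * 1024
GWEI = 10**9

# As calldata: ~16 execution gas per (non-zero) byte at a hypothetical 20 gwei base fee.
calldata_cost_wei = DATA_BYTES * 16 * 20 * GWEI

# As one blob: a flat 131,072 blob gas at a hypothetical blob base fee of 1 gwei.
blob_cost_wei = 131_072 * 1 * GWEI

print(f"calldata: {calldata_cost_wei / 10**18:.4f} ETH")
print(f"blob:     {blob_cost_wei / 10**18:.6f} ETH")
print(f"ratio:    ~{calldata_cost_wei // blob_cost_wei}x cheaper via blob at these prices")
```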
03

Simplified Consensus with Proposer-Builder Separation (PBS)

Danksharding is built on a proposer-builder separation (PBS) model, where block builders (specialized actors) assemble complex blocks with blobs, and block proposers (validators) simply choose the most profitable one. This simplifies the validator role and prevents centralization pressures.

  • Validator Role: Validators no longer need powerful hardware to construct data-heavy blocks; they only propose and attest.
  • crList Mechanism: Ensures censorship resistance by allowing proposers to force inclusion of certain transactions.
04

Enabling Verifiable Off-Chain Computation

The cheap, abundant data provided by Danksharding is the critical input for ZK-proof verification. High-performance ZK-Rollups can post massive batches of proven transactions as compact blob data, making Ethereum an ultra-efficient settlement layer.

  • Synergy with ZK Tech: Blobs are a natural vehicle for the data ZK-proof systems must publish, enabling scalable (and potentially privacy-preserving) applications.
  • Future-Proofing: Lays the groundwork for advanced applications like fully on-chain games and complex DeFi primitives that require cheap state diffs.
05

The Path Through Proto-Danksharding (EIP-4844)

Danksharding is implemented in phases, starting with Proto-Danksharding (EIP-4844). This introduced the core blob transaction type and a separate fee market for data, delivering much of the fee-reduction benefit without requiring full DAS or increased block sizes.

  • Incremental Upgrade: EIP-4844 established the architectural framework and immediate fee reductions.
  • Foundation for Full Danksharding: The blob market and transaction format are now live, allowing the network to later add full data availability sampling and a higher blob count without disruption.
06

Strengthening Ethereum's Core Value Proposition

By specializing as a secure settlement and data availability layer, Danksharding reinforces Ethereum's position as the base layer for a multi-chain ecosystem. It enhances security and decentralization while enabling scalable user experiences.

  • Security Inheritance: Rollups leveraging Ethereum's data availability inherit its strong security guarantees.
  • Ecosystem Cohesion: Prevents fragmentation by keeping critical data on Ethereum, ensuring composability and a unified liquidity layer.
DANKSHARDING

Security Model and Considerations

Danksharding is a proposed data availability and scaling solution for Ethereum that introduces a new security model centered on data availability sampling and proposer-builder separation within a single, unified block.

01

Data Availability Sampling (DAS)

The core security mechanism where light clients and validators randomly sample small, redundant pieces of blob data to probabilistically verify its availability without downloading the entire dataset. This prevents data withholding attacks by ensuring that if any data is missing, a sample will fail with high probability.

  • Enables secure scaling by allowing nodes to trust the network, not just a single block producer.
  • Relies on erasure coding to guarantee data can be reconstructed even if some samples are unavailable.
02

Single Proposer-Builder Separation (PBS)

Danksharding integrates a single, auction-based block builder role for the entire sharded data layer, separating it from the block proposer. This design mitigates centralization risks and MEV (Maximal Extractable Value) exploitation by creating a competitive market for block construction.

  • The proposer (validator) selects the highest-value block from builders.
  • The builder assembles transactions and blob data, committing to make it available.
  • This separation is enforced cryptoeconomically, reducing the power of any single entity.
03

Blob Data & Expiry

Rollup data is posted as large data blobs that are kept separate from the main execution block data. These blobs are ephemeral, with a fixed expiry period (roughly 18 days, or 4096 epochs, under EIP-4844/proto-danksharding). This creates a clear security and cost model:

  • Security: Data must be available long enough for fraud or validity proofs to be challenged.
  • Cost: Expiry reduces the perpetual storage burden on consensus nodes, lowering fees.
  • After expiry, data availability responsibility shifts entirely to rollups and third-party services.
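The ~18-day figure follows directly from the consensus-layer retention parameter; a quick sketch using the Deneb-era value of 4096 epochs:

```python
# Where the ~18-day blob retention window comes from (Deneb-era parameters).
MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS = 4096
SLOTS_PER_EPOCH = 32
SECONDS_PER_SLOT = 12

retention_seconds = MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS * SLOTS_PER_EPOCH * SECONDS_PER_SLOT
print(retention_seconds / 86_400)   # ~18.2 days
```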
04

Cryptoeconomic Security & Penalties

The system is secured by slashing conditions and attestation penalties that target data availability failures. Validators are required to attest to the availability of blob data.

  • A builder who withholds data or publishes an invalid blob can be slashed.
  • Validators who incorrectly attest to data availability face inactivity penalties.
  • This creates strong economic disincentives against malicious behavior, aligning security with Ethereum's existing proof-of-stake model.
05

Trusted Setup Requirement

Danksharding's use of KZG polynomial commitments for blob verification requires a one-time trusted setup ceremony. This cryptographic primitive allows for efficient proof of correct erasure coding.

  • Security Assumption: The ceremony must be performed honestly by a decentralized set of participants to generate secure parameters.
  • If compromised, an attacker could create fake proofs for unavailable data, breaking the sampling security.
  • The Ethereum community conducted the public KZG Ceremony for EIP-4844 to mitigate this risk.
06

Comparison to Traditional Sharding

Danksharding differs from earlier execution sharding plans, focusing security efforts on a single, robust data layer rather than multiple execution environments.

  • Legacy Sharding: Multiple chains with separate proposers and consensus, complex cross-shard communication.
  • Danksharding: Single data availability layer for all rollups, simpler security model.
  • This consolidation reduces consensus overhead and attack surface, making the security properties easier to analyze and enforce for the primary goal of scaling data availability.
ETHEREUM SCALING

Common Misconceptions About Danksharding

Danksharding is a core component of Ethereum's scaling roadmap, but its technical nature leads to frequent misunderstandings. This section clarifies the most common points of confusion.

Danksharding is a proposed data availability and scaling design for Ethereum that introduces a new transaction type, blob-carrying transactions, to provide cheap, high-volume data space for Layer 2 rollups. It works by having Beacon Chain validators attest to the availability of large data blobs (approximately 128 KB each) without needing to execute them. This is secured through data availability sampling (DAS), where light clients can probabilistically verify data is available by sampling small, random chunks. The system uses a proposer-builder separation (PBS) model in which block builders assemble blocks containing blobs and a single block proposer selects the most profitable one, eliminating the need for complex per-shard block auctions.

DANKSHARDING

Frequently Asked Questions (FAQ)

Danksharding is a major Ethereum scaling upgrade. These FAQs address common questions about its purpose, mechanics, and impact on the network.

Danksharding is a data availability scaling solution for Ethereum that separates block building from block proposing to massively increase network throughput. It works by introducing blobs—large packets of data attached to blocks—that are only temporarily stored by nodes. A proposer-builder separation (PBS) model allows specialized builders to construct blocks containing these blobs, while a single validator (the proposer) selects the most profitable block. Data Availability Sampling (DAS) enables light nodes to cryptographically verify that all blob data is available without downloading it entirely, ensuring security and enabling rollups to post data cheaply.
