
Data Blob

A data blob is a large, temporary packet of data attached to a blockchain transaction, designed for cost-efficient data availability, as introduced by Ethereum's EIP-4844 (Proto-Danksharding).
BLOCKCHAIN STORAGE

What is a Data Blob?

A data blob is a large, unstructured packet of data temporarily stored on a blockchain, primarily to reduce transaction costs for layer-2 scaling solutions.

In blockchain terminology, a data blob (often called a blob or blob-carrying transaction) is a dedicated data packet attached to a block that is not processed by the Ethereum Virtual Machine (EVM). Introduced with EIP-4844 (Proto-Danksharding), blobs provide a new transaction type for rollups to post their compressed transaction data at a significantly lower cost than using calldata. The data within a blob is only accessible for a short period (approximately 18 days) before being pruned by nodes, making it a temporary but verifiable data availability layer. This mechanism is a precursor to full Danksharding, which will scale data availability further.

The primary technical innovation of data blobs is their separation from regular block gas limits. Blobs reside in a new section of the block called the blob sidecar, and their pricing is governed by a separate, dynamically adjusted blob gas market. This decoupling prevents competition for block space between blob data and standard transactions, ensuring that the cost of L2 data posting remains low and predictable. Each Ethereum block following the Dencun upgrade has a target of 3 blobs and a maximum of 6, providing a dedicated target bandwidth of roughly 0.375 MB per block (up to 0.75 MB at the maximum) for rollup data.
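
As a quick sanity check on those figures, the per-block blob bandwidth follows directly from the blob size and the target/maximum counts. The sketch below simply multiplies the EIP-4844 constants quoted above; it is illustrative arithmetic, not client code.

```typescript
// Back-of-the-envelope blob bandwidth for post-Dencun Ethereum.
// Constants are the EIP-4844 parameters cited above.
const BYTES_PER_FIELD_ELEMENT = 32;
const FIELD_ELEMENTS_PER_BLOB = 4096;
const BYTES_PER_BLOB = BYTES_PER_FIELD_ELEMENT * FIELD_ELEMENTS_PER_BLOB; // 131,072 B = 128 KiB

const TARGET_BLOBS_PER_BLOCK = 3;
const MAX_BLOBS_PER_BLOCK = 6;
const SECONDS_PER_SLOT = 12;

const targetBytes = TARGET_BLOBS_PER_BLOCK * BYTES_PER_BLOB; // 393,216 B ≈ 0.375 MiB
const maxBytes = MAX_BLOBS_PER_BLOCK * BYTES_PER_BLOB;       // 786,432 B ≈ 0.75 MiB

console.log(`target: ${(targetBytes / 2 ** 20).toFixed(3)} MiB per block`);
console.log(`max:    ${(maxBytes / 2 ** 20).toFixed(3)} MiB per block`);
console.log(`target throughput: ${(targetBytes / SECONDS_PER_SLOT / 1024).toFixed(0)} KiB/s`);
```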

For developers and users, the implementation of data blobs has a direct impact on transaction fees. Layer-2 rollups like Optimism, Arbitrum, zkSync, and Base use blobs to post their compressed transaction batches to Ethereum Mainnet. By moving from expensive calldata to inexpensive blobs, these rollups have seen a dramatic reduction in their operational costs, which is passed on to end-users as lower L2 transaction fees. This makes applications built on rollups—from DeFi to gaming—more economically viable.

The role of data blobs is fundamentally about data availability. By guaranteeing that the data is published to the network and verifiable for a critical window, Ethereum ensures that rollups remain secure and trustless. Anyone can download the blob data within its retention period to verify the correctness of an L2's state transitions or to reconstruct the L2 chain if needed. This temporary availability is sufficient because the cryptographic commitments to the blob data (the KZG commitments and their versioned hashes) remain permanently recorded on-chain, providing a lasting proof of publication.

Looking forward, data blobs represent the first major step in Ethereum's scaling roadmap. Proto-Danksharding (EIP-4844) establishes the architectural framework and transaction format. The next phase, full Danksharding, aims to increase the number of blobs per block to 64 or more, distributing the data across the validator set. This will scale Ethereum's data availability bandwidth by more than an order of magnitude, enabling hundreds of low-cost rollups and solidifying the blockchain's modular architecture, where execution, settlement, and data availability are distinct layers.

TERM ORIGIN

Etymology

The term 'data blob' has evolved from a generic computing concept to a precise technical term in blockchain architecture, particularly with the advent of Ethereum's scaling solutions.

The word blob is a common computing acronym for Binary Large OBject, a data type used to store large, unstructured binary data like images or multimedia files in databases. In blockchain contexts, this generic term was adopted to describe large, opaque packets of data attached to transactions. The key innovation was redefining these blobs not as permanent on-chain storage, but as temporary, inexpensive data attachments designed for layer-2 rollups to post their compressed transaction data.

The specific implementation known as EIP-4844 or proto-danksharding on Ethereum formally established the blob-carrying transaction. This proposal, named after researchers protolambda and Dankrad Feist, created a new transaction format that includes one or more blobs. These blobs are stored in the Beacon Chain consensus layer for a short period (approximately 18 days) rather than permanently in Ethereum Virtual Machine (EVM) state. This separation of consensus and execution is central to the term's modern definition, distinguishing it from older, more generic uses.

The etymology reflects a shift in design philosophy: from storing data to broadcasting it for temporary availability. Related terms include blob gas (the fee for attaching blobs, distinct from standard execution gas) and KZG commitments (the cryptographic proofs that allow nodes to verify blob data without downloading it entirely). As the ecosystem evolves toward full danksharding, the term 'data blob' is now inextricably linked to scalable, cost-effective data availability for rollups like Optimism and Arbitrum.

DATA BLOB

How It Works

A Data Blob is a core component of modern blockchain scaling, representing a large, temporary packet of raw data attached to a transaction.

A Data Blob (Binary Large Object) is a dedicated packet of raw data attached to a transaction, designed to be posted to a blockchain but not permanently stored or executed by its main execution layer. This architecture is central to rollup scaling solutions, where the blob's data—containing compressed transaction batches—is made available for a short period, allowing a secondary layer (like an Optimistic or ZK Rollup) to verify proofs and enforce correctness without burdening the base layer with permanent storage costs. The concept was popularized by Ethereum's EIP-4844 (Proto-Danksharding), which introduced a new transaction type and a dedicated fee market for this temporary data.

The technical implementation involves a new transaction format, such as Ethereum's BlobTransaction. This format carries the standard transaction payload and one or more blobs, which are posted to a new, separate data storage area called the blobspace. Crucially, consensus clients (like Ethereum's Beacon Chain nodes) only need to store this blob data for a short data availability window—typically 18 days—rather than indefinitely. During this window, any verifier can download the blob data to reconstruct the rollup's state and validate its cryptographic proofs, ensuring security and finality.
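
The data availability window quoted above is a consensus-layer parameter (4096 epochs). A minimal sketch of where the "~18 days" figure comes from, assuming mainnet slot timing:

```typescript
// Blob retention window on Ethereum mainnet, per EIP-4844's
// MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS = 4096.
const MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS = 4096;
const SLOTS_PER_EPOCH = 32;
const SECONDS_PER_SLOT = 12;

const retentionSeconds =
  MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS * SLOTS_PER_EPOCH * SECONDS_PER_SLOT;
const retentionDays = retentionSeconds / (60 * 60 * 24);

console.log(`${retentionDays.toFixed(1)} days`); // ≈ 18.2 days
```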

The economic model for blobs is distinct from standard gas fees. A separate blob gas market determines pricing based on supply and demand for blobspace, decongesting the main execution gas market. After the data availability window expires, nodes prune the blob data, leading to significant long-term storage savings for the network. This model makes data-intensive operations, particularly for Layer 2s, orders of magnitude cheaper than posting the same data as calldata to mainnet execution, which is stored permanently.
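
A minimal sketch of how that separate market prices blob gas, following the excess-blob-gas accounting and the fake_exponential formula defined in EIP-4844 (the constants are the Dencun mainnet values; this is an illustration, not consensus-client code):

```typescript
// EIP-4844 blob base fee: an EIP-1559-style mechanism driven by the
// "excess blob gas" that accumulates whenever blocks exceed the target.
const GAS_PER_BLOB = 131_072n;              // blob gas consumed per blob (2**17)
const TARGET_BLOB_GAS_PER_BLOCK = 393_216n; // 3 blobs
const MIN_BLOB_BASE_FEE = 1n;               // wei
const BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477n;

// Integer Taylor-series exponential, as specified in EIP-4844.
function fakeExponential(factor: bigint, numerator: bigint, denominator: bigint): bigint {
  let i = 1n;
  let output = 0n;
  let accum = factor * denominator;
  while (accum > 0n) {
    output += accum;
    accum = (accum * numerator) / (denominator * i);
    i += 1n;
  }
  return output / denominator;
}

// Excess blob gas carried from block to block: grows when usage is above
// target, shrinks (floored at zero) when it is below.
function nextExcessBlobGas(parentExcess: bigint, parentBlobGasUsed: bigint): bigint {
  const total = parentExcess + parentBlobGasUsed;
  return total < TARGET_BLOB_GAS_PER_BLOCK ? 0n : total - TARGET_BLOB_GAS_PER_BLOCK;
}

const blobBaseFee = (excess: bigint) =>
  fakeExponential(MIN_BLOB_BASE_FEE, excess, BLOB_BASE_FEE_UPDATE_FRACTION);

// Example: a sustained run of full blocks (6 blobs each) ratchets the fee up.
let excess = 0n;
for (let block = 1; block <= 100; block++) {
  excess = nextExcessBlobGas(excess, 6n * GAS_PER_BLOB);
  if (block % 25 === 0) {
    console.log(`after ${block} full blocks: blob base fee ≈ ${blobBaseFee(excess)} wei`);
  }
}
```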

The primary use case is enabling cheap, high-throughput Layer 2 rollups. By posting transaction data via blobs, rollups can offer users extremely low fees while still inheriting the base layer's security guarantees for data availability. Future upgrades, like full Danksharding, aim to scale this system further by increasing blob capacity per block and distributing the storage and validation workload across the entire validator set, paving the way for massive scalability while preserving decentralization.

DATA BLOB

Key Features

Data blobs are a core scaling innovation, acting as temporary, low-cost data storage for transaction data on Layer 2 rollups.

01

Cost-Effective Data Availability

Blobs provide a low-cost data availability layer for rollups by storing transaction data on the consensus layer for only a short period (≈18 days) rather than permanently. This is significantly cheaper than storing the same data as calldata on the Ethereum mainnet, reducing transaction fees for end-users.

  • Mechanism: Data is posted to the Beacon Chain consensus layer, not the execution layer.
  • Key Benefit: Enables high-throughput, low-cost rollup transactions.
02

Temporary Storage (Pruning)

Unlike permanent on-chain storage, data blobs are designed to be pruned after approximately 18 days. This temporary model is viable because:

  • Rollups only need the data available long enough for fraud or validity proofs to be submitted.
  • Consensus nodes verify and attest to the data's availability during this window (full data availability sampling arrives with Danksharding).
  • Historical data can be stored by third-party services (e.g., blob explorers, block explorers) for archival purposes.
03

EIP-4844 (Proto-Danksharding)

Data blobs were introduced to Ethereum via EIP-4844, also known as Proto-Danksharding. This upgrade created a new transaction type that carries these blobs.

  • Blob-Carrying Transaction: A transaction with attached blob data.
  • Blob Gas Market: A separate fee market for blob space, decoupled from standard execution gas.
  • Foundation for Danksharding: Serves as the foundational architecture for the full Danksharding scaling roadmap.
04

Separation of Consensus & Execution

Blobs are a key part of Ethereum's post-Merge architecture, cleanly separating data availability from execution.

  • Consensus Layer (Beacon Chain): Validators attest to the availability of blob data.
  • Execution Layer: Processes transactions but does not permanently store blob data.
  • Result: This separation allows the execution layer to scale efficiently while the consensus layer provides secure data guarantees.
05

Enabler for Rollup Scaling

The primary purpose of data blobs is to scale Layer 2 rollups (Optimistic and ZK). They solve the data availability problem—ensuring anyone can verify the rollup's state is correct.

  • Optimistic Rollups: Require available data to submit fraud proofs during the challenge window.
  • ZK-Rollups: Require available data to reconstruct state and verify proofs.
  • Without Blobs: Rollups used expensive mainnet calldata, limiting scalability.
06

Blob Gas & Fee Market

Blob transactions consume a new resource called blob gas, which has its own independent fee market. This prevents competition between blob data and standard EVM transactions.

  • Target & Limit: The network targets 3 blobs per block with a maximum of 6.
  • Dynamic Pricing: Blob gas price adjusts via an EIP-1559-style mechanism based on demand for blob space.
  • Fee Burning: Base fees for blob gas are burned, similar to EIP-1559.
DATA STORAGE MECHANISMS

Comparison: Blob vs. Calldata

A technical comparison of Ethereum's primary methods for posting data on-chain, highlighting cost, capacity, and use-case differences.

Feature | Calldata | Blob (EIP-4844)
Primary Purpose | Function execution input | Cheap, temporary data for Layer 2s
Data Location | Permanently stored in the block body | Stored in a separate blob sidecar for ~18 days
Cost Model | Priced per byte in execution gas (16 gas per non-zero byte) | Priced per blob in blob gas (separate fee market)
Typical Capacity | ~100 KB per block (variable) | ~0.75 MB per block (6 blobs × 128 KB each)
Persistence | Permanent on-chain history | Temporary; nodes prune after ~18 days
Accessibility | Fully readable by the EVM | Not directly EVM-readable; only the commitment is verified
Primary User | Smart contracts, general dApps | Layer 2 rollups posting batch data
Gas Impact on L1 | High; competes with execution | Minimal; separate fee market
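
To put rough numbers on the cost gap in the table, the sketch below compares posting one blob's worth of data (128 KiB) as calldata versus as a single blob. The 20 gwei execution gas price and 1 gwei blob gas price are hypothetical inputs chosen for illustration, not live market data:

```typescript
// Rough cost comparison: 128 KiB of rollup data as calldata vs. as one blob.
// The gas prices below are illustrative assumptions, not live market values.
const DATA_BYTES = 131_072;               // one blob's worth of data
const CALLDATA_GAS_PER_NONZERO_BYTE = 16; // execution gas per non-zero calldata byte
const BLOB_GAS_PER_BLOB = 131_072;        // blob gas consumed by a single blob

const executionGasPriceGwei = 20;         // assumed L1 execution gas price
const blobGasPriceGwei = 1;               // assumed blob base fee

// Worst case for calldata: treat every byte as non-zero.
const calldataCostGwei = DATA_BYTES * CALLDATA_GAS_PER_NONZERO_BYTE * executionGasPriceGwei;
const blobCostGwei = BLOB_GAS_PER_BLOB * blobGasPriceGwei;

console.log(`as calldata: ${(calldataCostGwei / 1e9).toFixed(4)} ETH`);
console.log(`as a blob:   ${(blobCostGwei / 1e9).toFixed(6)} ETH`);
console.log(`ratio: ~${Math.round(calldataCostGwei / blobCostGwei)}x cheaper via the blob`);
```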

EVOLUTION & DANKSHARDING

Data Blob

A data blob is a standardized, temporary data package introduced by Ethereum's EIP-4844 (Proto-Danksharding) to dramatically reduce Layer 2 transaction costs.

A data blob (or blob-carrying transaction) is a new transaction type that carries large data packets priced in a fee market separate from Ethereum's expensive execution gas. Each blob holds approximately 128 KB of data and is designed to be cheap and ephemeral, stored by the consensus layer for only ~18 days (4096 epochs). This structure is the foundational building block of Proto-Danksharding, the precursor to full Danksharding, which aims to scale Ethereum's data availability capacity by orders of magnitude.
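
To make the blob-carrying transaction concrete, here is a rough TypeScript model of the EIP-4844 (type 0x03) payload and its sidecar. Field names follow the EIP, but this is an illustrative shape, not a wire-format implementation:

```typescript
// Rough shape of an EIP-4844 "type 3" blob-carrying transaction.
// The execution payload only references blobs via versioned hashes;
// the blobs themselves travel in a consensus-layer sidecar.
interface BlobTransaction {
  chainId: bigint;
  nonce: bigint;
  maxPriorityFeePerGas: bigint;
  maxFeePerGas: bigint;
  gasLimit: bigint;
  to: string;                    // must be a real address (no contract creation)
  value: bigint;
  data: `0x${string}`;
  accessList: { address: string; storageKeys: string[] }[];
  maxFeePerBlobGas: bigint;      // bid in the separate blob gas market
  blobVersionedHashes: string[]; // one 32-byte hash per attached blob
}

// The sidecar that consensus nodes gossip and retain for ~18 days.
interface BlobSidecar {
  blobs: Uint8Array[];           // ~128 KiB of data each
  kzgCommitments: string[];      // 48-byte KZG commitments, one per blob
  kzgProofs: string[];           // proofs tying each blob to its commitment
}
```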

The primary purpose of a data blob is to provide cost-effective data availability (DA) for Layer 2 rollups like Optimism and Arbitrum. Instead of posting their transaction data directly to Ethereum's execution layer as expensive calldata, rollups post compressed data in blobs. This allows anyone to reconstruct the rollup's state and verify proofs while keeping base-layer costs low. The blob's data is not accessible to the Ethereum Virtual Machine (EVM); it is only referenced by a commitment (a KZG polynomial commitment) and verified for availability.
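
The "referenced by a commitment" step can be shown directly: each blob's 48-byte KZG commitment is reduced to a 32-byte versioned hash that the transaction carries and the EVM can see. A minimal sketch of that derivation using Node.js crypto (the commitment bytes here are a dummy placeholder, not a real KZG output):

```typescript
// EIP-4844 versioned hash: a 0x01 version byte followed by the last 31 bytes
// of sha256(KZG commitment). Only this 32-byte hash is visible to the EVM.
import { createHash } from "node:crypto";

const VERSIONED_HASH_VERSION_KZG = 0x01;

function kzgToVersionedHash(commitment: Uint8Array): Uint8Array {
  const digest = createHash("sha256").update(commitment).digest(); // 32 bytes
  const versioned = Uint8Array.from(digest);
  versioned[0] = VERSIONED_HASH_VERSION_KZG; // replace the first byte with the version
  return versioned;
}

// Dummy 48-byte commitment purely for illustration; real commitments come
// from a KZG library operating over the blob's field elements.
const dummyCommitment = new Uint8Array(48).fill(7);
console.log(Buffer.from(kzgToVersionedHash(dummyCommitment)).toString("hex"));
```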

The economic model for blobs uses a separate blob gas market with a multidimensional, EIP-1559-style fee mechanism. This creates a fee market distinct from execution gas, protecting regular users from fee spikes caused by high blob demand. The blob base fee is burned, mirroring EIP-1559's base-fee burn. The fixed, short storage period (the blob retention window) is sufficient for fraud proofs and data verification, after which nodes can prune the data, ensuring the chain's long-term growth remains manageable.

Technically, each blob is committed to with a KZG polynomial commitment that is referenced from the beacon block, while the execution-layer transaction carries only the corresponding versioned hash. Today, consensus nodes download blobs in full and attest to their availability; under full Danksharding, validators and light clients will verify availability efficiently through data availability sampling (DAS). This cryptographic guarantee that data has been published and is accessible is crucial for the security of optimistic and zk-rollups, which rely on the ability to challenge state transitions or verify validity proofs.

The introduction of data blobs with EIP-4844 marks a critical evolutionary step toward Ethereum's scaling roadmap. It delivers immediate L2 fee reductions by decoupling data costs from execution, while establishing the technical and economic framework for full Danksharding. In the final vision, Danksharding will expand the number of blobs per block from ~3 to 64, transforming Ethereum into a powerful data availability layer for a multi-rollup ecosystem.

DATA BLOB

Ecosystem Usage

Data blobs, introduced by EIP-4844, are a dedicated data layer for Layer 2 rollups, enabling cheaper transaction data posting by separating it from the main Ethereum execution.

01

Layer 2 Rollup Data Availability

Rollups use data blobs as their primary data availability (DA) layer. Instead of posting compressed transaction data as expensive calldata on Ethereum, they post it to a blob. This drastically reduces the cost for users while maintaining Ethereum's security guarantees. The sequencer publishes a blob commitment (a KZG polynomial commitment) to the L1, which verifiers use to check data availability and to support fraud or validity proofs.

02

Impact on L2 Transaction Fees

The primary user-facing impact is a dramatic reduction in L2 transaction fees. By moving data posting from calldata (16 gas of execution gas per non-zero byte) to blobs, which are priced in a separate and typically far cheaper blob gas market, the cost of settling data on Ethereum has fallen by roughly 10-100x. This makes L2s like Optimism, Arbitrum, Base, and zkSync Era significantly cheaper for end-users.

03

Blob Market Dynamics

Blob supply and demand are managed by a dedicated blob gas market. The protocol targets 3 blobs per block, with a blob base fee that adjusts via an EIP-1559-style mechanism. During periods of high demand (e.g., NFT mints, airdrops), blob gas prices can spike, but the separate market prevents congestion from spilling over into the main execution gas market.
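
How quickly can those spikes move the price? Because excess blob gas can grow by at most the 393,216-gas target per block, the blob base fee changes by a bounded factor each block. The sketch below uses a continuous approximation (Math.exp standing in for the protocol's integer fake_exponential) with the Dencun constants; it shows why a demand spike ramps fees over minutes rather than instantly.

```typescript
// Bound on per-block blob fee movement (continuous approximation of the
// protocol's integer fake_exponential from EIP-4844).
const TARGET_BLOB_GAS_PER_BLOCK = 393_216;
const BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477;

// A full block (6 blobs) adds one target's worth of excess blob gas, an
// empty block removes one target's worth (floored at zero).
const maxUpPerBlock = Math.exp(TARGET_BLOB_GAS_PER_BLOCK / BLOB_BASE_FEE_UPDATE_FRACTION);
const maxDownPerBlock = 1 / maxUpPerBlock;

console.log(`max increase per full block:  +${((maxUpPerBlock - 1) * 100).toFixed(1)}%`);
console.log(`max decrease per empty block: -${((1 - maxDownPerBlock) * 100).toFixed(1)}%`);

// A sustained spike: ~10 minutes (50 blocks) of completely full blobspace.
const blocks = 50;
console.log(`after ${blocks} consecutive full blocks: ~${(maxUpPerBlock ** blocks).toFixed(0)}x higher`);
```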

04

Future: Full Danksharding

Proto-danksharding is a stepping stone to full danksharding. The future vision expands the blob capacity from ~0.375 MB per block to ~16-32 MB by distributing data across a committee of validators. This will further scale data availability, enabling hundreds of rollups and reducing costs toward the goal of $0.01 transactions.

DATA AVAILABILITY LAYER

Security & Data Availability

Data Blobs are a core component of modern blockchain scaling solutions, designed to decouple transaction execution from data publication to significantly reduce costs while maintaining security.

01

Core Definition & Purpose

A Data Blob (or Blob) is a large, temporary packet of transaction data published to a consensus layer (like Ethereum) but not processed by its execution engine. Its primary purpose is to provide data availability for Layer 2 rollups, allowing them to prove the integrity of their state transitions without incurring the high cost of permanent on-chain storage. This separation of data publication from execution is the foundation of proto-danksharding (EIP-4844).

02

Key Technical Mechanism

Blobs are stored in the Beacon Chain consensus layer for a short, fixed period (currently 18 days on Ethereum). They are referenced in a block via blob-carrying transactions which include blob versioned hashes. The data itself is not accessible to the EVM; only commitments (KZG commitments) and hashes are. This design ensures verifiers (like Layer 2 nodes) can download and check the data's availability during the storage window, which is sufficient for fraud or validity proofs.

03

EIP-4844: Proto-Danksharding

EIP-4844 introduced Data Blobs to Ethereum, implementing a precursor to full danksharding. Key specifications include:

  • Blob Size: Each blob is ~128 KB.
  • Target per Block: 3 blobs, with a maximum of 6.
  • Storage: Temporary, with automatic pruning after the blob data availability window.
  • Fee Market: A separate blob gas market decouples blob pricing from regular transaction gas fees, making L2 costs more predictable and stable.
04

Benefits for Rollups & Scaling

Blobs provide critical infrastructure for optimistic rollups and zk-rollups:

  • Cost Reduction: L2s publish data for ~10-100x less cost than calldata.
  • Security Maintenance: Data remains available for the full challenge window of optimistic rollups, ensuring security guarantees are upheld.
  • Throughput Enablement: By providing cheap, dedicated data bandwidth, blobs allow L2s to scale transaction throughput without compromising on decentralization or security.
05

Data Availability Sampling (DAS)

In the future, full danksharding will rely on Data Availability Sampling. Light nodes or validators will be able to verify blob availability by randomly sampling small portions of the data. If a blob is available, a few samples are sufficient for high confidence. If it's withheld, sampling will detect its absence. This allows the network to securely scale blob capacity without requiring any single node to download all data.
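
The "few samples are sufficient" claim can be quantified under the standard DAS argument: erasure coding means any 50% of the extended data is enough to reconstruct the whole, so data that is claimed available but actually unrecoverable implies more than half the chunks are withheld, and each uniformly random sample then succeeds with probability at most 1/2. A toy confidence calculation (illustrative of the argument, not of any client's actual sampling schedule):

```typescript
// Toy data-availability-sampling confidence calculation.
// Assumption: erasure coding lets any 50% of chunks reconstruct the data,
// so unrecoverable data means >50% of chunks are withheld, and each random
// sample finds its chunk with probability <= 0.5.
function missDetectionProbability(samples: number): number {
  // Probability that ALL samples succeed even though the data is unrecoverable.
  return Math.pow(0.5, samples);
}

for (const k of [10, 20, 30]) {
  console.log(
    `${k} samples: chance of being fooled <= ${missDetectionProbability(k).toExponential(2)}`
  );
}
// 30 samples already push the false-availability chance below one in a billion.
```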

06

Related Concepts & Ecosystem

  • Data Availability Committee (DAC): A trusted alternative used by some L2s, where a committee signs off on data availability.
  • Celestia: A modular blockchain network specialized as a data availability layer.
  • KZG Commitments: Cryptographic commitments used to create compact proofs for blob data.
  • Blob Gas: The resource unit for publishing blobs, with its own pricing mechanism.
  • Data Availability Proof: A proof that specific data was published and is retrievable.
DATA BLOBS

Common Misconceptions

Data blobs, introduced by EIP-4844, are a core scaling component for Ethereum's rollup-centric roadmap. This section clarifies frequent misunderstandings about their purpose, cost, and permanence.

Are data blobs just a cheaper form of calldata?

No, data blobs are a distinct and cheaper data type designed specifically for rollups. While both can carry transaction data, calldata is stored permanently on the Ethereum execution layer, making it expensive. Data blobs (carried by blob transactions) are stored on the Beacon Chain consensus layer for only a short period (approximately 18 days) before being pruned, which drastically reduces their cost. Rollups use blobs to post their transaction batches, while calldata remains in use for direct contract calls and other execution-layer needs.

DATA BLOBS

Frequently Asked Questions

Data blobs are a core scaling technology for blockchains like Ethereum. This FAQ addresses common questions about their purpose, mechanics, and impact on the ecosystem.

What is a data blob?

A data blob (or blob) is a large packet of data temporarily posted to a blockchain's consensus layer, with only a small cryptographic commitment to the data stored permanently on-chain. Introduced by EIP-4844 (Proto-Danksharding) on Ethereum, a blob is designed to carry data for Layer 2 rollups (like Optimism, Arbitrum, and zkSync) at a much lower cost than traditional calldata. The blob data itself is pruned by nodes after approximately 18 days, but its commitment ensures the data's availability and integrity can be cryptographically verified during that critical window.
