Blob Size Limit

The blob size limit is the maximum allowable size for a single unit of data (a blob) that can be posted to a blockchain's data availability layer, enforced by protocol rules.
ETHEREUM EIP-4844

What is Blob Size Limit?

The blob size limit is a protocol-enforced constraint on the data payload of a blob-carrying transaction, designed to manage network capacity and node storage requirements.

The blob size limit is a core parameter in Ethereum's EIP-4844 (Proto-Danksharding) that fixes each blob at exactly 128 KB of data, defined as 4096 field elements of 32 bytes each (131,072 bytes). This limit is not a target but a hard cap enforced by consensus rules, ensuring that the data blobs attached to transactions remain bounded and manageable for network nodes. The primary purpose is to prevent any single transaction from consuming excessive block space or imposing unsustainable data storage burdens on the network before full danksharding is implemented.
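These numbers multiply out directly. A minimal sketch of the size arithmetic in Python (constant names follow the EIP specification; the snippet is illustrative, not consensus code):

```python
# EIP-4844 blob sizing constants.
FIELD_ELEMENTS_PER_BLOB = 4096   # field elements per blob
BYTES_PER_FIELD_ELEMENT = 32     # each element is a 32-byte value

BLOB_SIZE_BYTES = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT
assert BLOB_SIZE_BYTES == 131_072  # exactly 128 KiB per blob

print(f"Blob size: {BLOB_SIZE_BYTES} bytes ({BLOB_SIZE_BYTES // 1024} KiB)")
```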

Mechanically, the limit is enforced by Ethereum clients, which reject blocks containing blobs that deviate from the specified size. This constraint works in tandem with other EIP-4844 parameters, such as the blob gas pricing mechanism and the target per-block blob count, to regulate the network's overall data bandwidth. The 128 KB size was chosen as a balance between providing meaningful scale for layer-2 rollups, which use blobs for cheap, temporary data availability, and staying within the practical operational limits of nodes that must temporarily store and propagate this data.

For developers and users, the blob size limit directly impacts how data is batched and submitted from layer-2 networks to Ethereum. Rollup sequencers must structure their calldata or state diffs into chunks that fit within this constraint. Exceeding the limit requires splitting data across multiple blob transactions, which incurs additional gas costs. Understanding this limit is crucial for optimizing data submission strategies and cost calculations in the post-EIP-4844 ecosystem.
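To make the batching constraint concrete, the hypothetical helper below splits a payload into blob-sized chunks and counts the blob transactions needed, assuming up to six blobs per transaction; real rollups also lose a little capacity to field element encoding:

```python
import math

BLOB_SIZE_BYTES = 4096 * 32   # 131,072 bytes per blob (before encoding overhead)
MAX_BLOBS_PER_TX = 6          # a transaction may carry up to the per-block max

def plan_blob_submission(payload: bytes) -> tuple[int, int]:
    """Hypothetical planner: how many blobs and transactions a batch needs."""
    blobs = math.ceil(len(payload) / BLOB_SIZE_BYTES)
    txs = math.ceil(blobs / MAX_BLOBS_PER_TX)
    return blobs, txs

# Example: a 1 MB compressed batch of L2 transactions.
print(plan_blob_submission(b"\x00" * 1_000_000))  # (8 blobs, 2 transactions)
```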

EIP-4844 MECHANICS

How the Blob Size Limit Works

An explanation of the technical mechanism that enforces a maximum data capacity for blob-carrying transactions on Ethereum, a core component of proto-danksharding.

The blob size limit is a protocol-enforced constraint that fixes each blob at exactly 128 KB (131,072 bytes) of data. The per-blob size is a hard consensus check, while the number of blobs per block is governed by a dedicated gas pricing mechanism for data: each blob consumes a fixed amount of blob gas, and the total blob gas per block is capped, creating a separate, secondary block size limit specifically for this new type of data. This design prevents blob data from competing with or congesting the gas market for standard Ethereum execution.

The enforcement occurs in two primary layers. First, consensus clients validate that the total blob gas consumed by all transactions in a proposed block does not exceed the per-block limit: a target of 3 blobs (384 KB) with a maximum of 6 blobs (768 KB). Second, clients verify that each individual blob, when encoded, matches the 131,072-byte size exactly. This layered validation ensures the network consensus rules are upheld and that the data is available for Layer 2 rollups and other consumers to download.
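A simplified sketch of those two validation layers (function shape and names are illustrative, not actual client code):

```python
GAS_PER_BLOB = 131_072                     # blob gas consumed per blob
MAX_BLOB_GAS_PER_BLOCK = 6 * GAS_PER_BLOB  # 786,432: the 6-blob cap
BLOB_SIZE_BYTES = 131_072                  # every blob is exactly this size

def validate_block_blobs(blobs_per_tx: list[list[bytes]]) -> bool:
    """Illustrative check: per-blob size first, then the per-block gas cap."""
    total_blob_gas = 0
    for tx_blobs in blobs_per_tx:
        for blob in tx_blobs:
            if len(blob) != BLOB_SIZE_BYTES:  # blobs are fixed-size, not "up to"
                return False
            total_blob_gas += GAS_PER_BLOB
    return total_blob_gas <= MAX_BLOB_GAS_PER_BLOCK
```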

The limit is intrinsically linked to the KZG commitment scheme used in EIP-4844. Each blob is a vector of 4096 field elements, with each element representing 32 bytes, resulting in the 128 KB total. The commitment to this data is a single KZG polynomial commitment, which is small and efficient to verify. The size limit ensures the polynomial operations required for commitment and proof generation remain computationally feasible for the network, maintaining the scalability benefits of the design.
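Because each 32-byte element must be a canonical scalar in the BLS12-381 field, not every 32-byte string is valid blob content. A minimal validity check (the modulus is fixed by the EIP; the function itself is illustrative):

```python
# BLS12-381 scalar field modulus, as specified in EIP-4844.
BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513

def is_canonical_blob(blob: bytes) -> bool:
    """Check a blob is 4096 field elements, each below the BLS modulus."""
    if len(blob) != 4096 * 32:
        return False
    return all(
        int.from_bytes(blob[i : i + 32], "big") < BLS_MODULUS
        for i in range(0, len(blob), 32)
    )
```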

Adjusting the blob size limit is a governance decision requiring a network upgrade. Parameters like the max blobs per block (6) and the field elements per blob (4096) are defined as constants in the Ethereum protocol specification. Future upgrades, such as full danksharding, are designed to increase total blob capacity significantly (potentially to 16 MB or more of blob data per block) by having validators sample the data rather than download the entire dataset.

EIP-4844 MECHANICS

Key Features of Blob Size Limits

Blob size limits are a core parameter in EIP-4844 (Proto-Danksharding) that define the capacity and cost structure of the new data layer for Ethereum Layer 2 scaling.

01

Fixed Per-Block Capacity

Each Ethereum block can carry a target of 3 and a maximum of 6 data blobs. This creates a predictable, dedicated data bandwidth for Layer 2 rollups, separate from regular transaction calldata. The limit ensures blob data does not congest the main execution layer.

02

128 KB Per Blob

A single blob is precisely 128 kilobytes (131,072 bytes). This size was chosen as a balance between efficient data availability sampling for future sharding and practical encoding overhead. Because rollup batches are compressed before posting, a single blob can represent roughly 0.375 MB of raw transaction data (assuming around 3x compression).

03

Gas vs. Blob Gas Separation

Blob transactions use a separate fee market called blob gas. This prevents competition for block space between regular EVM operations and data availability. Blob gas prices are determined by a dedicated EIP-1559-style mechanism with its own independently adjusting base fee; unlike execution gas, blob gas has no priority fee component (a sketch of this mechanism follows these feature cards).

04

Temporary Data Storage

Blob data is not stored permanently on the Ethereum execution layer. It is persisted by consensus-layer nodes for approximately 18 days (4096 epochs). This window comfortably covers optimistic rollup fraud proof periods and gives all parties time to retrieve the data, drastically reducing long-term node storage costs.

05

Impact on Layer 2 Costs

By providing a dedicated, lower-cost data channel, blob size limits directly reduce the cost of submitting data to Ethereum for Optimistic Rollups and ZK-Rollups. The cost per byte of data is typically 10-100x cheaper than using calldata, making Layer 2 transactions significantly more affordable.

06

Future-Proofing for Danksharding

The 128 KB size and data structure are designed for compatibility with full Danksharding. In the final design, the network will scale to 64 blobs per block, and validators will use Data Availability Sampling (DAS) to securely verify blob availability without downloading all data.
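Card 03 above references the dedicated blob fee mechanism; the sketch below transcribes the base fee calculation from the EIP-4844 specification (constants and the fake_exponential routine are taken from the EIP, so treat this as an illustrative transcription rather than client code):

```python
# Fee market constants from EIP-4844.
MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477
GAS_PER_BLOB = 2**17                           # 131,072
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB   # the 3-blob target

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator)."""
    i, output, numerator_accum = 1, 0, factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def get_base_fee_per_blob_gas(excess_blob_gas: int) -> int:
    return fake_exponential(
        MIN_BASE_FEE_PER_BLOB_GAS, excess_blob_gas, BLOB_BASE_FEE_UPDATE_FRACTION
    )

def calc_excess_blob_gas(parent_excess: int, parent_blob_gas_used: int) -> int:
    """Excess grows when blocks exceed the 3-blob target and decays below it."""
    if parent_excess + parent_blob_gas_used < TARGET_BLOB_GAS_PER_BLOCK:
        return 0
    return parent_excess + parent_blob_gas_used - TARGET_BLOB_GAS_PER_BLOCK
```

Because blob gas paid at this base fee is burned, sustained demand above the three-blob target raises the price exponentially until usage falls back toward the target.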

EIP-4844 DATA STORAGE

Blob Data vs. Calldata: A Comparison

A technical comparison of the two primary methods for storing transaction data on Ethereum, focusing on cost, capacity, and use cases.

| Feature | Blob Data (EIP-4844) | Calldata (Legacy) |
| --- | --- | --- |
| Primary Purpose | High-volume, temporary data for Layer 2s | Persistent contract execution input |
| Storage Duration | ~18 days (4096 epochs) | Permanent (on-chain forever) |
| Cost Model | Separate blob gas fee, designed to be cheap | Main execution gas, scales with data size |
| Data Capacity per Tx | Up to 6 blobs (~768 KB total) | Block gas limit dependent (~100-200 KB practical) |
| Accessibility | Off-chain, via beacon node | On-chain, directly in transaction |
| EVM Accessible | No (only the blob's commitment hash is visible) | Yes (fully readable by contracts) |
| Typical Use Case | Layer 2 rollup batch data | Contract function arguments |

BLOB SIZE LIMIT

Implementation in Ethereum (EIP-4844)

An examination of the technical constraints and design rationale for the maximum data size of blobs introduced by Ethereum's Proto-Danksharding upgrade.

The blob size limit in EIP-4844, also known as Proto-Danksharding, is a protocol-enforced size of 128 KB per individual blob. This limit is a critical parameter that balances the goals of scaling Ethereum's data availability for Layer 2 rollups with the practical constraints of network propagation and storage. Each blob is a dedicated data packet attached to a transaction, stored separately from the main execution chain and subject to a short pruning period. The 128 KB cap ensures that these large data packets can be efficiently broadcast across the peer-to-peer network without causing excessive latency or overwhelming individual nodes, acting as a safeguard during the initial phase of Ethereum's sharding roadmap.

This limit is enforced through consensus rules and gas economics. Blob transactions propagate with a network wrapper carrying the blobs and their KZG proofs, and blocks are capped at six blobs in the initial specification, creating a theoretical per-block data availability limit of 768 KB. The price of blob gas, derived from the protocol's running excess_blob_gas counter, is independent of execution gas and adjusts dynamically based on demand for blob space. This two-dimensional fee market prevents congestion in blob data from spilling over into the gas fees for standard Ethereum transactions, ensuring that the core network's performance remains stable even as blob usage fluctuates.

The choice of 128 KB is a deliberate stepping stone. It provides a substantial increase in cheap data availability for rollups (each blob holds roughly several typical blocks' worth of average calldata) while remaining manageable for the existing network. All consensus nodes (validators) are required to download and validate blob data for a short window (currently ~18 days), after which the data can be pruned. Full Danksharding will replace this download-everything model with data availability sampling (DAS), scaling the blob count per block massively while validators check only small random portions of the total data, relying on erasure coding and the fixed blob size for security.
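The ~18-day retention window follows directly from consensus-layer constants, as a quick arithmetic check shows (values from the Deneb consensus specification):

```python
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32
MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS = 4096  # minimum blob retention period

retention_days = (
    MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS * SLOTS_PER_EPOCH * SECONDS_PER_SLOT
) / 86_400
print(f"{retention_days:.1f} days")  # ~18.2 days
```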

BLOB SIZE LIMIT

Design Considerations & Trade-offs

The blob size limit is a critical protocol parameter that balances scalability, cost, and network security. These cards explore the key engineering trade-offs involved in its design.

01

Scalability vs. Node Resource Burden

A larger blob size increases data availability (DA) throughput but places a heavier burden on full nodes and validators, who must download and propagate all blob data. This creates a trade-off between network capacity and the hardware requirements for participation, potentially impacting decentralization.

  • Pro: More DA capacity per block.
  • Con: Higher bandwidth and storage costs for nodes.
02

Cost Efficiency vs. Spam Prevention

The limit, combined with a separate blob gas market, determines the cost of data publication. A higher limit with low gas costs makes Layer 2 rollups cheaper but risks cheap spam filling blocks with low-value data. A lower limit or high cost secures the network but can make L2 transactions prohibitively expensive.

  • Key Mechanism: EIP-4844's blob gas model dynamically prices blob space separately from execution gas.
03

Data Availability Sampling (DAS) & Security

The blob size is intrinsically linked to the efficiency of Data Availability Sampling. Light clients and validators sample small random chunks of blob data to verify its availability. A larger blob size means more chunks must be covered to achieve the same statistical security guarantee, increasing verification time and complexity (a back-of-the-envelope version of this sampling math follows these cards).

  • Design Goal: The limit must align with practical DAS parameters to ensure lightweight, secure verification.
04

Forward Compatibility & Proto-Danksharding

EIP-4844's 128 KB per blob limit is a starting point for proto-danksharding, a stepping stone to full danksharding. The design anticipates future increases. The current limit allows the network to test the blob mechanism and blob gas market with lower risk before scaling toward the much higher blob counts of full danksharding.

  • Evolution: The limit is a parameter that can be increased via future Ethereum Improvement Proposals (EIPs).
05

Interoperability with Layer 2 Architectures

The limit directly constrains rollup throughput. Optimistic rollups and ZK-rollups batch transactions into calldata or blobs. For the same fee, a 128 KB blob typically carries several times more data than calldata, but rollup designs must optimize their batch compression and submission logic to fit within this new, larger but still limited unit of data.

06

Comparison to Calldata Storage

Blobs provide a data availability solution distinct from using transaction calldata. This table highlights the core trade-offs:

  • Persistence: Blobs are pruned after ~18 days; calldata is stored in Ethereum history forever.
  • Cost: Blob gas is designed to be significantly cheaper for equivalent data.
  • Node Load: Blobs are not fully executed by all nodes, reducing computational load versus calldata.
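Card 03 above defers the sampling math; here is a back-of-the-envelope sketch under a standard assumption: with 2x erasure coding, data that cannot be reconstructed must be missing at least half its chunks, so each uniform random sample independently hits a missing chunk with probability at least 1/2.

```python
def samples_for_confidence(target_escape_prob: float) -> int:
    """Samples needed so withheld data escapes detection with probability
    below target, assuming each sample fails with probability >= 1/2."""
    k, p_escape = 0, 1.0
    while p_escape > target_escape_prob:
        k += 1
        p_escape *= 0.5
    return k

print(samples_for_confidence(1e-9))  # 30 samples: ~1-in-a-billion escape odds
```

Under this model the escape probability halves with each additional sample, which is what makes lightweight verification feasible even as total blob data grows.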
BLOB SIZE LIMIT

Impact on the Ecosystem

The Blob Size Limit is a key parameter in Ethereum's data availability layer, directly influencing network capacity, costs, and the viability of Layer 2 solutions.

01

Blob Gas Market Dynamics

Blob transactions consume blob gas, a separate resource from execution gas. The limit creates a capped supply for blob space. When demand exceeds the per-block limit, a blob gas fee market activates, increasing costs. This mechanism dynamically prices and rations the scarce blob resource across competing rollups and users.

02

Trade-off: Throughput vs. Decentralization

Increasing the limit improves throughput but raises the hardware burden for nodes. Larger blobs require more bandwidth, storage, and processing power. The ecosystem must carefully adjust the limit to scale without compromising node decentralization, as fewer participants could afford to run full nodes if requirements become too high.
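To put that node burden in perspective, a rough steady-state bandwidth figure at the current limits (12-second slots assumed; gossip and redundancy overhead ignored):

```python
BLOB_SIZE_BYTES = 131_072
MAX_BLOBS_PER_BLOCK = 6
SECONDS_PER_SLOT = 12

worst_case = MAX_BLOBS_PER_BLOCK * BLOB_SIZE_BYTES / SECONDS_PER_SLOT
print(f"{worst_case / 1024:.0f} KiB/s")  # 64 KiB/s of blob download at full blocks
```

Raising the per-block blob count multiplies this figure directly, which is why limit increases are weighed against commodity hardware and home bandwidth.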

BLOB SIZE LIMIT

Frequently Asked Questions (FAQ)

Common questions about the technical constraints and implications of blob size limits in blockchain protocols, particularly Ethereum's EIP-4844.

The blob size limit is a protocol-enforced constraint on the amount of data that can be included in a blob-carrying transaction under EIP-4844 (Proto-Danksharding). A single blob is fixed at 128 KB (131,072 bytes), and each block has a target of 3 blobs and a maximum of 6 blobs. This limit is a critical consensus parameter that prevents blocks from becoming too large and ensures network stability by capping the data load on nodes.

Key Details:

  • Per Blob: 128 KB (4096 field elements × 32 bytes each).
  • Per Block Target: ~384 KB (3 blobs).
  • Per Block Max: ~768 KB (6 blobs).

This structure allows Layer 2 rollups to post cheaper data commitments while protecting the Ethereum network from being overwhelmed by data.