
Why Ethereum's Merge Was Just the First Step in a Long Efficiency Journey

The Merge solved consensus energy waste, but true blockchain efficiency requires scaling execution. This analysis breaks down why rollups and EIP-4844 are the next non-negotiable upgrades for a sustainable, high-throughput future.

THE EFFICIENCY FRONTIER

Introduction

The Merge shifted Ethereum's consensus mechanism but left its core scalability and user experience bottlenecks unresolved.

Proof-of-Stake was table stakes. The Merge's primary achievement was replacing energy-intensive mining with staking, a prerequisite for future upgrades like proposer-builder separation (PBS) and single-slot finality. It did not reduce gas fees or increase transaction throughput.

Scalability remains an execution-layer problem. Post-Merge, the bottleneck moved from consensus to execution. This created the market for optimistic rollups (Arbitrum, Optimism) and ZK-rollups (zkSync, Starknet), which handle computation off-chain.

User experience is still broken. High base-layer fees and fragmented liquidity across L2s necessitate complex bridging via Across or Stargate. The next phase, The Surge, directly targets this with proto-danksharding (EIP-4844) for cheaper L2 data.

Evidence: Ethereum's average transaction fee post-Merge remains volatile, often exceeding $10, while rollups like Arbitrum process over 200,000 daily transactions at a fraction of the cost.

THE EFFICIENCY FRONTIER

Executive Summary

The Merge transitioned Ethereum to Proof-of-Stake, but the core constraints of monolithic architecture—high fees, low throughput, and state bloat—remain the primary bottlenecks to global adoption.

01

The Problem: Monolithic Inefficiency

Ethereum's single-threaded execution layer creates a scalability trilemma: you can't optimize for decentralization, security, and scalability simultaneously. This manifests as:
- ~15-30 TPS base layer capacity
- $10+ gas fees during congestion
- Exponential state growth burdening all nodes

~15 TPS
Base Capacity
$10+
Peak Gas
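The ~15-30 TPS figure falls out of simple block-space arithmetic. A minimal sketch, assuming mainnet's 30M gas block limit and 12-second slots (both protocol-tunable) and an illustrative 150k-gas "typical" DeFi transaction:

```python
# Back-of-envelope Ethereum L1 throughput estimate.
# Assumptions (not protocol constants forever): 30M gas limit, 12s slots.
GAS_LIMIT_PER_BLOCK = 30_000_000
SLOT_TIME_SECONDS = 12
GAS_SIMPLE_TRANSFER = 21_000    # cheapest possible transaction
GAS_TYPICAL_DEFI_TX = 150_000   # rough, assumed average for a swap/deposit

def max_tps(gas_per_tx: int) -> float:
    """Transactions per second if every block were full of such txs."""
    return GAS_LIMIT_PER_BLOCK / gas_per_tx / SLOT_TIME_SECONDS

print(f"Simple transfers: ~{max_tps(GAS_SIMPLE_TRANSFER):.0f} TPS")
print(f"Typical DeFi txs: ~{max_tps(GAS_TYPICAL_DEFI_TX):.0f} TPS")
```

Simple transfers cap out around ~119 TPS in theory, but real blocks mix heavier contract calls, which is why observed throughput sits in the teens.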
02

The Solution: Modular Execution

Decoupling execution into specialized layers (rollups) is the only viable path to scale without sacrificing security. Optimistic Rollups like Arbitrum and Optimism and ZK-Rollups like zkSync and StarkNet handle computation off-chain, posting compressed transaction data along with validity proofs (ZK) or fraud-proof guarantees (optimistic) to Ethereum.
- 1,000-4,000 TPS per rollup
- ~$0.01-0.10 transaction costs
- Inherits Ethereum's $50B+ security

1000x+
Throughput Gain
-99%
Cost vs L1
03

The Next Hurdle: Data Availability

Rollups need cheap, abundant space to post their transaction data. Ethereum's calldata is expensive, creating a new bottleneck. Proto-Danksharding (EIP-4844) introduces blob-carrying transactions, a dedicated data layer.
- ~1.3 MB per slot data capacity
- ~100x cheaper data posting vs calldata
- Enables $0.001 L2 fees

~1.3 MB
Per Slot Data
-99%
Data Cost
04

The Endgame: Verifiable Compute

The final architectural shift is moving from re-executing transactions to verifying cryptographic proofs. ZK-EVMs (like those from Scroll and Taiko) and ZK coprocessors (like RISC Zero) enable trustless off-chain computation.
- Instant finality for cross-rollup bridges
- Native privacy via zk-proofs
- Enables parallel execution at the VM level

Instant
Finality
Parallel
Execution
THE EXECUTION LAYER

The Real Bottleneck Was Never Consensus

The Merge solved energy waste, but the fundamental constraint on Ethereum's throughput and cost is the monolithic execution layer, not the consensus mechanism.

Proof-of-Stake was a prerequisite, not a solution. The Merge's primary achievement was slashing energy consumption by ~99.95%. It did not increase transaction throughput or reduce gas fees, which remain bound by the single-threaded execution of the EVM.

The real bottleneck is execution. Ethereum's consensus layer now handles thousands of validators efficiently, but the EVM processes transactions sequentially. This creates a hard cap on blockspace, making gas fees a pure auction for a scarce resource.

Rollups are the execution escape hatch. Protocols like Arbitrum and Optimism bypass this bottleneck by executing transactions off-chain and posting compressed proofs to L1. They demonstrate that decoupling execution from consensus is the scaling path.

Evidence: Post-Merge, average gas fees have not structurally declined. The L2 ecosystem, however, now processes over 90% of user transactions, proving demand shifted to superior execution environments.

EXECUTION LAYER BOTTLENECKS

The Post-Merge Efficiency Stack: Where the Work Happens

Comparing the core architectural approaches to scaling Ethereum's execution capacity post-Merge, moving from monolithic to modular designs.

| Architectural Metric | Monolithic L1 (Ethereum Mainnet) | Optimistic Rollup (Arbitrum, Optimism) | ZK-Rollup (zkSync Era, StarkNet) | Modular Execution Layer (EigenDA, Celestia) |
| --- | --- | --- | --- | --- |
| Data Availability Cost per MB | $1,200 - $8,000 | $3 - $40 (via calldata) | $0.25 - $3 (via blobs) | < $0.10 (off-chain) |
| State Growth per TPS | ~15 KB/sec | ~0.5 KB/sec (fault proofs) | ~0.05 KB/sec (validity proofs) | 0 KB/sec (stateless) |
| Time to Finality | ~12.8 minutes (2 epochs) | ~1 week (challenge period) | ~10 minutes (ZK proof generation) | < 2 minutes (depends on DA layer) |
| Trust Assumption | Ethereum Validator Set | 1-of-N Honest Actor (Security Council) | Cryptographic (ZK-SNARK/STARK) | Cryptographic + Economic (Data Availability Sampling) |
| Max Theoretical TPS (EVM) | ~15 - 45 | ~1,000 - 4,000 | ~2,000 - 20,000+ | 100,000+ (horizontally scalable) |
| Proposer-Builder Separation (PBS) Integration | Manual via MEV-Boost | Centralized Sequencer | Centralized Sequencer | Native (Decentralized Sequencing via EigenLayer) |
| Client Diversity Criticality | Extreme (Prysm < 33% goal) | High (single sequencer failure) | High (single prover failure) | Low (modular fault isolation) |

THE EXECUTION LAYER

Layer 2 Rollups: The Execution Efficiency Engine

The Merge shifted Ethereum's consensus to proof-of-stake, but did not solve its core constraint of limited, expensive block space for execution.

The Merge was consensus-layer optimization. It made Ethereum's proof-of-stake consensus more secure and energy-efficient, but it did not increase the network's capacity to process transactions. The fundamental bottleneck of execution on Layer 1 remained.

Rollups are the execution layer. They move computation and state storage off-chain, posting only compressed transaction data and validity proofs to Ethereum. This creates a massive scalability multiplier while inheriting Ethereum's security, unlike sidechains like Polygon PoS.

Optimistic vs. ZK Rollups diverge on trust. Optimistic Rollups (Arbitrum, Optimism) assume transactions are valid, using a fraud-proof challenge window. ZK-Rollups (zkSync, Starknet) use validity proofs (ZK-SNARKs/STARKs) for instant finality, creating a superior trust model but with higher computational overhead.

Evidence: Ethereum mainnet handles ~15-20 TPS, while the Arbitrum One rollup consistently processes over 200 TPS, a more than tenfold gain that demonstrates the execution efficiency of moving computation off the base layer.
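The efficiency gain comes from amortization: one batch posting covers thousands of user transactions. A minimal sketch using Ethereum's calldata pricing from EIP-2028 (16 gas per nonzero byte, 4 per zero byte); the batch overhead, per-transaction byte counts, and zero-byte ratio are illustrative assumptions, not measured rollup figures:

```python
# Why batching + compression slashes per-user cost on a rollup.
GAS_PER_NONZERO_BYTE = 16       # EIP-2028 calldata pricing
GAS_PER_ZERO_BYTE = 4
BATCH_OVERHEAD_GAS = 200_000    # assumed fixed cost to record a batch on L1

def batch_gas(tx_bytes: int, txs_per_batch: int, zero_ratio: float = 0.4) -> float:
    """Total L1 gas to post one compressed batch as calldata."""
    total_bytes = tx_bytes * txs_per_batch
    data_gas = total_bytes * (zero_ratio * GAS_PER_ZERO_BYTE
                              + (1 - zero_ratio) * GAS_PER_NONZERO_BYTE)
    return BATCH_OVERHEAD_GAS + data_gas

# A well-compressed rollup transfer might take ~12 bytes vs ~110 raw.
per_tx_gas = batch_gas(tx_bytes=12, txs_per_batch=1_000) / 1_000
print(f"Amortized L1 gas per rollup tx: ~{per_tx_gas:.0f}")
# Compare against 21,000 gas for the cheapest possible L1 transaction.
```

Even under these rough assumptions, the amortized cost lands two orders of magnitude below a native L1 transfer, which is exactly the gap the fee data above reflects.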

THE DATA AVAILABILITY BOTTLENECK

The Next Leap: Proto-Danksharding (EIP-4844)

The Merge shifted consensus to Proof-of-Stake, but did not solve Ethereum's core scalability constraint: the cost and size of on-chain data.

01

The Problem: L2s Are Paying for Storage They Don't Need

Rollups like Arbitrum and Optimism post transaction data to Ethereum for security, but they must pay for permanent storage on the execution layer. This is a massive, unnecessary cost passed to users.
- Cost Structure: Up to ~90% of an L2 transaction fee is for this permanent data posting.
- Inefficiency: Data is only needed for a few weeks for fraud/validity proofs, not forever.

~90%
Fee Overhead
Permanent
Storage Waste
02

The Solution: Blob-Carrying Transactions

EIP-4844 introduces a new transaction type that carries large data "blobs" separate from execution. These blobs are stored by consensus nodes for ~18 days and then pruned.
- Separation of Concerns: The execution layer processes transactions; the consensus layer provides temporary data availability.
- Order of Magnitude Cheaper: Blob space is priced via a separate fee market, decoupling it from mainnet congestion.

~18 Days
Data Retention
10-100x
Cheaper Data
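The separate fee market works like EIP-1559 but for blob gas: the blob base fee rises exponentially while blocks carry more blob data than the target, and decays when they carry less. A sketch following the EIP-4844 specification as published (the `fake_exponential` helper and constants come from the EIP; treat this as a reading of the spec, not client code):

```python
# EIP-4844 blob base fee mechanism (per the EIP's pseudocode).
MIN_BASE_FEE_PER_BLOB_GAS = 1            # wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477
TARGET_BLOB_GAS_PER_BLOCK = 393_216      # 3 blobs * 131,072 gas each

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator/denominator)
    via Taylor expansion, as specified in EIP-4844."""
    i, output = 1, 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = numerator_accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    """Fee per blob gas given accumulated above-target usage."""
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

print(blob_base_fee(0))   # at zero excess the fee sits at the 1-wei floor
print(blob_base_fee(10 * TARGET_BLOB_GAS_PER_BLOCK))
```

Because the floor is 1 wei and demand for blob space starts far below capacity, blob data begins nearly free and only reprices when rollups sustainably exceed the target, which is the decoupling from mainnet congestion described above.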
03

The Architecture: A Stepping Stone to Full Danksharding

Proto-Danksharding is not the final form. It is a minimal, production-ready spec that establishes the blob market and paves the way for scaling to 64 blobs per block.
- Backwards Compatibility: Requires no changes to the existing EVM or smart contracts.
- Foundation for Rollups: Enables zkEVMs and optimistic rollups to scale without protocol changes, directly benefiting ecosystems like zkSync, Starknet, and Base.

64 Blobs
Future Target
0 Changes
To EVM
04

The Impact: Unlocking Hyper-Scalable L2s

With data costs reduced by orders of magnitude, the economic model for L2s fundamentally changes. Sub-cent transactions become sustainable, enabling new use cases.
- New App Possibilities: Micro-transactions for gaming, high-frequency DeFi, and cheap social interactions.
- Ecosystem Flywheel: Cheaper L2s attract more users and developers, increasing the value captured by the Ethereum security base.

<$0.01
Target Tx Cost
1000+ TPS
Per L2 Chain
THE MONOLITHIC THESIS

The Solana Counter-Narrative: A Different Tradeoff

Solana's architecture demonstrates that the Merge's shift to Proof-of-Stake was necessary but insufficient for scaling.

The Merge was table stakes. It solved Ethereum's energy problem but not its data problem. The core bottleneck remains the monolithic execution model, where a single node sequentially processes all transactions. This design caps throughput regardless of consensus mechanism.

Solana's bet is on hardware. Its monolithic architecture optimizes for single-machine performance using techniques like Pipelining and Sealevel. This approach delivers high throughput now but faces physical scaling limits dictated by Moore's Law and network latency.

The tradeoff is decentralization. Solana's high hardware requirements for validators create a centralizing pressure and operational risk. The network's past outages prove that pushing a single state machine to its limits introduces systemic fragility.

Evidence: Solana's theoretical peak of 65k TPS requires specialized hardware, while Ethereum's modular roadmap (via rollups like Arbitrum and Optimism) distributes execution load across thousands of nodes, trading raw specs for resilience.

FREQUENTLY ASKED QUESTIONS

FAQ: The Post-Merge Efficiency Roadmap

Common questions about why Ethereum's transition to Proof-of-Stake was just the beginning of its scaling and efficiency upgrades.

What is the next major upgrade after The Merge?

The next major upgrade is The Surge, focusing on scaling via rollups and data sharding. This phase introduces proto-danksharding (EIP-4844) to create cheap data blobs, drastically lowering L2 transaction fees for chains like Arbitrum, Optimism, and zkSync.

THE EFFICIENCY JOURNEY

Takeaways: The Path to a Green, High-Throughput Ethereum

The Merge solved energy waste, but true scalability requires a multi-layered architectural overhaul.

01

The Problem: The Data Availability Bottleneck

Rollups are throttled by the cost and speed of posting data to Ethereum for security. This is the primary constraint on scaling and cost reduction today.
- Cost: Data posting can constitute >90% of a rollup's L1 expense.
- Throughput: Limited to ~80 KB/s of calldata on mainnet, capping total TPS.

~80 KB/s
Data Cap
>90%
Rollup Cost
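The ~80 KB/s cap is a consequence of calldata pricing, not bandwidth. A quick check of the ceiling, assuming a 30M gas limit, 12-second slots, and EIP-2028's 16 gas per nonzero byte (real blocks also carry execution, so practical rollup bandwidth lands well below this theoretical maximum):

```python
# Deriving the calldata bandwidth ceiling from gas pricing.
GAS_LIMIT = 30_000_000
SLOT_SECONDS = 12
GAS_PER_NONZERO_BYTE = 16   # EIP-2028 calldata cost

# Even a block that is 100% calldata is bounded by gas, not bytes.
max_bytes_per_block = GAS_LIMIT / GAS_PER_NONZERO_BYTE
max_kb_per_second = max_bytes_per_block / 1024 / SLOT_SECONDS
print(f"Theoretical calldata ceiling: ~{max_kb_per_second:.0f} KB/s")
```

The pure-calldata ceiling comes out around ~150 KB/s; since blocks must also fit ordinary execution, sustained rollup data throughput sits closer to the ~80 KB/s figure cited above.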
02

The Solution: Proto-Danksharding (EIP-4844)

Introduces blob-carrying transactions, a dedicated data channel for rollups that is cheap and ephemeral. This is the next concrete step on the roadmap.
- Impact: Expected 10-100x cost reduction in rollup data fees.
- Mechanism: Data blobs are stored for ~18 days by consensus nodes, not execution clients, minimizing state growth.

10-100x
Cheaper Data
~18 days
Blob Storage
03

The Problem: Synchronous Composability is Expensive

Today's dApps interact within a single, congested block space, forcing all transactions to compete for the same global resource. This creates volatile fees and limits complex, cross-contract applications.
- Result: Gas auctions and $100+ transaction fees during peak demand.
- Constraint: Limits innovation in DeFi and on-chain gaming.

$100+
Peak Fees
Single
Resource Layer
04

The Solution: A Rollup-Centric Future with Parallel Execution

Ethereum becomes a settlement and data availability layer for hundreds of specialized rollups (e.g., Arbitrum, Optimism, zkSync). Throughput scales by adding new execution lanes.
- Architecture: Enables parallel execution across rollups, breaking the single-lane bottleneck.
- Ecosystem: Drives specialization (DeFi rollups, gaming rollups, social rollups) with interoperability bridges like LayerZero and Axelar.

100s
Execution Lanes
Parallel
Execution
05

The Problem: Full Nodes are Becoming Inaccessible

The hardware requirements to run an Ethereum node are rising with state growth, threatening decentralization. If only large entities can run nodes, the network becomes more trusted and less trustless.
- Current State: Requires a ~2 TB SSD and lengthy sync times.
- Risk: Centralization of node operation to a few professional providers.

~2 TB
SSD Required
Rising
Hardware Cost
06

The Solution: Verkle Trees & Stateless Clients

A cryptographic upgrade that allows validators to verify blocks without storing the entire state. This is the final piece for long-term sustainability.
- Mechanism: Uses vector commitments to create tiny, constant-sized proofs of state.
- Outcome: Enables light clients on phone-level hardware to participate fully, preserving decentralization at scale.

Constant
Proof Size
Phone
Client Hardware
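The witness-size gap between Merkle and Verkle trees can be made concrete with rough arithmetic. A sketch under stated assumptions: a hexary Merkle tree needs 15 sibling hashes per level, while a wide Verkle tree needs one small commitment per level plus a single aggregated opening proof. The state size and the ~150-byte aggregated-proof figure are illustrative, not measured mainnet numbers:

```python
# Rough intuition for why Verkle trees shrink state witnesses.
import math

HASH_BYTES = 32
N_STATE_ITEMS = 256_000_000   # assumed order of magnitude of state entries

def merkle_branch_bytes(n: int, width: int = 16) -> float:
    """Hexary Merkle proof: (width - 1) sibling hashes per tree level."""
    depth = math.ceil(math.log(n, width))
    return depth * (width - 1) * HASH_BYTES

def verkle_branch_bytes(n: int, width: int = 256) -> float:
    """Verkle proof: one ~32-byte commitment per level, plus one
    aggregated opening proof of roughly constant size (assumed ~150 B)."""
    depth = math.ceil(math.log(n, width))
    return depth * HASH_BYTES + 150

print(f"Merkle branch:  ~{merkle_branch_bytes(N_STATE_ITEMS) / 1024:.1f} KB")
print(f"Verkle branch:  ~{verkle_branch_bytes(N_STATE_ITEMS):.0f} bytes")
```

A single branch shrinks by roughly an order of magnitude, and because Verkle openings aggregate across many keys, full block witnesses compress far more, which is what makes stateless verification on light hardware plausible.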