Dencun solved data, not execution. The upgrade slashed L2 transaction costs by 90% via proto-danksharding, but it only addressed data availability. The execution bottleneck remains on the L1, where all L2 state roots must be verified.
Why Execution Performance Still Bottlenecks Ethereum
The Merge and Dencun solved consensus and data availability. Now, the EVM's single-threaded execution is the final, hardest bottleneck for scaling Ethereum's throughput and reducing user costs.
Introduction: The Post-Dencun Reality Check
Dencun's data availability gains are offset by persistent execution layer constraints that still throttle all L2s.
L2 performance is L1-gated. Every Arbitrum or Optimism batch competes for the same L1 block space for finalization. This creates a hard throughput ceiling; even with infinite cheap data, L2s cannot scale execution independently of Ethereum's ~15 TPS base layer.
Parallel EVMs are a distraction. Chains like Monad and Sei tout 10k TPS, but their performance is isolated. The interoperability tax means any cross-chain action, via LayerZero or Axelar, ultimately hits the L1 execution wall, making isolated speed metrics misleading.
Evidence: Post-Dencun, average L1 base fee spiked 300% during peak NFT mints, causing corresponding finality delays for Starknet and zkSync Era batches. Cheap data increases demand, exacerbating the core execution constraint.
The Three Pillars of the Execution Bottleneck
Ethereum's scalability is gated by the EVM's fundamental design, not just block space. Here's what's actually limiting throughput.
The Problem: Single-Threaded EVM
The EVM processes transactions sequentially on a single core, creating a hard physical limit on compute per block. This is the root cause of high gas fees during congestion.
- Sequential Processing: No parallel execution of unrelated transactions.
- Gas Limit Ceiling: Current ~30M gas/block caps total compute, regardless of hardware.
- Inefficient State Access: All transactions compete for the same global state tree.
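To see what sequential execution leaves on the table, consider a hypothetical scheduler in which each transaction declares its read/write sets up front (a simplification; the real EVM declares nothing). Transactions that touch disjoint state could run in parallel batches; the EVM runs them all in one serial line regardless. The transaction names and state keys below are purely illustrative:

```python
# Toy conflict model: tx_a and tx_b conflict when one writes state the
# other reads or writes. Only conflict-free txs could run in parallel.
def conflicts(tx_a, tx_b):
    """True if the two transactions touch overlapping state."""
    return bool(
        tx_a["writes"] & (tx_b["reads"] | tx_b["writes"])
        or tx_b["writes"] & tx_a["reads"]
    )

txs = [
    {"id": "swap-1", "reads": {"poolA"}, "writes": {"poolA"}},
    {"id": "mint-1", "reads": {"nftX"},  "writes": {"nftX"}},
    {"id": "swap-2", "reads": {"poolA"}, "writes": {"poolA"}},
]

# Greedily group mutually non-conflicting txs into parallel batches.
batches = []
for tx in txs:
    for batch in batches:
        if all(not conflicts(tx, other) for other in batch):
            batch.append(tx)
            break
    else:
        batches.append([tx])

for i, batch in enumerate(batches):
    print(f"batch {i}: {[t['id'] for t in batch]}")
```

The unrelated swap and mint land in the same batch; only the second swap on the same pool must wait. The EVM would execute all three sequentially.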
The Problem: State Growth & Access
Ethereum's state (account balances, contract storage) grows linearly with usage, making it slower and more expensive for nodes to sync and validate.
- State Bloat: The global state is now over 1 TB, growing ~50 GB/year.
- Random Access Bottleneck: Reading/writing to this massive, random-access Merkle-Patricia tree is I/O intensive.
- Centralization Pressure: High hardware requirements push out smaller node operators.
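A back-of-envelope sketch of why trie access is I/O-bound (the account count and IOPS figures are assumptions, not measured values): each lookup in a hexary Merkle-Patricia trie touches roughly log16(N) trie nodes, and each node is typically a random database read.

```python
import math

accounts = 250_000_000   # assumed order of magnitude for Ethereum accounts
trie_depth = math.ceil(math.log(accounts, 16))  # nodes touched per lookup

ssd_iops = 100_000                    # assumed random-read IOPS of an SSD
lookups_per_sec = ssd_iops // trie_depth

print(trie_depth)        # ~random DB reads per single state lookup
print(lookups_per_sec)   # state lookups/sec the disk can sustain
```

Each extra order of magnitude of state adds trie depth, so every SLOAD gets more expensive in disk reads as the state grows; this is the motivation behind Verkle trees and stateless clients.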
The Problem: Synchronous Composability
Ethereum's killer feature—atomic composability—is also its bottleneck. Every DeFi transaction must be sequenced and settled on-chain, blocking other operations.
- Atomic Blocking: A single complex transaction (e.g., a large DEX swap) can consume significant block space and time.
- No Pre-Execution: Transactions cannot be processed speculatively or in parallel due to strict state dependencies.
- Congestion Spillover: One popular app (e.g., NFT mint) can congest the entire network for all users.
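The spillover effect is simple block-space arithmetic. The per-transaction gas figures below are assumed, typical orders of magnitude, not measurements of any specific contract:

```python
BLOCK_GAS_LIMIT = 30_000_000   # current mainnet max
COMPLEX_SWAP_GAS = 300_000     # assumed gas for a multi-hop DEX swap
NFT_MINT_GAS = 150_000         # assumed gas for a popular mint function

mints_per_block = BLOCK_GAS_LIMIT // NFT_MINT_GAS   # mints that fill a block
swap_share = COMPLEX_SWAP_GAS / BLOCK_GAS_LIMIT     # one swap's block share

print(mints_per_block)
print(f"{swap_share:.0%}")
```

Two hundred mints saturate an entire block, so a hyped mint crowds out every other application for as long as demand lasts; that is the congestion spillover in numbers.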
Deep Dive: The Anatomy of a Bottleneck
Ethereum's fundamental design trades raw speed for decentralized security, creating a hard performance ceiling at the execution layer.
Sequential EVM processing is the primary bottleneck. The Ethereum Virtual Machine processes transactions in a single-threaded sequence, preventing parallel execution of unrelated transactions. This creates a deterministic but slow state transition function.
State growth and access compound the problem. Every transaction must read and write to a massive, shared global state. This creates immense I/O pressure, as seen in the gas cost spikes for operations like SLOAD and SSTORE.
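The SLOAD pricing is concrete: since the Berlin upgrade (EIP-2929), the first touch of a storage slot in a transaction is "cold" and far more expensive than subsequent "warm" reads. The slot names below are hypothetical; the gas constants are the mainnet values:

```python
COLD_SLOAD = 2_100   # first access of a storage slot in a tx (EIP-2929)
WARM_SLOAD = 100     # every subsequent read of the same slot

def sload_gas(slots_read):
    """Total gas for a sequence of storage reads; repeats are warm."""
    seen, gas = set(), 0
    for slot in slots_read:
        gas += WARM_SLOAD if slot in seen else COLD_SLOAD
        seen.add(slot)
    return gas

# A DeFi call reading 4 distinct slots, one of them twice:
print(sload_gas(["reserve0", "reserve1", "fee", "reserve0", "owner"]))
```

The 21x cold/warm gap is the protocol pricing in the random-I/O cost of reaching untouched state, which is exactly the pressure described above.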
Rollups like Arbitrum and Optimism are execution layer offshoots, not fixes. They inherit the EVM's sequential model and state access patterns, merely shifting the bottleneck to a centralized sequencer before posting proofs to L1.
Evidence: The theoretical maximum for the current EVM is ~100 TPS. Even with full blocks, this limit is constrained by the 30 million gas block size and the computational weight of standard operations.
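That ceiling can be checked with the best case: a block packed entirely with minimal 21,000-gas ETH transfers (real blocks, full of heavier contract calls, land far lower):

```python
BLOCK_GAS_LIMIT = 30_000_000
TRANSFER_GAS = 21_000    # minimum gas for a simple ETH transfer
BLOCK_TIME_S = 12

tx_per_block = BLOCK_GAS_LIMIT // TRANSFER_GAS
tps = tx_per_block / BLOCK_TIME_S

print(tx_per_block, round(tps))
```

Roughly 119 TPS with nothing but plain transfers, which is where the "~100 TPS theoretical maximum" figure comes from.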
Execution vs. Data: The Scaling Dichotomy
Comparing the performance constraints of execution (compute) versus data availability (storage) on Ethereum's scalability.
| Performance Metric | Ethereum L1 (Status Quo) | Data-Availability Scaling (e.g., Celestia, EigenDA) | Execution Scaling (e.g., Arbitrum, Optimism, zkSync) |
|---|---|---|---|
| Peak Transactions Per Second (TPS) | ~15-30 | N/A (no execution layer) | 2,000 - 20,000+ (off-chain) |
| Block Gas Limit (Main Constraint) | 30M gas (compute & storage) | ~0.125 MB per blob (data only) | N/A (sovereign gas limits) |
| Cost Driver for End-User | Gas auction for global block space | Data publishing fee (~$0.001 - $0.01 per 125 KB) | Sequencer fee + L1 settlement/data cost |
| Primary Bottleneck | In-block execution & state growth | Bandwidth & storage of full nodes | Prover/sequencer hardware & fraud-proof window |
| State Growth Impact | High (permanent, global state) | Low (data is prunable after ~18 days) | Medium (compressed, but still accumulates) |
| Time to Finality (L1 Inclusion) | ~13 minutes (2 epochs, economic finality) | ~13 minutes (for data attestation) | < 1 second (soft confirm) + ~1 hour (ZK validity proof) to ~7 days (optimistic fraud-proof window) |
| Trust Assumption for Security | Ethereum validators | Data Availability Sampling (DAS) | Parent chain (Ethereum) for fraud/validity proofs |
| Developer Overhead for Migration | N/A (baseline) | Low (modular stack integration) | High (new VM, tooling, bridging) |
Counter-Argument: "But L2s Solve Everything"
Layer 2s shift the data availability problem but create new execution bottlenecks that constrain the entire ecosystem.
L2s export execution, not finality. They batch transactions and post compressed data to Ethereum L1 for settlement. The L1 data availability layer becomes the global, non-negotiable constraint for all L2s, limiting their aggregate throughput.
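How hard is that constraint? A rough sketch (the average compressed transaction size is an assumption; real sizes vary widely by rollup and workload) of the aggregate throughput ceiling that post-Dencun blob space imposes on all rollups combined:

```python
BLOB_BYTES = 128 * 1024    # one EIP-4844 blob
TARGET_BLOBS = 3           # post-Dencun target per L1 block
BLOCK_TIME_S = 12
COMPRESSED_TX_BYTES = 100  # assumed average compressed L2 transaction size

da_bytes_per_s = TARGET_BLOBS * BLOB_BYTES / BLOCK_TIME_S
aggregate_l2_tps = da_bytes_per_s / COMPRESSED_TX_BYTES

print(round(aggregate_l2_tps))  # shared ceiling across ALL rollups
```

A few hundred TPS of data-backed throughput, shared across every rollup posting to Ethereum. Individual L2 TPS claims are local; this budget is global.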
Sequencer centralization is a systemic risk. Networks like Arbitrum and Optimism rely on a single sequencer for transaction ordering. This creates a single point of failure for liveness and exposes users to censorship, undermining decentralization.
Cross-L2 interoperability is the new bottleneck. Moving assets between Arbitrum and Polygon zkEVM requires slow, expensive bridges like Across or Hop. This fragmentation of liquidity and poor UX negates the performance gains of individual chains.
Evidence: The 2024 Dencun upgrade reduced L2 data posting costs by ~90%. This immediately revealed the next bottleneck: sequencer capacity. During peak demand, Arbitrum's sequencer experiences mempool congestion, causing transaction delays despite low L1 fees.
Key Takeaways for Builders and Investors
Ethereum's consensus is robust, but its execution layer is the primary constraint for scalability and user experience.
The Single-Threaded EVM
Ethereum processes transactions sequentially, creating a hard throughput cap. This serial execution is the root cause of high gas fees during congestion and limits complex dApp logic.
- Bottleneck: 15M gas target / 30M gas max per block (EIP-1559).
- Impact: Congestion auctions drive fees to $50+ for simple swaps.
- Builder Focus: Architect for gas efficiency; explore parallelizable state models.
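The fee math behind the "$50+ swap" is straightforward; every input below (swap gas, congested base fee, ETH price) is an assumption for illustration:

```python
swap_gas = 150_000        # assumed gas for a typical AMM swap
base_fee_gwei = 120       # assumed base fee during congestion
eth_price_usd = 3_000     # assumed ETH spot price

fee_eth = swap_gas * base_fee_gwei * 1e-9   # gwei -> ETH
fee_usd = fee_eth * eth_price_usd

print(round(fee_usd, 2))
```

Halving a contract's gas usage halves this number directly, which is why gas-efficient architecture is a product feature, not a nicety.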
State Growth & Access Latency
The global state is a massive, ever-growing database. Reading and writing to it is I/O intensive, slowing down execution clients like Geth and Erigon.
- Problem: Full node disk footprint exceeds 1 TB, with state growing ~50 GB/year.
- Consequence: High latency for state-heavy operations (e.g., DeFi arbitrage).
- Investor Lens: Back infra that tackles state (Verkle trees, stateless clients, EigenLayer AVSs).
MEV as a Systemic Tax
Maximal Extractable Value is not just leakage; it's a direct tax on user transactions imposed by execution bottlenecks. Searchers and builders compete for block space, front-running and sandwiching users.
- Cost: $1B+ extracted annually from users.
- Solution Path: Proposer-Builder Separation (PBS), encrypted mempools, Flashbots SUAVE.
- Action: Integrate MEV protection (e.g., CowSwap, UniswapX) or risk user funds.
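The sandwich mechanic is easy to demonstrate on a toy constant-product AMM (all reserves and trade sizes below are illustrative, and fees are ignored for simplicity): the attacker buys before the victim, the victim's swap pushes the price further, and the attacker sells back at the inflated price.

```python
def swap_out(x_in, r_in, r_out):
    """Constant-product AMM output for x_in input (no fee, toy model)."""
    return r_out - (r_in * r_out) / (r_in + x_in)

rx, ry = 1_000.0, 1_000.0             # pool reserves of tokens X and Y

front = swap_out(50, rx, ry)          # attacker front-runs: buys Y with 50 X
rx, ry = rx + 50, ry - front

victim = swap_out(100, rx, ry)        # victim swaps at a worse price
rx, ry = rx + 100, ry - victim

back = swap_out(front, ry, rx)        # attacker sells Y back for X
profit = back - 50

print(round(profit, 2))               # attacker profit, paid by the victim
```

The attacker's profit comes entirely out of the victim's execution price, which is why routing through MEV-protected flows matters.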
The L2 Execution Stack
Rollups (Optimism, Arbitrum, zkSync) offload execution but inherit Ethereum's constraints as their data/security layer. Their performance is gated by Ethereum's data bandwidth and proving costs.
- Limit: ~384 KB blob data target per block post-Dencun (3 blobs × 128 KB; 6-blob max).
- Result: L2 TPS is theoretically capped; proofs (ZK) are computationally expensive.
- Opportunity: Invest in alternative DA (Celestia, EigenDA) and proof acceleration hardware.
Client Diversity & Centralization
Execution client dominance (Geth > 75%) creates a systemic risk. A bug in the majority client could halt the chain. Performance optimization is bottlenecked by a lack of competitive client development.
- Risk: Critical consensus failure if Geth has a bug.
- Builder Mandate: Run minority clients (Nethermind, Besu, Erigon).
- Investor Play: Fund next-gen clients (Reth, Silkworm) focusing on performance.
Parallel Execution Futures
The endgame is breaking the single-threaded paradigm. Solutions like Monad (parallel EVM) and Solana (Sealevel), along with parallel-execution proposals on Ethereum's own research roadmap, aim for concurrent transaction processing.
- Promise: 10-100x theoretical throughput gains.
- Challenge: Requires re-architecting state access patterns and consensus.
- Verdict: The next major infra battle will be won by execution parallelism.
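A minimal sketch of the optimistic-concurrency idea behind systems in this space (in the spirit of Block-STM; this is not Monad's or Solana's actual implementation, and the transactions and state keys are hypothetical): execute everything in parallel against a snapshot, then commit in order, re-executing only transactions whose reads were invalidated by an earlier writer.

```python
from concurrent.futures import ThreadPoolExecutor

state = {"A": 10, "B": 20, "C": 30}

def execute(tx, view):
    """Run tx against a state view; return its (reads, writes)."""
    reads = {k: view[k] for k in tx["reads"]}
    writes = {k: tx["fn"](reads) for k in tx["writes"]}
    return reads, writes

txs = [
    {"reads": ["A"], "writes": ["A"], "fn": lambda r: r["A"] + 1},
    {"reads": ["B"], "writes": ["B"], "fn": lambda r: r["B"] * 2},
    {"reads": ["A"], "writes": ["C"], "fn": lambda r: r["A"] * 10},
]

# Phase 1: optimistic parallel execution against a shared snapshot.
snapshot = dict(state)
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda t: execute(t, snapshot), txs))

# Phase 2: ordered commit; rerun any tx whose reads went stale.
for tx, (reads, writes) in zip(txs, results):
    if any(state[k] != v for k, v in reads.items()):
        reads, writes = execute(tx, state)   # conflict: re-execute serially
    state.update(writes)

print(state)
```

The two independent transactions commit straight from their parallel run; only the third, which read a value the first one changed, pays the re-execution cost. When most transactions are independent, which is the common case, throughput scales with cores.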