Setting Up ZK Infrastructure for Rollups

A practical guide to deploying the core components of a ZK rollup, from the prover and verifier to the sequencer and data availability layer.

ZK rollups scale Ethereum by executing transactions off-chain and submitting validity proofs to mainnet. The core infrastructure consists of several key components: a sequencer that orders transactions, a prover that generates cryptographic proofs (ZK-SNARKs or ZK-STARKs), a verifier smart contract on L1 that checks those proofs, and a data availability solution for posting transaction data. Setting this up requires configuring both off-chain services and on-chain contracts. Popular frameworks like Starknet (with Cairo) and zkSync Era (with its ZK Stack) provide SDKs that abstract much of this complexity.
The first step is choosing a proving system and framework. For a SNARK-based rollup, you might use Circom for circuit design and SnarkJS for proof generation. For STARKs, Cairo is the native language for Starknet. After writing your circuit or program logic, you compile it to generate a verification key and a proving key. The verification key is deployed to a verifier contract on Ethereum, while the proving key is used by your off-chain prover service. This separation is critical for security—the lightweight verifier on L1 can cheaply confirm proofs without re-executing the entire batch.
Next, you need to run the off-chain components. A sequencer node receives user transactions, orders them into batches, and executes them to compute a new state root. This state transition is fed into the prover service, which generates a succinct validity proof. This proof, along with the new state root and batch data, is sent to the L1. The verifier contract checks the proof against the verification key; if valid, it finalizes the state update. You must also decide on data availability: posting full transaction data to Ethereum Calldata (expensive but secure) or using a separate DA layer like Celestia or EigenDA.
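To make this flow concrete, here is a toy end-to-end sketch in Python. The hash-based "proof" is a stand-in for a real SNARK/STARK, and all function names are illustrative; a real L1 verifier checks a succinct proof rather than recomputing anything over the batch.

```python
import hashlib
import json

def state_root(balances: dict) -> str:
    """Toy state commitment: hash of the canonically serialized balances."""
    return hashlib.sha256(json.dumps(balances, sort_keys=True).encode()).hexdigest()

def execute_batch(balances: dict, txs: list) -> dict:
    """Apply a batch of transfers to produce the new state (sequencer's job)."""
    new = dict(balances)
    for tx in txs:
        assert new.get(tx["from"], 0) >= tx["amount"], "insufficient balance"
        new[tx["from"]] -= tx["amount"]
        new[tx["to"]] = new.get(tx["to"], 0) + tx["amount"]
    return new

def prove_batch(old_root: str, new_root: str, txs: list) -> str:
    """Stand-in for the prover: binds (old_root, new_root, batch) together."""
    payload = json.dumps({"old": old_root, "new": new_root, "txs": txs}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def l1_verify(old_root: str, new_root: str, txs: list, proof: str) -> bool:
    """Stand-in for the L1 verifier contract: checks the binding holds."""
    return proof == prove_batch(old_root, new_root, txs)

# One batch through the pipeline
state = {"alice": 100, "bob": 50}
txs = [{"from": "alice", "to": "bob", "amount": 30}]
old = state_root(state)
new_state = execute_batch(state, txs)
new = state_root(new_state)
proof = prove_batch(old, new, txs)
assert l1_verify(old, new, txs, proof)
```

The interface is the point to notice: the L1 side only ever sees roots, batch data, and a proof. In this mock the hash check plays the role of proof verification; in a real rollup, the verifier never re-executes the transactions.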
Here's a simplified flow using a hypothetical setup with Circom and SnarkJS. First, define a circuit that verifies a batch of transactions:
```circom
template BatchVerifier() {
    // ... circuit signals for transaction hashes and state roots
    // ... constraints ensuring a valid state transition
}
```
Compile it and run the trusted-setup ceremony: snarkjs powersoftau new ... followed by snarkjs plonk setup .... Export the verification key (verification_key.json) and deploy the verifier contract that SnarkJS generates. Your rollup node then uses the proving key to generate a proof for each batch and submits it to this contract.
Operational considerations are paramount. The prover is computationally intensive; you may need specialized hardware (GPUs or ASICs) for acceptable performance. The sequencer must be highly available and resistant to censorship. You'll also need a bridge contract for asset deposits and withdrawals. Note that fraud-proof dispute mechanisms belong to optimistic rollups; pure ZK rollups don't require them, since invalid state transitions simply fail proof verification. Monitoring proof generation times, L1 verification gas costs, and data availability costs is essential for estimating operational expenses and user fees.
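For fee estimation, a back-of-the-envelope model can amortize L1 verification and calldata costs over a batch. The gas figures below are assumptions for illustration, not measurements; plug in numbers from your own benchmarks.

```python
# Assumed, illustrative gas figures
VERIFY_GAS = 350_000        # L1 gas to verify one proof (varies by proving system)
CALLDATA_GAS_PER_BYTE = 16  # cost of a non-zero calldata byte on Ethereum

def l1_cost_per_tx(batch_size: int, bytes_per_tx: int,
                   gas_price_gwei: float, eth_price_usd: float) -> float:
    """Amortized USD cost of L1 verification plus calldata per L2 transaction."""
    gas = VERIFY_GAS + batch_size * bytes_per_tx * CALLDATA_GAS_PER_BYTE
    eth = gas * gas_price_gwei * 1e-9
    return eth * eth_price_usd / batch_size
```

With a batch of 1,000 transactions of 100 bytes each, 20 gwei gas, and ETH at $2,000, this comes to about $0.078 per transaction. Doubling the batch size halves the amortized verification component, which is why large batches are central to rollup economics.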
Finally, test thoroughly on a testnet before mainnet deployment. Use frameworks like Foundry or Hardhat to simulate L1 interactions. Stress-test your prover with high transaction loads and ensure your system can handle reorgs and malicious transaction sequences. The end goal is a secure, decentralized sequencer set and a robust, cost-effective proving pipeline that provides users with fast, cheap transactions backed by Ethereum's security. For production, consider leveraging established stacks like the ZK Stack or Polygon CDK to reduce development time and audit surface.
Prerequisites and System Requirements
A practical guide to the hardware, software, and foundational knowledge required to build and operate a zero-knowledge rollup.
Building a zero-knowledge rollup requires a specific technical stack. The core components are a prover for generating validity proofs, a verifier smart contract deployed on the settlement layer (like Ethereum), and a sequencer for ordering transactions. You'll need a development environment capable of handling cryptographic computations and interacting with blockchain networks. This guide assumes familiarity with blockchain fundamentals, smart contract development (Solidity), and basic command-line operations.
Your hardware must be optimized for computationally intensive proving tasks. A minimum of 16GB RAM is required, but 32GB or more is recommended for production environments. A modern multi-core CPU (Intel i7/i9 or AMD Ryzen 7/9) is essential, and a high-performance GPU (NVIDIA RTX 3080 or better) can accelerate proof generation by 10-100x for certain proving systems like zk-SNARKs. Fast SSD storage (NVMe) is critical for handling large proving keys and circuit data, which can exceed 100GB.
The software foundation starts with a Linux distribution (Ubuntu 20.04/22.04 LTS is standard) and a package manager. You must install Docker and Docker Compose for containerized deployment of node software. Node.js (v18+) and npm/yarn are needed for tooling, while Rust and Cargo are mandatory for compiling many ZK circuits and provers, such as those used by zkSync Era or Starknet. Python 3.8+ is also commonly required for scripting and testing.
You will interact with several key development tools. The Foundry toolkit (forge, cast, anvil) is preferred for Solidity development and testing. For full L1 integration testing, install an Ethereum execution client (geth or nethermind) and a consensus client (prysm, lighthouse). Familiarity with Git for version control and a code editor like VS Code with Solidity/Rust extensions completes the core setup.
Before writing code, understand the cryptographic primitives. You don't need to be a cryptographer, but you should grasp the purpose of elliptic curve pairings (used in Groth16), polynomial commitments, and hash functions (Poseidon, Keccak). Decide on a proving system: zk-SNARKs (Groth16, PLONK) offer small proof sizes but require a trusted setup, while zk-STARKs (used by Starknet) are trustless but generate larger proofs. This choice will dictate your circuit language (e.g., Circom, Noir, Cairo).
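The trade-offs above can be encoded as a small decision helper. The proof-size and setup properties below are rough, illustrative figures; consult your chosen implementation's documentation for exact numbers.

```python
# Approximate, illustrative properties of common proving systems (assumed figures)
SYSTEMS = {
    "Groth16": {"proof_bytes": 200,    "trusted_setup": "per-circuit", "post_quantum": False},
    "PLONK":   {"proof_bytes": 800,    "trusted_setup": "universal",   "post_quantum": False},
    "STARK":   {"proof_bytes": 50_000, "trusted_setup": None,          "post_quantum": True},
}

def candidates(max_proof_bytes=None, allow_trusted_setup=True, need_post_quantum=False):
    """Filter proving systems by deployment constraints."""
    out = []
    for name, props in SYSTEMS.items():
        if max_proof_bytes is not None and props["proof_bytes"] > max_proof_bytes:
            continue
        if not allow_trusted_setup and props["trusted_setup"] is not None:
            continue
        if need_post_quantum and not props["post_quantum"]:
            continue
        out.append(name)
    return out
```

For example, ruling out any trusted setup leaves only STARKs, while capping on-chain proof size at 1 KB points to Groth16 or PLONK; the circuit language then follows from this choice.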
Finally, set up your testing environment. Use a local testnet like a Foundry Anvil instance or Hardhat Network for initial development. For more realistic testing, deploy to a public testnet (Sepolia, Holesky) or a ZK rollup devnet (zkSync Era In-memory node, Starknet testnet). Allocate test ETH for gas fees and ensure your RPC endpoints are configured. With these prerequisites met, you can proceed to circuit design and node deployment.
Core Proving Infrastructure
A practical guide to the essential software and hardware components required to generate zero-knowledge proofs for Layer 2 rollups.
Zero-knowledge rollups (ZK-rollups) rely on a specialized proving infrastructure to generate cryptographic proofs of valid state transitions. The core software stack typically includes a prover, a verifier, and a state manager. The prover, often written in Rust or C++, executes transactions and generates a ZK-SNARK or ZK-STARK proof. Popular proving systems include PLONK, Groth16, and Starky. The verifier is a lightweight component, usually a smart contract on Layer 1, that checks the proof's validity. Efficient state management is critical for tracking user balances and contract data between batches.
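State management typically relies on Merkle commitments so the verifier contract can reference the entire L2 state with a single root. Here is a minimal sketch using a SHA-256 binary tree; production systems usually use sparse Merkle trees with a ZK-friendly hash such as Poseidon.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Binary Merkle root over leaf hashes; duplicates the last node on odd levels."""
    level = [h(leaf) for leaf in leaves]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def account_leaves(balances: dict) -> list:
    """Deterministic leaf encoding: sorted (address, balance) pairs."""
    return [f"{addr}:{bal}".encode() for addr, bal in sorted(balances.items())]

root = merkle_root(account_leaves({"alice": 100, "bob": 50}))
```

Any change to a single balance changes the root, which is exactly what lets a batch proof attest to a state transition by referencing only the old and new roots.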
Hardware selection significantly impacts proving performance and cost. Proof generation is computationally intensive, making GPU acceleration essential for production systems. High-end NVIDIA GPUs (e.g., A100, H100) are commonly used for their parallel processing of cryptographic operations. For maximum throughput, operators deploy proving clusters that distribute the workload across multiple machines. The choice between a centralized prover and a decentralized prover network involves trade-offs in latency, cost, and censorship resistance. Projects such as Aleo (with snarkOS) and Espresso Systems are building decentralized proving and sequencing markets.
Setting up the environment involves installing dependencies like Rust, CMake, and specific GPU drivers. For a zkEVM like Scroll or Polygon zkEVM, you would clone the prover repository, configure the circuit parameters (e.g., KZG ceremony files, trusted setup), and set environment variables for the witness generator and proof aggregator. A basic local test setup can be initiated with Docker, using commands like docker-compose up to run the sequencer, prover, and verifier services. The initial trusted setup ceremony is a one-time, multi-party computation that generates the necessary public parameters for your chosen proving system.
Integrating with a rollup stack requires connecting your prover to the sequencer and data availability layer. The sequencer orders transactions and outputs a witness—the data needed for proof generation. The prover reads this witness, executes the batch, and generates the proof. This proof and the new state root are then posted to the L1 rollup contract. You must configure the L1 RPC endpoint (e.g., to Ethereum Mainnet or a testnet like Sepolia) and ensure the prover's verification key is registered on-chain. Monitoring tools are needed to track proof generation times, GPU utilization, and gas costs for L1 submissions.
Optimizing performance is an ongoing process. Techniques include pipelining witness generation and proof computation, using custom circuit gates to reduce constraint count, and implementing recursive proofs to aggregate multiple rollup batches into a single verification. The choice of proof system dictates trade-offs: Groth16 has small proof sizes but requires a circuit-specific trusted setup, while STARKs have larger proofs but are post-quantum secure and don't need a trusted setup. Benchmarking against real workloads is crucial; proving a batch of 1000 simple transfers will have different requirements than proving a batch containing complex DeFi transactions.
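The benefit of pipelining witness generation and proof computation is easy to quantify with a simple latency model. This is an illustrative two-stage pipeline; real pipelines have more stages and variable per-batch times.

```python
def sequential_time(n_batches: int, witness_s: float, prove_s: float) -> float:
    """Each batch fully completes before the next starts."""
    return n_batches * (witness_s + prove_s)

def pipelined_time(n_batches: int, witness_s: float, prove_s: float) -> float:
    """Two-stage pipeline: witness generation for batch i+1 overlaps proving of batch i.
    After filling the pipeline, throughput is set by the slowest stage."""
    bottleneck = max(witness_s, prove_s)
    return witness_s + prove_s + (n_batches - 1) * bottleneck
```

With 10 batches, 5 s witness generation, and 20 s proving, the sequential schedule takes 250 s while the overlapped one takes 205 s. The steady-state rate is dictated by the slowest stage, which is why reducing constraint count in the prover (the usual bottleneck) pays off directly.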
Comparison of ZK Stack Frameworks
A technical comparison of major frameworks for building zkEVMs and zkVMs, focusing on developer experience, performance, and ecosystem maturity.
| Feature / Metric | zkSync Era (ZK Stack) | Polygon zkEVM | Starknet (Cairo) | Scroll |
|---|---|---|---|---|
| Primary Language | Solidity/Vyper | Solidity | Cairo | Solidity/Vyper |
| EVM Equivalence Level | Bytecode-level | Bytecode-level | Language-level (Cairo VM) | Bytecode-level |
| Proving System | Boojum (SNARK) | STARK with SNARK wrapper | STARK | Halo2-based SNARK |
| Time to Finality (L1) | ~1 hour | ~30-45 minutes | ~3-4 hours | ~1 hour |
| Prover Cost (est. per tx) | $0.10 - $0.30 | $0.05 - $0.20 | $0.50 - $1.50 | $0.08 - $0.25 |
| Native Account Abstraction | Yes | No | Yes | No |
| Permissionless Provers | No | No | No | No |
| Mainnet Launch Date | Mar 2023 | Mar 2023 | Nov 2021 | Oct 2023 |
Step 1: Setting Up the ZK Prover
The ZK prover is the computational engine that generates validity proofs for rollup transactions. This guide covers the initial setup using the Plonky2 framework.
A ZK prover is the core component responsible for generating cryptographic proofs that attest to the correct execution of a batch of transactions. For a ZK rollup, this proof is submitted to the L1 (e.g., Ethereum) to finalize state updates. The prover's performance directly impacts the cost and finality time of the rollup. We will use Plonky2, a SNARK implementation written in Rust, known for its fast proving times and recursive proof composition capabilities.
First, ensure your system meets the requirements. You'll need Rust (version 1.70 or later) and Cargo installed. Clone the Plonky2 repository and build the project: git clone https://github.com/mir-protocol/plonky2.git && cd plonky2 && cargo build --release. This compiles the core proving libraries. The build may take several minutes, as it compiles cryptographic dependencies such as the Goldilocks field arithmetic and the Poseidon hash implementation that Plonky2 is built on.
After building, run one of the bundled examples to verify the installation, for example: cargo run --example fibonacci (example names vary by version). Such a demo builds a circuit for a simple computation, generates a proof, and verifies it. A successful run confirms your toolchain and dependencies are correctly configured. For production, you would integrate these libraries into your own node software.
The next step is to define your own circuit logic. In Plonky2, you construct a circuit using its builder API, specifying constraints that represent your state transition function. Key objects include CircuitBuilder for defining gates and Target types for representing variables. Your circuit must accurately encode the rules of your rollup's virtual machine, such as balance checks and signature verifications. The complexity of this circuit is the primary factor in proving time.
Finally, configure the prover's parameters for your use case. This involves selecting a proof system (Plonky2 uses FRI-based SNARKs), setting the degree of the polynomial (which affects proof size and speed), and choosing a hash function for the Merkle tree commitments. You can benchmark different configurations using the cargo bench command. For a production rollup, you would likely run the prover as a separate, high-performance service that receives batches from a sequencer and outputs proofs to a verifier contract.
Step 2: Deploying the Verifier Smart Contract
This step covers the deployment of the core cryptographic verification logic on-chain, enabling your rollup to prove the validity of state transitions.
The verifier smart contract is the on-chain component that validates zero-knowledge proofs (ZKPs) submitted by your rollup's sequencer. It contains the cryptographic verification key and the logic to check proof correctness. When a new proof is submitted—typically alongside a batch of compressed transactions—the contract executes a fixed computation. If the proof verifies, the contract accepts the new state root, finalizing the batch on the base layer (like Ethereum). This is the security bedrock: invalid state transitions cannot be confirmed.
Before deployment, you must generate the verification key. This is produced by your chosen proving system, such as Groth16 or PLONK (STARK-based stacks ship their own verifier tooling). Using circom to compile your circuit (which encodes your rollup's state transition logic) and snarkjs to run the setup, you produce a verification_key.json file. This key is unique to your circuit and must be hardcoded into or initialized by your verifier contract. The contract itself is often generated by the same tooling (e.g., snarkjs zkey export solidityverifier), resulting in a Solidity or Vyper file containing the verifyProof function.
Deployment involves careful testing and configuration. First, deploy the verifier contract to a testnet (like Sepolia or Holesky). You should write and run comprehensive tests that submit valid and invalid proofs to ensure it rejects faulty batches. Key parameters to set at deployment include the allowed prover address (often a multisig or the sequencer contract) and any timelocks or challenge periods. For mainnet deployment, consider using a proxy upgrade pattern (like OpenZeppelin's TransparentUpgradeableProxy) to allow for future circuit upgrades, as changing the verification key requires a new contract.
Integration with the broader rollup architecture is critical. The verifier contract must be called by your rollup manager or sequencer contract. A typical flow is: 1) Sequencer posts batch data to data availability layer, 2) Sequencer generates a ZKP for the batch, 3) Sequencer calls verifierContract.verifyProof(proof, publicInputs), where publicInputs include the old and new state roots. Upon success, the manager contract finalizes the state update. Gas optimization is vital here, as verification can be expensive; techniques include using precompiles for elliptic curve operations and optimizing the circuit itself.
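The call flow can be mirrored in a small mock, assuming a manager that enforces access control and delegates proof checking to a verifier. The hash-binding "proof" stands in for real pairing checks, and all class and function names here are hypothetical.

```python
import hashlib

def toy_proof(public_inputs: list) -> str:
    """Stand-in for the off-chain prover: binds a proof to its public inputs."""
    return hashlib.sha256(repr(public_inputs).encode()).hexdigest()

class MockVerifier:
    """Stand-in for the generated on-chain verifier contract."""
    def verify(self, proof: str, public_inputs: list) -> bool:
        return proof == toy_proof(public_inputs)

class RollupManager:
    """Mirrors the L1 manager contract: access control, proof check, root update."""
    def __init__(self, verifier, sequencer: str, genesis_root: str):
        self.verifier = verifier
        self.sequencer = sequencer
        self.state_root = genesis_root

    def submit_batch(self, sender: str, new_root: str, proof: str) -> None:
        if sender != self.sequencer:
            raise PermissionError("unauthorized prover")
        # Public inputs are the old and new state roots, as in the flow above
        if not self.verifier.verify(proof, [self.state_root, new_root]):
            raise ValueError("invalid proof")
        self.state_root = new_root
```

Note how finalization only happens after both checks pass; a real contract would additionally verify that the batch data was made available before accepting the new root.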
Security considerations are paramount. Audit both the generated verifier code and the underlying cryptographic libraries. Use trusted setup ceremonies if your proving system requires one (like Groth16). Ensure the contract has strict access controls to prevent unauthorized proof submission. Monitor for vulnerabilities in the circuit logic, as a bug there could allow invalid proofs to verify. Resources like the ZK Security Standard and audits from firms like Trail of Bits or OpenZeppelin provide essential guidance for securing this critical component.
Step 3: Building the Sequencer and Data Availability Layer
This step details the core operational components for your rollup: the sequencer that orders transactions and the data availability layer that publishes transaction data.
The sequencer is the primary node responsible for ordering user transactions into blocks. It receives raw transactions, executes them to compute a new state root, batches them, and submits them to the base layer (L1). A basic sequencer implementation involves running a modified Ethereum client like Geth or Erigon, configured with a custom transaction pool and block-building logic that defers execution to a proving system. For a production system, you must implement fault tolerance (e.g., a leader-follower consensus among multiple sequencers) and economic security (e.g., staking and slashing) to prevent malicious behavior like transaction censorship or reordering.
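The sequencer's core batching policy (build a batch when the pool is full or the oldest transaction has waited too long) can be sketched as follows. This is a single-threaded toy with an injectable clock; real sequencers add execution, consensus, and persistence.

```python
import time
from collections import deque

class Sequencer:
    """Builds a batch when the pool is full or the oldest tx has waited too long."""
    def __init__(self, max_batch=100, max_wait_s=2.0, now=time.monotonic):
        self.pool = deque()
        self.max_batch = max_batch
        self.max_wait_s = max_wait_s
        self.now = now          # injectable clock, handy for testing
        self.oldest = None      # arrival time of the oldest pending tx

    def add_tx(self, tx) -> None:
        if not self.pool:
            self.oldest = self.now()
        self.pool.append(tx)

    def maybe_build_batch(self):
        """Return a batch if a trigger fired, else None."""
        if not self.pool:
            return None
        full = len(self.pool) >= self.max_batch
        stale = self.now() - self.oldest >= self.max_wait_s
        if not (full or stale):
            return None
        batch = [self.pool.popleft() for _ in range(min(self.max_batch, len(self.pool)))]
        self.oldest = self.now() if self.pool else None
        return batch
```

The size trigger keeps amortized L1 costs low under load, while the age trigger bounds latency when traffic is light; tuning the two is a direct cost-vs-UX trade-off.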
The data availability (DA) layer is non-negotiable for rollup security. It ensures transaction data is published and accessible so anyone can reconstruct the chain state and verify proofs. While Ethereum calldata is the canonical choice, high costs drive the need for alternatives. Solutions include EigenDA, Celestia, or Avail, which provide cheaper, dedicated data availability. Integrating a DA layer requires modifying your sequencer's batch submission logic to post data to the chosen DA network and include only a data commitment (like a Merkle root or KZG commitment) and a data availability attestation in the L1 rollup contract.
Here is a simplified code snippet showing a sequencer's core loop for batching transactions and submitting to an L1 rollup contract and an external DA layer:
```python
# Pseudocode for sequencer batch submission
batch_data = encode_transactions(pending_txs)
state_root, zk_proof = prove_batch_execution(batch_data)

# Post data to the external DA layer (e.g., Celestia)
da_submission_id = post_to_da_layer(batch_data)

# Submit the commitment and proof to L1
rollup_contract.submitBatch({
    "stateRoot": state_root,
    "daCommitment": hash(batch_data),
    "daReference": da_submission_id,
    "proof": zk_proof,
})
```
A critical design decision is the data availability sampling (DAS) scheme. With pure Ethereum calldata, data is guaranteed available. When using an external DA layer, you must ensure the rollup contract can cryptographically verify that the data is available. This often involves using KZG commitments or Validity proofs from the DA layer. Without this, users cannot challenge invalid state transitions. Furthermore, you must implement a data retrieval module in your node software so verifiers can fetch batch data from the DA layer to synchronize.
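The statistics behind data availability sampling are worth internalizing. If a publisher withholds a fraction of the erasure-coded chunks, each uniform random sample independently misses the withheld portion, so detection confidence grows exponentially with sample count. This is a simplified model that ignores erasure-coding reconstruction thresholds.

```python
import math

def undetected_withholding_prob(fraction_withheld: float, samples: int) -> float:
    """P(all `samples` uniform chunk queries land on available chunks)."""
    return (1 - fraction_withheld) ** samples

def samples_needed(fraction_withheld: float, target_confidence: float) -> int:
    """Smallest sample count that detects withholding with `target_confidence`."""
    return math.ceil(math.log(1 - target_confidence) / math.log(1 - fraction_withheld))
```

With 50% of chunks withheld (erasure coding forces an attacker to withhold a large fraction to block reconstruction), just 10 samples detect withholding with roughly 99.9% probability, which is why light nodes can verify availability cheaply.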
Finally, the sequencer and DA layer must be economically integrated. The sequencer pays fees for DA publishing and L1 settlement. A well-designed system uses a unified fee market where users pay for both execution and data costs, which the sequencer then uses to cover its operational expenses. Monitoring tools are essential to track DA layer latency, submission costs, and data availability guarantees to ensure the rollup remains secure and cost-effective for end-users.
Data Availability Layer Options
A comparison of primary data availability solutions for ZK rollups, covering security, cost, and performance trade-offs.
| Feature / Metric | Ethereum (Calldata) | Celestia | EigenDA | Avail |
|---|---|---|---|---|
| Security Model | Ethereum Consensus | Celestia Consensus | Restaked Ethereum | Polkadot / Substrate |
| Data Availability Guarantee | Highest (L1 Finality) | High (Separate Chain) | High (Restaking) | High (Separate Chain) |
| Cost per KB (Est.) | $0.50 - $2.00 | $0.01 - $0.10 | $0.02 - $0.15 | $0.03 - $0.12 |
| Throughput (MB/s) | ~0.06 | ~10 | ~10 | ~7 |
| Finality Time | ~12 minutes | ~2-6 seconds | ~12 minutes | ~20 seconds |
| Proven Mainnet Usage | Yes | Yes | Limited | Limited |
| Requires Native Token | No (ETH) | Yes (TIA) | No (paid in ETH) | Yes (AVAIL) |
| Integration Complexity | Low (Native) | Medium | Medium | Medium |
Common Issues and Troubleshooting
Addressing frequent challenges developers face when building and deploying zero-knowledge proof systems for rollups.
Slow proof generation is often the primary bottleneck in ZK rollup infrastructure. The main culprits are:
- Inefficient circuit design: Complex constraints and excessive non-deterministic witnesses increase proving time. Use profiling tools like gnark's profiler or circom's r1cs analyzer to identify hotspots.
- Suboptimal proving backend: The choice of proving system (e.g., Groth16, PLONK, STARK) and its implementation (e.g., arkworks, bellman) drastically affects performance. For example, Groth16 has fast verification but slower proving, while STARKs have faster proving but larger proof sizes.
- Hardware limitations: Proving is computationally intensive. For production, dedicated high-core-count CPUs or GPU acceleration (using frameworks like CUDA for bellman) are often necessary. A circuit that takes 2 minutes on a laptop may take 10 seconds on optimized hardware.
- Memory constraints: Large circuits can exhaust RAM, causing swapping to disk. Monitor memory usage during proving.
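To catch memory and latency hotspots early, instrument each pipeline stage. This sketch uses Python's tracemalloc as a stand-in for profiling a real witness generator; the workload here is a placeholder.

```python
import time
import tracemalloc

def profile_stage(fn, *args):
    """Measure wall time and peak Python heap usage of one pipeline stage."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(*args)
    elapsed_s = time.perf_counter() - t0
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed_s, peak_bytes

def toy_witness_gen(n_constraints: int) -> list:
    # Stand-in workload: one list entry per "constraint"
    return [i * i % 97 for i in range(n_constraints)]

witness, elapsed_s, peak_bytes = profile_stage(toy_witness_gen, 100_000)
```

Tracking peak memory per stage, not just totals, is what reveals whether a large circuit will push the prover into swap on production hardware.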
Essential Resources and Tools
Core building blocks, frameworks, and tooling required to set up zero-knowledge infrastructure for rollups, from circuit design to proving, verification, and node operation.
Prover Infrastructure and Optimization
The prover is the most resource-intensive component of a ZK rollup. It generates validity proofs for batches of L2 transactions and directly impacts throughput and costs.
Key infrastructure considerations:
- Hardware acceleration: GPUs and high-memory CPUs are commonly required for production provers
- Parallelization: Splitting circuits into multiple proving segments reduces latency
- Recursive proofs: Aggregating multiple proofs into one reduces L1 verification costs
Common patterns in production rollups:
- Separate provers per circuit type (execution, state diff, aggregation)
- Job queues and autoscaling for burst traffic
- Offloading prover workloads from the sequencer
Teams often underestimate prover costs. At scale, proving can become the dominant operational expense. Benchmarking circuit runtime early and stress testing with realistic batch sizes is critical before mainnet launch.
Verification and Onchain Integration
ZK rollups rely on onchain verifier contracts to confirm proofs and finalize state roots on Ethereum or other settlement layers.
Key components:
- Verifier smart contracts generated from the proving system
- L1 contracts that manage state roots, deposits, withdrawals, and fraud handling
- Upgradeability and key management for emergency fixes
Developer considerations:
- Gas cost per proof verification
- Finality assumptions and challenge windows (if any)
- Contract upgrade strategy without compromising trust assumptions
Examples of deployed implementations include zkSync Era and Polygon zkEVM, both of which expose their verifier logic publicly. Verifier bugs are irreversible once deployed, so formal verification and multiple audits of the onchain components are standard best practice.
Rollup Nodes, Sequencers, and Data Availability
Beyond ZK proving, rollups require a full node stack that processes transactions, orders them, and publishes data for verification.
Core components:
- Sequencer: Orders L2 transactions and produces batches for proving
- Rollup node: Re-executes transactions and verifies proofs
- Data availability layer: Ensures transaction data is accessible to verifiers
Common architectures:
- Ethereum calldata for data availability
- External DA layers such as Celestia for lower costs
- Centralized sequencer with decentralization roadmap
Operational concerns:
- Censorship resistance
- Sequencer liveness guarantees
- Safe recovery if the sequencer fails
A rollup is only as trust-minimized as its weakest component. Teams must make explicit tradeoffs between decentralization, performance, and operational complexity when designing the node and DA architecture.
Frequently Asked Questions
Common questions and solutions for developers implementing zero-knowledge proof systems for Layer 2 rollups.
What is the difference between a zkEVM and a zkVM?

A zkEVM (Zero-Knowledge Ethereum Virtual Machine) is a specialized virtual machine designed to generate proofs for EVM-compatible execution. It understands EVM opcodes directly, enabling high compatibility with existing Ethereum smart contracts. Examples include Polygon zkEVM, Scroll, and zkSync Era.
A zkVM (Zero-Knowledge Virtual Machine) is a more general-purpose proving system for arbitrary instruction sets, like RISC-V or custom WASM environments. It's not natively EVM-compatible. StarkNet's Cairo VM is a prominent example. The key trade-off is compatibility vs. performance: zkEVMs prioritize developer familiarity, while zkVMs often achieve higher proving efficiency for custom logic.
Conclusion and Next Steps
You have now configured the core components for a zero-knowledge rollup. This guide covered the essential setup, but building a production-ready system requires further steps.
The infrastructure you've deployed (sequencer, prover, and data availability layer) forms the operational backbone. The next phase involves rigorous testing and optimization. Begin by simulating high transaction loads on a local testnet fork, using Foundry's Anvil or the Hardhat Network. Monitor key metrics: proof generation time (ideally under 5 minutes for acceptable user experience), sequencer throughput (transactions per second), and the cost of posting data to your chosen DA layer (e.g., Celestia, EigenDA, or Ethereum calldata). This baseline is critical for identifying bottlenecks.
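For the proof-time metric in particular, tail latency matters more than the average: a slow 95th percentile means some users routinely wait far longer than the target. A minimal nearest-rank percentile check (the sample data is invented for illustration):

```python
import math

def percentile(samples: list, p: float):
    """Nearest-rank percentile, p in [0, 100]."""
    s = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

# Hypothetical per-batch proof times in seconds
proof_times_s = [110, 95, 240, 180, 130, 160, 310, 145, 120, 200]
p95 = percentile(proof_times_s, 95)
meets_target = p95 <= 300  # the 5-minute target mentioned above
```

Here the p95 is 310 s, so the pipeline misses the target even though the median looks healthy; alerting on percentiles rather than means surfaces exactly this failure mode.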
Security must be your primary focus before any mainnet deployment. Engage a reputable auditing firm to review your circuit logic, bridge contracts, and sequencer code. Concurrently, establish a bug bounty program on platforms like Immunefi to incentivize external researchers. For the prover, consider implementing a multi-prover system where proofs are generated by different software (e.g., one using zk-SNARKs and another using zk-STARKs) to guard against a single point of failure in the proving stack.
To evolve your rollup, explore advanced architectural patterns. Validiums, such as those built on StarkEx, keep data off-chain, trading some security for lower costs. Hybrid designs that combine fraud proofs with validity proofs can serve as a transitional security model. Investigate proof aggregation services from providers like Succinct or Ingonyama to reduce operational overhead. Adopting EIP-4844 (proto-danksharding) blob transactions is also essential, as they drastically reduce DA costs on Ethereum.
Finally, integrate with the broader ecosystem. Ensure your rollup is compatible with cross-chain messaging protocols like LayerZero, Wormhole, or the Chainlink CCIP for asset transfers. List your chain on block explorers (Blockscout), indexers (The Graph), and wallet providers (MetaMask) to improve developer and user accessibility. The journey from a functional setup to a robust, adopted network is iterative—continue to benchmark against solutions from Arbitrum, zkSync, and Polygon zkEVM to guide your development roadmap.