How to Design a Scalability Strategy with PQC and ZK-Rollups

Introduction: The PQC Scalability Challenge for ZK-Rollups

Zero-knowledge rollups (ZK-rollups) like zkSync, StarkNet, and Polygon zkEVM achieve scalability by bundling transactions and submitting a single cryptographic proof (a ZK-SNARK or ZK-STARK) to the base layer. The security and finality of the entire batch depend on the integrity of this proof. Currently, these systems use elliptic curve cryptography (ECC), such as the BN254 or BLS12-381 curves, which are vulnerable to future attacks by sufficiently powerful quantum computers. The transition to post-quantum cryptography (PQC) is therefore a security imperative, but it comes with a direct cost to performance and cost-per-transaction.
Post-quantum cryptography introduces new cryptographic primitives that are secure against quantum computers but are significantly larger and slower than their classical counterparts. This creates a fundamental tension with the scalability goals of ZK-rollups, which rely on efficient proof generation and verification.
The core challenge is that PQC algorithms, such as those selected in the NIST standardization process (e.g., CRYSTALS-Dilithium for signatures, CRYSTALS-Kyber for key encapsulation), have much larger key and signature sizes and require more computational power. For a ZK-rollup, this impacts two critical components: the prover and the verifier. The prover, which generates the validity proof, must perform more complex operations, increasing proof generation time and hardware requirements. The verifier, often a smart contract on Ethereum, must execute more expensive on-chain computations to verify the proof, increasing gas costs and potentially negating rollup scalability benefits.
Designing a scalability strategy requires a layered approach. You cannot simply swap ECC for PQC in an existing ZK-rollup stack. Instead, developers must evaluate trade-offs across the entire stack: the choice of PQC algorithm (lattice-based, hash-based, etc.), the design of the zkVM or circuit, and the data availability layer. For instance, a hash-based signature scheme like SPHINCS+ has a large signature but is considered a conservative safety net, while lattice-based schemes offer better performance but rely on newer mathematical assumptions. The strategy must balance quantum resistance, proof generation latency, on-chain verification gas cost, and trust assumptions.
A practical strategy often involves hybrid cryptography. In this model, a system uses both classical ECC and a PQC algorithm simultaneously. For example, a ZK-rollup's state transition proof could be signed with both an ECDSA signature and a Dilithium signature. This provides defense-in-depth: security against classical attacks today and protection against quantum attacks in the future. The transition can be gradual, allowing the ecosystem to mature while PQC algorithms are further optimized and audited. The eventual goal is a pure PQC system once performance overhead is reduced to an acceptable level for mass adoption.
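As a concrete (toy) illustration of the dual-signing policy, the sketch below models hybrid verification in Python. The HMAC tags and key names are stand-ins invented for this example; a real deployment would use an ECDSA library plus an ML-DSA/Dilithium implementation such as liboqs. Only the accept-only-if-both-verify policy is the point here, not the cryptography itself.

```python
import hashlib
import hmac

# Toy stand-ins: HMAC tags model the "two independent signatures" policy.
# Real systems would use ECDSA plus ML-DSA (Dilithium); keys here are
# hypothetical placeholders.
def sign_hybrid(classical_key: bytes, pqc_key: bytes, message: bytes) -> dict:
    return {
        "classical_sig": hmac.new(classical_key, message, hashlib.sha256).digest(),
        "pqc_sig": hmac.new(pqc_key, message, hashlib.sha3_256).digest(),
    }

def verify_hybrid(classical_key: bytes, pqc_key: bytes,
                  message: bytes, sigs: dict) -> bool:
    # Defense-in-depth: the state transition is valid only if BOTH verify.
    ok_classical = hmac.compare_digest(
        sigs["classical_sig"],
        hmac.new(classical_key, message, hashlib.sha256).digest())
    ok_pqc = hmac.compare_digest(
        sigs["pqc_sig"],
        hmac.new(pqc_key, message, hashlib.sha3_256).digest())
    return ok_classical and ok_pqc

state_root = b"rollup-state-transition-0x01"
sigs = sign_hybrid(b"ecdsa-key", b"dilithium-key", state_root)
assert verify_hybrid(b"ecdsa-key", b"dilithium-key", state_root, sigs)
# Breaking either scheme alone is not enough to forge a state transition:
assert not verify_hybrid(b"ecdsa-key", b"wrong-key", state_root, sigs)
```

The same envelope structure lets the rollup later drop the classical signature without changing the message format, which is what makes the transition gradual.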
Implementation requires careful benchmarking. Developers should profile their proving system with PQC primitives integrated into the arithmetic circuit. Tools like circom or Halo2 need libraries for PQC operations. The on-chain verification contract must be audited for both correctness and gas efficiency. A successful PQC scalability strategy is not a one-time migration but a continuous process of monitoring algorithmic advancements, participating in community standards like the Ethereum PQC Working Group, and planning for iterative upgrades to the rollup protocol.
Prerequisites and Core Dependencies
Before designing a system that integrates Post-Quantum Cryptography (PQC) with ZK-Rollups, you must establish a robust technical foundation. This section outlines the core concepts, tools, and architectural decisions required to begin.
A scalable strategy begins with a clear understanding of the two core technologies. Post-Quantum Cryptography (PQC) refers to cryptographic algorithms designed to be secure against attacks from both classical and quantum computers. The National Institute of Standards and Technology (NIST) is standardizing these algorithms, with frontrunners like CRYSTALS-Kyber for key encapsulation and CRYSTALS-Dilithium for digital signatures. In parallel, ZK-Rollups are Layer 2 scaling solutions that batch thousands of transactions off-chain, generate a cryptographic proof (a ZK-SNARK or ZK-STARK), and post it to the underlying Layer 1 (e.g., Ethereum). The security of the rollup's state transitions depends entirely on the soundness of this proof and the cryptographic primitives used to create it.
Your primary dependency is a deep familiarity with existing ZK-Rollup frameworks and their proving systems. For development, you will interact with frameworks like StarkWare's Cairo (for STARKs) or zkSync's zkEVM and Scroll's architecture (for SNARKs). Each has a specific toolchain, such as the Cairo compiler or Circom for circuit design, and requires knowledge of their respective virtual machines and proof systems. You must also understand the current cryptographic dependencies of these systems, which typically rely on elliptic curve cryptography (e.g., BN254, BLS12-381) that is not quantum-resistant. The strategic goal is to identify where these classical components can be replaced or augmented with PQC alternatives without breaking the proving system's logic or performance profile.
The final prerequisite is establishing a local development and testing environment capable of simulating this integration. This involves setting up a local Ethereum testnet (using Foundry's Anvil or Hardhat), deploying a rollup's smart contracts (like the verifier and bridge contracts), and running the rollup's node software or prover client. You will need to integrate PQC libraries, such as liboqs (Open Quantum Safe) or language-specific implementations, into this stack. Initial testing should focus on creating a hybrid system where non-critical components (e.g., transaction signing for user onboarding) use PQC signatures, while the core proof system remains unchanged, allowing for incremental validation and performance benchmarking against the classical baseline.
Key Concepts: PQC Overhead and ZK-Rollup Architecture
This guide explains the computational overhead introduced by Post-Quantum Cryptography (PQC) and how ZK-Rollup architecture can be designed to mitigate its impact on blockchain scalability.
Integrating Post-Quantum Cryptography (PQC) into blockchain systems introduces significant computational and bandwidth overhead. Traditional ECDSA signatures use small keys (a 32-byte private key) and produce compact 64-byte signatures. In contrast, PQC schemes need larger public keys (roughly 0.9-2.6 KB for Falcon and CRYSTALS-Dilithium) and larger signatures: about 0.7-1.3 KB for Falcon, 2.4-4.6 KB for Dilithium, and up to roughly 49 KB for hash-based SPHINCS+. This PQC overhead directly impacts transaction size, validation time, and gas costs, posing a fundamental challenge for scaling Layer 1 blockchains where every node must verify every transaction.
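To make the bandwidth gap concrete, this small sketch compares approximate per-transaction wire sizes. The 120-byte base payload is an assumption for illustration; the signature sizes are the commonly cited figures for each scheme's round-3 parameter sets.

```python
# Approximate per-transaction sizes in bytes. BASE_TX is an assumed
# illustrative payload (nonce, to, value, data); signature sizes are the
# commonly cited round-3 figures for each scheme.
BASE_TX = 120
SIG_SIZES = {
    "ECDSA": 64,
    "Falcon-512": 666,
    "Dilithium2": 2420,
    "SPHINCS+-128s": 7856,
}

ecdsa_total = BASE_TX + SIG_SIZES["ECDSA"]
for name, sig in SIG_SIZES.items():
    total = BASE_TX + sig
    print(f"{name}: {total} bytes ({total / ecdsa_total:.1f}x ECDSA baseline)")
```

A Dilithium2-signed transaction is roughly an order of magnitude larger on the wire than its ECDSA equivalent, which is exactly the data the rollup must keep off-chain.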
ZK-Rollups offer a strategic architecture to absorb this overhead. A ZK-Rollup batches hundreds or thousands of transactions off-chain, generates a single cryptographic proof (a ZK-SNARK or ZK-STARK) of their validity, and posts only that proof and minimal state data to the base layer (e.g., Ethereum). The key insight is that the PQC signature for each user transaction is verified off-chain by the rollup's prover. Only the aggregate proof, which is already large (tens to hundreds of KB), needs to be verified on-chain. The marginal cost of adding PQC verification to the off-chain proving process is minimal compared to the cost of verifying each signature individually on-chain.
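The amortization argument can be quantified with a rough model. Both gas figures below are assumptions for illustration, not measured costs; the point is only that a single proof verification is split across the whole batch.

```python
# Back-of-envelope amortization with assumed (illustrative) gas numbers:
# verifying every signature on-chain vs. one aggregate ZK proof per batch.
GAS_PER_ONCHAIN_SIG = 500_000    # assumed cost of one on-chain PQC sig check
GAS_PER_PROOF_VERIFY = 300_000   # assumed cost of verifying one ZK proof
BATCH_SIZE = 2_000

naive_gas = BATCH_SIZE * GAS_PER_ONCHAIN_SIG   # every sig checked on-chain
rollup_gas = GAS_PER_PROOF_VERIFY              # one proof covers the batch
per_tx_gas = rollup_gas / BATCH_SIZE

print(naive_gas, rollup_gas, per_tx_gas)  # per-tx share: 150 gas
```

Under these assumptions the per-transaction verification share drops from 500,000 gas to 150 gas, and the ratio improves further as the batch grows.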
Designing a ZK-Rollup for a PQC future involves specific architectural choices. The circuit for the zero-knowledge proof must include the logic for verifying PQC signatures. For example, a circuit for a rollup like zkSync or StarkNet would be extended with gates implementing the verification algorithm for Dilithium3. While this increases the proving time and complexity off-chain, it is a one-time cost per batch. The on-chain verifier contract only needs the logic to check the single ZK proof, which remains constant regardless of the number of PQC signatures in the batch. This effectively amortizes the PQC verification cost across all transactions.
A practical implementation strategy involves a hybrid approach during the transition period. Rollups can support both traditional ECDSA/secp256k1 signatures and PQC signatures within the same circuit, allowing users to migrate gradually. The state tree must account for larger public keys. Furthermore, data availability becomes more critical; while proof verification is efficient, the larger transaction data (including PQC signatures) must be made available off-chain, typically through a data availability committee or an external data layer like Celestia or EigenDA, to allow for state reconstruction and fraud challenges if needed.
The long-term scalability strategy hinges on continuous optimization of both PQC algorithms and ZK proof systems. Projects like Nova (for recursive SNARKs) and advancements in STARKs are reducing proving times. Simultaneously, NIST's ongoing PQC standardization process is focusing on optimizing schemes for performance. By leveraging ZK-Rollups, blockchain systems can front-load the PQC transition's computational burden into an off-chain, specialized proving environment, preserving the scalability and low fees for end-users while securing the network against future quantum attacks.
PQC Algorithm Overhead Comparison
Comparison of computational and storage overhead for leading NIST-standardized PQC algorithms relevant to ZK-Rollup design.
| Algorithm / Metric | CRYSTALS-Kyber (KEM) | CRYSTALS-Dilithium (Signature) | Falcon (Signature) | SPHINCS+ (Signature) |
|---|---|---|---|---|
| NIST Security Level | 1, 3, 5 | 2, 3, 5 | 1, 5 | 1, 3, 5 |
| Public Key Size (bytes) | 800 - 1,568 | 1,312 - 2,592 | 897 - 1,793 | 32 - 64 |
| Signature / Ciphertext Size (bytes) | 768 - 1,568 | 2,420 - 4,595 | 666 - 1,280 | 7,856 - 49,216 |
| Key Gen Time (relative) | 1x | 1.5x | 3x | 1x |
| Sign / Encapsulate Time (relative) | 1x | 2x | 5x | 100x+ |
| Verify / Decapsulate Time (relative) | 1x | 1x | 2x | 100x+ |
| On-chain Proof Overhead | Medium | High | Low | Very High |
| ZK-SNARK Proving Complexity | Low | Medium | High | Very High |
Proof Aggregation for Batch Verification
Combine multiple zero-knowledge proofs into a single, succinct proof to dramatically reduce on-chain verification costs and data for ZK-Rollups.
Proof aggregation is a cryptographic technique that allows a prover to combine multiple individual validity proofs into a single, aggregated proof. In the context of ZK-Rollups, this means batching proofs for hundreds or thousands of transactions. The core benefit is that the cost of verifying this single aggregated proof on the base layer (e.g., Ethereum) is significantly lower than verifying each proof individually, leading to substantial gas savings and higher throughput. This is the primary scalability lever for modern ZK-Rollups like StarkNet and zkSync Era.
The process typically involves a two-step proof system. First, a recursive proof is generated: a special proof that can verify the correctness of other proofs. By recursively composing proofs, you create a proof-of-proofs. Second, this final recursive proof is aggregated. Libraries like Plonky2 and Halo2 treat recursion and aggregation as first-class features, enabling proof systems whose on-chain verification cost stays succinct (constant or polylogarithmic in the number of batched transactions) rather than growing linearly.
Implementing aggregation requires careful circuit design. Your application's main logic is compiled into a ZK-SNARK or ZK-STARK circuit. To enable aggregation, you must design this circuit to output a small piece of public data—often a cryptographic commitment like a Merkle root—that can be efficiently verified inside another circuit. This allows an aggregator circuit to take many of these outputs and produce a single proof attesting to their collective validity, which is what gets posted on-chain.
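A minimal sketch of that commitment idea: each sub-proof exposes a small public output, and the aggregator commits to all of them with a single Merkle root. SHA-256 is used here purely for illustration; production circuits typically prefer ZK-friendly hashes such as Poseidon.

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Commit a batch of per-proof public outputs to one 32-byte root.

    A sketch of the aggregator-side commitment: the root becomes the
    single public input the on-chain verifier sees."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node if odd
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

outputs = [f"proof-output-{i}".encode() for i in range(8)]
root = merkle_root(outputs)
assert len(root) == 32   # constant-size commitment, regardless of batch size
```

However many sub-proofs the batch contains, the on-chain contract only ever handles this fixed-size root plus one aggregated proof.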
Here is a simplified conceptual outline of the aggregation flow in pseudocode:
```
// 1. Generate individual transaction proofs
tx_proofs = []
for tx in batch_transactions:
    proof = generate_proof(tx, circuit_params)
    tx_proofs.append(proof)

// 2. Aggregate proofs using a recursive verifier circuit
aggregated_proof = aggregate_proofs(tx_proofs, aggregation_circuit)

// 3. Post only the final aggregated proof to L1
l1_contract.verifyAggregatedProof(aggregated_proof)
```
The on-chain verifier contract only needs the logic to check the single aggregated_proof, not the logic for every individual transaction.
The major trade-off is prover time and complexity. Aggregation adds computational overhead for the rollup sequencer or prover node. Generating recursive proofs is more computationally intensive than creating standalone proofs. However, this cost is borne off-chain, while the on-chain verification savings are permanent. The choice of proof system (SNARKs vs. STARKs) and the specific cryptographic backend (e.g., Groth16, Plonk, FRI) will heavily influence this trade-off and the final scalability gains.
Strategy 2: Recursive Proofs with PQC Oracles
This guide explains how to design a ZK-rollup that uses recursive proofs verified by a Post-Quantum Cryptography (PQC) oracle to achieve scalable, future-proof finality.
A recursive proof is a zero-knowledge proof that verifies other proofs. In a ZK-rollup, this allows you to aggregate multiple transaction proofs into a single, succinct proof for the L1. The core challenge is that the verification of a recursive proof's final step must be executed on-chain, and this verification algorithm itself must be quantum-resistant. The most practical approach is to delegate this final verification to a specialized PQC oracle. This oracle runs a PQC signature scheme (like Dilithium or Falcon) to attest to the validity of the aggregated recursive proof.
The system architecture involves three layers. First, the rollup sequencer batches transactions and generates standard ZK-SNARK proofs (e.g., using Groth16 or PLONK). Second, a recursive proof aggregator (an off-chain prover) consumes these individual proofs and generates a single recursive proof. Third, the critical component: a PQC oracle signs a message containing the recursive proof's validity claim using a post-quantum signature. The rollup's smart contract on Ethereum or another L1 only needs to verify the PQC signature, which is a lightweight operation compared to verifying a complex ZK proof directly.
Implementing this requires careful design. The recursive prover and the PQC oracle must be logically separated to minimize trust assumptions. The oracle should be implemented as a decentralized network or a threshold signature scheme to avoid a single point of failure. The on-chain contract stores the oracle's PQC public key. When the aggregated proof is submitted, the contract checks the attached PQC signature against the stored key and the claimed proof hash. This means the L1 never processes the proof data directly; it only attests to a signed statement about the proof.
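The threshold idea can be sketched as a toy t-of-n check. The HMAC tags and key names are hypothetical stand-ins for real PQC threshold signatures, which would use a scheme-specific aggregation protocol; only the quorum logic carries over.

```python
import hashlib
import hmac

# Toy t-of-n oracle attestation. Each member "signs" (HMAC stand-in,
# hypothetical keys) the proof-validity message; the contract-side check
# accepts the new state root only with >= THRESHOLD valid attestations.
THRESHOLD = 3
member_keys = {f"oracle-{i}": f"key-{i}".encode() for i in range(5)}

def attest(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha3_256).digest()

def accept_state(message: bytes, attestations: dict) -> bool:
    valid = sum(
        1 for name, sig in attestations.items()
        if name in member_keys
        and hmac.compare_digest(sig, attest(member_keys[name], message))
    )
    return valid >= THRESHOLD

msg = hashlib.sha3_256(b"recursive_proof || new_state_root").digest()
sigs = {name: attest(key, msg) for name, key in member_keys.items()}
assert accept_state(msg, sigs)                              # 5-of-5 passes
assert not accept_state(msg, dict(list(sigs.items())[:2]))  # 2-of-5 fails
```

Raising the threshold trades liveness (more oracles must be online) for safety (more oracles must collude to attest to an invalid proof).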
Here is a simplified conceptual flow in pseudocode:
```
// 1. Off-chain: generate and aggregate proofs
batch_proof = generate_zk_proof(tx_batch);
recursive_proof = aggregate_proofs([batch_proof, previous_state]);

// 2. Off-chain: PQC oracle attests to proof validity
proof_hash = keccak256(recursive_proof);
proof_validity_message = keccak256(proof_hash, new_state_root);
pqc_signature = pqc_oracle.sign(proof_validity_message);

// 3. On-chain: L1 contract verifies only the PQC signature
function submitState(bytes calldata pqcSig, bytes32 proofHash, bytes32 newStateRoot) public {
    bytes32 messageHash = keccak256(abi.encodePacked(proofHash, newStateRoot));
    require(pqcVerify(oraclePubKey, messageHash, pqcSig), "Invalid PQC attestation");
    stateRoot = newStateRoot; // State update finalized
}
```
The primary advantage of this strategy is asymmetric scalability. The computationally expensive operations—proof generation and PQC signing—remain off-chain. The on-chain work is reduced to a single, cheap signature verification. This design also future-proofs the system; if a quantum computer breaks the rollup's underlying SNARK curve (e.g., BN254), the security of the attested state transitions remains intact as long as the PQC signature scheme holds. This decouples the progress of quantum attacks on ZK cryptography from the security of the chain's historical data.
Key considerations for deployment include oracle incentivization, the latency of PQC signing, and the choice of PQC algorithm standardized by NIST (like ML-DSA). Monitoring projects like EigenLayer for decentralized oracle designs or zkSync's Boojum upgrade for recursive proof benchmarks is essential. This strategy is not a silver bullet but a robust architectural pattern for building scalable L2s whose finality guarantees are designed to survive the advent of quantum computing.
Specialized Data Compression for PQC
Optimize post-quantum cryptography (PQC) for ZK-rollups by implementing protocol-aware compression to reduce the size of signatures and proofs.
Post-quantum cryptographic schemes like Dilithium (for signatures) and Kyber (for KEM) are essential for long-term security but introduce significant data overhead. A Dilithium2 signature is approximately 2.5 KB, compared to ~64 bytes for ECDSA. In a ZK-rollup, where every transaction's signature must be included in a batch and proven, this data bloat directly impacts proving time and calldata costs on L1. The core challenge is to maintain cryptographic security while minimizing the data that must be processed by the zkVM and posted on-chain.
Specialized compression exploits the structural properties of PQC artifacts. A lattice-based Dilithium signature consists of the components (z, c, h), where h is a hint used during verification. A naive approach sends the full tuple. An optimized strategy has the prover reconstruct the hint h during proof generation inside the zkVM, so only (z, c) must be provided as input; in Dilithium2 the hint encoding accounts for 84 of the signature's 2,420 bytes, and deriving further recomputable components shrinks the external witness data more. Similarly, for Falcon signatures, the nonce can be derived deterministically, eliminating the need to transmit it.
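The byte-level budget behind this trade-off can be checked directly. The component sizes below follow the Dilithium2 (ML-DSA-44) signature encoding (challenge hash, packed response vector z, and hint); note that the hint alone is a small slice of the total, which is why deriving additional components in-circuit matters.

```python
# Dilithium2 signature encoding, in bytes (round-3 parameter set):
C_TILDE = 32          # challenge hash c~
Z_BYTES = 4 * 576     # l = 4 polynomials, 256 coeffs packed at 18 bits each
HINT_BYTES = 80 + 4   # omega hint positions + k per-row counts

full_sig = C_TILDE + Z_BYTES + HINT_BYTES   # 2,420 bytes total
compressed = C_TILDE + Z_BYTES              # prover recomputes h in-circuit

print(full_sig, compressed, round(100 * HINT_BYTES / full_sig, 1))
```

Dropping the hint saves about 3.5% per signature; the larger wins come from compounding this with other derivable components across every transaction in the batch.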
Implementing this requires modifying the standard verification logic within your zk-circuit. Instead of a black-box call to a PQC verification library, you design a custom circuit that accepts the compressed components. For a Dilithium signature, your circuit would take z and c, then internally recompute the candidate w1 and use c to regenerate the hint h via a deterministic commitment. This shifts computational burden to the prover (inside the ZK-proof) but drastically cuts the data footprint. This approach is protocol-specific and must be rigorously audited to ensure it does not introduce new attack vectors.
The benefits are multiplicative in a rollup context. Reducing each transaction's witness footprint from 2.5 KB to 1.5 KB, for example, means a batch of 1,000 transactions saves 1 MB of data that the prover must hash and the circuit must verify. This translates to faster proof generation and lower Ethereum gas fees for data publication. This strategy is complementary to general-purpose compression (like Brotli); it is semantic compression that understands the cryptographic data format.
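Putting numbers on the batch effect: the per-signature sizes are the illustrative figures from the text, 16 gas per non-zero calldata byte is Ethereum's post-EIP-2028 price, and treating every byte as non-zero is a simplifying assumption.

```python
# Batch-level effect of shrinking per-transaction witness data.
# Sizes are the illustrative figures from the text; 16 gas/byte is the
# post-EIP-2028 calldata price for non-zero bytes (all bytes assumed
# non-zero here for simplicity).
ORIGINAL_SIG_BYTES = 2_500
COMPRESSED_SIG_BYTES = 1_500
BATCH = 1_000
GAS_PER_BYTE = 16

saved_bytes = BATCH * (ORIGINAL_SIG_BYTES - COMPRESSED_SIG_BYTES)
saved_gas = saved_bytes * GAS_PER_BYTE

print(saved_bytes, saved_gas)  # 1,000,000 bytes -> 16,000,000 gas saved
```

The savings scale linearly with batch size, so the optimization pays off most on exactly the high-throughput rollups that need it.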
To implement, start by forking the verification logic from a library like PQClean and porting it to your zkDSL (e.g., Circom, Halo2). Isolate the components that can be derived. Your circuit's public inputs should be the compressed signature and the message. Always benchmark against the full-verification circuit to quantify the reduction in constraints and witness size. This optimization is critical for making PQC practical for high-throughput, cost-sensitive applications like ZK-rollups.
Scalability Strategy Trade-off Matrix
Key trade-offs between a PQC-first approach, a ZK-Rollup-first approach, and a hybrid strategy for blockchain scalability.
| Feature / Metric | PQC-First Strategy | ZK-Rollup-First Strategy | Hybrid (PQC + ZK) Strategy |
|---|---|---|---|
| Primary Security Focus | Post-quantum cryptographic resistance | Scalability with succinct verification | Balanced quantum resistance & scalability |
| Throughput (TPS) | ~100-1,000 | ~2,000-10,000+ | ~500-5,000 |
| On-Chain Data Footprint | High (full transaction data) | Low (only validity proofs) | Medium (proofs + selective data) |
| Time to Finality | ~1-6 minutes | < 10 seconds | ~10-60 seconds |
| Development Complexity | High (novel cryptography integration) | Very High (circuit design & proving) | Extreme (both cryptographic systems) |
| Gas Cost for Users | High (base layer execution) | Very Low (L2 execution) | Medium (varies by operation) |
| Quantum Attack Resistance (NIST Level 1) | Yes | No (classical ECC) | Yes |
| Trust Assumptions | Native blockchain consensus | Cryptographic (ZK) + 1/N honest operator | Cryptographic (PQC & ZK) + 1/N honest operator |
Implementation Resources and Tools
These resources help engineers design a scalability strategy that combines post-quantum cryptography (PQC) with ZK-rollups, focusing on concrete tooling, protocol constraints, and integration paths that work on Ethereum-compatible stacks today.
ZK Circuit Design for PQC Verification
ZK-rollups rely on arithmetic circuits, so PQC verification must be expressed inside a ZK-friendly constraint system. This card focuses on how to evaluate and design circuits that verify PQC signatures efficiently.
Practical steps for circuit designers:
- Use Circom or Halo2 to benchmark PQC primitives at the constraint level.
- Favor hash-based signatures like SPHINCS+ only when recursion or batching offsets their high verification cost.
- Model constraint counts early. A single Dilithium verification can require hundreds of thousands of constraints, which affects proof time and prover hardware.
Design patterns that work in practice:
- Batch multiple PQC signature checks inside one proof.
- Verify PQC signatures offchain and prove correctness via a ZK attestation circuit.
- Use recursion to amortize PQC costs across many rollup blocks.
This resource area is critical for avoiding designs that are theoretically secure but economically impossible to deploy on a production rollup.
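A quick capacity check helps avoid exactly those undeployable designs. Both constants below are assumptions for illustration (the per-verification count reflects the "hundreds of thousands of constraints" figure above; real numbers depend heavily on the proof system and gate design).

```python
# Rough prover budget check: how many Dilithium verifications fit in one
# circuit? Constants are illustrative assumptions, not measured values.
CONSTRAINTS_PER_DILITHIUM_VERIFY = 300_000  # assumed per-signature cost
MAX_CIRCUIT_CONSTRAINTS = 2**27             # assumed prover capacity (~134M)

def max_sigs_per_proof(per_sig: int, capacity: int,
                       overhead: int = 1_000_000) -> int:
    """Signature checks that fit alongside fixed circuit overhead
    (state-tree updates, batch plumbing -- the overhead is assumed)."""
    return (capacity - overhead) // per_sig

print(max_sigs_per_proof(CONSTRAINTS_PER_DILITHIUM_VERIFY,
                         MAX_CIRCUIT_CONSTRAINTS))
```

If the resulting batch size is too small for the target throughput, that is the signal to reach for the recursion or off-chain-attestation patterns listed above.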
Implementation Walkthrough: A Hybrid Approach
This guide details a practical architecture for integrating post-quantum cryptography (PQC) with ZK-Rollups to future-proof Ethereum's Layer 2 scaling.
The core challenge is balancing quantum-resistant security with the computational efficiency required for scalable rollups. A hybrid approach uses PQC algorithms for long-term signature and key-agreement security, while relying on symmetric primitives like Keccak and SHA-256 (which remain quantum-resistant apart from Grover-type speedups) for high-performance components such as the zk-SNARK proving system. This design isolates the computationally intensive PQC operations to specific, less frequent tasks, preventing them from becoming a bottleneck for proof generation. For instance, you might use CRYSTALS-Dilithium for validator signatures on the Layer 1 settlement contract, but keep the zk-SNARK's hash function as SHA-256.
A practical implementation involves a two-tiered signature scheme within the rollup's state transition logic. User transactions can be signed with a PQC algorithm like Falcon-512. The sequencer batches these transactions and generates a zk-SNARK proof attesting to the validity of the state transition. Crucially, the proof's verification key and the sequencer's signature on the batch are secured with PQC. The heavy proving work remains in the classical domain, ensuring sub-second proof generation times essential for user experience. The NIST Post-Quantum Cryptography Standardization project provides the definitive reference for approved algorithms.
To implement this, you would modify a standard rollup circuit, such as one built with Circom or Halo2. The circuit must include a PQC signature verification component. For a Dilithium signature, the circuit would verify the mathematical relationship between the public key, message, and signature using lattice-based operations. This is more complex than an ECDSA verifier and will increase circuit size. A key optimization is to use a recursive proof that aggregates multiple PQC verifications into a single classical SNARK, amortizing the cost. Development frameworks like liboqs provide C libraries that can be integrated into zk-SNARK toolchains for prototyping.
The on-chain verifier contract on Ethereum Layer 1 must also be upgraded. Instead of a standard ECDSA ecrecover check, it will need to validate a PQC signature attesting to the validity of the zk-SNARK proof. This requires writing a Solidity or Vyper contract that implements the verification logic for your chosen PQC algorithm. Due to high gas costs, this is often implemented as a precompile or a verifier optimized with techniques like PLONK or Groth16. The contract's address becomes the new trusted entry point for submitting rollup batches, creating a quantum-resistant bridge between Layer 2 and Layer 1.
Testing and benchmarking are critical. You must profile the performance impact of PQC on both the prover (slower proof generation) and the on-chain verifier (higher gas costs). A phased rollout strategy is advisable: start with a hybrid multi-signature scheme that accepts both ECDSA and PQC signatures, then gradually increase the quorum of PQC signatures required. This allows the ecosystem to transition without breaking existing tooling. Monitoring tools should track metrics like average proof time with PQC enabled and the associated gas overhead on Layer 1 to inform protocol parameter adjustments.
Frequently Asked Questions on PQC and ZK-Rollups
Integrating Post-Quantum Cryptography (PQC) with ZK-Rollups introduces new design considerations. This FAQ addresses common technical questions and implementation challenges for developers building future-proof, scalable L2 solutions.
Why integrate PQC now, before large-scale quantum computers exist?

Integrating Post-Quantum Cryptography (PQC) is a proactive security measure, not an immediate performance requirement. While classical cryptography like ECDSA is secure today, a future quantum computer capable of breaking it would retroactively compromise the rollup's entire state and asset security. Starting integration now allows for:
- Long-term future-proofing of user funds and state transitions.
- Gradual testing and optimization of PQC algorithms (e.g., CRYSTALS-Dilithium, Falcon) within the complex ZK-SNARK/STARK proving stack.
- Avoiding a rushed, potentially insecure migration under pressure if quantum threats materialize.

The goal is to design a hybrid or agile cryptographic layer that can transition smoothly when necessary, ensuring the rollup's longevity.
Designing a Scalability Strategy with PQC and ZK-Rollups
This guide outlines a practical framework for integrating Post-Quantum Cryptography (PQC) with ZK-Rollups to build scalable, future-proof blockchain applications.
A robust scalability strategy must address both throughput and long-term security. ZK-Rollups are the leading Layer 2 solution for scaling Ethereum, bundling thousands of transactions into a single cryptographic proof. However, the cryptographic primitives they rely on—primarily elliptic curve cryptography (ECC) and SNARK/STARK systems—are potentially vulnerable to future quantum computers. Integrating Post-Quantum Cryptography (PQC) algorithms, such as those being standardized by NIST (e.g., CRYSTALS-Dilithium, CRYSTALS-Kyber), into the rollup's proof system and state transition logic is a critical step for quantum resilience. This creates a dual-layer architecture where classical cryptography handles present-day efficiency while PQC safeguards the system's future.
The integration strategy follows a phased approach. Phase 1: Assessment. Audit your current ZK-Rollup stack (e.g., using zkSync, Starknet, or a custom circuit) to identify all cryptographic dependencies: signature schemes for transaction validation, hash functions in the Merkle trees, and the zk-SNARK's trusted setup or FRI protocol. Phase 2: Hybrid Cryptography. Implement a hybrid signature scheme where transactions are signed with both an ECDSA/secp256k1 key and a PQC key (like Dilithium). The rollup's state transition function must validate both signatures. This provides cryptographic agility, allowing for a gradual transition as PQC standards mature and hardware support improves.
Phase 3: Proof System Upgrade. This is the most complex phase, involving the rollup's core zero-knowledge proof system. For SNARKs, this may mean replacing the elliptic curve pairings in the trusted setup with a quantum-resistant cryptographic group. For STARKs, which are already based on hash functions, the focus shifts to ensuring the underlying hash (like SHA-256 or Rescue) is quantum-secure or can be swapped for a PQC-secure alternative. Developers should engage with rollup framework providers and monitor projects like the QED protocol that are pioneering quantum-secure zk-proofs. Code changes here are profound, affecting the circuit compiler and verifier smart contract.
For practical implementation, consider a modular design. Keep cryptographic logic in isolated, upgradeable modules within your rollup's sequencer and verifier contracts. Use proxy patterns or Ethereum's EIP-2535 Diamonds standard for the on-chain verifier to allow for future cryptographic upgrades without a full migration. When testing, benchmark the performance impact meticulously; PQC algorithms often have larger key and signature sizes, which increase calldata costs on L1 and proof generation time on L2. A successful pilot might involve a dedicated, parallel PQC-secured rollup for high-value institutional transactions before a full network upgrade.
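For the benchmarking step, a minimal timing harness is enough to start. The two lambdas below are stand-in workloads invented for this sketch; swap in real prove() calls from your rollup stack to compare the classical-only and PQC-enabled paths.

```python
import time
from statistics import median

def bench(fn, *args, runs: int = 5) -> float:
    """Median wall-clock seconds for fn(*args): a minimal harness for
    comparing a classical-only prover path against a PQC-enabled one."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - t0)
    return median(samples)

# Stand-in workloads; replace with real prove() calls from your stack.
classical = lambda: sum(i * i for i in range(100_000))
pqc_hybrid = lambda: sum(i * i for i in range(300_000))

overhead = bench(pqc_hybrid) / bench(classical)
print(f"PQC path is ~{overhead:.1f}x slower in this toy workload")
```

Tracking this ratio per release, alongside the L1 gas overhead, gives the data needed for the protocol parameter adjustments mentioned above.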
The future development roadmap is clear. Short-term (1-2 years): Deploy hybrid signature schemes and begin community education. Medium-term (3-5 years): Collaborate with ZK-Rollup teams to integrate PQC-ready proof backends and establish new trusted setups if required. Long-term: Achieve full quantum resistance, potentially leveraging advancements in quantum-secure multi-party computation (MPC) and homomorphic encryption to further enhance privacy. By proactively designing with PQC in mind, developers can build ZK-Rollup applications that are not only scalable today but also secure against the computational paradigms of tomorrow.