Signature size explodes: PQ algorithms like Dilithium and Falcon produce signatures 10-100x larger than ECDSA, making them prohibitively expensive to transmit, store, and verify on-chain at scale.
Why Signature Aggregation Is Key to Scalable PQ Blockchains
Post-quantum signatures are 10-100x larger than ECDSA. Without aggregation, block size and validator performance collapse. This is the core scalability challenge for quantum-resistant blockchains.
Introduction
Post-quantum cryptography introduces a signature size problem that breaks current blockchain scaling models.
Aggregation is the only path: The solution is not faster hardware, but cryptographic signature aggregation, which compresses thousands of signatures into a single, verifiable proof.
Current models fail: L2 rollups like Arbitrum and Optimism rely on cheap signature verification; their cost models collapse under PQ signature bloat without aggregation.
Evidence: A single Dilithium2 signature is ~2.4KB; posting 10,000 of them adds ~24MB of calldata per block, which at 16 gas per byte costs roughly 400M gas, more than 10x Ethereum's ~30M block gas limit.
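A back-of-the-envelope sketch of that arithmetic in Python (signature sizes from the comparison table later in this piece; the 16-gas-per-nonzero-byte calldata price and ~30M gas limit are current Ethereum mainnet parameters):

```python
# Back-of-the-envelope calldata cost for posting raw Dilithium2 signatures on Ethereum.
DILITHIUM2_SIG_BYTES = 2_420      # NIST round-3 Dilithium2 signature size
ECDSA_SIG_BYTES = 65              # secp256k1 (r, s, v)
CALLDATA_GAS_PER_BYTE = 16        # worst case: every byte nonzero
BLOCK_GAS_LIMIT = 30_000_000      # Ethereum mainnet, approximate

def calldata_gas(sig_bytes: int, num_sigs: int) -> int:
    return sig_bytes * num_sigs * CALLDATA_GAS_PER_BYTE

for n in (1_000, 10_000):
    gas = calldata_gas(DILITHIUM2_SIG_BYTES, n)
    print(f"{n:>6} Dilithium2 sigs: {gas/1e6:,.0f}M gas "
          f"({gas / BLOCK_GAS_LIMIT:.1f}x the block gas limit)")
# 10,000 sigs -> ~387M gas, ~13x the ~30M block limit; the same 10,000
# ECDSA signatures would cost only ~10M gas of calldata.
```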
Executive Summary: The PQ Scalability Trilemma
Post-quantum (PQ) cryptography introduces a fundamental bottleneck: signature sizes are 10-100x larger than ECDSA, forcing a brutal trade-off between security, throughput, and decentralization.
The Problem: PQ Signatures Break Block Propagation
A single Dilithium signature is ~2.4KB vs. ECDSA's ~65 bytes. Including thousands per block creates propagation latency that cripples decentralization, as seen in early tests on Ethereum and Solana forks; the sketch after this list quantifies the bandwidth hit.
- Block size inflation: A 1MB block becomes 10MB+.
- Network chokepoint: Slower sync for validators and RPC nodes.
- Centralization pressure: Only nodes with high bandwidth can participate.
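A rough model of the propagation hit, assuming a gossip network where each hop retransmits the full block; the bandwidth figures are illustrative assumptions, not measurements:

```python
# Rough block-propagation model: time to push one block over one gossip hop.
def hop_seconds(block_mb: float, bandwidth_mbps: float) -> float:
    return block_mb * 8 / bandwidth_mbps  # MB -> megabits / (megabits per second)

SIGS_PER_BLOCK = 5_000
classical_mb = 1.0                                           # baseline 1MB block
pq_mb = classical_mb + SIGS_PER_BLOCK * (2_420 - 65) / 1e6   # swap ECDSA for Dilithium2

for label, size in (("classical", classical_mb), ("post-quantum", pq_mb)):
    for bw in (50, 500):  # assumed home node vs datacenter node, Mbps
        print(f"{label:>13} block {size:5.1f}MB @ {bw:>3}Mbps: "
              f"{hop_seconds(size, bw):5.2f}s per gossip hop")
# The ~12.8MB PQ block takes ~2s per hop at 50Mbps - a large slice of a
# 12-second slot once multiplied across several gossip hops.
```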
The Solution: BLS-Based Aggregation
BLS signatures allow non-interactive aggregation of N signatures into a single, constant-sized (~96-byte) proof, a technique used by Ethereum's consensus layer and Aptos. BLS itself is pairing-based and not quantum-resistant, but it is the architectural template every PQ scheme must replicate (see the sketch after this list).
- Constant verification cost: Verify 10 or 10,000 signatures on a common message with a single pairing check.
- Native multi-signature support: Enables efficient MPC and threshold schemes.
- Foundation for rollups: Critical for scaling ZK-proof systems like StarkNet and zkSync.
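To make the pattern concrete, here is a minimal aggregation sketch using the py_ecc library (the same primitives Ethereum's consensus specs are written against). It illustrates the classical template only, not a PQ scheme:

```python
# Minimal BLS aggregation sketch using py_ecc (pip install py_ecc).
# BLS is pairing-based and NOT post-quantum; this only demonstrates the
# aggregation pattern a PQ scheme would need to replicate.
from py_ecc.bls import G2ProofOfPossession as bls

message = b"\x12" * 32              # e.g., a block root every validator signs
secret_keys = list(range(1, 11))    # 10 toy keys; never derive real keys this way
public_keys = [bls.SkToPk(sk) for sk in secret_keys]
signatures = [bls.Sign(sk, message) for sk in secret_keys]

# All signatures collapse into a single 96-byte G2 point...
aggregate = bls.Aggregate(signatures)
assert len(aggregate) == 96

# ...which verifies against every public key in one pairing check.
assert bls.FastAggregateVerify(public_keys, message, aggregate)
print(f"verified {len(signatures)} signatures via one {len(aggregate)}-byte aggregate")
```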
The Trade-Off: New Trust Assumptions
Naive designs rely on a designated aggregator, creating a single point of failure for liveness and censorship (soundness is preserved, since a bad aggregate simply fails verification). Firms like Supranational and Ingonyama are building hardware acceleration to keep aggregation fast, while FROST and other threshold schemes aim for decentralized aggregation.
- Trusted Execution Environment (TEE) reliance: Intel SGX or AMD SEV as a bridge solution.
- Cryptographic innovation needed: Progress in SNARKs and VDFs for trustless aggregation.
- Hardware acceleration: Essential for practical latency (<100ms) at scale.
The Benchmark: Ethereum's Roadmap
Ethereum's PBS + Danksharding architecture is the canonical case study. It uses BLS aggregation to scale to ~1.3MB per slot for data blobs, directly informing PQ blockchain design. The path is clear: separate execution from consensus and aggregate aggressively.
- Proposer-Builder Separation (PBS): Decouples block building from proposing.
- Data Availability Sampling (DAS): Ensures data is published without downloading full blocks (the sketch after this list shows why a handful of samples suffices).
- Modular stack imperative: Forces specialization, as seen in Celestia and EigenLayer.
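Why does sampling work? If a block producer withholds enough data to make the block unrecoverable, each random sample catches them with probability at least 1/2, so confidence grows exponentially with the sample count. A minimal sketch of that arithmetic, assuming the standard 2x erasure-coding model:

```python
# DAS confidence: probability that k random samples all succeed even though
# the producer withheld enough data to make the block unrecoverable.
# With 2x erasure coding, unrecoverable means >= 50% of chunks withheld,
# so each independent sample fails with probability >= 0.5.
def false_availability_prob(k_samples: int) -> float:
    return 0.5 ** k_samples

for k in (10, 20, 30):
    print(f"{k} samples: fooled with prob <= {false_availability_prob(k):.2e}")
# 30 samples already push the cheating probability below one in a billion,
# which is why light nodes never need to download the full block.
```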
The Competitor: SNARK/STARK Compression
An alternative to classical aggregation: wrap all transactions in a single ZK-SNARK or ZK-STARK proof. Used by zkRollups and projects like Mina Protocol. This trades signature verification for proof verification; hash-based STARKs are plausibly PQ-resistant, though pairing-based SNARKs are not.
- Ultimate compression: Entire block state transition in one proof.
- Heavy upfront cost: Prover time and hardware are significant bottlenecks.
- EVM compatibility hurdles: Tools like Risc Zero and SP1 are bridging this gap.
The Bottom Line: Aggregation Is Infrastructure
Signature aggregation is not a feature—it's the new base layer. Protocols that treat it as an afterthought will fail. The winning stack will integrate aggregation at the VM level (like Move), leverage dedicated coprocessors, and adopt a modular data availability layer.
- VC takeaway: Back teams with deep crypto, not web2, backgrounds.
- Architect's mandate: Design for aggregation from day one.
- Market gap: A generalized aggregation network is the next LayerZero or Axelar.
The Math of the Bloat: Why Aggregation Isn't Optional
Post-quantum signature sizes create a fundamental data bottleneck that signature aggregation directly solves.
Post-quantum signatures are massive. A single Dilithium2 signature is ~2.4KB, dwarfing a ~65-byte ECDSA signature. This ~37x inflation makes block propagation and state growth untenable for high-throughput chains like Solana or Arbitrum.
Aggregation is a data compression primitive. Protocols like BLS signature aggregation or SNARK-based proof systems (e.g., zkSync's Boojum) compress thousands of signatures into a single, constant-sized verification object. This transforms O(n) growth into O(1).
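A minimal sketch of the O(n) vs O(1) payload growth the paragraph above describes; the 200-byte aggregate size is an assumption borrowed from the comparison table below:

```python
# Signature payload growth: naive per-tx signatures vs one aggregated proof.
DILITHIUM2_SIG_BYTES = 2_420
AGGREGATE_PROOF_BYTES = 200   # assumed constant-size aggregate (see table below)

for n in (100, 1_000, 10_000, 100_000):
    naive = n * DILITHIUM2_SIG_BYTES
    print(f"n={n:>7}: naive {naive/1e6:8.2f}MB  vs  aggregated {AGGREGATE_PROOF_BYTES}B")
# Naive payload grows linearly without bound; the aggregate stays constant.
# That is the O(n) -> O(1) transformation described above.
```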
The alternative is economic failure. Without aggregation, a network's useful throughput is capped by its gossip layer's ability to transmit megabyte-sized blocks. This creates a direct trade-off between security (quantum-resistance) and scalability that only aggregation resolves.
Evidence: StarkWare's SHARP prover aggregates Cairo program executions. A single STARK proof can verify a batch of ~1M transactions, making the per-transaction verification cost negligible and enabling the scale of dYdX.
Signature Overhead: The Cold, Hard Numbers
Comparing the transaction size and verification cost overhead of classical, standalone post-quantum, and aggregated post-quantum signatures.
| Metric | Classical ECDSA (Baseline) | Standalone Dilithium (PQ) | BLS-SNARK Aggregation |
|---|---|---|---|
| Signature Size per TX | ~65 bytes | ~2,420 bytes | ~200 bytes (for 1k TXs) |
| Bandwidth Overhead vs Baseline | 1x | ~37x | ~0.03x per TX |
| On-Chain Verification Gas Cost | 21k gas | — | ~500k gas (for batch) |
| Supports Native Aggregation | No | No | Yes |
| Quantum Security (NIST Level) | None | Level 2 | Level 2 |
| Time to Verify 10k Signatures | < 1 sec | — | < 2 sec |
| Implementation Complexity | Low | High | Very High |
The Aggregation Trade-Off: Not a Free Lunch
Signature aggregation is the non-negotiable scaling primitive for post-quantum blockchains, but its implementation demands a fundamental architectural trade-off.
Post-quantum signatures are massive. A single Dilithium2 signature is ~2.4KB, dwarfing ECDSA's ~65 bytes. Without aggregation, a 10,000-validator consensus message becomes ~25MB, making networks like Cosmos or Ethereum untenable.
Aggregation compresses, not eliminates, cost. Classical schemes (BLS signatures over pairing-friendly curves such as BLS12-381 or BN254) merge signatures into a single proof, but the verification workload shifts to provers. This creates a new bottleneck at the aggregation layer, trading network bandwidth for compute.
The trade-off is latency for throughput. Real-time aggregation for fast chains like Solana is impractical; signatures must be batched. This introduces a deterministic delay, a core design constraint for any PQ L1 or L2 like Arbitrum.
Evidence: StarkWare's experiments show verifying 1,024 Dilithium signatures natively takes ~1.5 seconds on a server. Aggregation reduces this to ~10ms, but the prover time grows linearly with the batch size, defining the system's scalability ceiling.
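A toy model of that ceiling, taking the text's assumptions (flat ~10ms aggregate verification, prover time linear in batch size); the per-signature proving cost below is an illustrative assumption, not a benchmark:

```python
# Toy latency/throughput model for batched aggregation.
# Assumptions (illustrative): linear proving at ~5ms per signature,
# flat ~10ms verification of the aggregate regardless of batch size.
PROVE_MS_PER_SIG = 5.0   # assumed per-signature proving cost
VERIFY_MS_FLAT = 10.0    # constant-time aggregate verification

for batch in (128, 1_024, 8_192):
    prove_ms = batch * PROVE_MS_PER_SIG
    total_s = (prove_ms + VERIFY_MS_FLAT) / 1_000
    print(f"batch {batch:>5}: prove {prove_ms/1_000:5.2f}s, "
          f"throughput {batch/total_s:7.0f} sigs/s, added latency {total_s:.2f}s")
# Throughput saturates near 1/prove-cost (~200 sigs/s here) while per-signature
# latency keeps growing with batch size: the prover, not the verifier, sets the
# scalability ceiling described above.
```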
Who's Building? Aggregation in the Wild
Signature aggregation is moving from academic papers to live infrastructure, driven by teams solving concrete scalability bottlenecks.
The Problem: Post-Quantum Signatures Are Huge
A single Dilithium signature is ~2.5KB, so a 10,000-signature block carries ~25MB of pure signature data. This breaks existing blockchain gossip and consensus models.
- Network Overhead: Gossiping a 25MB block every ~12 seconds is impossible for most nodes.
- Storage Bloat: Chain state growth becomes dominated by security metadata, not application logic.
The Solution: BLS Aggregation for Rollups
Projects like EigenLayer and zkSync use BLS signature aggregation today to batch thousands of validator attestations into a single ~96-byte proof. This is the blueprint for PQ migration.
- State of the Art: Aggregates n signatures into a single constant-sized (O(1)) proof.
- Proven Scale: Enables 100,000+ validator sets without bloating L1 consensus.
The Bridge: Aggregation Layers (Like Sui's Narwhal)
Sui's Narwhal mempool separates transaction dissemination from consensus, making it a natural fit for aggregating signatures before they hit the critical path. This architecture is a precursor to PQ-ready systems.
- Decoupled Design: Compute-intensive aggregation happens off-critical-path.
- Throughput: Enables 120,000 TPS in benchmarks by minimizing consensus payload.
The Future: SNARKs of Aggregated Signatures
The endgame combines aggregation with succinct proofs. A zkSNARK can verify an entire aggregated signature batch in a constant-sized proof, compressing verification logic itself. Teams like Nil Foundation are pioneering this.
- Double Compression: Aggregates signatures, then proves correctness with a SNARK.
- L1 Finality: Enables trust-minimized bridging of PQ-secured chains.
The Path Forward: Hybrid Schemes and Modular Aggregation
Post-quantum blockchains require hybrid signature schemes and modular aggregation layers to achieve scalability without sacrificing security.
Hybrid signatures are mandatory. Pure lattice-based signatures like Dilithium inflate signature data by 10-100x, a crushing overhead for consensus. The only viable path combines a fast classical signature (Ed25519) with a quantum-resistant component, creating a dual-proof system that keeps verification fast today while hedging against a quantum break.
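A minimal sketch of the dual-proof idea, using the cryptography package for the Ed25519 half; the PQ half is a hypothetical placeholder interface (a real deployment would slot in a Dilithium binding such as liboqs):

```python
# Hybrid (classical + PQ) signature sketch. Ed25519 is real (pip install cryptography);
# the PQ half is a HYPOTHETICAL placeholder standing in for a Dilithium binding.
from dataclasses import dataclass
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

@dataclass
class HybridSignature:
    classical: bytes  # 64-byte Ed25519 signature
    pq: bytes         # placeholder for the ~2.4KB Dilithium signature

def pq_sign(message: bytes) -> bytes:               # placeholder, NOT real crypto
    return b"\x00" * 2_420

def pq_verify(message: bytes, sig: bytes) -> bool:  # placeholder, NOT real crypto
    return len(sig) == 2_420

def hybrid_sign(sk: Ed25519PrivateKey, message: bytes) -> HybridSignature:
    return HybridSignature(classical=sk.sign(message), pq=pq_sign(message))

def hybrid_verify(pk: Ed25519PublicKey, message: bytes, sig: HybridSignature) -> bool:
    # Valid only if BOTH proofs check out: forging requires breaking both
    # schemes, so unforgeability holds while either one remains secure.
    try:
        pk.verify(sig.classical, message)
    except InvalidSignature:
        return False
    return pq_verify(message, sig.pq)

sk = Ed25519PrivateKey.generate()
msg = b"transfer 10 tokens"
assert hybrid_verify(sk.public_key(), msg, hybrid_sign(sk, msg))
```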
Aggregation moves off-chain. On-chain verification of PQ signatures remains prohibitive. The solution is a modular aggregation layer, similar to EigenLayer for restaking, where specialized provers batch thousands of signatures into a single proof for the base chain.
This mirrors L2 scaling patterns. Just as rollups move execution off-chain, signature aggregation moves verification off-chain. Projects like Succinct Labs and Avail are building generalized proof aggregation layers that will become critical infrastructure.
The end-state is protocol abstraction. Developers will call a verifySignature function; the underlying system will dynamically route to the most cost-effective hybrid scheme and aggregator network, abstracting cryptographic complexity entirely.
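A sketch of what that abstraction could look like; the scheme names and routing thresholds are hypothetical stand-ins for the dynamic cost model the text describes:

```python
# HYPOTHETICAL verifySignature abstraction: callers never name a scheme;
# a routing table (standing in for a dynamic cost model) picks the verifier.
from typing import Callable, Dict

Verifier = Callable[[bytes, bytes, bytes], bool]  # (pubkey, message, sig) -> ok

def _stub_verifier(pk: bytes, message: bytes, sig: bytes) -> bool:
    return True  # placeholder; a real backend would do actual verification

VERIFIERS: Dict[str, Verifier] = {
    "hybrid-ed25519-dilithium": _stub_verifier,  # hypothetical scheme names
    "bls-aggregate": _stub_verifier,
    "stark-batch": _stub_verifier,
}

def cheapest_scheme(batch_size: int) -> str:
    # Stand-in cost model: single signatures verify natively; large batches
    # get routed to an aggregator or prover network.
    if batch_size == 1:
        return "hybrid-ed25519-dilithium"
    return "bls-aggregate" if batch_size < 1_000 else "stark-batch"

def verify_signature(pk: bytes, message: bytes, sig: bytes, batch_size: int = 1) -> bool:
    return VERIFIERS[cheapest_scheme(batch_size)](pk, message, sig)

assert verify_signature(b"pk", b"msg", b"sig")  # routes to the hybrid verifier
```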
TL;DR for the Time-Poor Architect
Classic BLS or ECDSA signatures will be quantum-broken, rendering today's consensus signatures and pairing-based rollup proofs insecure. Aggregation is the only viable path to scale their post-quantum replacements.
The Problem: Post-Quantum Signatures Are Huge
A single Dilithium signature is ~2KB vs. BLS's 96 bytes. For a 1000-validator consensus round, that's ~2MB of bloat per block, destroying throughput and node sync times.
- Network Overhead: 10-100x more data per attestation.
- State Growth: Unmanageable signature storage in light clients.
The Solution: Aggregate, Then Verify
High-performance libraries like Supranational's blst show how cheap classical BLS aggregation can be; the PQ analogue must compress thousands of signatures into a single, constant-sized proof. This mirrors the scaling playbook of zk-rollups (StarkNet, zkSync) for execution.
- Scalability: O(1) verification complexity.
- Composability: Enables PQ-secured light clients and bridges.
The Trade-off: Centralization & Liveness
Aggregation introduces a single point of failure: the aggregator. If it's offline, the chain halts. This is the core liveness-security tradeoff that protocols like EigenLayer and Babylon are solving for with decentralized sequencing.
- Risk: Malicious aggregator can censor.
- Mitigation: Distributed key generation (DKG) and slashing.
The Blueprint: Look at Ethereum's Roadmap
Ethereum's PBS (Proposer-Builder Separation) and Danksharding are predicated on efficient BLS aggregation. The post-quantum transition (likely to STARKs or lattice-based schemes) will follow the same architectural pattern but with new crypto.
- Precedent: EIP-4844 (blobs) for data scaling.
- Future: PQ-VDFs for leader election.
The Competitor: SNARKs/STARKs as Aggregators
Why aggregate signatures when you can prove them? A single zk-SNARK (e.g., Plonky2) can verify a batch of millions of PQ signatures off-chain, submitting a ~45KB proof on-chain. This is the ultimate form of aggregation, used by zk-rollups and Polygon zkEVM.
- Finality: Cryptographic, not economic.
- Cost: High prover compute, but fixed on-chain cost.
The Bottom Line: It's About Cost Curves
The winning PQ stack will be determined by amortized verification cost. Aggregation flattens the cost curve from O(n) to O(1). This isn't optional: it's the difference between a 1,000 TPS chain and a 10 TPS chain once quantum-safe signatures land. Your architecture must treat signature aggregation as a first-class primitive (the sketch after this list runs the numbers).
- Metric: Gas per signature in a batch.
- Target: Sub-cent verification for mass adoption.
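A minimal sketch of that cost curve, assuming the batch-verification gas figure from the table earlier (~500k gas per aggregate) plus an illustrative gas price and ETH price; all three numbers are assumptions chosen to show the shape, not measurements:

```python
# Amortized verification cost per signature as batch size grows.
BATCH_VERIFY_GAS = 500_000        # assumed flat cost per aggregate (see table above)
GAS_PRICE_GWEI = 20               # illustrative
ETH_USD = 3_000                   # illustrative

def usd_per_sig(batch_size: int) -> float:
    gas_per_sig = BATCH_VERIFY_GAS / batch_size
    eth = gas_per_sig * GAS_PRICE_GWEI * 1e-9
    return eth * ETH_USD

for n in (1, 100, 1_000, 10_000):
    print(f"batch {n:>6}: {BATCH_VERIFY_GAS/n:>9,.0f} gas/sig, ${usd_per_sig(n):.4f}/sig")
# The flat aggregate cost amortizes to sub-cent per signature around 10^4-size
# batches - the O(n) -> O(1) flattening of the cost curve described above.
```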