
How to Benchmark Performance of PQC Algorithms on Your Chain

A developer guide for empirically testing post-quantum cryptography algorithms. Measure transaction throughput, block propagation latency, and hardware resource consumption to inform algorithm selection and scaling.
Chainscore © 2026
INTRODUCTION

A practical guide to evaluating the computational and storage overhead of post-quantum cryptographic algorithms for blockchain applications.

Benchmarking Post-Quantum Cryptography (PQC) algorithms is a critical step for any blockchain project preparing for the quantum computing era. Unlike theoretical comparisons, on-chain benchmarking measures real-world performance metrics like transaction processing speed, signature verification time, and block size inflation. This process helps developers make informed decisions about which standardized algorithms, such as CRYSTALS-Dilithium for signatures or CRYSTALS-Kyber for key encapsulation, are viable for their specific consensus mechanism and network constraints.

To begin benchmarking, you must first integrate a PQC library into your node software. For Rust-based chains, the PQClean or liboqs libraries offer production-ready implementations. A basic integration involves replacing your existing signing function (e.g., ed25519) with a PQC alternative. The core metrics to track are: CPU cycles per operation, memory footprint, and the resulting serialized signature or key size. These directly translate to validator hardware requirements and network throughput.
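The core measurement loop is simple enough to sketch. The snippet below times an arbitrary signing function and records the serialized signature size; the HMAC signer is a stand-in purely so the harness runs — in a real benchmark you would swap in the sign call from your PQC binding (e.g., a Dilithium signer from liboqs).

```python
import hashlib
import hmac
import time

def benchmark_sign(sign_fn, message: bytes, iterations: int = 1000):
    """Time a signing function and record its serialized signature size."""
    sig = sign_fn(message)  # warm-up call so first-call overhead doesn't skew the loop
    start = time.perf_counter()
    for _ in range(iterations):
        sig = sign_fn(message)
    elapsed = time.perf_counter() - start
    return {
        "avg_us_per_op": (elapsed / iterations) * 1e6,
        "signature_bytes": len(sig),
    }

# Stand-in signer (an HMAC) used only so this sketch is runnable;
# replace with your PQC library's sign() in a real benchmark.
key = b"\x01" * 32
def stand_in_sign(msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

result = benchmark_sign(stand_in_sign, b"transaction-payload")
```

The same harness shape applies to key generation and verification: time the isolated operation, then record the size of whatever artifact it produces.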

Effective benchmarking requires a controlled test environment that mirrors mainnet conditions. Use a local testnet or a dedicated benchmarking pallet if using Substrate. Measure performance under load by simulating high transaction volumes. For example, compare the time to verify 10,000 EdDSA signatures versus 10,000 Dilithium3 signatures. Tools like hyperfine for timing and Valgrind for memory profiling are essential. Document the average, p95, and p99 latency percentiles to understand worst-case scenarios.
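Computing those percentiles is easy to get subtly wrong; a minimal nearest-rank implementation over collected latency samples might look like this:

```python
import statistics

def latency_summary(samples_ms):
    """Average, p95, and p99 (nearest-rank) from per-operation latencies in ms."""
    ordered = sorted(samples_ms)
    n = len(ordered)
    def pct(p):
        rank = -(-n * p // 100)  # ceil(n * p / 100), 1-indexed rank
        return ordered[rank - 1]
    return {"avg": statistics.fmean(ordered), "p95": pct(95), "p99": pct(99)}

summary = latency_summary(list(range(1, 101)))  # samples of 1..100 ms
```

Feed it the raw per-operation timings from your verification loop; the gap between the average and the p99 is usually where consensus-level surprises hide.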

The results will reveal trade-offs. Lattice-based schemes like Dilithium have larger signatures (~2-4KB) but fast verification, which may be acceptable for high-throughput chains. Hash-based signatures (e.g., SPHINCS+) have tiny public keys but slower signing and larger signatures, potentially suited for infrequent, high-value transactions. Your chain's architecture—whether it's a high-TPS L2 or a sovereign settlement layer—will determine which trade-off is optimal.

Finally, publish your benchmark methodology and results transparently. This builds trust and contributes to the broader ecosystem. Share raw data on execution time per CPU core, RAM usage peaks, and bandwidth overhead. This empirical data is more valuable for decision-making than theoretical claims. The goal is not to find a 'best' algorithm, but the most appropriate one for your chain's security requirements and performance envelope.

PREREQUISITES AND SETUP

This guide outlines the essential tools and initial configuration needed to accurately measure the computational overhead of Post-Quantum Cryptography (PQC) algorithms on your blockchain node.

Before running any benchmarks, you must establish a controlled testing environment. Start by setting up a dedicated test node or validator instance separate from your mainnet or production network. This prevents any performance degradation or security risks. Ensure your system meets the hardware requirements for PQC, which are more demanding than classical ECDSA or EdDSA. Key prerequisites include a modern multi-core CPU (Intel Xeon or AMD EPYC recommended), at least 16GB of RAM, and a stable operating system like Ubuntu 22.04 LTS. You will also need git, make, gcc, and cmake installed for compiling cryptographic libraries.

The core of your setup involves integrating a PQC library with your chain's client software. For most blockchain clients built in Go, Rust, or C++, you will link against a library like liboqs (Open Quantum Safe) or PQClean. First, clone the library repository and build it for your architecture. For example, to build liboqs for a Go-based chain, you would run git clone https://github.com/open-quantum-safe/liboqs.git and follow the CMake build instructions. The critical step is to modify your chain's cryptographic provider or signing module to call the PQC algorithms (e.g., Dilithium, Falcon, or SPHINCS+) instead of the standard digital signature algorithms.

Next, you need to instrument your node for measurement. Install profiling tools specific to your client's language. For Go clients, use the built-in pprof and benchmarking tools. For Rust, use criterion or iai for precise cycle counts. You must write specific benchmark tests that isolate the cryptographic operations you want to measure: key generation, signing, and verification. These tests should be integrated into your client's existing test suite. A common practice is to create a new benchmark file, e.g., bench_pqc_signing.go, that imports the PQC library and uses the testing.B object to run the operation thousands of times to get a stable average.

Finally, configure your environment for consistent results. Disable any unnecessary background processes and consider using cpupower to set a fixed CPU frequency to avoid dynamic scaling skewing your results. Set up logging to output detailed timing data, typically in milliseconds or microseconds per operation. You are now ready to execute the benchmarks, collect the raw performance data on latency and throughput, and analyze how PQC algorithms impact your chain's block validation time and overall consensus performance.

POST-QUANTUM CRYPTOGRAPHY

Key Performance Metrics to Measure

Benchmarking PQC algorithms requires measuring specific computational and network overheads. These metrics determine real-world viability for blockchain consensus, transaction signing, and state validation.

Key and Signature Sizes

PQC algorithms have larger keys and signatures than classical ECC. Track the public key size, private key size, and signature size in bytes. For instance:

  • SPHINCS+: ~8-17KB signature, ~32-byte public key.
  • Dilithium: ~2-4KB signature, ~1-2KB public key.

These sizes increase on-chain storage requirements and the data payload for every transaction, affecting gas costs and block space efficiency.
CPU & Memory Utilization

Profile the CPU cycles and RAM consumption during key generation, signing, and verification operations. Use profiling tools like perf or valgrind to identify bottlenecks. High memory usage (e.g., >50MB for certain lattice-based schemes) can be prohibitive for resource-constrained validators or light clients, impacting network decentralization.

Network Propagation Delay

Measure the impact of larger PQC transaction sizes on gossip protocol performance. Benchmark the time for a signed transaction or block to propagate to 95% of network nodes. A signature size increase from 64 bytes (ECDSA) to 2KB (Dilithium) can add 10-100ms of propagation delay, which affects consensus finality time, especially in high-latency environments.
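A back-of-envelope model makes the effect concrete. The sketch below estimates only the extra per-hop serialization delay from larger signatures — it ignores gossip fan-out, compression, and queuing, so real-world delays will differ, and the input figures are illustrative:

```python
def extra_propagation_ms(txs_per_block: int, old_sig_bytes: int,
                         new_sig_bytes: int, link_mbps: float) -> float:
    """Added per-hop serialization delay (ms) from larger signatures."""
    extra_bytes = txs_per_block * (new_sig_bytes - old_sig_bytes)
    bytes_per_ms = link_mbps * 1_000_000 / 8 / 1000  # link capacity in bytes/ms
    return extra_bytes / bytes_per_ms

# 2,000 txs/block, 64 B ECDSA -> ~2,420 B Dilithium2, 100 Mbit/s link
delay_ms = extra_propagation_ms(2000, 64, 2420, 100)
```

Even this crude model shows why signature size, not just verification speed, can dominate the consensus-level cost on bandwidth-constrained links.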

Gas Cost Overhead

For EVM-compatible chains, translate computational and storage overhead into gas cost. Deploy test contracts that perform PQC operations and measure the gasUsed. A verification that costs 200k gas versus 3k gas for ECDSA makes frequent operations economically non-viable. This metric is essential for fee market design and dApp economics.
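To see why that matters economically, translate gas into fees. The gas price and ETH price below are illustrative assumptions, not measurements:

```python
def verification_fee_usd(gas_used: int, gas_price_gwei: float, eth_usd: float) -> float:
    """USD fee for one on-chain signature verification."""
    return gas_used * gas_price_gwei * 1e-9 * eth_usd

ecdsa_fee = verification_fee_usd(3_000, 20, 3_000)    # classical baseline
pqc_fee = verification_fee_usd(200_000, 20, 3_000)    # hypothetical PQC verify cost
multiplier = pqc_fee / ecdsa_fee
```

A ~67x fee multiplier on every verification is the kind of number that turns a cryptographic decision into a protocol-economics decision.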

NIST-SELECTED ALGORITHMS FOR STANDARDIZATION

PQC Algorithm Candidates and Characteristics

Comparison of the primary post-quantum cryptographic algorithms selected for standardization by NIST, focusing on core attributes relevant to blockchain integration.

| Algorithm / Metric | Kyber (KEM) | Dilithium (Signature) | Falcon (Signature) | SPHINCS+ (Signature) |
| --- | --- | --- | --- | --- |
| Underlying Mathematical Problem | Module Learning with Errors (MLWE) | Module Learning with Errors (MLWE) | NTRU Lattices | Hash-Based Functions |
| NIST Security Level | 1, 3, 5 | 2, 3, 5 | 1, 5 | 1, 3, 5 |
| Public Key Size (approx.) | 800-1,600 bytes | 1,300-2,600 bytes | 900-1,800 bytes | 16-64 bytes |
| Signature Size (approx.) | N/A (KEM) | 2,400-4,600 bytes | 600-1,300 bytes | 8,000-50,000 bytes |
| Key Generation Speed | Fast (< 10 ms) | Fast (< 10 ms) | Slow (100-500 ms) | Fast (< 10 ms) |
| Signature Verification Speed | N/A (KEM) | Fast (< 1 ms) | Very Fast (< 0.5 ms) | Slow (1-10 ms) |
| Resistance to Side-Channel Attacks | | | | |
| Recommended for Smart Contracts | | | | |

PREREQUISITES

Step 1: Setting Up the Benchmarking Environment

This guide details the initial setup required to benchmark Post-Quantum Cryptography (PQC) algorithms on a blockchain node, focusing on hardware isolation, software dependencies, and baseline configuration.

Benchmarking cryptographic algorithms requires a stable, isolated environment to ensure consistent and reproducible results. Begin by provisioning a dedicated machine or virtual instance. For CPU-bound operations like signature verification, a modern multi-core processor (e.g., Intel Xeon or AMD EPYC) is ideal. Ensure the system has sufficient RAM (16GB minimum) and uses an SSD for fast disk I/O when loading large datasets or blockchain state. Disable power-saving features like CPU frequency scaling (cpufreq) and turbo boost in the BIOS/UEFI to prevent performance fluctuations during long-running tests.

The core software stack includes a Linux distribution (Ubuntu 22.04 LTS is a common choice), the Go programming language (version 1.21+), and git. You will also need the specific PQC library you intend to test, such as liboqs from the Open Quantum Safe project, or a Go-native implementation like circl. Install build essentials and development tools: sudo apt install build-essential cmake ninja-build. Clone your blockchain's node software repository (e.g., https://github.com/ethereum/go-ethereum for Geth) and the PQC library source code into separate directories.

Configure your node software to integrate the PQC algorithms. This typically involves modifying the cryptographic backend. For a Go-based chain, you might replace calls to crypto/ecdsa with wrappers from circl/sign/dilithium. Crucially, compile the node in benchmarking mode. Disable networking, mining, and RPC services by setting flags like --nodiscover --maxpeers 0 to eliminate external variables. Use the -benchtime and -count flags with Go's built-in benchmarking tool to control the duration and repetitions of each test run.

Establish a performance baseline by first benchmarking the existing classical cryptographic algorithms (e.g., ECDSA, Ed25519) on your setup. Run go test -bench=. -benchtime=10s ./crypto in your node's source directory. Record metrics like ns/op (nanoseconds per operation) and allocated bytes per operation. This baseline is essential for comparing the computational and memory overhead introduced by PQC candidates like CRYSTALS-Dilithium or Falcon, providing the context for your final performance analysis.

Finally, set up structured data collection. Create a script to automate test execution, parse the benchmark output, and log results to a CSV or JSON file. Include metadata in each run: timestamp, Git commit hashes for both the node and PQC library, CPU model, and Go version. Consistent data logging transforms ad-hoc tests into a reproducible research dataset, enabling you to track performance regressions or improvements across different algorithm parameters or library versions.
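As a sketch of that automation (the sample output lines and metadata fields are hypothetical), a small parser can lift the `ns/op` and `B/op` figures out of `go test -bench` output and tag each row with run metadata before writing it to your log:

```python
import re

BENCH_RE = re.compile(
    r"^(Benchmark\S+)\s+(\d+)\s+([\d.]+) ns/op(?:\s+(\d+) B/op)?",
    re.MULTILINE,
)

def parse_go_bench(output: str, metadata: dict) -> list:
    """Convert `go test -bench` output lines into rows tagged with run metadata."""
    rows = []
    for name, runs, ns_op, b_op in BENCH_RE.findall(output):
        rows.append({**metadata, "bench": name, "runs": int(runs),
                     "ns_per_op": float(ns_op),
                     "bytes_per_op": int(b_op) if b_op else None})
    return rows

# Hypothetical benchmark output and metadata, for illustration only
sample = (
    "BenchmarkEcdsaVerify-8        5000    210345 ns/op     128 B/op\n"
    "BenchmarkDilithiumVerify-8   10000    152000 ns/op    4096 B/op\n"
)
rows = parse_go_bench(sample, {"commit": "abc123", "cpu": "EPYC-7543"})
```

Each row can then be appended to a CSV or JSON log; with commit hashes and hardware recorded per run, regressions become diffable rather than anecdotal.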

PERFORMANCE BENCHMARKING

Step 2: Measuring Transaction Throughput (TPS)

Transaction throughput, measured in Transactions Per Second (TPS), is the primary metric for evaluating a blockchain's practical capacity. This step details how to benchmark the impact of PQC algorithms on your chain's TPS using controlled tests.

To measure TPS, you must first define a standardized transaction. For a meaningful benchmark, use a common operation like a simple token transfer or a specific, lightweight smart contract call. The goal is to isolate the performance overhead of the cryptographic operations—key generation, signing, and verification—from other network and execution latencies. You will need a local testnet or a dedicated, isolated environment that mirrors your mainnet's configuration to ensure consistent results.

The core testing methodology involves a load generator that submits transactions to a single, synchronized validator node. You will run two identical test suites: one using your current digital signature scheme (e.g., ECDSA or Ed25519) and another using the PQC candidate (e.g., Dilithium or Falcon). For each suite, measure the sustained TPS—the maximum rate at which the node can process transactions without the mempool backlog growing indefinitely. Critical metrics to log include average block time, average block size (in transactions), and CPU/memory usage on the validator.

A key analytical step is calculating the TPS overhead introduced by PQC. Use the formula: Overhead = 1 - (PQC_TPS / Baseline_TPS). For example, if your baseline ECDSA test achieves 10,000 TPS and the Dilithium test achieves 2,500 TPS, the overhead is 75%. This quantifies the performance trade-off. It's also crucial to test under different transaction loads to identify non-linear scaling effects and pinpoint the network's new maximum capacity.
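The overhead calculation above, as a one-line helper with the worked figures:

```python
def tps_overhead(baseline_tps: float, pqc_tps: float) -> float:
    """Fraction of throughput lost after switching to the PQC scheme."""
    return 1 - (pqc_tps / baseline_tps)

overhead = tps_overhead(10_000, 2_500)  # the worked example above: 0.75
```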

For developers, here is a conceptual Python snippet using a hypothetical blockchain client to run a simple TPS test loop. This highlights the process of sending batches of transactions and measuring confirmation times.

```python
import time
from blockchain_client import Client  # Hypothetical client SDK

def generate_transactions(batch_size):
    """Build `batch_size` signed transactions with the active signature scheme."""
    ...  # construct and sign transactions here (scheme under test)

def benchmark_tps(client, num_txs, batch_size):
    """Measures sustained TPS for a given client configuration."""
    start_time = time.time()
    tx_sent = 0

    while tx_sent < num_txs:
        batch = generate_transactions(batch_size)
        client.send_transaction_batch(batch)
        tx_sent += batch_size

    # Wait for the final transaction to be included in a block
    client.await_finalization()
    end_time = time.time()

    total_time = end_time - start_time
    return num_txs / total_time  # achieved TPS
```

Beyond raw TPS, analyze the impact on block propagation times. Larger signature sizes from PQC algorithms increase the size of each transaction, which can slow down block transmission across the peer-to-peer network. This can indirectly reduce effective TPS in a multi-validator setup. Consider running a multi-node testnet benchmark to observe these network-level effects, which are not captured in single-node tests.

Document your findings in a clear report. Include the test environment specifications (CPU, RAM, network), the exact cryptographic libraries and versions used (e.g., liboqs v0.8.0), raw TPS numbers, calculated overhead, and resource utilization graphs. This data is essential for making an informed decision about the feasibility of deploying a specific PQC algorithm on your production chain.

BENCHMARKING PERFORMANCE

Step 3: Testing Block Propagation and Validation Time

This step measures the real-world network and computational impact of PQC algorithms by testing how quickly blocks are shared and verified across nodes.

Block propagation and validation time are critical metrics for any blockchain's health. Propagation time measures how long it takes for a newly created block to be transmitted to all nodes in the network. Validation time is the duration a node spends verifying the block's contents, including the new PQC signatures. Slow performance in either metric directly increases the risk of forks and reduces network throughput. For PQC algorithms, which are computationally heavier than ECDSA, establishing a baseline and monitoring changes is essential.

To test propagation, you need a multi-node testnet; single-node tools like Ganache only help with isolated validation timing, so stand up a custom network of geth or besu instances (dev mode or a private PoA configuration). The core method involves instrumenting your node software to log timestamps: when a block is mined (t_mine), when it's received by a peer (t_receive), and when it's fully validated (t_valid). Propagation delay is t_receive - t_mine, and validation time is t_valid - t_receive. You should run tests with varying network latencies and node counts to simulate real-world conditions.
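Once the node logs those three timestamps, the derived metrics are simple subtractions. The sketch below assumes you have already collected the timestamps (in seconds) per block hash; the event structure is hypothetical:

```python
def block_timings(events: dict) -> dict:
    """Derive propagation and validation durations (ms) from logged timestamps.

    `events` maps block hash -> {'t_mine': s, 't_receive': s, 't_valid': s}.
    """
    return {
        block: {
            "propagation_ms": (t["t_receive"] - t["t_mine"]) * 1000,
            "validation_ms": (t["t_valid"] - t["t_receive"]) * 1000,
        }
        for block, t in events.items()
    }

timings = block_timings({
    "0xabc": {"t_mine": 100.000, "t_receive": 100.180, "t_valid": 100.420},
})
```

Aggregating these per-block durations across many blocks and peers gives you the distributions (average, p99) that the analysis below depends on.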

For a rough first pass, here's a simplified Python script using web3.py that times an eth_getBlockByNumber call on a single node. Note that this measures RPC and block-retrieval overhead rather than true consensus validation, so treat it only as a coarse proxy for initial benchmarking:

```python
from web3 import Web3
import time

w3 = Web3(Web3.HTTPProvider('http://localhost:8545'))
block_number = w3.eth.block_number

start = time.time()
block = w3.eth.get_block(block_number)  # Proxy: RPC + retrieval cost, not a full re-validation
elapsed = time.time() - start
print(f"Block retrieval took {elapsed:.4f} seconds")
```

A more accurate approach involves modifying your client's source code to log the specific PQC signature verification routine.

When analyzing results, compare the PQC algorithm (e.g., Dilithium5) against your baseline (e.g., secp256k1). Verification overhead varies widely by scheme: lattice-based verification can be competitive with secp256k1, while hash-based schemes like SPHINCS+ may be one to two orders of magnitude slower, so measure rather than assume. Propagation time may also increase due to larger signature sizes in the block. If validation time approaches or exceeds your target block time, it becomes a bottleneck. For instance, if validation takes 2 seconds but your chain targets a 12-second block time, you're using over 16% of the interval just for verification, leaving little room for other operations.
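The budget check from that example is worth automating so it can gate test runs; a trivial helper:

```python
def verification_budget_pct(validation_s: float, block_time_s: float) -> float:
    """Percentage of the block interval consumed by block validation."""
    return 100.0 * validation_s / block_time_s

pct = verification_budget_pct(2.0, 12.0)  # ~16.7% of a 12 s block interval
```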

Key performance targets to monitor are the 99th percentile (p99) validation time and the average block propagation delay. The p99 metric ensures your network remains stable under edge cases. If these metrics degrade significantly with PQC, consider optimizations: using faster implementations (like optimized assembly), hardware acceleration, or alternative signature schemes with better performance profiles (e.g., SPHINCS+ for smaller signatures, though it has other trade-offs). Document all findings, as this data is crucial for deciding on a final PQC algorithm and any necessary protocol parameter adjustments.

PERFORMANCE ANALYSIS

Step 4: Profiling CPU, Memory, and Storage Usage

After establishing a baseline, profiling reveals the resource cost of running PQC algorithms on your blockchain. This step measures the real-world computational, memory, and storage overhead of your chosen cryptographic primitives.

Profiling is the process of measuring the resource consumption of a program. For blockchain nodes, the critical resources are CPU cycles, RAM, and persistent storage. Unlike simple benchmarks that measure speed, profiling tells you where and why resources are consumed. For PQC algorithms, you must profile key operations: key generation, signing, verification, and encryption/decryption. This data is essential for setting realistic gas costs, node hardware requirements, and understanding network-wide scaling implications.

To profile CPU usage, use tools like perf on Linux, Instruments on macOS, or VTune on Windows. For a blockchain context, integrate profiling into your node's test suite. For example, you can modify a transaction processing unit test to record CPU time using your language's native timing functions (e.g., time.process_time() in Python, std::chrono in C++). Focus on the cryptographic operations within the transaction lifecycle. A significant CPU spike during block validation with a new signature scheme could become a network bottleneck.

Memory profiling is crucial as many PQC algorithms, like those based on structured lattices (e.g., Kyber, Dilithium), have larger key and signature sizes than ECDSA. Use tools like Valgrind's massif, heaptrack, or language-specific profilers. You need to measure both peak heap allocation during operations and persistent memory footprint (e.g., the size of a public key stored in a validator's state). A memory leak in a frequently called verification function would be catastrophic for node stability.

Storage profiling translates cryptographic object sizes into blockchain state growth. Calculate the cost of storing a PQC public key (often 800-2000 bytes) versus an Ethereum address (20 bytes). If your chain stores signatures in receipts or events, profile that too. Use your node's database layer to measure the byte-size impact of inserting these new data structures. This directly informs state storage costs and the efficiency of state sync protocols for new nodes joining the network.
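That arithmetic is worth doing explicitly for your own account and validator counts. A sketch with illustrative numbers, using Dilithium2's 1,312-byte public key against a 20-byte address:

```python
def state_growth_bytes(n_accounts: int, pqc_pubkey_bytes: int,
                       classical_bytes: int = 20) -> int:
    """Extra state if every account stores a PQC public key instead of
    a 20-byte address-style identifier."""
    return n_accounts * (pqc_pubkey_bytes - classical_bytes)

growth = state_growth_bytes(1_000_000, 1312)  # 1M accounts, Dilithium2 key
growth_gib = growth / 2**30                   # new state in GiB
```

Run the same calculation for signatures retained in receipts or events; those often dwarf the key-storage term.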

The output of profiling should be a structured report. For each PQC algorithm and operation, document: average/peak CPU time (ms), average/peak RAM allocation (KB), and the size of artifacts (keys, signatures) in bytes. Compare these metrics directly against your current classical algorithm (e.g., ECDSA/secp256k1). This comparison, often called the performance penalty, is your key data point for evaluating the practical feasibility of the migration on your specific chain architecture.

Finally, integrate these profiles into a continuous performance regression suite. Any change to the cryptographic library, compiler optimizations, or even node client version should re-run these profiles. This ensures the performance characteristics remain predictable and within acceptable bounds as your blockchain and the underlying PQC implementations evolve. Tools like GitHub Actions or GitLab CI can automate this process, failing builds if resource usage exceeds defined thresholds.
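The CI gate itself can be as small as a ratio check against the stored baseline; the metric names and the 10% regression budget below are illustrative:

```python
def check_regressions(current: dict, baseline: dict, max_ratio: float = 1.10) -> dict:
    """Return metrics whose current value exceeds baseline * max_ratio."""
    return {
        metric: (baseline[metric], value)
        for metric, value in current.items()
        if metric in baseline and value > baseline[metric] * max_ratio
    }

failures = check_regressions(
    {"verify_ns": 170_000, "peak_rss_kb": 48_000},
    {"verify_ns": 150_000, "peak_rss_kb": 50_000},
)
# 'verify_ns' regressed ~13% (over the 10% budget); 'peak_rss_kb' improved.
```

In a CI job, a non-empty result fails the build, turning resource budgets into an enforced contract rather than a convention.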

COMPARISON

Benchmark Results Framework and Analysis

A framework for comparing the performance and characteristics of post-quantum cryptography algorithms on a blockchain node.

| Metric / Characteristic | Kyber-512 | Dilithium-2 | Falcon-512 | SPHINCS+-128s |
| --- | --- | --- | --- | --- |
| Algorithm Type | KEM | Signature | Signature | Signature |
| NIST Security Level | 1 | 2 | 1 | 1 |
| Avg. Key Gen Time | < 50 ms | < 100 ms | < 200 ms | < 10 ms |
| Avg. Sign/Encrypt Time | < 100 ms | < 150 ms | < 300 ms | < 1 sec |
| Avg. Verify/Decrypt Time | < 50 ms | < 100 ms | < 100 ms | < 10 ms |
| Public Key Size | 800 bytes | 1,312 bytes | 897 bytes | 32 bytes |
| Signature/Ciphertext Size | 768 bytes | 2,420 bytes | 666 bytes | 17,088 bytes |
| Memory Overhead (Peak) | Low | Medium | High | Very High |

PERFORMANCE BENCHMARKING

Analyzing Results and Planning for Scale

After running your PQC algorithm benchmarks, the next critical step is interpreting the data to make informed decisions for your blockchain's security roadmap.

Your benchmark results will produce several key metrics. Focus on latency (sign/verify time in milliseconds), throughput (operations per second), and transaction size overhead (bytes added by the new signature). For example, benchmarking Dilithium2 on an EVM-compatible chain might show a verification time of 8ms and a signature size of 2,420 bytes, compared to 0.1ms and 65 bytes for standard ECDSA. This data forms your baseline for understanding the performance penalty of quantum resistance.
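Those baseline comparisons reduce to simple penalty multipliers, which are often the clearest numbers to put in front of validators and stakeholders; the inputs below are the example figures from this section:

```python
def penalty(pqc_value: float, baseline_value: float) -> float:
    """How many times worse a PQC metric is than its classical baseline."""
    return pqc_value / baseline_value

verify_penalty = penalty(8.0, 0.1)   # verification latency vs ECDSA: ~80x
size_penalty = penalty(2420, 65)     # signature size vs ECDSA: ~37x
```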

Analyze these results in the context of your chain's specific workload. A high-throughput DeFi chain processing 100+ TPS will be more sensitive to verification latency than a low-frequency governance chain. Use the data to model the impact on block gas limits and block propagation times. Tools like geth's built-in tracing or custom scripts using the web3.py or ethers.js libraries can help you simulate these changes on a testnet fork before mainnet deployment.

Planning for scale requires a phased approach. Start by integrating a PQC algorithm for low-risk, non-critical operations, such as off-chain attestations or specific governance modules. This allows you to monitor real-world performance and gather more data. Concurrently, work on cryptographic agility—designing your system to easily swap signature schemes via upgradeable smart contracts or consensus parameter changes, as recommended by NIST's post-quantum migration guidelines.

Consider hybrid signature schemes as a transitional strategy. Pairing a classical scheme such as ECDSA with a PQC scheme (hash-based SPHINCS+, or lattice-based ML-DSA, the standardized form of Dilithium) provides quantum resistance today at the cost of larger signatures, while library and tooling support for the NIST standards matures. Monitor the evolution of algorithms and hardware acceleration; dedicated PQC hardware modules could drastically reduce latency in future node hardware, changing your scaling calculus.

Finally, document your findings and roadmap transparently. Share benchmark results with your validator community and developers. A clear, data-driven plan that addresses performance impacts, upgrade paths, and external dependencies (like wallet support) is essential for coordinating a successful, scalable transition to post-quantum security.

PQC BENCHMARKING

Frequently Asked Questions

Common questions and troubleshooting for developers benchmarking post-quantum cryptography algorithms on blockchain networks.

PQC benchmarking is the systematic measurement of post-quantum cryptographic algorithms' performance metrics—such as signature generation/verification speed, key size, and memory usage—within a blockchain's execution environment. It's critical because quantum computers threaten current standards like ECDSA and BLS signatures. Benchmarking quantifies the computational and storage overhead of quantum-resistant replacements, which is essential for protocol design. For example, switching from a 64-byte ECDSA signature to a ~2.4KB Dilithium signature inflates the signature payload by roughly 37x, directly impacting gas costs and network throughput. This data is foundational for making informed decisions about algorithm selection, consensus mechanism adjustments, and hard fork planning.