
How to Design a Verifiable Delay Function for Physical Event Sequencing

A developer guide for implementing a VDF to create a manipulation-resistant timeline for events from physical sensors in a DePIN. Covers parameter selection, hardware constraints, and integration with consensus.
Chainscore © 2026
IMPLEMENTATION GUIDE

Introduction

A practical guide to designing and implementing a Verifiable Delay Function (VDF) to create provably fair, timestamped sequences from real-world physical events.

A Verifiable Delay Function (VDF) is a cryptographic primitive that enforces a minimum computation time, producing a unique output that is efficiently verifiable. For physical event sequencing, a VDF acts as a cryptographic stopwatch, creating an immutable, time-locked proof that an event occurred at a specific moment relative to a known starting point. The core property is sequentiality: the computation cannot be parallelized, guaranteeing the elapsed wall-clock time. This makes VDFs well suited to provably fair randomness beacons (e.g., Ethereum's proposed RANDAO+VDF design), timestamping services, and synchronizing consensus across distributed systems without trusted hardware.

Designing a VDF system for a physical event requires three core components: an evaluation function, a verification function, and a source of randomness. The most common construction uses repeated squaring in a group of unknown order, such as an RSA group or a class group. The evaluator computes y = x^(2^T) mod N, where x is the input (seed), T is the delay parameter (number of sequential steps), and N is the modulus of the unknown-order group. The verifier can then check a proof π that confirms y was correctly computed without redoing the work, often using Wesolowski or Pietrzak proof protocols.
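As a deliberately toy-sized sketch, the evaluation and a Wesolowski-style check can be written as follows. The tiny modulus and fixed challenge prime are illustrative only: a real deployment uses a 2048-bit RSA modulus or a class group, and derives l from a hash of (x, y) via Fiat-Shamir.

```python
def vdf_eval(x: int, T: int, N: int) -> int:
    """Sequentially compute y = x^(2^T) mod N with T squarings."""
    y = x % N
    for _ in range(T):
        y = (y * y) % N
    return y

def wesolowski_prove(x: int, T: int, N: int, l: int) -> int:
    """Prover's extra work: pi = x^floor(2^T / l) mod N."""
    return pow(x, (1 << T) // l, N)

def wesolowski_verify(x: int, y: int, pi: int, T: int, N: int, l: int) -> bool:
    """Fast check: pi^l * x^(2^T mod l) == y (mod N)."""
    r = pow(2, T, l)  # 2^T mod l, computed without materializing 2^T
    return (pow(pi, l, N) * pow(x, r, N)) % N == y

# Toy parameters; a real l is a prime derived from a hash of (x, y).
N, x, T, l = 101 * 103, 5, 20, 2063
y = vdf_eval(x, T, N)
pi = wesolowski_prove(x, T, N, l)
assert wesolowski_verify(x, y, pi, T, N, l)
```

The identity behind the check is 2^T = q*l + r with q = floor(2^T / l), so pi^l * x^r = x^(q*l + r) = x^(2^T) mod N.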

To sequence a physical event, you must bind the VDF input to the event data. For instance, to timestamp a sensor reading, you would use a hash of the reading H(event_data) as the seed x. Once the event is captured, the VDF evaluation begins. The elapsed time T to produce the output y and proof π provides the verifiable delay. Anyone can later verify that y corresponds to H(event_data) and that its computation required roughly T sequential steps, proving the event must have occurred before the evaluation could finish. This creates a tamper-evident timeline.
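A minimal sketch of binding a sensor reading to the VDF input; the modular reduction and the offset that skips the trivial fixed points 0 and 1 are illustrative choices, not a standard:

```python
import hashlib

def event_seed(event_data: bytes, N: int) -> int:
    """Map raw sensor bytes to a VDF input x in [2, N)."""
    h = int.from_bytes(hashlib.sha256(event_data).digest(), "big")
    return 2 + h % (N - 2)  # avoid the trivial fixed points 0 and 1

reading = b'{"sensor":"temp-07","value":21.4,"seq":118}'
x = event_seed(reading, N=101 * 103)
assert 2 <= x < 101 * 103
```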

Implementation requires careful parameter selection. The delay parameter T must be calibrated to the desired time window (e.g., 1 minute of compute time) and the expected speed of the prover's hardware. The security relies on the unknown order of the group; if the factorization of N is discovered, the delay can be shortcut. Therefore, N must be generated via a trusted setup ceremony, or you can use class groups, which are believed not to require a trusted setup. Libraries like chiavdf (used in the Chia blockchain) and the winning entries from Chia's VDF competition provide production-ready references for these squaring-based VDFs.
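Calibrating T is simple arithmetic once the per-squaring time is benchmarked. This sketch uses made-up benchmark numbers and a hypothetical safety factor to account for provers with faster hardware:

```python
def choose_T(target_delay_s: float, squarings_per_s: float,
             adversary_speedup: float = 10.0) -> int:
    """Pick T so a prover `adversary_speedup`x faster than the benchmark
    machine still needs at least target_delay_s of wall-clock time."""
    return int(target_delay_s * squarings_per_s * adversary_speedup)

# Example: the benchmark machine does 100,000 sequential squarings/s and we
# want a guaranteed 60 s delay even against a 10x-faster ASIC prover.
T = choose_T(60, 100_000)
assert T == 60_000_000
```

Note the trade-off: with this safety factor, the honest (benchmark-speed) prover takes ten times the target delay to finish.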

A practical architecture involves an oracle or relay that watches for the physical event. Upon detection, it:

  • Hashes the event data to create seed x,
  • Initiates the VDF evaluation for the pre-set duration T,
  • Broadcasts the final output y and proof π to a blockchain or public ledger.

Smart contracts can then verify the proof on-chain using the known modulus N and input x, finalizing the event's position in the sequence. This creates a decentralized, auditable record where the order of events is secured by computational time, not subjective timestamps.

Key challenges include cost of computation for high-frequency events, ensuring the prover does not cheat by pre-computing, and managing the trusted setup. The field is evolving with ASIC-resistant VDF designs and research into continuous VDFs. For many applications, integrating with an existing VDF service like Ethereum's beacon chain or a dedicated randomness beacon may be more practical than a custom implementation. The core takeaway is that VDFs provide a powerful, trust-minimized primitive for translating the passage of real time into an unforgeable digital sequence.

VDF DESIGN

Prerequisites and System Requirements

Building a Verifiable Delay Function (VDF) for physical event sequencing requires a robust technical foundation. This guide outlines the essential knowledge, tools, and hardware needed to implement a secure and functional system.

A Verifiable Delay Function (VDF) is a cryptographic primitive that enforces a minimum, wall-clock time to compute an output, even with massive parallelism. For sequencing physical events—like proving a sensor reading occurred at a specific time—the VDF's sequential delay is crucial. You must understand core cryptographic concepts: one-way functions, sequential computation, and succinct non-interactive arguments of knowledge (SNARKs) or STARKs for efficient verification. Familiarity with time-lock puzzles and the RSA-based VDF constructions from Boneh et al. (2018) is highly recommended.

The primary system requirement is a trusted execution environment (TEE) or a secure hardware module. Since the VDF must compute over real-world inputs (e.g., a sensor's data hash), the initial setup and the input ingestion must be tamper-proof. Options include Intel SGX, AMD SEV, or a dedicated hardware security module (HSM). The system must also have a reliable, high-resolution time source, such as a GPS clock or a network time protocol (NTP) server with attestation, to anchor the delay period to real time.

Your development environment should support low-level systems programming. Rust and C++ are preferred for their performance and memory safety (in Rust's case). You will need cryptographic libraries like libsodium or OpenSSL, and for SNARK/STARK proving, frameworks such as arkworks (Rust) or libsnark (C++). A basic implementation involves three algorithms: Setup(λ, T) to generate public parameters for delay time T, Eval(x) to compute the output and proof sequentially, and Verify(x, y, π) to check the result.

Consider the operational lifecycle. The Setup phase, which may involve generating a large RSA modulus, is critical and must be performed securely, often via a multi-party computation (MPC) ceremony. The Eval function will run continuously on your secure hardware, hashing incoming event data and iterating the sequential function (e.g., repeated squaring modulo an RSA group). You must plan for key management, proof aggregation to reduce on-chain verification costs, and failure recovery mechanisms for the hardware module.

Finally, integrate with the broader system. The VDF's output—a proof that an event was sequenced after a mandatory delay—is typically posted to a blockchain or a distributed ledger for public verification. Ensure your design includes a clear data pipeline: from the physical sensor, to the secure enclave for VDF evaluation, and finally to the verification contract on-chain, such as an Ethereum smart contract using the BN254 or BLS12-381 curve for efficient pairing verification.

CORE CONCEPTS

VDF Core Concepts

Verifiable Delay Functions (VDFs) enforce a minimum computation time that cannot be parallelized. This guide explains how to design a VDF for sequencing real-world physical events, with applications such as random number generation and blockchain consensus.

A Verifiable Delay Function (VDF) is a cryptographic primitive that guarantees a computation requires a specific, wall-clock time to complete, even with massive parallelism. Unlike Proof-of-Work, which is energy-intensive but parallelizable, a VDF's sequential nature makes it ideal for creating trusted time delays. For physical event sequencing, this property is crucial. It ensures that an event, like the observation of a cosmic ray or a sports match outcome, can be timestamped and ordered in a decentralized system without relying on a trusted third party's clock.

Designing a VDF for this purpose involves three core components. First, the delay function itself, typically a repeated squaring modulo a large integer (e.g., x^(2^T) mod N), which is inherently sequential. Second, a proof generation mechanism that allows a prover to quickly convince a verifier the computation was done correctly, using schemes like the Wesolowski or Pietrzak proofs. Third, a publicly verifiable random beacon that uses the VDF output to generate an unpredictable result. The physical event's data is used as the seed for this process, binding the real-world occurrence to the immutable delay.

The security of the system hinges on the sequentiality assumption—that no adversary with many processors can compute the function significantly faster. For the repeated squaring VDF, this relies on the hardness of taking modular square roots without knowing the factorization of N. The delay parameter T is set based on the desired time window (e.g., 1 minute) and estimated processor speed. A longer T increases security but also the latency before the result is available. Chia's Proof-of-Space-and-Time is a deployed implementation of this concept, and Ethereum's proposed RANDAO/VDF hybrid illustrates its use for consensus randomness.

To integrate a physical event, you must create a cryptographic commitment to the event data before the VDF evaluation begins. For instance, a sensor measuring a physical phenomenon publishes a hash of its reading. This hash becomes the input seed for the VDF. The enforced delay ensures that no participant can manipulate the VDF output to retroactively match a future event. Only events that occurred before the VDF started can be correctly sequenced. This creates a temporal anchor linking the blockchain's logical time to real-world time.

Implementation requires careful parameter selection. The modulus N must be an RSA-like integer where the factorization is unknown but its structure is verifiable (a class group or RSA group). Libraries like chiavdf provide optimized C++ implementations. The proof must be succinct and fast to verify, often requiring only a single modular exponentiation. When designing the system, you must also consider fault tolerance and liveness: what happens if the primary VDF prover fails? Most designs use a committee of provers or a fallback mechanism to ensure the sequence continues uninterrupted.

In practice, a VDF-based sequencer for physical events enables applications like decentralized oracle randomness, fair ordering of transactions based on external triggers, and proof-of-elapsed-time consensus. The key takeaway is that the VDF acts as a cryptographic clock. By consuming a commitment to a physical event as its starting fuel, it produces a verifiable proof that a minimum amount of time has passed since that event was known, creating an immutable and trust-minimized sequence for the decentralized world.

GUIDE

VDF Parameter Selection for Hardware Constraints

A practical guide to configuring Verifiable Delay Functions for real-world hardware, balancing security, cost, and verification speed.

03

Estimating Hardware Performance (t)

You must benchmark the iteration time t on the exact hardware you are modeling as the constraint (e.g., a standard cloud VM, a consumer GPU). For a squaring-based VDF, measure the time for a single squaring (x^2 mod N, or its class-group equivalent). This t value, multiplied by your chosen T, gives the provable minimum delay. Document the CPU model, clock speed, and library (e.g., chiavdf) used for benchmarking to ensure reproducibility.
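A benchmarking sketch for t. The modulus size drives the per-squaring cost, so benchmark at production size; the 2048-bit number below is a random odd integer standing in for a real RSA modulus, and the function name is illustrative.

```python
import secrets
import time

def bench_squaring(N: int, iters: int = 100_000) -> float:
    """Return measured seconds per sequential modular squaring."""
    x = secrets.randbelow(N - 2) + 2
    start = time.perf_counter()
    for _ in range(iters):
        x = (x * x) % N
    return (time.perf_counter() - start) / iters

N = secrets.randbits(2048) | 1  # stand-in for a real 2048-bit modulus
t = bench_squaring(N, iters=10_000)
print(f"~{t * 1e6:.2f} us per squaring -> T = {int(60 / t):,} for a 60 s delay")
```

A compiled implementation (chiavdf, GMP) will be far faster than this pure-Python loop; always benchmark the library you will actually deploy.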

Example iteration time: ~10 μs
05

Optimizing Proof Generation & Verification

The Wesolowski proof allows efficient verification but requires the prover to compute one extra exponentiation, π = x^⌊2^T / l⌋ mod N, where l is a prime challenge derived from the transcript; the verifier's own exponents (l and 2^T mod l) stay small. To keep verification fast for resource-limited devices (like smart contracts):

  • Choose T and l so the verification exponent is manageable.
  • Consider the Pietrzak proof for cheaper proof generation, though its proofs are larger (O(log T) group elements) and take more rounds to verify.
  • Benchmark gas costs on L1 Ethereum or other target chains to ensure practical verification.
Target verification time: < 1 sec
IMPLEMENTATION GUIDE

Implementation Steps

A practical guide to implementing a VDF for proving the sequence and timing of real-world events on-chain, using a modular smart contract architecture.

A Verifiable Delay Function (VDF) is a cryptographic primitive that enforces a minimum, wall-clock time delay to compute its output, yet allows for fast verification. For physical event sequencing, such as proving that a sensor reading occurred before a transaction, the VDF acts as a tamper-proof timer. The core property is sequentiality: the computation cannot be parallelized, guaranteeing that a specific amount of real time has passed between an input (the event) and the output (the proof). This guide outlines a design using an RSA-based VDF (like the VDF Alliance design) for its well-understood security properties, with the chiavdf library as a reference implementation of the closely related class-group construction.

The system architecture involves three core off-chain components and an on-chain verifier. First, an Event Ingestor captures the physical event data (e.g., a hash of a sensor reading) and submits it as the input seed to the VDF evaluator. Second, a VDF Evaluator runs the sequential computation for the predetermined delay period T (e.g., 60 seconds). This generates the output and a succinct proof. Third, a Proof Aggregator may batch proofs for efficiency. Finally, a Verifier Smart Contract on-chain validates the proof against the public parameters and the original input. Use a library like chiavdf (create_discriminant, verify_wesolowski) for the heavy lifting, ensuring your delay parameter T is calibrated to your hardware to prevent adversaries with faster machines from cheating.

Implement the on-chain verifier as a lightweight, gas-efficient contract. It must store the VDF public parameters: the RSA modulus N and the delay time T. The verification function, verifyVDF(bytes memory input, bytes memory output, bytes memory proof), will use the Wesolowski proof scheme to check that output = input^(2^T) mod N without redoing the work. Hand-optimize the verification logic in a low-level language like Yul or Huff for minimal gas costs. For physical sequencing, the contract should also enforce state transitions; for example, a commitEvent function stores the VDF input hash, and a finalizeWithProof function only allows finalization after a valid VDF proof for that input is provided, creating an enforceable time lock.
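The commit/finalize time-lock pattern can be prototyped off-chain before any Solidity is written. This Python model is a sketch under toy assumptions: the class name and methods are hypothetical, and the recompute-based check stands in for an on-chain Wesolowski verification.

```python
import hashlib

class VDFTimeLock:
    """Model of the on-chain state machine: commit an input, then
    finalize only with a valid VDF output for that exact input."""

    def __init__(self, N: int, T: int):
        self.N, self.T = N, T
        self.committed = set()
        self.finalized = set()

    def commit_event(self, x: int) -> bytes:
        h = hashlib.sha256(x.to_bytes(32, "big")).digest()
        self.committed.add(h)
        return h

    def _verify(self, x: int, y: int) -> bool:
        # Toy stand-in: recompute the squarings; on-chain this would be
        # a succinct Wesolowski check via the modexp precompile.
        out = x % self.N
        for _ in range(self.T):
            out = (out * out) % self.N
        return out == y

    def finalize_with_proof(self, x: int, y: int) -> bool:
        h = hashlib.sha256(x.to_bytes(32, "big")).digest()
        if h not in self.committed or not self._verify(x, y):
            return False
        self.finalized.add(h)
        return True
```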

To sequence multiple events, you must chain VDF computations. The output of one VDF becomes the input for the next, creating an immutable and verifiable timeline. Your design must prevent precomputation attacks by ensuring each VDF input contains a high-entropy, unpredictable component (like a block hash or random oracle output). Furthermore, the system's security depends on the trusted setup for generating the RSA modulus N. Use a decentralized multi-party ceremony (MPC) for this, as a compromised modulus breaks all security. Audit all cryptographic code and consider the economic security: the delay T must be long enough that attempting to compute it faster with specialized hardware is economically infeasible for an attacker.
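Chaining can be sketched as follows: each new input mixes the previous output, the new event data, and an unpredictable beacon value, so no link can be precomputed. The helper names and the SHA-256 mixing are illustrative choices, and the toy evaluator is inlined for self-containment.

```python
import hashlib

def vdf_eval(x: int, T: int, N: int) -> int:
    """Toy sequential evaluator: y = x^(2^T) mod N."""
    y = x % N
    for _ in range(T):
        y = (y * y) % N
    return y

def next_input(prev_output: int, event_data: bytes, beacon: bytes, N: int) -> int:
    """Bind link i+1 to link i's output, the new event, and fresh entropy."""
    material = prev_output.to_bytes(32, "big") + event_data + beacon
    return 2 + int.from_bytes(hashlib.sha256(material).digest(), "big") % (N - 2)

N, T = 101 * 103, 50
y = vdf_eval(2, T, N)  # genesis link
timeline = []
for event, beacon in [(b"evt-1", b"rnd-1"), (b"evt-2", b"rnd-2")]:
    x = next_input(y, event, beacon, N)
    y = vdf_eval(x, T, N)
    timeline.append((event, x, y))
```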

ARCHITECTURE SELECTION

VDF Algorithm Comparison for DePIN Use Cases

Comparison of Verifiable Delay Function algorithms for sequencing physical events in decentralized physical infrastructure networks.

| Algorithm Feature | Wesolowski (VDF.PRO) | Pietrzak (Chia) | Sloth (Ethereum 2.0) |
| --- | --- | --- | --- |
| Underlying Primitive | Repeated squaring in RSA group | Repeated squaring in RSA group | Modular square root in prime field |
| Verification Time | < 1 second | < 2 seconds | < 100 ms |
| Prover Memory | ~1 MB | ~10 MB | ~1 KB |
| Trusted Setup Required | | | |
| Quantum Resistance | | | |
| Hardware Acceleration | ASIC/FPGA optimized | GPU/CPU optimized | CPU only |
| Delay Granularity | 1 ms to 10+ years | 10 ms to 10+ years | 1 second to 1 hour |
| Energy Efficiency (Joules/op) | 0.5 - 2 J | 2 - 5 J | 0.1 - 0.5 J |

TUTORIAL

Integrating VDF Output with Network Consensus

This guide explains how to design a Verifiable Delay Function (VDF) to sequence physical events, such as block production or leader election, within a decentralized network's consensus mechanism.

A Verifiable Delay Function (VDF) is a cryptographic primitive that enforces a minimum, real-world time delay on an output, which can be verified much faster than it was computed. This property is critical for sequencing physical events in consensus, preventing a malicious actor from predicting or manipulating the order of future events by simply using more computational power. Unlike Proof-of-Work, which is probabilistic and energy-intensive, a VDF provides a deterministic delay that is independent of parallel computation, making it ideal for creating unbiased, time-based randomness or fair leader election.

The core design involves three algorithms: setup(λ, T), eval(x, T), and verify(x, y, π, T). The setup generates public parameters for a security level λ and a target delay T. The eval function takes an input x (e.g., a block hash or random beacon output) and sequentially computes the output y and a proof π over precisely T sequential steps. Crucially, this computation cannot be parallelized. The verify function allows any network participant to quickly check that y is the correct output for input x after delay T, using the proof π. A common construction uses repeated squaring in a class group of an imaginary quadratic field, which is inherently sequential.

To integrate the VDF output into network consensus, you must define the source of the input x. This is often the output of a random beacon (like a BLS signature aggregation from a committee) from the previous consensus round. This links the unpredictable VDF output to the network's shared history. The computed output y can then be used to determine the next block proposer via a verifiable random function (VRF), to seed a lottery, or to define a unique timestamp. This process creates a cryptographically verifiable timeline where the order of events is bound to the passage of real time, preventing last-reveal attacks and ensuring fairness.

Implementation requires careful parameter selection. The delay parameter T must be calibrated to the network's block time or epoch duration. It should be long enough to ensure global propagation of the VDF input but short enough not to bottleneck consensus. Security parameter λ (e.g., 128-bit) determines the size of the class group, impacting proof size and verification speed. Libraries like Chia's VDF code or Ethereum's research on VDFs provide practical starting points. The verification proof π must be compact and efficiently verifiable on-chain to minimize consensus overhead.

In a practical consensus design, such as a proof-of-stake system, the sequence might be: 1) A committee agrees on random beacon value R for epoch N. 2) This value R is used as the input x to the VDF with delay T. 3) After time T, the output y and proof π are published. 4) Validators verify π against R and y. 5) The output y deterministically selects the leader for epoch N+1. This design ensures that the leader for the future epoch is unknowable until the sequential work for the current epoch is complete, decoupling leadership from mere stake weight and adding a layer of attack-resistant timing.
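The five-step epoch flow above reduces to a few lines. The toy squaring VDF and the plain modular reduction for leader choice are illustrative stand-ins: a production design would use a class-group VDF with a succinct proof and hash the output before reducing, as sketched here.

```python
import hashlib

def vdf_eval(x: int, T: int, N: int) -> int:
    """Toy sequential evaluator: y = x^(2^T) mod N."""
    y = x % N
    for _ in range(T):
        y = (y * y) % N
    return y

def elect_leader(beacon_R: bytes, T: int, N: int, validators: list) -> str:
    """Epoch N's beacon -> VDF -> deterministic leader for epoch N+1."""
    x = 2 + int.from_bytes(hashlib.sha256(beacon_R).digest(), "big") % (N - 2)
    y = vdf_eval(x, T, N)  # unknowable until T sequential steps elapse
    digest = hashlib.sha256(y.to_bytes(32, "big")).digest()
    return validators[int.from_bytes(digest, "big") % len(validators)]

leader = elect_leader(b"epoch-41-beacon", T=100, N=101 * 103,
                      validators=["v0", "v1", "v2", "v3"])
```

Because the beacon R is fixed before the VDF starts and the computation is sequential, no validator can learn (or grind) the next leader ahead of the enforced delay.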

The primary challenges in production are hardware acceleration and trusted setup. While the computation must be sequential, specialized ASICs can compute it faster than general-purpose CPUs, potentially centralizing the role of VDF evaluators. Networks may mitigate this by making the role permissionless and rewarding evaluators. Additionally, some VDF constructions require a one-time trusted setup ceremony to generate public parameters, which introduces a potential weakness. Ongoing research focuses on trustless setups and quantum-resistant VDFs to future-proof this critical consensus component.

VERIFIABLE DELAY FUNCTIONS

Common Pitfalls and Performance Optimizations

Designing a VDF for physical event sequencing requires careful consideration of hardware, security, and timing. This guide covers key challenges and solutions.

01

Hardware Security and Trusted Execution

A primary pitfall is relying on untrusted hardware, which can be accelerated or manipulated. Trusted Execution Environments (TEEs) like Intel SGX or AMD SEV can provide isolation, but introduce complexity and potential side-channel vulnerabilities. Key considerations:

  • Attestation: The prover must generate a remote attestation to prove the VDF is running in a genuine, secure enclave.
  • Side-channels: Mitigate timing and power analysis attacks through constant-time algorithms and noise injection.
  • Supply chain risk: The security of the entire system depends on the hardware manufacturer.
02

Minimizing Prover-Verifier Asymmetry

A core property of a VDF is that evaluation is slow, but verification is fast. Poor design can lead to minimal asymmetry, making the system inefficient.

  • Algorithm choice: Use inherently sequential functions like repeated squaring in a group of unknown order (e.g., RSA groups, class groups). Avoid parallelizable hash functions.
  • Parameter tuning: The delay parameter t must be set high enough to guarantee the desired time window (e.g., 1 minute for block time) on expected hardware, but verification must remain under 1 second.
  • Benchmarking: Rigorously test evaluation time on target hardware under load to set accurate t parameters.
03

Ensuring Unpredictability and Random Beacon Integration

For event sequencing, the VDF output must be unpredictable. A common mistake is using a predictable input seed.

  • Seed generation: The input must be a high-entropy, publicly verifiable random beacon, such as the output of a prior VDF round or a verifiable random function (VRF).
  • Commit-Reveal schemes: To prevent grinding attacks, use a two-phase process: commit to a seed, then reveal it and compute the VDF.
  • Chainlink VRF: can supply a verifiable random seed to smart contracts, though the VDF proof itself must still be carefully verified on-chain.
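The commit-reveal flow from the bullets above, as a minimal two-phase sketch; the salt length and hash choice are illustrative assumptions:

```python
import hashlib
import secrets

def commit(seed: bytes):
    """Phase 1: publish H(salt || seed); keep (salt, seed) private."""
    salt = secrets.token_bytes(32)
    return hashlib.sha256(salt + seed).digest(), salt

def reveal_ok(commitment: bytes, salt: bytes, seed: bytes) -> bool:
    """Phase 2: anyone checks the reveal; only then does the VDF run on seed."""
    return hashlib.sha256(salt + seed).digest() == commitment

c, salt = commit(b"sensor-batch-77")
assert reveal_ok(c, salt, b"sensor-batch-77")
assert not reveal_ok(c, salt, b"tampered")
```

Because the seed is fixed at commit time, a participant cannot grind through candidate seeds after seeing others' contributions.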
04

Proof Systems and On-Chain Verification

The prover must generate a succinct proof that the VDF was computed correctly. Choosing the wrong proof system is a major performance bottleneck.

  • Wesolowski Proofs: Provide constant-size proofs and verification time, ideal for on-chain use. The prover computes a single group element.
  • Pietrzak Proofs: Require log(t)-sized proofs and interactive verification, which can be made non-interactive with the Fiat-Shamir heuristic.
  • Gas costs: For Ethereum, a Wesolowski proof verification can cost ~300k gas. Optimize by using precompiles for modular exponentiation (EIP-198).
05

Network Latency and Synchronization

Physical events occur across distributed nodes. Ignoring network conditions can break sequencing.

  • Time assumptions: Do not assume synchronized clocks. Use the VDF's computed output as the canonical timestamp.
  • Buffer windows: Account for network propagation delay when defining the valid submission period for VDF proofs. A 12-second buffer is common in protocols like Ethereum's consensus.
  • Fault tolerance: Design the system to be resilient to nodes going offline during the computation period, potentially using a committee of provers.
VERIFIABLE DELAY FUNCTIONS

Frequently Asked Questions (FAQ)

Common questions about designing and implementing Verifiable Delay Functions (VDFs) for sequencing physical events in blockchain systems.

A Verifiable Delay Function (VDF) is a cryptographic primitive that enforces a minimum computation time to produce an output, even with parallel processing. Its output is uniquely determined by its input and can be verified quickly by anyone.

For physical event sequencing, VDFs create a trust-minimized timestamp. When a physical event (like a sensor reading) occurs, its data is used as the VDF input. The enforced delay prevents an attacker from retroactively creating a valid proof for a fabricated event that happened earlier, establishing a reliable temporal order on-chain. This is critical for applications like proof-of-location, supply chain tracking, and oracle data sequencing where the timing of real-world data matters.

IMPLEMENTATION ROADMAP

Conclusion and Next Steps

This guide has outlined the core principles for designing a VDF for physical event sequencing. The next steps involve practical implementation and rigorous security analysis.

Designing a Verifiable Delay Function (VDF) for physical events is a multidisciplinary challenge requiring careful integration of hardware, cryptography, and protocol logic. The core architecture must enforce a minimum time delay between a provable physical stimulus and the generation of a usable output, preventing precomputation. This is achieved by combining a slow, sequential function (like repeated squaring in a group of unknown order) with a commitment to a physical sensor reading (e.g., a photodiode's response to a laser pulse). The final design's security hinges on the assumption that the physical measurement cannot be predicted or forged faster than the sequential computation can be completed.

For a practical implementation, start by selecting and characterizing your hardware. Choose a sensor with a well-defined, tamper-evident physical interface and a secure element (like a TPM or HSM) to act as the trusted execution environment. The code running on this secure hardware must: 1) Wait for and digitally sign the raw sensor data, 2) Use this signature as the seed for the sequential VDF computation, and 3) Output the final proof. An example flow in pseudocode:

  physical_event = read_sensor()
  commitment = sign_with_attestation(physical_event)
  vdf_output = repeated_squaring(commitment, iterations=T)
  final_proof = (commitment, vdf_output)

This binds the VDF output irrevocably to the instant the physical event was detected.

The next critical phase is security auditing and threat modeling. You must analyze potential attack vectors: Can an adversary feed pre-recorded data into the sensor port? Is the delay T long enough to make precomputation across all possible sensor inputs economically infeasible? Could side-channel attacks on the secure element reveal the VDF seed? Engage with specialists in hardware security and cryptanalysis. Formal verification of the state machine governing the sensor-to-VDF pipeline is highly recommended to eliminate logic bugs that could bypass the delay.

Finally, consider the integration into a larger system, such as a blockchain consensus mechanism or a proof-of-physical-work protocol. The VDF output must be packaged with the signed sensor data and any necessary attestation proofs into a verifiable data structure. Off-chain verifiers need efficient algorithms to check: the sensor signature's validity, the correctness of the sequential computation, and that the elapsed time meets the minimum delay T. Publishing a detailed specification and open-sourcing the verifier code will foster trust and allow for independent review, which is essential for any system claiming to sequence real-world events.
