
How to Evaluate Trust Assumptions in Proof Systems

A technical guide for developers and researchers to systematically assess the trust models underlying ZK-SNARKs, STARKs, and other cryptographic proof systems.
SECURITY FUNDAMENTALS

A framework for analyzing the security and decentralization of cryptographic proofs, from trusted setups to validity proofs.

Every blockchain proof system operates under a set of trust assumptions: conditions that must be true for the system's security guarantees to hold. Evaluating these assumptions is critical for developers and researchers choosing between systems like zk-SNARKs, zk-STARKs, or optimistic rollups. The primary trust vectors are:

- Setup trust: Does the system require a trusted ceremony?
- Verifier trust: Must you trust the entity running the verifier?
- Prover trust: Can a malicious prover create a false proof?
- Data availability: Is the underlying data needed for verification published?

Understanding where trust is placed is the first step in assessing a system's security model.
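As a rough illustration, the four vectors above can be captured as a simple checklist you fill in per system. This is a minimal sketch; the type and field names are hypothetical, not part of any standard library.

```typescript
// Hypothetical checklist for the four trust vectors discussed above.
// All names are illustrative only.
type Answer = "yes" | "no" | "partial" | "unknown";

interface TrustVectorChecklist {
  system: string;
  setupTrust: { requiresTrustedCeremony: Answer; notes?: string };
  verifierTrust: { verifierIsOnchainContract: Answer; notes?: string };
  proverTrust: { maliciousProverCanForge: Answer; notes?: string };
  dataAvailability: { dataPublishedOnL1: Answer; notes?: string };
}

// Example: a zk-rollup that uses a Groth16 prover and posts data to L1.
const exampleZkRollup: TrustVectorChecklist = {
  system: "example zk-rollup (hypothetical)",
  setupTrust: { requiresTrustedCeremony: "yes", notes: "Groth16 circuit-specific ceremony" },
  verifierTrust: { verifierIsOnchainContract: "yes", notes: "proof checked by an L1 contract" },
  proverTrust: { maliciousProverCanForge: "no", notes: "soundness holds unless the setup was compromised" },
  dataAvailability: { dataPublishedOnL1: "yes", notes: "calldata or blobs" },
};

console.log(JSON.stringify(exampleZkRollup, null, 2));
```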

Trusted setups, like the Groth16 ceremony for zk-SNARKs, require a one-time generation of public parameters (the Common Reference String or CRS). If any participant in this multi-party computation (MPC) is honest and destroys their toxic waste, the setup is secure. However, if all participants collude, they could generate fraudulent proofs. Systems like PLONK use a universal and updatable setup, and STARKs require no setup at all, reducing this long-term risk. When evaluating, ask: Is the setup ceremony transparent and auditable? How many participants were involved? Is the setup circuit-specific or universal?

The verifier's role is another key assumption. In a Validity Proof system (zk-Rollup), the on-chain verifier is a smart contract that checks a cryptographic proof; you only need to trust the correctness of the contract code and the underlying blockchain. In contrast, an Optimistic Rollup uses a fraud proof system where verifiers (watchtowers) must be actively monitoring and challenging invalid state transitions. Here, trust shifts to the economic honesty of at least one honest watcher and the liveness assumption that they can submit a challenge within the dispute time window.
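To make the liveness assumption concrete, the sketch below checks whether a disputed state assertion can still be challenged, given its L1 timestamp and the rollup's dispute window. The 7-day window and the timestamps are illustrative values, not taken from any specific rollup.

```typescript
// Minimal sketch: can an invalid assertion still be challenged?
// All values are illustrative; real rollups expose these parameters on-chain.
const DISPUTE_WINDOW_SECONDS = 7 * 24 * 60 * 60; // e.g., a 7-day window

function challengeDeadline(assertionTimestamp: number): Date {
  return new Date((assertionTimestamp + DISPUTE_WINDOW_SECONDS) * 1000);
}

function canStillChallenge(
  assertionTimestamp: number,
  nowSeconds = Math.floor(Date.now() / 1000)
): boolean {
  return nowSeconds < assertionTimestamp + DISPUTE_WINDOW_SECONDS;
}

// Example: an assertion posted 3 days ago can still be challenged for ~4 more days.
const threeDaysAgo = Math.floor(Date.now() / 1000) - 3 * 24 * 60 * 60;
console.log(challengeDeadline(threeDaysAgo).toISOString(), canStillChallenge(threeDaysAgo));
```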

Data availability is a crucial, often overlooked, trust component. A zk-Rollup may produce a valid proof that state transition N is correct, but if the input data for transition N+1 is withheld, the chain halts. Systems relying on Data Availability Committees (DACs) or off-chain data require trust in those entities to provide data upon request. Pure on-chain data availability, as enforced by Ethereum's calldata or dedicated data availability layers, removes this trust assumption but increases cost. Always verify what data is posted and who guarantees its availability.
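One practical spot check is to confirm that batch data actually lands on L1. The sketch below uses ethers v6; the RPC URL and transaction hash are placeholders you would supply. It fetches a batch-posting transaction and reports whether it carries calldata or is a blob-carrying (EIP-4844, type 3) transaction.

```typescript
import { JsonRpcProvider } from "ethers";

// Placeholders: supply your own RPC endpoint and a batch-posting tx hash.
const RPC_URL = "https://example-rpc.invalid";
const BATCH_TX_HASH = "0x...";

async function inspectBatchTx() {
  const provider = new JsonRpcProvider(RPC_URL);
  const tx = await provider.getTransaction(BATCH_TX_HASH);
  if (!tx) throw new Error("transaction not found");

  // Type 3 indicates an EIP-4844 blob transaction; otherwise data is in calldata.
  const calldataBytes = (tx.data.length - 2) / 2; // hex string -> byte count
  console.log({
    to: tx.to,           // should be the rollup's inbox / batcher contract
    txType: tx.type,     // 3 = blob-carrying transaction
    calldataBytes,       // nonzero for calldata-based data availability
  });
}

inspectBatchTx().catch(console.error);
```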

To systematically evaluate a system, map its trust model. For a zkEVM using a Groth16 prover, your trust is in: 1) the integrity of its trusted setup ceremony, 2) the correctness of the verifier contract, and 3) the data availability solution. For an Optimistic Rollup, you trust: 1) at least one honest actor will monitor and challenge fraud within the challenge period (typically 7 days), and 2) the data is available for them to construct that fraud proof. This framework allows for apples-to-apples comparison based on your application's specific security requirements and threat model.
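Applying this framework, a minimal sketch might record both example systems side by side so the differences are explicit. The entries simply restate the assumptions above; the structure and names are illustrative.

```typescript
// Side-by-side trust maps for the two examples above (illustrative only).
const trustMaps: Record<string, Record<string, string>> = {
  "zkEVM (Groth16 prover)": {
    setup: "integrity of the trusted setup ceremony",
    verifier: "correctness of the on-chain verifier contract",
    dataAvailability: "the rollup's DA solution (L1 calldata/blobs or a DAC)",
  },
  "Optimistic rollup": {
    liveness: "at least one honest watcher challenges fraud within ~7 days",
    dataAvailability: "data is available to construct the fraud proof",
  },
};

// Print each system's assumptions for an apples-to-apples review.
for (const [system, assumptions] of Object.entries(trustMaps)) {
  console.log(system);
  for (const [vector, assumption] of Object.entries(assumptions)) {
    console.log(`  ${vector}: ${assumption}`);
  }
}
```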

PREREQUISITES FOR EVALUATION

A foundational guide to understanding the security models and trade-offs behind zero-knowledge proofs, validity proofs, and optimistic rollups.

Evaluating a proof system begins with identifying its core trust assumptions. These are the conditions under which the system's security guarantees hold. The primary spectrum ranges from cryptographic trust (relying on mathematical hardness assumptions) to economic trust (relying on financial incentives and game theory). For example, a zk-SNARK like Groth16 relies on a trusted setup ceremony, introducing a one-time cryptographic trust assumption that the toxic waste was discarded. In contrast, STARKs require no trusted setup, trading this for larger proof sizes while relying only on hash-based assumptions that are plausibly post-quantum secure.

The second key prerequisite is understanding the prover and verifier models. You must assess who can generate proofs and who can verify them. Is the prover a centralized sequencer, a decentralized network of nodes, or the user's own device? Verification cost, especially on-chain in gas, is a critical constraint. A Validity Proof system, as used by zkRollups like zkSync Era or StarkNet, requires a verifier smart contract on Layer 1 to check a succinct proof. The trust assumption shifts from the liveness of honest challengers (as in Optimistic Rollups) to the correctness of the cryptographic verification code and the underlying elliptic curve.
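Since on-chain verification cost is a constraint, one way to measure it is to read the gas used by a past proof-verification transaction. The sketch below uses ethers v6; the RPC URL and transaction hash are placeholders.

```typescript
import { JsonRpcProvider } from "ethers";

// Placeholders: point these at a real endpoint and a real verification tx.
const RPC_URL = "https://example-rpc.invalid";
const VERIFY_TX_HASH = "0x...";

async function verificationGasCost() {
  const provider = new JsonRpcProvider(RPC_URL);
  const receipt = await provider.getTransactionReceipt(VERIFY_TX_HASH);
  if (!receipt) throw new Error("receipt not found");

  // gasUsed is the actual cost of running the verifier on L1 for this proof.
  console.log(`gas used: ${receipt.gasUsed.toString()}`);
  console.log(`status: ${receipt.status === 1 ? "success" : "reverted"}`);
}

verificationGasCost().catch(console.error);
```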

Finally, you must map the system's liveness and data availability guarantees. A proof is meaningless if the required data to reconstruct state is unavailable. Optimistic rollups like Arbitrum and Optimism assume at least one honest actor will publish transaction data to L1 and challenge invalid state roots within a challenge window (e.g., 7 days). This is an economic and liveness assumption. zkRollups typically post state diffs and proofs to L1, making data availability a direct function of L1 security. Evaluating this involves checking whether data is posted on-chain (Ethereum calldata, blobs) or relies on a separate data availability committee with its own trust model.

CORE TRUST CONCEPTS

Evaluating a proof system begins by mapping its trust assumptions. All systems require some initial belief, but the nature and longevity of that trust differ. The primary categories are: trusted setups (requiring a one-time ceremony), economic security (relying on financial penalties), and cryptographic assumptions (depending on unproven mathematical hardness). For example, zk-SNARKs like Groth16 often need a trusted setup, where compromised secret parameters could allow infinite forgery. In contrast, zk-STARKs and some recursive SNARKs avoid this, trading it for other assumptions like collision-resistant hashes.

The trusted setup is a critical vector. Assess its ceremony: Was it a 1-of-N multi-party computation (MPC) where only one participant needs to be honest? Popular ceremonies, like the Perpetual Powers of Tau used by projects such as Tornado Cash, have hundreds of participants, significantly reducing risk. However, the trust is not eliminated; it is distributed. You must also consider the setup's toxicity: Can the secret be used to create false proofs after the fact? For universal setups, the risk is amortized across many applications, while application-specific setups create isolated risk.

Next, examine the cryptographic assumptions underlying the proof's security. zk-SNARKs typically rely on knowledge-of-exponent assumptions and pairing-friendly elliptic curves. zk-STARKs rely on collision-resistant hash functions (treated as secure) and are post-quantum resistant. Validity proofs (like STARKs) provide cryptographic certainty of execution correctness, while fraud proofs (used in optimistic rollups) offer economic security with a challenge period. Ask: What happens if the underlying math is broken? Validity proof systems may fail completely, while fraud proof systems have a window for social coordination.

Finally, evaluate the system architecture and operator dependency. A proof is only as trustworthy as its prover network. Is there a single prover (centralization risk) or a permissionless prover market? For instance, a zk-rollup with a single, closed-source prover introduces a trusted operator assumption. Also, consider data availability: Can the underlying data be withheld, making verification impossible? A proof system using Ethereum calldata for data availability inherits Ethereum's security, while one using a separate data availability committee adds another trust layer. The goal is to minimize and clearly identify all external dependencies.

TRUST ASSUMPTIONS

Proof System Trust Model Comparison

A comparison of the core cryptographic and operational trust assumptions across different proof system architectures.

| Trust Assumption | zk-SNARKs (Groth16) | zk-STARKs | Plonk / Halo 2 | Bulletproofs |
| --- | --- | --- | --- | --- |
| Trusted Setup (Ceremony) | Required (circuit-specific) | None | Universal (Plonk) / None (Halo 2) | None |
| Quantum Resistance | No | Yes | No | No |
| Transparent Setup | No | Yes | No (Plonk) / Yes (Halo 2) | Yes |
| Security Level | 128-bit (EC) | 100-bit (Hash) | 128-bit (EC) | 128-bit (EC) |
| Verification Time | < 10 ms | ~40 ms | < 50 ms | ~500 ms |
| Proof Size | ~200 bytes | ~45-200 KB | ~400 bytes | ~1-2 KB |
| Recursive Proof Support | Limited | Yes | Yes (native in Halo 2) | Limited |
| Development Maturity | High | Medium | High | Medium |

PROOF SYSTEM SECURITY

Step-by-Step Evaluation Framework

A systematic approach to assessing the trust assumptions, security guarantees, and economic incentives of different proof systems like ZK-SNARKs, ZK-STARKs, and Optimistic Rollups.

SECURITY FOUNDATIONS

Analyzing Trusted Setup Ceremonies

Trusted setup ceremonies generate the initial parameters for cryptographic proof systems like zk-SNARKs. This guide explains how to evaluate the trust assumptions and security of these critical events.

A trusted setup ceremony is a multi-party computation (MPC) protocol that generates the common reference string (CRS) or structured reference string (SRS) for zero-knowledge proof systems like Groth16, Plonk, or Marlin. The core security assumption is that if at least one participant is honest and destroys their secret randomness (the "toxic waste"), the final parameters are secure. If all participants collude, they could forge proofs. Major examples include the original Zcash Sprout ceremony ("The Ceremony"), the Perpetual Powers of Tau (used by Tornado Cash and others), and specific application setups like Semaphore's.

Evaluating a ceremony's security involves analyzing its participant set and protocol design. Key questions include: Who were the participants (e.g., renowned cryptographers, community members, hardware devices)? Was their identity and contribution process verifiable? The ceremony protocol itself must guarantee that the final output is a product of all contributions and that a single honest participant suffices. Protocols like the Powers of Tau use a sequential chain of contributions, where each participant receives the previous output, adds their secret, and passes it on. The security relies on the computational infeasibility of recovering a participant's secret exponent (a discrete logarithm) from the published group elements.

To audit a ceremony, you must verify the transcripts and attestations. For a Powers of Tau setup, this means: 1) Checking that each contribution's public transcript (published on GitHub or IPFS) correctly chains from the previous contribution's output to the new output after their contribution. 2) Verifying the attestation, which is a digital signature or proof of knowledge (like a BLS signature or a hash of a random number) that the participant performed the computation. Tools like snarkjs provide commands (snarkjs powersoftau verify) to cryptographically verify the consistency of contribution transcripts.
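As a minimal sketch of automating that check, the script below shells out to the snarkjs CLI command mentioned above. The .ptau filename is a placeholder for whichever transcript you downloaded, and snarkjs must already be installed in the project.

```typescript
import { execFileSync } from "node:child_process";

// Placeholder: path to the powers-of-tau transcript you want to audit.
const PTAU_FILE = "powersOfTau28_hez_final.ptau";

// Runs `snarkjs powersoftau verify <file>`, which re-checks the chain of
// contributions in the transcript and prints each contribution hash.
function verifyPtauTranscript(file: string): void {
  const output = execFileSync("npx", ["snarkjs", "powersoftau", "verify", file], {
    encoding: "utf8",
  });
  console.log(output);
}

verifyPtauTranscript(PTAU_FILE);
```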

The trust model varies. A 1-of-N trust model (used in sequential ceremonies) requires only one honest participant; equivalently, an adversary must corrupt every contributor to break security. Some designs instead use a t-of-N threshold, where security holds as long as fewer than t participants are corrupted. The ceremony's public verifiability is crucial: can anyone cryptographically verify the entire process post-hoc? While the final parameters can be verified, the ability to verify each contribution's correctness without access to the secret data is a key property of well-designed MPC protocols.

For developers integrating a zk-SNARK circuit, you must decide whether to use a universal (circuit-agnostic) setup like the Perpetual Powers of Tau or a specific (circuit-dependent) setup. Universal setups are more reusable and have undergone more public scrutiny (e.g., the ongoing Powers of Tau ceremony). Specific setups, required by Groth16, demand a new ceremony for each circuit, concentrating risk. Always use the most widely adopted, battle-tested parameters for your proof system, and document the exact source and contribution hash of the SRS you are using in your application's audit report.
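To document the exact SRS you ship, a small script can record its SHA-256 digest alongside its source URL. This is a generic Node.js sketch; the filename is a placeholder for the transcript your circuit was compiled against.

```typescript
import { createHash } from "node:crypto";
import { createReadStream } from "node:fs";

// Placeholder: the .ptau / SRS file your circuit was compiled against.
const SRS_FILE = "powersOfTau28_hez_final_15.ptau";

// Streams the file so large transcripts do not need to fit in memory.
async function sha256OfFile(path: string): Promise<string> {
  const hash = createHash("sha256");
  for await (const chunk of createReadStream(path)) {
    hash.update(chunk as Buffer);
  }
  return hash.digest("hex");
}

sha256OfFile(SRS_FILE).then((digest) => {
  // Record this digest (and the download URL) in your audit documentation.
  console.log(`${SRS_FILE} sha256=${digest}`);
});
```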

COMPARISON

Trust Assumption Risk Matrix

Evaluating security and liveness risks across different proof system architectures.

| Trust Assumption / Risk Factor | Validity Proofs (ZK-Rollups) | Optimistic Rollups | Proof of Stake (Sidechains) |
| --- | --- | --- | --- |
| Cryptographic Assumption | Trusted setup required for some schemes (e.g., Groth16) | None | None |
| Economic Security Assumption | None | 7-day challenge period with bonded validators | Slashing of staked capital |
| Liveness Assumption | Requires at least 1 honest prover | Requires at least 1 honest watcher | Requires >2/3 honest stake |
| L1 Data Availability Requirement | High (full data posted to L1) | High (full data posted to L1) | Low (data posted to its own chain) |
| Withdrawal Finality | ~10 minutes | 7 days | Instant to ~3 hours |
| Upgrade Control | Multi-sig or DAO (code is law risk) | Multi-sig or DAO (code is law risk) | Validator voting (governance risk) |
| Prover Centralization Risk | High (specialized hardware required) | Low (anyone can submit a fraud proof) | Medium (delegated to validators) |

CODE ANALYSIS FOR TRUST SIGNALS

A technical guide for developers to audit the trust models of cryptographic proof systems like zk-SNARKs and zk-STARKs by analyzing their underlying code and assumptions.

Trust assumptions define the security bedrock of a proof system. Unlike trustless models, many systems rely on trusted setups, honest majorities, or cryptographic assumptions that must be verified. For a developer, evaluating these assumptions starts with the codebase. Key questions include: Is there a trusted setup ceremony (e.g., Groth16) requiring secure parameter generation? Does the system assume a majority of nodes are honest, as in optimistic rollups? Or is it based on a falsifiable cryptographic assumption like the hardness of the Discrete Logarithm Problem? Identifying this core model is the first step in a security audit.

To analyze a trusted setup, examine the ceremony implementation and the toxic waste disposal mechanism. For a Powers of Tau ceremony used in zk-SNARKs, review how the structured reference string (SRS) is generated. The code must ensure the secret randomness used to create the SRS is securely deleted or distributed via multi-party computation (MPC). Look for audit reports on the ceremony and verify that it included enough independent participants to make full collusion implausible. Systems like Semaphore and Tornado Cash have publicly audited ceremonies, setting a benchmark for verifiable trust.

Next, scrutinize the cryptographic primitives and their security assumptions. A zk-SNARK prover might rely on pairing-friendly elliptic curves like BN254 or BLS12-381. You must evaluate the curve's effective security level against discrete logarithm attacks, which for BN254 is approximately 110 bits, below the 128-bit level usually targeted for long-term security. Check the library implementations (e.g., arkworks, libsnark) for side-channel resistance and correctness. Furthermore, assess the knowledge-of-exponent assumption or random oracle model usage; these are standard but introduce theoretical trust vectors that should be documented and understood.

For validity proofs like zk-STARKs, which are post-quantum secure and transparent (no trusted setup), the trust shifts to the soundness of the interactive oracle proof (IOP) and the collision-resistant hash function (e.g., SHA-3, Rescue). Analyze the STARK protocol's code to confirm the constraint system correctly represents the computation and that the FRI protocol is implemented with a sufficient security level (e.g., enough FRI queries for roughly 100 bits of security). The StarkEx and StarkNet codebases provide real-world examples of this trust model in production.

Finally, map the system's trust boundaries to its economic and game-theoretic incentives. An optimistic rollup like Arbitrum or Optimism assumes at least one honest validator will challenge invalid state roots within a challenge period (e.g., 7 days). Review the fraud proof submission logic and the bonding/slashing mechanisms in the smart contract code. The system's security collapses if the economic cost of corruption is lower than the potential profit. This requires analyzing the contract's challenge functions, withdrawal delays, and the capital required to become a validator.
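A starting point for that review is to read the dispute parameters directly from the rollup's L1 contract. The sketch below uses ethers v6 with a hypothetical ABI fragment; the getter names (confirmPeriodBlocks, baseStake) follow Arbitrum-style rollup contracts but must be checked against the actual contract you are auditing, and the endpoint and address are placeholders.

```typescript
import { Contract, JsonRpcProvider } from "ethers";

// Hypothetical ABI fragment: getter names follow Arbitrum-style rollup
// contracts but must be confirmed against the contract under review.
const ROLLUP_ABI = [
  "function confirmPeriodBlocks() view returns (uint64)",
  "function baseStake() view returns (uint256)",
];

// Placeholders: supply a real RPC endpoint and the rollup's L1 address.
const RPC_URL = "https://example-rpc.invalid";
const ROLLUP_ADDRESS = "0x0000000000000000000000000000000000000000";

async function readDisputeParameters() {
  const provider = new JsonRpcProvider(RPC_URL);
  const rollup = new Contract(ROLLUP_ADDRESS, ROLLUP_ABI, provider);

  // The challenge window (in L1 blocks) and the bond a validator must post.
  const confirmPeriodBlocks: bigint = await rollup.getFunction("confirmPeriodBlocks")();
  const baseStake: bigint = await rollup.getFunction("baseStake")();

  console.log({ confirmPeriodBlocks, baseStake });
}

readDisputeParameters().catch(console.error);
```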

Practical evaluation involves cloning the repository, examining key modules (setup, proving, verification), and running tests. Use static analysis tools to check for common vulnerabilities. Always reference the system's formal security audit reports from firms like Trail of Bits or OpenZeppelin. By methodically auditing the code for its trusted setup procedures, cryptographic dependencies, and incentive structures, developers can quantify the trust assumptions and make informed decisions about integrating a proof system into their application.

PROOF SYSTEMS

Frequently Asked Questions

Common developer questions about evaluating the security and trust models of different cryptographic proof systems used in blockchain scaling and privacy.

What is the difference between validity proofs and fraud proofs?

Validity proofs (e.g., zk-SNARKs, zk-STARKs) are cryptographic proofs that a state transition is correct. The prover generates a proof that is verified on-chain. Validity is assured by cryptography rather than active monitoring, and finality follows proof verification.

Fraud proofs (used in optimistic rollups like Arbitrum) assume transactions are valid by default. A network of watchers must challenge invalid state transitions within a dispute window (e.g., 7 days). Security relies on a 1-of-N honesty assumption: at least one honest actor must submit a fraud proof in time.

The core trust difference is cryptographic certainty versus economic/game-theoretic security with a latency trade-off.

PRACTICAL GUIDE

Conclusion and Next Steps

Evaluating trust assumptions is a critical skill for developers and researchers working with zero-knowledge proofs, optimistic rollups, and other cryptographic systems.

This guide has outlined a framework for systematically analyzing trust assumptions in proof systems. The core questions remain: Who do you trust? and What happens if they fail? For any system, you must map out the trust model across its key components: the prover, verifier, data availability layer, upgrade mechanism, and underlying cryptographic primitives. A system like a zk-rollup with on-chain data availability and a trustless verifier smart contract presents a different risk profile than an optimistic rollup relying on a 7-day challenge window and honest watchers.

To apply this framework, start by auditing the system's documentation and code. For a zk-rollup, examine the verifier contract on Etherscan to confirm it validates proofs on-chain. For an optimistic rollup, review the challenge protocol and the economic incentives for watchers. Tools like Dune Analytics for tracking sequencer activity or L2BEAT for risk assessments provide valuable data. Remember, trust is often economic (e.g., staked capital) or game-theoretic (e.g., fraud proofs) rather than purely cryptographic.
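A quick first check along these lines is to confirm that the verifier address actually holds contract code and to pin its code hash for later comparison. This ethers v6 sketch uses a placeholder endpoint and address.

```typescript
import { JsonRpcProvider, keccak256 } from "ethers";

// Placeholders: your RPC endpoint and the rollup's proof verifier address.
const RPC_URL = "https://example-rpc.invalid";
const VERIFIER_ADDRESS = "0x0000000000000000000000000000000000000000";

async function pinVerifierCodeHash() {
  const provider = new JsonRpcProvider(RPC_URL);
  const code = await provider.getCode(VERIFIER_ADDRESS);

  if (code === "0x") throw new Error("no contract deployed at this address");

  // Pin the runtime bytecode hash so later upgrades are detectable.
  console.log(`verifier code hash: ${keccak256(code)}`);
}

pinVerifierCodeHash().catch(console.error);
```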

Your evaluation should produce a clear threat model. Document scenarios like: a malicious sequencer censoring transactions, a prover submitting a fraudulent zk-SNARK, or the failure of a trusted setup ceremony. For each, note the required attacker capability (e.g., "control of the sequencer key") and the system's mitigation (e.g., "users can force-include transactions via L1"). This exercise reveals the system's true security floor and its operational dependencies.
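The threat-model entries described here can be captured in a structured form so they stay auditable over time. The shape below is purely illustrative, reusing the scenarios from this paragraph.

```typescript
// Illustrative structure for documenting threat-model entries.
interface ThreatScenario {
  scenario: string;            // what goes wrong
  attackerCapability: string;  // what the attacker must control
  mitigation: string;          // how the system (or the user) responds
  residualRisk: "low" | "medium" | "high";
}

const threatModel: ThreatScenario[] = [
  {
    scenario: "Sequencer censors user transactions",
    attackerCapability: "control of the sequencer key",
    mitigation: "users can force-include transactions via L1",
    residualRisk: "medium",
  },
  {
    scenario: "Prover submits a fraudulent zk-SNARK",
    attackerCapability: "compromise of the trusted setup or a verifier bug",
    mitigation: "audited verifier contract; transparent setup where possible",
    residualRisk: "low",
  },
];

console.table(threatModel);
```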

The landscape of proof systems is rapidly evolving. ZKPs are moving towards transparent setups (STARKs) and recursive proving. Optimistic systems are refining dispute resolution with interactive fraud proofs. Hybrid models are emerging. To stay current, follow research from teams like Ethereum Foundation, zkSync, Arbitrum, and StarkWare. Read their technical blogs and audit reports. Engage with the community on forums like EthResearch to discuss new trust trade-offs.

As a next step, implement a practical verification. Write a simple script to fetch and verify a proof's status on-chain for a zk-rollup, or monitor the contract state of an optimistic rollup's challenge window. Explore libraries like snarkjs for Groth16 proofs or circom for circuit design to understand the prover's role firsthand. Hands-on experience is the best way to internalize the abstract concepts of trust minimization.
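As one concrete version of that exercise, the sketch below verifies a Groth16 proof locally with snarkjs. The JSON filenames are whatever your own circuit toolchain produced; snarkjs ships without TypeScript types, so the import is treated as untyped.

```typescript
// snarkjs has no bundled TypeScript types; treat the module as untyped.
// @ts-ignore
import * as snarkjs from "snarkjs";
import { readFileSync } from "node:fs";

// Placeholders: artifacts produced by your own circuit/proving pipeline.
const vkey = JSON.parse(readFileSync("verification_key.json", "utf8"));
const proof = JSON.parse(readFileSync("proof.json", "utf8"));
const publicSignals = JSON.parse(readFileSync("public.json", "utf8"));

async function main() {
  // Checks the proof against the verification key and public inputs.
  const ok = await snarkjs.groth16.verify(vkey, publicSignals, proof);
  console.log(ok ? "proof valid" : "proof INVALID");
  process.exit(ok ? 0 : 1);
}

main();
```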

Ultimately, there is no "trustless" system, only systems that minimize and redistribute trust in auditable ways. Your goal is to make informed decisions by quantifying residual trust. Whether you are integrating a bridge, choosing an L2, or designing a new protocol, a rigorous evaluation of trust assumptions is your most essential security practice.
