
How to Define Privacy Trust Assumptions

A technical guide for developers and architects on systematically identifying, modeling, and documenting the trust assumptions in privacy-preserving systems like ZK-SNARKs, MPC, and mixnets.
Chainscore © 2026
FOUNDATIONS

Introduction to Trust Assumptions in Privacy Systems

A guide to defining and evaluating the trust models that underpin cryptographic privacy protocols, from zero-knowledge proofs to trusted execution environments.

A trust assumption is a foundational belief about the behavior of participants or components within a system. In privacy-preserving systems, these assumptions define who or what you must trust to keep your data confidential. For example, you might trust that a cryptographic algorithm is mathematically sound, that a hardware enclave is not compromised, or that a committee of validators will not collude. Clearly defining these assumptions is the first step in evaluating the security and practical viability of any privacy solution.

Different privacy technologies rely on fundamentally different trust models. Zero-knowledge proofs (ZKPs), like the zk-SNARKs used in Zcash and zk-rollups, typically assume only the correctness of the underlying cryptographic primitives and a secure initial setup (the trusted setup). In contrast, a trusted execution environment (TEE) like Intel SGX assumes the hardware manufacturer has not embedded a backdoor and that the enclave's remote attestation process is secure. Multi-party computation (MPC) protocols distribute trust across multiple parties, assuming that no more than a bounded fraction of participants (e.g., fewer than 1/3) collude.

To define trust assumptions for your application, start by mapping the system's architecture. Identify the trusted computing base (TCB)—the set of all components whose failure breaches security. For a blockchain mixer using a ZKP, the TCB includes the prover software, the verifying smart contract, and the ZK circuit logic. For a confidential cloud service using TEEs, the TCB expands to include the CPU hardware, the hypervisor, and the attestation service. A smaller, more auditable TCB generally indicates a stronger security model.
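The TCB mapping described above can be made concrete as a simple enumeration. A minimal Python sketch, using the two example systems from the text (the component lists are illustrative, not exhaustive):

```python
# Enumerate the trusted computing base (TCB) of two example designs.
# A smaller, more auditable TCB generally indicates a stronger model.
tcbs = {
    "ZKP-based mixer": {
        "prover software",
        "verifying smart contract",
        "ZK circuit logic",
    },
    "TEE cloud service": {
        "enclave application",
        "CPU hardware",
        "hypervisor",
        "attestation service",
    },
}

for system, components in tcbs.items():
    print(f"{system}: {len(components)} trusted components")
    for component in sorted(components):
        print(f"  - {component}")
```

Even this trivial listing makes the size difference between the two trust surfaces visible at a glance.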

Quantifying trust involves assessing the cost of corruption. How expensive would it be for an adversary to break your assumption? Corrupting a 1-of-1 trusted operator is often cheap. Corrupting a decentralized network of 1,000 nodes requiring a 2/3 majority is vastly harder and more costly. This is why systems like Threshold Signature Schemes (TSS) for wallet security or distributed validators for Ethereum staking explicitly design for Byzantine Fault Tolerance (BFT), defining clear thresholds (e.g., f < n/3) for adversarial control.
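As a toy illustration of this cost calculus, the sketch below compares corrupting a single operator with exceeding the f < n/3 fault bound of a 1,000-node network. The per-node cost is a made-up placeholder, not a real-world estimate:

```python
def max_byzantine_faults(n: int) -> int:
    """Largest number of faulty nodes f tolerated under the BFT bound f < n/3."""
    return (n - 1) // 3

def corruption_cost(n: int, cost_per_node: int) -> int:
    """Illustrative cost to break the assumption: corrupt f_max + 1 nodes."""
    return (max_byzantine_faults(n) + 1) * cost_per_node

# 1-of-1 trusted operator vs. a 1,000-node BFT network, at a placeholder
# cost of 50,000 (arbitrary units) per corrupted node.
print(corruption_cost(1, 50_000))      # -> 50000
print(corruption_cost(1_000, 50_000))  # -> 16700000
```

The cost grows with the number of nodes an attacker must corrupt, which is exactly the property threshold designs rely on.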

Ultimately, defining trust assumptions forces a shift from vague promises of 'privacy' to a concrete security audit. It answers the critical question: Privacy from whom, and under what conditions? Documenting these assumptions—whether in a protocol whitepaper, a smart contract's NatSpec comments, or a system design doc—is essential for developer adoption and user informed consent. The goal is not to eliminate trust, but to minimize and rationalize it, creating systems whose security failures are predictable and contained.

PREREQUISITES AND SCOPE

How to Define Privacy Trust Assumptions

Before implementing privacy-preserving systems, you must explicitly define the trust model. This guide explains how to map out the adversarial assumptions for your specific application.

A trust assumption is a formal statement about which parties in a system you must trust to behave honestly for a protocol's security and privacy guarantees to hold. In zero-knowledge proof systems like zk-SNARKs, the primary trust assumption often revolves around the trusted setup ceremony. For a mixer like Tornado Cash, the assumption is that at least one participant in its Powers of Tau ceremony was honest and destroyed their toxic waste. Failing to define this leaves users with unclear security expectations.

To define your assumptions, start by listing all system components and actors: users, provers, verifiers, relayers, sequencers, and data availability committees. For each, ask: What could they learn? What could they manipulate? A privacy leak occurs when an unauthorized party learns private data, while an integrity failure happens when they can corrupt the system's state. For instance, in a private voting dApp, you might assume the zk-SNARK verifier is honest but the frontend could be malicious.
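A minimal sketch of this enumeration step for the private voting example, assuming a hypothetical set of actors (names and data items are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    sees: list            # what it could learn (privacy question)
    can_manipulate: list  # what it could corrupt (integrity question)

actors = [
    Actor("user client", ["vote plaintext", "zk witness"], []),
    Actor("frontend", ["vote plaintext before proving"], ["displayed results"]),
    Actor("relayer", ["sender IP address", "proof bytes"], ["transaction ordering"]),
    Actor("zk-SNARK verifier contract", ["public inputs", "proof"], []),
]

for actor in actors:
    leak = "potential privacy leak" if actor.sees else "no visibility"
    integrity = "potential integrity failure" if actor.can_manipulate else "read-only"
    print(f"{actor.name}: {leak}; {integrity}")
```

Filling in these two lists per actor forces the "what could they learn, what could they manipulate" questions to be answered explicitly.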

Next, categorize your trust model. A trust-minimized system, like using a validity rollup with on-chain verification, assumes only cryptographic security. A federated model, common in cross-chain bridges, assumes a majority of a known validator set is honest. Document these assumptions in your protocol's specification or whitepaper. The NIST Privacy Framework provides a structured approach for identifying and managing privacy risk, which can be adapted for Web3.

Your assumptions directly dictate the cryptographic primitives you select. Requiring post-quantum security? You might assume lattice-based cryptography is secure, avoiding schemes vulnerable to Shor's algorithm. Need to hide transaction amounts but not participants? You might trust a relayer to not deanonymize IP addresses, opting for a simpler confidential transaction scheme instead of a full anonymity set pool. Always benchmark against real threats: assume block explorers, RPC providers, and wallet extensions are potential adversaries.

Finally, scope your analysis by defining what is out of bounds. You might explicitly state that your protocol does not protect against network-level attacks like traffic analysis, or that privacy guarantees only hold if users generate proofs locally with uncompromised hardware. This clarity prevents false promises and focuses development on mitigations for in-scope threats, such as integrating with Tor or using zkLogin for anonymous authentication.

CORE CONCEPTS

How to Define Privacy Trust Assumptions

A practical guide to identifying and articulating the trust models that underpin privacy-preserving systems in Web3.

In privacy engineering, a trust assumption is a specific condition or entity you must rely on for the system's privacy guarantees to hold. Defining these assumptions is the first step in evaluating any privacy tool, from mixers to zero-knowledge proofs. Unlike security, which often focuses on preventing malicious actions, privacy analysis asks: Who must behave correctly for my data to remain confidential? A clear trust model answers this by explicitly listing the parties, their expected behavior, and the consequences if they deviate. This framework is essential for developers choosing infrastructure and users assessing risk.

Trust assumptions fall into broad categories. A cryptographic trust assumption relies on the mathematical hardness of problems, like the difficulty of factoring the product of two large primes in RSA. This is considered the strongest form. An economic trust assumption depends on rational actors being financially incentivized to behave honestly, such as stakers in a proof-of-stake system. An adversarial trust assumption specifies what powers an attacker is assumed not to have, like controlling more than one-third of validators. Finally, operational trust involves relying on specific entities to run software correctly and not collude.

To define assumptions for your application, start by mapping the data flow. Identify every component that handles sensitive data: the user's client, a relayer, a prover network, a blockchain, and any third-party oracles. For each component, ask: What is its role? What data does it see? Could it learn, leak, or censor information? Document the expected honest behavior. For instance, a zk-SNARK prover is trusted to generate a valid proof without learning the witness, relying on cryptographic assumptions. A relayer in a private transaction system might be trusted not to front-run or deanonymize users, an operational assumption.

Contrast this with real-world examples. Using Tornado Cash requires trusting that no one has compromised the underlying zk-SNARK circuit (cryptographic) and that the relayer (if used) does not log IP addresses (operational). A secret voting protocol on a blockchain may assume that at least one validator in the committee is honest and will not collude to reveal votes (adversarial). Threshold decryption schemes assume that the required number of key holders will not conspire together. Explicitly stating these points of failure transforms vague promises of 'privacy' into a measurable security model.

Once defined, these assumptions guide implementation and risk communication. Developers can architect systems to minimize trust, perhaps by using multiple, non-colluding relayers or implementing trustless validity proofs. Documentation should clearly list assumptions so users can make informed decisions. The goal is not to achieve 'trustlessness'—which is often impossible for privacy—but to achieve minimal, explicit, and understandable trust. This rigorous approach is what separates robust privacy systems from those that offer only the illusion of confidentiality.

PRIVACY ENGINEERING

Categories of Trust Assumptions

Privacy in blockchain systems is not absolute; it's defined by what you must trust. This framework breaks down the core assumptions behind different privacy solutions.


Economic / Game-Theoretic

Privacy is enforced by economic incentives that make attacks prohibitively expensive or irrational, rather than cryptographically impossible.

Mechanisms include:

  • Staking and slashing for committee members.
  • Bonding curves and liquidity requirements that penalize bad actors.
  • Delayed withdrawal schemes (as in optimistic rollups) that leave a challenge window in which fraud can be proven before funds exit.

Limitation: Assumes rational actors and sufficient stake value, which can fail under extreme market conditions or targeted attacks.

ARCHITECTURAL MODELS

Trust Assumption Comparison for Privacy Protocols

A comparison of core trust assumptions across different privacy protocol architectures, focusing on cryptographic security, data availability, and operational integrity.

Protocols compared: ZK-Rollups (e.g., Aztec); Mixers / CoinJoin (e.g., Tornado Cash); Trusted Execution Environments (e.g., Oasis); Fully Homomorphic Encryption (e.g., Fhenix).

Trust dimensions compared:

  • Cryptographic Security
  • Data Availability on Public Ledger
  • Requires Honest Majority of Operators
  • Prover/Sequencer Censorship Risk
  • Trusted Setup Ceremony Required
  • Hardware Integrity Assumption
  • On-Chain Privacy Leakage (e.g., amounts): ZK-Rollups 0.1%; Mixers 5-10%; TEEs < 0.01%; FHE < 0.01%
  • Exit/Withdrawal Delay: ZK-Rollups ~20 min; Mixers ~1-24 hours; TEEs ~5 min; FHE ~15 min

PRIVACY ENGINEERING

Step-by-Step Framework for Defining Assumptions

A systematic method for identifying, documenting, and validating the foundational assumptions behind privacy-preserving systems like zero-knowledge proofs and confidential smart contracts.

Defining trust assumptions is the critical first step in designing any privacy system. These are the foundational beliefs about the environment, participants, and technology that must hold true for the system's security and privacy guarantees to be valid. For example, a zk-SNARK application might assume the underlying elliptic curve is cryptographically secure, or a private voting dApp might assume a majority of participants are honest. A poorly defined or unrealistic assumption is a single point of failure that can compromise the entire protocol. This framework provides a structured approach to surface and scrutinize these hidden dependencies before they become vulnerabilities.

Step 1: System Decomposition and Boundary Mapping

Start by exhaustively listing all system components and actors. For a private DeFi pool, this includes the user's wallet, the prover, the verifier smart contract, the underlying blockchain, any relayers, and data oracles. Map the data flows and trust boundaries between them. Document every point where data is ingested, computed, or revealed. This map reveals the trust surface: every interface and component that must be analyzed for assumptions. Tools like threat modeling diagrams or data flow graphs (DFDs) are useful here.

Step 2: Categorical Assumption Elicitation

For each component and boundary, brainstorm assumptions across key categories:

  • Cryptographic: e.g., "The Poseidon hash function is collision-resistant."
  • Computational: e.g., "The prover has sufficient resources to generate a proof in under 30 seconds."
  • Behavioral: e.g., "At least 2/3 of committee members will not collude."
  • Liveness/Network: e.g., "Transactions are included in a block within a 24-hour window."
  • External Dependency: e.g., "The price feed oracle provides accurate data."

Document each assumption in a structured format: Component: [X] assumes [Y] for property [Z].
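The structured format above lends itself to a small record type. A minimal sketch (field names and the example entry are my own, not from any standard):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Assumption:
    component: str
    assumes: str
    property: str
    category: str  # cryptographic | computational | behavioral | liveness | external

    def render(self) -> str:
        return (
            f"Component: {self.component} assumes {self.assumes} "
            f"for property {self.property}."
        )

entry = Assumption(
    component="verifier contract",
    assumes="the Poseidon hash function is collision-resistant",
    property="nullifier uniqueness",
    category="cryptographic",
)
print(entry.render())
```

Keeping entries in a structured form like this makes them easy to sort, filter by category, and export into audit documentation.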

Step 3: Risk Assessment and Prioritization

Evaluate each assumption for its impact (severity if broken) and plausibility (likelihood of breaking). A high-impact, high-plausibility assumption is a critical risk. For instance, assuming a trusted third party for key generation (high impact) in a permissionless network (high plausibility of failure) is a severe flaw. Use a simple matrix to prioritize. This step forces explicit trade-offs; you may accept a low-plausibility cryptographic assumption but must mitigate a high-plausibility behavioral one through economic incentives or cryptographic means.
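Such a matrix can be as simple as sorting by impact times plausibility. A sketch with illustrative entries scored on a 1-3 scale:

```python
# (name, impact, plausibility), each scored 1 (low) to 3 (high).
assumptions = [
    ("trusted third party generates keys honestly", 3, 3),
    ("BN254 discrete log is computationally hard", 3, 1),
    ("relayer remains online and responsive", 1, 2),
]

# Critical risks (high impact AND high plausibility) surface first.
ranked = sorted(assumptions, key=lambda a: a[1] * a[2], reverse=True)
for name, impact, plausibility in ranked:
    print(f"score {impact * plausibility}: {name}")
```

The trusted-third-party assumption scores 9 and lands at the top, matching the intuition in the text that it is a severe flaw.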

Step 4: Formalization and Documentation

Translate high-priority assumptions into testable or auditable statements. A vague assumption like "the network is secure" becomes "the underlying blockchain maintains an honest supermajority (at least 2/3 of validators) and finalizes blocks with a reorg depth of at most 1." Document these in your protocol specification or smart contract comments. For developers, use require() statements or custom errors in Solidity to enforce assumptions where possible, e.g., require(block.timestamp < deadline, "Assumption: Liveness failure");. This creates a clear link between design and implementation.
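The same enforce-it-explicitly idea applies to off-chain components. A hedged Python sketch of a liveness guard (the function and its deadline semantics are hypothetical):

```python
import time

def submit_proof(proof: bytes, deadline: float) -> None:
    # Documented assumption (liveness): the proof reaches the verifier
    # before the deadline. Fail loudly if it no longer holds.
    if time.time() >= deadline:
        raise RuntimeError("Assumption violated: liveness window missed")
    # ... actual submission logic would go here ...

submit_proof(b"proof-bytes", deadline=time.time() + 60)  # inside the window
```

Raising a named error that quotes the assumption keeps logs and incident reports traceable back to the documented trust model.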

Step 5: Validation and Continuous Review

Assumptions are not static. Establish validation methods: cryptographic assumptions are validated by peer review and battle-testing (e.g., the security of the BN254 curve), while liveness assumptions are validated through network simulations. Integrate assumption review into your development lifecycle. When upgrading a library like circom or halo2, re-evaluate all dependent assumptions. A living document, such as an ASSUMPTIONS.md file in your repository, ensures the team and auditors have a single source of truth for the system's trust model.
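One way to keep such a living document honest is to generate it from structured records checked into the repository. A minimal sketch with illustrative entries:

```python
# Illustrative entries only; a real trust model would be far more complete.
entries = [
    ("Cryptographic", "The BN254 curve provides the security level the proof system assumes."),
    ("Behavioral", "At least 2/3 of committee members do not collude."),
    ("Liveness", "Transactions are included in a block within a 24-hour window."),
]

lines = ["# Trust Assumptions", ""]
for category, text in entries:
    lines.append(f"- **{category}**: {text}")

markdown = "\n".join(lines)
print(markdown)
# To persist: pathlib.Path("ASSUMPTIONS.md").write_text(markdown + "\n")
```

Generating the file from data means the rendered document can never drift from the records the team actually reviews.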

PRIVACY ENGINEERING

Common Pitfalls and Implicit Assumptions

Privacy in Web3 is not a binary state. It's defined by the trust assumptions you make about your system's components and participants. Failing to define these explicitly is the most common source of security failures.


Data Availability & Data Hiding

Confusing data availability (is the data published?) with data hiding (is the data encrypted?) is a fatal error. Validiums (e.g., some StarkEx applications) post only validity proofs to Ethereum, keeping transaction data off-chain. This assumes the Data Availability Committee (DAC) remains honest and available. If the DAC censors or fails, funds can be frozen. Understanding the trade-off between validiums, volitions, and rollups is essential.

TVL in Validiums: ~$1B
PRIVACY ENGINEERING

Documenting and Communicating Trust Assumptions

A systematic approach to defining, documenting, and communicating the trust models that underpin privacy-preserving systems, from zero-knowledge proofs to secure multi-party computation.

In privacy engineering, a trust assumption is a specific condition about the behavior of participants or the security of components that must hold for the system's privacy guarantees to be valid. Unlike security, which often aims for unconditional guarantees, privacy technologies frequently rely on explicit, bounded assumptions. For example, a zk-SNARK proof system may assume the initial trusted setup ceremony was performed honestly, while a secure multi-party computation (MPC) protocol might assume a threshold of participants (e.g., 2-of-3) does not collude. Clearly documenting these assumptions is the first step toward a rigorous security model and informed user consent.

To define trust assumptions, start by mapping your system's threat model. Identify all participants (users, provers, verifiers, coordinators), data flows, and cryptographic primitives. For each component, ask: what must we believe about its operation? Common categories include:

  • Computational assumptions (e.g., the hardness of discrete logarithms)
  • Behavioral assumptions (e.g., at least one party in a 2-party computation is honest)
  • Setup assumptions (e.g., a trusted common reference string)
  • Infrastructure assumptions (e.g., the availability of a decentralized oracle)

Tools like the ZK Security Reference from 0xPARC provide excellent templates for this structured analysis.

Documentation should be explicit and accessible. Avoid vague statements like "the system is trustless." Instead, use a structured format. For a zk-rollup, you might document: Assumption 1 (Data Availability): The sequencer will post all transaction data to L1. Assumption 2 (Validity Proof): The zk-SNARK proof system is sound, assuming the elliptic curve pairing is secure. Incorporate these into your protocol's whitepaper, smart contract comments (e.g., using NatSpec for Ethereum), and public documentation. Projects like Aztec Network and Zcash set a high standard with their explicit and detailed trust documentation.

Communication is critical for user adoption and auditability. Developers and auditors need technical specifications, while users need clear, concise summaries. Consider creating a Trust Spectrum visualization for your application, showing which components are trust-minimized (e.g., on-chain verification) versus trust-required (e.g., off-chain data provision). For decentralized applications, publish your assumptions in the project repository and reference them in user-facing materials. Transparency about trust trade-offs, as seen in Tornado Cash's documentation of its relayers, builds credibility and helps users make informed decisions about the privacy risks they are accepting.

COMPARATIVE ANALYSIS

Example: Trust Assumptions by Protocol

Zero-Knowledge Proofs & Relayers

Tornado Cash relies on a trusted setup ceremony for its zk-SNARK circuits, where the original participants are assumed to have destroyed their toxic waste (the secret parameters). If compromised, anonymity could be broken. Users must also trust that the relayer network does not censor transactions and that the smart contract logic is correctly implemented and immutable.

Key Assumptions:

  • The trusted setup (Perpetual Powers of Tau) was performed correctly and the toxic waste was discarded.
  • The zk-SNARK circuits are cryptographically sound.
  • Relayers are permissionless and non-censoring.
  • The Ethereum blockchain provides finality and correct execution.
PRIVACY TRUST ASSUMPTIONS

Frequently Asked Questions

Common questions from developers on defining and implementing privacy trust assumptions for on-chain applications.

Privacy trust assumptions define the entities you must trust to keep your data confidential in a system. In Web3, where data is often public by default, these assumptions are the foundation of any privacy-preserving application. For example, using a zero-knowledge proof like zk-SNARKs requires you to trust the initial setup (trusted setup ceremony) and the correctness of the proving circuit. A system using secure multi-party computation (MPC) might require you to trust that a majority of its nodes are honest. Clearly documenting these assumptions is critical because it allows users and auditors to evaluate the system's threat model and understand the trade-offs between privacy, security, and decentralization. Ignoring them can lead to catastrophic data leaks, as seen in early mixer designs that relied on single-server architectures.

PRACTICAL APPLICATION

Conclusion and Next Steps

Defining privacy trust assumptions is a foundational step for building secure and credible systems. This guide has outlined the core framework; here's how to apply it and where to go from here.

To implement this framework, start by explicitly documenting the trust assumptions for your specific use case. For a private voting dApp, your document might state: "Users trust that the zk-SNARK prover (running client-side) correctly generates proofs without leaking input data, and that the smart contract on-chain verifier is implemented correctly." This clarity is crucial for user education and security audits. Tools like Nucleo Studio or Veridise can help formally model and verify these assumptions.

Next, continuously test and monitor these assumptions. For cryptographic primitives, this means tracking the cryptanalysis of algorithms like Poseidon hash or BLS signatures. For operational trust, implement monitoring for your relayer's uptime and transaction success rate. In decentralized systems, consider using fault-tolerant relay networks or proof aggregation services to reduce single points of failure. The goal is to minimize the trust surface area and have mitigation plans for when assumptions are challenged.
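Operational monitoring can start very simply. A sketch that flags when a relayer's recent transaction success rate drops below a placeholder threshold (the sample data and threshold are illustrative):

```python
def success_rate(results: list) -> float:
    """Fraction of recent relayer transactions that succeeded."""
    return sum(results) / len(results) if results else 0.0

ALERT_THRESHOLD = 0.90  # placeholder service-level target
recent = [True, True, False, True, True, True, True, True, False, True]

rate = success_rate(recent)
print(f"relayer success rate: {rate:.0%}")
if rate < ALERT_THRESHOLD:
    print("ALERT: operational trust assumption under stress")
```

Even a check this basic turns an implicit operational assumption into a measured, alertable quantity.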

Your next steps should involve deeper research into the specific privacy technologies you're using. If you're working with zk-SNARKs, study the trusted setup ceremony (e.g., Perpetual Powers of Tau) and understand the implications of its security. For MPC protocols, examine the specific adversarial models (malicious vs. honest-but-curious). Engage with the research communities on forums like the ZKProof Forum and review audits from firms like Trail of Bits or OpenZeppelin.

Finally, remember that privacy is not a binary state but a spectrum of guarantees. Your trust assumptions define where on that spectrum your application operates. By methodically defining, documenting, and validating these assumptions, you build systems that are not only more secure but also more transparent about their limitations—a critical factor for adoption in the trust-minimized world of Web3.
