How to Define Trust Assumptions Early

A step-by-step guide for developers to formally define the trust model, threat boundaries, and security guarantees for a ZK-SNARK system before writing any code.
FOUNDATION

Introduction: Why Formalize Trust First?

Before writing a line of code, successful Web3 projects explicitly define their trust model. This guide explains how to articulate and document these critical assumptions.

In traditional software, trust is often implicit—we trust the cloud provider, the database, and the operating system. Web3 systems invert this model. Here, trust must be explicitly defined, minimized, and verifiable. Formalizing trust assumptions early forces you to answer foundational questions: Who or what must users trust for the system to function correctly? Is it a multisig council, a decentralized validator set, a specific oracle network, or the security of an underlying blockchain like Ethereum? Documenting this creates a single source of truth for your project's security posture.

Skipping this step leads to critical vulnerabilities. A common failure pattern is discovering trust dependencies mid-development, such as realizing a DeFi protocol inadvertently relies on a centralized price feed or that a cross-chain bridge's security depends entirely on a 3-of-5 multisig held by the founding team. By mapping trust explicitly, you identify these centralization vectors and technical debt before they are baked into the architecture. This process is akin to threat modeling in cybersecurity and is the first step in building systems that are resilient by design.

The output of this exercise is a Trust Assumptions Document. This living document should list each external dependency (e.g., Chainlink oracles for ETH/USD price), trusted entities (e.g., the Safe{DAO} multisig for protocol upgrades), and cryptographic assumptions (e.g., the security of the Keccak-256 hash function). For each item, note the consequence of failure and any mitigation plans. This clarity is invaluable for security audits, user communication, and guiding the team's technical priorities toward minimizing these trust requirements over time.
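
One way to keep such a document actionable is to store it in a machine-readable form next to the code. The TypeScript sketch below is a minimal, hypothetical shape for these entries, reusing the examples above; the field names and sample values are illustrative, not a standard schema.

```typescript
// Illustrative shape for a Trust Assumptions Document entry.
// Field names and sample entries are hypothetical, not a standard schema.
type TrustAssumption = {
  dependency: string;            // external dependency or trusted entity
  assumption: string;            // what must hold for the system to stay safe
  consequenceOfFailure: string;  // what breaks if the assumption is violated
  mitigation: string;            // current or planned mitigation
};

const trustAssumptions: TrustAssumption[] = [
  {
    dependency: "Chainlink ETH/USD price feed",
    assumption: "Reports prices within the documented deviation and heartbeat",
    consequenceOfFailure: "Mispriced collateral and incorrect liquidations",
    mitigation: "Cross-check against a secondary feed; pause on large deviation",
  },
  {
    dependency: "Protocol upgrade multisig",
    assumption: "A quorum of signers will not approve a malicious upgrade",
    consequenceOfFailure: "Arbitrary code execution over user funds",
    mitigation: "Timelock on upgrades; publicly known signer set",
  },
  {
    dependency: "Keccak-256 hash function",
    assumption: "Collision and preimage resistance hold",
    consequenceOfFailure: "Forged commitments and storage proofs",
    mitigation: "None practical; tracked as a cryptographic assumption",
  },
];
```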

PREREQUISITES AND CORE CONCEPTS

How to Define Trust Assumptions Early

Before deploying a cross-chain application, explicitly mapping its trust model is a critical security and design exercise.

A trust assumption defines the entities or mechanisms you rely on for a system's security and correctness. In Web3, these range from cryptographic proofs and economic incentives to multisig committees and centralized operators. For a cross-chain bridge, trust assumptions answer: Who or what can steal, freeze, or censor user funds? Failing to document these creates architectural blind spots and exposes users to unquantified risks. The process begins by listing all external dependencies in your stack—from the underlying blockchains and their consensus to the oracle networks, relayers, and governance contracts you integrate.

To analyze an assumption, categorize it by type and failure mode. Cryptographic trust relies on mathematical proofs (e.g., zk-SNARKs in zkBridge). Economic trust depends on financial penalties (e.g., staked collateral in optimistic bridges). Federated/multisig trust vests power in a known set of entities. Centralized trust places control with a single operator. For each, assess the consequences of failure: is it a liveness issue (transactions halt) or a safety issue (funds are lost)? A bridge using a 5-of-9 multisig for attestations has clear, quantifiable trust: it fails if 5 signers collude maliciously.

Document these assumptions in a structured trust matrix. For each component (e.g., Light Client, Relayer Network, Message Queue), specify: the trust type, the trusted entities, the failure condition, and the impact on users. This exercise often reveals hidden centralization, such as an admin key that can upgrade a critical contract without delay. Publicly sharing this matrix, as projects like Chainlink (Data Feeds) and Across (Optimistic Bridge) do, builds transparency and allows the community to audit the security model. Tools like the L2Beat risk framework provide a template for this analysis.
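
A minimal sketch of such a matrix, assuming the bridge components named above; the trust-type categories follow the previous paragraph, and the specific entries are illustrative rather than drawn from any real bridge.

```typescript
// Hypothetical trust matrix for a cross-chain bridge, one row per component.
type TrustType = "cryptographic" | "economic" | "federated" | "centralized";
type FailureMode = "liveness" | "safety";

type TrustMatrixRow = {
  component: string;
  trustType: TrustType;
  trustedEntities: string;
  failureCondition: string;
  failureMode: FailureMode;
  userImpact: string;
};

const bridgeTrustMatrix: TrustMatrixRow[] = [
  {
    component: "Light Client",
    trustType: "cryptographic",
    trustedEntities: "Source-chain consensus and its validator set",
    failureCondition: "An invalid header is accepted (e.g., long-range attack)",
    failureMode: "safety",
    userImpact: "Fraudulent withdrawals can be finalized",
  },
  {
    component: "Relayer Network",
    trustType: "economic",
    trustedEntities: "Permissionless relayers",
    failureCondition: "All relayers stop submitting messages",
    failureMode: "liveness",
    userImpact: "Transfers stall until a relayer comes back online",
  },
  {
    component: "Message Queue",
    trustType: "centralized",
    trustedEntities: "Queue operator",
    failureCondition: "Operator censors or reorders messages",
    failureMode: "liveness",
    userImpact: "Messages delayed or dropped until a fallback path is used",
  },
];
```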

Finally, use this analysis to guide protocol design and user communication. If your application requires trusting a federation, design incentives and slashing to deter malice. If you rely on a light client, ensure it is battle-tested and has a robust fraud-proof window. Communicate assumptions clearly to end-users; a dApp interface might display a "Security Profile" badge indicating, for instance, "Trusted: 8-of-12 Federation." Defining trust early is not about eliminating it, which is often impossible, but about minimizing, diversifying, and making it explicit, turning a nebulous risk into a managed parameter.

SECURITY FOUNDATION

Step 1: Identify System Actors and Trust Boundaries

The first step in designing a secure Web3 system is to explicitly map out all participants and the trust relationships between them. This creates a clear threat model.

Every decentralized application involves multiple system actors—human users, smart contracts, oracles, and external services. An actor is any entity that can initiate a transaction, sign a message, or hold assets. For a DeFi lending protocol, key actors include: the borrower, the liquidity provider, the liquidator bot, the price oracle, and the protocol governance contract. Listing them forces you to consider whose actions can impact system state and funds.

With actors defined, you must establish trust boundaries. This is the explicit set of assumptions about which actors you trust, and for what. A critical boundary exists between on-chain and off-chain components. You might trust your own smart contract's logic, but you must never inherently trust an external price feed or a user-provided input. Document assumptions like: "We assume the Chainlink oracle network is honest for price data," or "We assume governance will not pass a malicious proposal." These are your system's acknowledged risks.
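
To make this step concrete, the sketch below enumerates the lending-protocol actors listed earlier and records, for each, whether it sits inside the trust boundary and for what; the scope strings are hypothetical examples.

```typescript
// Illustrative actor inventory and trust boundary for a DeFi lending protocol.
// Actors mirror the examples in the text; boundary notes are hypothetical.
enum Actor {
  Borrower = "borrower",
  LiquidityProvider = "liquidity-provider",
  LiquidatorBot = "liquidator-bot",
  PriceOracle = "price-oracle",
  Governance = "governance-contract",
}

type BoundaryEntry = {
  actor: Actor;
  trusted: boolean;   // does this actor sit inside the trust boundary?
  trustedFor: string; // scope of trust, if any
};

const trustBoundary: BoundaryEntry[] = [
  { actor: Actor.Borrower, trusted: false, trustedFor: "Nothing; all inputs validated on-chain" },
  { actor: Actor.LiquidityProvider, trusted: false, trustedFor: "Nothing; deposits handled by contract logic" },
  { actor: Actor.LiquidatorBot, trusted: false, trustedFor: "Liveness only; assumed to act when profitable" },
  { actor: Actor.PriceOracle, trusted: true, trustedFor: "Honest, timely ETH/USD price data" },
  { actor: Actor.Governance, trusted: true, trustedFor: "Will not pass a malicious parameter change" },
];
```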

This process directly informs your security design. If your dApp submits transactions through a rollup sequencer, the trust boundary includes the assumption that the sequencer does not censor your users. If you rely on a multi-signature wallet for treasury management, you are trusting a quorum of key holders. Writing these down is not an admission of weakness; it is the prerequisite for mitigating those risks through design, such as adding escape hatches, using decentralized oracles, or implementing timelocks on governance power.

TRUST ASSUMPTIONS

Common Trust Models in ZK Applications

Zero-knowledge proofs shift, but do not eliminate, trust. Understanding the underlying trust models is critical for secure application design.

SECURITY FUNDAMENTALS

Step 2: Define the Adversarial Model and Capabilities

Before writing a line of code, you must explicitly define who your system defends against and what powers they possess. This formalizes your trust assumptions.

An adversarial model is a formal specification of potential attackers. It answers: Who are we defending against? Common models include the honest-but-curious adversary (passive observer), the malicious adversary (active attacker), and the Byzantine adversary (arbitrary, potentially irrational failure). For smart contracts, you typically assume a malicious adversary with on-chain capabilities, meaning they can send any transaction, front-run, and manipulate mempool order.

Next, define the adversary's capabilities. What resources and access do they control? Key dimensions include:

  • Computational power: Can they perform expensive computations or brute-force attacks?
  • Financial resources: What is the maximum capital they can expend in an attack (their cost of corruption)?
  • Network position: Can they censor, delay, or reorder transactions?
  • Collusion: Can multiple adversaries coordinate?

On a proof-of-work chain, assume an adversary can rent hash power for a 51% attack; on Ethereum's proof-of-stake mainnet, the analogous attack requires acquiring a large share of staked ETH, a cost that is prohibitive in practice.
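
One way to record these dimensions is as a small capability profile per adversary. The sketch below is illustrative; the budget figure, collusion bound, and naming are assumptions, not recommendations.

```typescript
// Illustrative adversary capability profile (names and values are hypothetical).
type AdversaryModel = {
  name: string;
  computational: string;      // bounded computation assumed
  financialBudgetUsd: number; // maximum capital the adversary can deploy
  networkPosition: string;    // censorship / reordering abilities
  collusion: string;          // which parties can coordinate
};

const bridgeAdversary: AdversaryModel = {
  name: "Rational, well-funded attacker",
  computational: "Polynomially bounded; standard cryptographic assumptions hold",
  financialBudgetUsd: 50_000_000,
  networkPosition: "Can front-run and reorder transactions within a block, but not censor indefinitely",
  collusion: "Up to 4 of the 9 bridge signers may collude",
};
```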

This process forces you to document trust assumptions. For example, a bridge's security might assume: 1) The underlying blockchain (e.g., Ethereum) is live and secure. 2) A majority of the validator set is honest. 3) Oracles report price data correctly. Writing these down reveals dependencies. A system claiming to be "trustless" often still trusts the security of its host chain and the correctness of its cryptographic primitives.

Use this model to guide design. If your adversary can censor transactions, you might need a commit-reveal scheme or timelocks. If they can manipulate oracle prices, you need circuit breakers or time-weighted average prices (TWAPs). The Chainlink documentation on oracle security provides a concrete example of defining oracle-specific adversarial capabilities and mitigations.
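
As an example of the TWAP mitigation, the sketch below computes a simple time-weighted average over recorded price observations; it is a generic illustration, not the algorithm used by any particular oracle or DEX.

```typescript
// Minimal time-weighted average price (TWAP) over recorded observations.
// Each observation's price is assumed to hold from its timestamp until the next one.
type Observation = { timestampSec: number; price: number };

function twap(observations: Observation[], nowSec: number): number {
  if (observations.length === 0) throw new Error("no observations");
  let weightedSum = 0;
  let totalTime = 0;
  for (let i = 0; i < observations.length; i++) {
    const start = observations[i].timestampSec;
    const end = i + 1 < observations.length ? observations[i + 1].timestampSec : nowSec;
    const dt = Math.max(end - start, 0);
    weightedSum += observations[i].price * dt;
    totalTime += dt;
  }
  return totalTime === 0 ? observations[observations.length - 1].price : weightedSum / totalTime;
}

// Example: a short manipulated spike near the end has limited effect on the average.
const price = twap(
  [
    { timestampSec: 0, price: 2000 },
    { timestampSec: 1700, price: 9000 }, // spike in the final 100 seconds
  ],
  1800
);
// price ≈ 2389, far below the spiked value of 9000
```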

Finally, quantify security. State the economic cost to break your system. For a Proof-of-Stake bridge, this might be "An attacker must control >33% of the staked value to finalize a fraudulent block." This creates a measurable security threshold. Your adversarial model is not static; revisit it when upgrading contracts or integrating new components, as new attack vectors may emerge.
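
A back-of-the-envelope version of that threshold for a hypothetical proof-of-stake bridge; the stake size and token price are placeholders to be replaced with live data.

```typescript
// Hypothetical cost-of-corruption estimate for a proof-of-stake bridge.
// All figures are placeholders; substitute the live validator set and token price.
const totalStakedTokens = 10_000_000; // tokens securing the bridge
const tokenPriceUsd = 25;
const corruptionThreshold = 1 / 3;    // stake fraction needed to finalize a fraudulent block

const costOfCorruptionUsd = Math.round(totalStakedTokens * tokenPriceUsd * corruptionThreshold);
console.log(`Attack requires roughly $${costOfCorruptionUsd.toLocaleString()} of stake`);
// Attack requires roughly $83,333,333 of stake
```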

TRUST MODEL ANALYSIS

Comparing Trust Assumptions for ZK-SNARK Components

A comparison of the core trust assumptions required by ZK proving systems with different setup models, from parameter generation to verification.

Systems compared: Trusted Setup (Groth16), Transparent Setup (STARKs), Universal Setup (Plonk).

Initial Parameter Generation
  • Groth16: Requires a one-time, multi-party ceremony. Trust is distributed among participants.
  • STARKs: No ceremony required. Parameters are generated transparently from public randomness.
  • Plonk: Requires a one-time, universal ceremony. The resulting SRS is reusable.

Trusted Third Parties

Post-Setup Toxic Waste
  • Groth16: Must be securely discarded by all ceremony participants. Failure compromises all proofs.
  • STARKs: Not applicable. No toxic waste is generated.
  • Plonk: Must be securely discarded. Compromise allows forgery of proofs for any circuit using the SRS.

Prover Honesty Assumption
  • Groth16: Prover must follow the protocol. A malicious prover cannot create a false proof.
  • STARKs: Prover must follow the protocol. Soundness relies on cryptographic assumptions (e.g., collision-resistant hashes).
  • Plonk: Prover must follow the protocol. A malicious prover cannot create a false proof.

Verifier Computation
  • Groth16: Constant time (~3 pairings).
  • STARKs: Polylogarithmic in witness size.
  • Plonk: Constant time (~3 pairings).

Proof Size
  • Groth16: ~200-300 bytes
  • STARKs: ~40-200 KB
  • Plonk: ~400-800 bytes

Recursive Proof Support

TRUST ASSUMPTIONS

Step 3: Document Security and Privacy Guarantees

Clearly defining trust assumptions is a foundational step in designing secure and transparent blockchain systems. This guide explains how to formally document these assumptions to establish clear security boundaries and user expectations.

A trust assumption is a statement about what components, actors, or cryptographic properties your system relies on to be secure. Explicitly documenting these is not an admission of weakness but a critical practice for transparency and risk assessment. For example, a decentralized application (dApp) might assume that a specific oracle network provides accurate price feeds, or that the underlying blockchain's consensus (e.g., Ethereum's Proof-of-Stake) is secure against 51% attacks. Without this clarity, users cannot accurately gauge their exposure.

Start by cataloging every external dependency and its failure modes. For a cross-chain bridge, key assumptions might include: the security of the connected chains, the honesty of a multisig committee or validator set, the correctness of light client verifications, and the integrity of the underlying cryptographic primitives. Document each assumption with its impact radius—what breaks if it fails—and its realism. Is it economically incentivized? Is it backed by substantial stake or audited code?

Formalize assumptions using a structured template. For each, specify: 1) The Trusted Component (e.g., Chainlink Oracles), 2) The Assumption (e.g., "provides tamper-proof data"), 3) Consequence of Violation (e.g., "incorrect liquidation triggers"), and 4) Mitigations/Risk Score (e.g., "multi-source validation, medium risk"). This creates a living document that guides development, informs audits, and serves as a clear disclosure for users. Reference frameworks like the Smart Contract Security Field Guide for best practices.

Integrate these documented assumptions into your system's architecture and user-facing materials. Smart contracts can include comments referencing the assumption document (e.g., // Trust: Relies on Chainlink ETH/USD feed). Whitepapers and documentation should have a dedicated "Security Model" section. This practice transforms vague promises into auditable claims, building credibility with developers and users who can now make informed decisions about interacting with your protocol.
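
As a sketch of that integration, the snippet below renders template entries into the kind of "Security Model" section described above; the entry shape follows the four-field template, and the sample entry reuses the Chainlink example with purely illustrative values.

```typescript
// Sketch: render documented trust assumptions into a "Security Model" docs section.
type RiskScore = "low" | "medium" | "high";

type TemplateEntry = {
  trustedComponent: string;
  assumption: string;
  consequenceOfViolation: string;
  mitigations: string;
  riskScore: RiskScore;
};

function renderSecurityModel(entries: TemplateEntry[]): string {
  const lines = ["## Security Model", ""];
  for (const e of entries) {
    lines.push(`### ${e.trustedComponent}`);
    lines.push(`- Assumption: ${e.assumption}`);
    lines.push(`- Consequence of violation: ${e.consequenceOfViolation}`);
    lines.push(`- Mitigations: ${e.mitigations}`);
    lines.push(`- Risk score: ${e.riskScore}`);
    lines.push("");
  }
  return lines.join("\n");
}

// Sample entry taken from the template example above (values illustrative).
const securityModelDoc = renderSecurityModel([
  {
    trustedComponent: "Chainlink Oracles",
    assumption: "Provides tamper-proof ETH/USD data",
    consequenceOfViolation: "Incorrect liquidation triggers",
    mitigations: "Multi-source validation",
    riskScore: "medium",
  },
]);
```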

IMPLEMENTATION

Step 4: Validate and Test the Trust Model

After defining your trust assumptions, you must rigorously test them against real-world scenarios and adversarial conditions. This step moves from theory to practice, ensuring your system's security model is sound.

Validation begins with formalizing your assumptions into testable statements. For a bridge, this could be "the majority of the multisig signers are honest" or "the light client can detect invalid state transitions." Write these down as explicit preconditions in your system's documentation and, where possible, in your code as require statements or invariant checks. This creates a clear benchmark for what "correct" behavior looks like and what constitutes a failure of the trust model.

Next, design targeted test scenarios that probe the boundaries of each assumption. Don't just test for happy paths. For a system trusting a committee, simulate scenarios like a member going offline, a malicious majority attempting to sign a fraudulent transaction, or a Sybil attack on the selection process. Use tools like Foundry for smart contracts or custom simulation frameworks for off-chain components to model these attacks. The goal is to confirm the system fails safely—for example, by halting—when its core trust assumptions are violated.
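
The paragraph above points to Foundry for on-chain tests; since the examples in this guide use TypeScript, the sketch below instead models the committee assumption in a standalone simulation of a hypothetical 5-of-9 attestation rule, checking that a forgery succeeds only when the assumption is actually violated.

```typescript
// Standalone simulation of the "honest majority of signers" assumption for a 5-of-9 committee.
// The signing model is deliberately simplified (no real cryptography); it only exercises the quorum logic.
type Signer = { id: number; malicious: boolean };

function attestationAccepted(signers: Signer[], threshold: number, messageIsFraudulent: boolean): boolean {
  // Honest signers refuse to sign fraudulent messages; malicious ones sign anything.
  const signatures = signers.filter((s) => !messageIsFraudulent || s.malicious).length;
  return signatures >= threshold;
}

function makeCommittee(maliciousCount: number, size = 9): Signer[] {
  return Array.from({ length: size }, (_, i) => ({ id: i, malicious: i < maliciousCount }));
}

// Assumption holds: with 4 malicious signers a fraudulent message is rejected.
console.assert(attestationAccepted(makeCommittee(4), 5, true) === false, "4-of-9 should not forge");

// Assumption violated: with 5 colluding signers the trust model fails (safety failure).
console.assert(attestationAccepted(makeCommittee(5), 5, true) === true, "5-of-9 collusion forges");

// Liveness check: an honest committee still signs valid messages.
console.assert(attestationAccepted(makeCommittee(0), 5, false) === true, "honest quorum signs valid messages");
```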

Incorporate failure testing into your CI/CD pipeline. Automate tests that intentionally break trust assumptions to verify your system's response. For a protocol relying on a price oracle, this means testing with stale data or a manipulated price feed. Document every test case and its expected outcome. This practice, inspired by Chaos Engineering, ensures resilience is continuously validated as the codebase evolves. Tools like Ganache or Anvil can fork mainnet state to create realistic test environments for these scenarios.

Finally, compare your model against real-world data. If your assumption is that "validators will be sufficiently decentralized," analyze the current validator set for geographic, client, and provider concentration. Use blockchain explorers like Etherscan or dedicated dashboards (e.g., for the Beacon Chain) to gather data. Quantitative analysis often reveals gaps between theoretical assumptions and practical deployment, prompting necessary adjustments to slashing conditions, incentive structures, or governance parameters before mainnet launch.
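
One simple quantitative check of that kind, computed over a hypothetical breakdown of validators by hosting provider; in practice the counts would come from an explorer or dashboard export.

```typescript
// Concentration analysis over a hypothetical validator-by-provider breakdown.
// Reports the largest single share and the Herfindahl-Hirschman Index (HHI).
const validatorsByProvider: Record<string, number> = {
  "provider-a": 4200,
  "provider-b": 2600,
  "provider-c": 1500,
  "self-hosted": 1700,
};

const total = Object.values(validatorsByProvider).reduce((a, b) => a + b, 0);
const shares = Object.entries(validatorsByProvider).map(([name, count]) => ({
  name,
  share: count / total,
}));

const largest = shares.reduce((a, b) => (a.share > b.share ? a : b));
const hhi = shares.reduce((sum, s) => sum + s.share * s.share, 0);

console.log(`Largest provider: ${largest.name} at ${(largest.share * 100).toFixed(1)}%`);
console.log(`HHI: ${hhi.toFixed(3)} (closer to 1 means more concentrated)`);
```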

FOR DEVELOPERS

Frequently Asked Questions on Trust Assumptions

Clear trust assumptions are the foundation of secure blockchain applications. This FAQ addresses common developer questions on identifying, documenting, and managing these critical dependencies.

What is a trust assumption?

A trust assumption is a specific, external dependency that your system or smart contract relies on to function correctly and securely. It is a component you must trust because you cannot verify its behavior on-chain or within your code's logic. Common examples include:

  • Oracles: Trusting that the data feed (e.g., Chainlink) provides accurate price information.
  • Bridge Validators: Trusting that the multi-signature committee or proof system of a cross-chain bridge (like Wormhole or LayerZero) is honest.
  • Governance: Trusting that token holders will vote rationally in a DAO.
  • Upgradeable Contracts: Trusting that the contract owner (a multi-sig or DAO) will not deploy a malicious upgrade.

Defining these explicitly is the first step in threat modeling and security auditing.

ARCHITECTURAL BEST PRACTICES

Conclusion and Implementation Next Steps

Defining trust assumptions is not a one-time task but a foundational practice for secure and resilient system design. This final section outlines how to operationalize this framework.

The process begins by formally documenting your trust assumptions in a trust matrix. This is a living document that maps each component of your system—oracles, bridges, sequencers, governance—to its specific trust model. For a DeFi protocol, this might list the price feed oracle (trusted for data integrity), the underlying blockchain (trusted for liveness and consensus), and the multisig admin (trusted for upgrade execution). This explicit documentation serves as a single source of truth for your team and auditors, clarifying the security perimeter of your application.

With assumptions documented, the next step is to implement continuous validation. This involves writing automated checks, often as part of your CI/CD pipeline or off-chain monitors, that verify these assumptions hold. For example, you could write a script that periodically checks an oracle's on-chain heartbeat, verifies a bridge's attestation signatures, or monitors a sequencer's submission latency. Tools like Chainlink Functions or Gelato can automate these off-chain checks and trigger on-chain actions if a violation is detected, moving from passive assumption to active enforcement.
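
A minimal off-chain monitor of the first kind, assuming ethers.js (v6) and a Chainlink-style aggregator interface; the feed address, RPC endpoint, and staleness threshold are placeholders, and the alerting action is left as a stub.

```typescript
// Off-chain monitor: periodically verify that the "oracle heartbeat" assumption still holds.
// Feed address, RPC URL, and threshold are placeholders; wire the alert into your own tooling.
import { Contract, JsonRpcProvider } from "ethers";

const AGGREGATOR_ABI = [
  "function latestRoundData() view returns (uint80 roundId, int256 answer, uint256 startedAt, uint256 updatedAt, uint80 answeredInRound)",
];

const FEED_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder aggregator address
const MAX_STALENESS_SECONDS = 3_600n; // assumption: the feed updates at least hourly

async function checkHeartbeat(): Promise<void> {
  const provider = new JsonRpcProvider("https://rpc.example.org"); // placeholder RPC endpoint
  const feed = new Contract(FEED_ADDRESS, AGGREGATOR_ABI, provider);

  const [, , , updatedAt] = await feed.latestRoundData();
  const ageSeconds = BigInt(Math.floor(Date.now() / 1000)) - updatedAt;

  if (ageSeconds > MAX_STALENESS_SECONDS) {
    // Trust assumption violated: escalate (page on-call, pause the protocol, etc.).
    console.error(`Price feed stale: last update ${ageSeconds}s ago`);
  } else {
    console.log(`Price feed healthy: last update ${ageSeconds}s ago`);
  }
}

checkHeartbeat().catch(console.error);
```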

Finally, integrate trust assumption analysis into your development lifecycle. During the design phase, use the framework to evaluate third-party dependencies. In code reviews, explicitly question the trust model of new integrations. For incident response, your trust matrix becomes the first place to look for potential failure points. By making the consideration of trust explicit and procedural, you shift security left and build systems that are not only functional but also verifiably secure against their defined risk profile.
