
How to Design a Proof Aggregation Layer for Scalability

A technical guide on architecting a layer to batch and verify proofs from millions of DePIN devices using cryptographic primitives and economic incentives.
DESIGN PATTERNS

A technical guide to architecting a proof aggregation layer that enhances throughput and reduces costs for Decentralized Physical Infrastructure Networks (DePINs).

A proof aggregation layer is a critical middleware component for scaling DePINs. Its primary function is to collect, batch, and verify multiple off-chain proofs—such as Proof of Location, Proof of Bandwidth, or Proof of Compute—into a single, succinct cryptographic proof on-chain. This design addresses the fundamental scalability bottleneck where submitting and verifying each proof individually is prohibitively expensive in terms of gas costs and blockchain throughput. By aggregating proofs, the layer dramatically reduces the on-chain footprint, enabling networks with thousands of devices to operate economically on L1s like Ethereum or high-throughput L2s.

The core architectural decision involves choosing an aggregation scheme. ZK-SNARKs (like Groth16, Plonk) offer small, constant-sized proofs with fast verification but require a trusted setup and complex circuit generation. ZK-STARKs provide post-quantum security and transparency (no trusted setup) but generate larger proofs. For many DePIN use cases, BLS signature aggregation is a simpler, highly efficient starting point for combining attestations from multiple devices. The choice depends on the proof type, required trust assumptions, and the verification cost profile of your target chain. A common pattern is to use a rollup-style sequencer that batches proofs off-chain and periodically submits a commitment to the aggregated state.

Implementing the layer requires a clear data flow. Devices generate raw proofs and send them to an aggregator node. This node runs the aggregation algorithm (e.g., creating a SNARK proof that attests to the validity of all input proofs) and submits the final aggregated proof to a verifier contract on-chain. The smart contract contains the verification key for the aggregation scheme and validates the single proof in one transaction. Key design considerations include the aggregation window (time-based or proof-count-based), incentives for aggregators, slashing conditions for malicious aggregation, and data availability for dispute resolution. Projects like Espresso Systems for rollup sequencing or Automata Network's 2FAir attestation service provide real-world references for these patterns.
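
To make this data flow concrete, here is a minimal sketch in Go of the aggregator node's batching loop, flushing either when a proof-count threshold is hit or when the aggregation window expires. The DeviceProof type and the aggregate and submitOnChain stubs are placeholders for whatever proof format, aggregation scheme, and chain client your system actually uses.

go
// Skeleton of an off-chain aggregator: batch device proofs by count or time window.
// DeviceProof, aggregate, and submitOnChain are illustrative placeholders.
package aggregator

import (
    "context"
    "log"
    "time"
)

type DeviceProof struct {
    DeviceID string
    Payload  []byte // raw proof bytes, e.g. a BLS-signed attestation
}

func aggregate(batch []DeviceProof) []byte { return nil } // stub: BLS aggregation or SNARK proving
func submitOnChain(aggProof []byte) error  { return nil } // stub: send tx to the verifier contract

func RunAggregator(ctx context.Context, proofs <-chan DeviceProof, maxBatch int, window time.Duration) {
    batch := make([]DeviceProof, 0, maxBatch)
    ticker := time.NewTicker(window)
    defer ticker.Stop()

    flush := func() {
        if len(batch) == 0 {
            return
        }
        if err := submitOnChain(aggregate(batch)); err != nil {
            log.Printf("submission failed, keeping batch for next window: %v", err)
            return
        }
        batch = batch[:0]
    }

    for {
        select {
        case p := <-proofs:
            batch = append(batch, p)
            if len(batch) >= maxBatch { // proof-count-based window
                flush()
            }
        case <-ticker.C: // time-based window
            flush()
        case <-ctx.Done():
            flush()
            return
        }
    }
}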

For developers, a minimal proof-of-concept using BLS aggregation in Solidity and Go illustrates the pattern. The off-chain aggregator would use a library like herumi/bls to aggregate signatures from devices. The corresponding on-chain verifier, using a precompiled contract on Ethereum or a library like succinctlabs/telepathy for EVM chains, would then verify the single aggregated BLS signature. This reduces verification cost from O(n) to O(1). More complex SNARK-based aggregation can be prototyped with frameworks like Circom for circuit design and snarkjs for proof generation, linking to a verifier contract generated by snarkjs's zkey export functionality.

Optimizing the layer involves balancing latency, cost, and decentralization. A purely permissionless aggregation pool may introduce latency, while a designated sequencer creates a centralization point. Hybrid models, like a PoS-selected aggregator set, are common. Furthermore, leveraging Ethereum's EIP-4844 blob storage or Celestia for posting proof batch data can drastically reduce costs compared to calldata. The ultimate goal is to create a system where the cost per proof asymptotically approaches zero as batch size increases, enabling truly scalable DePIN economies that can support millions of devices without congesting the base layer.

PREREQUISITES AND CORE CONCEPTS

Before building a proof aggregation layer, you need a solid grasp of the underlying cryptographic primitives and the scaling problem it solves.

A proof aggregation layer is a cryptographic system that compresses multiple zero-knowledge proofs (ZKPs) or validity proofs into a single, succinct proof. This is a cornerstone technology for scaling blockchains, particularly in ZK-Rollup architectures. The core problem it addresses is verification cost: while a single ZK-SNARK proof like Groth16 can be verified in constant time, the gas cost of verifying thousands of individual proofs on-chain remains prohibitive. Aggregation amortizes this fixed cost across many proofs, dramatically reducing the per-transaction verification overhead on the base layer (L1).
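
The amortization argument behind this can be stated directly. As a simplified model (ignoring off-chain proving costs), let C_verify be the fixed gas cost of checking one aggregated proof on L1 and C_data the per-proof cost of publishing its public inputs; then for a batch of n proofs:

\text{cost per proof} \;=\; \frac{C_{\text{verify}} + n \cdot C_{\text{data}}}{n} \;=\; \frac{C_{\text{verify}}}{n} + C_{\text{data}} \;\longrightarrow\; C_{\text{data}} \quad (n \to \infty)

As the batch grows, the fixed verification cost is amortized away and the per-proof cost approaches the data-publication floor.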

To design such a system, you must understand the proof systems involved. Most aggregation layers work with SNARKs (Succinct Non-interactive Arguments of Knowledge) or STARKs (Scalable Transparent Arguments of Knowledge). Common choices include Groth16, Plonk, and Halo2 for SNARKs. The aggregator itself often uses a recursive proof circuit. This is a meta-circuit that takes the proofs you want to aggregate as its private inputs, verifies each one internally, and outputs a single new proof attesting to the validity of all the original proofs. Libraries like gnark, circom, and arkworks provide the tooling to construct these recursive verifier circuits.

The architectural design involves several key components. You need a prover service to generate the aggregated proofs, which is computationally intensive and often run off-chain. A smart contract acting as a verifier must be deployed on the destination chain (e.g., Ethereum) with the necessary elliptic curve pairing precompiles or arithmetic operations. A critical design decision is the aggregation strategy: will you aggregate proofs in a tree-like structure (e.g., using a Merkle tree of proofs) or in a sequential batch? Tree-based aggregation offers parallel proving but adds complexity. You must also define the data availability and state commitment scheme that the aggregated proofs are attesting to.

Security considerations are paramount. The entire system's security reduces to the soundness of the underlying cryptographic assumptions (e.g., the discrete log problem for pairing-based SNARKs) and the correct implementation of the recursive verifier circuit. A bug in this circuit is catastrophic. Furthermore, you must ensure the trusted setup (if required for your SNARK) is performed securely and that the prover and verifier use consistent system parameters and circuit definitions. The on-chain verifier contract must be rigorously audited, as it is the single point of failure for validating the entire batch of aggregated transactions.

In practice, you'll start by defining your circuit for a single transaction proof using a framework like Circom. Then, you design a wrapper circuit that instantiates multiple verifiers for these individual proofs. Using a tool like SnarkJS, you would generate the proving and verification keys for this aggregator circuit. The final step is to write the Solidity verifier contract, which will contain the fixed verification key and the logic to check the final aggregated proof. The efficiency of your design is measured by the gas cost of the verifier contract and the time/CPU resources required to generate the aggregated proof off-chain.

ARCHITECTURAL OVERVIEW AND DATA FLOW

A proof aggregation layer compresses multiple zero-knowledge proofs into a single, verifiable proof, drastically reducing on-chain verification costs and enabling scalable blockchain applications.

A proof aggregation layer is a critical component for scaling blockchains and Layer 2 networks. Its core function is to take many individual zero-knowledge proofs (ZKPs)—such as those from rollup transactions—and generate a single, succinct proof that attests to the validity of all of them. This aggregated proof is then submitted to a base layer (like Ethereum) for final verification. The primary benefit is a massive reduction in on-chain gas costs; verifying one proof that covers 10,000 transactions is orders of magnitude cheaper than verifying 10,000 proofs individually. This architecture is fundamental to ZK-rollups (e.g., zkSync, StarkNet) and decentralized provers like RISC Zero and Succinct.

The data flow through an aggregation layer follows a clear pipeline. First, client applications generate individual transaction proofs using a proving system like Groth16, PLONK, or STARK. These proofs are sent to an aggregator node. The aggregator does not verify each proof individually on-chain; instead, it runs an aggregation circuit—a specialized ZK program that takes the proofs as private inputs. This circuit cryptographically combines them, outputting a single, much smaller aggregated proof. This final proof is what gets published to the blockchain, where a verifier contract checks it against a single, fixed verification key.

Designing the aggregation circuit is the most complex engineering challenge. It must be universal, meaning it can accept proofs from different applications or rollup instances. Libraries like snarkjs and circom or frameworks like Halo2 are used to construct these circuits. The circuit's logic confirms that each input proof would have been validated by its original verification key. A critical optimization is recursive proof composition, where the aggregation circuit itself generates a ZKP of its own correct execution, creating a proof that proves other proofs are valid. This recursive property is what enables efficient scaling.

The aggregator node's architecture must prioritize decentralization and liveness. A single, centralized aggregator creates a trust bottleneck. A robust design uses a permissionless network of nodes that can challenge invalid aggregations via fraud proofs or a proof-of-stake slashing mechanism. Nodes are incentivized with fees from the applications whose proofs they aggregate. For high throughput, the system must efficiently batch proofs, which involves scheduling, proof queuing, and potentially using different aggregation strategies for proofs of varying size and type from sources like Polygon zkEVM or Scroll.

Integrating this layer requires standardizing interfaces. Applications need a simple API to submit proofs, such as a REST endpoint or a smart contract function. The aggregator must expose the final proof and related public data (like state roots) for the base layer verifier. A well-designed system will provide SDKs (like those from Espresso Systems or Herodotus) for easy integration. The end-to-end latency—from proof submission to on-chain verification—is a key performance metric, directly impacting user experience for cross-chain bridges and high-frequency DeFi apps.
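
As a sketch of the submission interface, the handler below shows a minimal proof-intake endpoint in Go using only the standard library; the route, JSON field names, and in-memory queue are assumptions made for illustration rather than any standard API.

go
// Hypothetical proof-submission endpoint: accepts a proof over HTTP and
// enqueues it for the aggregator loop. Shapes and names are illustrative.
package main

import (
    "encoding/json"
    "log"
    "net/http"
)

type ProofSubmission struct {
    AppID        string `json:"app_id"`
    Proof        []byte `json:"proof"` // base64-encoded in the JSON body
    PublicInputs []byte `json:"public_inputs"`
}

func main() {
    queue := make(chan ProofSubmission, 1024) // drained by the aggregator loop

    http.HandleFunc("/v1/proofs", func(w http.ResponseWriter, r *http.Request) {
        if r.Method != http.MethodPost {
            http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
            return
        }
        var sub ProofSubmission
        if err := json.NewDecoder(r.Body).Decode(&sub); err != nil {
            http.Error(w, "malformed submission", http.StatusBadRequest)
            return
        }
        select {
        case queue <- sub:
            w.WriteHeader(http.StatusAccepted) // 202: queued, not yet aggregated or verified
        default:
            http.Error(w, "aggregator backlog full", http.StatusServiceUnavailable)
        }
    })

    log.Fatal(http.ListenAndServe(":8080", nil))
}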

In practice, you can implement a basic aggregation test using the gnark library. The following snippet outlines the structure of an aggregator circuit that takes two Groth16 proofs as private witness data and is satisfied only if both verify:

go
// Pseudocode for a simple aggregation circuit in gnark. A real implementation
// would use the in-circuit verifier from gnark's std/recursion/groth16 package
// (emulated proof, verifying-key, and witness types); the shapes below are
// simplified for readability.
type AggregationCircuit struct {
    // Circuit fields are private (secret) witness data by default in gnark;
    // only the inner proofs' public inputs would be tagged `gnark:",public"`.
    Proof1 groth16.Proof
    Proof2 groth16.Proof
    VK1    groth16.VerifyingKey
    VK2    groth16.VerifyingKey
}

func (circuit *AggregationCircuit) Define(api frontend.API) error {
    // Verify Proof1 in-circuit against VK1.
    // Verify Proof2 in-circuit against VK2.
    // The outer proof is satisfiable only if both inner verifications pass.
    return nil
}

This circuit, when proven, generates a single proof attesting that both Proof1 and Proof2 are valid, demonstrating the core aggregation logic.

PROOF AGGREGATION

Core Aggregation Techniques

Proof aggregation layers combine multiple zero-knowledge proofs into a single, verifiable proof to drastically reduce on-chain verification costs and enable scalable blockchain applications.

TECHNICAL TRADEOFFS

Aggregation Method Comparison

A comparison of common proof aggregation approaches for Layer 2 scaling, highlighting key architectural and performance differences.

Feature / Metric | Recursive Proofs (e.g., zk-SNARKs) | Proof Batching (e.g., Plonky2, Halo2) | Aggregation Trees (e.g., Mina, Nova)
Verification Time (on L1) | < 200k gas | ~500k gas | < 100k gas
Prover Memory Overhead | High | Medium | Low
Trusted Setup Required | | |
Proof Size | ~200 bytes | ~45 KB | ~1 KB
Parallel Proving Support | | |
Incremental Computation | | |
Recursion Overhead per Layer | ~20 ms | ~5 ms | < 1 ms
EVM Verification Compatibility | | |

SCALABILITY GUIDE

Implementing Merkle Tree Aggregation

This guide explains how to design a proof aggregation layer using Merkle trees to batch and verify multiple zero-knowledge proofs efficiently, a critical technique for scaling blockchain applications.

Merkle tree aggregation is a cryptographic technique that compresses many individual proofs into a single, verifiable proof. In the context of zero-knowledge rollups (ZK-rollups) like zkSync and StarkNet, this is essential for scalability. Instead of verifying thousands of separate ZK-SNARKs or ZK-STARKs on-chain, a prover can generate a single aggregated proof for an entire batch of transactions. This reduces the computational load and gas costs on the main chain by orders of magnitude, making high-throughput decentralized applications feasible.

The core design involves constructing a Merkle tree where each leaf is a commitment to an individual proof. For example, you might use a Poseidon hash to commit to a Groth16 proof's public signals. An aggregator then generates a single proof that attests to the validity of all leaf commitments and their correct inclusion in the Merkle root. This aggregated proof, along with the final root, is what gets submitted to the on-chain verifier. Key libraries for implementation include arkworks for Rust or circom and snarkjs for circuit-based approaches.
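
As a minimal sketch of the commitment step, the snippet below builds a Merkle root over per-proof commitments in Go. It uses SHA-256 from the standard library purely for illustration; inside a circuit you would use a circuit-friendly hash such as Poseidon, as noted above.

go
// Build a Merkle root over commitments to individual proofs.
// SHA-256 stands in here for a circuit-friendly hash like Poseidon.
package main

import (
    "crypto/sha256"
    "fmt"
)

// commitProof hashes a proof's serialized public signals into a leaf.
func commitProof(publicSignals []byte) [32]byte {
    return sha256.Sum256(publicSignals)
}

// merkleRoot folds the leaves pairwise until one root remains;
// an odd node is promoted unchanged to the next level.
func merkleRoot(leaves [][32]byte) [32]byte {
    if len(leaves) == 0 {
        return [32]byte{}
    }
    level := leaves
    for len(level) > 1 {
        var next [][32]byte
        for i := 0; i < len(level); i += 2 {
            if i+1 == len(level) {
                next = append(next, level[i])
                continue
            }
            h := sha256.New()
            h.Write(level[i][:])
            h.Write(level[i+1][:])
            var node [32]byte
            copy(node[:], h.Sum(nil))
            next = append(next, node)
        }
        level = next
    }
    return level[0]
}

func main() {
    batch := [][]byte{[]byte("proof-1-signals"), []byte("proof-2-signals"), []byte("proof-3-signals")}
    leaves := make([][32]byte, len(batch))
    for i, b := range batch {
        leaves[i] = commitProof(b)
    }
    fmt.Printf("batch commitment root: %x\n", merkleRoot(leaves))
}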

Implementing this layer requires careful circuit design. Your aggregation circuit must verify: 1) Each leaf proof is valid under its original verification key, 2) Each leaf is correctly hashed into the Merkle tree, and 3) The provided Merkle path for each leaf is valid against the published root. This creates a recursive proof system. Tools like Plonky2 or Halo2 are built with recursion in mind, making them suitable for this task. The main challenge is managing the circuit size and prover time, which grows with the batch size.

For on-chain verification, you deploy a single, fixed verifier contract. This contract only needs to check the one aggregated proof and the final Merkle root against a known verification key. This is significantly cheaper than the alternative. For instance, verifying 1000 individual Groth16 proofs might cost over 100 million gas, while verifying one aggregated proof for the same batch could cost less than 500,000 gas. This efficiency is why projects like Polygon zkEVM and Scroll use proof aggregation in their architecture.

When designing your system, consider the trade-offs. Batch size directly impacts prover time and hardware requirements. Larger batches amortize cost better but require more powerful provers. You must also decide on a trusted setup for SNARKs or use a transparent setup with STARKs. Furthermore, the aggregation layer itself can become a bottleneck, so optimizing the underlying proof system and hash function (like using Poseidon over Keccak) is critical for performance. Always benchmark with real-world transaction loads.

To get started, examine open-source implementations. The zkEVM circuits from Polygon or the boojum library from zkSync provide practical references for aggregation logic. Begin by aggregating a small batch of simple proofs in a test environment, measure gas costs and prover times, and iterate on the Merkle tree depth and hash function. A well-designed aggregation layer is the cornerstone of a scalable L2 or co-processor, enabling complex dApps without congesting the base layer.

RECURSIVE ZK-SNARKS

A proof aggregation layer uses recursive zk-SNARKs to bundle many proofs into a single, verifiable proof, drastically reducing on-chain verification costs and enabling scalable applications.

A proof aggregation layer is a critical component for scaling zero-knowledge applications. In systems like rollups or decentralized provers, generating individual proofs for thousands of transactions is computationally expensive and creates a verification bottleneck on-chain. Aggregation solves this by using a recursive zk-SNARK, a proof that can verify other proofs. The core concept is to treat the verification of multiple existing proofs as a computational statement, and then generate a single new proof attesting to the correctness of all prior verifications. This creates a logarithmic compression of verification work.
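
One way to see the compression: with pairwise (binary-tree) recursion, n proofs fold into a single proof in roughly log2(n) layers, while the on-chain verifier always checks just one proof. The short calculation below only illustrates those layer counts; it is not a benchmark.

go
// Layer counts for binary-tree recursive aggregation: n proofs collapse to one
// in ceil(log2(n)) layers, while on-chain verification stays a single check.
package main

import (
    "fmt"
    "math"
)

func main() {
    for _, n := range []int{16, 1_000, 10_000, 1_000_000} {
        layers := int(math.Ceil(math.Log2(float64(n))))
        fmt.Printf("%8d proofs -> %2d recursion layers -> 1 on-chain verification\n", n, layers)
    }
}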

Designing this layer requires selecting a recursive proving system. Popular choices include Plonky2 (using Goldilocks field for fast recursion), Halo2 (with its accumulator-based approach), or Nova (for incremental computation). The architecture typically involves two circuits: an inner circuit that encodes your application logic (e.g., a batch of valid transactions), and an outer (recursive) circuit whose job is to verify the proofs from the inner circuit. The recursive circuit must be able to efficiently verify the cryptographic checks—like elliptic curve operations and hash functions—required by the inner proof's verification algorithm.

Implementation involves careful engineering of the recursion stack. You start by generating proofs for your base operations (inner proofs). These proofs and their public inputs become the private and public inputs to the recursive aggregator circuit. The circuit executes the native verification logic of the inner proof system. For performance, the choice of a cycle of curves (like Pasta curves) is common, where one curve is used for the inner proof and a paired curve is used for the recursive wrapper, allowing efficient verification of elliptic curve operations within a circuit. Libraries like arkworks or circom provide primitives for building these components.

The final aggregated proof must be verified on-chain. The smart contract verifier only needs to run a single, fixed-cost verification, regardless of how many proofs were aggregated. For example, verifying an aggregated proof for 1,000 transactions might cost the same as verifying one. This is the key scalability benefit. When designing the system, you must benchmark aggregation throughput (proofs per second), final proof size (which should be constant; a Groth16 wrapper proof is on the order of a couple hundred bytes, while STARK-based proofs run to tens of kilobytes), and the gas cost of the on-chain verifier contract. Tools like gnark or snarkjs can be used to generate the verifier smart contract code.

Real-world use cases include zk-rollup sequencers aggregating batch proofs, bridges proving the validity of cross-chain message bundles, and decentralized prover networks where many provers contribute partial proofs that are recursively merged. The design must also consider economic incentives for aggregators and data availability for the public inputs of the aggregated proof. By implementing a robust proof aggregation layer, you can build applications that scale to millions of users while maintaining the security guarantees of zero-knowledge cryptography.

SCALABILITY GUIDE

Aggregating Signatures with BLS

BLS signature aggregation compresses thousands of validator signatures into a single proof, drastically reducing blockchain data overhead. This guide explains the cryptographic principles and practical design for building a proof aggregation layer.

Boneh-Lynn-Shacham (BLS) signatures are a cornerstone for blockchain scalability, particularly in proof-of-stake networks like Ethereum. A single BLS signature is 96 bytes (in the minimal-pubkey-size variant Ethereum uses), larger than a 65-byte ECDSA signature; the real power lies in signature aggregation: multiple signatures on the same message can be combined into a single, constant-sized signature. This is possible due to the mathematical properties of elliptic curve pairings on the BLS12-381 curve, which allow efficient verification of the combined signature against the aggregated public keys of all signers.

Designing an aggregation layer requires a clear separation of duties. A typical architecture involves attesters (validators who sign), aggregators (nodes that collect and combine signatures), and verifiers (nodes that check the final proof). Aggregators listen for individual BLS signatures, validate them, and then use the aggregation routine from libraries like blst or herumi to create a single aggregate signature. The corresponding public keys must also be aggregated (by summing the public-key points) to enable verification. This reduces the signature data for 10,000 validator attestations from roughly 1 MB (10,000 × 96 bytes) to a single 96-byte signature.

The core verification uses a pairing check. For an aggregate signature agg_sig on message m from signers with aggregated public key agg_pubkey, the verifier checks: e(agg_pubkey, H(m)) == e(G1, agg_sig). Here, e is the bilinear pairing function, H is a hash-to-curve function mapping the message to the G2 group, and G1 is the generator point. Libraries handle this complex math, but understanding the equation is key for debugging. Critical design considerations include signature non-malleability and protection against rogue-key attacks, often mitigated by requiring proof-of-possession for each public key.
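
Why checking only the aggregate suffices follows from bilinearity. As a sketch for the same-message case (and setting aside the rogue-key defenses just mentioned), write g_1 for the G1 generator and give each signer secret key sk_i, public key pk_i = sk_i * g_1, and signature sig_i = sk_i * H(m):

\sigma_{agg} = \sum_i \sigma_i = \Big(\sum_i sk_i\Big) H(m), \qquad pk_{agg} = \sum_i pk_i = \Big(\sum_i sk_i\Big) g_1

e(pk_{agg},\, H(m)) = e(g_1,\, H(m))^{\sum_i sk_i} = e(g_1,\, \sigma_{agg})

which is exactly the pairing check above, applied once to the aggregated values.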

For implementation, start with a robust cryptographic library. In a Go-based system, you might use github.com/supranational/blst. The aggregation logic collects the individual signature points and folds them into one aggregate signature (blst's Go bindings expose this through aggregate types such as P1Aggregate or P2Aggregate, depending on which group your scheme puts signatures in). Always verify each individual signature before aggregation to prevent poisoning the batch. The aggregated signature and public key can then be serialized and submitted on-chain. For Ethereum's beacon chain, this pattern is encapsulated in the AggregateAndProof struct, demonstrating a production-ready specification.
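
Because its aggregate API is compact to show, the sketch below uses herumi's bls-eth-go-binary rather than blst; the flow (sign, verify each signature, aggregate, fast-aggregate-verify) is identical. The method names follow that library, but treat them as an assumption and check them against the version you pin.

go
// BLS aggregation sketch with github.com/herumi/bls-eth-go-binary.
// All signers sign the same message; signatures are folded into one aggregate
// and checked with a single fast-aggregate-verify call.
package main

import (
    "fmt"

    "github.com/herumi/bls-eth-go-binary/bls"
)

func main() {
    if err := bls.Init(bls.BLS12_381); err != nil {
        panic(err)
    }

    msg := []byte("depin-epoch-42-measurement-root")

    const n = 4
    pubs := make([]bls.PublicKey, n)
    sigs := make([]bls.Sign, n)
    for i := 0; i < n; i++ {
        var sec bls.SecretKey
        sec.SetByCSPRNG()
        pubs[i] = *sec.GetPublicKey()
        sig := sec.SignByte(msg)
        // Verify each signature before aggregation to avoid poisoning the batch.
        if !sig.VerifyByte(&pubs[i], msg) {
            panic("invalid individual signature")
        }
        sigs[i] = *sig
    }

    // Fold all signatures into a single 96-byte aggregate.
    var aggSig bls.Sign
    aggSig.Aggregate(sigs)

    // One pairing-based check covers all n signers.
    fmt.Println("aggregate valid:", aggSig.FastAggregateVerify(pubs, msg))
}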

Beyond basic aggregation, advanced techniques improve security and efficiency. Threshold signatures use BLS to create a signature from a subset of signers, ideal for distributed custody. Batch verification allows checking multiple aggregate signatures simultaneously, amortizing pairing operation costs. When designing your layer, consider network overhead for signature collection, aggregator incentives for timely proof submission, and slashing conditions for malicious aggregation. Testing against edge cases, like empty signer sets or invalid curve points, is non-negotiable for mainnet deployment.

Integrating BLS aggregation transforms scalability. It's the engine behind Ethereum's lightweight sync committees and efficient rollup proof systems. By compressing verification data, it enables higher validator counts, lower gas costs for bridge operations, and feasible cross-chain messaging. The final design should produce a verifier smart contract or native module that can authenticate the 96-byte aggregate proof, trusting the cryptographic guarantee that thousands of participants have endorsed the underlying message.

ECONOMIC MODEL

A robust economic model is essential for a decentralized proof aggregation layer, aligning incentives for provers, verifiers, and users to ensure security and liveness.

A proof aggregation layer, like a zk-rollup or validium, relies on a network of provers to generate succinct cryptographic proofs of transaction batches. The core economic challenge is to create a system where it is financially rational for provers to act honestly and for users to trust the system's outputs. The model must account for proof generation costs (computational resources), staking mechanisms for slashing malicious actors, and fee markets to prioritize transactions. Without proper incentives, the network risks liveness failures or security compromises.

The fee structure is the primary revenue mechanism. Users pay a fee for their transactions to be included and proven. This fee is split between the sequencer (for ordering transactions) and the aggregator (for proof generation). To prevent centralization, the protocol should implement a permissionless proving market where any prover can participate by staking collateral. Fees can be distributed via a leader election or proof-of-stake mechanism, rewarding the selected prover. Protocols like zkSync and StarkNet use variations of this model to fund their provers.

Security is enforced through cryptoeconomic slashing. Aggregators must post a bond or stake in the native token or ETH. If a prover submits an invalid proof or fails to submit a proof within a deadline (liveness fault), their stake can be slashed. A portion of the slashed funds can be used to reward verifiers—network participants who challenge invalid proofs. This creates a verifier's dilemma; the model must ensure the reward for catching fraud exceeds the cost of verification, which is a key design consideration documented in Ethereum's rollup-centric roadmap.

To ensure long-term sustainability, the model must balance costs. Proof generation on zero-knowledge virtual machines (zkVMs) like SP1 or RISC Zero incurs significant compute expense. The fee market must dynamically adjust to cover these real-world costs. Furthermore, a portion of transaction fees can be directed to a protocol treasury to fund future development and security audits. Mechanisms like EIP-1559-style fee burning can also be incorporated to create deflationary pressure on the native token, aligning long-term holder incentives with network security.
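
To make the cost balance tangible, here is a toy break-even calculation in Go. Every constant (verification gas, gas price, ETH price, proving cost) is a made-up placeholder; a live fee market would set these inputs dynamically.

go
// Toy break-even fee model: the per-proof fee must cover the amortized L1
// verification gas plus the off-chain proving cost of the batch.
// All constants are illustrative placeholders, not real network parameters.
package main

import "fmt"

func main() {
    const (
        verifyGas      = 300_000 // assumed gas to verify one aggregated proof on L1
        gasPriceGwei   = 20.0    // assumed L1 gas price
        ethPriceUSD    = 3000.0  // assumed ETH price
        provingCostUSD = 0.50    // assumed off-chain cost to prove one batch
    )
    verifyCostUSD := float64(verifyGas) * gasPriceGwei * 1e-9 * ethPriceUSD

    for _, batchSize := range []int{100, 1_000, 10_000} {
        perProof := (verifyCostUSD + provingCostUSD) / float64(batchSize)
        fmt.Printf("batch %6d: break-even fee per proof = $%.6f\n", batchSize, perProof)
    }
}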

Designing the tokenomics requires careful parameter selection. Key variables include: the minimum stake amount for provers, the challenge period duration for fraud proofs, the slash percentage for faults, and the fee split ratios. These parameters are often tuned via governance. A well-calibrated model, as seen in networks like Polygon zkEVM, ensures the system remains trust-minimized and cost-effective for users while providing adequate rewards for network operators, creating a flywheel for sustainable scaling.

PROOF AGGREGATION

Frequently Asked Questions

Common technical questions about designing and implementing a proof aggregation layer to scale blockchain execution.

A proof aggregation layer is a system that compresses multiple zero-knowledge proofs (ZKPs) or validity proofs into a single, succinct proof. It's a critical component for scaling blockchains via ZK-Rollups or Validiums.

How it works:

  1. Multiple execution proofs (e.g., from different rollup batches) are generated.
  2. An aggregator circuit recursively verifies these proofs and outputs a single aggregated proof.
  3. This final proof is posted to the base layer (L1), where a single verification step confirms the validity of all original transactions.

It's needed because verifying each individual proof on-chain is gas-intensive and slow. Aggregation reduces the on-chain verification cost from O(n) to O(1), enabling higher throughput. Protocols like Polygon zkEVM and zkSync Era use this technique.

IMPLEMENTATION PATH

Conclusion and Next Steps

This guide has outlined the core components of a proof aggregation layer. Here's how to proceed from theory to a production-ready system.

You now understand the architectural trade-offs: choosing between a centralized aggregator for simplicity or a decentralized network for censorship resistance, selecting a proof system like Plonk or Halo2 based on your trust model and performance needs, and designing a data availability solution using data blobs or DACs. The next step is to prototype the core proving flow. Start by implementing a simple aggregator contract on a testnet (e.g., Sepolia) that can verify a single proof, then extend it to accept a batch. Use libraries like snarkjs for Groth16 or the halo2 crate in Rust to generate your proofs programmatically.

For a production system, focus on economic security and incentive alignment. Design your fee model to cover the real cost of proof generation, which includes expensive hardware (high-memory servers, or GPUs for hardware-accelerated provers). Consider implementing a staking and slashing mechanism for decentralized provers to penalize incorrect work. You must also plan for upgradability; your verification keys are hardcoded into the aggregator contract, so you need a secure governance process to update them for new circuit versions. Tools like the Ethereum Attestation Service (EAS) can be used to create off-chain attestations for prover reputations.

To test your system's limits, benchmark it. Measure the gas cost per verified transaction in your aggregator contract—this is the ultimate scalability metric. Profile the time and memory required for your prover to aggregate 100, 1,000, and 10,000 proofs. Explore advanced optimizations: can you use custom gate constructions in your ZK circuit to reduce constraints? Could recursive proof aggregation (proving a proof of aggregated proofs) further compress your final proof? The field evolves rapidly; follow research from teams like Polygon Zero, Scroll, and zkSync to incorporate new techniques like Plonky2 or Boojum into your design.
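
A simple way to capture the prover-side numbers from that benchmarking loop is a Go benchmark that sweeps batch sizes; aggregateBatch below is a stub standing in for whatever proving or aggregation call your system makes.

go
// Benchmark skeleton: measure aggregation time as the batch grows.
// aggregateBatch is a stub standing in for the real proving call.
package aggregator

import (
    "fmt"
    "testing"
)

func aggregateBatch(proofs [][]byte) []byte {
    return nil // stub: invoke your recursive prover or BLS aggregator here
}

func BenchmarkAggregation(b *testing.B) {
    for _, size := range []int{100, 1_000, 10_000} {
        proofs := make([][]byte, size)
        for i := range proofs {
            proofs[i] = make([]byte, 192) // dummy fixed-size proof bytes
        }
        b.Run(fmt.Sprintf("batch-%d", size), func(b *testing.B) {
            for i := 0; i < b.N; i++ {
                _ = aggregateBatch(proofs)
            }
        })
    }
}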

Finally, integrate your aggregation layer with a real application. The most straightforward path is to connect it to an L2 rollup as its proof settlement layer. Alternatively, use it to create a proof of solvency system for a CEX, or a privacy-preserving batch auction for a DeFi protocol. Start with a closed testnet with known participants, proceed to a public incentivized testnet to stress-test your economic model, and only then move to mainnet. Your goal is to create a verifiable compute layer that is not just theoretically sound but practically unstoppable and cost-effective.