How to Design a Protocol for Data Monetization with Privacy

A technical guide for developers on architecting a protocol that enables users to monetize their social data while maintaining privacy through ZK proofs, data vaults, and secure computation.
Chainscore © 2026
Introduction

Designing a privacy-preserving data monetization protocol requires a fundamental shift from centralized data silos to a user-centric model. The core architecture must separate data custody from computation and value transfer. Users retain ownership of their raw data, which is stored locally or in a decentralized storage network like IPFS or Arweave. The protocol's smart contracts act as a neutral marketplace, facilitating requests for computation on this data and ensuring fair compensation via crypto payments, without the platform ever accessing the raw information directly.

The privacy layer is typically built using zero-knowledge proofs (ZKPs) or fully homomorphic encryption (FHE). For example, a data buyer (e.g., an AI model trainer) submits a computation task to the protocol's smart contract. The user's client software executes the task on their private data locally and generates a ZK-SNARK proof. This proof cryptographically verifies that the computation was performed correctly on valid data, revealing only the specific, aggregated result (like a model gradient or statistical average) to the buyer. Circuit frameworks like Circom, paired with proving libraries such as snarkjs, are essential for this component.

The economic and coordination layer is managed by smart contracts on a blockchain like Ethereum, Arbitrum, or a dedicated appchain. Key contract functions include: listing data/computation types, staking for service reliability, submitting verifiable computation requests, and processing payments. A basic request flow in Solidity might involve a DataRequest struct and a fulfillRequest function that accepts a ZK proof. Oracles like Chainlink can be integrated to bring off-chain results or real-world data triggers on-chain in a trust-minimized way.

To ensure practical usability, the protocol must define clear data schemas and computation modules. Schemas standardize data formats (e.g., a health data schema with fields for heart rate and steps), enabling interoperability. Computation modules are pre-verified, auditable functions (e.g., calculateAverage or trainMLModel) that users agree to run. This modular design allows developers to build atop the protocol, creating specific marketplaces for healthcare analytics, financial behavior modeling, or decentralized advertising.
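As a sketch of this modular design, the schema and module registry below are a hypothetical TypeScript illustration; the `HealthRecord` fields and module names are assumptions, not part of any standard.

```typescript
// A standardized data schema: every provider's data conforms to this shape.
interface HealthRecord {
  heartRate: number; // beats per minute
  steps: number;     // daily step count
}

type ComputeModule = (records: HealthRecord[]) => number;

// Only audited, pre-registered functions may run against user data.
const modules: Record<string, ComputeModule> = {
  averageHeartRate: (rs) =>
    rs.reduce((sum, r) => sum + r.heartRate, 0) / rs.length,
  totalSteps: (rs) => rs.reduce((sum, r) => sum + r.steps, 0),
};

function runModule(name: string, records: HealthRecord[]): number {
  const mod = modules[name];
  if (!mod) throw new Error(`unknown or unverified module: ${name}`);
  return mod(records);
}
```

Because the registry is closed, a buyer can request `averageHeartRate` but cannot submit arbitrary code that exfiltrates raw records.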

Finally, consider incentive alignment and Sybil resistance. Compensating users fairly requires a pricing mechanism, potentially via auctions or fixed rates set in the smart contract. To prevent spam or fake data, users may need to stake tokens or provide a proof of legitimate data ownership. Successful implementations in this space, such as Ocean Protocol's Compute-to-Data framework or Nym's mixnet for private data queries, demonstrate the viability of separating data access from data possession, creating a new paradigm for ethical data economies.

Prerequisites and Core Technologies

This guide outlines the foundational technologies required to build a protocol that enables data owners to monetize their information while preserving privacy and control.

Designing a privacy-preserving data monetization protocol requires a stack of core technologies that work in concert. At the base layer, you need a decentralized ledger like Ethereum, Solana, or a custom L2 for transaction finality and programmability. This is where ownership rights and payment logic are encoded via smart contracts. For data storage, decentralized solutions like IPFS, Arweave, or Filecoin are essential to avoid centralized points of failure and censorship. These technologies form the non-negotiable infrastructure for any credible Web3 data protocol.

The critical innovation lies in the privacy layer. Zero-Knowledge Proofs (ZKPs) are the cornerstone technology for enabling computation on private data. Protocols like zk-SNARKs (used by Zcash and Aztec) or zk-STARKs allow a user to prove a statement about their data—such as "my credit score is above 700"—without revealing the underlying data itself. This enables selective disclosure, where data can be monetized based on verified attributes rather than raw, sensitive information. Familiarity with ZKP frameworks like Circom, Halo2, or StarkWare's Cairo is a key prerequisite.

To manage data access and computation securely, Trusted Execution Environments (TEEs) like Intel SGX or AMD SEV offer a complementary approach. A TEE is a secure, isolated area of a processor that guarantees code execution and data confidentiality. In a data monetization model, raw data can be sent into a TEE for processing (e.g., running a machine learning model), with only the output (and proof of correct execution) being revealed. This is particularly useful for complex computations that are currently impractical with pure ZKPs.

Data cannot be monetized if its provenance and quality are in doubt. Decentralized Identity (DID) and Verifiable Credentials (VCs) are essential for establishing trust. A DID, managed in a user's wallet, serves as a persistent identifier. Verifiable Credentials, issued by attesters (e.g., a university, a KYC provider), are cryptographically signed claims linked to that DID. A protocol can use these to verify a data subject's qualifications or attributes on-chain without relying on a central database, creating a portable and user-owned reputation system.
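A minimal sketch of the attestation flow, using Node's built-in Ed25519 support as a stand-in for a full W3C Verifiable Credential implementation; the `did:example` identifier and field layout are illustrative, not the VC data model.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// The issuer (e.g., a KYC provider) holds a signing keypair; its public key
// is what verifiers trust, not any central database.
const issuer = generateKeyPairSync("ed25519");

interface Credential {
  subjectDid: string;                 // the holder's decentralized identifier
  claim: Record<string, unknown>;     // the attested attributes
  signature: string;                  // issuer's signature over the payload
}

function issueCredential(subjectDid: string, claim: Record<string, unknown>): Credential {
  const payload = Buffer.from(JSON.stringify({ subjectDid, claim }));
  const signature = sign(null, payload, issuer.privateKey).toString("base64");
  return { subjectDid, claim, signature };
}

// Anyone holding the issuer's public key can check the claim offline.
function verifyCredential(cred: Credential): boolean {
  const payload = Buffer.from(JSON.stringify({ subjectDid: cred.subjectDid, claim: cred.claim }));
  return verify(null, payload, issuer.publicKey, Buffer.from(cred.signature, "base64"));
}
```

Tampering with the claim after issuance invalidates the signature, which is what makes the credential portable yet trustworthy.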

Finally, the protocol needs a mechanism for discovery, pricing, and exchange. This often involves designing a data marketplace smart contract. Key components include a listing mechanism for data or compute services, a bidding or fixed-price model, and a secure settlement layer that releases payment upon cryptographic proof of delivery. Oracles like Chainlink may be integrated to bring off-chain data or computation results on-chain to trigger contract execution. The design must prioritize minimizing gas costs and maximizing user autonomy throughout the transaction flow.

Putting it all together, a robust architecture might flow as follows: A user stores raw data on IPFS, generates a ZKP about a specific attribute, and lists a service based on that proof on a marketplace contract. A buyer pays into an escrow, the user's data is computed upon within a TEE, and an oracle submits the output with a proof. The contract verifies the proof and releases payment. Mastery of these interconnected technologies—decentralized storage, ZKPs/TEEs, DIDs, and marketplace mechanics—is the essential foundation for building a functional and trustworthy data monetization protocol.

System Architecture

This guide outlines the core architectural components and design patterns for building a decentralized protocol that enables users to monetize their data while preserving privacy through cryptographic techniques like zero-knowledge proofs and secure computation.

A privacy-preserving data monetization protocol must reconcile two opposing forces: enabling verifiable computation on user data for commercial use and preventing the raw data from being exposed. The foundational layer typically involves a decentralized data marketplace where data providers (users) and consumers (algorithms, researchers, advertisers) can interact. Users submit their encrypted or hashed data to a storage layer, while consumers post tasks or queries they wish to run against that data. The protocol's smart contracts act as the neutral, trust-minimized coordinator, managing listings, payments, and the attestation of results.

The privacy engine is the most critical technical component. To process data without revealing it, the architecture must integrate advanced cryptographic primitives. Zero-knowledge proofs (ZKPs), such as zk-SNARKs or zk-STARKs, allow a user to prove a specific property about their data (e.g., "I am over 18," "my average transaction volume is X") without revealing the underlying data points. For more complex computations, secure multi-party computation (MPC) or fully homomorphic encryption (FHE) can be employed, enabling analytics or machine learning model training on encrypted data. The choice depends on the trade-off between computational overhead and the required trust model.

Data storage and access control require careful design. Raw user data should never be stored on-chain. Instead, use decentralized storage solutions like IPFS, Arweave, or Filecoin for persistent, content-addressed storage. The data is encrypted client-side with the user's key before upload. Access permissions are managed via decentralized identifiers (DIDs) and verifiable credentials, allowing users to grant and revoke fine-grained access to their data payloads or specific data attributes. This ensures users retain sovereignty and can participate in multiple marketplaces without data siloing.

The economic and incentive layer must align the interests of all participants. This involves designing a native utility token for payments and protocol governance. Consumers pay in tokens to access computation results, and these fees are distributed to data providers and node operators who perform the private computation work (e.g., generating ZK proofs). Slashing mechanisms and reputation systems are necessary to penalize malicious actors who submit false data or incorrect computations. Staking can be used to ensure node operators have skin in the game, securing the network's integrity.
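One way the staking and slashing bookkeeping could look, sketched in TypeScript; `minStake` and `slashRate` are illustrative assumptions, not protocol parameters.

```typescript
// Node operators must stake before accepting work and are slashed when a
// fraud proof shows they submitted an incorrect computation.
class StakingRegistry {
  private stakes = new Map<string, number>();
  readonly minStake = 1_000;   // minimum stake to be assigned jobs
  readonly slashRate = 0.5;    // fraction of stake burned on misbehavior

  stake(operator: string, amount: number): void {
    this.stakes.set(operator, (this.stakes.get(operator) ?? 0) + amount);
  }

  isEligible(operator: string): boolean {
    return (this.stakes.get(operator) ?? 0) >= this.minStake;
  }

  // Called when a verified fraud proof implicates the operator.
  slash(operator: string): number {
    const current = this.stakes.get(operator) ?? 0;
    const penalty = current * this.slashRate;
    this.stakes.set(operator, current - penalty);
    return penalty;
  }

  balanceOf(operator: string): number {
    return this.stakes.get(operator) ?? 0;
  }
}
```

An operator slashed below `minStake` drops out of job assignment until it tops up, which is the "skin in the game" mechanism described above.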

Finally, consider the developer and user experience. Provide robust SDKs and APIs for easy integration. For users, implement meta-transactions or account abstraction to abstract away gas fees and private key management. A successful architecture, like those explored by projects such as Ocean Protocol for data marketplaces or Aztec Network for private computation, demonstrates that with careful layering of cryptography, decentralized coordination, and incentive design, it is possible to build a functional and ethical data economy.


Core Protocol Components

Building a data monetization protocol requires specific architectural components to handle privacy, computation, and market mechanics. This section covers the essential building blocks.


Step 1: Implementing the Encrypted Data Vault

The foundation of a privacy-preserving data monetization protocol is a secure, decentralized storage layer. This step details how to implement an encrypted data vault using smart contracts and cryptographic primitives.

An encrypted data vault is a user-controlled storage primitive that decouples data custody from application logic. Unlike traditional cloud storage, the vault's access control is managed on-chain via a smart contract, while the encrypted data payload itself is stored off-chain on a decentralized network like IPFS, Arweave, or Filecoin. This separation ensures that the public blockchain only stores permission manifests and data references (CIDs), not the sensitive data itself, maintaining scalability and privacy. The core smart contract must manage a mapping between user addresses and their stored data CIDs, enforcing that only the data owner or explicitly authorized parties can retrieve the decryption key.

Data encryption must occur client-side before storage. Use a symmetric encryption algorithm like AES-256-GCM to encrypt the raw user data. The encryption key should be generated ephemerally for each data upload session. This data encryption key (DEK) is then itself encrypted using the user's public key, a process known as key wrapping. This creates an encrypted data encryption key (EDEK) that can safely be stored on-chain or alongside the data pointer. Crucially, the protocol never handles the raw DEK or unencrypted data. Popular libraries for this in web3 include ethers.js for cryptographic operations and web3.storage or lighthouse.storage for decentralized uploads.
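The encrypt-then-wrap flow can be sketched with Node's stdlib alone. An RSA-2048 keypair stands in for the user's wallet key here; real wallets hold secp256k1 keys and would use an ECIES-style scheme for wrapping instead.

```typescript
import {
  createCipheriv, createDecipheriv, generateKeyPairSync,
  publicEncrypt, privateDecrypt, randomBytes,
} from "node:crypto";

// Stand-in for the user's wallet keypair (assumption: RSA, see lead-in).
const user = generateKeyPairSync("rsa", { modulusLength: 2048 });

function encryptData(plaintext: Buffer) {
  const dek = randomBytes(32);   // ephemeral data encryption key (DEK)
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", dek, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Key wrapping: the DEK is encrypted to the user's public key (EDEK)
  // and never leaves the client in the clear.
  const edek = publicEncrypt(user.publicKey, dek);
  return { ciphertext, iv, tag, edek };
}

function decryptData(p: ReturnType<typeof encryptData>): Buffer {
  const dek = privateDecrypt(user.privateKey, p.edek);
  const decipher = createDecipheriv("aes-256-gcm", dek, p.iv);
  decipher.setAuthTag(p.tag);
  return Buffer.concat([decipher.update(p.ciphertext), decipher.final()]);
}
```

Only `ciphertext` goes to decentralized storage and only `edek` (plus the CID) touches the chain; the raw DEK exists solely in client memory.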

The permissioning logic is the vault's most critical component. The smart contract must implement a function, callable only by the data owner, to grant access to another address. This function would store a new EDEK, encrypted to the grantee's public key. For example:

```solidity
// owner => grantee => DEK wrapped with the grantee's public key
mapping(address => mapping(address => bytes)) public permissions;

function grantAccess(address grantee, bytes memory encryptedKeyForGrantee) public onlyDataOwner {
    permissions[msg.sender][grantee] = encryptedKeyForGrantee;
}
```

More advanced models can implement token-gated access using ERC-20 or ERC-721 holdings, or time-locked decryption where keys are released via a smart contract after a specific block height. The Lit Protocol is a notable network specializing in such programmable key management for decentralized applications.

To complete the architecture, you need a reliable data availability layer. Storing only the CID on-chain is a commitment; the corresponding data must be persistently available off-chain. Using Filecoin for paid, long-term storage or Arweave for permanent storage are robust choices. For redundancy, consider a data availability (DA) committee or a service like Crust Network, which mirrors data across multiple IPFS nodes. The vault contract should allow the owner to update the CID if the data is migrated, preserving the immutable permission ledger while allowing the underlying storage location to evolve.

Finally, design the user flow. The typical sequence is: 1) User generates data and a DEK client-side, 2) Data is encrypted and uploaded to decentralized storage, returning a CID, 3) The DEK is wrapped with the user's public key, creating an EDEK, 4) A transaction is sent to the vault contract, storing the CID and the user's EDEK. To monetize data, the owner grants access to a buyer's address, which involves creating and storing a new EDEK for that buyer. The buyer can then retrieve the CID and their EDEK from the contract, fetch the encrypted data from storage, and decrypt it locally.


Step 2: Designing ZK Proofs for Data Attributes

This section details the process of defining and constructing zero-knowledge proofs to verify specific data attributes for monetization, ensuring user privacy is preserved.

The first step in designing a ZK proof for data monetization is to define the precise statement you want to prove about your private data. This is not about revealing the data itself, but proving a property of it. For example, a user could prove they are over 18 without revealing their birth date, or that their monthly income falls within a specific bracket for a loan application without disclosing the exact figure. The statement must be formulated as a computational problem, often represented as a circuit in frameworks like Circom or a program in Cairo (StarkNet) or Noir (Aztec).

Next, you must construct the circuit or program logic that enforces this statement. Using a domain-specific language (DSL), you write code that takes the private inputs (the raw data), some public inputs (the parameters to check against, like age >= 18), and outputs a proof. For instance, a circuit to prove income X is greater than $50,000 would perform the comparison X > 50000 entirely within the constraints of the ZK system. The user runs this circuit with their private data to generate a witness, which is then used by a prover algorithm to create the final cryptographic proof.
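A comparator such as X > 50000 is typically enforced in-circuit by bit-decomposing X - 50000 - 1, which is roughly how circomlib's GreaterThan gadget works. The TypeScript below mirrors that constraint logic outside any proof system; in a real proof the witness x stays private and the verifier sees only a succinct proof that these constraints hold.

```typescript
// Range width assumed for the private value (a circuit design parameter).
const NUM_BITS = 32;

// Prover side: x > c holds iff d = x - c - 1 is non-negative and fits in
// NUM_BITS bits. The bit decomposition of d is the witness.
function greaterThanWitness(x: bigint, c: bigint): bigint[] {
  const d = x - c - 1n;
  if (d < 0n || d >= 1n << BigInt(NUM_BITS)) {
    throw new Error("statement x > c is false or x out of range");
  }
  // least-significant bit first
  return Array.from({ length: NUM_BITS }, (_, i) => (d >> BigInt(i)) & 1n);
}

// Constraint-system side: every bit is boolean and the bits recompose to
// x - c - 1. (The constraint system sees the private witness; the on-chain
// verifier never does.)
function checkConstraints(bits: bigint[], x: bigint, c: bigint): boolean {
  if (bits.some((b) => b !== 0n && b !== 1n)) return false;
  const d = bits.reduce((acc, b, i) => acc + (b << BigInt(i)), 0n);
  return d === x - c - 1n;
}
```

A false statement (x below the threshold) simply has no valid witness, which is why the prover cannot cheat.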

A critical design consideration is data normalization and preprocessing. Real-world data is messy. A proof about "monthly transaction volume" requires defining the exact source (e.g., a specific Ethereum address over the last 30 blocks), the currency (converted to a stable unit like USD), and handling missing data. This logic must be baked into the circuit. Tools like zkOracles (e.g., Chainlink Functions with zk-proofs) can be integrated to attest to off-chain data feeds in a privacy-preserving manner, providing verifiable inputs to your circuit.

Finally, you must decide on the proof system and trust setup. Different systems offer trade-offs: Groth16 (a widely deployed zk-SNARK) requires a trusted setup for each circuit but generates very small, fast-to-verify proofs, ideal for on-chain verification. STARKs (like those from StarkWare) are transparent (no trusted setup) and scalable but have larger proof sizes. PLONK and its variants offer universal trusted setups. The choice impacts the protocol's trust assumptions, gas costs for on-chain verification, and the complexity of the development toolchain.


Step 3: Building the Compute-to-Data Marketplace

This step details the architectural components and smart contract logic required to create a marketplace where data can be analyzed without being moved.

A Compute-to-Data marketplace is a protocol that facilitates private data analysis. Its core principle is that sensitive data never leaves the data provider's secure environment. Instead, the marketplace coordinates the execution of an algorithm (provided by a data consumer) on the raw data, returning only the computed results. This design protects data sovereignty and privacy while enabling valuable insights. The protocol must manage three key actors: the Data Provider, the Algorithm Provider, and the Data Consumer. Smart contracts govern the lifecycle of a job, from publishing assets to payment settlement.

The protocol requires several core smart contracts. A Data NFT contract (e.g., an ERC-721 variant) represents ownership and access rights to a dataset. A separate Access Control contract manages permissions, allowing the data owner to grant compute rights to specific algorithms or consumers. The central Marketplace contract handles job orchestration: it lists available datasets and algorithms, accepts compute job requests, verifies payments, and emits events to trigger off-chain compute execution. Payments are typically handled via escrow or staking models (Ocean Protocol's veOCEAN is one example), where consumers stake or deposit tokens that are released upon successful, verifiable job completion.

Here is a simplified structure for a job request in a Solidity smart contract. The ComputeJob struct defines the agreement between parties, and the requestCompute function initiates the process.

```solidity
struct ComputeJob {
    address consumer;
    address datasetNFT;
    address algorithmNFT;
    uint256 price;
    bytes32 jobId;
    JobStatus status;
}

enum JobStatus { Pending, Running, Completed, Failed }

// Storage and event assumed by the function below
mapping(bytes32 => ComputeJob) public jobs;
event JobRequested(bytes32 indexed jobId, address indexed consumer, address dataset, address algorithm);

function requestCompute(
    address _datasetAddr,
    address _algoAddr,
    uint256 _price
) external payable returns (bytes32 jobId) {
    require(
        IAccessControl(accessControlAddr).canCompute(
            msg.sender,
            _datasetAddr,
            _algoAddr
        ),
        "Access denied"
    );
    require(msg.value >= _price, "Insufficient payment");

    // Include msg.sender so two consumers requesting the same pair in the
    // same block get distinct job ids
    jobId = keccak256(abi.encodePacked(msg.sender, _datasetAddr, _algoAddr, block.timestamp));
    jobs[jobId] = ComputeJob({
        consumer: msg.sender,
        datasetNFT: _datasetAddr,
        algorithmNFT: _algoAddr,
        price: _price,
        jobId: jobId,
        status: JobStatus.Pending
    });
    emit JobRequested(jobId, msg.sender, _datasetAddr, _algoAddr);
}
```

Off-chain components are critical for execution. An Operator Service (or a decentralized network of them) listens for the JobRequested event. It retrieves the dataset from the provider's secure node and the algorithm code, runs the computation in a trusted execution environment (TEE) or secure container, and posts the encrypted results back to the blockchain, often to a decentralized storage solution like IPFS. The result's cryptographic proof (e.g., a TEE attestation) is also submitted. The smart contract then verifies this proof before releasing payment to the data and algorithm providers, completing the trustless transaction.
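The settlement step can be mocked as follows. A hash over an expected enclave measurement stands in for a real TEE attestation, which in practice is a signed quote verified against the hardware vendor's root of trust; all names and values here are illustrative.

```typescript
import { createHash } from "node:crypto";

const sha256 = (data: string) => createHash("sha256").update(data).digest("hex");

// Measurement of the audited compute image, registered with the contract in
// advance (roughly analogous to SGX's MRENCLAVE). Illustrative value.
const EXPECTED_MEASUREMENT = sha256("audited-compute-image-v1");

// Off-chain operator: runs the algorithm on private data inside the TEE and
// returns the result plus a mock attestation binding result to enclave.
function runJob(algorithm: (xs: number[]) => number, privateData: number[]) {
  const result = algorithm(privateData);
  const attestation = sha256(EXPECTED_MEASUREMENT + sha256(String(result)));
  return { result, attestation };
}

// "On-chain" side: release payment only if the attestation matches the
// expected enclave measurement and the reported result.
function settle(report: { result: number; attestation: string }): "paid" | "rejected" {
  const expected = sha256(EXPECTED_MEASUREMENT + sha256(String(report.result)));
  return report.attestation === expected ? "paid" : "rejected";
}
```

Tampering with the reported result breaks the binding, so payment is only released for outputs the attested enclave actually produced.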

Key design considerations include cost models (fixed price, gas reimbursement), algorithm validation (to prevent malicious code), and result verifiability. Using a decentralized oracle network like Chainlink can enhance reliability for proof verification and triggering off-chain work. For scalability, the computation logic itself should remain off-chain, with the blockchain acting as a tamper-proof coordination and settlement layer. This architecture, pioneered by projects like Ocean Protocol, creates a new paradigm for data economies where privacy and utility are not mutually exclusive.


Comparison of Privacy-Preserving Technologies

A technical comparison of cryptographic primitives for implementing privacy in data monetization protocols.

| Feature / Metric | Zero-Knowledge Proofs (ZKPs) | Secure Multi-Party Computation (MPC) | Fully Homomorphic Encryption (FHE) |
| --- | --- | --- | --- |
| Cryptographic Foundation | zk-SNARKs, zk-STARKs, Bulletproofs | Secret sharing, garbled circuits | Lattice-based cryptography |
| Data Processing Capability | Verifiable computation on private inputs | Joint computation on distributed private inputs | Computation on encrypted data |
| On-Chain Gas Cost (Typical) | High (100k-1M+ gas) | Medium-High (varies with parties) | Extremely High (currently impractical) |
| Off-Chain Computation Cost | High | High (scales with participants) | Prohibitively High |
| Trust Assumptions | Trusted setup for some (zk-SNARKs) | Honest majority / non-colluding parties | Computational (lattice hardness assumptions) |
| Primary Use Case | Private state transitions, selective disclosure | Private auctions, federated learning | Private queries on encrypted databases |
| Developer Tooling Maturity | High (Circom, Halo2, Noir) | Medium (MPC libraries, frameworks) | Low (active research, few production libs) |
| Suitability for On-Chain Settlement | | | |


Implementation Resources and Tools

Essential frameworks, libraries, and protocols for building a system that enables data monetization while preserving user privacy.

Consent Management & Legal Frameworks

Integrate on-chain mechanisms for user consent that comply with regulations like GDPR. Technical components include:

  • Verifiable Credentials (W3C Standard): Allow users to hold attested claims in a wallet.
  • EIP-4361 (Sign-In with Ethereum): Standard for authentication, which can be extended to log consent events.
  • Keep a clear audit trail: Use event logs to record when consent was given, for what data, and for how long, enabling provable compliance.
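A hypothetical consent ledger mirroring what those on-chain event logs would record; the field names and scope format are assumptions, not a GDPR-mandated schema.

```typescript
// One event per consent grant: who consented, to what data scope, and until
// when. Revocation flips a flag rather than deleting history, preserving the
// audit trail.
interface ConsentEvent {
  user: string;
  dataScope: string;    // e.g. "health:heartRate"
  grantedAt: number;    // unix seconds
  expiresAt: number;
  revoked: boolean;
}

class ConsentLog {
  private events: ConsentEvent[] = [];

  grant(user: string, dataScope: string, now: number, ttlSeconds: number): void {
    this.events.push({ user, dataScope, grantedAt: now, expiresAt: now + ttlSeconds, revoked: false });
  }

  revoke(user: string, dataScope: string): void {
    for (const e of this.events) {
      if (e.user === user && e.dataScope === dataScope) e.revoked = true;
    }
  }

  // Provable compliance check: live, unrevoked consent at time `now`?
  hasConsent(user: string, dataScope: string, now: number): boolean {
    return this.events.some(
      (e) => e.user === user && e.dataScope === dataScope && !e.revoked && now < e.expiresAt,
    );
  }
}
```

Because grants are append-only events, an auditor can reconstruct exactly when consent existed for any past computation.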
The stakes are high: GDPR fines can reach €20M or 4% of global annual turnover, whichever is greater.

Frequently Asked Questions

Common technical questions and solutions for developers building protocols that enable data monetization while preserving user privacy.

What is the fundamental technical challenge?

The fundamental challenge is creating a system that can prove a statement about private data without revealing the data itself. This requires cryptographic primitives like Zero-Knowledge Proofs (ZKPs) or Secure Multi-Party Computation (MPC). For example, a protocol must allow a user to prove their credit score is above 700 to a lender, without disclosing the exact score or their transaction history. This shifts the paradigm from "trust us with your data" to "trust the cryptographic proof." Implementing this efficiently on-chain, where computation and storage are expensive, is the primary engineering hurdle.

Security Considerations

Designing a data monetization protocol requires a privacy-first architecture that separates data custody from computation, enabling value extraction without exposing raw information.

The core challenge in private data monetization is enabling computation on encrypted data. Modern cryptographic primitives like Zero-Knowledge Proofs (ZKPs) and Fully Homomorphic Encryption (FHE) allow protocols to process user data without decrypting it. For instance, a protocol could use ZK-SNARKs to allow a user to prove they have a credit score above 700 for a loan application, revealing only the validity of the statement, not the score itself. This shifts the paradigm from "trust us with your data" to "verify our computation without seeing your data."

A robust architecture must enforce data minimization and purpose limitation at the protocol level. This is achieved through granular, user-controlled access policies. Techniques like zkAttestations or verifiable credentials allow users to cryptographically prove specific attributes (e.g., "is over 21") derived from a private data source. The protocol's smart contracts should only accept these verifiable proofs, never raw personal data. This design ensures that data buyers receive only the insights they paid for, while the underlying sensitive information remains with the user or a decentralized custodian.

Secure computation and incentive alignment are implemented via a clear market mechanism. Data buyers submit tasks (e.g., "find the average age of users in this cohort") to a smart contract with a bounty. A network of trusted execution environments (TEEs) or multi-party computation (MPC) nodes then performs the computation on encrypted data shards. The contract releases payment only upon verification of a ZK proof attesting to the correct execution. This model, used by projects like Phala Network for TEEs or Secret Network for encrypted state, prevents malicious actors from accessing or tampering with the data during processing.

On-chain components must be designed for privacy preservation. Avoid storing any personal data or raw computation results directly on-chain. Instead, store only cryptographic commitments (like hashes or Merkle roots) and ZK proofs. For example, a user's data profile can be represented as a Merkle tree where each leaf is a hashed attribute; proving membership in a dataset for a computation requires only a Merkle proof. Use privacy-preserving smart contract platforms like Aztec or Mina which are built for this paradigm, ensuring that even metadata and transaction graphs do not leak sensitive information.
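The attribute-commitment scheme can be sketched directly: hash each attribute into a leaf, compute the Merkle root that goes on-chain, and reveal only one leaf plus its sibling path. Leaf count is assumed to be a power of two for brevity.

```typescript
import { createHash } from "node:crypto";

const h = (x: string) => createHash("sha256").update(x).digest("hex");

// Root over hashed attributes: the only value stored on-chain.
function merkleRoot(leaves: string[]): string {
  let level = leaves;
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) next.push(h(level[i] + level[i + 1]));
    level = next;
  }
  return level[0];
}

// Sibling path for one leaf; proving membership reveals only this path.
function merkleProof(leaves: string[], index: number): { sibling: string; right: boolean }[] {
  const path: { sibling: string; right: boolean }[] = [];
  let level = leaves;
  let i = index;
  while (level.length > 1) {
    const sib = i % 2 === 0 ? i + 1 : i - 1;
    path.push({ sibling: level[sib], right: i % 2 === 0 });
    const next: string[] = [];
    for (let j = 0; j < level.length; j += 2) next.push(h(level[j] + level[j + 1]));
    level = next;
    i = Math.floor(i / 2);
  }
  return path;
}

function verifyProof(leaf: string, path: { sibling: string; right: boolean }[], root: string): boolean {
  let acc = leaf;
  for (const step of path) acc = step.right ? h(acc + step.sibling) : h(step.sibling + acc);
  return acc === root;
}
```

Verifying one attribute reveals that leaf but none of its siblings, which stay behind their hashes.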

Finally, the protocol must include mechanisms for data sovereignty and auditability. Users should be able to revoke access or delete their data provenance at any time, enforced via cryptographic key rotation or nullifier schemes. Simultaneously, to build trust with data buyers, the protocol should allow for auditable verification of the data's provenance and the computation's integrity without compromising privacy. Implementing a decentralized identity (DID) framework, such as W3C Verifiable Credentials, allows for portable, user-owned identities that can interact with the monetization protocol without creating siloed profiles.


Conclusion and Next Steps

This guide has outlined the core architectural patterns for building a data monetization protocol with privacy. The next step is to implement these concepts.

You now have a blueprint for a privacy-preserving data monetization protocol. The core components are: a zero-knowledge proof system (like zk-SNARKs via Circom or Noir) for verifying computations without revealing raw data, a decentralized storage layer (such as IPFS or Arweave with encryption) for data custody, and a smart contract marketplace on a scalable L2 (e.g., Arbitrum, zkSync) to manage data licenses and payments. The critical design principle is to keep raw user data off-chain and only allow verifiable, privacy-respecting queries on-chain.

For implementation, start by defining your data schema and the specific computations users can monetize. Build your ZK circuit to generate proofs for these computations. For example, a circuit could prove a user's location is within a geographic zone without revealing the coordinates, or that their transaction history meets a spending threshold without disclosing individual transactions. Use libraries like circomlib for common primitives. Host the encrypted raw data and the verification key for your circuit on decentralized storage, storing only the content identifiers (CIDs) on-chain.

Your smart contracts should implement a clear data licensing model. Consider using ERC-721 for non-fungible data licenses or ERC-1155 for batch licenses, with payment streams via Sablier or Superfluid. The purchase flow involves: 1) a buyer submits a query request with payment, 2) the protocol triggers an off-chain computation using the private data, 3) a ZK proof is generated and submitted on-chain, and 4) the contract verifies the proof and releases payment to the data owner. Always include a fee mechanism for protocol sustainability.
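Step 4's release logic can be sketched as a fee-splitting settlement function; the 2.5% protocol fee, field names, and single-escrow shape are illustrative assumptions.

```typescript
// Basis points: 250 = 2.5% protocol fee, retained for sustainability.
const PROTOCOL_FEE_BPS = 250;

interface Escrow {
  buyer: string;
  owner: string;
  amount: number;   // escrowed payment in the token's smallest unit
  settled: boolean;
}

// Called once the on-chain ZK proof check resolves: a valid proof pays the
// data owner minus the protocol fee; an invalid one refunds the buyer.
function settleOnProof(
  escrow: Escrow,
  proofValid: boolean,
): { toOwner: number; toTreasury: number; refund: number } {
  if (escrow.settled) throw new Error("already settled");
  escrow.settled = true; // guard against double settlement
  if (!proofValid) return { toOwner: 0, toTreasury: 0, refund: escrow.amount };
  const toTreasury = Math.floor((escrow.amount * PROTOCOL_FEE_BPS) / 10_000);
  return { toOwner: escrow.amount - toTreasury, toTreasury, refund: 0 };
}
```

Marking the escrow settled before any transfer mirrors the checks-effects-interactions pattern a Solidity implementation would follow.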

Next, focus on security and user onboarding. Audit your ZK circuits with tools like Ecne and your smart contracts with established firms. For users, abstract away cryptographic complexity: provide SDKs and frontend widgets that handle key management (e.g., using Web3Auth or Lit Protocol) and proof generation in the background. Prioritize integration with existing data sources through oracles like Chainlink Functions to enrich off-chain data before computation.

The final step is planning for decentralization and governance. Initially, you may need a trusted operator to run the off-chain prover. Your roadmap should include decentralizing this role via a proof-of-stake network of node operators or a co-processor network like Brevis or Axiom. Implement a governance token (e.g., using OpenZeppelin's Governor) to let the community vote on parameters like fee changes, supported computation types, and treasury allocations. Start with a testnet deployment on a chain like Sepolia or Amoy to refine the economic model.

To continue your learning, explore advanced topics like fully homomorphic encryption (FHE) for arbitrary computations on encrypted data with projects like Zama, or multi-party computation (MPC) for collaborative data analysis. Follow the development of EigenLayer AVSs for decentralized proving networks. The goal is to move beyond simple data sales to creating a vibrant marketplace for verifiable insights, where privacy is the default and users retain sovereignty over their digital footprint.