Verifiable compute is a cryptographic protocol that allows a third party to verify the correct execution of a program without re-running the entire computation. For DePIN, this is transformative. It enables a network of independent operators to perform tasks—like rendering video, training AI models, or processing sensor data—while providing a succinct zero-knowledge proof (ZKP) or validity proof that the work was done correctly. This proof is then verified on-chain, allowing for automated, trust-minimized payments and slashing. The core components are a prover (the compute node), a verifier (a smart contract), and a verifiable virtual machine (zkVM) like RISC Zero, SP1, or Jolt.
Launching a Verifiable Compute Network for DePIN
Introduction to Verifiable Compute for DePIN
Verifiable compute enables decentralized physical infrastructure networks (DePIN) to execute critical workloads with cryptographic proof of correctness, creating trustless backends for real-world services.
Launching a verifiable compute network requires selecting the right proving system. zkSNARKs offer small, fast-to-verify proofs but require a trusted setup and can be complex to program. zkSTARKs are transparent (no trusted setup) and faster to prove, but generate larger proofs. New frameworks like RISC Zero and SP1 abstract this complexity by allowing developers to write provable programs in Rust, compiling them to a zkVM. For example, a DePIN for weather data aggregation could use RISC Zero to prove that a node correctly processed raw sensor inputs into a standardized forecast, without revealing the raw data.
The architecture involves several layers. The Application Layer defines the specific compute task (e.g., "render this 3D frame"). The Proving Layer executes the task inside the zkVM and generates the proof. The Settlement Layer, typically a blockchain like Ethereum, Solana, or a rollup, hosts the verifier contract that checks the proof. Upon successful verification, the contract releases payment from an escrow to the prover. This creates a cryptoeconomic flywheel: reliable proof leads to payment, incentivizing more operators to join the network, increasing its capacity and resilience.
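To make the settlement flow concrete, here is a minimal sketch of a verify-then-pay escrow contract. It assumes a hypothetical IZkVerifier interface exposed by the chosen zkVM's on-chain verifier; the function names and the imageId/journalHash fields are illustrative rather than any specific framework's API.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical interface to the zkVM's on-chain verifier (exact signature varies by framework).
interface IZkVerifier {
    function verify(bytes calldata proof, bytes32 imageId, bytes32 journalHash) external view returns (bool);
}

contract ComputeEscrow {
    IZkVerifier public immutable verifier;

    struct Job {
        address client;
        uint256 payment;
        bytes32 imageId;   // commitment to the program that must be executed
        bool settled;
    }

    mapping(uint256 => Job) public jobs;
    uint256 public nextJobId;

    event JobPosted(uint256 indexed jobId, bytes32 imageId, uint256 payment);
    event JobSettled(uint256 indexed jobId, address indexed prover);

    constructor(IZkVerifier _verifier) {
        verifier = _verifier;
    }

    // The client escrows payment for a specific program (identified by imageId).
    function postJob(bytes32 imageId) external payable returns (uint256 jobId) {
        jobId = nextJobId++;
        jobs[jobId] = Job(msg.sender, msg.value, imageId, false);
        emit JobPosted(jobId, imageId, msg.value);
    }

    // The prover submits a proof; escrow is released only if verification succeeds.
    function settle(uint256 jobId, bytes calldata proof, bytes32 journalHash) external {
        Job storage job = jobs[jobId];
        require(!job.settled, "already settled");
        require(verifier.verify(proof, job.imageId, journalHash), "invalid proof");
        job.settled = true;
        (bool ok, ) = msg.sender.call{value: job.payment}("");
        require(ok, "payment failed");
        emit JobSettled(jobId, msg.sender);
    }
}
```

In production the journal (public outputs) would also be checked against the job's expected inputs, and the payment could be split between the prover and a protocol treasury.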
Key technical challenges include proof generation time and cost. Generating a ZKP for a non-trivial computation can take minutes and significant memory. Optimizations involve designing efficient circuits, using continuations to break work into chunks, and leveraging GPU acceleration. The cost to verify on-chain is also critical; a proof that costs $50 in gas to verify is impractical for microtasks. Solutions include using proof aggregation (batching many proofs into one) and settling on low-cost L2s or appchains. The choice of zkVM and blockchain settlement layer is therefore a primary engineering decision.
For builders, the workflow starts with defining a deterministic, reproducible task. Using a framework like RISC Zero, you write your core logic in Rust, using its SDK to interface with the zkVM. After testing locally, you deploy a verifier contract (often provided as a template) to your chosen chain. Node operators then run your client software, which performs the computation and submits the proof to the verifier. Successful projects like io.net (for GPU compute) and Render Network (transitioning to verifiable rendering) demonstrate this model in production, securing billions in compute value through cryptographic verification instead of centralized audits.
Prerequisites and Core Components
Before launching a verifiable compute network for DePIN, you need to understand the core technical stack and infrastructure requirements. This guide outlines the essential components.
A verifiable compute network for DePIN requires a robust foundation of hardware, software, and cryptographic protocols. The core components include a decentralized network of compute nodes, a verifiable execution layer (like a zkVM or optimistic fraud-proof system), a blockchain settlement layer, and a tokenomics model for incentives. The network's security and utility depend on the correct integration of these elements. You can explore existing frameworks like zkWASM for zero-knowledge proofs or EigenLayer for restaking security.
On the hardware side, node operators need reliable machines with sufficient CPU, RAM, and storage to perform the designated compute tasks and generate proofs. For networks using zero-knowledge proofs (ZKPs), this often requires specialized hardware accelerators (GPUs/FPGAs) to manage the significant computational overhead of proof generation. The software stack typically involves a node client, a proof generation library (e.g., Circom, Halo2), and an oracle service for fetching real-world data feeds required by DePIN applications.
The blockchain layer acts as the trust anchor and settlement venue. It is where compute tasks are dispatched, results and proofs are verified, and node operators are rewarded or slashed. Networks often build on established L1s like Ethereum or high-throughput L2s like Arbitrum or Base for finality. Smart contracts on this chain manage the network's state, including job queues, staking registries, and payment distributions. The choice of chain impacts security, transaction costs, and interoperability with other DeFi and DePIN protocols.
A sustainable tokenomics model is critical for bootstrapping and securing the network. This involves a native utility token used for paying for compute, staking by node operators to guarantee performance, and governance. Mechanisms like work tokens (where staking grants the right to perform work) or protocol-owned liquidity are common. The model must balance incentives for node operators, developers building on the network, and end-users to ensure long-term growth and decentralization.
Finally, you need developer tooling and clear documentation. This includes SDKs for dApp developers to submit jobs, client libraries for node operators, and monitoring dashboards. Providing easy integration, similar to how The Graph offers subgraphs for indexing, lowers the barrier to entry. Thoroughly testing all components—from proof system correctness under load to smart contract security via audits—is a non-negotiable prerequisite before a mainnet launch.
Network Architecture Overview
This guide outlines the core architectural components required to build a decentralized physical infrastructure network (DePIN) powered by verifiable off-chain computation.
A verifiable compute network for DePIN is a decentralized system where off-chain workers (nodes) execute computational tasks, such as processing sensor data or training AI models, and produce cryptographic proofs of correct execution. The architecture is built on a blockchain oracle layer that connects the on-chain smart contracts, which define jobs and manage payments, to the off-chain compute network. This separation allows for heavy computation to be performed efficiently off-chain while maintaining cryptographic security guarantees on-chain. Key architectural goals include scalability, cost-efficiency for complex tasks, and resistance to malicious or faulty nodes.
The system comprises several interacting layers. The Smart Contract Layer (e.g., on Ethereum, Solana, or a dedicated L2) hosts job registries, staking contracts for node operators, and a verification contract to check computational proofs. The Off-Chain Compute Layer consists of a peer-to-peer network of worker nodes running a client like Bacalhau, Gensyn, or a custom implementation. These nodes receive job specifications, execute them within secure environments (e.g., Docker containers or WASM runtimes), and generate verifiable proofs such as zk-SNARKs or fraud proofs. A Coordinator/Dispatcher Service (which can be decentralized) matches jobs with available nodes based on staking, reputation, and hardware specs.
Verifiability is the cornerstone. After a worker node completes a task, it does not simply return a result; it generates a cryptographic proof. For succinct zero-knowledge proofs (zk-SNARKs/STARKs), this proof is small and cheap to verify on-chain, providing instant finality. For optimistic or fraud-proof systems, results are posted with a challenge period, allowing other nodes to dispute and prove fraud if necessary. The choice depends on the trade-off between proof generation cost, verification cost, and time-to-finality. This mechanism ensures that the network can trust the output of anonymous, potentially untrusted hardware providers.
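For the optimistic path, the following sketch shows how a result with a challenge window might be recorded on-chain. The seven-day period, the OptimisticResultBoard name, and the stubbed fraud-proof check are illustrative assumptions, not a specific protocol's design.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustrative optimistic result board: results finalize after a challenge window
// unless a challenger proves fraud (the fraud-proof check itself is stubbed out).
contract OptimisticResultBoard {
    uint256 public constant CHALLENGE_PERIOD = 7 days;

    struct Result {
        address worker;
        bytes32 outputHash;   // hash of the claimed computation output
        uint256 submittedAt;
        bool finalized;
        bool challenged;
    }

    mapping(uint256 => Result) public results; // jobId => result

    function submitResult(uint256 jobId, bytes32 outputHash) external {
        require(results[jobId].submittedAt == 0, "result exists");
        results[jobId] = Result(msg.sender, outputHash, block.timestamp, false, false);
    }

    // Anyone may challenge during the window; a real system would verify a fraud
    // proof or run an interactive dispute game here, then slash the worker's stake.
    function challenge(uint256 jobId, bytes calldata fraudProof) external {
        Result storage r = results[jobId];
        require(r.submittedAt != 0, "no result");
        require(block.timestamp < r.submittedAt + CHALLENGE_PERIOD, "window closed");
        require(_isValidFraudProof(r.outputHash, fraudProof), "fraud not proven");
        r.challenged = true;
    }

    // After the window, an unchallenged result can be finalized (and paid out elsewhere).
    function finalize(uint256 jobId) external {
        Result storage r = results[jobId];
        require(r.submittedAt != 0, "no result");
        require(!r.challenged, "challenged");
        require(block.timestamp >= r.submittedAt + CHALLENGE_PERIOD, "window open");
        r.finalized = true;
    }

    function _isValidFraudProof(bytes32, bytes calldata) internal pure returns (bool) {
        return false; // placeholder for re-execution or dispute-game verification
    }
}
```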
Integrating this compute layer with physical infrastructure (DePIN) involves defining standard job types. For example, a decentralized wireless network (like Helium) could offload signal quality analysis or mapping computations. An imaging satellite constellation could use the network for on-demand object detection or change analysis on terabytes of data. The smart contract defines the job—input data references (often from decentralized storage like IPFS or Arweave), the container image for the computation, and the required resources (GPU, RAM). Nodes that meet the requirements bid for the job by staking tokens, creating a cryptoeconomic security model.
Launching such a network requires careful planning of the cryptoeconomic incentives. Node operators stake tokens to participate and receive rewards for successful, verifiable work; slashing occurs for provable malfeasance. Clients pay for compute jobs, with fees distributed to workers and a protocol treasury. The architecture must also include mechanisms for node discovery, secure job delivery, and result aggregation. Frameworks like Cosmos SDK or Substrate can be used to build the coordinating blockchain, while compute clients interface with it. The end goal is a trust-minimized, global supercomputer capable of powering the next generation of physical infrastructure applications.
Comparison of Proving Schemes for DePIN
Key trade-offs between cryptographic proof systems for verifying off-chain compute in decentralized physical infrastructure networks.
| Feature / Metric | zk-SNARKs | zk-STARKs | Optimistic Proofs |
|---|---|---|---|
| Proof Size | ~200 bytes | ~45-200 KB | N/A (no proof) |
| Verification Time | < 100 ms | < 200 ms | ~7 days (challenge period) |
| Trust Assumption | Trusted setup required | Transparent (no trusted setup) | 1-of-N honest verifier |
| Quantum Resistance | No (relies on elliptic-curve pairings) | Yes (hash-based) | N/A (depends on underlying chain) |
| Proving Time | Minutes to hours | Seconds to minutes | Seconds |
| Gas Cost for On-Chain Verify | ~500k gas | ~2-5M gas | Variable (dispute resolution) |
| Primary Use Case | Final state verification | High-throughput, transparent proofs | Cost-effective for low-value ops |
Step 1: Designing the Task Marketplace Smart Contracts
The core of a verifiable compute network is a decentralized marketplace that matches compute tasks with provider nodes. This step outlines the essential smart contract architecture to facilitate this coordination.
The marketplace smart contract acts as the central coordinator, managing the lifecycle of compute tasks. Its primary functions are to register verified providers, accept task submissions from clients, handle escrow payments, and record results for verification. A common design pattern uses a request-fulfill model, where a client contract creates a Task struct containing the required computation, a bounty, and a deadline. Providers then bid on or accept these tasks by calling a fulfillTask function.
Key data structures must be carefully designed. A Task struct typically includes fields like id, client, bounty, status (Pending, Fulfilled, Verified, Disputed), and a data field for the computation payload (e.g., a Docker image hash or WASM bytecode). A Provider struct tracks a node's stake, reputation score, and a list of completed tasks. Using access control modifiers is critical to ensure only registered providers can fulfill tasks and only task clients can submit results for verification.
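As a sketch of the data structures and access control just described (field and modifier names are illustrative, and the assigned provider is tracked in a separate mapping so the Task struct matches the submitTask example below):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract MarketplaceStorage {
    enum TaskStatus { Pending, Fulfilled, Verified, Disputed }

    struct Task {
        uint256 id;
        address client;
        uint256 bounty;
        bytes data;        // computation payload, e.g. a Docker image hash or WASM bytecode
        TaskStatus status;
    }

    struct Provider {
        uint256 stake;
        uint256 reputation;
        uint256[] completedTasks;
        bool registered;
    }

    uint256 public taskId;
    mapping(uint256 => Task) public tasks;
    mapping(uint256 => address) public taskProvider; // taskId => provider that fulfilled it
    mapping(address => Provider) public providers;

    // Only providers that have registered (and staked) may fulfill tasks.
    modifier onlyRegisteredProvider() {
        require(providers[msg.sender].registered, "not a registered provider");
        _;
    }

    // Only the task's client may submit its result for verification.
    modifier onlyTaskClient(uint256 _taskId) {
        require(tasks[_taskId].client == msg.sender, "not task client");
        _;
    }
}
```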
Payment and slashing logic secures the network's economic layer. Bounties are held in escrow by the marketplace contract. Upon successful verification (handled by a separate verifier contract or oracle), the bounty is released to the provider. If a provider submits an invalid result or times out, a portion of their stake can be slashed and the bounty may be returned to the client. This disincentivizes malicious behavior. Implementing this requires careful state management to track escrow balances and stake locks.
For scalability, consider separating concerns into multiple contracts. A canonical architecture includes: a Registry for node identity and staking, a Marketplace for task lifecycle management, and a Verifier (or an interface to one) for validating compute results. This modular approach, inspired by systems like Livepeer's protocol, makes the system easier to audit and upgrade. The Marketplace contract would hold references to the addresses of the other core components.
Developers should write and test these contracts using frameworks like Foundry or Hardhat. A basic task submission in Solidity might look like this:
```solidity
// Submit a new compute task and escrow its bounty in the contract.
function submitTask(bytes calldata _data, uint256 _bounty) external payable {
    require(msg.value == _bounty, "Incorrect bounty");
    taskId++;
    tasks[taskId] = Task({
        id: taskId,
        client: msg.sender,
        bounty: _bounty,
        data: _data,
        status: TaskStatus.Pending
    });
    emit TaskSubmitted(taskId, msg.sender, _bounty);
}
```
Thorough unit tests should cover all state transitions and edge cases, especially around payment handling and dispute resolution.
Finally, the contract design must plan for upgradability and governance. Using proxy patterns like the Transparent Proxy or UUPS allows for fixing bugs and adding features post-deployment. Consider integrating a timelock and a governance token for decentralized control over parameter updates, such as staking requirements or slashing penalties. The initial deployment should be on a testnet (like Sepolia) for rigorous testing before a mainnet launch on an L2 like Arbitrum or Base to minimize gas costs for users.
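One way to realize the UUPS pattern mentioned above is with OpenZeppelin's upgradeable contracts. The sketch below assumes OZ v5-style initializers and uses a plain owner as a stand-in for a timelock or governance contract.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {UUPSUpgradeable} from "@openzeppelin/contracts-upgradeable/proxy/utils/UUPSUpgradeable.sol";
import {OwnableUpgradeable} from "@openzeppelin/contracts-upgradeable/access/OwnableUpgradeable.sol";

contract MarketplaceV1 is UUPSUpgradeable, OwnableUpgradeable {
    uint256 public minProviderStake;

    /// @custom:oz-upgrades-unsafe-allow constructor
    constructor() {
        _disableInitializers(); // lock the implementation contract
    }

    function initialize(address owner_, uint256 minStake_) external initializer {
        __Ownable_init(owner_);
        __UUPSUpgradeable_init();
        minProviderStake = minStake_;
    }

    // Governance (here the owner, ideally a timelock) controls parameter updates.
    function setMinProviderStake(uint256 newStake) external onlyOwner {
        minProviderStake = newStake;
    }

    // Restricts who may upgrade the implementation behind the proxy.
    function _authorizeUpgrade(address newImplementation) internal override onlyOwner {}
}
```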
Step 2: Building the Worker Node Client Software
This guide details the process of building and configuring the client software that allows a physical machine to join a verifiable compute network as a worker node.
The worker node client software is the core application that runs on your hardware, connecting it to the network's coordination layer. Its primary responsibilities are to receive compute tasks, execute them in a secure, isolated environment, and generate cryptographic proofs of correct execution. This software bundle typically includes the core client daemon, a proof generation engine (like a zkVM or fraud proof module), and a local task scheduler. For networks like Akash or Render, this involves installing their specific akash-node or render-node packages, while custom networks may require building from a source repository.
A critical component is the execution environment isolation. To ensure security and determinism, tasks must run in sandboxed containers or virtual machines. Most clients use Docker containers for isolation, managed by the client software. The configuration file (e.g., config.toml or config.yaml) defines key parameters: the network RPC endpoint, your node's wallet/identity key, resource allocation limits (CPU cores, RAM, GPU), storage paths, and the staking contract address. Proper configuration here dictates your node's capabilities and eligibility for tasks.
Here is a simplified example of a configuration snippet for a hypothetical vc-client:
```yaml
node:
  rpc_endpoint: "https://rpc.mainnet.vcnetwork.io"
  private_key_path: "/etc/vcnode/operator.key"

resources:
  max_cpu_cores: 8
  max_memory_gb: 32
  gpu_enabled: true
  gpu_model: "nvidia-rtx-4090"

staking:
  contract_address: "0x1234..."
  auto_stake: true
```
After configuration, you start the client as a systemd service to ensure it runs persistently: sudo systemctl start vc-node-client.
Once running, the client registers your node's resource attestation on-chain. This is a signed message detailing your hardware specs, which is verified by the network. The client then enters a polling loop, querying the network's task queue or listening for on-chain events for assigned work. Upon receiving a task, it fetches the required dependencies (like a Docker image or dataset), executes the job, generates the requisite validity proof, and submits both the result and proof back to the network. Monitoring logs is essential for troubleshooting initial setup issues.
The final step is integration with the network's economic layer. Ensure your client's configured wallet has the necessary tokens staked or bonded as required by the network's security model. This stake is often slashed for malicious behavior or downtime. Successful setup is confirmed when your node appears as "Active" in the network's explorer and begins receiving and completing tasks, earning rewards in the protocol's native token for its verifiable compute work.
Step 3: Defining the Proof Circuit for Your Workload
Transform your computational task into a verifiable statement by defining its logic as a zero-knowledge proof circuit.
A proof circuit is a program written in a specialized language like Circom or Noir that defines the exact computation you want to prove. It doesn't execute the workload itself; instead, it creates a set of mathematical constraints that any valid execution must satisfy. Think of it as a blueprint or a set of rules. When a prover runs your original workload, they also generate a witness—the set of inputs, intermediate values, and outputs. The circuit's job is to verify that this witness is consistent with the predefined constraints, proving correct execution without revealing the private data.
Designing an efficient circuit is critical for performance and cost. Complex operations like hashing (SHA256, Poseidon) or signature verification (ECDSA) are expensive in ZK terms. You must optimize by using ZK-friendly primitives where possible. For a DePIN sensor data aggregation task, your circuit might take private sensor readings and a public aggregation key as inputs. Its constraints would enforce that the outputted aggregate (e.g., a sum or average) is correctly computed from the inputs, and that the sensor readings are valid according to a known format or signature.
Here is a simplified conceptual structure for a DePIN data attestation circuit in pseudocode:
```
// Public Inputs: aggregated_value, merkle_root_of_accepted_data
// Private Inputs: raw_sensor_data, sensor_signature

1. Verify sensor_signature matches raw_sensor_data and a known public key.
2. Check that raw_sensor_data meets validity constraints (e.g., within range, correct timestamp).
3. Compute the contribution of this data to the aggregate (e.g., add to a sum).
4. Verify the computed contribution is consistent with the public aggregated_value.
5. Verify the data's hash is included in the public merkle_root.
```
Each of these steps translates into many individual constraints over finite field elements.
After writing your circuit, you compile it to generate two key artifacts: the Proving Key and the Verification Key. The proving key is used by workers to generate proofs for specific executions, while the verification key is embedded into your on-chain verifier contract. This compilation step also reveals the circuit's constraint count, which directly impacts proving time and cost. For scalability, aim to minimize constraints and leverage recursive proof composition if your workload is very large, breaking it into smaller, provable chunks.
Finally, thoroughly test your circuit with a variety of inputs, including edge cases and invalid data, to ensure it correctly accepts valid witnesses and rejects invalid ones. Use the testing frameworks provided by your chosen language (like Circom's circom_tester). A bug in the circuit logic is a critical security flaw, as it could allow a malicious prover to generate a valid proof for an incorrect computation. This circuit definition becomes the core trust anchor for your entire verifiable compute network.
Essential Tools and Resources
These tools and frameworks are commonly used when launching a verifiable compute network for DePIN. Each card focuses on a concrete capability required to prove offchain computation, coordinate nodes, and verify results onchain.
Onchain Verification and Settlement
Smart contracts are responsible for verifying proofs or attestations and settling rewards or penalties.
Core contract responsibilities:
- Verify zk proofs or TEE attestations
- Track job inputs, outputs, and hashes
- Enforce staking, slashing, and reward logic
Best practices:
- Use minimal verifier contracts to reduce gas costs
- Separate job registry from settlement logic
- Store only hashes onchain, not raw outputs
Networks typically deploy these contracts on Ethereum, Arbitrum, or Optimism depending on cost and finality requirements.
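A minimal sketch tying these responsibilities together: a settlement contract that calls an external verifier, stores only hashes, and delegates rewards to a separate staking module. Both interfaces (IProofVerifier, IStaking) are hypothetical placeholders for whatever modules your network actually deploys.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical modules; real networks would substitute their own verifier and staking contracts.
interface IProofVerifier {
    function verify(bytes calldata proof, bytes32 publicInputsHash) external view returns (bool);
}

interface IStaking {
    function reward(address node, uint256 jobId) external;
}

// Settlement is kept separate from the job registry; only hashes touch storage.
contract Settlement {
    IProofVerifier public immutable verifier;
    IStaking public immutable staking;

    mapping(uint256 => bytes32) public settledResults; // jobId => hash of (inputHash, outputHash)

    event JobSettled(uint256 indexed jobId, address indexed node, bytes32 outputHash);

    constructor(IProofVerifier _verifier, IStaking _staking) {
        verifier = _verifier;
        staking = _staking;
    }

    function settle(uint256 jobId, bytes32 inputHash, bytes32 outputHash, bytes calldata proof) external {
        require(settledResults[jobId] == bytes32(0), "already settled");
        // The proof's public inputs commit to the job, its inputs, and its outputs.
        bytes32 publicInputsHash = keccak256(abi.encode(jobId, inputHash, outputHash));
        require(verifier.verify(proof, publicInputsHash), "invalid proof");

        settledResults[jobId] = keccak256(abi.encode(inputHash, outputHash));
        staking.reward(msg.sender, jobId);
        emit JobSettled(jobId, msg.sender, outputHash);
    }
}
```

Slashing for provably wrong results would typically live in a separate dispute path rather than in this happy-path settlement function.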
Decentralized Job Coordination
DePIN networks require coordination layers to match jobs with nodes without centralized schedulers.
Common approaches:
- P2P gossip using libp2p for job discovery
- Onchain job registries with offchain pickup
- Leaderless auctions where nodes bid to execute tasks
Key design considerations:
- Preventing job spam and sybil attacks
- Ensuring deterministic job assignment
- Handling retries and partial failures
Many networks combine light onchain coordination with offchain peer discovery to balance cost and liveness.
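Below is a rough sketch of the "onchain job registries with offchain pickup" approach: jobs carry only a hash of their spec, nodes discover them by watching the JobPosted event, and a small bond on claim deters spam. All names and the bonding rule are illustrative.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Minimal onchain registry: jobs are posted with a hash of their spec,
// offchain nodes discover them via events and claim them by posting a bond.
contract JobRegistry {
    struct Job {
        address requester;
        bytes32 specHash;   // hash of the full job spec stored offchain (e.g., on IPFS)
        address claimedBy;
        uint256 claimBond;
        uint256 deadline;
    }

    uint256 public nextJobId;
    mapping(uint256 => Job) public jobs;

    event JobPosted(uint256 indexed jobId, bytes32 specHash, uint256 deadline);
    event JobClaimed(uint256 indexed jobId, address indexed node);

    function postJob(bytes32 specHash, uint256 deadline) external returns (uint256 jobId) {
        jobId = nextJobId++;
        jobs[jobId] = Job(msg.sender, specHash, address(0), 0, deadline);
        emit JobPosted(jobId, specHash, deadline);
    }

    // A node claims a job by bonding ETH; the bond deters spam claims and no-shows.
    function claimJob(uint256 jobId) external payable {
        Job storage job = jobs[jobId];
        require(job.claimedBy == address(0), "already claimed");
        require(block.timestamp < job.deadline, "expired");
        require(msg.value > 0, "bond required");
        job.claimedBy = msg.sender;
        job.claimBond = msg.value;
        emit JobClaimed(jobId, msg.sender);
    }
}
```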
Step 4: Implementing the Off-Chain Orchestration Layer
This step details the core off-chain system that coordinates hardware, schedules compute jobs, and manages proofs for a decentralized physical infrastructure network (DePIN).
The orchestration layer is the central nervous system of a verifiable compute DePIN, responsible for managing the lifecycle of computational tasks across a distributed network of physical nodes. Unlike a simple job queue, this layer must handle dynamic resource discovery, task scheduling based on hardware capabilities and geographic location, and the aggregation and verification of zero-knowledge proofs (ZKPs) or other attestations. It acts as the primary interface for users submitting jobs and for node operators joining the network. A common architectural pattern uses a set of microservices, often built with frameworks like Node.js or Go, communicating via a message broker like RabbitMQ or Apache Kafka for reliable job distribution.
A critical component is the resource manager, which maintains a real-time registry of available nodes. Each node registers its specifications—such as CPU cores, GPU models, RAM, storage, and location—upon joining the network. This service must validate node identities, often through a staking mechanism or hardware attestation, and track their health and availability. When a compute job is submitted, the scheduler consults this registry to match the job's requirements (e.g., "needs an NVIDIA A100 GPU") with suitable nodes, optimizing for cost, latency, and redundancy. Implementing efficient matching algorithms is key to network performance.
For verifiable compute, the orchestration layer must integrate a proof management system. After a node completes a task, it generates a cryptographic proof, such as a zk-SNARK, demonstrating correct execution. The orchestrator receives this proof, performs initial validation (like checking the proof's structure), and may batch multiple proofs for efficient on-chain verification. This system often includes a relayer service to submit the batched proofs and results to a smart contract on a settlement layer like Ethereum or a high-throughput L2. The smart contract's verification function provides the final, trustless guarantee of computational integrity.
Here is a simplified code example for a job scheduler service using a pseudo-API. It demonstrates the core logic of matching a job to an available node.
```javascript
class JobScheduler {
  constructor(nodeRegistry) {
    this.nodeRegistry = nodeRegistry; // Real-time list of nodes
  }

  scheduleJob(jobSpec) {
    const { requiredGpu, minMemory, region } = jobSpec;
    const eligibleNodes = this.nodeRegistry.getNodes().filter(node => {
      return node.gpuModel === requiredGpu &&
        node.availableMemory >= minMemory &&
        node.region === region &&
        node.status === 'available';
    });

    if (eligibleNodes.length === 0) {
      throw new Error('No suitable nodes available for job');
    }

    // Simple strategy: select the node with the lowest current load
    const selectedNode = eligibleNodes.reduce((prev, curr) =>
      prev.currentLoad < curr.currentLoad ? prev : curr
    );

    // Assign job and update registry
    this.nodeRegistry.assignJob(selectedNode.id, jobSpec.id);
    return { nodeId: selectedNode.id, jobId: jobSpec.id };
  }
}
```
Operational considerations for this layer are paramount. It must be designed for high availability and fault tolerance, as it coordinates the entire network. Using a decentralized oracle network or a committee of staked operators for critical functions like final scheduling decisions can reduce centralization risks. Furthermore, the system needs robust monitoring and metrics for tracking job success rates, node reliability, and proof verification times. Implementing a slashing mechanism for malicious or offline nodes, enforced via the underlying smart contracts, is essential for maintaining network security and service quality. The orchestrator's logic and state should be open-source and verifiable to build trust with network participants.
Integrating Hardware Attestation and Trust
This step establishes the cryptographic foundation for trust in a decentralized physical infrastructure network (DePIN) by verifying the integrity and identity of participating hardware.
Hardware attestation is the process by which a physical device cryptographically proves its identity and the integrity of its software state to a remote verifier. In a DePIN compute network, this is critical for establishing a trusted execution environment (TEE) or verifying that a worker node is running the correct, unmodified client software. Without attestation, the network cannot distinguish between a legitimate node and a malicious actor spoofing hardware to submit fraudulent work or steal data. Protocols like Intel SGX (with attestation via the Intel Attestation Service) and AMD SEV-SNP provide the underlying hardware mechanisms, but the network must integrate a verifier to check these proofs.
The core technical integration involves adding an attestation verification module to your network's consensus or validation layer. When a new node joins, it generates an attestation report—a signed statement from the hardware's root of trust (e.g., the CPU) containing measurements of its firmware and initial software. Your network's smart contract or off-chain verifier must validate this report's signature chain, often by checking it against a known hardware vendor public key or a trusted attestation service. For example, a verifier for an Intel SGX enclave would confirm the report's signature using Intel's root CA and then check that the reported MRENCLAVE (a hash of the enclave's code) matches the expected value for your authorized client.
For decentralized verification, you can implement this logic in a Solidity contract using precompiles or oracles, or run an off-chain verifier service with a whitelist of accepted hardware measurements. The Open Attestation SDK provides tools for parsing and validating reports. Upon successful attestation, the node's public key is registered on-chain, linking its cryptographic identity to a verified hardware state. This process creates a trusted compute base, ensuring that subsequent computations or data handled by that node originate from a known, secure environment.
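The whitelist-and-register flow could be anchored on-chain roughly as follows, assuming the attestation report itself is validated off-chain (or by an oracle) and only the approval is recorded; the measurement naming follows SGX (MRENCLAVE) but the contract is purely illustrative.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustrative enrollment registry: an off-chain attestation verifier (or oracle)
// validates the hardware report, then records the node's key on-chain, provided
// the reported enclave measurement has been whitelisted by governance.
contract AttestationRegistry {
    address public governance;          // sets accepted measurements
    address public attestationVerifier; // off-chain service / oracle allowed to enroll nodes

    mapping(bytes32 => bool) public acceptedMeasurements; // e.g., expected MRENCLAVE values
    mapping(address => bytes32) public nodeMeasurement;   // node key => attested measurement

    event NodeEnrolled(address indexed node, bytes32 measurement);
    event NodeRevoked(address indexed node);

    constructor(address _verifier) {
        governance = msg.sender;
        attestationVerifier = _verifier;
    }

    function setAcceptedMeasurement(bytes32 measurement, bool accepted) external {
        require(msg.sender == governance, "not governance");
        acceptedMeasurements[measurement] = accepted;
    }

    // Called by the attestation verifier after checking the report's signature chain off-chain.
    function enrollNode(address node, bytes32 measurement) external {
        require(msg.sender == attestationVerifier, "not verifier");
        require(acceptedMeasurements[measurement], "measurement not whitelisted");
        nodeMeasurement[node] = measurement;
        emit NodeEnrolled(node, measurement);
    }

    // Nodes that fail re-attestation are revoked (slashing would be handled elsewhere).
    function revokeNode(address node) external {
        require(msg.sender == attestationVerifier, "not verifier");
        delete nodeMeasurement[node];
        emit NodeRevoked(node);
    }
}
```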
Beyond initial enrollment, consider continuous attestation or remote attestation for critical operations. A node may need to re-attest periodically or provide a fresh proof before executing a high-value task, demonstrating that its state hasn't been compromised since boot. This can be done by requesting a new quote from the TEE during task assignment. The network's economic and slashing mechanisms should be tied to attestation status; a node that fails an attestation check or is found to be running unauthorized software should be immediately slashed and removed from the active set to protect network integrity.
Finally, document the attested properties for your participants. This includes the exact hardware requirements (CPU model, TEE type), the expected software measurements (MRENCLAVE, MRSIGNER), and the attestation service endpoints. Transparency here allows node operators to self-verify their setup before joining. By rigorously integrating hardware attestation, your DePIN compute network shifts trust from individual operators to verifiable cryptographic proofs, enabling secure decentralized computation on sensitive data for use cases like AI inference, video rendering, or privacy-preserving data analysis.
Frequently Asked Questions
Common technical questions and troubleshooting for developers building or integrating with a Verifiable Compute Network for DePIN.
What is a verifiable compute network, and why does DePIN need one?
A Verifiable Compute Network is a decentralized system where off-chain computation is performed by independent nodes, and the correctness of the results is cryptographically proven on-chain. This is essential for DePIN (Decentralized Physical Infrastructure Networks) because it enables trustless automation of real-world operations.
Key reasons include:
- Trust Minimization: Physical actions (e.g., triggering a payment for sensor data, proving a drone completed a delivery) require verifiable proof without relying on a central authority.
- Cost Efficiency: Heavy computation (like processing LiDAR data or training AI models) is done off-chain, with only a tiny proof verified on-chain, drastically reducing gas costs.
- Decentralized Coordination: Networks like IoTeX, peaq, and Helium use this model to manage device fleets, validate work, and distribute rewards based on proven contributions.
Conclusion and Next Steps
Your verifiable compute network is now operational. This section outlines the final steps to secure, scale, and integrate your DePIN.
Launching the network is the beginning, not the end. The immediate priority is operational security and monitoring. Set up comprehensive logging for your Coordinator and Executor nodes using tools like Grafana and Prometheus. Monitor key metrics: job completion rate, average proof generation time, stake slashing events, and network latency. Implement automated alerts for failed jobs or consensus deviations. For on-chain components, use a block explorer like Etherscan to track contract interactions and set up event listeners for critical functions like submitResult or slashStake.
Next, focus on network growth and decentralization. A healthy DePIN requires a robust, permissionless set of operators. Develop clear documentation for node operators, covering hardware requirements, setup scripts, and staking procedures. Consider launching a testnet incentive program to bootstrap participation before migrating to mainnet. Engage with communities on developer forums and at hackathons. The goal is to attract a diverse set of operators to prevent geographic or infrastructural centralization, which strengthens the network's censorship resistance and fault tolerance.
Finally, plan for continuous evolution. The verifiable compute landscape advances rapidly. Stay updated on new proving systems like RISC Zero or SP1 which may offer better performance for your use case. Explore EigenLayer for leveraging Ethereum's restaking ecosystem to secure your network's economic layer. Architect your contracts to be upgradeable via a transparent governance mechanism, allowing for future improvements without fragmentation. Your network's long-term success depends on its ability to integrate new cryptographic primitives and scale to meet the growing demands of decentralized AI, physics simulations, and other compute-intensive DePIN applications.