Setting Up a Decentralized Training Job Marketplace

A technical guide to building a peer-to-peer marketplace for AI model training, connecting data providers with compute providers using smart contracts.

A decentralized AI training marketplace is a protocol that matches parties who need machine learning models trained with parties who have the data or computational resources to perform the training. Unlike centralized platforms like AWS SageMaker or Google Vertex AI, these marketplaces operate on a peer-to-peer basis, governed by smart contracts on a blockchain. Core participants include task requesters (who submit jobs and pay), data providers (who contribute datasets), and compute providers (who run training workloads). The blockchain acts as a neutral, trust-minimized coordinator for job posting, bidding, result verification, and payment settlement, eliminating single points of control and failure.
The foundational smart contract architecture typically involves several key components. A Job Registry contract manages the lifecycle of training tasks, storing metadata like the model architecture, hyperparameters, and bounty. A Reputation System tracks the performance history of providers to mitigate malicious actors. Escrow & Payment contracts hold funds securely until job completion and verification. For verifiable computation, platforms often integrate with proof systems like zk-SNARKs (e.g., using circom) or optimistic verification mechanisms, where results are assumed valid unless challenged within a dispute window. An off-chain oracle or indexer is usually required to fetch job results from providers and submit proofs to the chain.
Implementing a basic job listing starts with defining a struct in your smart contract. Below is a simplified Solidity example for a TrainingJob. The contract uses an escrow model where the requester deposits payment upon listing, which is released upon successful verification.
```solidity
struct TrainingJob {
    address requester;
    string datasetHash;   // IPFS CID of training data
    string modelSpecHash; // IPFS CID of model architecture
    uint256 bounty;
    address assignedProvider;
    bool isCompleted;
    bool isVerified;
}

mapping(uint256 => TrainingJob) public jobs;
uint256 public nextJobId;

function listJob(string memory _datasetHash, string memory _modelSpecHash) external payable {
    require(msg.value > 0, "Bounty must be > 0");
    jobs[nextJobId] = TrainingJob({
        requester: msg.sender,
        datasetHash: _datasetHash,
        modelSpecHash: _modelSpecHash,
        bounty: msg.value,
        assignedProvider: address(0),
        isCompleted: false,
        isVerified: false
    });
    nextJobId++;
}
```
Critical challenges in decentralized training include data privacy, computational integrity, and cost efficiency. To address privacy, data is often encrypted or used within trusted execution environments (TEEs) like Intel SGX, or training is performed via federated learning where models are trained locally on data. Integrity is ensured through verification schemes; for instance, a provider might generate a zk-SNARK proof demonstrating that a training step was executed correctly according to the public model spec. However, generating such proofs for large models like GPT-3 is currently impractical, so most operational marketplaces (e.g., Gensyn, Akash Network for generic compute) use optimistic verification and cryptographic challenges for now.
To launch a functional marketplace, you'll need to integrate off-chain components. A job orchestrator (often a decentralized backend like a Cosmos SDK app or a set of keeper networks) matches jobs with providers based on reputation and price. A result verification module runs challenge games or proof verification. Data storage is typically handled by decentralized solutions like IPFS or Filecoin for datasets and model checkpoints. For developers, existing stacks like Cosmos SDK with Ignite CLI or Substrate for custom blockchains can accelerate development, while Ethereum with Layer 2 solutions (e.g., Arbitrum) is common for settlement to reduce gas costs for frequent micro-transactions.
The future of these marketplaces hinges on advances in verifiable computing and efficient proof systems. Projects are exploring succinct proofs for tensor operations and leveraging specialized co-processors. For builders, the immediate focus should be on creating robust economic incentives, a simple developer SDK for integrating training scripts, and clear SLAs (Service Level Agreements) codified in smart contracts. Starting with a niche use case—like fine-tuning specific open-source models—allows for practical testing of the verification and dispute resolution mechanisms before scaling to more complex general-purpose training.
Prerequisites and Tech Stack
Before building a decentralized training job marketplace, you need a foundational environment. This guide covers the essential tools, languages, and infrastructure required to develop, test, and deploy the core smart contracts and frontend.
The core of a decentralized marketplace is its smart contract system. You will need proficiency in Solidity (version 0.8.x or later) for writing secure, upgradeable contracts. Essential development tools include Hardhat or Foundry for compiling, testing, and deploying contracts, and Node.js (v18+) as the runtime environment. A local blockchain like Hardhat Network or Ganache is crucial for rapid iteration and unit testing before moving to testnets.
For the frontend, you'll interact with the blockchain using a library like ethers.js (v6) or viem. A modern framework such as Next.js or Vite with React is recommended for building the user interface. You must also integrate a wallet connection provider, such as RainbowKit or ConnectKit, to handle user authentication via MetaMask or other EIP-1193 compatible wallets. This allows users to sign transactions and pay for job postings or submit work.
Infrastructure choices are critical for decentralization and data availability. You'll need to decide on a storage solution for training datasets and model artifacts. Options include IPFS (via services like Pinata or web3.storage) or Arweave for permanent storage. For decentralized compute, you may integrate with protocols like Akash Network or Gensyn to execute the training jobs, though initial prototypes can simulate this logic on-chain.
Finally, you must configure access to blockchain networks. Start with a testnet like Sepolia or Holesky for deployment. You will need test ETH from a faucet and an RPC endpoint from a provider like Alchemy or Infura. For production, you'll need to plan for mainnet deployment on Ethereum, or consider Layer 2 solutions like Arbitrum or Optimism to reduce transaction costs for users.
Architecture Overview
This guide details the foundational architecture for a decentralized marketplace where users can submit AI training jobs and solvers compete to complete them, with results verified on-chain.
A decentralized training job marketplace is a peer-to-peer network where computational resources are coordinated without a central intermediary. The core system comprises three primary actors: job posters who submit tasks (e.g., fine-tuning a model), solvers who execute the training, and verifiers who validate the results. The system's architecture is built on a smart contract backbone deployed on a blockchain like Ethereum or an L2 (e.g., Arbitrum), which manages job listings, staking, payments, and dispute resolution. This creates a trust-minimized environment where economic incentives, rather than a central authority, ensure reliable execution.
The workflow begins when a job poster deploys a task smart contract. This contract defines the job's parameters: the training dataset (often via an IPFS or Arweave hash), the model architecture, the objective function, and the reward in a native or ERC-20 token. Solvers must stake a security bond to participate, which can be slashed for malicious behavior. When a solver claims a job, they download the dataset off-chain, perform the training computation locally or on their cloud infrastructure, and submit the resulting model weights back to the contract. A critical challenge here is the verification problem—proving the training was done correctly without re-executing it.
To address verification, the architecture typically implements a cryptoeconomic security layer. One common approach is a challenge period or optimistic verification, where submitted results are assumed valid but can be challenged by other network participants (verifiers) within a time window. The verifier must provide a fraud proof, often by executing a smaller, deterministic verification step. An alternative is zero-knowledge proof (ZKP)-based verification, where the solver generates a cryptographic proof (e.g., a zk-SNARK) attesting to correct execution. While computationally intensive for the solver, this provides instant, gas-efficient finality. The choice between optimistic and ZKP-based systems is a key design trade-off between cost, latency, and security guarantees.
The payment and reward mechanism is automated through the smart contract. Upon successful verification and the conclusion of any challenge period, the contract releases the job reward to the solver and returns their staked bond. A protocol fee, often a small percentage of the reward, may be directed to a treasury to fund ongoing development. Failed jobs or successfully challenged submissions result in the solver's bond being slashed, with a portion potentially awarded to the successful challenger. This incentive alignment is crucial: it discourages solvers from submitting low-quality work and encourages verifiers to police the network, creating a self-sustaining ecosystem.
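The payout arithmetic above is easy to get subtly wrong, so it helps to state it explicitly. Below is a minimal Python sketch of the settlement rules, using integer basis points as on-chain contracts typically do; the 2% protocol fee and the 50/50 slash split between challenger and treasury are illustrative assumptions, not protocol constants.

```python
# Illustrative model of the settlement logic described above.
# The fee rate and slash split are example values, not protocol constants.
PROTOCOL_FEE_BPS = 200       # 2% of the reward goes to the treasury
CHALLENGER_SHARE_BPS = 5000  # half of a slashed bond goes to the challenger

def settle(reward: int, bond: int, verified: bool, challenged: bool) -> dict:
    """Return payouts (in token base units) after a job closes."""
    if verified and not challenged:
        fee = reward * PROTOCOL_FEE_BPS // 10_000
        # Solver receives the reward minus the fee, plus their bond back.
        return {"solver": reward - fee + bond, "poster": 0,
                "challenger": 0, "treasury": fee}
    # Failed or successfully challenged: bond is slashed and split,
    # and the reward is returned to the job poster.
    to_challenger = bond * CHALLENGER_SHARE_BPS // 10_000
    return {"solver": 0, "poster": reward,
            "challenger": to_challenger, "treasury": bond - to_challenger}
```

Integer basis-point math mirrors how Solidity contracts avoid floating point; every unit deposited is accounted for in exactly one payout bucket.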
For developers, implementing this architecture involves several key smart contract components. You'll need a factory contract to deploy new job instances, a staking contract to manage solver bonds, and a registry for tracking active solvers and their reputations. Off-chain, you need oracle services or keeper networks (like Chainlink Automation) to trigger contract functions based on time or events, and client SDKs for solvers to interact with jobs. A reference stack might use Solidity for contracts, Hardhat for development, The Graph for indexing and querying job data, and IPFS for decentralized storage of datasets and model artifacts.
Key Concepts and Components
Building a marketplace for AI training jobs requires integrating several core blockchain components. This section covers the essential systems you'll need to implement.
Job Description & Incentive Smart Contracts
The core logic of the marketplace is defined in smart contracts. These handle:
- Job Posting: Structuring tasks with data specs, model architecture, and reward.
- Staking & Slashing: Requiring workers to stake tokens as collateral, which can be slashed for malicious or incorrect work.
- Reward Distribution: Automatically paying out rewards to workers upon successful, verified task completion.
- Dispute Resolution: Managing challenges to submitted work, often via a decentralized oracle or jury system.
Decentralized Compute & Proof Systems
You need a mechanism to prove that training work was performed correctly. Key approaches include:
- Proof of Learning: Cryptographic proofs that a specific model was trained on a given dataset.
- Zero-Knowledge Proofs (ZKPs): For privacy, proving a model meets accuracy thresholds without revealing the underlying data or weights.
- Trusted Execution Environments (TEEs): Using hardware enclaves (like Intel SGX) to guarantee code execution integrity.
- Federated Learning Protocols: Coordinating training across many devices while keeping data local, using protocols like FedAvg.
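The FedAvg step mentioned above is just a sample-count-weighted average of client updates. A toy sketch, with flat Python lists standing in for weight tensors:

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: weight each client's parameters by the number
    of local training samples it used, then sum."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    avg = [0.0] * dim
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += w * (n / total)
    return avg
```

In a real deployment the coordinator would aggregate serialized tensors, but the weighting logic is exactly this.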
Reputation & Worker Selection Mechanisms
A robust marketplace needs a system to identify reliable workers.
- On-Chain Reputation Scores: Track each worker's history of successful completions, slashing events, and dispute outcomes.
- Bonding Curves: Use token bonding curves for job assignment, where workers with higher reputation pay lower bonds.
- Delegated Staking: Allow token holders to stake on behalf of workers they trust, sharing in rewards and risks.
- Task Matching Algorithms: Implement off-chain or on-chain logic to match jobs to workers based on reputation, stake, and hardware specs.
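One way to realize the reputation scores and worker ranking above is an exponential moving average over job outcomes plus a simple sort. A sketch, where the smoothing factor and the worker record fields are tunable assumptions for the example:

```python
ALPHA = 0.2  # EMA smoothing factor -- a tunable assumption, not a standard value

def update_reputation(score: float, succeeded: bool) -> float:
    """Blend the latest job outcome (1.0 = success, 0.0 = failure or
    slashing event) into the worker's running reputation score."""
    outcome = 1.0 if succeeded else 0.0
    return (1 - ALPHA) * score + ALPHA * outcome

def rank_workers(workers):
    """Order candidate workers by reputation, breaking ties by stake."""
    return sorted(workers, key=lambda w: (w["reputation"], w["stake"]), reverse=True)
```

An EMA keeps the on-chain state tiny (one number per worker) while still weighting recent behavior more heavily than old history.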
Tokenomics & Fee Structures
Design the economic model that sustains the marketplace.
- Native Utility Token: Used for staking, paying rewards, and governance.
- Fee Models: Consider protocol fees on job rewards (e.g., 2-5%), slashing redistributions, and transaction fees.
- Inflation/Reward Schedules: Programmatic token emissions to bootstrap early network participation and compute supply.
- Treasury & Grants: Allocate a portion of fees to a community treasury to fund ecosystem development and critical job subsidies.
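A programmatic emission schedule like the one described is often a geometric decay. A sketch in integer arithmetic (the initial amount and decay ratio are illustrative, not recommended parameters):

```python
def emission_schedule(initial: int, decay_num: int, decay_den: int, epochs: int):
    """Geometric token emissions: each epoch emits decay_num/decay_den
    of the previous epoch's amount, in integer token units."""
    emissions = []
    current = initial
    for _ in range(epochs):
        emissions.append(current)
        current = current * decay_num // decay_den
    return emissions
```

Front-loading emissions this way bootstraps early compute supply while capping long-run inflation.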
Step 1: Designing the Job Specification Smart Contract
The core of a decentralized training job marketplace is a smart contract that defines the job's parameters, rules, and payment logic. This step outlines how to design this foundational contract.
A Job Specification smart contract acts as the single source of truth for a machine learning training task. It defines the immutable rules of engagement between a job creator (who needs a model trained) and solvers (who compete to train it). Key parameters stored on-chain include the training dataset's IPFS hash, the model architecture definition, the target evaluation metric (e.g., accuracy, F1-score), the submission deadline, and the total reward pool in a token like ETH or a stablecoin. This on-chain specification ensures transparency and prevents disputes about the job's original requirements.
The contract must implement a clear lifecycle and state machine. Typical states are Open, Training, Evaluation, and Completed. The contract transitions from Open to Training when the creator deposits the reward and locks the job. Solvers then submit their trained model checkpoints (referenced via IPFS or Arweave) during the submission window. A critical design choice is the evaluation mechanism. For simple, verifiable metrics, you can use an on-chain evaluation function. For complex ML tasks, you'll need a decentralized oracle like Chainlink Functions or a designated committee of evaluators whose addresses are pre-defined in the contract to submit the final scores.
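The lifecycle above amounts to a small state machine, and it pays to make the legal transitions explicit rather than scattering require checks. A Python model of the Open → Training → Evaluation → Completed flow described in the text:

```python
from enum import Enum, auto

class JobState(Enum):
    OPEN = auto()
    TRAINING = auto()
    EVALUATION = auto()
    COMPLETED = auto()

# Legal transitions for the lifecycle described above.
TRANSITIONS = {
    JobState.OPEN: {JobState.TRAINING},         # creator deposits reward, locks job
    JobState.TRAINING: {JobState.EVALUATION},   # submission window closes
    JobState.EVALUATION: {JobState.COMPLETED},  # scores recorded, challenges resolved
    JobState.COMPLETED: set(),                  # terminal
}

def transition(state: JobState, new_state: JobState) -> JobState:
    """Reject any transition not in the table, as a require() would on-chain."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.name} -> {new_state.name}")
    return new_state
```

In Solidity the same table collapses to require checks on an enum field, but enumerating it once makes the contract easier to audit.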
Payment logic is the most security-sensitive component. A common pattern is a winner-takes-all or split reward model based on performance. The contract must only release funds after the evaluation result is verifiably recorded on-chain and a challenge period (if any) has passed. Use OpenZeppelin's ReentrancyGuard and pull-over-push patterns for secure withdrawals. For example:
```solidity
function payoutWinner(address solver, uint256 score) external onlyEvaluator {
    require(jobState == JobState.Evaluation, "Not in eval");
    require(scores[solver] == score, "Invalid score");
    jobState = JobState.Completed;
    _safeTransferReward(solver); // Internal function using transfer()
}
```
Consider gas optimization and data storage. Storing large datasets on-chain is prohibitively expensive. Instead, store content-addressed hashes (CIDs) from decentralized storage solutions. Use bytes32 for hashes and pack smaller uint values where possible. Emit informative events like JobCreated, SubmissionReceived, and WinnerPaid for off-chain indexers and frontends to track contract activity. These events are crucial for building a responsive application layer that monitors the marketplace.
Finally, integrate with a decentralized identity or reputation system. You can require solvers to stake a security deposit or use a registry like Ethereum Attestation Service (EAS) to link submissions to a verifiable credential of past performance. This mitigates spam and low-quality submissions. The completed specification contract becomes a verifiable, autonomous agreement that executes payments based on cryptographically proven outcomes, forming the trustless backbone of your marketplace.
Step 2: Implementing the Bidding and Auction Mechanism
This section details the smart contract logic for a first-price sealed-bid auction, where trainers submit bids and a job creator selects the winner.
The auction mechanism is the core of the marketplace's coordination logic. We implement a first-price sealed-bid auction where interested trainers submit encrypted bids containing their proposed fee and a commitment hash. The bid function accepts the trainer's address, the job ID, an encrypted bid (typically the fee encrypted with the job creator's public key), and a commitment which is a keccak256 hash of the plaintext bid and a secret salt. This ensures bids are hidden during the bidding period to prevent front-running and bidding wars. The contract stores this data in a mapping like bids[jobId][trainer].
To reveal their bid and become eligible for selection, a trainer must call a revealBid function after the bidding period ends. This function requires the plaintext bid data (fee, salt) used to create the original commitment. The contract recalculates the keccak256 hash of these inputs and verifies it matches the stored commitment. If valid, the plaintext bid data is stored, marking the bid as revealed. Only revealed bids can be selected by the job creator. This commit-reveal scheme is a standard pattern for on-chain privacy.
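The commit-reveal check can be expressed in a few lines. A Python sketch of the hashing scheme — note that the contract would use keccak256 over abi.encodePacked(fee, salt), while this illustration substitutes sha256 from the standard library, which behaves identically for the purposes of the scheme:

```python
import hashlib

def commit(fee: int, salt: bytes) -> bytes:
    """Commitment over the plaintext bid. On-chain this would be
    keccak256(abi.encodePacked(fee, salt)); sha256 stands in here."""
    return hashlib.sha256(fee.to_bytes(32, "big") + salt).digest()

def reveal_ok(commitment: bytes, fee: int, salt: bytes) -> bool:
    """Recompute the hash from the revealed values and compare, exactly
    as the revealBid function does against the stored commitment."""
    return commit(fee, salt) == commitment

# A trainer commits to a fee of 1,500 token units with a secret salt.
c = commit(1_500, b"my-secret-salt")
```

The salt is essential: without it, an observer could brute-force small fee values by hashing candidates and comparing against the stored commitment.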
The job creator finalizes the auction by calling a selectWinner function, specifying the job ID and the address of the winning trainer. The contract validates that: the caller is the job creator, the bidding period is over, the specified trainer has a revealed bid, and no winner has been selected yet. Upon successful validation, the contract stores the winner, transfers the staked collateral from the job creator to the contract (or releases it if already staked), and emits an event. The selected trainer's address and bid fee are now recorded on-chain, forming the agreement for the subsequent work phase.
Critical security checks must be integrated. The bid function should prevent duplicate bids from the same address for the same job. The revealBid function must enforce a strict time window after the bidding ends to prevent indefinite state bloating. The selectWinner function should include a failsafe timer, allowing the job creator to reclaim collateral if no suitable bids are revealed. These guards make the system robust against common auction exploits like bid suppression or griefing.
For developers, consider using OpenZeppelin's EIP-712 for structured data hashing to improve the user experience for off-chain signature generation (if you add permit functionality for fees). The encrypted bid can be implemented using the eth-sig-util library client-side, encrypting with the job creator's public key from their wallet. Always emit comprehensive events (BidPlaced, BidRevealed, WinnerSelected) to allow indexers and frontends to track auction state efficiently.
Step 3: Securing Payments with an Escrow Contract
This guide explains how to build a Solidity escrow contract to secure payments between clients and trainers in a decentralized marketplace, ensuring funds are only released upon job completion.
An escrow smart contract acts as a neutral, trustless third party that holds funds until predefined conditions are met. In our training job marketplace, when a client posts a job, they will deposit the agreed payment into the escrow contract. The funds are locked and cannot be accessed by the trainer until the client approves the delivered work. This mechanism eliminates counterparty risk: the client's funds are safe from misuse, and the trainer has a guaranteed payment source upon successful completion. The contract's logic is immutable and transparent, enforced by the blockchain.
The core contract state includes variables to track the job's status, the involved parties (client and trainer), the amount in escrow, and a boolean flag like workCompleted. Key functions are deposit(), which the client calls to fund the escrow (often payable with msg.value), and releasePayment(), which transfers the funds to the trainer. Crucially, releasePayment() should be callable only by the client and only when workCompleted is true. A refund() function, callable only by the client if work is not completed within a deadline, provides an exit mechanism.
Here is a simplified Solidity code snippet illustrating the escrow structure:
```solidity
contract TrainingEscrow {
    address public client;
    address public trainer;
    uint256 public amount;
    bool public workCompleted;

    constructor(address _trainer) payable {
        client = msg.sender;
        trainer = _trainer;
        amount = msg.value;
        workCompleted = false;
    }

    function confirmCompletion() external {
        require(msg.sender == client, "Only client can confirm");
        workCompleted = true;
        payable(trainer).transfer(amount);
    }
}
```
This basic pattern uses transfer for payment, which forwards only a fixed 2300-gas stipend and can fail if the trainer is a contract. In production, follow the Checks-Effects-Interactions pattern and use a pull-based withdrawal (recording a balance the trainer later withdraws) to prevent reentrancy attacks.
To integrate this with the marketplace, your job listing or matching contract (from Step 2) would deploy a new instance of the escrow contract for each agreed-upon job. The client address and trainer address are passed to the constructor, and the payment is sent along with the deployment transaction. The contract address should be stored (e.g., in a mapping in your main contract) for future reference. The frontend application would then monitor the escrow contract's state, enabling the client to trigger confirmCompletion via a button once they verify the training materials or model output.
Security is paramount. Beyond reentrancy guards, implement access controls rigorously using modifiers like onlyClient or onlyTrainer. Introduce a dispute resolution mechanism, such as a timelock after which either party can escalate to a decentralized oracle or a simple multisig of trusted community members if consensus isn't reached. For mainnet deployment, always audit your contract and consider using established libraries like OpenZeppelin's Escrow or ConditionalEscrow contracts as a foundation to reduce risk.
Step 4: Building a Result Verification System
This step implements a decentralized verification mechanism to ensure AI model trainers are paid only for valid, high-quality work, eliminating the need for a centralized arbiter.
A decentralized marketplace requires a trustless mechanism to verify the results of a training job before releasing payment. The core challenge is preventing a malicious trainer from submitting a poorly trained or random model to claim the bounty. Our system solves this by implementing a commit-reveal scheme with slashing, where the trainer's stake is at risk during a verification period. The job's verificationLogic—a smart contract function defined by the job creator—objectively evaluates the submitted model weights or metrics.
The verification flow follows a specific sequence. First, the trainer submits a cryptographic commitment (e.g., a hash of the model file and a secret) to signal job completion, locking the bounty. Then, a verification window opens, during which the trainer must reveal the actual model data. The verificationLogic is executed on-chain or via an oracle (like Chainlink Functions) to check the model against the job's success criteria (e.g., accuracy threshold on a test set). If verification passes, the bounty and stake are released to the trainer. If it fails, the trainer's stake is slashed and the bounty is returned to the creator.
Here is a simplified Solidity code snippet for the verification contract's core function:
```solidity
function submitForVerification(
    uint256 jobId,
    bytes32 commitmentHash
) external onlyTrainer(jobId) {
    Job storage job = jobs[jobId];
    require(job.status == JobStatus.Training, "Invalid status");
    job.trainerCommitment = commitmentHash;
    job.verificationDeadline = block.timestamp + VERIFICATION_WINDOW;
    job.status = JobStatus.Verification;
}

function revealAndVerify(
    uint256 jobId,
    bytes calldata modelData,
    bytes32 secret
) external {
    Job storage job = jobs[jobId];
    require(
        keccak256(abi.encodePacked(modelData, secret)) == job.trainerCommitment,
        "Invalid reveal"
    );
    require(block.timestamp <= job.verificationDeadline, "Window closed");

    bool isValid = job.verificationLogicContract.verify(modelData, job.testDataHash);
    if (isValid) {
        _payoutTrainer(jobId);  // Release bounty + stake to trainer
    } else {
        _slashAndRefund(jobId); // Slash stake, return bounty to creator
    }
}
```
Designing effective verificationLogic is critical. For simpler tasks, on-chain verification using a pre-agreed test dataset hash can work. For complex AI model validation, you typically need an oracle to compute metrics off-chain in a deterministic way. Services like Chainlink Functions or API3 can fetch a result from a trusted off-chain verification API. The key is that the logic must be deterministic and publicly verifiable; all parties must agree on the inputs (model data, test data hash) and the computational result.
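A deterministic off-chain verifier along these lines first pins the test inputs by hash, then applies the agreed metric. A sketch, where the 0.90 accuracy threshold is an illustrative job parameter and sha256 stands in for whatever hash the job spec fixes:

```python
import hashlib

ACCURACY_THRESHOLD = 0.90  # agreed in the job spec (illustrative value)

def verify(predictions, labels, test_set_blob: bytes, expected_test_hash: str) -> bool:
    """Deterministic verification: confirm the test set matches the
    pre-agreed hash, then check the accuracy threshold. Every input is
    public, so any party can recompute the same result."""
    if hashlib.sha256(test_set_blob).hexdigest() != expected_test_hash:
        return False  # wrong or tampered test data
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels) >= ACCURACY_THRESHOLD
```

Because the function is a pure mapping from public inputs to a boolean, an oracle network or a dispute jury re-running it will always agree on the outcome.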
This architecture creates strong economic incentives for honesty. Trainers are motivated to submit quality work to recover their staked funds and earn the bounty. Job creators are protected from paying for useless models. The system's security scales with the value of the stake relative to the bounty. For high-value jobs, the marketplace or a decentralized jury system (like Kleros) can be integrated as a fallback for disputed verifications, though this adds complexity and cost.
Finally, emit clear events for each state change (e.g., JobSubmitted, VerificationPassed, VerificationFailed) to allow indexers and frontends to track job progress. The completed, verified model weights can be stored on decentralized storage like IPFS or Arweave, with the resulting content identifier (CID) recorded on-chain as the final, immutable output of the marketplace transaction.
Comparison of Verification Methods for Training Jobs
Methods for verifying the integrity and correctness of AI model training on decentralized compute.
| Verification Method | ZK Proofs (e.g., zkML) | Optimistic Fraud Proofs | Trusted Execution (TEEs) |
|---|---|---|---|
| Trust Assumption | Cryptographic | Economic (bonded) | Hardware vendor |
| Verification Latency | ~1-2 hours | ~7 days (challenge period) | < 1 sec |
| Compute Overhead | 100-1000x training cost | ~1x (only if disputed) | ~10-20% |
| Client Proof Size | ~10-100 MB | Full model + dataset | Remote attestation (~1 KB) |
| Suitable for Model Size | < 100M parameters | Any size | Any size (hardware-limited) |
| Decentralized Verifier Network | Yes | Yes | No (relies on vendor attestation) |
| Example Implementation | EZKL, RISC Zero | Truebit, Arbitrum | Intel SGX, AMD SEV |
Step 5: Creating the Off-Chain Job Orchestrator
Build the backend service that coordinates training jobs between requesters and solvers, managing the entire workflow lifecycle.
The Job Orchestrator is the central off-chain service that manages the lifecycle of a decentralized training job. It acts as a state machine, listening for on-chain events from the JobRegistry smart contract and coordinating the off-chain execution steps. When a new JobPosted event is emitted, the orchestrator validates the job parameters, selects a suitable solver from the available pool, and initiates the training process. This service is responsible for reliability, handling retries, and ensuring that the final model and proof are submitted back to the blockchain upon completion.
Key responsibilities of the orchestrator include solver selection, data staging, and progress monitoring. For solver selection, you can implement a simple round-robin algorithm or a more sophisticated reputation-based system that considers a solver's past performance and stake. The orchestrator must securely fetch the training dataset from the decentralized storage location specified in the job (e.g., IPFS CID or Arweave TXID) and make it available to the selected solver's execution environment. It continuously monitors the solver's progress via heartbeats or log streams.
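Solver selection can start much simpler than a full reputation market. A sketch of a filter-then-price rule — the reputation floor and the fields on the solver records are assumptions for the example, not part of any protocol:

```python
MIN_REPUTATION = 0.8  # illustrative eligibility floor, not a protocol constant

def select_solver(solvers, required_gpu_mem_gb: int):
    """Pick the cheapest solver that meets the job's hardware requirement
    and clears the reputation floor. Returns None if no one qualifies."""
    eligible = [
        s for s in solvers
        if s["gpu_mem_gb"] >= required_gpu_mem_gb
        and s["reputation"] >= MIN_REPUTATION
    ]
    if not eligible:
        return None
    return min(eligible, key=lambda s: s["price_per_hour"])
```

Swapping min-price for a stake-weighted lottery or round-robin is a one-line change, which is why isolating selection behind a single function is worth doing early.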
Implement the core orchestrator logic using a framework like Node.js with Express or Python with FastAPI. The service should maintain a persistent database (e.g., PostgreSQL) to track job states: Pending, Assigned, Running, Completed, or Failed. Use a message queue like RabbitMQ or Redis to handle asynchronous tasks such as dispatching jobs to solvers and processing callback notifications. Here is a simplified structure for a job processing loop:
```javascript
async function processJob(jobId) {
  const job = await db.getJob(jobId);
  try {
    const solver = await selectSolver(job.requirements);
    await dispatchToSolver(solver, job);       // Pending -> Assigned -> Running
    await monitorJob(solver, jobId);           // resolves on success, throws on failure
    await db.setJobState(jobId, 'Completed');
  } catch (err) {
    await db.setJobState(jobId, 'Failed');     // the queue can safely retry
  }
}
```
To ensure fault tolerance, design the orchestrator to be stateless where possible, allowing for horizontal scaling. Implement idempotent operations so that retrying a failed step does not cause duplicate submissions. Use environment variables for all configuration, including RPC endpoints for chains like Ethereum or Polygon, and private keys for the orchestrator's wallet to sign transactions. The service must also expose a secure API or dashboard for administrative oversight, providing visibility into queue lengths, solver performance, and job success rates.
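Idempotency is easiest to enforce by keying each side-effecting step to its job. A minimal in-memory sketch (a real orchestrator would back this table with its PostgreSQL store):

```python
_done = {}  # (job_id, step) -> result; a database table in production

def run_once(job_id: str, step: str, action):
    """Execute `action` at most once per (job, step). Retries return the
    recorded result instead of repeating the side effect, so a crashed
    worker can safely re-run the whole job pipeline."""
    key = (job_id, step)
    if key not in _done:
        _done[key] = action()
    return _done[key]

# A side-effecting step (e.g., submitting a result transaction on-chain).
calls = {"n": 0}
def submit_result():
    calls["n"] += 1
    return "0xabc123"  # hypothetical transaction hash
```

Guarding the on-chain submission this way is what prevents a retried orchestrator from paying gas twice or double-submitting a result.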
Finally, integrate with the on-chain contracts to complete the cycle. Upon receiving a success signal from the solver, the orchestrator must fetch the final model weights and the generated ZK-SNARK proof (or similar). It then calls the JobRegistry.submitResult(jobId, modelCID, proof) function, paying the gas fee from its managed wallet. This transaction moves the job to the Completed state on-chain, releases payment to the solver, and makes the trained model available to the original requester, closing the decentralized loop.
Essential Resources and Tools
Key protocols, frameworks, and infrastructure components required to build a decentralized marketplace for machine learning training jobs, spanning production concerns like compute orchestration, data availability, payments, and verification.
Verifiable Training and Result Validation
A decentralized marketplace must answer a critical question: did the provider actually run the training job as specified? Verification mechanisms reduce trust assumptions.
Common approaches:
- Deterministic re-execution on smaller subsets of data
- Checkpoint and log verification using cryptographic hashes
- Reputation systems tied to historical job success rates
Emerging techniques:
- Zero-knowledge proofs for ML inference and limited training steps
- Challenge-response schemes where validators request partial recomputation
Practical reality:
- Full ZK proofs for large-scale training are still impractical
- Most systems combine economic incentives, sampling-based checks, and slashing
Designing verification early prevents your marketplace from becoming a race to the bottom on dishonest compute. Even partial verification significantly improves reliability and user trust.
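Sampling-based checks of the kind listed above can be built from hash-chained checkpoints: the provider logs a hash per training step, and a verifier re-derives a few randomly chosen steps and compares them against the log. A sketch, where the recompute callback and the shared seed are stand-ins for deterministic re-execution and an on-chain randomness beacon:

```python
import hashlib
import random

def step_hash(prev_hash: bytes, step_index: int, weights_blob: bytes) -> bytes:
    """Chain each checkpoint to the previous one, so a provider cannot
    alter one step without invalidating every later hash."""
    return hashlib.sha256(
        prev_hash + step_index.to_bytes(8, "big") + weights_blob
    ).digest()

def spot_check(claimed_hashes, recompute, sample_size: int, seed: int) -> bool:
    """Recompute a random sample of steps and compare to the claimed log.
    `recompute(i)` deterministically re-derives the weights for step i;
    the seed would come from a shared randomness beacon."""
    rng = random.Random(seed)
    n = len(claimed_hashes)
    indices = rng.sample(range(1, n), min(sample_size, n - 1))
    for i in indices:
        expected = step_hash(claimed_hashes[i - 1], i, recompute(i))
        if expected != claimed_hashes[i]:
            return False
    return True
```

Checking k of n steps gives probabilistic rather than absolute assurance, but combined with slashing it makes wholesale fabrication of a training run economically irrational.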
Frequently Asked Questions (FAQ)
Common questions and troubleshooting for building a decentralized training job marketplace on platforms like Bittensor, Gensyn, or Ritual. Focused on smart contract logic, incentive alignment, and node operation.
How does the marketplace ensure honest work without a central authority?
The core mechanism is a cryptoeconomic incentive loop that aligns the interests of job requesters (clients), compute providers (miners/validators), and verifiers. Clients submit tasks and stake a bounty. Miners compete to complete the machine learning training job, submitting a proof-of-work or proof-of-learning. A separate set of validators then verifies the work's correctness, often through cryptographic proofs like zk-SNARKs or interactive challenges. Successful verification triggers a smart contract to release the bounty to the miner and pay a fee to the verifier. Failed or fraudulent work results in slashing the miner's stake. This model, used by protocols like Gensyn and Bittensor's subnet 1, ensures cost-effective, trustless computation.