A merit-based airdrop is a token distribution mechanism that allocates rewards based on verifiable, on-chain user contributions rather than simple wallet balances or Sybil-vulnerable actions. Unlike early airdrops that rewarded mere interaction, modern systems like EigenLayer, Starknet, and LayerZero use complex criteria to identify and reward proven value. The core architectural challenge is creating a transparent, automated, and manipulation-resistant scoring system that maps user actions to a quantifiable merit score, which then determines token allocation.
How to Architect a Merit-Based Airdrop System
A technical guide for developers on designing and implementing airdrop systems that reward genuine user contributions instead of simple wallet activity.
The system architecture typically involves three core components: a Data Indexer, a Scoring Engine, and a Distribution Module. The Data Indexer queries blockchain data (events, transaction logs, state changes) from sources like The Graph, Dune Analytics, or custom indexers to build a profile of user activity. The Scoring Engine applies a predefined set of rules and weights to this data—such as protocol usage volume, duration of engagement, governance participation, or liquidity provision depth—to calculate a final merit score. This logic is often implemented off-chain for flexibility but must be fully transparent and reproducible.
For the scoring logic, consider a model that weights different actions. For example, you might assign points where total_score = (swap_volume * 0.4) + (lp_days * 0.3) + (governance_votes * 0.2) + (referral_bonus * 0.1). This must be documented in a public merit criteria specification. A critical implementation step is Sybil resistance, which can involve techniques like analyzing transaction graph clusters, requiring a minimum activity threshold, or using proof-of-personhood solutions like World ID to filter out bot-driven wallets.
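A minimal sketch of that weighted model in JavaScript, using the example weights from the formula above (the field names and point scales are illustrative, not a fixed schema):

```javascript
// Illustrative weights, matching the example formula above
const WEIGHTS = {
  swapVolume: 0.4,
  lpDays: 0.3,
  governanceVotes: 0.2,
  referralBonus: 0.1,
};

// Compute a merit score for one wallet's activity profile
function meritScore(activity) {
  return (
    activity.swapVolume * WEIGHTS.swapVolume +
    activity.lpDays * WEIGHTS.lpDays +
    activity.governanceVotes * WEIGHTS.governanceVotes +
    activity.referralBonus * WEIGHTS.referralBonus
  );
}

// Example: 1,000 units of swap volume, 90 LP-days,
// 5 governance votes, and a 10-point referral bonus
const score = meritScore({
  swapVolume: 1000,
  lpDays: 90,
  governanceVotes: 5,
  referralBonus: 10,
});
console.log(score); // ≈ 429 (400 + 27 + 1 + 1)
```

In a real pipeline each input would first be normalized (e.g., capped or log-scaled) so that one high-variance metric cannot dominate the others.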
The final component is the Distribution Module, which uses the calculated scores to determine token allocations. A common method is a linear distribution based on score proportionality, but you can also use tiered brackets (e.g., top 10% of users get a fixed larger share). This module is responsible for generating the final merkle root of eligible addresses and their entitlements. The claim process is usually handled by a smart contract that verifies proofs against this root, as seen in the Uniswap and Arbitrum airdrop contracts.
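As a sketch of the tiered-bracket approach, the following splits a fixed pool between the top 10% of scorers and everyone else. The 50/50 pool split and 10% cutoff are illustrative assumptions, not recommendations:

```javascript
// Split a fixed token pool across score-based tiers.
// Tier boundary (top 10%) and pool split (50/50) are illustrative.
function tieredAllocations(scores, pool) {
  // scores: [{ address, score }], pool: total tokens to distribute
  const ranked = [...scores].sort((a, b) => b.score - a.score);
  const topCount = Math.max(1, Math.ceil(ranked.length * 0.1));
  const topPool = pool * 0.5;      // top tier shares half the pool
  const restPool = pool - topPool; // everyone else shares the rest
  return ranked.map((entry, i) => ({
    address: entry.address,
    amount:
      i < topCount
        ? topPool / topCount
        : restPool / (ranked.length - topCount),
  }));
}
```

The resulting `(address, amount)` pairs are exactly the leaves the module would feed into merkle tree construction.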
When implementing, start by defining clear, objective metrics that align with your protocol's goals. Use event-driven architecture for your indexer to efficiently capture relevant on-chain actions. Thoroughly test the scoring logic with historical blockchain data before the snapshot. Finally, ensure full transparency by publishing the eligibility criteria, the final score calculation methodology, and the merkle root well before the claim period begins, as community trust is paramount for a successful merit-based drop.
Prerequisites and System Requirements
Before deploying a merit-based airdrop, you must establish the technical and conceptual foundation. This involves selecting the right blockchain, designing a secure data pipeline, and preparing your development environment.
The core of a merit-based airdrop is a verifiable data pipeline. You need a reliable method to collect, process, and attest to user activity data from on-chain and off-chain sources. Common data sources include on-chain interactions (e.g., transaction history, NFT holdings, governance votes), off-chain contributions (e.g., GitHub commits, forum posts), and social graph data. Tools like The Graph for indexing on-chain data or custom indexers are essential for querying this information efficiently and reliably.
Your choice of blockchain dictates the smart contract language, tooling, and final user experience. For airdrops with complex claim logic, EVM-compatible chains like Ethereum, Arbitrum, or Polygon are popular due to their mature tooling (Hardhat, Foundry) and extensive library support. For cost-sensitive airdrops targeting a broad audience, Layer 2 solutions or app-chains built with the Cosmos SDK or Substrate offer lower fees and deeper customization. Consider finality time and bridge security if distributing across multiple chains.
A secure development environment is non-negotiable. You will need Node.js (v18+), a package manager like npm or yarn, and an IDE such as VS Code. Essential development tools include Hardhat or Foundry for smart contract development and testing, along with wallet management via MetaMask or WalletConnect. For managing private keys and signing transactions in your backend, use a secure signer library like ethers.js or viem, and never hardcode private keys in your source code.
You must architect a system to calculate user scores and generate a Merkle root for the claim. This typically involves a backend service (written in Node.js, Python, or Go) that fetches user data, applies your scoring algorithm, and outputs a list of eligible addresses and amounts. This list is then used to generate a Merkle tree, whose root is stored on-chain. Libraries like merkletreejs or OpenZeppelin's MerkleProof are standard for this process. Ensure your scoring logic is deterministic and reproducible for auditability.
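For illustration, here is a minimal deterministic merkle-root builder using only Node's standard library. It uses SHA-256 as a stand-in hash; a production tree for EVM claim contracts would use keccak256 (e.g., via merkletreejs or OpenZeppelin's tooling) so that proofs can be verified on-chain:

```javascript
import { createHash } from "node:crypto";

// SHA-256 stand-in for illustration; EVM claim contracts verify
// proofs with keccak256, so a production tree must use that instead.
const hash = (data) => createHash("sha256").update(data).digest();

// Build a merkle root over (address, amount) entries.
function merkleRoot(entries) {
  // Leaf = hash of "address:amount"; sorting makes the tree
  // deterministic regardless of input order (reproducibility).
  let level = entries
    .map(({ address, amount }) => hash(`${address}:${amount}`))
    .sort(Buffer.compare);
  while (level.length > 1) {
    const next = [];
    for (let i = 0; i < level.length; i += 2) {
      const left = level[i];
      const right = level[i + 1] ?? left; // duplicate last node on odd levels
      next.push(hash(Buffer.concat([left, right])));
    }
    level = next;
  }
  return level[0].toString("hex");
}
```

Because leaves are sorted before hashing, re-running the script over the same eligibility list always yields the same root, which is the property auditors need to reproduce your published root.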
Finally, plan for the deployment and claim phases. You'll need testnet ETH and test tokens for deploying contracts on a testnet such as Sepolia (Goerli has been deprecated), and mainnet funds for the final deployment. The smart contract must handle Merkle proof verification, prevent double claims, and manage the token distribution securely. Always conduct thorough testing, including unit tests for scoring logic and integration tests for the full claim flow, before proceeding to mainnet deployment.
Core Concepts for Merit-Based Distribution
Designing a fair and effective airdrop requires moving beyond simple snapshots. These concepts form the technical foundation for rewarding genuine users and contributors.
Multi-Dimensional Scoring Models
A single metric is insufficient. Effective models assign weighted scores across categories:
- Financial Stake (Weight: 30%): TVL, fees paid, yield generated.
- Protocol Usage (Weight: 40%): Number of swaps, loans, or trades; unique features used.
- Longevity & Consistency (Weight: 20%): Duration of activity, avoiding "airdrop farming" spikes.
- Community Contribution (Weight: 10%): Off-chain work as attested.
Models should be transparent, with scores calculated deterministically from public data to build trust. Linear or quadratic functions can prevent whale dominance.
Data Indexing & Infrastructure
Building the scoring engine requires robust data pipelines.
- Indexers: Use The Graph subgraphs or custom indexers to query historical on-chain state (e.g., "balance of address X at block 18,000,000").
- RPC Providers: Services like Alchemy or QuickNode provide archival data for complex historical queries.
- Snapshotting: Determine the exact block height or time window for evaluation. Multi-epoch snapshots are more resistant to gaming.
- Compute: Scoring millions of addresses can be computationally heavy. Consider batch processing via AWS Lambda, Google Cloud Functions, or specialized platforms like Goldsky.
Step 1: Sourcing Contribution Data
The first and most critical step in building a merit-based airdrop is identifying and aggregating on-chain and off-chain user activity. This data forms the objective basis for all subsequent scoring and distribution logic.
A merit-based airdrop rewards users for specific, verifiable contributions to a protocol or ecosystem. Unlike a simple snapshot of token holdings, it requires a multi-faceted data strategy. You must define which actions constitute "merit"—common categories include liquidity provision, governance participation, development activity, and social engagement. For each category, you need a reliable method to collect granular, timestamped data about user addresses and their specific contributions.
On-chain data is the most critical and trustless source. You can query historical events directly from the blockchain using services like The Graph for indexed data, Dune Analytics for custom dashboards, or Covalent for unified APIs. For example, to track Uniswap v3 liquidity providers, you would query Mint and Burn events from the pool contracts, filtering for addresses that maintained positions through specific time periods or fee tiers. This provides immutable proof of contribution.
Off-chain data presents a greater challenge but is essential for capturing community efforts. This includes GitHub commits to relevant repositories, forum posts (e.g., Discourse, Commonwealth), governance forum votes on Snapshot, and qualified social media activity. While not natively on-chain, this data can be cryptographically verified (e.g., via Sign-in with Ethereum) and recorded on-chain in a merkle tree or referenced via a content hash before the airdrop claim period begins, ensuring transparency and auditability.
Your data architecture must handle address aliasing—the fact that one user may interact from multiple wallets. Use deterministic methods to link addresses, such as tracking deployments from the same EOA, interactions routed through the same smart contract wallet, or self-reported attestations via services like ENS or Proof of Attendance Protocol (POAP). Failing to consolidate identities can lead to Sybil attacks in which one user receives multiple unfair allocations.
Finally, establish a clear data cutoff date and snapshot mechanism. All contribution data must be finalized and hashed on-chain at a specific block height. This creates an immutable record of the eligibility dataset. The common practice is to publish the root of a merkle tree containing all eligible addresses and their scores, allowing users to later submit a merkle proof to claim. This approach keeps the claim transaction gas-efficient and the initial data publication a fixed, one-time cost.
Step 2: Verifying and Attesting Data
This step details the core logic for validating user eligibility and generating on-chain attestations, ensuring the airdrop's integrity and transparency.
Once user data is collected, the system must verify eligibility against the predefined criteria before any token distribution. This involves running the collected data—such as wallet addresses, transaction histories, or social proofs—through the off-chain verification logic you designed in Step 1. A common pattern is to use a serverless function (e.g., Vercel Edge Function, AWS Lambda) or a dedicated backend service to execute this logic. For example, to verify a user held a specific NFT during a snapshot period, the service would query an indexer like The Graph or a node provider to check the historical state of the user's wallet at the block height of the snapshot.
The output of this verification is an attestation, a cryptographic proof of a user's eligibility. Instead of storing this result in a private database, you should publish it on-chain using an attestation registry like Ethereum Attestation Service (EAS) or Verax. These protocols allow you to create a tamper-proof, publicly verifiable record that links a user's identity (e.g., their wallet address) to a specific claim (e.g., isEligible: true). Creating an EAS attestation on Sepolia testnet involves calling the attest function on the EAS contract with the recipient's address, a unique schema ID defining your data structure, and the encoded eligibility data.
Here is a simplified code example for generating an off-chain signed attestation using EAS's SDK, which can later be submitted on-chain:
```javascript
import { EAS, SchemaEncoder } from "@ethereum-attestation-service/eas-sdk";
import { ethers } from "ethers";

// Connect to the EAS contract with your attestor's signer
const eas = new EAS(EASContractAddress); // EAS deployment on your target chain (e.g., Sepolia)
const signer = new ethers.Wallet(attestorPrivateKey, provider);
eas.connect(signer);

// Get an Offchain instance bound to the connected chain
const offchain = await eas.getOffchain();

// Define the data schema for your attestation
const schemaEncoder = new SchemaEncoder("bool isEligible,uint256 allocation");
const encodedData = schemaEncoder.encodeData([
  { name: "isEligible", value: true, type: "bool" },
  { name: "allocation", value: 500n, type: "uint256" }, // e.g., 500 tokens
]);

// Create the off-chain attestation, signed by the attestor key
const offchainAttestation = await offchain.signOffchainAttestation(
  {
    recipient: "0xUserWalletAddress",
    expirationTime: 0n, // no expiration
    time: BigInt(Math.floor(Date.now() / 1000)),
    revocable: true,
    schema: schemaUID, // UID returned when you registered the schema
    refUID: ethers.ZeroHash,
    data: encodedData,
  },
  signer // the attestor's Signer object — never pass a raw private key
);

// `offchainAttestation.uid` is the unique identifier for this claim
```
This attestation serves as the single source of truth for the claim. The subsequent claim portal (Step 3) will not need to re-verify complex logic; it simply checks for the existence of a valid, unrevoked attestation for the connecting wallet. This separation of concerns—off-chain verification and on-chain attestation—is critical for security and cost-efficiency. It prevents malicious users from spoofing eligibility, as the attestation is signed by your secure attestor key, and moves the heavy computational cost of verification off-chain, saving gas.
Key considerations for this step include attestor key management (use a secure, non-custodial service like Gelato Relay or Safe{Wallet} for signing), schema design (define all necessary eligibility fields in your EAS schema upfront), and revocation logic. You should plan for the ability to revoke attestations in cases of discovered fraud or errors, which is a feature provided by attestation registries. The attestation UIDs generated here are the essential inputs for building the merkle tree or direct claim contract in the next phase of the airdrop architecture.
Contribution Metrics: Sources and Verification Methods
Comparison of common data sources for quantifying user contributions, their verification methods, and inherent trade-offs for airdrop allocation.
| Metric | On-Chain Data | Off-Chain Data | Hybrid/Attestation |
|---|---|---|---|
| Primary Data Source | Blockchain state (txs, balances, NFTs) | API logs, GitHub commits, Discord activity | Verifiable Credentials (VCs), EAS attestations |
| Verification Method | Cryptographic proof via RPC node | Centralized server validation | On-chain signature verification by issuer |
| Tamper Resistance | High | Low | High (once anchored on-chain) |
| Developer Overhead | Low (read public state) | High (build & secure backend) | Medium (integrate attestation schema) |
| User Privacy | Pseudonymous | May require PII (email, handle) | Selective disclosure possible |
| Cost to Verify | Gas fees for writes only | Server infrastructure | Gas fees for issuer & optional verifier |
| Example Use Case | DEX trading volume, NFT holdings | Community moderation, content creation | Proof of event attendance, KYC status |
| Sybil Resistance | Medium (requires capital) | Low (easy to fake) | High (depends on issuer trust) |
Step 3: Designing the Allocation Algorithm
The allocation algorithm is the engine of your airdrop. It translates user activity into a quantifiable score and determines the final token distribution.
The core of a merit-based airdrop is the allocation algorithm. This function takes the processed on-chain data and calculates a unique score for each eligible wallet. The design must be transparent, tamper-proof, and aligned with your project's goals. Common scoring factors include: transaction volume, frequency of interaction, duration of engagement, and specific milestone completions (like providing liquidity or using a new feature). The algorithm is typically implemented as a view function in a smart contract or computed off-chain in a verifiable script, with the results committed to a Merkle tree for claim verification.
A robust algorithm applies weighted scoring to balance different types of contributions. For example, a protocol might value long-term loyalty over sheer volume. You could assign 40% of the score to the total value of assets interacted with, 30% to the number of distinct days a wallet was active over a 90-day period, 20% for completing specific governance actions, and 10% for early adoption (e.g., interacting before a mainnet launch). This prevents gaming by volume-only bots and rewards genuine users. The formula must be deterministic, using only the immutable on-chain data snapshot.
Here is a simplified conceptual example of an allocation function written in a Solidity-like pseudocode. This function calculates a basic score based on two weighted factors:
```solidity
function calculateScore(
    uint256 totalVolume,
    uint256 activeDays
) public pure returns (uint256 score) {
    // Define weights (sum to 100 for percentage)
    uint256 volumeWeight = 60;
    uint256 loyaltyWeight = 40;

    // Normalize inputs (simplified example)
    uint256 volumeScore = (totalVolume / 1e18) * volumeWeight; // Assume 18 decimals
    uint256 loyaltyScore = activeDays * loyaltyWeight;

    // Final score
    score = volumeScore + loyaltyScore;
}
```
In practice, you would add more factors, overflow-safe arithmetic (built into Solidity 0.8+), and normalization or caps so the factors stay on comparable scales.
After calculating raw scores, you must map them to a token allocation. A linear distribution often leads to a very top-heavy outcome. Many projects use a logarithmic or square root scaling to compress the range, ensuring a more equitable distribution that still rewards top contributors but doesn't neglect smaller, consistent users. For instance, instead of allocation = raw_score, use allocation = sqrt(raw_score). Finally, the sum of all allocations defines the total pool needed. You must decide if allocations are fixed (e.g., top 10,000 scores get tokens) or proportional (each score gets a slice of a fixed token pool).
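The square-root compression combined with a fixed proportional pool can be sketched as follows (pool size and scores are illustrative):

```javascript
// Compress raw scores with a square root, then distribute a fixed
// pool proportionally to the compressed weights.
function proportionalAllocations(scores, pool) {
  // scores: [{ address, score }], pool: total tokens to distribute
  const scaled = scores.map(({ address, score }) => ({
    address,
    weight: Math.sqrt(score),
  }));
  const totalWeight = scaled.reduce((sum, s) => sum + s.weight, 0);
  return scaled.map(({ address, weight }) => ({
    address,
    amount: (weight / totalWeight) * pool,
  }));
}

// A wallet with 100x the raw score of a smaller user receives only
// 10x the allocation after square-root scaling:
const out = proportionalAllocations(
  [
    { address: "0xwhale", score: 10000 }, // sqrt → weight 100
    { address: "0xuser", score: 100 },    // sqrt → weight 10
  ],
  1100
);
// out[0].amount ≈ 1000, out[1].amount ≈ 100
```

Note the difference in spread: under linear scaling the whale would have taken roughly 99% of the pool; after compression it takes about 91%.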
The final step is generating the Merkle root. Once every wallet's final allocation is calculated, these (address, amount) pairs are used as leaves to construct a Merkle tree. The root hash of this tree is published on-chain (e.g., in the airdrop claim contract). This allows users to submit a Merkle proof to claim their tokens, proving their inclusion and allocation without revealing the entire dataset. The algorithm's code and the final score dataset should be published for community verification, fulfilling the requirement for transparency and trustlessness in the airdrop process.
Step 4: Smart Contract Implementation
This section details the core smart contract design for a merit-based airdrop, focusing on secure, verifiable, and gas-efficient logic for distributing tokens based on on-chain activity.
The foundation of a merit-based airdrop is a smart contract that maps user eligibility to a calculated token amount. We'll use a merkle tree for efficient verification. The process is two-phase: first, an off-chain server calculates user scores and constructs a merkle root; second, the on-chain contract verifies claims against this root. Start by importing OpenZeppelin's MerkleProof library and inheriting from their Ownable and ReentrancyGuard contracts for security. The contract's state should store the merkle root, the token address, a mapping to track claimed addresses, and the total airdrop allocation.
The core function is claim(uint256 index, address account, uint256 amount, bytes32[] calldata merkleProof). It must check several conditions: that the claim period is active, that the account hasn't already claimed, and that the provided merkleProof validates against the stored root for the given index, account, and amount. Use MerkleProof.verify(merkleProof, merkleRoot, leaf), where the leaf is the keccak256 hash of abi.encodePacked(index, account, amount). If verification passes, mark the account as claimed and transfer the tokens using SafeERC20's safeTransfer, which handles ERC20 implementations that do not return a boolean.
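The contract-side verification loop can be mirrored off-chain to sanity-check proofs before users submit them. This sketch uses SHA-256 from Node's standard library as a stand-in hash; OpenZeppelin's MerkleProof uses keccak256 with the same sorted-pair hashing shown here:

```javascript
import { createHash } from "node:crypto";

// SHA-256 stand-in; on-chain verification uses keccak256.
const hash = (data) => createHash("sha256").update(data).digest();

// Commutative pair hashing: sort the pair before hashing, as
// OpenZeppelin's MerkleProof does, so proofs need no left/right flags.
const hashPair = (a, b) =>
  Buffer.compare(a, b) <= 0
    ? hash(Buffer.concat([a, b]))
    : hash(Buffer.concat([b, a]));

// Recompute the root from a leaf and its sibling path, then
// compare against the stored root — the same loop the contract runs.
function verifyProof(leaf, proof, root) {
  let node = leaf;
  for (const sibling of proof) {
    node = hashPair(node, sibling);
  }
  return node.equals(root);
}
```

Running this off-chain in the claim frontend lets you reject malformed proofs before the user pays gas for a transaction that would revert.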
Consider gas optimization and user experience. Batching claims isn't feasible per-user, but you can implement a multicall pattern allowing users to bundle the claim with other actions in one transaction. For large distributions, be mindful of the gas cost for the merkle proof verification, which scales with tree depth. A tree with 10,000 leaves requires about 14-16 proof elements. Always include an emergencyHalt function (callable only by the owner) to pause claims in case of a critical bug, and a recoverUnclaimedTokens function to withdraw remaining funds after the claim period ends, ensuring no tokens are permanently locked.
Tools and Resources
Practical tools and design resources for building a merit-based airdrop system that rewards real usage, resists Sybil attacks, and is auditable end to end.
Post-Airdrop Analysis and Iteration
Merit-based systems improve through post-drop analysis. Measuring outcomes helps refine future distributions and defend design choices.
Key metrics to track:
- Claim rate by score decile
- Retention of recipients after 30 and 90 days
- Correlation between score components and long-term usage
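The first metric above can be computed with a short script once claim events are labeled; the input shape here is an illustrative assumption:

```javascript
// Group recipients by score decile and compute each decile's claim
// rate. recipients: [{ score, claimed }] — an assumed shape for the
// labeled dataset produced in the analysis workflow.
function claimRateByDecile(recipients) {
  const sorted = [...recipients].sort((a, b) => a.score - b.score);
  const size = Math.ceil(sorted.length / 10);
  const rates = [];
  for (let d = 0; d < 10; d++) {
    const bucket = sorted.slice(d * size, (d + 1) * size);
    if (bucket.length === 0) break;
    const claimed = bucket.filter((r) => r.claimed).length;
    rates.push(claimed / bucket.length);
  }
  return rates; // rates[0] = lowest-score decile, rates[9] = highest
}
```

A flat claim-rate curve across deciles suggests the score didn't predict engagement; a curve rising toward the top deciles is evidence the merit formula captured real users.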
Analysis workflow:
- Label claimed vs unclaimed addresses
- Compare behavior of airdrop recipients to a control group
- Identify signals that predicted meaningful engagement
Transparency practices:
- Publish a postmortem with methodology and findings
- Share anonymized datasets for independent review
Example: discovering that governance participation predicted higher retention than transaction volume can justify reweighting future merit formulas.
Deliverable: a public report and updated scoring framework for the next airdrop.
Frequently Asked Questions
Common technical questions for developers building on-chain merit-based distribution systems.
What is a merit-based airdrop, and how does it differ from a standard airdrop?
A merit-based airdrop is a token distribution mechanism that allocates rewards based on verifiable, on-chain user contributions, rather than simple wallet snapshots. Unlike standard airdrops that may reward mere token holding or early sign-ups, merit-based systems use attestations or proofs of specific actions.
Key differences:
- Standard Airdrop: Often uses criteria like wallet balance or participation in a snapshot at a specific block. Prone to sybil attacks.
- Merit-Based Airdrop: Distributes based on provable work, such as:
- Volume of transactions contributed to a DEX pool
- Number of successful trades or loans completed
- Code commits or governance proposals in a DAO
- Attendance verifications from real-world events (via POAPs)
These actions are recorded as verifiable credentials or on-chain data, allowing for a more equitable, sybil-resistant distribution that rewards genuine ecosystem participants.
Conclusion and Next Steps
This guide has outlined the core components for building a merit-based airdrop system. The next steps involve implementing these concepts and exploring advanced features.
You now understand the architectural blueprint for a merit-based airdrop: defining clear, on-chain eligibility criteria, implementing a robust scoring mechanism, and ensuring a secure, transparent distribution process. The key is to move beyond simple wallet snapshots to a system that rewards genuine, measurable contributions to your protocol's ecosystem. This approach builds a more aligned and engaged community than a standard token drop.
For implementation, start by integrating with data providers like The Graph for querying historical on-chain activity or Chainscore for analyzing wallet behavior and engagement scores. Your smart contract should reference a Merkle root for efficient claim verification, as used by protocols like Uniswap and Optimism. Thorough testing on a testnet is non-negotiable; simulate various user behavior patterns to ensure your scoring logic is fair and resistant to manipulation.
To advance your system, consider these next steps: 1) Implement sybil resistance using techniques like proof-of-personhood or stake-weighted scoring. 2) Add tiered rewards where higher contribution scores unlock larger allocations or exclusive NFTs. 3) Design for continuous rewards by making the airdrop a recurring event for ongoing participation, similar to Axie Infinity's season-based rewards. 4) Plan for dispute resolution, perhaps through a community-governed council, to handle edge cases in scoring.
Finally, communicate your system's rules transparently. Publish the detailed scoring formula and eligibility windows before the snapshot. After distribution, provide a public verification tool so users can audit their own scores. This transparency transforms your airdrop from a marketing event into a trust-building cornerstone for your project's long-term decentralized governance.