Launching a Smart Contract System for Real-Time Eligibility Checks

Introduction

This guide explains how to build a smart contract system for real-time on-chain eligibility verification, a core primitive for token-gating, airdrops, and permissioned DeFi.

Real-time eligibility checks are a foundational requirement for many Web3 applications. Whether you're building a token-gated community, distributing a merkle-less airdrop, or creating a permissioned liquidity pool, you need a way to verify user permissions instantly and trustlessly on-chain. Traditional methods like merkle proofs require off-chain list management and user-side proof generation, which adds complexity and latency. A smart contract-based eligibility system moves the verification logic entirely on-chain, enabling immediate, gas-efficient checks directly within a transaction.
At its core, the system revolves around an eligibility verifier contract that acts as a single source of truth. This contract exposes a function, typically isEligible(address user, bytes32 context) returns (bool), that other protocols can query. The verifier's internal logic can be based on various criteria: ownership of a specific NFT (checking balanceOf), a snapshot of token holdings at a past block, membership in a decentralized autonomous organization (DAO), or completion of certain on-chain actions. The context parameter allows for dynamic checks, such as verifying eligibility for a specific campaign ID or tier.
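As a minimal sketch of this pattern (contract and constant names here are illustrative, not from a specific library), a consumer contract depends only on the verifier interface:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// Minimal verifier interface; consumer protocols depend only on this.
interface IEligibilityVerifier {
    /// @param user    the address being checked
    /// @param context a campaign ID, tier, or other check-specific data
    function isEligible(address user, bytes32 context) external view returns (bool);
}

/// Example consumer: gates an action on the external verifier's answer.
contract GatedAction {
    IEligibilityVerifier public immutable verifier;
    bytes32 public constant CAMPAIGN = keccak256("launch-campaign-1");

    constructor(IEligibilityVerifier _verifier) {
        verifier = _verifier;
    }

    function claim() external view {
        require(verifier.isEligible(msg.sender, CAMPAIGN), "not eligible");
        // ... gated logic would follow here ...
    }
}
```

Because the consumer holds only an interface reference, the verifier implementation can be swapped or upgraded without touching the gated contract.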
Implementing this requires careful architectural decisions. You must choose between a modular design, where verification logic is upgradeable or swappable, and a static ruleset for immutability. Consider gas costs: storing eligibility lists directly in contract storage is simple but expensive for large sets, while using cryptographic proofs or referencing external registries like ERC-721 or ERC-20 contracts can be more efficient. Security is paramount; the verifier must be resistant to manipulation and its data sources must be trustless or explicitly trusted by users.
In this guide, we will build a concrete example: an EligibilityVerifier contract that checks for NFT ownership and a minimum token balance. We'll write the Solidity code, deploy it to a testnet, and then integrate it with a mock TokenGatedVault contract that restricts access based on the verifier's response. This pattern is directly applicable to real-world use cases like Syndicate's ERC-721M for token-gated transactions or OpenZeppelin's AccessControl for role management, but with a focus on composable, external verification.
By the end, you will understand how to design, deploy, and interact with a real-time eligibility system. You'll be able to extend the basic verifier with custom logic—such as integrating with oracles for off-chain data or cross-chain messaging for multi-chain eligibility—enabling you to build sophisticated, permissioned applications entirely on-chain.
Prerequisites
Before launching a smart contract system for real-time eligibility checks, you need to establish a solid technical foundation. This section outlines the essential knowledge, tools, and infrastructure required to build and deploy a secure, on-chain verification system.
A strong grasp of smart contract development is non-negotiable. You should be proficient in Solidity, the primary language for Ethereum and many EVM-compatible chains like Arbitrum, Optimism, and Polygon. Understanding core concepts is key:

- Contract state variables and data structures for storing eligibility rules
- Function modifiers and access control patterns like OpenZeppelin's Ownable
- Event emission for off-chain indexing and real-time notifications

Familiarity with the ERC-20 and ERC-721 token standards is also crucial, as eligibility often pertains to token ownership or balances.
You will need a local development environment. The standard toolkit includes Node.js (v18+), a package manager like npm or yarn, and a code editor such as VS Code. The primary framework for development, testing, and deployment is Hardhat or Foundry. These tools provide a local blockchain (e.g., Hardhat Network), a testing suite, and scripts for compiling and deploying contracts. You'll also need the MetaMask browser extension for interacting with your contracts during development and testing.
Real-time eligibility checks require a reliable connection to blockchain data. For development and testing, you can use services like Alchemy, Infura, or QuickNode to get RPC endpoints without running your own node. For reading on-chain state (e.g., checking a user's token balance), you'll use these providers with libraries like ethers.js (v6) or viem. Understanding how to make eth_call RPC requests for gas-less state reads is essential for the "real-time" aspect of your system.
Security is paramount for any system handling permissions or value. You must understand common vulnerabilities like reentrancy, integer overflows, and improper access control. Use established libraries like OpenZeppelin Contracts for secure, audited implementations of standards (ERC-20, ERC-721) and security utilities (Ownable, ReentrancyGuard). Your development workflow should include writing comprehensive unit and integration tests using Hardhat's testing environment or Foundry's Forge, aiming for high test coverage before any mainnet deployment.
Finally, you need a deployment strategy. Decide which blockchain network suits your needs: a Layer 1 like Ethereum Mainnet for maximum security, or a Layer 2 like Arbitrum or Base for lower cost and higher speed. You will need testnet ETH (e.g., Sepolia ETH) for initial deployments. For managing deployments and verifying contract source code, you'll use tools like Hardhat deployment scripts and block explorers like Etherscan or Arbiscan. Planning for upgradeability, using patterns like Transparent or UUPS proxies from OpenZeppelin, should be considered if your eligibility logic may need to evolve.
System Architecture Overview
This guide details the architecture for a smart contract system that performs real-time eligibility checks, a core component for on-chain verification in DeFi, airdrops, and governance.
A real-time eligibility system requires a modular, gas-efficient architecture. The core components are a verification contract that holds the logic and state, an oracle or relayer for off-chain data, and a front-end client for user interaction. The verification contract stores eligibility criteria—such as token balances, NFT ownership, or snapshot timestamps—and exposes a checkEligibility(address user) function. This separation of concerns allows the logic to be upgraded independently of the data sources and user interface.
Off-chain data is critical for checks based on real-world information or complex computations. An oracle like Chainlink can push verified data (e.g., KYC status, credit scores) to the contract via a trusted node. For permissioned systems, a designated relayer can sign off-chain messages that users submit in a transaction, using a pattern like EIP-712 signatures. This keeps gas costs low by moving computation off-chain while maintaining cryptographic proof of validity on-chain.
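A hedged sketch of the relayer-signature pattern, using OpenZeppelin's EIP712 and ECDSA helpers (the domain name, type hash, and field names are assumptions for illustration):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import "@openzeppelin/contracts/utils/cryptography/EIP712.sol";
import "@openzeppelin/contracts/utils/cryptography/ECDSA.sol";

/// Users submit a relayer-signed EIP-712 message proving eligibility.
contract SignedEligibility is EIP712 {
    address public immutable relayer;
    bytes32 private constant TYPEHASH =
        keccak256("Eligibility(address user,uint256 campaignId,uint256 deadline)");

    constructor(address _relayer) EIP712("SignedEligibility", "1") {
        relayer = _relayer;
    }

    function verify(
        address user,
        uint256 campaignId,
        uint256 deadline,
        bytes calldata signature
    ) public view returns (bool) {
        require(block.timestamp <= deadline, "signature expired");
        // Recreate the typed-data digest the relayer signed off-chain.
        bytes32 digest = _hashTypedDataV4(
            keccak256(abi.encode(TYPEHASH, user, campaignId, deadline))
        );
        return ECDSA.recover(digest, signature) == relayer;
    }
}
```

The deadline field bounds how long a signature remains valid, and the EIP-712 domain separator (name, version, chain ID, contract address) prevents replay across chains or deployments.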
The contract must be optimized for the real-time requirement, minimizing latency and gas costs for the check. This involves using efficient data structures like mapping for O(1) lookups, avoiding storage writes during the check itself, and potentially implementing a Merkle tree for large, static allowlists. For example, an airdrop eligibility contract might store a Merkle root of eligible addresses, allowing users to submit a Merkle proof for verification without storing every address on-chain.
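The Merkle-root approach mentioned above can be sketched with OpenZeppelin's MerkleProof library (contract and parameter names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import "@openzeppelin/contracts/utils/cryptography/MerkleProof.sol";

/// A large, static allowlist compressed into a single stored bytes32.
contract MerkleEligibility {
    bytes32 public immutable merkleRoot;

    constructor(bytes32 _root) {
        merkleRoot = _root;
    }

    /// The user supplies the proof; only the root lives on-chain.
    function checkEligibility(address user, bytes32[] calldata proof)
        external
        view
        returns (bool)
    {
        bytes32 leaf = keccak256(abi.encodePacked(user));
        return MerkleProof.verify(proof, merkleRoot, leaf);
    }
}
```

Storage cost is constant regardless of list size; the trade-off is that users (or the front-end on their behalf) must fetch or compute the proof off-chain.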
Security is paramount. The contract should include access controls (using OpenZeppelin's Ownable or AccessControl) to restrict who can update eligibility criteria or pause the system. Reentrancy guards should be used if checks interact with external contracts. Furthermore, the system should be designed to handle front-running; a commit-reveal scheme or deadline-based transactions can prevent users from copying and submitting another user's successful eligibility proof.
Finally, the architecture must consider upgradability and maintenance. Using a proxy pattern (like the Transparent Proxy or UUPS) allows the verification logic to be improved without migrating state. Events should be emitted for all state changes and eligibility results to enable easy indexing by subgraphs or front-ends. A well-architected system balances security, cost, and flexibility to serve as reliable infrastructure for on-chain applications.
Key Concepts for Eligibility Smart Contracts
Building a system for real-time eligibility checks requires understanding core blockchain primitives, data management, and privacy-preserving techniques.
Step 1: Develop the Core Smart Contract
The core smart contract is the single source of truth for your eligibility system, defining the rules, storing the state, and exposing the functions that other components will query.
Begin by defining the data structures and state variables. You'll need a mapping to store eligibility status, such as mapping(address => bool) public isEligible;. Consider adding metadata like uint256 public eligibilityThreshold for dynamic rules, or a bytes32 public merkleRoot if using Merkle proofs for privacy. Use the immutable keyword for configuration set at deployment, like a trusted admin address. Structuring state efficiently is critical for minimizing gas costs during frequent real-time checks.
Next, implement the core logic functions. The primary function will be a view function like function checkEligibility(address user) public view returns (bool) that returns the stored status. For systems with on-chain rule evaluation, this function might perform calculations, like checking if the user's token balance meets the eligibilityThreshold. Ensure all state-modifying functions (e.g., an adminGrantEligibility function) are protected with access controls like OpenZeppelin's Ownable or AccessControl.
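Putting the pieces from the last two paragraphs together, a minimal sketch might look like this (assumes OpenZeppelin v5, whose Ownable constructor takes the initial owner; names mirror the text but are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import "@openzeppelin/contracts/access/Ownable.sol";
import "@openzeppelin/contracts/token/ERC20/IERC20.sol";

contract EligibilityCore is Ownable {
    mapping(address => bool) public isEligible;  // explicit admin grants
    uint256 public eligibilityThreshold;         // dynamic balance rule
    IERC20 public immutable token;

    event EligibilityUpdated(address indexed user, bool status);

    constructor(IERC20 _token, uint256 _threshold) Ownable(msg.sender) {
        token = _token;
        eligibilityThreshold = _threshold;
    }

    /// Real-time check: an explicit grant OR a balance above threshold.
    function checkEligibility(address user) public view returns (bool) {
        return isEligible[user] || token.balanceOf(user) >= eligibilityThreshold;
    }

    /// State-modifying admin function, protected by Ownable.
    function adminGrantEligibility(address user, bool status) external onlyOwner {
        isEligible[user] = status;
        emit EligibilityUpdated(user, status);
    }
}
```

Note that checkEligibility is a view function, so off-chain callers can query it via eth_call at no gas cost.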
For production systems, integrating upgradeability is a key consideration. Using a proxy pattern like the Transparent Proxy or UUPS from OpenZeppelin allows you to fix bugs or adjust logic without losing the contract's state or address. This is separate from the core logic contract itself, which will be your implementation contract. Always include comprehensive event emissions, such as EligibilityUpdated(address indexed user, bool status), for off-chain indexing and monitoring.
Thorough testing is non-negotiable. Write unit tests in Hardhat or Foundry that cover:

- The initial state after deployment.
- The correct operation of the checkEligibility function under various conditions.
- The proper enforcement of access controls on admin functions.
- Edge cases, like zero addresses or maximum values.

Forge (from Foundry) enables fuzz testing, which is excellent for uncovering unexpected inputs that could break your logic.
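The checklist above can be sketched as a Foundry test suite. This uses a deliberately trivial stand-in contract so the example is self-contained; substitute your real eligibility contract:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import "forge-std/Test.sol";

/// Trivial stand-in for the contract under test.
contract SimpleEligibility {
    address public admin;
    mapping(address => bool) public isEligible;

    constructor() {
        admin = msg.sender;
    }

    function grant(address user) external {
        require(msg.sender == admin, "not admin");
        isEligible[user] = true;
    }
}

contract SimpleEligibilityTest is Test {
    SimpleEligibility internal elig;
    address internal alice = address(0xA11CE);

    function setUp() public {
        elig = new SimpleEligibility(); // test contract becomes admin
    }

    function test_InitialStateIsNotEligible() public view {
        assertFalse(elig.isEligible(alice));
    }

    function test_AdminCanGrant() public {
        elig.grant(alice);
        assertTrue(elig.isEligible(alice));
    }

    function test_RevertWhen_NonAdminGrants() public {
        vm.prank(alice);
        vm.expectRevert("not admin");
        elig.grant(alice);
    }

    /// Fuzz test: granting any address makes exactly that address eligible.
    function testFuzz_Grant(address user) public {
        elig.grant(user);
        assertTrue(elig.isEligible(user));
    }
}
```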
Finally, consider the integration points for real-time systems. Your contract's functions will be called by off-chain indexers, backend services, or frontend applications. Optimize for low gas costs on frequent view calls by minimizing storage reads and complex computations. Document the ABI and function signatures clearly. The completed core contract forms the immutable bedrock upon which the entire real-time eligibility system is built.
Step 2: Integrate with a Decentralized Oracle
Connect your smart contract to real-world data feeds to perform automated, trust-minimized eligibility checks.
A smart contract is a closed system; it cannot natively access data from outside its own blockchain. To verify real-world conditions—like a user's token holdings on another chain, the current price of an asset, or the outcome of a sporting event—you need a decentralized oracle. Oracles act as secure middleware, fetching, validating, and delivering external data to your contract on-chain. For eligibility systems, this is the critical link between off-chain verification logic and on-chain execution.
Chainlink is the most widely adopted oracle network, providing decentralized data feeds (Price Feeds) and verifiable randomness (VRF). For a basic eligibility check, you would typically use a data feed. First, identify the required data point and its corresponding feed on the Chainlink Data Feeds page. For example, to check whether a user's ETH balance is worth at least a minimum USD amount, you would use the ETH/USD price feed to convert the balance into USD terms. You'll need the feed's proxy address for your specific network (e.g., Ethereum Mainnet, Polygon).
Integration involves importing the AggregatorV3Interface from Chainlink's contracts. Your smart contract stores the oracle address and uses the latestRoundData() function to request the latest price. The function returns a tuple containing the answer, timestamp, and other metadata. Your contract's logic then compares this external data against the user's submitted information or on-chain state to determine eligibility. Always implement error handling for stale or incomplete data using the returned timestamp and answeredInRound values.
For more complex checks—like verifying a user's NFT ownership on a different chain or a custom API result—you would use Chainlink Functions or a similar oracle service. These allow you to run custom JavaScript logic off-chain that returns a computed result to your contract. The code snippet below shows a minimal contract checking if an ETH balance is worth more than $2000 USD using a Chainlink Price Feed.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import "@chainlink/contracts/src/v0.8/interfaces/AggregatorV3Interface.sol";

contract EligibilityChecker {
    AggregatorV3Interface internal priceFeed;

    // ETH/USD feed on Ethereum Mainnet
    address constant ETH_USD_FEED = 0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419;

    constructor() {
        priceFeed = AggregatorV3Interface(ETH_USD_FEED);
    }

    function isEligible(uint256 userEthBalance) public view returns (bool) {
        (, int256 price, , , ) = priceFeed.latestRoundData();
        // price has 8 decimals, balance has 18 decimals (wei);
        // dividing by 1e8 yields a USD value with 18 decimals
        uint256 usdValue = (userEthBalance * uint256(price)) / 1e8;
        return usdValue > 2000 * 1e18; // worth more than $2,000
    }
}
```
Security is paramount when using oracles. Relying on a single data source creates a central point of failure. Always use decentralized oracle networks that aggregate data from multiple independent nodes. Chainlink feeds, for instance, are secured by dozens of nodes. Furthermore, your contract should include circuit breakers or pause functionality if data becomes stale (beyond a defined heartbeat) or deviates abnormally (beyond a defined deviation threshold). Never use the oracle's answer for critical logic without these sanity checks.
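The staleness and sanity checks described above can be sketched as a reusable helper (the one-hour HEARTBEAT is an assumption; use the heartbeat documented for your specific feed):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import "@chainlink/contracts/src/v0.8/interfaces/AggregatorV3Interface.sol";

abstract contract SafePriceReader {
    uint256 internal constant HEARTBEAT = 1 hours; // assumed feed heartbeat

    function _getSafePrice(AggregatorV3Interface feed) internal view returns (uint256) {
        (uint80 roundId, int256 answer, , uint256 updatedAt, uint80 answeredInRound) =
            feed.latestRoundData();
        require(answer > 0, "invalid price");
        // Reject data older than the feed's expected update interval.
        require(block.timestamp - updatedAt <= HEARTBEAT, "stale price");
        // Reject answers carried over from an earlier round.
        require(answeredInRound >= roundId, "stale round");
        return uint256(answer);
    }
}
```

Any eligibility logic can then call _getSafePrice instead of latestRoundData directly, so a stalled feed reverts rather than silently granting or denying access.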
After deploying your contract with the integrated oracle, plan for oracle payment. Reading from a data feed costs nothing beyond the normal gas of the call, but triggering an oracle request (as with Chainlink Functions) requires funding, typically a LINK-funded subscription that pays the oracle nodes. Test thoroughly on a testnet like Sepolia using testnet oracle addresses before deploying to mainnet.
Step 3: Build the External Adapter
An external adapter is a self-contained service that fetches and formats data from an API for a Chainlink node. This step bridges your smart contract with the real-world data required for eligibility checks.
An external adapter is the crucial bridge between a Chainlink node and your specific data source. It is a standalone web service, typically built with Node.js or Python, that receives a request from a Chainlink node, calls an external API (like a KYC provider, credit scoring service, or proprietary database), and returns the formatted result. This architecture keeps the node's core logic generic and secure, while allowing for infinite customization of data sources. You can host this adapter on any cloud provider, such as AWS Lambda, Google Cloud Functions, or a dedicated server.
The adapter communicates using a simple JSON specification. When a Chainlink job triggers, it sends a JSON payload to your adapter's endpoint. Your adapter's job is to parse this request, execute the necessary API calls or logic, and return a JSON response with the result. The critical fields are data for the input (like a user's wallet address or ID) and a result field containing the processed output (e.g., {"eligible": true, "score": 750}). This output is what gets written on-chain by the oracle.
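As an illustration of that wire format (field names follow the common external-adapter convention; the values are hypothetical), a node request might look like:

```json
{
  "id": "1",
  "data": {
    "address": "0xAbC0000000000000000000000000000000000001",
    "campaignId": "42"
  }
}
```

and the adapter's reply echoes the run ID and carries the processed output:

```json
{
  "jobRunID": "1",
  "data": { "eligible": true, "score": 750 },
  "result": { "eligible": true, "score": 750 },
  "statusCode": 200
}
```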
For a real-time eligibility check, your adapter would handle the core logic. For example, it might receive a user's decentralized identifier (DID), query a verifiable credentials registry or a zero-knowledge proof verifier, and return a boolean isEligible flag. You must implement robust error handling, API key management (using environment variables), and response formatting. The Chainlink documentation provides starter templates and examples to accelerate development.
Security is paramount. Your adapter should validate all incoming requests, implement rate limiting, and use HTTPS. Since it often handles sensitive API keys, never hardcode credentials; use secret management services. The adapter's code should be concise and auditable, as it becomes a trusted component in the oracle's data pipeline. For production systems, consider building a redundant adapter deployment across multiple regions to ensure high availability for your smart contract's requests.
Once built, you must inform your Chainlink node operator of the adapter's endpoint. The node operator will then create a bridge in their node, linking a unique name (like my_eligibility_adapter) to your service's URL. Your smart contract's requestEligibilityCheck function will specify this bridge name, completing the connection. This decoupled design allows you to update your API logic or data source without modifying the on-chain contract or the node's core software.
On-Chain vs Off-Chain Data Handling
Comparison of data processing strategies for a real-time eligibility system.
| Feature | Fully On-Chain | Hybrid (Oracle-Based) | Fully Off-Chain (ZK Proofs) |
|---|---|---|---|
| Data Source & Trust | Immutable on-chain state | Trusted oracle network (e.g., Chainlink) | Cryptographic proof of off-chain state |
| Real-Time Update Latency | High (12-30 sec per block) | Medium (oracle heartbeat, ~1-5 min) | Low (instant off-chain, ~1 sec proof gen) |
| Gas Cost per Check | High ($5-20) | Medium ($1-5 for oracle call) | Low ($0.10-1 for proof verification) |
| Data Privacy | None (all data public) | Limited (oracle sees data) | High (only proof is public) |
| Implementation Complexity | Low | Medium | High |
| Decentralization / Censorship Resistance | High | Medium (relies on oracle set) | High (verifier is trustless) |
| Example Use Case | Simple token-gating | Checking real-world asset prices | Private credit scoring or KYC |
| Settlement Finality | Immediate with block | Depends on oracle finality | Immediate with proof verification |
Step 4: Testing and Implementing Audit Logs
This step details how to implement and test on-chain audit logs, a critical component for verifying eligibility checks and maintaining a transparent, tamper-proof record of all system activity.
Audit logs are non-negotiable for a production-grade eligibility system. They provide an immutable, chronological record of every critical action, such as a user's eligibility status being checked, updated, or a rule being modified. This serves three primary purposes: security forensics to trace malicious activity, operational transparency for users to verify their own status, and regulatory compliance by proving the system's logic was followed. On Ethereum and EVM-compatible chains, this is achieved by emitting structured events from your smart contracts; events are recorded immutably in transaction logs and can then be indexed off-chain for querying.
Implementing audit logs requires defining and emitting specific events within your eligibility smart contracts. A well-structured event includes indexed parameters for efficient off-chain querying. For example, a contract might emit an EligibilityChecked event containing the user's address, a timestamp, the rule identifier, and the boolean result. Use the indexed keyword for parameters you will frequently filter by, like user or ruleId. Here is a basic Solidity example:
```solidity
event EligibilityChecked(
    address indexed user,
    uint256 indexed ruleId,
    uint256 timestamp,
    bool isEligible,
    bytes32 proofHash // Optional: hash of an off-chain proof
);

function checkEligibility(address _user, uint256 _ruleId) public {
    bool eligible = _evaluateRule(_user, _ruleId);
    emit EligibilityChecked(_user, _ruleId, block.timestamp, eligible, bytes32(0));
}
```
Thoroughly testing your audit logs is as important as testing core logic. Your test suite should verify that events are emitted with the correct arguments under all conditions. Using a framework like Foundry or Hardhat, you can write tests that directly inspect emitted events. For instance, after calling checkEligibility, your test should assert that an EligibilityChecked event was fired, and that its isEligible parameter matches the expected outcome for the test user. This ensures your logging mechanism is tightly coupled to your business logic and won't fail silently in production.
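In Foundry, event assertions use the vm.expectEmit cheatcode. A self-contained sketch (the Emitter stand-in is illustrative; the event matches the EligibilityChecked schema above):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import "forge-std/Test.sol";

/// Minimal stand-in that emits the audit event.
contract Emitter {
    event EligibilityChecked(
        address indexed user,
        uint256 indexed ruleId,
        uint256 timestamp,
        bool isEligible,
        bytes32 proofHash
    );

    function checkEligibility(address user, uint256 ruleId) external {
        emit EligibilityChecked(user, ruleId, block.timestamp, true, bytes32(0));
    }
}

contract AuditLogTest is Test {
    // Re-declare the event so the test can emit the expected copy.
    event EligibilityChecked(
        address indexed user,
        uint256 indexed ruleId,
        uint256 timestamp,
        bool isEligible,
        bytes32 proofHash
    );

    function test_EmitsEligibilityChecked() public {
        Emitter e = new Emitter();
        // Check both indexed topics and the full data payload.
        vm.expectEmit(true, true, false, true);
        emit EligibilityChecked(address(0xBEEF), 1, block.timestamp, true, bytes32(0));
        e.checkEligibility(address(0xBEEF), 1);
    }
}
```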
Once deployed, you need a strategy for accessing and utilizing the logged data. While the raw event data is on-chain, you typically use an indexing service like The Graph or an RPC provider's event filtering API to query it efficiently. For a user-facing dApp, you might build a dashboard that queries these indexed logs to show a user their eligibility history. For internal monitoring, you can set up alerts based on specific event patterns, such as a high rate of failed eligibility checks from a single address, which could indicate an attack or a bug in the rule logic.
Consider the gas cost implications of your logging strategy. Each event emission consumes gas, and using multiple indexed parameters increases this cost. Optimize by only indexing fields essential for filtering (typically 1-3 fields) and packing related data into non-indexed bytes or string fields. For high-frequency operations, you might implement a pattern where logs are batched or emitted only for state-changing transactions, not for simple view function calls. Always profile gas usage during testing to ensure your audit trail doesn't make core operations prohibitively expensive.
Finally, integrate your audit logs into a broader incident response and compliance workflow. Define procedures for your team to query logs during an investigation. For projects requiring formal compliance (e.g., with specific jurisdictional regulations), work with legal counsel to ensure your event schema captures all necessary data points—such as the exact timestamp, initiating party, and final state change—to create an admissible audit trail. The robustness of this system directly impacts the trust users and regulators place in your on-chain eligibility platform.
Development Resources and Tools
Tools and patterns for launching a smart contract system that performs real-time eligibility checks based on on-chain state, off-chain signals, or cryptographic proofs.
Frequently Asked Questions
Common technical questions and solutions for developers building real-time on-chain eligibility systems.
Why does my eligibility check pass in local tests but fail on a live network?

This is often caused by state differences between your local fork and the live network. The most common culprits are:
- Out-of-sync block state: Your local fork may have stale data. The live contract you're querying (e.g., a token contract for balance checks) may have been updated.
- Missing msg.sender context: In tests, you might call the check directly. On-chain, it's called via a relayer or another contract. Ensure your logic doesn't rely on tx.origin and correctly uses the passed-in user address parameter.
- Gas and block limits: Your local environment has very high gas limits. On-chain, you must stay within the gas limit of the calling transaction. Profile gas usage with tools like hardhat-gas-reporter.
Fix: Use a live testnet fork (e.g., with Foundry's anvil --fork-url or forge test --fork-url) for final verification and implement event logging to trace the exact point of failure.
Conclusion and Next Steps
You have now built the core components of a smart contract system for real-time eligibility checks. This guide covered the essential steps from design to deployment.
This system demonstrates a modular approach to on-chain verification. The EligibilityManager contract acts as the central registry, while the ClaimVerifier contract handles the logic for specific checks. Using a factory pattern for ClaimVerifier instances allows for scalable, upgradeable rule sets. The key design feature is the use of off-chain signatures from a trusted verifier wallet: eligibility data never needs to be written to contract storage, so users pay gas only when they actually submit a claim, and complex verification logic stays off-chain until that point. This pattern is common in systems like token airdrops or gated NFT mints where proof must be provided without upfront user cost.
To extend this system, consider implementing additional verifier logic. For example, you could create a BalanceVerifier that checks if a user holds a minimum amount of a specific ERC-20 token, or a SnapshotVerifier that validates inclusion in a merkle tree. Each new verifier contract would implement the same verifyClaim interface. For production, you must rigorously manage the private key for the VERIFIER_SIGNER role, using a secure multi-signature wallet or a dedicated signing service. Audit the signature generation code to prevent replay attacks across different chains or contract instances.
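A sketch of such an extension (the IClaimVerifier interface shown here is a hypothetical stand-in; match it to the actual verifyClaim signature in your codebase):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";

/// Hypothetical shared interface that all verifiers implement.
interface IClaimVerifier {
    function verifyClaim(address user, bytes calldata data) external view returns (bool);
}

/// Verifier requiring a minimum balance of a specific ERC-20 token.
contract BalanceVerifier is IClaimVerifier {
    IERC20 public immutable token;
    uint256 public immutable minBalance;

    constructor(IERC20 _token, uint256 _minBalance) {
        token = _token;
        minBalance = _minBalance;
    }

    function verifyClaim(address user, bytes calldata) external view returns (bool) {
        return token.balanceOf(user) >= minBalance;
    }
}
```

Because every verifier shares one interface, the EligibilityManager can treat them interchangeably and new rule types can be registered without redeploying the core system.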
Next, integrate this system with a frontend application. Your dApp should: 1) Request a user's wallet connection, 2) Call getUserClaims to fetch eligible claim IDs, 3) For a selected claim, prompt the backend to generate a signature (e.g., via an API endpoint that holds the signer key), and 4) Submit the signed message to the claim function. Tools like WalletConnect and viem or ethers.js are essential for this integration. Monitor contract events like ClaimVerified to trigger UI updates or backend processes.
For further learning, explore related concepts and protocols. Study EIP-712 for structured typed data signing, which improves user experience and security in signature requests. Examine how Layer 2 solutions like Optimism or Arbitrum can reduce the gas costs for the final claim transaction. Review the architecture of existing eligibility systems used by protocols like Uniswap's governance or Optimism's airdrop for real-world design patterns. The complete code for this guide is available on the Chainscore Labs GitHub repository.