Launching a Competitive On-Chain Intelligence Program

A systematic guide for developers to monitor and analyze competitor protocols using on-chain data. Covers tracking key metrics, analyzing user behavior, and reverse-engineering incentive structures to inform product strategy.
introduction
GUIDE

Launching a Competitive On-Chain Intelligence Program

A practical framework for building a systematic on-chain intelligence program to track competitor activity, market trends, and strategic opportunities.

On-chain competitive intelligence transforms raw blockchain data into actionable insights about your competitors and the broader market. Unlike traditional web analytics, it provides a transparent, real-time view of user behavior, capital flows, and protocol interactions. A structured program moves beyond ad-hoc queries to establish continuous monitoring of key metrics like total value locked (TVL), user growth, transaction volume, and smart contract interactions. This data-driven approach is essential for DeFi protocols, NFT projects, and venture funds to make informed strategic decisions.

The first step is defining your intelligence objectives and identifying your competitive set. Are you tracking a direct protocol competitor's user adoption, monitoring venture capital wallet activity for market signals, or analyzing tokenomics of emerging projects? Clear goals determine your data sources. Primary tools include block explorers like Etherscan, specialized analytics platforms such as Nansen or Dune Analytics, and direct node access via providers like Alchemy or Infura. For custom analysis, you'll need to query blockchain data using SQL (via Dune) or libraries like web3.py and ethers.js.

Building your data pipeline is the core technical challenge. Start by tracking fundamental on-chain metrics. For a DeFi competitor, this includes daily active addresses, new vs. returning users, transaction fee expenditure, and liquidity pool compositions. Use Dune Analytics to create dashboards or set up automated queries. For example, to track a wallet's ERC-20 holdings, you might use an ethers.js snippet: const balance = await contract.balanceOf(walletAddress);. Automate data collection with cron jobs or orchestration tools to ensure your intelligence is current.
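
As a sketch of what automated collection can look like, the snippet below polls balanceOf for a watch-list of wallets and appends the results to a CSV for later charting; web3.py is used here in place of ethers.js, and the RPC URL, token, and wallet addresses are placeholders you would replace with your own.

python
import csv
import time
from web3 import Web3

# Placeholder values: substitute your own RPC endpoint, token, and wallets.
RPC_URL = "https://mainnet.example-rpc.io/YOUR_KEY"
TOKEN_ADDRESS = "0x0000000000000000000000000000000000000001"   # ERC-20 to track
WATCHED_WALLETS = ["0x0000000000000000000000000000000000000002"]

ERC20_ABI = [{"constant": True, "stateMutability": "view", "type": "function",
              "name": "balanceOf",
              "inputs": [{"name": "owner", "type": "address"}],
              "outputs": [{"name": "", "type": "uint256"}]}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
token = w3.eth.contract(address=Web3.to_checksum_address(TOKEN_ADDRESS), abi=ERC20_ABI)

def snapshot(path="balances.csv"):
    """Append one timestamped balance row per watched wallet."""
    block = w3.eth.block_number
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for wallet in WATCHED_WALLETS:
            raw = token.functions.balanceOf(Web3.to_checksum_address(wallet)).call()
            writer.writerow([block, int(time.time()), wallet, raw])

if __name__ == "__main__":
    snapshot()  # schedule this under cron (or an orchestrator) for a continuous time series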

Moving from data to insights requires analysis and context. Raw transaction counts are less valuable than understanding why activity is changing. Correlate on-chain events with off-chain announcements like partnership reveals or token launches. Perform wallet clustering to identify if activity is driven by a few whales or broad retail adoption. Benchmark your metrics against industry averages or leading protocols to gauge relative performance. This analytical layer turns data points into a narrative about competitor strengths, weaknesses, and strategic pivots.

Finally, operationalize your findings into a regular reporting cadence. Create a competitive intelligence dashboard that highlights weekly or monthly changes in key metrics. Establish alerts for significant on-chain events, such as a large token transfer from a competitor's treasury or a sudden spike in smart contract calls. Share distilled reports with product, growth, and executive teams to inform roadmap planning and marketing strategy. The goal is to create a feedback loop where on-chain insights directly influence tactical and strategic business decisions.

prerequisites
PREREQUISITES AND SETUP

Launching a Competitive On-Chain Intelligence Program

Establishing a robust on-chain intelligence program requires foundational knowledge and a structured data pipeline. This guide outlines the essential prerequisites and initial setup steps.

A competitive on-chain intelligence program is built on three core pillars: technical infrastructure, data literacy, and clear objectives. Before writing a single query, you must define your goals. Are you tracking protocol health, monitoring for arbitrage opportunities, or analyzing user behavior? Your objective dictates the data sources you'll need, such as block explorers like Etherscan, raw node providers (e.g., Alchemy, QuickNode), and specialized analytics platforms (e.g., Dune Analytics, Flipside Crypto).

The technical setup begins with accessing blockchain data. For real-time, high-volume analysis, you'll need a reliable RPC endpoint. Services like Alchemy, Infura, and QuickNode provide managed nodes with enhanced APIs. For historical analysis and complex aggregations, platforms like The Graph for subgraphs or Dune for SQL-based queries are essential. You should also be familiar with core concepts: block structure, event logs, transaction traces, and smart contract ABIs, as these are the raw materials of your analysis.

Your analytical toolkit should include proficiency in SQL for querying indexed data and a scripting language like Python or JavaScript for interacting directly with nodes and processing data. Libraries such as web3.py for Python and ethers.js for JavaScript are standard for building custom data collectors. Set up a local development environment with these libraries and connect to your RPC provider. Initial tests should involve simple queries, like fetching the latest block number or decoding event logs from a known contract, such as a Uniswap V3 pool.
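
A reasonable first smoke test, sketched below with web3.py, is to confirm the RPC connection, read the latest block number, and pull the raw logs a known contract emitted over the last few blocks. The endpoint and pool address are placeholders; use your provider key and a verified address from Etherscan.

python
from web3 import Web3

# Placeholder RPC endpoint and contract address.
w3 = Web3(Web3.HTTPProvider("https://mainnet.example-rpc.io/YOUR_KEY"))
POOL_ADDRESS = Web3.to_checksum_address("0x0000000000000000000000000000000000000003")

assert w3.is_connected(), "RPC endpoint unreachable"

latest = w3.eth.block_number
print(f"Latest block: {latest}")

# Pull the raw event logs the pool emitted over the last 50 blocks.
logs = w3.eth.get_logs({"address": POOL_ADDRESS, "fromBlock": latest - 50, "toBlock": latest})
print(f"{len(logs)} logs emitted in the last 50 blocks")
print(logs[0]["topics"] if logs else "no recent activity")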

Finally, establish a workflow for validation and iteration. On-chain data is complex and can be misinterpreted. Start by replicating known metrics from reputable dashboards to verify your methodology. Implement a version-controlled repository for your queries and scripts, and document your data sources and assumptions. This disciplined setup phase transforms ad-hoc investigation into a reproducible, scalable intelligence operation capable of generating actionable insights.

key-concepts
INTELLIGENCE PROGRAM

Core On-Chain Metrics to Track

Building a competitive on-chain intelligence program starts with tracking the right data. These core metrics provide the foundational signals for analyzing protocol health, user behavior, and market dynamics.

Active Addresses & Transaction Count

These metrics gauge genuine user engagement and network utilization.

  • Active Addresses: Unique addresses that initiate a transaction in a given period (daily/weekly).
  • Transaction Count: Total number of on-chain transactions. High counts with low value can indicate bot activity or spam.
  • Analysis: Track ratios like transactions per active address to understand user sophistication and intent.
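
A minimal sketch of that ratio, assuming a one-row-per-transaction export (illustrative file and column names) from Dune or your own indexer:

python
import pandas as pd

# Assumes a CSV export with columns: block_time, from_address.
txs = pd.read_csv("competitor_transactions.csv", parse_dates=["block_time"])
txs["day"] = txs["block_time"].dt.date

daily = txs.groupby("day").agg(
    tx_count=("from_address", "size"),
    active_addresses=("from_address", "nunique"),
)
daily["tx_per_active_address"] = daily["tx_count"] / daily["active_addresses"]
print(daily.tail())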

Gas Fees & Network Congestion

Gas prices and block space utilization are direct measures of network demand and economic activity.

  • Key Metrics: Average gas price (Gwei), total gas used per day, and base fee (for EIP-1559 chains).
  • Insight: Spikes indicate high demand for block space (e.g., NFT mints, token launches). Sustained high fees can signal scalability issues.
  • Tool: Use block explorers like Etherscan or dedicated gas trackers for real-time data.
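
A quick way to sample these numbers yourself is to read the latest block header over RPC; a minimal web3.py sketch (placeholder endpoint) follows:

python
from web3 import Web3

# Placeholder RPC URL; any EIP-1559 chain endpoint works.
w3 = Web3(Web3.HTTPProvider("https://mainnet.example-rpc.io/YOUR_KEY"))

block = w3.eth.get_block("latest")
base_fee_gwei = Web3.from_wei(block["baseFeePerGas"], "gwei")
utilization = block["gasUsed"] / block["gasLimit"]

print(f"Block {block['number']}: base fee {base_fee_gwei:.2f} gwei, "
      f"{utilization:.0%} of the gas limit used")
print(f"Current suggested gas price: {Web3.from_wei(w3.eth.gas_price, 'gwei'):.2f} gwei")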

Token Holder Concentration & Flow

Analyze token distribution and movement to assess decentralization and potential market risk.

  • Concentration: Percentage of supply held by top 10/100 wallets (excluding contracts like DEX pools).
  • Flow: Track net transfers between exchange addresses (likely selling) and smart contracts (likely staking).
  • Tools: Nansen and Arkham Intelligence label addresses to provide context for large movements.
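
If you have a holder snapshot exported to CSV (file, column names, and the excluded contract address below are illustrative), concentration is a few lines of pandas:

python
import pandas as pd

# Assumes a holder snapshot with columns: address, balance, plus a set of known
# contract addresses (DEX pools, bridges) to exclude from the calculation.
EXCLUDED_CONTRACTS = {"0x0000000000000000000000000000000000000004"}  # placeholder

holders = pd.read_csv("token_holders.csv")
holders = holders[~holders["address"].str.lower().isin(EXCLUDED_CONTRACTS)]

total = holders["balance"].sum()
top10 = holders.nlargest(10, "balance")["balance"].sum()
top100 = holders.nlargest(100, "balance")["balance"].sum()

print(f"Top 10 wallets hold {top10 / total:.1%} of tracked supply")
print(f"Top 100 wallets hold {top100 / total:.1%} of tracked supply")
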
data-sources-methods
ON-CHAIN INTELLIGENCE

Data Sources and Collection Methods

Building a competitive on-chain intelligence program starts with a robust data pipeline. This guide covers the primary sources of blockchain data and the technical methods for collecting it at scale.

The foundation of any on-chain intelligence program is raw blockchain data, which is accessed via full nodes or archive nodes. A full node validates transactions and maintains the current state, while an archive node retains the complete historical state, enabling deep historical analysis. For Ethereum, running a Geth or Erigon archive node is the most direct method, but it requires significant storage (over 12TB) and synchronization time. Services like Alchemy, Infura, and QuickNode offer managed RPC endpoints, abstracting this infrastructure complexity. The core data types include blocks, transactions, logs (event emissions), and internal traces, each providing a different lens into on-chain activity.

For scalable and efficient data collection, developers use the JSON-RPC API standard. This protocol allows programs to query node data. Common methods include eth_getBlockByNumber to fetch block details, eth_getTransactionReceipt for transaction outcomes and logs, and eth_getLogs for filtering events by contract address and topics. To track real-time activity, a WebSocket connection (eth_subscribe) is essential for listening to new pending transactions and blocks as they are propagated through the network, enabling low-latency monitoring and alerting systems.
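
The web3.py calls below map one-to-one onto those JSON-RPC methods; the endpoint is a placeholder, and the log filter uses the standard ERC-20 Transfer topic as an example.

python
from web3 import Web3

# Placeholder endpoint; each call below wraps one of the JSON-RPC methods above.
w3 = Web3(Web3.HTTPProvider("https://mainnet.example-rpc.io/YOUR_KEY"))

# eth_getBlockByNumber: block header plus, optionally, full transaction objects.
block = w3.eth.get_block("latest", full_transactions=True)
print(block["number"], len(block["transactions"]), "transactions")

# eth_getTransactionReceipt: status, gas used, and the logs a transaction emitted.
if block["transactions"]:
    receipt = w3.eth.get_transaction_receipt(block["transactions"][0]["hash"])
    print(receipt["status"], receipt["gasUsed"], len(receipt["logs"]), "logs")

# eth_getLogs: filter events by topic (and optionally address) over a block range.
TRANSFER_TOPIC = Web3.keccak(text="Transfer(address,address,uint256)").hex()
logs = w3.eth.get_logs({
    "fromBlock": block["number"] - 10,
    "toBlock": block["number"],
    "topics": [TRANSFER_TOPIC],  # add an "address" key to narrow to one contract
})
print(len(logs), "ERC-20 Transfer events in the last 10 blocks")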

While RPC calls provide the raw data, transforming it into an analyzable format is the next challenge. This is where indexing comes in. Projects build indexing pipelines that listen to chain data, decode smart contract ABIs to make sense of event logs, and structure the information into relational databases or data lakes. Popular frameworks for this include The Graph, which uses a subgraph manifest to define the data schema and mapping logic, and Subsquid, which offers a high-performance SDK for building custom ETL (Extract, Transform, Load) pipelines. These tools convert raw hexadecimal log data into human-readable fields.

Beyond the base layer, auxiliary data sources are critical for context. Token price feeds from oracles like Chainlink, Decentralized Exchange (DEX) pool reserves from Uniswap or Curve contracts, and bridge transaction records from protocols like Wormhole are necessary to calculate metrics like portfolio value, liquidity depth, and cross-chain flow. Collecting this data often requires interacting with multiple smart contracts and parsing their specific event structures, which must be meticulously documented and updated.
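
As one concrete example, a Chainlink AggregatorV3 price feed can be read with a minimal ABI as sketched below; the feed address is a placeholder, and the real address for the pair you need should be taken from Chainlink's documentation.

python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.example-rpc.io/YOUR_KEY"))  # placeholder

# Minimal ABI covering only the two read functions we call.
FEED_ABI = [
    {"name": "decimals", "inputs": [], "outputs": [{"name": "", "type": "uint8"}],
     "stateMutability": "view", "type": "function"},
    {"name": "latestRoundData", "inputs": [],
     "outputs": [{"name": "roundId", "type": "uint80"},
                 {"name": "answer", "type": "int256"},
                 {"name": "startedAt", "type": "uint256"},
                 {"name": "updatedAt", "type": "uint256"},
                 {"name": "answeredInRound", "type": "uint80"}],
     "stateMutability": "view", "type": "function"},
]
feed = w3.eth.contract(
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000005"),
    abi=FEED_ABI,
)

decimals = feed.functions.decimals().call()
_, answer, _, updated_at, _ = feed.functions.latestRoundData().call()
print(f"Price: {answer / 10 ** decimals:.2f} (last updated at unix {updated_at})")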

Finally, establishing a reliable collection method requires planning for chain reorganizations (reorgs), RPC rate limits, and data validation. A robust pipeline must handle reorgs by having a confirmation depth (e.g., waiting for 12 block confirmations on Ethereum) and the ability to revert orphaned data. Implementing retry logic with exponential backoff for rate-limited requests and validating the consistency of fetched data against multiple RPC providers are best practices for ensuring the integrity and reliability of your on-chain intelligence data stream.
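
A minimal sketch of both safeguards, assuming a placeholder endpoint and contract address: index only up to a confirmed head, and wrap RPC calls in exponential-backoff retries.

python
import time
from web3 import Web3

# Placeholder endpoint and contract address.
w3 = Web3(Web3.HTTPProvider("https://mainnet.example-rpc.io/YOUR_KEY"))
WATCHED_CONTRACT = "0x0000000000000000000000000000000000000006"
CONFIRMATIONS = 12  # blocks shallower than this are treated as reorg-prone

def safe_head() -> int:
    """Highest block number considered final enough to index."""
    return w3.eth.block_number - CONFIRMATIONS

def get_logs_with_retry(params: dict, max_attempts: int = 5):
    """Retry rate-limited or flaky RPC calls with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return w3.eth.get_logs(params)
        except Exception as exc:  # narrow this to your provider's error types
            if attempt == max_attempts - 1:
                raise
            delay = 2 ** attempt  # 1s, 2s, 4s, ...
            print(f"RPC error ({exc}); retrying in {delay}s")
            time.sleep(delay)

# Index only up to the confirmed head; re-scan the tail on the next run so any
# blocks orphaned by a reorg are replaced with canonical data.
head = safe_head()
logs = get_logs_with_retry({"address": WATCHED_CONTRACT, "fromBlock": head - 100, "toBlock": head})
print(f"Indexed {len(logs)} logs up to confirmed block {head}")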

COMPETITOR METRICS

On-Chain Intelligence Platform Comparison

Key performance and feature metrics for leading on-chain analytics platforms.

| Metric / Feature | Nansen | Dune Analytics | Arkham Intelligence |
| --- | --- | --- | --- |
| Data Freshness | < 30 sec | ~5 min | < 15 sec |
| Historical Data Depth | Full history | Full history | Jan 2020 onward |
| Smart Money Wallet Labels | | | |
| Custom Query Language | | | |
| Real-time Alerting | | | |
| API Rate Limit (req/min) | 60 | 100 | 300 |
| On-Chain Entity Resolution | | | |
| Average Query Execution Time | < 2 sec | < 5 sec | < 1 sec |

analyzing-user-flow
ON-CHAIN INTELLIGENCE

Analyzing User Inflow and Outflow

Understanding the directional flow of capital and users is a foundational metric for any competitive on-chain intelligence program. This analysis reveals market sentiment, protocol health, and emerging trends.

User inflow measures new capital and participants entering a protocol or ecosystem, often signaling growth, successful marketing, or positive sentiment. Conversely, outflow tracks capital leaving, which can indicate profit-taking, dissatisfaction, or a shift to competing platforms. By analyzing the net flow (inflow minus outflow), teams can gauge real-time traction. For example, a sustained positive net flow for a new L2 like Arbitrum or Optimism suggests successful user adoption, while sudden outflows from a major DeFi protocol like Aave could precede a liquidity crisis.

To operationalize this, you need to query on-chain data for specific events. Inflow is often calculated by tracking new deposit transactions or first-time interactions from addresses. A basic SQL query for a hypothetical vault might look for first-time depositors: SELECT depositor, COUNT(*) AS first_deposit FROM vault_deposits GROUP BY depositor HAVING COUNT(*) = 1. Outflow analysis requires monitoring withdrawal functions and bridge-out transactions to other chains. Tools like Dune Analytics, Flipside Crypto, and your own indexer are essential for this granular event logging.

Beyond simple counts, velocity and source analysis add depth. Velocity examines the frequency and size of inflows/outflows—large, infrequent whale movements differ from steady retail activity. Source analysis traces where incoming users are migrating from, such as identifying if a surge on Polygon zkEVM is coming from Ethereum mainnet users or from another L2. This requires analyzing bridge transaction origins and using wallet clustering techniques to map user journeys across chains.

Implementing alerts for abnormal flow patterns is a key actionable output. Set thresholds for daily net flow deviations (e.g., >20% drop) and monitor for correlated social sentiment or news. For a DeFi protocol, pairing outflow data with a drop in Total Value Locked (TVL) and rising loan collateralization ratios on platforms like MakerDAO can provide an early warning system. This triangulation of data points moves analysis from observation to actionable intelligence.
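
One way to implement the net-flow threshold, assuming deposit and withdrawal events exported to CSV (file and column names are illustrative):

python
import pandas as pd

# Assumes event exports (e.g. from Dune or your own indexer) with columns: day, amount.
deposits = pd.read_csv("vault_deposits.csv", parse_dates=["day"])
withdrawals = pd.read_csv("vault_withdrawals.csv", parse_dates=["day"])

inflow = deposits.groupby("day")["amount"].sum()
outflow = withdrawals.groupby("day")["amount"].sum()
net_flow = inflow.sub(outflow, fill_value=0).sort_index()

# Flag days where net flow falls more than 20% below the trailing 7-day average
# (assumes a positive baseline; adapt the rule for persistently negative flows).
baseline = net_flow.rolling(7, min_periods=3).mean().shift(1)
alerts = net_flow[net_flow < baseline * 0.8]
print(alerts)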

Finally, benchmark your findings against competitors. A protocol might show positive absolute inflow, but if its market share of total sector inflow is declining, it's losing the competitive battle. This requires building a comparable dataset across similar protocols—analyzing inflow for the top five NFT marketplaces or lending protocols simultaneously. This competitive intelligence informs strategic decisions on incentives, partnerships, and product development to capture and retain users.

reverse-engineering-incentives
ON-CHAIN INTELLIGENCE

Reverse-Engineering Incentive Structures

A practical guide to analyzing and replicating the tokenomics and reward mechanisms that drive successful on-chain applications.

On-chain intelligence begins with reverse-engineering incentive structures to understand why users and capital behave the way they do. This involves moving beyond surface-level metrics like Total Value Locked (TVL) to analyze the precise economic flywheels embedded in smart contracts. For a lending protocol like Aave, this means dissecting the interplay between supply APY, borrowing rates, liquidity mining rewards, and governance token (AAVE) staking incentives. The goal is to map the flow of value: how rewards are funded, who captures them, and what actions are economically rational for each participant. This foundational analysis reveals the protocol's true growth engine and potential stress points.

To launch a competitive intelligence program, you need systematic data collection. Start by identifying the core smart contracts for the target protocol (e.g., the lending pool, staking vault, reward distributor). Use a block explorer or a service like Dune Analytics to query for key events: Deposit, Borrow, RewardPaid, Stake. For a yield aggregator like Yearn, you'd track vault deposits/withdrawals and harvest events to calculate net yields. Building a dashboard that tracks these metrics over time—preferably in real-time via a node RPC—allows you to model user profitability and predict capital migration. The key is to automate the extraction of raw, actionable data from the chain.

The next step is quantitative modeling: translate the on-chain data into economic models. For a liquidity pool on Uniswap V3, calculate the impermanent loss for LPs at different price ranges and fee tiers, then compare it against fee income and any liquidity-mining rewards on offer. Use this to determine the real yield after accounting for gas costs and price volatility. For a Layer 2 like Arbitrum, model how sequencer revenue has evolved since the Nitro upgrade and how that revenue accrues to the DAO treasury. Your models should answer specific questions: "At what TVL does the protocol's treasury become sustainable?" or "What is the break-even point for a liquidity miner?" This transforms data into a strategic asset.
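
As a starting point, here is a minimal Python sketch of the full-range (V2-style) impermanent-loss formula and a rough net-yield estimate; concentrated V3 positions amplify these numbers, and every input below is illustrative rather than a measured value.

python
from math import sqrt

def impermanent_loss(price_ratio: float) -> float:
    """Classic 50/50 constant-product IL: LP value vs. holding, as a fraction.
    Full-range (V2-style) formula; concentrated V3 ranges amplify this effect."""
    return 2 * sqrt(price_ratio) / (1 + price_ratio) - 1

def net_yield(fee_apr: float, reward_apr: float, price_ratio: float,
              gas_cost_usd: float, position_usd: float, days: int = 365) -> float:
    """Rough net return for an LP after IL and gas drag (all inputs illustrative)."""
    gross = (fee_apr + reward_apr) * days / 365
    gas_drag = gas_cost_usd / position_usd
    return gross + impermanent_loss(price_ratio) - gas_drag

# Example: token doubles vs. its pair, 10% fee APR, 5% incentive APR,
# $120 of gas on a $50,000 position held for a year.
print(f"IL at 2x: {impermanent_loss(2.0):.2%}")            # about -5.7%
print(f"Net yield: {net_yield(0.10, 0.05, 2.0, 120, 50_000):.2%}")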

Finally, apply your intelligence competitively and defensively. Use your models to stress-test your own protocol's tokenomics before launch. If you're building a competitor to a leading DEX, simulate how your proposed fee switch or token emission schedule would perform under various market conditions captured from the incumbent's data. Monitor for incentive exploits, such as reward farming loops or governance attacks, by setting alerts for anomalous contract interactions. The most effective on-chain intelligence programs don't just observe; they anticipate capital flows and design mechanisms that are more resilient, efficient, and attractive than those of competitors, turning analysis into a core competitive advantage.

PRACTICAL IMPLEMENTATIONS

Code Examples: Querying Data

Getting Started with Subgraphs

Subgraphs are open APIs for querying blockchain data, primarily on Ethereum and EVM chains. They index data from smart contracts into a GraphQL endpoint. The most common use case is fetching historical events and contract states.

Key Concepts:

  • Entity: A data type you define in your subgraph schema (e.g., Swap, User).
  • Event Handler: A mapping function that processes blockchain events and saves data to entities.
  • Query: A GraphQL request to fetch indexed data.

Simple Query Example: This fetches the 10 most recent swaps from a Uniswap V2 subgraph.

graphql
{
  swaps(first: 10, orderBy: timestamp, orderDirection: desc) {
    id
    amount0In
    amount1Out
    sender
    to
    transaction {
      id
    }
  }
}

You can run this query directly in a hosted service explorer like The Graph's Playground.
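
The same query can also be run programmatically; the Python sketch below posts it with requests, using an illustrative subgraph URL that you would replace with the endpoint listed for your target subgraph.

python
import requests

# Illustrative endpoint; substitute the URL shown for your subgraph in The Graph explorer.
SUBGRAPH_URL = "https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2"

QUERY = """
{
  swaps(first: 10, orderBy: timestamp, orderDirection: desc) {
    id
    amount0In
    amount1Out
    sender
    to
    transaction { id }
  }
}
"""

response = requests.post(SUBGRAPH_URL, json={"query": QUERY}, timeout=30)
response.raise_for_status()
for swap in response.json()["data"]["swaps"]:
    print(swap["transaction"]["id"], swap["amount0In"], swap["amount1Out"])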

building-dashboards-alerts
ON-CHAIN INTELLIGENCE

Building Dashboards and Setting Alerts

A systematic guide to implementing a competitive on-chain intelligence program using dashboards for monitoring and alerts for proactive insights.

An effective on-chain intelligence program moves beyond ad-hoc analysis to systematic monitoring. The core components are dashboards for real-time situational awareness and alerts for proactive notification of critical events. This framework allows teams to track key performance indicators (KPIs) like total value locked (TVL), user growth, transaction volume, and protocol-specific metrics. For developers, this means instrumenting data pipelines from sources like The Graph, Dune Analytics, or direct RPC nodes to feed into visualization tools such as Grafana or custom React applications.

Building a dashboard starts with defining your intelligence objectives. Are you monitoring a competitor's DeFi protocol, tracking NFT collection health, or overseeing your own smart contract deployment? Each goal dictates different metrics. For a lending protocol, you might track borrow and utilization rates, liquidity depths across pools, and oracle price feeds. Technical implementation often involves writing GraphQL queries for subgraphs or SQL for Dune, then using libraries like Chart.js or commercial platforms to render the data. The key is creating views that surface signal, not noise.

Alerts transform passive dashboards into active intelligence systems. They are conditional triggers that notify you when on-chain activity meets predefined criteria. Common alert conditions include large token transfers (whale movements), significant liquidity withdrawals, governance proposal submissions, or smart contract function calls exceeding a gas threshold. Services like Chainscore, Tenderly, and OpenZeppelin Defender provide alerting infrastructure. For a custom setup, you can write a script that polls an RPC node or subgraph and sends notifications via Discord webhooks, Telegram bots, or email.

Here is a basic Python example using Web3.py to monitor for large ETH transfers and trigger an alert:

python
import time

import requests
from web3 import Web3

w3 = Web3(Web3.HTTPProvider('YOUR_INFURA_URL'))
ALERT_THRESHOLD_WEI = Web3.to_wei(500, 'ether')  # threshold denominated in wei
DISCORD_WEBHOOK_URL = 'your_webhook_url'

# Minimal ABI covering only the ERC-20 Transfer event we listen for.
ERC20_TRANSFER_ABI = [{
    'anonymous': False,
    'name': 'Transfer',
    'type': 'event',
    'inputs': [
        {'indexed': True, 'name': 'from', 'type': 'address'},
        {'indexed': True, 'name': 'to', 'type': 'address'},
        {'indexed': False, 'name': 'value', 'type': 'uint256'},
    ],
}]

def handle_event(event):
    value_wei = int(event['args']['value'])
    if value_wei > ALERT_THRESHOLD_WEI:
        value_eth = Web3.from_wei(value_wei, 'ether')
        message = (f"Large Transfer: {value_eth} ETH "
                   f"from {event['args']['from']} to {event['args']['to']}")
        requests.post(DISCORD_WEBHOOK_URL, json={'content': message})

# Create a filter for Transfer events on the WETH contract.
weth_contract = w3.eth.contract(
    address='0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2',
    abi=ERC20_TRANSFER_ABI,
)
# web3.py v6 keyword; in v7 this argument is named from_block.
event_filter = weth_contract.events.Transfer.create_filter(fromBlock='latest')

while True:
    for event in event_filter.get_new_entries():
        handle_event(event)
    time.sleep(10)

To maintain a competitive edge, your alert logic must evolve. Start with broad alerts for major events, then refine them to reduce false positives and capture nuanced signals. For instance, instead of alerting on any governance proposal, alert only when the proposal involves treasury funds exceeding a certain amount or comes from a newly active delegate. Integrate multiple data layers: combine on-chain alerts with off-chain sentiment from social media APIs or development activity from GitHub. This holistic view helps distinguish between routine activity and genuinely significant shifts in the ecosystem.

Ultimately, the most competitive programs are those that automate response. Beyond sending a notification, an alert can trigger an automated analysis script, update a risk model, or even execute a hedging transaction via a smart wallet (with appropriate safeguards). This closes the loop from intelligence to action. Regularly audit your dashboard metrics and alert thresholds to ensure they remain aligned with your strategic goals, as the on-chain landscape and your own objectives will continuously change.

ON-CHAIN INTELLIGENCE

Frequently Asked Questions

Common questions and technical clarifications for developers and analysts building or scaling on-chain intelligence systems.

On-chain intelligence is the practice of extracting actionable insights from raw, public blockchain data to inform decision-making. While blockchain analytics often focuses on compliance and transaction tracing (e.g., Chainalysis, TRM Labs), on-chain intelligence is broader and more proactive. It involves:

  • Real-time monitoring of smart contract interactions, liquidity flows, and governance proposals.
  • Predictive modeling using metrics like Net Unrealized Profit/Loss (NUPL) or exchange netflow.
  • Protocol-specific analysis for DeFi (e.g., tracking impermanent loss in Uniswap v3 concentrated liquidity positions) or NFTs (e.g., analyzing Blur bidding strategies).

The core difference is intent: analytics describes what happened, while intelligence seeks to explain why it happened and what might happen next, requiring deeper technical integration with node RPCs and indexing services like The Graph or Goldsky.
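
For reference, the NUPL metric mentioned above is conventionally computed from market capitalization and realized capitalization; the numbers in this sketch are illustrative only.

python
def nupl(market_cap: float, realized_cap: float) -> float:
    """Net Unrealized Profit/Loss: the share of market cap that is unrealized profit."""
    return (market_cap - realized_cap) / market_cap

# Illustrative inputs only.
print(f"NUPL: {nupl(market_cap=800e9, realized_cap=550e9):.2f}")  # 0.31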

conclusion-next-steps
IMPLEMENTATION ROADMAP

Conclusion and Strategic Next Steps

This guide has outlined the core components of an on-chain intelligence program. The final step is to operationalize these concepts into a sustainable, value-generating system.

Building a competitive on-chain intelligence program is an iterative process, not a one-time project. Start by defining clear objectives aligned with your organization's goals, whether that's alpha generation, risk management, or product development. Begin with a focused scope—such as monitoring a specific protocol like Uniswap V3 or tracking wallet activity for a particular NFT collection—to prove value before scaling. Use modular tools like The Graph for historical queries and Dune Analytics dashboards for real-time metrics to build your initial intelligence pipeline without heavy infrastructure investment.

As your program matures, transition from reactive monitoring to proactive signal generation. This involves developing custom bots using libraries like ethers.js or viem to listen for specific on-chain events—large DEX swaps, governance proposal submissions, or unusual token transfers—and trigger alerts. For example, a bot could monitor Swap events on a Curve pool and calculate the resulting price impact to identify potential manipulation. Data quality is paramount; always verify contract addresses and ABIs from official sources like Etherscan's contract verification to ensure your analysis is based on accurate information.

The most advanced programs integrate machine learning models to uncover non-obvious patterns. You can train models on historical transaction data to predict metrics like the likelihood of a token's price pump based on holder concentration changes or the risk of an Aave loan liquidation. However, treat model outputs as one input among many; always maintain a human-in-the-loop for validation. Continuous iteration is key—regularly backtest your signals, refine your data sources, and stay updated on new blockchain developments and analytical methodologies to maintain your competitive edge.

To ensure long-term success, establish a knowledge-sharing framework within your team. Document your data pipelines, signal logic, and key findings in an internal wiki. Schedule regular review sessions to discuss new insights and failed hypotheses. Consider contributing anonymized research to the public domain, as engaging with the broader analytics community on platforms like Dune or GitHub can provide valuable feedback and uncover collaborative opportunities. The goal is to build an institutional memory that accelerates learning and decision-making.

Finally, measure the program's impact with concrete metrics. Track the signal-to-noise ratio of your alerts, the ROI of trades informed by your research, or the reduction in response time to security incidents. This data justifies further investment and guides strategic pivots. The on-chain data landscape evolves rapidly; a successful intelligence program is a living system that adapts to new chains, protocols like zkSync Era or Base, and analytical techniques, turning raw blockchain data into a sustained strategic advantage.