Setting Up a Cross-Chain User Acquisition Dashboard

A technical tutorial for developers to build a dashboard that aggregates and visualizes user acquisition metrics across multiple blockchain deployments.
INTRODUCTION


A guide to building a dashboard that tracks user acquisition across multiple blockchains, providing actionable insights for growth teams.

In the multi-chain ecosystem, user activity is fragmented across networks like Ethereum, Polygon, Arbitrum, and Solana. A cross-chain user acquisition dashboard aggregates this data into a single view, allowing teams to measure growth, identify high-performing channels, and allocate resources effectively. Without it, you're making decisions with an incomplete picture of your protocol's adoption. This guide walks through the core concepts and initial setup for building such a dashboard.

The foundation of any acquisition dashboard is on-chain data. You'll need to track key events such as wallet interactions with your smart contracts, token transfers, and new unique addresses. For cross-chain analysis, you must query data from multiple sources, including block explorers, indexing services like The Graph, or node providers such as Alchemy and Infura. Each chain has its own data structure and APIs, so normalization is a critical first step.
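
As a concrete sketch, each chain-specific fetcher can map its raw payload into one shared record shape before anything is written to storage. The field names below are illustrative assumptions, not a fixed schema:

javascript
// Illustrative normalized record; field names are assumptions
function normalizeTransferEvent(chain, raw) {
  return {
    chain,                                            // 'ethereum', 'polygon', 'solana', ...
    txHash: raw.transactionHash,
    blockNumber: raw.blockNumber,
    from: raw.from ? raw.from.toLowerCase() : null,   // EVM addresses are case-insensitive
    to: raw.to ? raw.to.toLowerCase() : null,
    value: raw.value ? raw.value.toString() : '0',    // keep uint256 values as strings
    observedAt: new Date().toISOString(),
  };
}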

Start by defining your Key Performance Indicators (KPIs). Common metrics for user acquisition include: New Unique Wallets, Transaction Count per Chain, Contract Interaction Frequency, and Source of Funds analysis (tracking where users bridge assets from). For example, you might discover that 40% of your new users on Arbitrum are bridging from Ethereum via the Arbitrum Bridge, indicating a specific growth vector to optimize.

Technically, you can build this dashboard by setting up a backend service that polls blockchain data, processes it, and stores it in a database like PostgreSQL or TimescaleDB for time-series analysis. Use libraries such as ethers.js or viem to interact with EVM chains. For non-EVM chains like Solana, you would use the @solana/web3.js library. The frontend can then query this processed data via an API to display charts and tables.

A practical first step is to create a script that tracks new wallets interacting with a specific contract. Here's a simplified example using ethers.js to get new mints from an NFT contract on Ethereum:

javascript
// Assumes `contract` is an ethers v6 Contract instance for the NFT,
// connected to a provider; mints are Transfers from the zero address
const filter = contract.filters.Transfer(ethers.ZeroAddress); // v5 equivalent: ethers.constants.AddressZero
contract.on(filter, (from, to, tokenId, event) => {
  console.log(`New user ${to} minted token #${tokenId}`);
  // Log this acquisition event to your analytics database
});

This event-driven approach captures acquisition moments in real-time.

Finally, consider using specialized analytics platforms like Dune Analytics, Flipside Crypto, or Chainscore to accelerate development. These platforms provide pre-built queries and dashboards for common metrics, allowing you to fork and customize them for your protocol. The goal is to move from raw, chain-specific data to a clean, aggregated dashboard that clearly shows where your users are coming from and how they are engaging across all supported networks.

SETUP

Prerequisites

Before building a cross-chain user acquisition dashboard, you need to configure your development environment and gather the necessary data sources.

To build a cross-chain user acquisition dashboard, you'll need a foundational setup. This includes a Node.js environment (v18 or later) for running scripts and a package manager like npm or yarn. You'll also need a code editor such as VS Code. The core of the dashboard will be built using a framework like Next.js or Vite for the frontend, paired with a charting library such as Recharts or Chart.js for data visualization. Ensure you have Git installed for version control and a GitHub account to manage your repository.

Access to blockchain data is non-negotiable. You will need API keys from several providers. For on-chain data aggregation, services like Chainscore, Covalent, or The Graph are essential. You'll also need a WalletConnect Cloud project ID if you add wallet connection with RainbowKit or a similar library. If you plan to index specific protocol events, you may require an Alchemy or Infura API key for reliable RPC access to multiple chains like Ethereum, Polygon, and Arbitrum. Store these keys securely as environment variables in a .env.local file.

Your dashboard's logic will rely on querying smart contracts and parsing transaction data. Familiarity with Ethers.js v6 or Viem is required for interacting with blockchains. You should understand core concepts like ABIs, event logs, and ERC-20 token standards. For example, to track a user's first transaction (a key acquisition metric), you'll need to query Transfer events for a specific token or NFT contract across chains and filter for events whose from address is the zero address (0x000...), which indicates a mint.

Finally, define your data schema and storage. For a simple MVP, you can use a local JSON file or SQLite. For a production-ready dashboard that updates in real-time, consider a time-series database like TimescaleDB or a managed service like Supabase. You'll need to design tables to store aggregated metrics such as new_wallets_per_chain, bridge_volume, and protocol_first_interactions. Structuring your data pipeline to fetch, transform, and store this information efficiently is the final prerequisite before writing your first line of dashboard code.
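
As a minimal sketch, the tables named above could look like this in PostgreSQL or TimescaleDB (the column choices are assumptions to adapt to your pipeline):

sql
-- Minimal metric tables; extend with bridge_volume and campaign columns as needed
CREATE TABLE new_wallets_per_chain (
  day         DATE    NOT NULL,
  chain       TEXT    NOT NULL,
  new_wallets INTEGER NOT NULL,
  PRIMARY KEY (day, chain)
);

CREATE TABLE protocol_first_interactions (
  user_address  TEXT        NOT NULL,
  chain         TEXT        NOT NULL,
  first_tx_hash TEXT        NOT NULL,
  first_seen_at TIMESTAMPTZ NOT NULL,
  PRIMARY KEY (user_address, chain)
);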

SYSTEM ARCHITECTURE OVERVIEW

Architecture Overview

This guide details the architectural components and data flow for building a dashboard that tracks user acquisition across multiple blockchain ecosystems.

A cross-chain user acquisition dashboard aggregates and analyzes on-chain activity to identify new users entering a protocol's ecosystem from other chains. The core architecture consists of three layers: a data ingestion layer that queries blockchain RPC nodes and indexers, a processing and storage layer that normalizes and stores the data, and a frontend visualization layer that presents insights. Key data sources include bridge transaction logs from protocols like Wormhole and LayerZero, DEX swap events on Uniswap and PancakeSwap, and new wallet creation events. The system must handle heterogeneous data formats and varying block times across networks like Ethereum, Solana, and Polygon.

The data ingestion layer is typically built using specialized indexers or custom listeners. For comprehensive coverage, you can use a combination of The Graph for indexed subgraphs, Covalent or Alchemy for unified APIs, and direct RPC calls for real-time event streaming. A listener service, written in Node.js or Python, subscribes to specific event logs. For example, to track users bridging to Arbitrum, you would monitor the DepositFinalized event on the Arbitrum bridge contract. The raw transaction data, including sender address, destination chain, token amount, and timestamp, is then pushed to a message queue like Kafka or RabbitMQ for reliable processing.
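
A minimal listener sketch with ethers v6 follows; BRIDGE_ADDRESS, the WebSocket URL, and the publishToQueue helper (wrapping your Kafka or RabbitMQ producer) are assumptions:

javascript
import { ethers } from 'ethers';

// Assumptions: BRIDGE_ADDRESS, the endpoint URL, and publishToQueue
const provider = new ethers.WebSocketProvider('wss://arb-mainnet.g.alchemy.com/v2/YOUR_KEY');
const bridge = new ethers.Contract(
  BRIDGE_ADDRESS,
  ['event DepositFinalized(address indexed l1Token, address indexed from, address indexed to, uint256 amount)'],
  provider
);

// Push each finalized deposit onto the queue for the processing layer
bridge.on('DepositFinalized', async (l1Token, from, to, amount, event) => {
  await publishToQueue('acquisition-events', {
    sender: from,
    recipient: to,
    token: l1Token,
    amount: amount.toString(),
    destinationChain: 'arbitrum',
    txHash: event.log.transactionHash,
    blockNumber: event.log.blockNumber,
  });
});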

In the processing layer, a stream processing engine (e.g., Apache Flink) or batch jobs (e.g., using dbt) transforms the raw data. This involves address clustering to link wallets belonging to the same user, calculating metrics like time-to-first-action, and attributing the acquisition source. Processed data is stored in a time-series database (TimescaleDB) for metrics and a columnar data warehouse (BigQuery) for complex analytics. Defining a clear user journey schema is critical; a new user might be defined as an address that interacts with your protocol's contract within 7 days of a cross-chain bridge transaction.
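
That definition translates directly into a join. A sketch, assuming bridge_deposits and protocol_first_interactions tables from the ingestion layer:

sql
-- New users attributed to a bridge: first protocol interaction
-- within 7 days of a bridge deposit (table and column names are assumptions)
SELECT p.user_address, b.source_chain, p.first_seen_at
FROM protocol_first_interactions p
JOIN bridge_deposits b
  ON b.recipient = p.user_address
 AND p.first_seen_at BETWEEN b.deposited_at
                         AND b.deposited_at + INTERVAL '7 days';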

The final component is the frontend dashboard, built with frameworks like React or Vue.js, that queries the processed data via a REST or GraphQL API. Effective visualizations include: a daily active users (DAU) chart segmented by origin chain, a funnel showing user progression from bridge to protocol interaction, and a leaderboard of the most effective acquisition channels. For actionable alerts, integrate with Slack or Discord webhooks to notify teams when acquisition from a specific chain spikes or drops below a threshold, enabling rapid response to market movements or bridge incentives.
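
A sketch of such an alert against a Slack incoming webhook; the 50% threshold and the SLACK_WEBHOOK_URL variable are assumptions:

javascript
// Alert when daily acquisition from a chain falls well below its trailing average
async function alertOnAcquisitionDrop(chain, todayCount, trailingAvg) {
  if (todayCount >= trailingAvg * 0.5) return; // only alert on a >50% drop

  await fetch(process.env.SLACK_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      text: `New-user acquisition from ${chain} dropped to ${todayCount} ` +
            `(trailing average ${trailingAvg.toFixed(0)}). Check bridges and campaigns.`,
    }),
  });
}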

CROSS-CHAIN ANALYTICS

Data Sources and Tools

Building a user acquisition dashboard requires aggregating and analyzing data from multiple blockchains. These tools provide the foundational infrastructure for tracking cross-chain activity.

DATA PROVIDERS

Chain Data Source Comparison

Comparison of primary data sources for tracking cross-chain user activity and on-chain metrics.

| Data Feature | The Graph (Subgraphs) | Covalent API | Chainscore API |
| --- | --- | --- | --- |
| Real-time Transaction Indexing | — | — | — |
| Historical Data Depth | Full chain history | Full chain history | Full chain history |
| Cross-Chain Query Support | — | 40+ chains | 50+ chains |
| Pre-built User Acquisition Metrics | — | Limited (wallet balances) | Comprehensive (cohorts, LTV, source) |
| Custom Query Flexibility | — | — | — |
| Data Freshness (Block Lag) | ~2-10 blocks | < 1 block | < 1 block |
| Pricing Model | Query fee (GRT) | Pay-as-you-go | Tiered subscription |
| Developer Onboarding Complexity | High (requires subgraph deployment) | Low (REST API) | Low (REST & GraphQL) |

DATA AGGREGATION

Step 1: Fetching Transaction Data from Multiple Chains

The foundation of any cross-chain dashboard is reliable data. This step covers how to programmatically collect transaction data from disparate blockchain networks.

To analyze user acquisition across chains, you must first gather raw transaction data. This involves querying blockchain nodes or APIs for specific events like token transfers, contract interactions, and first-time wallet activity. For a multi-chain view, you'll need to connect to multiple data sources simultaneously, such as an Ethereum RPC endpoint, a Polygon archive node, and an Arbitrum RPC endpoint. The key challenge is normalizing this data into a consistent format, as different chains and clients (Geth, Erigon, etc.) can return data with varying structures and field names.

The most efficient method is to use specialized data providers that offer a unified API across chains. Services like The Graph (for indexed subgraphs), Covalent, Alchemy's Supernode, and Chainscore's own APIs abstract away the complexity of direct RPC calls. For example, fetching all Transfer events for a specific ERC-20 token on five different EVM chains would require five separate, complex eth_getLogs RPC calls. A unified provider lets you make a single query with a chainIds parameter like [1, 137, 42161, 10, 43114].
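
As a sketch only, such a query might look like the following; the endpoint, route, and parameter names here are hypothetical, so consult your provider's documentation for the real API:

javascript
// Hypothetical unified-API call (endpoint and fields are illustrative)
const res = await fetch('https://api.example-provider.com/v1/events', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', 'X-API-Key': process.env.PROVIDER_KEY },
  body: JSON.stringify({
    contract: TOKEN_ADDRESS,                // your token's address (assumption)
    event: 'Transfer',
    chainIds: [1, 137, 42161, 10, 43114],   // Ethereum, Polygon, Arbitrum, Optimism, Avalanche
  }),
});
const transfers = await res.json();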

When building from scratch with direct RPCs, your code must handle chain-specific nuances. For example, an Ethereum eth_getBlockByNumber response only includes a baseFeePerGas field for post-London blocks, and other chains add, omit, or rename fields, so you'll need to write an adapter for each chain. Furthermore, you must manage request rate limits, error handling for unreliable public RPCs, and the significant data volume—processing just one day of transactions on a busy chain can mean handling millions of events.

Here's a conceptual code snippet using the ethers.js library and a multi-RPC provider configuration to fetch recent transactions for a wallet across two chains. This approach highlights the manual orchestration required.

javascript
import { ethers } from 'ethers';

const providers = {
  ethereum: new ethers.JsonRpcProvider('https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY'),
  polygon: new ethers.JsonRpcProvider('https://polygon-mainnet.g.alchemy.com/v2/YOUR_KEY')
};

async function fetchTransactions(address) {
  const chains = ['ethereum', 'polygon'];
  let allTxs = [];
  
  for (const chain of chains) {
    const provider = providers[chain];
    // Fetch the latest block with prefetched transaction objects
    const blockNumber = await provider.getBlockNumber();
    const block = await provider.getBlock(blockNumber, true);
    
    // In ethers v6, `block.transactions` holds only hashes; the full
    // transaction objects requested above live on `prefetchedTransactions`
    const chainTxs = block.prefetchedTransactions.filter(tx => 
      tx.from.toLowerCase() === address.toLowerCase() || 
      (tx.to && tx.to.toLowerCase() === address.toLowerCase())
    ).map(tx => ({
      ...tx,
      chain: chain.toUpperCase(),
      blockNumber: block.number
    }));
    
    allTxs = allTxs.concat(chainTxs);
  }
  return allTxs;
}

For production dashboards, the simple block-scanning method above is insufficient. You need to listen for real-time events using WebSocket subscriptions (eth_subscribe) or poll for logs within specific block ranges. The data must then be parsed—decoding log topics for standard events like ERC-20 Transfer(address indexed from, address indexed to, uint256 value) using the contract ABI. This parsed data forms the raw dataset for the next step: filtering and classifying transactions to identify genuine user acquisition events, separating them from routine DeFi interactions or bot spam.
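
A sketch of that decoding step with ethers v6, assuming a configured provider plus TOKEN_ADDRESS and startBlock values:

javascript
import { ethers } from 'ethers';

// Decode raw ERC-20 Transfer logs fetched via eth_getLogs
const erc20 = new ethers.Interface([
  'event Transfer(address indexed from, address indexed to, uint256 value)',
]);

const logs = await provider.getLogs({
  address: TOKEN_ADDRESS,
  topics: [ethers.id('Transfer(address,address,uint256)')],
  fromBlock: startBlock,
  toBlock: startBlock + 999, // stay within provider block-range limits
});

for (const log of logs) {
  const parsed = erc20.parseLog(log);
  if (parsed) {
    console.log(`Transfer of ${parsed.args.value} from ${parsed.args.from} to ${parsed.args.to}`);
  }
}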

Ultimately, the goal of this step is to produce a clean, timestamped stream of cross-chain transactions attributed to user addresses. This dataset is the input for the aggregation and analysis layers that will calculate metrics like New Unique Wallets, Transaction Volume per Chain, and Cost of Acquisition. Choosing the right data infrastructure here—whether managed API or self-hosted indexer—directly impacts the dashboard's reliability, speed, and cost.

DATA AGGREGATION

Step 2: Correlating Wallet Addresses Across Chains

This step focuses on the core challenge of linking a single user's activity across multiple blockchain networks using their wallet addresses.

A user's identity in Web3 is not a single profile but a collection of public addresses across different chains. To track user acquisition, you must first correlate these addresses to a single user entity. This process involves identifying deterministic relationships between addresses, primarily through address derivation from a single seed phrase or by analyzing on-chain interactions like bridging and swapping. Without this correlation, activity on Ethereum, Polygon, and Arbitrum appears as separate, unrelated users, severely distorting acquisition metrics.

The most reliable correlation comes from shared key derivation. Wallets like MetaMask derive the account for every EVM chain (Ethereum, BNB Chain, Avalanche C-Chain) from the same seed using the path m/44'/60'/0'/0/0, so a user reuses the identical address across EVM networks; matching the same address across chains links that activity to one user. For non-EVM chains (e.g., Solana, Bitcoin), different derivation standards (like m/44'/501') produce unrelated addresses, so derivation alone cannot link them and you need behavioral analysis. Services like the Chainscore Identity API abstract this complexity by providing unified user profiles.

When derivation paths are unknown, you can infer connections through on-chain behavior analysis. Look for bridging transactions where a user sends assets from Address A on Ethereum to Address B on Arbitrum via a bridge like Arbitrum Bridge or Hop Protocol. A single deposit event on the origin chain and a corresponding mint or claim on the destination chain strongly indicates the same user controls both addresses. Swaps involving canonical bridged assets (like USDC.e on Avalanche) can also reveal these links.

Implementing this requires querying multiple data sources. For derivation, use a library like ethers.js with the HDNodeWallet module. For behavioral analysis, query transaction logs from RPC nodes or indexed data from The Graph. The correlation logic can be structured as a mapping: user_id -> { ethereum: '0x123...', polygon: '0x456...', arbitrum: '0x789...' }. This map becomes the foundation of your dashboard, allowing you to aggregate deposits, transactions, and volume per user, not per chain.
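
A small sketch of both ideas: deriving the default EVM account with ethers' HDNodeWallet (only possible for seeds you control, e.g. in tests) and the per-user address map described above. The mnemonic is a well-known test phrase and user-001 is a hypothetical identifier:

javascript
import { ethers } from 'ethers';

// Well-known Hardhat test mnemonic; never embed a real seed in code
const phrase = 'test test test test test test test test test test test junk';
const wallet = ethers.HDNodeWallet.fromPhrase(phrase); // default path m/44'/60'/0'/0/0

// The same derived address is reused verbatim on every EVM chain
const user = {
  user_id: 'user-001',
  ethereum: wallet.address,
  polygon: wallet.address,
  arbitrum: wallet.address,
  // Solana uses m/44'/501'/... and needs a Solana-specific library
};
console.log(user);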

Accuracy in correlation directly impacts your acquisition cost (CAC) and lifetime value (LTV) calculations. A miscorrelation can double-count users or split a single user's activity, making campaigns seem less efficient. Always validate links with multiple data points, such as correlating timestamps of first transactions across chains or verifying consistent NFT ownership. This step transforms raw, chain-specific data into actionable, user-centric insights for your acquisition dashboard.

ANALYTICS ENGINE

Step 3: Calculating Core Acquisition Metrics

Transform raw on-chain data into actionable insights by calculating the key performance indicators that define user acquisition success.

With your data pipeline established, the next step is to calculate the core metrics that measure user acquisition performance. These metrics move beyond simple transaction counts to reveal the health and efficiency of your growth efforts. The foundational trio for any cross-chain dashboard includes: New Unique Wallets, which tracks first-time interactors with your protocol; Retention Rate, measuring the percentage of users who return after their initial interaction; and Cost Per Acquired User (CPAU), which divides your total acquisition spend by the number of new unique wallets. Calculating these requires joining event data with wallet-level timelines.

New Unique Wallets is more nuanced than a simple count. For a precise calculation, you must filter for a user's first-ever interaction with your protocol's core functions (e.g., first deposit, swap, or mint) within your defined acquisition channels. This often involves a SQL query that uses a ROW_NUMBER() window function partitioned by user_address and ordered by block_timestamp to identify the earliest event. You should segment this metric by source chain and referral source (if tracked) to understand which channels are most effective.
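
A PostgreSQL sketch of that query; protocol_events and its columns are assumed names:

sql
-- First-ever core interaction per wallet, counted by day and source chain
WITH ranked AS (
  SELECT user_address, source_chain, block_timestamp,
         ROW_NUMBER() OVER (PARTITION BY user_address ORDER BY block_timestamp) AS rn
  FROM protocol_events
  WHERE event_name IN ('Deposit', 'Swap', 'Mint')
)
SELECT DATE_TRUNC('day', block_timestamp) AS day,
       source_chain,
       COUNT(*) AS new_unique_wallets
FROM ranked
WHERE rn = 1
GROUP BY 1, 2
ORDER BY 1;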

Calculating Retention Rate requires constructing a user cohort based on their first interaction date and then checking for subsequent activity within a defined period (e.g., 7, 30, or 90 days). The formula is: (Users with ≥2 interactions in period / Total new users in cohort) * 100. This metric is critical for distinguishing between one-time "airdrop farmers" and genuinely engaged users. A low retention rate may indicate a product-market fit issue or that your acquisition channels are targeting the wrong audience.
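
Against the same assumed protocol_events table, a 30-day retention query per weekly cohort might look like:

sql
-- Share of each weekly cohort that transacted again within 30 days
WITH first_seen AS (
  SELECT user_address, MIN(block_timestamp) AS first_ts
  FROM protocol_events
  GROUP BY user_address
),
returned AS (
  SELECT DISTINCT f.user_address
  FROM first_seen f
  JOIN protocol_events e
    ON e.user_address = f.user_address
   AND e.block_timestamp > f.first_ts
   AND e.block_timestamp <= f.first_ts + INTERVAL '30 days'
)
SELECT DATE_TRUNC('week', f.first_ts) AS cohort_week,
       100.0 * COUNT(r.user_address) / COUNT(*) AS retention_30d_pct
FROM first_seen f
LEFT JOIN returned r ON r.user_address = f.user_address
GROUP BY 1
ORDER BY 1;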

Cost Per Acquired User (CPAU) bridges on-chain activity with off-chain spend. To calculate it, you need your marketing expenditure data (e.g., from a CRM or payment tracker) segmented by campaign and chain. The formula is: Campaign Spend on Chain X / New Unique Wallets from Chain X. For example, if you spent 5 ETH on a Galxe campaign on Arbitrum and acquired 250 new unique wallets from that campaign, your CPAU is 0.02 ETH. This metric is essential for optimizing your marketing budget across different chains and initiatives.

For accurate calculations, ensure your data model includes deduplicated wallet addresses (using LOWER() in SQL) and accounts for contract deployments that may generate new addresses. Tools like Dune Analytics, Flipside Crypto, or a custom ClickHouse setup are commonly used for these complex aggregations. Always document your metric definitions (e.g., "New User = first deposit > $10") to ensure consistency across reports and team members.

DATA VISUALIZATION

Step 4: Building the Visualization Dashboard

This guide walks through creating a dashboard to visualize cross-chain user acquisition data, turning raw on-chain metrics into actionable business intelligence.

A well-designed dashboard transforms raw data from your data warehouse into clear, actionable insights. For tracking cross-chain user acquisition, your dashboard should surface key performance indicators (KPIs) like Daily Active Users (DAU), New Unique Wallets, Retention Rates, and Transaction Volume segmented by source chain. Tools like Grafana, Metabase, or Superset are ideal for connecting directly to your PostgreSQL or ClickHouse database. Start by defining the core questions your team needs answered daily, such as "Which bridge is driving the most valuable users?" or "What is the cost of acquisition per chain?"

Your dashboard's architecture should reflect the data model built in the previous steps. Create a primary view that aggregates the user_journey_facts table, using the dim_user and dim_chain dimensions for filtering. For example, a time-series chart of new users could use a SQL query like:

sql
SELECT date, source_chain, COUNT(DISTINCT user_id) as new_users
FROM user_journey_facts
WHERE is_first_interaction = true
GROUP BY date, source_chain;

This allows you to visualize adoption trends over time and immediately identify which chains are growth leaders or laggards.

To analyze user quality and retention, build cohort analysis charts. Segment users by their acquisition week and source chain, then track their activity over subsequent weeks. This reveals whether users from a specific bridge (like LayerZero or Wormhole) are making repeat transactions or are one-time interactors. Funnel visualization is another critical component. Map the journey from a user's first bridge interaction to their first swap or deposit in your dApp, identifying drop-off points between chains or protocols.

Finally, ensure your dashboard is interactive and accessible. Implement chain-specific filters, date range selectors, and the ability to drill down from an aggregate KPI to the individual transaction level. Set up automated alerts for anomalies, such as a sudden drop in new users from a major chain, which could indicate a bridge outage or a competitor's campaign. By centralizing these visualizations, your growth, product, and engineering teams can make data-driven decisions to optimize acquisition strategy and resource allocation across the multi-chain ecosystem.

TROUBLESHOOTING

Frequently Asked Questions

Common technical questions and solutions for building a cross-chain user acquisition dashboard using Chainscore's APIs and data infrastructure.

What data points should a cross-chain user acquisition dashboard track?

A robust dashboard should track user acquisition across multiple blockchains. Core data points include:

  • New Unique Wallets: Distinct addresses interacting with your protocol for the first time, segmented by source chain.
  • Source Chain Attribution: Identifying which chain (e.g., Ethereum, Arbitrum, Polygon) a user originated from.
  • On-Chain Actions: Specific interactions like token swaps, NFT mints, or contract deployments that signal engagement.
  • Deposit/Bridge Volume: The amount of capital bridged into your protocol's native chain.
  • Retention Metrics: Whether a user returns for a second interaction within a defined period (e.g., 7, 30 days).

Chainscore's unified API provides this cross-chain data, normalizing address formats and transaction semantics from over 20 networks into a single query.

IMPLEMENTATION SUMMARY

Conclusion and Next Steps

You have now built a functional dashboard to track user acquisition across multiple blockchains. This guide covered the core components: data sourcing, processing, and visualization.

Your dashboard should now be operational, pulling data from sources like The Graph for on-chain activity and Dune Analytics for aggregated metrics. The key to maintaining its value is data freshness. Set up automated cron jobs or serverless functions (e.g., using AWS Lambda or GCP Cloud Functions) to run your data ingestion scripts at regular intervals. For production, consider implementing a message queue like RabbitMQ or Apache Kafka to handle data streams reliably and avoid losing events during chain reorgs or API outages.
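
A minimal scheduling sketch with the node-cron package (one option among many; ingestChain stands in for the Step 1 fetcher):

javascript
import cron from 'node-cron';

// Refresh ingestion every 10 minutes; ingestChain is assumed from Step 1
cron.schedule('*/10 * * * *', async () => {
  for (const chain of ['ethereum', 'polygon', 'arbitrum']) {
    try {
      await ingestChain(chain);
    } catch (err) {
      console.error(`Ingestion failed for ${chain}:`, err);
    }
  }
});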

To extend your analysis, integrate additional data layers. Consider adding wallet profiling via services like Arkham or Nansen to segment users by behavior (e.g., whales, degens, NFT collectors). Incorporate gas fee data from Etherscan or Blocknative to understand the cost of acquisition. For deeper engagement tracking, you can set up smart contract event listeners for specific protocol interactions, moving beyond simple token transfers to measure actions like providing liquidity or staking.

The final step is to operationalize your insights. Use the dashboard to inform growth loops. For example, if you identify that users bridging from Arbitrum have a high lifetime value, you could allocate more resources to liquidity mining campaigns on that chain. Connect your dashboard to notification services like Slack or Discord using webhooks to alert your team of significant spikes or drops in acquisition from key chains. Regularly audit your data pipelines and query performance as chain activity scales.

For further learning, explore advanced topics like building predictive models for user churn using historical on-chain data, or implementing cross-chain attribution to track a user's full journey. The Chainlink Functions documentation provides patterns for secure off-chain computation, while Covalent's Unified API offers a robust alternative for multi-chain data fetching. Continuously test your assumptions and iterate on your metrics to keep your acquisition strategy data-driven and effective.
