
How to Implement a Unified Asset Dashboard Across Chains

This guide provides a technical walkthrough for building a dashboard that fetches and displays a user's token balances, NFT holdings, and DeFi positions from multiple blockchains.
Chainscore © 2026
INTRODUCTION

A technical guide to building a single interface for tracking and managing digital assets across multiple blockchains.

A unified asset dashboard aggregates a user's portfolio—including tokens, NFTs, and DeFi positions—from disparate blockchain networks into a single, coherent view. This solves a critical user experience problem in Web3, where assets are fragmented across chains like Ethereum, Solana, Arbitrum, and Polygon. Developers face the challenge of querying multiple, often incompatible, data sources—RPC nodes, indexers, and subgraphs—to compile a complete financial picture. The goal is to provide real-time, accurate data on balances, values, and transaction history without requiring users to manually switch between network-specific explorers or wallets.

The core technical architecture relies on two main components: a data aggregation layer and a normalization engine. The aggregation layer is responsible for fetching raw data. This involves making concurrent RPC calls using libraries like ethers.js or web3.js for EVM chains, and chain-specific SDKs for others. For indexed data (e.g., historical transactions or complex DeFi states), you would integrate with services like The Graph, Covalent, or Moralis. The normalization engine then processes this heterogeneous data, converting chain-native units (wei, lamports) into human-readable formats, applying consistent token metadata from registries like the Token Lists repository, and calculating aggregate values using real-time price oracles.

Implementing this requires careful state and error management. A robust dashboard must handle partial failures gracefully—if one chain's RPC node is slow, it shouldn't block data from other networks. Implementing caching strategies for static data (like token logos) and rate-limiting requests to public RPC endpoints are essential for performance and reliability. Furthermore, wallet connection libraries such as Wagmi or RainbowKit simplify the process of detecting which chains a user's wallet supports and switching networks programmatically when needed for transactions.
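As a minimal sketch of the graceful-degradation idea above, the per-chain fetches can be run through Promise.allSettled so one slow or failing RPC never blocks the others. The `fetchers` map and its chain names are illustrative, not part of any specific library:

```javascript
// Sketch: fetch per-chain data concurrently, tolerating individual failures.
// `fetchers` is a map of chain name -> async function returning that chain's data.
async function fetchAllChains(fetchers) {
  const names = Object.keys(fetchers);
  const settled = await Promise.allSettled(names.map((name) => fetchers[name]()));
  const data = {};
  const errors = {};
  settled.forEach((result, i) => {
    if (result.status === 'fulfilled') data[names[i]] = result.value;
    else errors[names[i]] = result.reason; // surface the error; don't block other chains
  });
  return { data, errors };
}
```

The UI can then render whatever arrived in `data` immediately and show a per-chain error state for anything in `errors`.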

From a security perspective, the dashboard itself is typically a read-only interface, but it often needs to prepare transaction data. It's crucial that the application never holds private keys. Instead, transaction payloads are constructed client-side and signed by the user's connected wallet (e.g., MetaMask, Phantom). The dashboard should also verify and display data transparently, allowing users to see the source of token prices and the on-chain proofs for their balances, which builds trust in the aggregated information presented.

FOUNDATIONAL KNOWLEDGE

Prerequisites

Before building a unified asset dashboard, you need a solid technical foundation. This section covers the essential concepts and tools required to aggregate and display on-chain data across multiple networks.

A unified asset dashboard requires a deep understanding of blockchain fundamentals. You must be comfortable with concepts like public/private key cryptography, transaction structures, and the role of gas fees and nonces. Familiarity with different consensus mechanisms (Proof-of-Work, Proof-of-Stake) and their implications for finality and speed is crucial for interpreting cross-chain data accurately. This foundational knowledge informs how you query and reconcile information from diverse networks.

Proficiency in smart contract interaction is non-negotiable. Your dashboard will need to read data from token contracts (ERC-20, ERC-721), liquidity pools, and staking contracts. You should understand the Application Binary Interface (ABI) and how to use libraries like ethers.js or web3.js to call functions like balanceOf(address) or decimals(). Experience with Multicall batching to reduce RPC requests is highly recommended for performance.

You need hands-on experience with RPC providers and indexers. Directly querying node RPCs (e.g., via Infura, Alchemy, QuickNode) for raw chain data is one approach. For more complex queries involving historical events or aggregated states, you'll likely use indexing protocols like The Graph (subgraphs) or Covalent Unified API. Understanding the trade-offs between decentralization, speed, and cost for each data source is key to architectural decisions.

Finally, a strong grasp of asynchronous JavaScript/TypeScript and modern front-end frameworks (React, Vue, Svelte) is essential for building the dashboard interface. You will be managing multiple concurrent data-fetching states and updating the UI reactively. Knowledge of state management libraries (Zustand, Redux) and caching strategies is important for creating a smooth user experience that feels unified despite pulling from disparate, slow-moving sources.

SYSTEM ARCHITECTURE

Architecture Overview

A unified asset dashboard aggregates a user's holdings from multiple blockchains into a single interface. This guide outlines the core architectural components and data flow required to build one.

A unified dashboard's primary function is to aggregate and index data from disparate sources. The core architecture consists of three layers: a data ingestion layer (indexers, RPC nodes), a processing and normalization layer (backend services), and a presentation layer (frontend UI). The ingestion layer pulls raw on-chain data via RPC calls or subgraphs. The processing layer standardizes this data—converting token amounts to a common decimal format, fetching current prices from oracles, and calculating total portfolio value. The presentation layer displays the normalized data, often grouping assets by chain or type.

The data ingestion strategy is critical. For optimal performance and reliability, implement a hybrid approach. Use dedicated RPC providers (like Alchemy, Infura, or QuickNode) for real-time balance checks and transaction simulation. For historical data and complex queries (e.g., "all ERC-20 holdings"), leverage blockchain indexers like The Graph or Covalent. This offloads heavy query logic from your backend. Always implement robust error handling and retry logic, as RPC endpoints can be unreliable. Consider using a service like POKT Network or a multi-provider fallback system to ensure uptime.
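The multi-provider fallback mentioned above can be sketched in a few lines: try each endpoint in order and move on when one fails. The `providers` array here holds hypothetical async wrappers around your RPC clients (e.g., one per Alchemy/Infura/QuickNode endpoint); the shape is an assumption, not a specific library's API:

```javascript
// Sketch of a multi-provider fallback: try each RPC endpoint in order,
// advancing to the next one when a call fails.
async function withFallback(providers, request) {
  let lastError;
  for (const provider of providers) {
    try {
      return await provider(request);
    } catch (err) {
      lastError = err; // endpoint unreliable; try the next one
    }
  }
  throw lastError; // all endpoints failed
}
```

In production you would typically add per-attempt timeouts and exponential backoff on top of this loop.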

Data normalization presents significant challenges. You must reconcile different token standards (ERC-20, SPL, BEP-20), decimal conventions (ETH uses 18, USDC uses 6), and chain-native assets (ETH, MATIC, SOL). Create a central TokenRegistry service that maps token contract addresses to canonical metadata (name, symbol, decimals, logo). Use price oracles like Chainlink or CoinGecko's API to fetch real-time USD values. The formula for calculating a holding's value is: (balance / 10^decimals) * current_price. Cache price data aggressively to reduce API calls and cost.
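The TokenRegistry lookup and the (balance / 10^decimals) * current_price formula above can be sketched as follows. The registry entries and addresses here are purely illustrative placeholders:

```javascript
// Sketch: a minimal in-memory TokenRegistry mapping contract address -> metadata.
// The keys below are hypothetical placeholders, not real contract addresses.
const tokenRegistry = {
  '0xTokenA': { symbol: 'USDC', decimals: 6 },
  '0xTokenB': { symbol: 'WETH', decimals: 18 },
};

// (balance / 10^decimals) * current_price.
// Number division is fine for display; use a decimal library for
// accounting-grade precision on very large raw balances.
function holdingValueUsd(address, rawBalance, priceUsd) {
  const { decimals } = tokenRegistry[address];
  const balance = Number(rawBalance) / 10 ** decimals;
  return balance * priceUsd;
}
```

For example, a raw USDC balance of 2500000 with 6 decimals at $1.00 values the holding at $2.50.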

A secure backend must manage user authentication and private data. Never store private keys. Instead, use sign-in with Ethereum (SIWE) or wallet connection via libraries like WalletConnect or Web3Modal. Once authenticated, your backend service will need the user's public addresses for each chain. It will then query the ingestion layer for each address, aggregate the results, apply normalization, and return a unified JSON response to the frontend. All sensitive logic and API keys should remain on the backend.

For the frontend, frameworks like React or Vue.js are common. Use state management (e.g., Redux, Zustand) to handle the portfolio data. Key UI components include: a portfolio summary (total value across chains), chain-specific views, asset lists with balances and values, and transaction history. Libraries like viem and ethers.js are essential for interacting with wallets and performing client-side checks. Always display data in the user's preferred currency and provide clear labels for which network each asset resides on.

Finally, implement continuous data updates. Use WebSocket connections to RPC providers to listen for new blocks on monitored chains. Upon receiving a block update, trigger a refresh for the relevant user addresses. For less time-sensitive data, use polling intervals. Auditing and security are paramount: regularly review the smart contracts of tokens you display, implement rate limiting on your APIs, and consider using a multi-signature wallet for any treasury or fee-collection addresses associated with your dashboard service.

DATA AGGREGATION

Step 1: Fetch Native and Token Balances

The foundation of a cross-chain dashboard is aggregating asset data. This step covers retrieving a user's native currency (e.g., ETH, MATIC) and ERC-20 token balances across multiple networks.

To fetch a user's native balance, you query the blockchain's state using the user's public address. This is typically done via an RPC provider like Alchemy, Infura, or a public node. For Ethereum, the method is eth_getBalance. For other EVM chains like Polygon or Arbitrum, the method is identical, but you must connect to the correct chain's RPC endpoint. The balance is returned in wei, the smallest unit, which you must convert to a human-readable format like ETH.
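A sketch of the eth_getBalance flow, assuming a generic JSON-RPC endpoint: build the request payload, then convert the hex-encoded wei response into an ETH string. Both helpers below are illustrative names:

```javascript
// Sketch: construct the eth_getBalance JSON-RPC payload for a given address.
function getBalancePayload(address) {
  return {
    jsonrpc: '2.0',
    id: 1,
    method: 'eth_getBalance',
    params: [address, 'latest'],
  };
}

// eth_getBalance returns a hex-encoded wei value (1 ETH = 10^18 wei).
// BigInt avoids precision loss on large balances.
function weiHexToEth(hex) {
  const wei = BigInt(hex);
  const whole = wei / 10n ** 18n;
  const frac = (wei % 10n ** 18n).toString().padStart(18, '0');
  return `${whole}.${frac}`.replace(/0+$/, '').replace(/\.$/, '');
}
```

POSTing `getBalancePayload(address)` to an Arbitrum or Polygon RPC endpoint works identically; only the endpoint URL changes.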

Fetching ERC-20 token balances is more complex. You must interact with each token's smart contract using the balanceOf function. This requires the token's contract address and the user's wallet address. To scale this across a portfolio, you need a pre-defined list of popular token contracts per chain. Services like the Token Lists repository or Moralis' token API provide these lists. You then batch these RPC calls using eth_call for efficiency.
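To make the eth_call batching concrete, here is a sketch that hand-encodes the balanceOf(address) calldata; the 4-byte selector for balanceOf(address) is 0x70a08231, and the address argument is ABI-encoded by left-padding to 32 bytes. In practice a library does this for you, but the encoding is worth seeing once:

```javascript
// Sketch: encode balanceOf(address) calldata by hand for use in an eth_call.
function encodeBalanceOf(holder) {
  const selector = '70a08231'; // first 4 bytes of keccak256("balanceOf(address)")
  // ABI-encode the address argument: strip 0x, left-pad to 32 bytes (64 hex chars).
  const arg = holder.toLowerCase().replace(/^0x/, '').padStart(64, '0');
  return '0x' + selector + arg;
}

// One eth_call request for a single token; these are what you batch.
function balanceOfCall(tokenAddress, holder) {
  return {
    jsonrpc: '2.0',
    id: 1,
    method: 'eth_call',
    params: [{ to: tokenAddress, data: encodeBalanceOf(holder) }, 'latest'],
  };
}
```

Many RPC providers accept an array of such objects in a single HTTP request (JSON-RPC batching), which is one way to query a whole token list efficiently.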

A practical implementation involves using a library like ethers.js or viem. You first establish a provider for each target chain. For native balances, you use provider.getBalance(address). For tokens, you create a contract instance with the ABI containing the balanceOf function and call contract.balanceOf(address). Always handle network errors and format the raw balance using the token's decimals property.

The major challenge is completeness. Manually maintained token lists will miss new or obscure tokens. For a production dashboard, consider integrating with specialized indexers. The Graph subgraphs for protocols like Uniswap, or APIs from Covalent, Alchemy, or Chainscore can provide comprehensive, real-time balance data without requiring you to manage RPC calls for thousands of contracts directly.

Finally, you must normalize the data. Balances from different chains and tokens should be converted into a unified format, typically a common fiat value (USD). This requires fetching real-time price feeds from oracles like Chainlink or aggregated price APIs. The output of this step is a structured list: { chainId, assetAddress, assetSymbol, balance, balanceUsd }, ready for display and further aggregation in the dashboard's UI.
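Given records in the { chainId, assetAddress, assetSymbol, balance, balanceUsd } shape described above, the dashboard summary reduces to a simple aggregation. A minimal sketch:

```javascript
// Sketch: roll the normalized per-asset records up into per-chain and
// total USD figures for the dashboard's portfolio summary.
function summarizePortfolio(holdings) {
  const byChain = {};
  let totalUsd = 0;
  for (const h of holdings) {
    byChain[h.chainId] = (byChain[h.chainId] || 0) + h.balanceUsd;
    totalUsd += h.balanceUsd;
  }
  return { totalUsd, byChain };
}
```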

DATA AGGREGATION

Step 2: Query NFT Holdings with The Graph

Learn how to use The Graph's decentralized indexing protocol to programmatically fetch a user's NFT collection data across multiple blockchains.

The Graph is a decentralized protocol for indexing and querying blockchain data using GraphQL. For our asset dashboard, it allows us to fetch a user's NFT holdings without running our own node or parsing raw event logs. We interact with subgraphs, which are open APIs that index specific smart contract data. Popular NFT marketplaces and collections like OpenSea, ENS, and CryptoPunks have public subgraphs, providing a standardized way to query ownership, metadata, and transaction history.

To query a user's NFTs, you first need to identify the correct subgraph endpoint. For Ethereum mainnet NFTs, you might use the OpenSea Shared Storefront subgraph hosted on The Graph's decentralized network. A basic query to get NFT tokens owned by an address looks like this:

graphql
query GetNFTs($owner: String!) {
  tokens(where: { owner: $owner }) {
    id
    tokenID
    owner { id }
    contract { id }
  }
}

You send this query via HTTP POST to the subgraph endpoint, passing the user's wallet address as the $owner variable.
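A sketch of that HTTP POST using fetch; `subgraphUrl` is a placeholder for whichever endpoint you identified above, and `buildNftQueryRequest` is an illustrative helper, not part of any SDK:

```javascript
// The GetNFTs query from above, embedded as a string.
const GET_NFTS = `
  query GetNFTs($owner: String!) {
    tokens(where: { owner: $owner }) {
      id
      tokenID
      owner { id }
      contract { id }
    }
  }
`;

// Sketch: build the fetch options for a GraphQL POST request.
function buildNftQueryRequest(owner) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: GET_NFTS, variables: { owner } }),
  };
}

// Usage (network call):
//   const res = await fetch(subgraphUrl, buildNftQueryRequest(userAddress));
//   const { data, errors } = await res.json();
//   if (errors) throw new Error(errors[0].message);
//   return data.tokens;
```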

A significant challenge is the multi-chain nature of NFTs. You must query a separate subgraph for each blockchain. For Polygon NFTs, use the Polygon-specific OpenSea subgraph. For Arbitrum, Avalanche, or Optimism, you need to find or deploy subgraphs for the NFT contracts on those chains. This requires managing multiple GraphQL endpoints and aggregating the results client-side. Always check the subgraph's schema.graphql file to understand the available entities and fields for constructing accurate queries.

Handling the response data involves normalizing it into a consistent format for your dashboard. Different subgraphs may structure tokenID, metadata, or collection data differently. You'll need to map the GraphQL responses to your application's data model. Furthermore, for complete metadata like images, you often need to resolve a tokenURI from the on-chain contract and fetch the JSON from IPFS or a centralized provider, adding another asynchronous step to your data pipeline.

For production applications, consider query optimization and caching. Fetching all NFTs for a wallet with a large collection can be slow. Use pagination via GraphQL's first and skip arguments. Implement client-side caching of subgraph responses to reduce latency and avoid hitting rate limits. For the most reliable uptime, query subgraphs published to The Graph's decentralized network (the legacy hosted service has been sunset), or use managed indexing providers like Goldsky; Subgraph Studio lets you deploy and test custom subgraphs.
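The first/skip pagination loop can be sketched as below. `runQuery` is a hypothetical function that executes one GraphQL request (e.g., wrapping the fetch call from earlier) and resolves to an array of token entities:

```javascript
// Sketch: page through a subgraph result set using `first` and `skip`,
// stopping when a short page signals the end of the data.
async function fetchAllTokens(runQuery, owner, pageSize = 1000) {
  const all = [];
  for (let skip = 0; ; skip += pageSize) {
    const page = await runQuery({ owner, first: pageSize, skip });
    all.push(...page);
    if (page.length < pageSize) break; // last page reached
  }
  return all;
}
```

Note that The Graph caps how far skip can go; for very large collections, cursor-style pagination on a field like id (using id_gt) is the recommended alternative.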

IMPLEMENTATION

Step 3: Aggregate DeFi Positions

This guide details the technical process of programmatically fetching and consolidating a user's DeFi assets and liabilities across multiple blockchain networks into a single dashboard view.

A unified dashboard requires aggregating data from disparate sources. The core technical challenge is querying multiple blockchain states and smart contracts to compile a user's complete financial picture. This involves fetching on-chain data like token balances from wallets, liquidity provider (LP) positions from Automated Market Makers (AMMs) like Uniswap V3 and Curve, and debt positions from lending protocols such as Aave and Compound. Each protocol and chain has its own interface, requiring a modular data-fetching strategy.

The implementation typically follows a multi-step architecture. First, identify all of the user's addresses across chains (Ethereum, Arbitrum, Polygon, etc.) via a wallet connection or address derivation. Next, for each chain, issue parallel RPC calls to fetch native token balances, and use the relevant contract ABIs to query ERC-20 balances and protocol-specific positions. For example, to get a user's Aave debt, you would call the getUserAccountData function on the Aave Pool contract. Efficient batching of eth_call RPC requests is crucial for performance.

Here is a simplified code snippet using ethers.js and the Multicall pattern to fetch balances for a list of tokens on Ethereum, which can be adapted for any EVM chain:

javascript
import { ethers } from 'ethers';
import { Multicall } from 'ethereum-multicall';

const provider = new ethers.providers.JsonRpcProvider(RPC_URL);
const multicall = new Multicall({ ethersProvider: provider, tryAggregate: true });

const callsContext = tokenAddresses.map(address => ({
  reference: address,
  contractAddress: address,
  abi: [{ name: 'balanceOf', type: 'function', stateMutability: 'view', inputs: [{ name: 'owner', type: 'address' }], outputs: [{ name: '', type: 'uint256' }] }],
  calls: [{ reference: 'balance', methodName: 'balanceOf', methodParameters: [userAddress] }]
}));

const { results } = await multicall.call(callsContext);
// For each token: results[tokenAddress].callsReturnContext[0].returnValues[0] holds the raw balance

After the raw data is collected, it must be normalized and consolidated. This means converting token amounts from raw base units (wei and their equivalents) into human-readable decimals, fetching current prices from an oracle or DEX liquidity pool, and calculating the USD value of each position. For LP tokens, you must query the pool's underlying reserves and the user's share of the LP token supply. The final output is a structured data object—often JSON—that categorizes assets by chain, protocol, and type (e.g., supplied, staked, borrowed), ready for frontend display.
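The LP valuation step can be sketched as a pro-rata calculation: the user's share of each underlying reserve is reserve × lpBalance / lpTotalSupply. The function name and parameter object below are illustrative:

```javascript
// Sketch: value an LP position from the pool's reserves and the user's
// share of LP token supply. Raw quantities use BigInt; the final USD
// conversion uses Number, which is fine for display precision.
function lpPositionValueUsd({ lpBalance, lpTotalSupply, reserves, prices, decimals }) {
  return reserves.reduce((total, reserve, i) => {
    // User's pro-rata share of this underlying reserve.
    const amount = (BigInt(reserve) * BigInt(lpBalance)) / BigInt(lpTotalSupply);
    return total + (Number(amount) / 10 ** decimals[i]) * prices[i];
  }, 0);
}
```

For a two-asset AMM pool this runs over both reserves; the same shape works for Curve-style pools with more assets.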

Key considerations for a production system include handling chain reorganizations, managing RPC rate limits, caching stale data appropriately, and providing clear error states for failed queries. Services like The Graph can simplify querying historical events (e.g., past deposits), but for real-time state, direct contract calls are necessary. The aggregated data forms the foundation for advanced features like cross-margin calculations, risk assessment, and portfolio rebalancing suggestions.

CORE INFRASTRUCTURE

Data Provider Comparison

Comparison of leading providers for fetching on-chain asset data across multiple networks.

Feature / Metric                Chainscore      Covalent        The Graph
Average API Latency (p95)       < 300ms         1-2 sec         2-5 sec
Pricing Model (per 1M calls)    $50-200         $100-400        GRT Query Fees
Supported Chains (Count)        50+             100+            30+

The comparison also covers feature support across providers: Native Multi-Chain Support, Real-Time Balance Updates, Historical Portfolio Snapshots, NFT Metadata & Media, and DeFi Position Data (LP, Staking).

DEVELOPER FAQ

Frequently Asked Questions

Common technical questions and solutions for building a unified asset dashboard that aggregates data across multiple blockchains.

Fetching native token balances (e.g., ETH, MATIC, AVAX) requires querying each blockchain's RPC endpoint. Use the eth_getBalance JSON-RPC method for EVM chains, which returns the balance in wei. For non-EVM chains like Solana or Cosmos, you must use their respective client libraries (e.g., @solana/web3.js).

Key considerations:

  • Rate Limiting: Public RPCs have strict limits; use dedicated node providers (Alchemy, Infura, QuickNode) or rotate endpoints.
  • Unit Conversion: Consistently convert raw balances (wei, lamports) to human-readable decimals using each chain's native token specification.
  • Performance: Batch requests or use multicall contracts on EVM chains to reduce latency. For a dashboard, implement caching to avoid hitting RPCs on every page load.
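The caching point above can be sketched as a minimal TTL cache, so repeated dashboard loads within a short window reuse the last RPC result instead of re-querying. The factory name and key format are illustrative:

```javascript
// Sketch: a minimal TTL cache keyed by arbitrary strings
// (e.g., "chain:address:balance"). Entries expire after ttlMs.
function createTtlCache(ttlMs) {
  const store = new Map();
  return {
    get(key) {
      const entry = store.get(key);
      if (!entry || Date.now() > entry.expires) return undefined; // miss or stale
      return entry.value;
    },
    set(key, value) {
      store.set(key, { value, expires: Date.now() + ttlMs });
    },
  };
}
```

A short TTL (tens of seconds) is usually a reasonable trade-off for balances, while static data like token logos can be cached far longer.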