
Launching a Real-Time Analytics Dashboard for Blockchain Supply Chain Data

A developer tutorial for building a dashboard to monitor pharmaceutical supply chain events on-chain. Covers data indexing, streaming to a database, and creating visualizations for operational and compliance metrics.
OVERVIEW

Introduction

A guide to building a real-time analytics dashboard for blockchain-based supply chain data.

Blockchain technology is transforming supply chain management by providing an immutable, transparent ledger for tracking goods from origin to consumer. A real-time analytics dashboard is the critical interface that allows stakeholders to monitor this data, verify authenticity, and optimize logistics. Unlike traditional dashboards that rely on delayed batch updates, a blockchain-native dashboard consumes live events directly from smart contracts and decentralized data sources, enabling immediate insights into inventory levels, shipment status, and compliance checks.

This guide focuses on the technical architecture for building such a system. We will cover how to ingest and process live data from chains like Ethereum, Polygon, and Solana, which are commonly used for supply chain applications. You will learn to use tools like The Graph for indexing historical events and Chainlink oracles for bringing real-world data on-chain. The core challenge is designing a backend that can handle the high throughput and finality delays of blockchain data while presenting a coherent, real-time view to end-users.

The value proposition is significant. For a logistics manager, this could mean seeing a pallet's temperature and location update instantly as it moves, with each data point cryptographically verified on-chain. For a consumer, it could be scanning a QR code to see a product's full provenance history. We will implement key features like event streaming, data aggregation, and alerting based on smart contract state changes, moving from theoretical concepts to a working prototype.

FOUNDATION

Prerequisites and Architecture Overview

Before building a real-time analytics dashboard for blockchain supply chain data, you need the right tools and a clear architectural blueprint. This section outlines the essential prerequisites and the core system design.

The foundation of this dashboard is a robust data pipeline. You will need a blockchain node or a node provider API (like Alchemy, Infura, or QuickNode) to access on-chain data. For Ethereum-based supply chains, an archive node is often required to query historical events. Your development environment should include Node.js (v18+), a package manager like npm or yarn, and a code editor. Familiarity with TypeScript, React (or a similar frontend framework), and SQL for database queries is essential.

The architecture follows a modular, event-driven pattern. At its core, a listener service (often written in Node.js) subscribes to on-chain events from your smart contracts, such as ProductMinted, ShipmentUpdated, or OwnershipTransferred. This service parses the event logs and pushes the structured data to a message queue like Apache Kafka or RabbitMQ. This decouples data ingestion from processing, ensuring the system can handle high-throughput event streams without data loss.
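
As a minimal sketch of that listener, the example below subscribes to a hypothetical ShipmentUpdated event with ethers.js over WebSockets and publishes each event to Kafka with kafkajs. The contract address, ABI fragment, and event signature are placeholders for your own contract.

typescript
import { Contract, WebSocketProvider } from "ethers";
import { Kafka } from "kafkajs";

// Hypothetical event signature -- replace with your contract's actual ABI
const ABI = [
  "event ShipmentUpdated(bytes32 indexed shipmentId, string status, uint256 timestamp)",
];

const provider = new WebSocketProvider(process.env.RPC_WSS_URL!);
const contract = new Contract(process.env.SUPPLY_CHAIN_ADDRESS!, ABI, provider);

const kafka = new Kafka({ clientId: "chain-listener", brokers: ["localhost:9092"] });
const producer = kafka.producer();

async function main() {
  await producer.connect();

  // Push every ShipmentUpdated event onto the queue for downstream processing
  contract.on("ShipmentUpdated", async (shipmentId, status, timestamp, event) => {
    await producer.send({
      topic: "supplychain-events",
      messages: [
        {
          key: shipmentId,
          value: JSON.stringify({
            eventType: "ShipmentUpdated",
            shipmentId,
            status,
            blockTimestamp: Number(timestamp),
            blockNumber: event.log.blockNumber,
            txHash: event.log.transactionHash,
          }),
        },
      ],
    });
  });
}

main().catch(console.error);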

From the message queue, a stream processor (using a framework like Apache Flink or a simple Node.js worker) consumes the messages, performs any necessary transformations or enrichments, and writes the data to a time-series database. TimescaleDB (PostgreSQL extension) or InfluxDB are optimal choices for storing timestamped supply chain events, enabling efficient queries for time-windowed analytics, like "average shipment duration last week."
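
A sketch of the corresponding storage setup, assuming TimescaleDB and the node-postgres client; the table and column names are illustrative and should match whatever your stream processor writes.

typescript
import { Client } from "pg";

// Creates an events table and converts it into a TimescaleDB hypertable
// partitioned on the event timestamp for efficient time-range queries.
async function initSchema() {
  const db = new Client({ connectionString: process.env.DATABASE_URL });
  await db.connect();

  await db.query(`
    CREATE TABLE IF NOT EXISTS supply_chain_events (
      event_time   TIMESTAMPTZ NOT NULL,
      event_type   TEXT        NOT NULL,
      shipment_id  TEXT        NOT NULL,
      payload      JSONB,
      block_number BIGINT,
      tx_hash      TEXT
    );
  `);

  // create_hypertable is idempotent when if_not_exists => TRUE
  await db.query(
    `SELECT create_hypertable('supply_chain_events', 'event_time', if_not_exists => TRUE);`
  );

  await db.end();
}

initSchema().catch(console.error);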

The processed data is then served to the frontend dashboard via a GraphQL API (using Apollo Server or Hasura) or a REST API. This API layer provides the flexibility to fetch aggregated metrics, filtered event histories, and real-time subscriptions. The frontend, built with React and a charting library like Recharts or D3.js, connects to this API to visualize metrics such as shipment status distribution, carbon footprint per leg, and inventory levels across nodes.
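
As an illustration of that API layer, here is a minimal GraphQL endpoint built with Apollo Server over the same PostgreSQL database. The schema and resolver are deliberately small and the table name is an assumption; a real API would expose aggregations, filters, and subscriptions.

typescript
import { ApolloServer } from "@apollo/server";
import { startStandaloneServer } from "@apollo/server/standalone";
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Illustrative schema: one query returning recent shipment events
const typeDefs = `#graphql
  type ShipmentEvent {
    shipmentId: String!
    eventType: String!
    eventTime: String!
  }
  type Query {
    recentEvents(limit: Int = 50): [ShipmentEvent!]!
  }
`;

const resolvers = {
  Query: {
    recentEvents: async (_: unknown, { limit }: { limit: number }) => {
      const { rows } = await pool.query(
        `SELECT shipment_id, event_type, event_time
         FROM supply_chain_events
         ORDER BY event_time DESC
         LIMIT $1`,
        [limit]
      );
      return rows.map((r) => ({
        shipmentId: r.shipment_id,
        eventType: r.event_type,
        eventTime: r.event_time.toISOString(),
      }));
    },
  },
};

const server = new ApolloServer({ typeDefs, resolvers });
startStandaloneServer(server, { listen: { port: 4000 } }).then(({ url }) =>
  console.log(`GraphQL API ready at ${url}`)
);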

For a production deployment, you must integrate monitoring and alerting. Use Prometheus and Grafana to track the health of your listener service, database query performance, and API latency. Set up alerts for critical failures, like the event listener falling behind the blockchain head block. Finally, consider data retention policies and archival strategies for your time-series database to manage storage costs as the dataset grows.
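
One way to expose the listener's lag as a Prometheus metric, sketched with the prom-client library and an Express /metrics endpoint. The metric name and the way lastProcessedBlock is tracked are assumptions you would adapt to your listener service.

typescript
import express from "express";
import client from "prom-client";
import { JsonRpcProvider } from "ethers";

const provider = new JsonRpcProvider(process.env.RPC_HTTP_URL);

// Gauge tracking how many blocks the listener is behind the chain head
const blockLag = new client.Gauge({
  name: "listener_block_lag",
  help: "Blocks between the chain head and the last block processed by the listener",
});

let lastProcessedBlock = 0; // updated elsewhere by your listener service

setInterval(async () => {
  const head = await provider.getBlockNumber();
  blockLag.set(Math.max(0, head - lastProcessedBlock));
}, 15_000);

const app = express();
app.get("/metrics", async (_req, res) => {
  res.set("Content-Type", client.register.contentType);
  res.end(await client.register.metrics());
});
app.listen(9464);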

DATA PIPELINE

Step 1: Define and Deploy a Subgraph for Event Indexing

This step creates the foundational data layer for your dashboard by using The Graph protocol to index and query on-chain supply chain events in real-time.

A subgraph is an open API that indexes blockchain data according to a predefined schema and event-handling logic, defined in its manifest and mapping files. For a supply chain dashboard, you will define a data schema that models entities like Shipment, Product, and Checkpoint. The subgraph listens for specific events from your smart contracts (e.g., ProductMinted, StatusUpdated) and populates these entities with structured data, making them queryable via GraphQL. This process transforms raw, sequential blockchain logs into a searchable database.

Start by defining your subgraph.yaml manifest. This file specifies the smart contract address, network (e.g., Ethereum Mainnet, Polygon), the events to index, and the handlers that process them. For a supply chain contract emitting a ShipmentCreated event, your manifest would map this event to a handler function like handleShipmentCreated. You must also create a GraphQL schema (schema.graphql) that defines your entities and their relationships, such as a Shipment having multiple Checkpoint records.

Next, write your mapping logic in AssemblyScript (a TypeScript-like language) within mapping.ts. This code is executed whenever a subscribed event is emitted. For example, when handleShipmentCreated is triggered, it extracts parameters like shipmentId, origin, and destination from the event log, creates a new Shipment entity, and saves it to The Graph's store. This automated indexing is what enables real-time data availability for your dashboard.
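
A sketch of such a handler follows. It is written in AssemblyScript (syntactically close to TypeScript); the ShipmentCreated and Shipment classes are generated by graph codegen from your contract ABI and schema, so the field names here are assumptions.

typescript
// mapping.ts (AssemblyScript) -- event and entity classes are generated by `graph codegen`
import { ShipmentCreated } from "../generated/SupplyChain/SupplyChain";
import { Shipment } from "../generated/schema";

export function handleShipmentCreated(event: ShipmentCreated): void {
  // Use the event's shipmentId parameter as the entity ID
  let shipment = new Shipment(event.params.shipmentId.toHexString());
  shipment.origin = event.params.origin;
  shipment.destination = event.params.destination;
  shipment.status = "CREATED";
  shipment.createdAt = event.block.timestamp;
  shipment.save();
  // Checkpoint entities would be created by other handlers (e.g., handleStatusUpdated)
}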

To deploy, use the Graph CLI. After installing with npm install -g @graphprotocol/graph-cli, authenticate and run graph deploy --node <deployment-node> --ipfs <ipfs-node> --version-label v0.0.1. You can deploy to a hosted service or a decentralized network. Once deployed, your subgraph will begin syncing, scanning the blockchain from the defined start block to index historical data and then listening for new events. The sync status and any errors can be monitored via The Graph Explorer dashboard.

The final output is a dedicated GraphQL endpoint (API) for your indexed data. Your analytics dashboard can then query this endpoint to fetch filtered, aggregated, or real-time data without directly interacting with a blockchain node. For instance, a query to get all shipments with a DELAYED status in the last 24 hours becomes a simple, fast GraphQL operation. This setup forms the core of a performant, real-time data pipeline for blockchain analytics.
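
For example, the delayed-shipments query might look like the sketch below, sent with graphql-request from Node.js; the status value and the updatedAt filter field are assumptions about your schema.

typescript
import { request, gql } from "graphql-request";

const SUBGRAPH_URL = "https://api.thegraph.com/subgraphs/name/your-org/supply-chain";

// Shipments that entered DELAYED status in the last 24 hours
const DELAYED_SHIPMENTS = gql`
  query DelayedShipments($since: BigInt!) {
    shipments(
      where: { status: "DELAYED", updatedAt_gt: $since }
      orderBy: updatedAt
      orderDirection: desc
    ) {
      id
      trackingId
      origin
      destination
      updatedAt
    }
  }
`;

const since = Math.floor(Date.now() / 1000) - 24 * 60 * 60;
request(SUBGRAPH_URL, DELAYED_SHIPMENTS, { since: String(since) }).then((data) =>
  console.log(data)
);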

DATA PIPELINE

Step 2: Stream Subgraph Data to a Time-Series Database

Learn how to capture real-time blockchain events from The Graph and store them in a structured database for complex analytics.

A subgraph provides a real-time stream of blockchain events, but for historical analysis, trend visualization, and complex aggregations, you need a time-series database. While The Graph's GraphQL API is excellent for querying the current state, it's not optimized for time-window queries, large-scale historical joins, or feeding business intelligence tools. A time-series database like TimescaleDB (PostgreSQL extension), InfluxDB, or QuestDB is designed for this workload, storing each event with a precise timestamp for efficient time-range queries and rollups.

To build the streaming pipeline, you need a service that polls the subgraph for new data and inserts it into your database. A common pattern is to use a Node.js or Python script with a GraphQL client (like Apollo Client or graphql-request). The script queries the subgraph at regular intervals (e.g., every 15 seconds) for entities created or updated since the last poll, using a blockTimestamp or blockNumber filter. For a supply chain subgraph, you would query for new ShipmentCreated, StatusUpdated, or LocationRecorded events.

Here is a simplified Node.js example using graphql-request to fetch recent shipments and insert them into a PostgreSQL/TimescaleDB table:

javascript
const { request, gql } = require('graphql-request');
const { Client } = require('pg');

const SUBGRAPH_URL = 'https://api.thegraph.com/subgraphs/name/your-org/supply-chain';
const dbClient = new Client({ connectionString: process.env.DATABASE_URL });

const query = gql`
  query GetRecentShipments($lastTimestamp: BigInt!) {
    shipments(where: { createdAt_gt: $lastTimestamp }, first: 1000) {
      id
      trackingId
      createdAt
      status
      from
      to
    }
  }
`;

async function syncShipments() {
  await dbClient.connect();
  const lastTimestamp = await getLastStoredTimestamp(dbClient); // Your logic
  const data = await request(SUBGRAPH_URL, query, { lastTimestamp });

  for (const shipment of data.shipments) {
    await dbClient.query(
      `INSERT INTO shipments(id, tracking_id, created_at, status, from_address, to_address)
       VALUES($1, $2, to_timestamp($3), $4, $5, $6)
       ON CONFLICT (id) DO UPDATE SET status = $4`,
      [shipment.id, shipment.trackingId, shipment.createdAt, shipment.status, shipment.from, shipment.to]
    );
  }
  await dbClient.end();
}

// Run once here; in production, schedule syncShipments on an interval (e.g., every 15 seconds).
syncShipments().catch(console.error);

For production resilience, implement idempotency and error handling. Use the blockchain's blockNumber as a checkpoint to ensure no events are missed if the service restarts. Consider using a message queue (like RabbitMQ or Apache Kafka) to decouple the polling service from the database writer, allowing for buffering and parallel processing. This is crucial when dealing with high-throughput chains or during blockchain reorgs, where you may need to revert and re-process a range of blocks.
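
A minimal sketch of that checkpointing logic, assuming a small sync_state table alongside the shipments table; this is one possible implementation of the getLastStoredTimestamp helper referenced in the snippet above.

typescript
import { Client } from "pg";

// Single-row table holding the last synced timestamp and block number:
// CREATE TABLE IF NOT EXISTS sync_state (id INT PRIMARY KEY, last_timestamp BIGINT, last_block BIGINT);

export async function getLastStoredTimestamp(db: Client): Promise<string> {
  const { rows } = await db.query(`SELECT last_timestamp FROM sync_state WHERE id = 1`);
  // Default to 0 on the first run so the whole history is backfilled
  return rows.length ? String(rows[0].last_timestamp) : "0";
}

// Call after each successful batch so a restart resumes from the last checkpoint
export async function saveCheckpoint(db: Client, timestamp: string, block: number): Promise<void> {
  await db.query(
    `INSERT INTO sync_state (id, last_timestamp, last_block)
     VALUES (1, $1, $2)
     ON CONFLICT (id) DO UPDATE SET last_timestamp = $1, last_block = $2`,
    [timestamp, block]
  );
}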

Once your data is flowing into the time-series database, you unlock powerful analytics. You can write SQL queries to calculate average shipment duration, status transition frequencies, or geographic movement patterns. You can connect visualization tools like Grafana or Metabase directly to your database to build dashboards that show real-time KPIs, such as 'Shipments Delivered Today' or 'Average Temperature by Shipping Lane.' This structured historical data layer is the foundation for the real-time dashboard you'll build in the next step.

DATA AGGREGATION & ANALYSIS

Step 3: Calculate Key Performance Indicators (KPIs)

Transform raw on-chain and off-chain supply chain data into actionable business intelligence by calculating core performance metrics.

KPIs are quantifiable metrics that measure the performance and health of your supply chain operations. For a blockchain dashboard, you calculate these by aggregating and processing the event data ingested in the previous step. Core categories include operational efficiency (e.g., order fulfillment time), inventory management (e.g., stock turnover), compliance and provenance (e.g., certified product percentage), and financial performance (e.g., cost per shipment). Each KPI should be tied to a specific smart contract event or a combination of on-chain data and off-chain oracle inputs.

Implement KPI calculations within your data pipeline, not in the frontend, for consistency and performance. Use a stream processing framework like Apache Flink or Apache Spark Structured Streaming to compute metrics in real-time as new blocks are confirmed. For example, to calculate Average Fulfillment Time, your job would listen for OrderShipped and OrderDelivered events, calculate the time delta for each order ID, and maintain a rolling average. Store the results in a time-series database like TimescaleDB or InfluxDB for efficient historical querying and dashboard rendering.

Here is a simplified conceptual code snippet for a streaming job calculating a daily count of shipments:

python
# Sketch of a Spark Structured Streaming job; the topic, field names, and the
# InfluxDB sink are illustrative and should be adapted to your pipeline.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, timestamp_seconds, window
from pyspark.sql.types import LongType, StringType, StructField, StructType

spark = SparkSession.builder.appName("supplychain-kpis").getOrCreate()

# Expected JSON payload of each Kafka message
schema = StructType([
    StructField("eventType", StringType()),
    StructField("shipmentId", StringType()),
    StructField("blockTimestamp", LongType()),  # unix epoch seconds
])

shipment_events_df = spark \
  .readStream \
  .format("kafka") \
  .option("kafka.bootstrap.servers", "localhost:9092") \
  .option("subscribe", "supplychain-events") \
  .load()

# Parse the JSON payload, then keep only ShipmentCreated events
shipments = shipment_events_df \
  .select(from_json(col("value").cast("string"), schema).alias("data")) \
  .select("data.*") \
  .filter(col("eventType") == "ShipmentCreated") \
  .withColumn("eventTime", timestamp_seconds(col("blockTimestamp")))

# Window by day and count, tolerating 10 minutes of event lateness
daily_shipment_count = shipments \
  .withWatermark("eventTime", "10 minutes") \
  .groupBy(window(col("eventTime"), "1 day")) \
  .count()

# Write to your time-series database. "influxdb" is not a built-in Spark sink;
# use a connector package or foreachBatch with an InfluxDB client here.
query = daily_shipment_count.writeStream \
  .outputMode("append") \
  .format("influxdb") \
  .option("checkpointLocation", "/path/to/checkpoint") \
  .start()

query.awaitTermination()

Beyond basic counts and averages, leverage on-chain data verifiability for trust-centric KPIs. A key metric is Provenance Completeness Score, which measures the percentage of a product's journey (from raw material to retail) recorded on-chain with verified attestations. This requires querying multiple smart contracts (e.g., a Registrar for materials, a Batch contract for manufacturing, and a Shipment contract for logistics) and checking for valid cryptographic proofs like zk-SNARKs or signature-based attestations from authorized entities.
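
Once those attestation events are indexed into the analytics database, a first-cut completeness score can be computed with a simple query. The sketch below is illustrative: the attestations table, its columns, and the list of expected stages are assumptions about your data model, and the cryptographic proof verification itself happens upstream in the pipeline.

typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Hypothetical journey stages; adapt to your registrar/batch/shipment contracts
const EXPECTED_STAGES = ["RAW_MATERIAL", "MANUFACTURING", "SHIPPING", "RETAIL"];

// Percentage of expected stages that have at least one verified attestation
export async function provenanceCompleteness(productId: string): Promise<number> {
  const { rows } = await pool.query(
    `SELECT COUNT(DISTINCT stage) AS covered
     FROM attestations
     WHERE product_id = $1
       AND verified = TRUE
       AND stage = ANY($2::text[])`,
    [productId, EXPECTED_STAGES]
  );
  return (Number(rows[0].covered) / EXPECTED_STAGES.length) * 100;
}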

Finally, establish KPI baselines and alerting. Use historical data to set normal operating ranges for metrics like Temperature Excursions for cold chain items or Customs Clearance Delay. Configure your pipeline to trigger alerts—via services like PagerDuty or a simple webhook—when a KPI deviates beyond a threshold. This transforms your dashboard from a passive reporting tool into an active monitoring system, enabling rapid response to supply chain disruptions, fraud, or compliance issues directly indicated by immutable blockchain records.

IMPLEMENTATION

Build the Frontend Dashboard

This step connects your smart contract and data pipeline to a user interface, visualizing real-time supply chain events on a live dashboard.

The frontend dashboard serves as the primary interface for supply chain stakeholders to monitor product provenance and events. For a blockchain supply chain, this requires a web application that can connect to a user's wallet (like MetaMask), listen for on-chain events, and display data from your off-chain indexing service. A modern React or Next.js application is a common choice, using libraries like ethers.js or viem for blockchain interaction and TanStack Query or SWR for efficient data fetching from your API. The core architectural challenge is synchronizing real-time on-chain events with the enriched off-chain data stored in your database.

Start by setting up a basic Next.js project with TypeScript for type safety. Install the essential dependencies: npm install ethers @tanstack/react-query axios. Create a context or provider to manage the user's wallet connection state using the wagmi library, which simplifies handling account and network changes. Your dashboard's main component should fetch the initial list of products or shipments from your backend API endpoint (e.g., GET /api/products). Display this data in a table, with each row containing key identifiers like a Product ID and a link to a detailed view.

The detailed product view is where real-time analytics come to life. This page should do two things in parallel. First, it queries your backend for the complete event history and enriched data for the specific product ID. Second, it establishes a live subscription to new on-chain events. You can implement this with TanStack Query's useQuery for the initial fetch and a live update channel for new events, such as a WebSocket or GraphQL subscription to your backend, or polling of the blockchain RPC, as sketched below. When a new ProductShipped or QualityCheckRecorded event is detected, the UI should update immediately, appending the new event to a timeline or updating a status badge.
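
A condensed sketch of that pattern, using TanStack Query for the initial fetch and a plain WebSocket for live updates; the API route, WebSocket URL, and event shape are assumptions about your backend.

typescript
import { useEffect, useState } from "react";
import { useQuery } from "@tanstack/react-query";
import axios from "axios";

type ChainEvent = {
  id: string;
  eventType: string;
  txHash: string;
  blockTimestamp: number;
};

export function ProductTimeline({ productId }: { productId: string }) {
  // Initial event history from the backend API (hypothetical route)
  const { data: history = [] } = useQuery({
    queryKey: ["product-events", productId],
    queryFn: async () =>
      (await axios.get<ChainEvent[]>(`/api/products/${productId}/events`)).data,
  });

  // Live events pushed by the backend over WebSocket (hypothetical endpoint)
  const [liveEvents, setLiveEvents] = useState<ChainEvent[]>([]);
  useEffect(() => {
    const ws = new WebSocket(`wss://example.com/ws/products/${productId}`);
    ws.onmessage = (msg) => {
      const event: ChainEvent = JSON.parse(msg.data);
      setLiveEvents((prev) => [...prev, event]);
    };
    return () => ws.close();
  }, [productId]);

  const events = [...history, ...liveEvents];

  return (
    <ol>
      {events.map((e) => (
        <li key={e.id}>
          {e.eventType} at {new Date(e.blockTimestamp * 1000).toISOString()}{" "}
          <a href={`https://etherscan.io/tx/${e.txHash}`}>view on Etherscan</a>
        </li>
      ))}
    </ol>
  );
}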

For the visualization itself, consider components that clearly communicate the supply chain journey. A vertical timeline is effective for showing the sequence of events (Manufactured → Shipped → Received). Use data cards to display immutable on-chain data (timestamp, transaction hash, block number) alongside the enriched off-chain data (warehouse photos, PDF certificates, temperature logs). Integrate a map component (like Leaflet or Google Maps) to plot geographic checkpoints if your data includes location coordinates. All on-chain transaction hashes should be hyperlinked to a block explorer like Etherscan for independent verification.

Finally, implement critical frontend logic for data integrity. Always display the verifying block explorer link for each event. Use your smart contract's ABI to decode complex event parameters client-side if needed. Add a clear indicator showing the data source for each piece of information (e.g., "On-Chain," "Stored Off-Chain"). To complete the user flow, include a simple form that triggers a new transaction, such as a button that calls the recordShipment function on your smart contract, demonstrating the full cycle from data entry to dashboard visualization.

SUPPLY CHAIN KPIs

Key Dashboard Metrics and Calculations

Core metrics to track for blockchain supply chain visibility, their calculation, and data sources.

| Metric | Calculation Formula | Data Source | Update Frequency |
| --- | --- | --- | --- |
| On-Chain Transaction Finality Rate | Confirmed Tx / Total Tx in Period | Node RPC (e.g., Ethereum, Polygon) | < 1 sec |
| Item Provenance Verification Time | Block Timestamp (Verification) - Block Timestamp (Origin) | Smart Contract Events | Per Transaction |
| Average Order Fulfillment Latency | Σ(Delivery Block - Order Block) / # Orders | Order & Shipment Contracts | Daily |
| Supplier Compliance Score | (On-Time Deliveries / Total Deliveries) * 100 | Oracle Data & Smart Contract Logic | Per Shipment |
| Carbon Footprint per Shipment (kg CO2e) | Distance * Mode Emission Factor + On-Chain Tx Footprint | Chainlink Oracles (IoT/Sustainability Data) | Per Leg |
| Real-Time Inventory Accuracy | (On-Chain Inventory Count / Physical Audit Count) * 100 | RFID/IoT Sensors -> Blockchain | Continuous |
| Interoperability Bridge Success Rate | Successful Cross-Chain Msgs / Total Msgs | Wormhole, Axelar, LayerZero Events | Hourly |
| Data Integrity Alert Rate | # of Tamper Alerts / Total State Updates | Guardian Nodes or ZK Proof Verifiers | Real-time |

OPERATIONALIZING INSIGHTS

Step 5: Implement Alerting and Stakeholder Views

Transform raw blockchain data into actionable intelligence by configuring automated alerts and tailored dashboards for different user roles.

A real-time dashboard is only as valuable as its ability to prompt action. The final step is to build an alerting system that monitors key performance indicators (KPIs) and stakeholder-specific views that present relevant data to different users. For a supply chain, critical alerts might include a temperature sensor breach for a refrigerated shipment, a significant delay at a customs checkpoint, or a failed smart contract execution for a payment milestone. These alerts should be configurable with thresholds and trigger notifications via email, Slack, or SMS to the appropriate operations team.

To implement alerting, you need a rules engine that queries your aggregated data. Using a tool like Grafana with its alerting rules, or a dedicated service like PagerDuty, you can define conditions. For example, an SQL query in Grafana might check: SELECT shipment_id FROM aggregated_events WHERE sensor_temperature > 8 AND timestamp > NOW() - INTERVAL '5 minutes'. If this query returns results, an alert is fired. For blockchain-specific events, you might monitor on-chain data for failed transactions or unexpected contract state changes using your indexer's API.
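
The same check can also run as a small standalone worker instead of a Grafana rule. Below is a sketch using node-postgres and a Slack incoming webhook; the table, threshold, and webhook URL are assumptions.

typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const SLACK_WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL!; // Slack incoming-webhook URL

// Alert on any shipment whose sensor breached 8°C within the last 5 minutes
async function checkTemperatureBreaches() {
  const { rows } = await pool.query(`
    SELECT shipment_id, MAX(sensor_temperature) AS max_temp
    FROM aggregated_events
    WHERE sensor_temperature > 8
      AND timestamp > NOW() - INTERVAL '5 minutes'
    GROUP BY shipment_id
  `);

  for (const row of rows) {
    await fetch(SLACK_WEBHOOK_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        text: `Temperature breach on shipment ${row.shipment_id}: ${row.max_temp}°C`,
      }),
    });
  }
}

setInterval(() => checkTemperatureBreaches().catch(console.error), 60_000);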

Different stakeholders require different data lenses. A logistics manager needs a view focused on geolocation, ETAs, and exception reports. A finance officer needs to see invoice settlements, payment confirmations on-chain, and working capital metrics. A compliance auditor requires an immutable log of all custody transfers and sensor data for regulatory proof. Build these views as separate dashboards or filtered perspectives within your main application, ensuring each user only sees the data necessary for their role, improving clarity and security.

For the frontend, use a framework like React or Vue.js to create modular dashboard components. Fetch data from your backend API, which queries the time-series and graph databases. Implement real-time updates using WebSockets or Server-Sent Events (SSE) to push new alerts and data points to the UI without requiring a page refresh. This ensures that a warehouse manager sees a temperature alert the moment it occurs on the blockchain, enabling immediate intervention to preserve shipment integrity.
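
A minimal Server-Sent Events sketch with Express: the backend keeps a set of open connections and pushes each new alert to all of them. The /alerts/stream route and the alert shape are illustrative; browsers would subscribe with EventSource('/alerts/stream').

typescript
import express from "express";
import type { Response } from "express";

const app = express();
const clients = new Set<Response>();

// Long-lived SSE connection per dashboard client
app.get("/alerts/stream", (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");
  res.flushHeaders();

  clients.add(res);
  req.on("close", () => clients.delete(res));
});

// Called by the alerting pipeline whenever a new alert is produced
export function pushAlert(alert: { shipmentId: string; message: string }) {
  const payload = `data: ${JSON.stringify(alert)}\n\n`;
  for (const res of clients) res.write(payload);
}

app.listen(3001);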

Finally, document the alert workflows and dashboard access protocols. Establish a runbook detailing how to respond to each alert type and ensure stakeholder training. The system's value is realized when a temperature alert leads to a warehouse check that saves a shipment, or when a finance dashboard instantly confirms a letter of credit payment via smart contract, reducing days of manual reconciliation. This step closes the loop, turning data visibility into operational resilience and stakeholder trust.

DEVELOPER FAQ

Frequently Asked Questions

Common technical questions and troubleshooting for building a real-time analytics dashboard on blockchain supply chain data.

How do I reliably access real-time on-chain data for the dashboard?

The most reliable method is to use a combination of RPC providers and indexing services. For raw, low-latency data, connect to an RPC node from providers like Alchemy, Infura, or QuickNode. For complex event filtering and historical queries, use a specialized indexing protocol like The Graph (for historical subgraphs) or Covalent (for unified APIs).

Key steps:

  1. Identify the smart contracts for your supply chain (e.g., shipment tracking, inventory NFTs).
  2. Subscribe to specific event logs using WebSocket connections from your RPC provider for real-time updates.
  3. Use an indexing service to query aggregated data, such as total shipments per day or product provenance history, which would be inefficient to compute directly from raw chain data.
IMPLEMENTATION SUMMARY

Conclusion and Next Steps

You have now built a functional real-time analytics dashboard for blockchain supply chain data. This guide covered the core components: data ingestion, processing, and visualization.

Your dashboard connects to a blockchain node or indexer like The Graph, processes events from smart contracts (e.g., ERC-1155 for batches, ERC-721 for assets), and visualizes key metrics such as shipment status, location updates, and temperature logs. The architecture you implemented uses a backend service (like Node.js with WebSocket listeners) to stream data into a time-series database (TimescaleDB or InfluxDB) and a frontend framework (React with Recharts or D3.js) to display live charts and tables. This setup provides stakeholders with immediate visibility into the supply chain's state.

To enhance your system, consider these next steps. First, implement data validation and anomaly detection by setting thresholds for sensor readings (e.g., temperature outside 2-8°C for pharmaceuticals) and triggering alerts via services like PagerDuty or Telegram bots. Second, add historical analysis by running batch queries to calculate trends, such as average transit times between checkpoints or identifying frequent delay points. Third, explore privacy-preserving techniques like zero-knowledge proofs (using circuits from circom) to verify compliance (e.g., proof of organic certification) without exposing raw data.

For production deployment, focus on scalability and reliability. Use a message queue (Apache Kafka or RabbitMQ) to decouple data ingestion from processing, ensuring no events are lost during high load. Containerize your services with Docker and orchestrate them using Kubernetes for easy scaling. Implement robust monitoring with Prometheus and Grafana to track dashboard performance, API latency, and database health. Finally, secure your API endpoints with authentication (JWT tokens) and consider using a decentralized storage solution like IPFS or Arweave for audit trail immutability beyond the primary blockchain.
