
Launching a Monitoring Dashboard for Development Networks

A technical guide for developers to implement observability for development and staging blockchain networks using Prometheus, Grafana, and custom exporters.
INTRODUCTION

Introduction

A practical guide to setting up a real-time monitoring dashboard for blockchain development environments like Anvil, Hardhat, and Foundry.

Effective development requires visibility. A monitoring dashboard provides a real-time, visual interface for tracking the health and activity of your local blockchain network. This is critical for debugging smart contract interactions, analyzing transaction flows, and verifying state changes during development. Unlike production block explorers, a dedicated dev dashboard offers granular control, custom metrics, and integration with your specific toolchain, turning raw RPC data into actionable insights.

This guide focuses on integrating with popular development networks. Anvil (from Foundry) and Hardhat Network are the most common choices, each exposing a JSON-RPC endpoint that a dashboard can query. The core setup involves connecting to this endpoint (typically http://localhost:8545) and subscribing to events like newHeads for new blocks or using the eth_getLogs filter to track specific contract events. This real-time data feed forms the backbone of any monitoring solution.
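
As a minimal sketch of that connection, assuming a local node on the default port and ethers v6, you can watch for new blocks like this:

javascript
import { ethers } from 'ethers';

// Connect to the local dev node's JSON-RPC endpoint (Anvil/Hardhat default).
const provider = new ethers.JsonRpcProvider('http://localhost:8545');

// Over HTTP, ethers polls for new blocks and emits a "block" event for each one;
// a WebSocketProvider would use a real newHeads subscription if your node exposes ws.
provider.on('block', async (blockNumber) => {
  const block = await provider.getBlock(blockNumber);
  console.log(`block ${blockNumber}: ${block.transactions.length} txs, gasUsed ${block.gasUsed}`);
});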

You will learn to build a dashboard that displays key metrics: the latest block number, gas prices, pending transactions, and native token balances for your test accounts. We'll also cover tracking ERC-20 transfers and custom smart contract events, which is essential for DeFi or NFT project development. The implementation uses Node.js with libraries like web3.js or ethers.js to interact with the RPC, and a frontend framework like React to display the data.
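
A hedged sketch of polling those metrics with ethers v6, assuming the node exposes its unlocked test accounts via eth_accounts, might look like this:

javascript
import { ethers } from 'ethers';

const provider = new ethers.JsonRpcProvider('http://localhost:8545');

async function snapshot() {
  const [blockNumber, feeData, accounts] = await Promise.all([
    provider.getBlockNumber(),
    provider.getFeeData(),
    provider.send('eth_accounts', []), // unlocked test accounts on Anvil/Hardhat
  ]);
  const balance = await provider.getBalance(accounts[0]);

  return {
    blockNumber,
    gasPriceGwei: feeData.gasPrice ? ethers.formatUnits(feeData.gasPrice, 'gwei') : null,
    firstAccountBalanceEth: ethers.formatEther(balance),
  };
}

snapshot().then(console.log);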

Beyond basic metrics, advanced monitoring includes setting up alerts for failed transactions, tracking gas consumption per contract call, and logging the sequence of contract calls within a test. For example, you can monitor a flash loan transaction's path through multiple protocols on your local fork. These practices help identify inefficiencies and vulnerabilities before deployment to testnets or mainnet.
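
For example, a small sketch of scanning receipts as blocks arrive, flagging reverts and logging gas per call (again assuming ethers v6 against a local node):

javascript
import { ethers } from 'ethers';

const provider = new ethers.JsonRpcProvider('http://localhost:8545');

provider.on('block', async (blockNumber) => {
  const block = await provider.getBlock(blockNumber);
  for (const hash of block.transactions) {
    const receipt = await provider.getTransactionReceipt(hash);
    const target = receipt.to ?? 'contract creation';
    if (receipt.status === 0) {
      console.warn(`FAILED tx ${hash} -> ${target}, gasUsed ${receipt.gasUsed}`);
    } else {
      console.log(`tx ${hash} -> ${target}, gasUsed ${receipt.gasUsed}`);
    }
  }
});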

Finally, we'll discuss operational considerations. This includes securing the dashboard's access in team environments, persisting historical data for test analysis, and integrating the monitoring setup into your CI/CD pipeline. By the end, you'll have a production-grade monitoring tool for your development workflow, significantly improving iteration speed and code reliability.

SETUP CHECKLIST

Prerequisites

Before launching your monitoring dashboard, ensure your development environment meets these technical requirements. This guide covers the essential tools and configurations needed to run Chainscore's observability stack.

To deploy a monitoring dashboard for development networks, you need a working Node.js environment (version 18 or later) and npm or yarn for package management. You'll also need Docker and Docker Compose installed, as the dashboard's backend services, including the metrics collector and database, are containerized. Ensure you have at least 2GB of free RAM and 10GB of disk space available for the containers to run smoothly. For local blockchain interaction, tools like Hardhat, Foundry, or Truffle should be configured on your system.

You must have access to at least one EVM-compatible development network. This could be a local Hardhat node (npx hardhat node), an Anvil instance from Foundry, a Ganache server, or a testnet like Sepolia or Holesky. The dashboard connects via a standard JSON-RPC endpoint. Ensure your node's RPC URL is accessible (e.g., http://localhost:8545) and that you have a funded account with test ETH for submitting transactions that will generate the telemetry data you intend to monitor.

Clone the Chainscore dashboard repository from GitHub (chainscore-labs/chainscore-dashboard) and review the docker-compose.yml and .env.example files. The core configuration involves setting environment variables for your RPC endpoint, specifying the chain ID (e.g., 31337 for Hardhat), and optionally configuring alert thresholds. For advanced setups, you may need to modify the Prometheus scrape configuration or Grafana dashboards, which are provided as code in the config/ directory.

The dashboard stack consists of several components: a Prometheus instance to scrape and store metrics from your node, a Grafana server for visualization, and the Chainscore collector service that translates blockchain data into Prometheus metrics. Understanding this architecture helps with troubleshooting. For example, if no data appears, you would first check if the collector can reach your RPC endpoint, then verify Prometheus is scraping it, and finally ensure Grafana is connected to the correct Prometheus data source.
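
The collector pattern itself is straightforward. The following is not the Chainscore collector's code, just a minimal sketch, assuming ethers v6 and the prom-client library, of how blockchain data gets exposed as Prometheus metrics for scraping:

javascript
import http from 'node:http';
import { ethers } from 'ethers';
import client from 'prom-client';

const provider = new ethers.JsonRpcProvider('http://localhost:8545');

// Gauge tracking the latest block number seen on the dev network
// (the metric name here is illustrative, not Chainscore's).
const blockHeight = new client.Gauge({
  name: 'devnet_block_height',
  help: 'Latest block number observed on the development network',
});

provider.on('block', (blockNumber) => blockHeight.set(blockNumber));

// Expose /metrics for Prometheus to scrape; the port is arbitrary and must
// match whatever target you list in the Prometheus scrape configuration.
http.createServer(async (req, res) => {
  res.setHeader('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
}).listen(9464);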

Finally, verify your setup by starting a local Hardhat network and deploying a simple smart contract to generate activity. Run docker-compose up -d to launch the monitoring stack, then navigate to http://localhost:3000 to access Grafana. You should see pre-configured dashboards displaying metrics like block propagation time, gas usage per block, transaction success rate, and pending transaction pool size. This live data confirms your prerequisites are correctly met and the pipeline is operational.

ARCHITECTURE

Architecture Overview

A robust monitoring stack is essential for debugging and optimizing smart contracts and dApps during development. This guide outlines the core components and setup for a local observability system.

A development monitoring architecture typically consists of three core layers: data collection, processing/aggregation, and visualization. The data collection layer uses agents like Prometheus to scrape metrics from your blockchain nodes (e.g., Geth, Erigon) and applications. The processing layer, often a time-series database like Prometheus itself, stores and aggregates this data. Finally, the visualization layer, such as Grafana, provides dashboards to query and graph the metrics. This separation of concerns creates a flexible and scalable system for observing network health.

Setting up the stack begins with defining key metrics. For an Ethereum development network, you should monitor node performance (CPU/memory usage, peer count, sync status), blockchain activity (gas used, transaction count, block time), and application-specific data (smart contract function calls, event emissions, error rates). Prometheus uses a YAML configuration file (prometheus.yml) to define these scrape targets. A basic target for a Geth node's metrics endpoint would look like:

yaml
- job_name: 'geth-dev'
  # Geth serves Prometheus-format metrics at this path when started with --metrics
  metrics_path: '/debug/metrics/prometheus'
  static_configs:
    - targets: ['localhost:6060']

Grafana connects to your Prometheus database as a data source, allowing you to build dashboards with queries using PromQL. Effective dashboards for development include a high-level overview panel with current block number and peer count, time-series graphs for memory consumption and CPU load, and alert panels for critical failures. You can configure Grafana alerts to notify your team via Slack or email if, for instance, the node falls out of sync or memory usage exceeds a threshold. Pre-built dashboards for clients like Geth and Nethermind are available on the Grafana Labs website.

For a more comprehensive view, integrate logging and tracing. Tools like Loki can aggregate logs from your nodes and containers, while Tempo or Jaeger can trace transaction execution across microservices or smart contract calls. This unified approach—metrics, logs, and traces—is often called the three pillars of observability. Using the Grafana stack (Prometheus, Loki, Tempo) provides a single pane of glass for correlating a high gas spike in a metric graph with the corresponding error logs and slow transaction traces.

To automate this setup for local development, use Docker Compose. A single docker-compose.yml file can define services for Prometheus, Grafana, and alert managers, with pre-configured dashboards imported as volumes. This ensures every developer on your team has an identical, production-like monitoring environment from the start. The key is to treat your observability infrastructure as code, version-controlling the configuration files alongside your application code to maintain consistency and enable rapid iteration.

MONITORING DASHBOARDS

Core Tools and Components

Essential tools and frameworks for building real-time observability into your development and test networks.

INFRASTRUCTURE

Step 1: Setting Up a Node Exporter

Node Exporter is the foundational Prometheus agent for collecting system-level metrics from your blockchain node's host machine.

A Node Exporter is a Prometheus exporter that runs on the target machine (your node's server) and exposes a wide array of hardware and OS-level metrics via an HTTP endpoint. It is essential for monitoring the health of the underlying infrastructure, not just the blockchain client software. Key metrics include CPU usage, memory consumption, disk I/O, network traffic, and filesystem utilization. Without it, you cannot detect if your node is failing due to a full disk, memory leak, or network bottleneck.

You can install Node Exporter either through your system's package manager (the prometheus-node-exporter package on Ubuntu/Debian) or by downloading a release binary from the Prometheus GitHub releases page. For example, to install version 1.8.0 on an amd64 Linux system from the release tarball, you would run:

bash
wget https://github.com/prometheus/node_exporter/releases/download/v1.8.0/node_exporter-1.8.0.linux-amd64.tar.gz
tar xvfz node_exporter-1.8.0.linux-amd64.tar.gz
sudo mv node_exporter-1.8.0.linux-amd64/node_exporter /usr/local/bin/

After installation, you need to create a systemd service file to manage the Node Exporter as a background daemon. This ensures it starts automatically on boot and can be easily controlled. Create a file at /etc/systemd/system/node_exporter.service with the following content:

ini
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target

Then, create the dedicated user, reload systemd, and start the service: sudo useradd --no-create-home --shell /bin/false node_exporter, followed by sudo systemctl daemon-reload && sudo systemctl enable --now node_exporter.

By default, Node Exporter runs on port 9100. You can verify it's working by visiting http://your-server-ip:9100 in your browser or using curl localhost:9100/metrics. This page will display all the raw metrics in Prometheus's exposition format. The next step is to configure your central Prometheus server to scrape this target. You will add this node's IP and port to the scrape_configs section of prometheus.yml, allowing Prometheus to collect these metrics at regular intervals (e.g., every 15 seconds) for storage and analysis.

For blockchain nodes, specific metrics are critical. Pay close attention to disk free space (node_filesystem_free_bytes) on the volume storing your chaindata, as running out of space will crash your client. Monitor memory available (node_memory_MemAvailable_bytes) to prevent out-of-memory (OOM) kills. High CPU system time (node_cpu_seconds_total{mode="system"}) can indicate kernel-level issues. Setting up alerts in Grafana for these thresholds is a common next step after the exporter is successfully integrated into your monitoring stack.

MONITORING SETUP

Step 2: Configuring Prometheus

Configure Prometheus to scrape metrics from your development network nodes, enabling data collection for the Grafana dashboard.

Prometheus is a time-series database that collects and stores metrics from configured targets. For blockchain monitoring, it acts as the data layer, pulling key performance indicators (KPIs) like block_height, peer_count, and consensus_latency from your nodes. You configure it by defining scrape jobs in a YAML file, which tell Prometheus where to find your node's metrics endpoint (typically port 26660 for Cosmos SDK chains, once Prometheus instrumentation is enabled in the node's config.toml). This setup is essential for transforming raw node data into queryable time-series information.

Create a configuration file named prometheus.yml. The core component is the scrape_configs section, where you define jobs for each node. A basic job for a single local Tendermint node looks like this:

yaml
scrape_configs:
  - job_name: 'chainscore-devnet'
    static_configs:
      - targets: ['localhost:26660']
        labels:
          instance: 'validator-01'
          network: 'devnet'

This config instructs Prometheus to scrape the metrics endpoint from the specified target at the global scrape_interval (1 minute by default; 15 seconds is a common override). The labels add metadata, which is crucial for filtering and grouping data in Grafana when monitoring multiple nodes.

For a multi-node development network, you would add each node's IP and metrics port to the targets list. If your nodes are behind load balancers or in Docker containers, you may need to adjust network settings. After saving the configuration, start Prometheus with the command prometheus --config.file=prometheus.yml. Verify it's working by navigating to http://localhost:9090/targets in your browser; the status of your chainscore-devnet target should show as UP. This confirms Prometheus is successfully collecting data from your node, which is the prerequisite for building visualizations in the next step.

VISUALIZATION

Step 3: Building Grafana Dashboards

Transform raw metrics from your development network into actionable insights with custom Grafana dashboards.

Grafana is an open-source platform for data visualization and monitoring. It connects to Prometheus as a data source, allowing you to build interactive dashboards that display real-time and historical metrics. For a development network, a well-designed dashboard provides a single pane of glass to monitor node health, blockchain activity, and system resource usage. This enables you to quickly identify performance bottlenecks, track transaction throughput, and verify the stability of your test environment before deployment.

To begin, you must first add your Prometheus server as a data source in Grafana. Navigate to Configuration > Data Sources in your Grafana instance and click Add data source. Select Prometheus and enter the URL of your Prometheus server (e.g., http://localhost:9090). Save and test the connection to ensure Grafana can query your metrics. Once connected, you can start creating panels using PromQL (Prometheus Query Language) to visualize the data collected from your nodes and the Chainscore agent.

Effective dashboards for development networks typically include several key panels. A Node Health section should display chainscore_node_sync_status to confirm block synchronization and process_cpu_seconds_total for resource consumption. A Network Activity section can graph chainscore_transactions_total to show transaction volume and chainscore_gas_used_sum to monitor contract execution costs. Finally, a System Metrics section using node_memory_MemAvailable_bytes and node_filesystem_avail_bytes helps prevent infrastructure issues. You can import pre-built dashboard templates, like the popular Node Exporter Full dashboard (ID: 1860), to accelerate setup.

Each dashboard panel is configured with a PromQL query. For example, to chart the rate of transactions per second, you would use the query rate(chainscore_transactions_total[5m]). To show the current block number, use chainscore_block_number. Grafana allows you to customize the visualization type (graph, stat, gauge, table), set alert thresholds, and define refresh intervals. Organize related panels into rows and use text panels to add descriptions, creating a dashboard that is both informative for ongoing monitoring and valuable for debugging specific test scenarios.
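
If a panel stays empty, it can help to run the same PromQL directly against Prometheus's HTTP query API before blaming the dashboard. A hedged sketch, assuming Prometheus on localhost:9090 and Node 18+ (for the global fetch):

javascript
// Verify that a PromQL expression returns data before wiring it into a panel.
const promql = 'rate(chainscore_transactions_total[5m])';

const url = `http://localhost:9090/api/v1/query?query=${encodeURIComponent(promql)}`;
const body = await (await fetch(url)).json();

if (body.status === 'success' && body.data.result.length > 0) {
  console.log('Query returns data:', JSON.stringify(body.data.result, null, 2));
} else {
  console.log('No data yet: check the collector and the Prometheus scrape targets.');
}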

For teams, Grafana supports dashboard sharing, annotation, and alerting. You can set up alerts based on panel metrics, such as triggering a notification if a node's sync status changes to 0 (unsynced) for more than 5 minutes. This proactive monitoring is crucial for maintaining a reliable development and testing environment. By investing time in building a comprehensive Grafana dashboard, you create a powerful tool for gaining operational visibility, which directly contributes to more efficient development cycles and higher-quality blockchain applications.

DASHBOARD CONFIGURATION

Step 4: Adding Smart Contract Metrics

This step integrates your deployed smart contracts with the monitoring dashboard, enabling real-time tracking of key on-chain events and contract health.

With your development network (like a local Hardhat node or a public testnet) running and your dashboard infrastructure deployed, you can now connect your smart contracts. This involves configuring the dashboard to listen for specific events emitted by your contracts. For example, a DeFi lending protocol would track events like Deposit, Withdraw, Borrow, and Liquidate. You'll need to provide the dashboard with the contract's Application Binary Interface (ABI) and its deployed address. This allows the monitoring service to decode transaction logs and map them to human-readable metrics.

The core of this setup is defining the specific metrics you want to visualize. Common categories include transaction volume (count and value), user activity (unique active addresses), contract state (total value locked, token supply), and error rates (failed transactions, revert reasons). For a minting contract, you might track TotalMinted and UniqueMinters. These metrics are typically defined in a configuration file (e.g., metrics.yaml or dashboard.json) that specifies the event signature and the field to extract from the event log.

Here is a simplified example of how you might define a metric for a Transfer event from an ERC-20 contract in a YAML config:

yaml
metrics:
  - name: "token_transfer_volume"
    type: "counter"
    description: "Total volume of tokens transferred"
    contract_address: "0x742d35Cc6634C0532925a3b844Bc9e..."
    event_abi: "Transfer(address indexed from, address indexed to, uint256 value)"
    value_field: "value"

This configuration instructs the dashboard to listen for Transfer events, increment the counter by the value parameter, and display it as a time-series chart.
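
Under the hood, that behavior amounts to subscribing to the event and incrementing a counter. The following is not the Chainscore implementation, just a sketch assuming ethers v6, prom-client, an 18-decimal ERC-20, and a contract address supplied via an environment variable:

javascript
import { ethers } from 'ethers';
import client from 'prom-client';

const provider = new ethers.JsonRpcProvider('http://localhost:8545');

const ERC20_ABI = ['event Transfer(address indexed from, address indexed to, uint256 value)'];
const token = new ethers.Contract(process.env.TOKEN_ADDRESS, ERC20_ABI, provider);

const transferVolume = new client.Counter({
  name: 'token_transfer_volume',
  help: 'Total volume of tokens transferred',
});

// Increment the counter by the decoded `value` of each Transfer event.
token.on('Transfer', (from, to, value) => {
  transferVolume.inc(Number(ethers.formatUnits(value, 18))); // assumes 18 decimals
});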

After applying your configuration, the dashboard will begin ingesting blockchain data. You should verify the connection by performing a test transaction—such as transferring tokens or calling a key contract function—and confirming that the corresponding metric updates in near real-time. This live feedback loop is critical for development, allowing you to immediately see the impact of your transactions and catch anomalies in contract behavior that might not be apparent from unit tests alone.
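
To generate that test activity on a local node, a hedged sketch (assuming ethers v6, unlocked dev accounts, and the same TOKEN_ADDRESS environment variable as above) is:

javascript
import { ethers } from 'ethers';

const provider = new ethers.JsonRpcProvider('http://localhost:8545');
const sender = await provider.getSigner(0);      // first unlocked dev account
const recipient = await provider.getSigner(1);   // second unlocked dev account

const token = new ethers.Contract(
  process.env.TOKEN_ADDRESS, // your deployed ERC-20's address
  ['function transfer(address to, uint256 amount) returns (bool)'],
  sender
);

// Send a small transfer and wait for it to mine, then watch the
// token_transfer_volume metric update on the dashboard.
const tx = await token.transfer(await recipient.getAddress(), ethers.parseUnits('1', 18));
const receipt = await tx.wait();
console.log(`Transfer mined in block ${receipt.blockNumber}`);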

For advanced monitoring, consider setting up alerts based on these metrics. Threshold-based alerts can notify your team via Slack or PagerDuty for conditions like a sudden spike in failed transactions (potential bug), a drop in TVL below a safety margin, or a pause in user activity. Integrating with tools like OpenZeppelin Defender Sentinels or building custom alerting logic completes the operational picture, transforming your dashboard from a passive viewer into an active guardian for your smart contract suite.

DEVELOPMENT NETWORKS

Step 5: Configuring Alerts

Set up proactive notifications to be instantly informed of critical events on your development network, from transaction failures to contract errors.

After your monitoring dashboard is live, the next step is to configure alert rules. These rules define the specific conditions that, when met, will trigger a notification. For development networks, key metrics to monitor include high transaction failure rates, RPC endpoint latency spikes, smart contract revert errors, and gas price anomalies. Setting thresholds for these metrics allows you to catch issues before they impact your development workflow or user testing.

Chainscore provides a flexible alerting system where you can define rules using our UI or API. A common rule for a testnet might be: Alert if the failed transaction rate exceeds 5% over a 5-minute window. You can configure the destination for these alerts, such as Slack channels, Discord webhooks, or email. For programmatic setups, you can use the @chainscore/sdk to create and manage alert rules directly from your CI/CD pipeline or deployment scripts.

Here is an example of creating an alert via the Chainscore SDK for a Sepolia testnet deployment:

javascript
import { Chainscore } from '@chainscore/sdk';

const cs = new Chainscore({ apiKey: 'YOUR_API_KEY' });

await cs.alerts.create({
  projectId: 'your-project-id',
  name: 'High Sepolia TX Fail Rate',
  condition: {
    metric: 'transactions.failed.rate',
    threshold: 5, // 5%
    window: '5m',
    operator: 'gt'
  },
  channels: [
    { type: 'webhook', target: 'https://hooks.slack.com/...' }
  ]
});

This code snippet sets up an alert that fires when the failed transaction rate on your monitored Sepolia endpoints goes above 5%.

Effective alerting reduces mean time to detection (MTTD). To avoid alert fatigue, start with a few critical rules and expand based on observed patterns. Consider creating different severity levels: a P1 alert for complete RPC downtime sent to an on-call channel, and a P3 alert for elevated gas prices sent to a general dev channel. Regularly review and tune your alert thresholds as your application's traffic patterns on testnets evolve.

DEVELOPMENT NETWORK DASHBOARD

Key Metrics to Monitor

Essential on-chain and infrastructure metrics for tracking the health and performance of a development network.

| Metric | Description | Target / Healthy Range | Alert Threshold |
| --- | --- | --- | --- |
| Block Production Rate | Average time between consecutive blocks | 2-5 seconds (EVM L2) / 12 seconds (Ethereum) | > 15 seconds for 5+ blocks |
| Transaction Success Rate | Percentage of submitted transactions that succeed | > 99% | < 95% |
| Pending Transaction Queue | Number of transactions in mempool awaiting inclusion | < 1000 | > 5000 |
| Gas Price (Gwei) | Current average gas price for standard transactions | < 10 Gwei (devnet) | > 50 Gwei (devnet) |
| Active Validators / Nodes | Number of consensus participants currently online | > 66% of total network | < 33% of total network |
| RPC Endpoint Latency (p95) | 95th percentile response time for JSON-RPC calls | < 200 ms | > 1000 ms |
| Chain Reorganization Depth | Number of blocks orphaned in a reorg event | 0 (ideal) | ≥ 2 blocks |
| Contract Deployment Failures | Rate of failed smart contract deployment attempts | < 1% | > 5% |

MONITORING DASHBOARD

Troubleshooting Common Issues

Common problems and solutions when setting up a monitoring dashboard for development networks like Hardhat, Anvil, or Foundry.

Why isn't my dashboard showing any data?

This is typically a connection or configuration issue. First, verify your RPC endpoint is correct and the node is running. For local networks, ensure you're connecting to the right port (e.g., http://localhost:8545).

Check these common points:

  • Node Sync Status: Your node must be fully synced. Use curl -X POST --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' http://localhost:8545 to check if it returns a block number.
  • Dashboard Configuration: Confirm the chain ID in your dashboard config (e.g., 31337 for Hardhat) matches your network.
  • Firewall/Ports: Ensure port 8545 (or your configured port) is not blocked by a firewall.
  • Provider Instance: If using a library like ethers.js or web3.js, verify the provider object is correctly instantiated and passed to your monitoring service; a minimal sanity check is sketched below.
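
A minimal provider sanity check that exercises the points above (assuming ethers v6 and a local Hardhat or Anvil node):

javascript
import { ethers } from 'ethers';

const RPC_URL = 'http://localhost:8545'; // match the port your node actually listens on
const EXPECTED_CHAIN_ID = 31337n;        // Hardhat/Anvil default; adjust for your network

const provider = new ethers.JsonRpcProvider(RPC_URL);

try {
  const [blockNumber, network] = await Promise.all([
    provider.getBlockNumber(), // fails fast if the RPC is unreachable or the port is blocked
    provider.getNetwork(),     // reports the chain ID the node is actually serving
  ]);
  console.log(`Connected: block ${blockNumber}, chainId ${network.chainId}`);
  if (network.chainId !== EXPECTED_CHAIN_ID) {
    console.warn('Chain ID mismatch: check the network setting in your dashboard config.');
  }
} catch (err) {
  console.error('RPC check failed: is the node running and the port reachable?', err.message);
}

If this check passes but the dashboard still shows nothing, the problem is more likely in the collector or Prometheus scrape configuration than in the node itself.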