
Setting Up a Real-Time Blockchain Monitoring System

A technical guide for developers to build a system that listens to live blockchain events, triggers alerts for specific conditions, and scales for high-throughput networks.
Chainscore © 2026
INTRODUCTION

A guide to building a system that tracks on-chain activity, detects anomalies, and delivers actionable alerts.

A real-time blockchain monitoring system is an essential tool for developers, researchers, and security teams. It continuously streams data from blockchain nodes, processes it for specific events, and triggers alerts based on predefined logic. Unlike static explorers, these systems provide low-latency insights into network health, smart contract interactions, and wallet activity. Core components typically include a reliable data source (like an RPC node), an event listener, a processing engine, and an alerting mechanism. This setup is crucial for detecting exploits, tracking governance proposals, or monitoring protocol performance as it happens.

The foundation of any monitoring system is a reliable data connection. You can use a public RPC endpoint from providers like Alchemy or Infura, or run your own node for full control. For Ethereum, you would use the eth_subscribe WebSocket method to listen for new blocks. In Node.js, you can use libraries like web3.js or ethers.js to establish this connection. The key is to subscribe to events like newHeads to receive each block header the moment it is produced (note that newly produced blocks are not yet finalized and can still be reorganized). This real-time feed is the raw input for all subsequent analysis and filtering.
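As a concrete sketch of the message shapes involved (pure JSON-RPC, no library required), the helpers below build a newHeads subscription request and pull the block number out of an incoming notification; the function names are illustrative, not part of any API:

```javascript
// Build the JSON-RPC request that subscribes to new block headers.
function buildNewHeadsRequest(id) {
  return JSON.stringify({
    jsonrpc: '2.0',
    id: id,
    method: 'eth_subscribe',
    params: ['newHeads']
  });
}

// Extract the block number from an incoming eth_subscription notification.
// Returns null for non-notification messages (e.g. the subscription confirmation).
function blockNumberFromMessage(raw) {
  const msg = JSON.parse(raw);
  if (msg.method !== 'eth_subscription') return null;
  // Block numbers arrive as hex quantities, e.g. "0x12d687"
  return parseInt(msg.params.result.number, 16);
}
```

You would send the first string over the open WebSocket and feed every received message through the second function, ignoring the `null` results.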

Once you have a live data stream, you need to filter and decode the information. This involves parsing transaction logs to identify specific smart contract events. For example, to monitor a Uniswap V3 pool, you would listen for the Swap event and decode its parameters (sender, amount0, amount1). You can set up filters for high-value transactions, failed transactions, or interactions with a known malicious contract address. The processing logic, often written in Python or Node.js, transforms raw blockchain data into structured, actionable events that your application can understand and act upon.
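The filtering logic described above can be sketched as a small predicate; the watchlist address and the 100 ETH threshold below are invented examples, and real transaction objects would come from your decoded stream:

```javascript
// Illustrative filter: flag transactions that are high-value, failed,
// or sent to a known-bad address. Threshold and watchlist are examples.
const WATCHLIST = new Set(['0xbad0000000000000000000000000000000000bad']);
const HIGH_VALUE_WEI = 100n * 10n ** 18n; // 100 ETH

function classifyTx(tx) {
  const reasons = [];
  if (BigInt(tx.value) >= HIGH_VALUE_WEI) reasons.push('high-value');
  if (tx.status === 0) reasons.push('failed');
  if (WATCHLIST.has(tx.to?.toLowerCase())) reasons.push('watchlisted-recipient');
  return reasons; // empty array => not interesting
}
```

Downstream code can then route any transaction with a non-empty reason list into the alerting pipeline.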

The final step is to act on the processed data. This usually means configuring an alerting system. For critical security events, you might send an immediate notification via SMS, email, or a Slack/Discord webhook. For analytical purposes, you could store the data in a time-series database like TimescaleDB or InfluxDB for visualization in Grafana. It's also common to implement a simple dashboard that displays key metrics like transactions per second, gas price trends, or failed transaction rates. This layer turns raw data into operational intelligence that teams can use to make decisions.

SETUP GUIDE

Prerequisites

Before building a real-time blockchain monitoring system, you need to configure the foundational components: a reliable data source, a processing environment, and a method for handling alerts.

The first prerequisite is establishing a robust data ingestion pipeline. You have several options: connecting directly to a node provider's RPC endpoint (e.g., Alchemy, Infura, QuickNode), using a specialized data indexer (The Graph, Subsquid), or subscribing to a real-time data stream (Chainscore WebSocket feeds, POKT Network). For monitoring, a WebSocket connection is often essential to receive instant notifications for new blocks, pending transactions, and specific contract events without constant polling, which reduces latency and API costs.

Next, set up your development and execution environment. You will need Node.js (v18 or later) or Python 3.9+ installed. Essential libraries include web3.js or ethers.js for Ethereum-based chains, or their equivalents for other ecosystems (e.g., @solana/web3.js). For more complex event processing, consider a framework like Node.js with Express for building a listener server or Python with FastAPI and asyncio for handling concurrent WebSocket streams. Containerizing your application with Docker ensures consistent deployment across environments.

You must also configure secure storage for application state and alerts. A lightweight database like SQLite or PostgreSQL is suitable for tracking processed block heights, storing filtered events, and logging incidents. For the alerting mechanism, integrate with a notification service such as Discord (via webhooks), Telegram Bot API, Slack API, or PagerDuty for critical infrastructure alerts. Ensure you manage API keys and sensitive configuration using environment variables (via a .env file) and never hardcode them.

Finally, understand the specific data you need to monitor. This involves knowing the smart contract addresses, event signatures (e.g., Transfer(address,address,uint256)), and target wallet addresses relevant to your use case. Tools like Etherscan for contract ABIs and Chainscore's Explorer for real-time chain state are invaluable for this setup phase. Having these details prepared will streamline the development of your event filters and alert logic.

SYSTEM ARCHITECTURE OVERVIEW

A guide to the core components and design patterns for building a scalable, reliable system to monitor on-chain activity and events.

A real-time blockchain monitoring system is a critical infrastructure component for applications like DeFi dashboards, security alerting, and on-chain analytics. Its primary function is to ingest, process, and act upon data from one or more blockchain networks as new blocks are produced. The architecture must be event-driven and highly available to handle the continuous, immutable stream of data from nodes. Core challenges include managing data consistency, handling chain reorganizations, and scaling to process high transaction volumes during network congestion.

The foundational layer is the data ingestion service. This component maintains persistent WebSocket connections to blockchain node providers like Alchemy, Infura, or a self-hosted node. It subscribes to events such as newHeads for block headers and specific contract events using filters. For Ethereum, libraries like ethers.js or web3.py are commonly used. This service must be resilient to node disconnections and include logic to catch up on missed blocks, ensuring no data gaps in the event of downtime.

Once raw data is ingested, it flows into a stream processing pipeline. This is where business logic is applied: decoding transaction inputs with ABI files, calculating derived metrics, or filtering for specific addresses or event signatures. Tools like Apache Kafka or cloud-native services (AWS Kinesis, Google Pub/Sub) are ideal for decoupling ingestion from processing, allowing multiple consumer services to work with the same data stream. This stage transforms raw blockchain logs into structured, application-ready events.

The processed events are then typically stored and made queryable. A time-series database like TimescaleDB or InfluxDB is optimal for metric data (e.g., gas prices, transaction counts), while a general-purpose database like PostgreSQL or a document store may be used for complex event payloads. This storage layer enables historical analysis and powers dashboards. For real-time alerts, a dedicated service consumes the processed stream, evaluates conditions (e.g., "large token transfer"), and triggers notifications via email, Slack, or SMS.

A robust monitoring system must also monitor itself. Implement health checks for node connections, track processing lag behind the chain tip, and set up alerts for error rates. For production systems, consider a multi-region deployment to avoid single points of failure. The entire stack can be containerized with Docker and orchestrated via Kubernetes for easy scaling and management. The final architecture provides a reliable pipeline from raw on-chain data to actionable insights and alerts in seconds.

REAL-TIME DATA

Setting Up the WebSocket Connection

Establish a persistent WebSocket connection to receive live blockchain events, transaction confirmations, and mempool data without polling.

A WebSocket connection provides a full-duplex communication channel over a single TCP connection, enabling servers to push data to clients instantly. For blockchain monitoring, this is essential for tracking mempool transactions, new block headers, contract events, and address activity in real-time. Unlike REST APIs that require constant polling, WebSockets maintain an open connection, reducing latency and server load. Major node providers like Alchemy, Infura, and direct Geth/Erigon nodes offer WebSocket endpoints (e.g., wss://mainnet.infura.io/ws/v3/YOUR_KEY).

To establish a connection in a Node.js environment, you can use the widely used ws library (a third-party package, not part of the Node.js standard library) or a more feature-rich option like Socket.IO. The core process involves instantiating a WebSocket object with the provider's URL and setting up event listeners for open, message, error, and close. Upon connection, you must subscribe to specific data streams using JSON-RPC methods like eth_subscribe. For example, subscribing to new pending transactions is done with {"jsonrpc": "2.0", "id": 1, "method": "eth_subscribe", "params": ["newPendingTransactions"]}.

Here is a basic implementation example using the ws library:

```javascript
const WebSocket = require('ws');

const ws = new WebSocket('wss://mainnet.infura.io/ws/v3/YOUR-PROJECT-ID');

ws.on('open', function open() {
  console.log('Connected');
  // Subscribe to hashes of new pending transactions
  ws.send(JSON.stringify({
    jsonrpc: '2.0',
    id: 1,
    method: 'eth_subscribe',
    params: ['newPendingTransactions']
  }));
});

ws.on('message', function incoming(data) {
  const parsed = JSON.parse(data);
  // The first message confirms the subscription; subsequent messages
  // carry transaction hashes in parsed.params.result
  console.log('Received:', parsed);
});
```

This sets up a listener that logs the hash of every new pending transaction broadcast to the network.

Managing the connection's lifecycle is critical for production systems. Implement automatic reconnection logic with exponential backoff to handle network interruptions. Monitor the connection health using heartbeat/ping-pong messages, as some providers may close idle connections. It's also important to handle backpressure: if your application cannot process messages fast enough, buffer them or implement flow control to prevent memory exhaustion. For Ethereum, be aware of high-volume subscriptions like newPendingTransactions, which can deliver well over ten events per second during peak times.
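A reconnection delay with exponential backoff, a cap, and optional jitter might look like this (the constants are typical choices, not requirements):

```javascript
// Reconnect delay with exponential backoff, a cap, and optional jitter.
// Attempt numbering starts at 0.
function reconnectDelayMs(attempt, { baseMs = 1000, maxMs = 30000, jitter = 0 } = {}) {
  const exp = Math.min(baseMs * 2 ** attempt, maxMs);
  // Add up to `jitter` fraction of random spread to avoid thundering herds
  return exp + Math.floor(Math.random() * exp * jitter);
}
```

On each close event you would schedule a reconnect after `reconnectDelayMs(attempt)` and reset `attempt` to zero once a connection succeeds.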

For more complex subscriptions, such as listening for specific ERC-20 transfers or NFT sales, you will need to subscribe to logs with a filter. Use the logs subscription method with a topics array to filter events by contract address and event signature. Decoding these logs requires the contract's ABI to parse the indexed and non-indexed parameters. Services like The Graph or specialized indexers can be alternatives for complex querying, but a direct WebSocket provides the lowest latency for raw event data.
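To illustrate the topics filter, the sketch below hardcodes the well-known keccak256 hash of Transfer(address,address,uint256) and left-pads an address to the 32-byte topic width; the helper names are illustrative:

```javascript
// keccak256('Transfer(address,address,uint256)') — the canonical ERC-20 event topic.
const TRANSFER_TOPIC =
  '0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef';

// Indexed address parameters appear in topics left-padded to 32 bytes.
function addressToTopic(address) {
  return '0x' + address.toLowerCase().replace(/^0x/, '').padStart(64, '0');
}

// Filter object for a logs subscription: Transfers on `tokenAddress`
// where the recipient is `toAddress` (a null entry matches anything).
function transferLogsFilter(tokenAddress, toAddress) {
  return {
    address: tokenAddress,
    topics: [TRANSFER_TOPIC, null, addressToTopic(toAddress)]
  };
}
```

This object is what you would pass as the second parameter of an eth_subscribe "logs" request.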

Finally, integrate your WebSocket client into a larger monitoring architecture. Route incoming events to a message queue (e.g., RabbitMQ, Kafka) for reliable processing by downstream services. Implement alerting for specific conditions, like a large token transfer to a flagged address. Always use environment variables for your WebSocket endpoint URLs and API keys, and consider connection pooling if you need to monitor multiple chains simultaneously. The Ethereum JSON-RPC specification is the authoritative source for all subscription methods and parameters.

REAL-TIME MONITORING

Defining and Processing Alert Rules

A guide to building a robust alerting system for blockchain data, covering rule definition, event processing, and actionable notifications.

Alert rules are the core logic of any monitoring system. They define the specific conditions you want to track on-chain. A rule typically consists of a trigger condition, a data source, and an action. Common triggers include a specific smart contract function call, a transaction from a particular address, a token transfer exceeding a threshold, or a deviation in a protocol's key metric like total value locked (TVL). For example, a rule could be: IF function 'swap()' is called on Uniswap V3 Router contract AND input amount > 100 ETH THEN trigger alert.

Processing these rules in real-time requires subscribing to blockchain events. Services like Chainscore, The Graph, or direct node subscriptions via WebSocket provide streams of new blocks and their contained transactions. Your processing engine must decode these raw transactions, applying your defined rules using the transaction data, event logs, and state changes. Efficient processing often involves indexing relevant data (like token balances or pool reserves) to evaluate complex conditions, such as detecting a significant slippage event or a sudden liquidity withdrawal from a DeFi pool.

When a rule condition is met, the system must execute an action. This is where alerts become actionable. Actions can be configured to send notifications via email, SMS, Slack, Discord webhooks, or trigger automated scripts via APIs. For critical financial alerts, consider implementing a multi-channel approach. The alert payload should include all necessary context: transaction hash, block number, involved addresses, and the specific values that triggered the rule. This allows the recipient to immediately investigate the event on a block explorer like Etherscan.
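One way to assemble such a payload is sketched below; the field names are illustrative, not a fixed schema:

```javascript
// Build an alert payload carrying the context described above.
function buildAlertPayload(rule, event) {
  return {
    rule: rule.name,
    severity: rule.severity, // e.g. 'info' | 'warning' | 'critical'
    txHash: event.txHash,
    blockNumber: event.blockNumber,
    addresses: { from: event.from, to: event.to },
    triggeredValue: event.value,
    explorerUrl: `https://etherscan.io/tx/${event.txHash}`
  };
}
```

Every notification channel can then render the same structured object, and the explorer link lets the recipient jump straight to the transaction.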

Here is a simplified code example for a rule processor using JavaScript and ethers.js, checking for large transfers of a specific ERC-20 token:

```javascript
const { ethers } = require('ethers');

// Assumes an ethers v6 WebSocket provider; WSS_ENDPOINT_URL,
// TOKEN_CONTRACT_ADDRESS, ALERT_THRESHOLD, and sendAlert are your own config.
const provider = new ethers.WebSocketProvider(WSS_ENDPOINT_URL);
const iface = new ethers.Interface([
  'event Transfer(address indexed from, address indexed to, uint256 value)'
]);

const filter = {
  address: TOKEN_CONTRACT_ADDRESS,
  topics: [ethers.id('Transfer(address,address,uint256)')]
};

provider.on(filter, (log) => {
  const { args } = iface.parseLog(log);
  const [from, to, value] = args;
  // formatUnits returns a string; convert before a numeric comparison
  const humanValue = Number(ethers.formatUnits(value, 18)); // adjust for token decimals
  if (humanValue > ALERT_THRESHOLD) {
    sendAlert(`Large transfer: ${humanValue} tokens from ${from} to ${to}`);
  }
});
```

To maintain system reliability, implement idempotency in your alerting logic to avoid duplicate notifications for the same on-chain event. Furthermore, consider the blockchain's reorganization (reorg) events; your subscriber should have a confirmation depth (e.g., 12 blocks for Ethereum) before processing an event as final. For production systems, design your rule engine to be scalable and fault-tolerant, potentially using message queues like Kafka or RabbitMQ to decouple event ingestion from rule evaluation and action dispatch.
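A minimal sketch of both safeguards, assuming events carry txHash, logIndex, and blockNumber fields:

```javascript
// Gate events on confirmation depth and deduplicate by a stable key.
// 12 confirmations is a common Ethereum heuristic, not a protocol guarantee.
const CONFIRMATIONS = 12;
const seen = new Set();

function shouldAlert(event, chainHeadBlock) {
  // Wait until the event is buried deep enough to survive most reorgs
  if (chainHeadBlock - event.blockNumber < CONFIRMATIONS) return false;
  // txHash + logIndex uniquely identifies a log within the canonical chain
  const key = `${event.txHash}:${event.logIndex}`;
  if (seen.has(key)) return false;
  seen.add(key);
  return true;
}
```

In production the `seen` set would live in a shared store with an expiry, but the gating logic stays the same.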

Finally, continuously refine your rules based on false positives and evolving threats. Monitor the alert volume and adjust thresholds. Integrate with dashboards for visualization and set up severity levels (e.g., Info, Warning, Critical). A well-tuned alert system acts as an early warning mechanism, enabling proactive responses to security incidents, arbitrage opportunities, or significant protocol events.

ARCHITECTURE DECISION

Node Provider Comparison for Real-Time Data

Key metrics for selecting a node provider to power low-latency blockchain monitoring.

| Feature / Metric | Alchemy | QuickNode | Self-Hosted Node |
| --- | --- | --- | --- |
| Typical Latency (Mainnet) | < 200 ms | < 300 ms | Varies (50-1000 ms) |
| WebSocket Connection Stability |  |  |  |
| Historical Data Depth (Blocks) | Full Archive | Full Archive | Full Archive |
| Max Concurrent Subscriptions | Unlimited | 10,000 | Limited by hardware |
| P99 Uptime SLA | 99.9% | 99.5% | 99.0% |
| Pricing Model (Monthly) | Pay-as-you-go + Tier | Fixed Tier | Infrastructure Cost |
| Avg. Cost for 10M Requests | $400-600 | $299 | $150-300 |
| Multi-Chain Support (EVM + Solana) |  |  |  |

INTEGRATING NOTIFICATION SERVICES

A guide to building a system that tracks on-chain events and delivers alerts via email, SMS, and webhooks for DeFi, security, and user engagement.

A real-time blockchain monitoring system listens for specific on-chain events—like token transfers, contract deployments, or governance votes—and triggers notifications. This is essential for DeFi position management, smart contract security, and user engagement. Unlike polling an RPC node, which is inefficient, modern systems use event streams from services like Chainscore's Webhooks or The Graph's subgraphs to receive data as it happens. The core architecture involves an event source, a processing logic layer, and multiple notification channels (e.g., Discord, Telegram, email).

To begin, you must define the events you want to monitor. Common targets include ERC-20 Transfer events, specific function calls on a contract, or failed transactions. Using a service like Chainscore simplifies this; you can set up a webhook endpoint that receives a JSON payload for each event. For example, to monitor USDC transfers above $10,000 on Ethereum, you would configure a filter for the Transfer event on the USDC contract and add a value threshold. The alternative, manually parsing logs via eth_getLogs, requires managing block ranges and is not real-time.

The next step is processing the incoming event data. Your webhook server should validate the payload's origin (using a signing secret) and extract key details: transaction hash, block number, from/to addresses, and token amount. Here's a basic Node.js example using Express to receive a webhook:

```javascript
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  const event = req.body;
  // Verify the payload's signature (signing secret) before trusting it
  console.log(`Transfer: ${event.amount} from ${event.from} to ${event.to}`);
  // Add logic to format and route the notification
  res.status(200).send('OK');
});

app.listen(3000);
```

This processing layer is where you can enrich data, check against a database, or apply business logic before alerting.

Finally, integrate notification channels. For urgent security alerts, use SMS via Twilio or push notifications with services like Push Protocol. For team coordination, send messages to a Discord channel using a webhook or to Telegram via a bot. Transaction alerts to users can be sent via email using Resend or AWS SES. It's critical to implement rate limiting and idempotency to prevent duplicate alerts from blockchain reorgs. Tools like PagerDuty or Opsgenie can manage on-call schedules for critical infrastructure alerts.
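A minimal per-key cooldown sketch for the rate-limiting concern; the injectable clock exists only to make the logic testable:

```javascript
// Per-rule rate limiter: suppress repeat alerts for the same key
// within a cooldown window.
function makeAlertLimiter(cooldownMs, now = Date.now) {
  const lastSent = new Map();
  return function allow(key) {
    const t = now();
    const prev = lastSent.get(key);
    if (prev !== undefined && t - prev < cooldownMs) return false;
    lastSent.set(key, t);
    return true;
  };
}
```

You would call `allow(ruleId)` before dispatching each notification and drop (or batch) anything it rejects.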

For production systems, consider scalability and reliability. Use a message queue (e.g., RabbitMQ, Amazon SQS) to decouple event ingestion from notification sending, ensuring you don't lose alerts during downstream service outages. Implement logging and monitoring for the alerting pipeline itself using tools like Datadog. Always secure your webhook endpoints with HTTPS and signature verification. By combining Chainscore's real-time event streams with robust notification logic, you can build a monitoring system that responds to on-chain activity in seconds.

SCALING FOR HIGH-THROUGHPUT CHAINS

A guide to building a monitoring stack that can keep pace with high-throughput blockchains like Solana, Sui, and Aptos, where sub-second block times and thousands of TPS are the norm.

High-throughput chains present a unique monitoring challenge. Traditional systems polling an RPC endpoint every 30 seconds are insufficient. You need an architecture designed for real-time data ingestion and low-latency alerting. The core components are a reliable data source, a high-performance stream processor, a time-series database, and a visualization/alerting layer. For chains like Solana, you must handle the WebSocket subscription for new blocks and transactions as the primary data firehose, not as an afterthought.

Your first decision is selecting the data source. Using a public RPC endpoint for production monitoring is not recommended due to rate limits and instability. Instead, consider running your own archival node or using a dedicated node provider like Chainstack, Alchemy, or QuickNode that offers enhanced APIs. For maximum reliability, implement a fallback strategy with multiple providers. The key is to establish a persistent WebSocket connection to subscribe to critical events: new blocks, transactions, logs, or program-specific instructions.

The stream processor is the heart of the system. It consumes the raw WebSocket feed, parses, filters, and transforms the data into structured metrics. Tools like Apache Kafka, Apache Flink, or cloud-native services (AWS Kinesis, Google Pub/Sub) are ideal for this. For example, you might write a service that listens for logs notifications on Solana, extracts failed transaction signatures, and pushes a count metric to your database every second. This decouples ingestion from processing, allowing you to scale each layer independently.

Processed metrics must be stored in a database built for time-series data. Prometheus is the industry standard, but for vast, high-cardinality data from multiple chains, consider VictoriaMetrics or TimescaleDB. Define clear metrics: chain_head_block_number, block_propagation_time_ms, transaction_per_second, rpc_endpoint_latency, and custom business logic metrics like flash_loan_volume_eth. Use labels (e.g., chain="solana-mainnet", priority_fee="high") for powerful filtering and aggregation.
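As an illustration of the metric-plus-labels idea, here is a tiny helper that renders one sample in the Prometheus text exposition format (it does no escaping of label values, so treat it as a sketch only):

```javascript
// Render a metric sample in the Prometheus text exposition format,
// e.g. chain_head_block_number{chain="solana-mainnet"} 250000000
function prometheusLine(name, labels, value) {
  const pairs = Object.entries(labels)
    .map(([k, v]) => `${k}="${v}"`)
    .join(',');
  return pairs ? `${name}{${pairs}} ${value}` : `${name} ${value}`;
}
```

In practice you would use an official client library, which handles escaping, metric types, and the /metrics endpoint for you.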

Finally, configure alerting and visualization. Grafana is the go-to for dashboards, connecting to your time-series database. Set up alerts using Grafana Alerting or Prometheus Alertmanager for critical conditions: block_height_stale_for_1m, tps_below_threshold, or error_rate_above_5percent. For on-call notifications, integrate with PagerDuty, Slack, or Opsgenie. The goal is to detect and respond to chain halts, performance degradation, or specific on-chain events before they impact your application's users.

To implement this, start with a simple architecture: a Node.js or Python service subscribing to a WebSocket, parsing blocks, and writing to Prometheus. Use the official SDKs (e.g., @solana/web3.js, aptos-sdk). As scale increases, introduce a message queue. Remember to monitor the monitor itself: track connection stability, processing lag, and database health. This proactive approach is essential for maintaining reliable services on networks where a minute of downtime can mean missing tens of thousands of transactions.

BUILDING OPERATIONAL DASHBOARDS

A guide to architecting and implementing a production-grade monitoring dashboard for blockchain nodes, smart contracts, and network health using modern tools.

A real-time blockchain monitoring system is essential for developers and operators to ensure node health, track smart contract events, and detect anomalies. Unlike traditional web services, blockchain monitoring requires subscribing to on-chain events, parsing transaction data, and aggregating metrics from distributed nodes. The core components typically include a data ingestion layer (using RPC nodes or indexers), a processing engine (like a stream processor), a time-series database for metrics, and a visualization dashboard. Tools such as Prometheus for metrics, Grafana for dashboards, and The Graph for indexed data form a common stack.

The first step is establishing reliable data sources. For Ethereum and EVM-compatible chains, you can connect to node providers like Alchemy, Infura, or run your own Geth or Erigon node. Use the JSON-RPC API's eth_subscribe method to create WebSocket connections for real-time block headers, logs, and pending transactions. For more complex queries, consider using a subgraph on The Graph to index specific smart contract events. This ingestion layer should log raw data to a message queue like Apache Kafka or Redis Streams to decouple consumption and handle backpressure.

Next, process the stream of blockchain data. A service written in Node.js, Python, or Go should consume from the queue, parse transactions, and extract key metrics. Important metrics to track include: block_propagation_time, gas_used_percentage, pending_transaction_count, wallet_balance_changes, and custom smart_contract_event_counts. Each metric should be tagged with chain ID and node identifier. These processed metrics are then pushed to a time-series database like Prometheus, InfluxDB, or TimescaleDB. For example, a Prometheus counter could increment for every failed transaction.

Visualization is where Grafana excels. Create dashboards with panels for each metric family. A critical view is the Node Health panel, showing sync status, peer count, and memory usage. Another is the Network Overview, displaying blocks per second, average gas price, and total transaction volume. For DeFi protocols, a Smart Contract Activity panel can graph function calls and value locked. Grafana's alerting rules can notify teams via Slack or PagerDuty if, for instance, a node falls out of sync or gas prices spike abnormally. Use this JSON-RPC call to check sync status: {"jsonrpc":"2.0","method":"eth_syncing","id":1}.
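A small helper for interpreting the eth_syncing result can feed that sync-status panel; per the JSON-RPC spec the method returns false when the node is fully synced, or an object with hex-encoded progress fields:

```javascript
// Interpret the result of an eth_syncing JSON-RPC call.
function syncStatus(rpcResult) {
  if (rpcResult === false) return { synced: true, blocksBehind: 0 };
  // currentBlock and highestBlock arrive as hex quantities
  const current = parseInt(rpcResult.currentBlock, 16);
  const highest = parseInt(rpcResult.highestBlock, 16);
  return { synced: false, blocksBehind: highest - current };
}
```

The `blocksBehind` number is a natural gauge metric to chart and alert on.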

Beyond basic metrics, implement logging and tracing for smart contract interactions. Tools like OpenTelemetry can trace a transaction's journey from a user's wallet through the mempool to block inclusion. Structured logs should capture transaction hashes, from/to addresses, and contract method names. This is crucial for debugging and auditing. For security monitoring, set up alerts for suspicious patterns, such as a high frequency of failed transactions from an address (potential attack) or large, unexpected token transfers from a protocol treasury.

Finally, ensure your system is resilient. Run multiple monitoring agents across different node providers to avoid single points of failure. Use docker-compose or Kubernetes to containerize your monitoring stack for easy deployment. Regularly update your dashboards to reflect new smart contracts or chain upgrades. The end goal is a system that provides a single pane of glass for your blockchain operations, enabling proactive responses to issues and data-driven decisions about network performance and user activity.

TROUBLESHOOTING

Frequently Asked Questions

Common technical questions and solutions for developers implementing real-time blockchain monitoring.

Frequent WebSocket disconnects are a common issue when using public RPC endpoints. These nodes often have rate limits, connection limits, or are under heavy load.

Primary causes include:

  • Rate limiting: Public RPC providers like Infura, Alchemy, and QuickNode enforce request limits per second or per day on free tiers.
  • Idle timeouts: Connections are often terminated after 30-60 seconds of inactivity to conserve server resources.
  • Concurrent connection limits: Free tiers may allow only 1-5 simultaneous WebSocket connections.

Solutions:

  • Implement automatic reconnection logic with exponential backoff in your client.
  • Use a dedicated, paid RPC endpoint for production monitoring, which offers higher limits and dedicated infrastructure.
  • Subscribe to heartbeats or ping/pong messages to keep the connection alive if your application has idle periods.
  • Consider using a service like Chainscore's WebSocket API, which is built for high-volume, persistent connections.