Setting Up Transaction Propagation Monitoring

A developer guide to building a system that tracks how transactions spread across a peer-to-peer network, including latency measurement, failure detection, and node health checks.
PRACTICAL GUIDE

Introduction

Learn how to configure a system to track and analyze the speed and reliability of transaction broadcasts across a peer-to-peer network.

Transaction propagation monitoring is a critical tool for developers and node operators to understand the health of a blockchain network. It involves tracking a transaction from its initial broadcast to its acceptance by a majority of network peers. This process reveals key network metrics: propagation latency (how long a transaction takes to spread) and peer connectivity (how many peers receive it); it can also surface potential censorship or network partitioning. By monitoring these factors, you can identify bottlenecks, validate the performance of your node's mempool policy, and ensure your applications are interacting with a robust network layer.

To set up basic monitoring, you need to instrument a node to log transaction-related events. For an Ethereum client like Geth, you can enable detailed logging for the transaction pool (txpool) and the P2P layer (p2p). The following command starts Geth with a tuned transaction pool and trace-level logging:

bash
geth --txpool.pricelimit 1 --txpool.globalslots 4096 --txpool.globalqueue 4096 --verbosity 5

The key is to capture events like AddRemoteTxs (incoming transactions) and track the transaction hash. You can pipe these logs to a file or to a monitoring service like Prometheus via a configured exporter to begin collecting data points.

A more advanced setup involves deploying a network of sentry nodes in geographically diverse locations. Each sentry runs a light client or a full node and is programmed to broadcast a test transaction at regular intervals—for example, a simple ETH transfer or a smart contract call. By having each sentry record the timestamp when it broadcasts (t=0) and then listen for that same transaction hash to be announced by other connected peers, you can measure the propagation time between nodes. This multi-point measurement provides a map of network latency and helps distinguish between local node issues and global network problems.

The core of your monitoring system is the data pipeline. Logs must be parsed to extract the transaction hash, peer ID, and timestamps. This data is then aggregated in a time-series database like InfluxDB or TimescaleDB. You can build dashboards in Grafana to visualize metrics such as average propagation delay, propagation success rate (percentage of peers that see a tx within a threshold like 2 seconds), and peer distribution. Setting alerts on these dashboards for anomalies—like a sudden increase in propagation time above 5 seconds—allows for proactive investigation of network degradation or peer connectivity loss.
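
As a minimal sketch of that parsing stage, the Node.js script below scans a Geth log file for transaction hashes. The file name and the regular expression are assumptions about your client's log format, and a real pipeline would extract the timestamp embedded in each log line rather than using parse time.

javascript
const fs = require('fs');
const readline = require('readline');

// Assumed line format -- adjust the pattern to your client's actual log output.
const TX_HASH = /(0x[0-9a-fA-F]{64})/;

const rl = readline.createInterface({
  input: fs.createReadStream('geth.log'),
  crlfDelay: Infinity,
});

rl.on('line', (line) => {
  const match = line.match(TX_HASH);
  if (match) {
    // Emit one data point per sighting; ship these to your time-series database.
    // In production, parse the log line's own timestamp instead of Date.now().
    console.log(JSON.stringify({ hash: match[1], seenAt: Date.now() }));
  }
});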

For production-grade monitoring, especially in DeFi or exchange environments, consider integrating with specialized services. Chainscore Labs provides an API endpoint, https://api.chainscore.dev/v1/propagation, that can be queried with a transaction hash to retrieve its propagation histogram and peer reachability metrics across their global node network. This offers a complementary, external view of propagation health without needing to maintain your own global sentry infrastructure. Combining internal node logs with external API data creates a robust, multi-layered monitoring strategy for transaction propagation.
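
The exact request and response schema for this endpoint is not documented here, so the sketch below is a hypothetical query: it assumes the transaction hash is passed as a query parameter and that the response is JSON.

javascript
// Hypothetical client for the Chainscore propagation endpoint.
// The parameter name and response fields are assumptions; consult the
// API documentation for the real schema and authentication requirements.
async function getPropagation(txHash) {
  const url = `https://api.chainscore.dev/v1/propagation?tx=${txHash}`;
  const res = await fetch(url); // global fetch, Node.js 18+
  if (!res.ok) throw new Error(`Propagation query failed: ${res.status}`);
  return res.json(); // e.g. { histogram: [...], reachability: {...} }
}

getPropagation('0x...').then(console.log).catch(console.error);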

MONITORING GUIDE

Prerequisites and Setup

This guide details the essential tools and configurations required to monitor transaction propagation across the Ethereum network.

Effective transaction propagation monitoring requires a foundational setup of core infrastructure. You will need access to an Ethereum execution client (e.g., Geth, Erigon, Nethermind) running in archive mode to access full historical data. A Beacon Chain client (e.g., Lighthouse, Prysm) is necessary to track block proposals and attestations. For data processing and alerting, a time-series database like Prometheus and a visualization tool like Grafana are standard. Finally, ensure you have a reliable method for running custom scripts or services, such as a dedicated server, virtual machine, or containerized environment with Docker.

The primary data sources for monitoring are your node's RPC endpoints and logs. Configure your execution client's HTTP-RPC (--http in Geth) or WebSocket JSON-RPC port to be accessible by your monitoring stack. Enable metrics and verbose logging to capture propagation events; for Geth, the relevant flags include --metrics and --pprof. You will also need to subscribe to the P2P network to observe inbound and outbound transaction messages. Tools like Ethereum Node Monitor (ENM) or custom-built services using libraries like web3.py can listen for new pending transactions via the eth_subscribe method.
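
For reference, here is the raw JSON-RPC subscription that libraries such as web3.py wrap for you, using the ws package in Node.js; the endpoint URL is a placeholder for your node's WebSocket port.

javascript
const WebSocket = require('ws');

const ws = new WebSocket('ws://localhost:8546'); // your node's WebSocket endpoint

ws.on('open', () => {
  // Ask the node to push hashes of new pending transactions.
  ws.send(JSON.stringify({
    jsonrpc: '2.0',
    id: 1,
    method: 'eth_subscribe',
    params: ['newPendingTransactions'],
  }));
});

ws.on('message', (data) => {
  const msg = JSON.parse(data);
  // Subscription pushes arrive as eth_subscription notifications.
  if (msg.method === 'eth_subscription') {
    console.log(`${Date.now()} first saw ${msg.params.result}`);
  }
});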

Key metrics to instrument include transaction pool size, peer count, and propagation latency. Propagation latency is measured from the moment a transaction is first seen by your node to when it appears in a mined block. This requires timestamping incoming transactions and cross-referencing them with new block data. Implementing this involves parsing the newPendingTransactions subscription events and logging each hash and receipt time, then watching for that hash in new blocks via eth_getBlockByNumber calls. Consistent system time synchronization using NTP is critical for accurate latency calculations.
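
A minimal sketch of that first-seen-to-inclusion measurement with ethers.js (v5, matching the code style used later in this guide); the endpoint is a placeholder, and a production version would prune stale entries from the map.

javascript
const { ethers } = require('ethers');
const provider = new ethers.providers.WebSocketProvider('ws://localhost:8546');

const firstSeen = new Map(); // txHash -> ms timestamp of first observation

provider.on('pending', (txHash) => {
  if (!firstSeen.has(txHash)) firstSeen.set(txHash, Date.now());
});

provider.on('block', async (blockNumber) => {
  const block = await provider.getBlock(blockNumber);
  for (const hash of block.transactions) {
    const seen = firstSeen.get(hash);
    if (seen !== undefined) {
      console.log(`${hash} included after ${Date.now() - seen} ms`);
      firstSeen.delete(hash); // avoid unbounded memory growth
    }
  }
});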

For a production monitoring setup, consider deploying multiple nodes in geographically diverse locations. This allows you to measure propagation differences across network segments and identify if delays are localized. You can run lightweight nodes or bootnodes in different data centers or cloud regions (e.g., AWS us-east-1, eu-central-1). Comparing metrics from these nodes will give you a true view of global propagation health, helping to distinguish between a problem with a specific peer and a wider network issue.

Finally, establish a baseline for normal network behavior. Record metrics like average propagation time to 50% of peers (typically 1-3 seconds under normal conditions) and typical transaction pool churn. This baseline is essential for configuring meaningful alerts in Grafana or a service like Alertmanager. An alert should trigger if propagation latency exceeds your baseline by a significant factor (e.g., 10 seconds), indicating potential network congestion, a peer connectivity problem, or a malicious broadcasting attack.

MONITORING FUNDAMENTALS

Key Concepts: Mempool, Gossip, and Latency

Understanding the core network mechanics of transaction propagation is essential for building reliable Web3 applications and monitoring node health.

The mempool (memory pool) is a node's holding area for transactions that have been broadcast to the network but are not yet included in a block. It is a critical component of blockchain consensus, acting as a decentralized queue. Each node maintains its own mempool, and its contents can vary based on network connectivity and the node's transaction fee policy. Monitoring mempool size, composition (e.g., pending vs. stuck transactions), and fee distribution provides insights into network congestion and helps users estimate optimal gas prices for timely inclusion.
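
For Geth specifically, the txpool RPC namespace (when enabled via --http.api) exposes pool counts directly; a small sampler might look like the sketch below.

javascript
const { ethers } = require('ethers');
const provider = new ethers.providers.JsonRpcProvider('http://localhost:8545');

// Geth-specific call; requires the txpool namespace in --http.api.
async function samplePool() {
  const status = await provider.send('txpool_status', []);
  // Geth returns hex-encoded counts.
  console.log({
    pending: parseInt(status.pending, 16),
    queued: parseInt(status.queued, 16),
  });
}

setInterval(samplePool, 15000); // sample every 15 seconds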

Transaction gossip is the peer-to-peer protocol used to propagate new transactions and blocks across the network. When a node receives a valid transaction, it forwards it to its connected peers, who then forward it to theirs, creating a viral spread. This process is not instantaneous; latency—the delay between a transaction being broadcast and being seen by other nodes—directly impacts front-running risks and arbitrage opportunities. High latency can lead to stale transactions and missed opportunities in time-sensitive DeFi operations.

To monitor propagation, you need to track metrics across multiple nodes in different geographic regions. Key indicators include: propagation time (time from first-seen to network-wide visibility), mempool churn rate (how quickly transactions are cleared), and orphan rate (transactions that never confirm). Tools like Ethereum's eth-netstats or custom listeners using WebSocket subscriptions (e.g., eth_subscribe('newPendingTransactions')) can capture this data. For example, a basic Python script using Web3.py can log when a transaction is first observed, allowing you to measure its spread.

Setting up effective monitoring involves deploying lightweight client agents on strategic nodes. These agents should subscribe to pending transactions and record timestamps with microsecond precision. By comparing timestamps across nodes, you can map propagation paths and identify bottlenecks. For Ethereum, services like Blocknative's Mempool Explorer offer a managed view, but for custom chains or specific needs, building your own system is often necessary. This data is vital for validators, DEX operators, and MEV searchers to optimize their strategies and ensure network fairness.

Latency is influenced by physical infrastructure, peer count, and client software. A node with poor connectivity or a non-default maxpeers setting will propagate transactions slower. Monitoring tools should therefore also track node-specific health metrics: peer count, bandwidth usage, and client version. Anomalies in propagation time often correlate with node health issues. By correlating mempool metrics with gossip latency, teams can diagnose whether a transaction delay is due to network-wide congestion or a local node failure, enabling faster resolution and more reliable service.
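
The standard net and web3 JSON-RPC namespaces cover the basic health signals mentioned above; a brief sketch, assuming a local node:

javascript
const { ethers } = require('ethers');
const provider = new ethers.providers.JsonRpcProvider('http://localhost:8545');

// Poll basic node-health signals alongside propagation metrics.
async function checkHealth() {
  const [peersHex, clientVersion] = await Promise.all([
    provider.send('net_peerCount', []),    // hex-encoded peer count
    provider.send('web3_clientVersion', []),
  ]);
  console.log({ peers: parseInt(peersHex, 16), clientVersion });
}

checkHealth().catch(console.error);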

TRANSACTION PROPAGATION

Monitoring Tools and Libraries

Tools and libraries for monitoring transaction propagation, latency, and network health across blockchain nodes.

Chainscore Node Monitoring

Open-source toolkit for measuring transaction propagation performance between nodes.

Core Components:

  • Propagation Latency Test: Sends a test transaction and measures time to appear on remote nodes
  • Peer Connectivity Map: Visualizes your node's connections to the network graph
  • Mempool Comparison: Checks transaction set consistency across connected peers

Deploy the agent alongside your node to generate propagation heatmaps and identify slow peers.

Ethers.js Provider Event Listening

Programmatic monitoring using the Ethers.js library to listen for pending transactions.

javascript
// Requires a WebSocket provider; HTTP providers cannot push 'pending' events.
provider.on('pending', (txHash) => {
  provider.getTransaction(txHash).then((tx) => {
    // getTransaction can return null before the tx reaches our mempool.
    if (tx) console.log(`Pending TX: ${tx.hash} Gas: ${tx.gasPrice}`);
  });
});

This approach allows you to:

  • Time the delay between seeing a transaction and its inclusion in a block
  • Build a local view of the mempool for analysis
  • Trigger alerts for specific transaction patterns or high-value transfers

DEVELOPER TOOLING

Building a Test Transaction Harness

A transaction harness is a controlled environment for testing how transactions propagate through a network, essential for debugging and performance analysis.

A transaction test harness is a developer tool that simulates the lifecycle of a transaction from creation to network propagation and finalization. Unlike unit tests that check contract logic in isolation, a harness focuses on the network layer: how quickly a signed transaction is gossiped to peers, the rate of mempool acceptance, and the factors influencing block inclusion. This is critical for testing gas optimization, front-running resistance, and the reliability of transaction submission under different network conditions like congestion or fork scenarios.

Setting up a basic monitoring harness involves three core components: a transaction generator, a network listener, and a metrics aggregator. The generator creates and signs transactions, often using a library like ethers.js or web3.py. The listener subscribes to events from connected nodes—such as pendingTransactions on an Ethereum Geth node—to track when the transaction is first seen. The aggregator logs timestamps and node sources, calculating propagation latency. For Ethereum, you can connect to multiple node providers (e.g., Alchemy, Infura, a local node) to compare propagation paths.

Here's a simplified code snippet using ethers.js to monitor transaction propagation time:

javascript
const { ethers } = require('ethers');
const provider = new ethers.providers.WebSocketProvider('wss://mainnet.infura.io/ws/v3/YOUR_KEY');

let myTxHash;    // set once our test transaction is sent
let txTimestamp; // ms timestamp recorded at broadcast time

provider.on('pending', (txHash) => {
  // Note: a pending event can arrive before sendTransaction resolves,
  // in which case this simple comparison will miss it.
  if (txHash === myTxHash) {
    const latency = Date.now() - txTimestamp;
    console.log(`Tx ${txHash} observed after ${latency}ms`);
  }
});

// Send a test transaction
async function main() {
  const wallet = new ethers.Wallet('PRIVATE_KEY', provider);
  txTimestamp = Date.now(); // record just before broadcast
  const tx = await wallet.sendTransaction({
    to: '0x...',
    value: ethers.utils.parseEther('0.001')
  });
  myTxHash = tx.hash;
  console.log(`Sent tx: ${tx.hash}`);
}

main().catch(console.error);

This listens for pending transactions and logs when our specific transaction appears.

For more advanced analysis, integrate with a mempool explorer API like Blocknative or run your own Erigon or Geth node with verbose logging to see peer-to-peer gossip. Key metrics to track include: first-seen time across different geographic nodes, propagation delay percentiles (e.g., 95% of nodes see the TX within 500ms), and inclusion delay (time from send to block confirmation). This data helps identify if your transaction is being throttled by specific node clients or if certain gas price strategies lead to slower propagation.
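
Computing those percentiles from recorded first-seen latencies needs nothing more than a sort; a nearest-rank helper like the one below is enough for dashboard purposes.

javascript
// Nearest-rank percentile over latency samples (in ms).
function percentile(samples, p) {
  if (samples.length === 0) return NaN;
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const latencies = [120, 250, 95, 480, 310, 2100, 180]; // example data
console.log(`p50: ${percentile(latencies, 50)} ms`); // 250 ms
console.log(`p95: ${percentile(latencies, 95)} ms`); // 2100 ms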

Practical use cases for a transaction harness include testing private transaction relayers like Flashbots, where you need to verify a bundle is not leaked to the public mempool, or stress-testing a new Layer 2 network's sequencer. By running the harness in a testnet environment first—such as Sepolia or Holesky—you can safely experiment with different transaction parameters and network loads without spending real funds. Documenting propagation characteristics under various conditions creates a benchmark for your application's expected performance in production.

Ultimately, a well-instrumented transaction harness shifts transaction reliability from guesswork to data-driven engineering. It allows developers to answer specific questions: Does my RPC provider propagate transactions faster than others? What is the optimal gas premium for quick inclusion during a network surge? By building this monitoring into your CI/CD pipeline, you can catch regression in transaction submission logic before it impacts end-users, ensuring a more robust and predictable decentralized application.

MONITORING DASHBOARD

Key Propagation Metrics and Their Meaning

Essential metrics to track for analyzing transaction propagation health and network performance.

Metric                     | Definition                                                             | Healthy Range            | Investigation Threshold
---------------------------|------------------------------------------------------------------------|--------------------------|------------------------
Propagation Latency (P50)  | Median time for a transaction to reach 50% of monitored nodes         | < 500 ms                 | > 1 sec
Propagation Latency (P95)  | Time for 95% of nodes to receive the transaction                      | < 2 sec                  | > 5 sec
Propagation Success Rate   | Percentage of nodes that successfully receive a broadcast transaction | > 99.5%                  | < 98%
Orphaned / Stale Rate      | Percentage of transactions not included in the next block             | < 0.5%                   | > 2%
Mempool Churn Rate         | Rate at which transactions are cleared from the mempool               | Varies by chain load     | Sudden drop to 0%
Peer Count (Outbound)      | Number of active outbound connections to other nodes                  | 8-50 (varies by client)  | < 5
Invalid Tx Rejection Rate  | Percentage of transactions rejected due to invalidity                 | < 0.1%                   | > 1%

DATA COLLECTION, STORAGE, AND VISUALIZATION

Data Collection, Storage, and Visualization

Learn how to build a system to track transaction propagation latency and reliability across a blockchain network.

Transaction propagation monitoring is critical for understanding network health and user experience. It involves measuring the time it takes for a transaction to spread from its origin node to the majority of the network. High latency or inconsistent propagation can indicate network congestion, peer connectivity issues, or the presence of censorship. By setting up monitoring, developers and node operators can identify bottlenecks, validate the effectiveness of their node's peer connections, and gather data to benchmark performance against network averages. This is foundational data for anyone running infrastructure or building latency-sensitive applications like high-frequency trading bots or real-time settlement systems.

Data collection begins by instrumenting your node. For Ethereum clients like Geth or Erigon, you can subscribe to the P2P network using the devp2p protocol or leverage the admin RPC APIs to log peer connections and message propagation. A practical method is to broadcast a uniquely identifiable test transaction—such as a zero-value transfer to yourself—from your node and then query multiple geographically distributed full or archive nodes via their RPC endpoints (eth_getTransactionByHash) to record when they first see it. Geth's --metrics flag exposes Prometheus metrics, including p2p_ingress_connections_total and p2p_egress_connections_total, which can be scraped for connection health.
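
As a sketch of that method, the script below polls eth_getTransactionByHash on several endpoints until each reports the test transaction; the URLs are placeholders, and a production version would push results into your metrics store rather than logging them.

javascript
const { ethers } = require('ethers');

// Placeholder endpoints -- substitute your own distributed nodes.
const endpoints = [
  'https://node-us-east.example.com',
  'https://node-eu-central.example.com',
];

// Poll each node until it reports the transaction (or we give up),
// recording how long the transaction took to become visible there.
async function recordFirstSeen(txHash, sentAt, timeoutMs = 30000) {
  await Promise.all(endpoints.map(async (url) => {
    const provider = new ethers.providers.JsonRpcProvider(url);
    for (let waited = 0; waited < timeoutMs; waited += 100) {
      const tx = await provider.getTransaction(txHash);
      if (tx) {
        console.log(`${url} first saw tx after ${Date.now() - sentAt} ms`);
        return;
      }
      await new Promise((r) => setTimeout(r, 100)); // poll every 100 ms
    }
    console.log(`${url} did not see tx within ${timeoutMs} ms`);
  }));
}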

For systematic storage, time-series databases like Prometheus paired with Grafana for visualization are the industry standard. You would configure Prometheus to scrape metrics from your instrumented nodes. For custom propagation latency data, write a script that publishes results to Prometheus using its client libraries (e.g., prom-client for Node.js) or push them to a more flexible datastore like TimescaleDB (PostgreSQL extension) or InfluxDB. Structuring your data with tags for transaction_hash, source_region, target_node, and latency_ms allows for powerful aggregated queries. This setup enables you to not only store raw data but also calculate key metrics like average propagation time, standard deviation, and propagation success rate across node cohorts.
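
Using prom-client (mentioned above), a latency histogram labeled with the suggested tags might look like the following; the metric name and bucket boundaries are illustrative choices, not a fixed schema.

javascript
const http = require('http');
const client = require('prom-client');

// Histogram of propagation latency, labeled per the tagging scheme above.
const propagationLatency = new client.Histogram({
  name: 'tx_propagation_latency_ms',
  help: 'Time for a test transaction to reach a target node, in ms',
  labelNames: ['source_region', 'target_node'],
  buckets: [50, 100, 250, 500, 1000, 2000, 5000],
});

// Record one measurement (e.g. from the polling script above).
propagationLatency.labels('us-east-1', 'node-eu-central').observe(312);

// Expose /metrics for Prometheus to scrape.
http.createServer(async (req, res) => {
  res.setHeader('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
}).listen(9091);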

Visualization transforms raw data into actionable insights. In Grafana, create dashboards with panels showing: a world map with nodes colored by latency using the geomap panel, a time-series graph of average propagation latency, and a histogram showing the distribution of propagation times. Alerts can be configured in Grafana or Prometheus Alertmanager to trigger when latency exceeds a threshold (e.g., >2 seconds for 5 minutes) or if propagation to a key region fails. For a more advanced analysis, you can visualize the propagation as a network graph, using libraries like vis.js, to see how transactions flow through specific peer connections, potentially identifying weak links in the network topology.

Beyond basic setup, consider these advanced practices. Implement synthetic monitoring by periodically sending test transactions from multiple cloud regions (using AWS Lambda or GCP Cloud Functions) to different node providers like Infura, Alchemy, and your own infrastructure for comparative analysis. Use the Ethereum Network Intelligence API (eth-netstats) as a reference for overall network metrics. For L2s or other EVM chains, the principles are the same, but you must adapt to their specific RPC methods and peer discovery. Always ensure your monitoring transactions are non-spammy (use a private testnet or minimally fund a dedicated address) and that your node's RPC endpoints are secured to prevent unauthorized access to your metrics pipeline.

TRANSACTION PROPAGATION

Frequently Asked Questions

Common questions and troubleshooting steps for setting up and using transaction propagation monitoring with Chainscore.

What is transaction propagation monitoring, and why do I need it?

Transaction propagation monitoring tracks the journey of a blockchain transaction from the moment it's broadcast by a wallet until it's included in a block. It measures key metrics like peer-to-peer (P2P) network latency, mempool acceptance rates, and propagation paths across nodes.

You need it to:

  • Diagnose failures: Identify if a transaction was dropped by the network or rejected by nodes.
  • Optimize performance: Ensure your dApp's transactions reach miners/validators quickly to avoid front-running or high slippage.
  • Ensure reliability: Monitor the health of your node connections and the broader P2P network you rely on.

MONITORING ESSENTIALS

Conclusion and Next Steps

You have now configured a system to monitor transaction propagation across the mempool. This guide has covered the core setup, from establishing a WebSocket connection to analyzing propagation delays and orphan rates.

The monitoring system you've built provides a foundational view of network health and node performance. By tracking metrics like first_seen timestamps, propagation delays, and orphaned transactions, you can identify bottlenecks—whether they stem from your node's peer connections, geographic location, or network-wide congestion. This data is critical for optimizing node placement, selecting reliable RPC providers, and ensuring your applications broadcast transactions efficiently. Regularly review these metrics to establish a performance baseline for your node.

To deepen your analysis, consider integrating additional data sources. Correlate your propagation data with on-chain metrics from block explorers like Etherscan or block headers from your node's RPC. For example, a spike in propagation delay during periods of high base_fee or full blocks can indicate network-wide stress. You can also implement alerting using tools like Prometheus and Grafana to notify you when propagation delays exceed a threshold (e.g., >2 seconds) or when the orphan rate climbs above 1%, prompting immediate investigation.

For developers building high-frequency trading bots or latency-sensitive dApps, the next step is to implement transaction simulation and replacement strategies. Use the eth_estimateGas and eth_call RPC methods to simulate transactions before broadcasting. If a transaction is stuck, use eth_sendRawTransaction to rebroadcast it with a higher maxPriorityFeePerGas and the same nonce, which replaces the original. Libraries like ethers.js and web3.py provide utilities for managing these workflows. Always test replacement logic on a testnet like Sepolia or Holesky first.
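
A minimal sketch of such a fee-bumped replacement with ethers.js (v5), assuming an EIP-1559 transaction; most clients require roughly a 10% fee bump before accepting a replacement, so this sketch uses 25% for headroom.

javascript
// Replace a stuck EIP-1559 transaction by re-sending with the same
// nonce and higher fees; the original is evicted from the mempool.
async function replaceTx(wallet, stuckTx) {
  const bump = (fee) => fee.mul(125).div(100); // +25% headroom
  const replacement = await wallet.sendTransaction({
    to: stuckTx.to,
    value: stuckTx.value,
    nonce: stuckTx.nonce, // same nonce is what triggers replacement
    maxFeePerGas: bump(stuckTx.maxFeePerGas),
    maxPriorityFeePerGas: bump(stuckTx.maxPriorityFeePerGas),
  });
  return replacement.wait(); // resolves once the replacement is mined
}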

The final, advanced step is to move from a single-node observer to a multi-node surveillance network. Deploy lightweight monitoring agents across different cloud regions and from various providers (e.g., Alchemy, Infura, QuickNode, and your own nodes). Aggregate this data to create a mesh view of the network, distinguishing between localized issues and global events. This approach helps you avoid single points of failure and provides a robust data set for analyzing geographic propagation paths and the reliability of different infrastructure providers.
