
Setting Up Block Propagation Visibility

A technical guide for developers to configure and monitor block propagation latency and peer behavior on Ethereum and Bitcoin nodes using built-in metrics and dashboards.
NETWORK HEALTH

Introduction to Block Propagation Monitoring

Block propagation monitoring tracks the speed and reliability with which new blocks are distributed across a peer-to-peer network, providing a critical view of a blockchain's health and performance.

In a decentralized blockchain network, a new block must be transmitted from the miner or validator who created it to every other participating node. The speed and reliability of this process, known as block propagation, are fundamental performance metrics. Slow or inconsistent propagation increases network latency and orphan rates (competing blocks created before the previous one is widely known), and can weaken the security assumptions of consensus mechanisms like Proof-of-Work. Monitoring this process is essential for node operators, infrastructure providers, and protocol developers to ensure network stability and user experience.

Setting up visibility into block propagation involves instrumenting your node to log and analyze key events. The primary data points to capture include: the timestamp when a new block is first seen (block_announce), the timestamp when the full block data is received and validated (block_received), and the peer ID of the node that sent it. By comparing these timestamps across your node's connections, you can calculate propagation delays. Tools like Geth's --metrics flag or Erigon's detailed logging can expose this data, which can then be piped into time-series databases like Prometheus for visualization in Grafana.

A practical implementation involves watching for new block headers via your node's JSON-RPC interface. Over a WebSocket connection, the eth_subscribe method with the "newHeads" parameter pushes a notification for each new header; over plain HTTP, which does not support subscriptions, you can poll a "latest" block filter instead. In either case, you then request the full block via eth_getBlockByNumber and log the delta. Here's a simplified Python snippet using Web3.py that polls over HTTP:

python
import time
from web3 import Web3

w3 = Web3(Web3.HTTPProvider('http://localhost:8545'))

def handle_new_block(block_hash):
    receipt_time = time.time()
    block = w3.eth.get_block(block_hash, full_transactions=False)
    # Coarse propagation delay: local receipt time minus the block's own timestamp
    print(f"Block {block['number']} received at {receipt_time:.3f} (delta {receipt_time - block['timestamp']:.1f}s)")

# eth_subscribe('newHeads') requires a WebSocket endpoint; over HTTP, poll a filter
block_filter = w3.eth.filter('latest')
while True:
    for block_hash in block_filter.get_new_entries():
        handle_new_block(block_hash)
    time.sleep(1)

Analyzing the collected data reveals network topology and bottlenecks. You can identify slow peers that consistently relay blocks late, which may indicate poor connectivity or being far from the propagation frontier. Monitoring propagation times by geographic region can uncover infrastructural weak points. Furthermore, observing spikes in propagation delay after a hard fork or a major DeFi transaction can help correlate network performance with on-chain activity. This analysis is crucial for optimizing peer selection algorithms and informing decisions about node geographic distribution.

For production systems, consider using specialized monitoring services or open-source tools that aggregate data from multiple nodes; public monitors such as ethstats or Blocknative's Mempool Explorer provide a broader network perspective. However, running your own monitoring gives you tailored insights into your specific node's experience, which is vital for ensuring the reliability of applications that depend on low-latency block data, such as MEV searchers, bridges, or oracles. Consistent monitoring acts as an early warning system for network degradation.

BLOCK PROPAGATION VISIBILITY

Prerequisites and Setup

This guide details the prerequisites and initial setup required to monitor and analyze block propagation across a blockchain network.

Before configuring block propagation visibility, ensure your environment meets the core requirements. You will need access to a blockchain node (e.g., Geth, Erigon, or a consensus client like Lighthouse) that can be configured to log or expose peer-to-peer (P2P) network data. A fundamental understanding of your node's configuration file (like geth.toml or config.yaml) is essential. For comprehensive analysis, you should also have a time-series database (e.g., Prometheus, InfluxDB) and a data visualization tool (e.g., Grafana) ready to ingest metrics. Basic familiarity with command-line interfaces and network diagnostics (using tools like netstat or tcpdump) will be highly beneficial.

The primary setup involves enabling verbose logging on your node to capture propagation events. For an Ethereum Geth node, raise the log verbosity for detailed P2P message logging (for example, --verbosity 5, or the equivalent setting in your geth.toml). Simultaneously, you must configure your monitoring stack. This involves setting up exporters: Geth's own metrics endpoint (enabled with --metrics and served on port 6060 by default) exposes node-level metrics, the Prometheus Node Exporter covers host-level metrics such as disk and network I/O, and a custom script can extract propagation events from the logs. These metrics are then stored in your time-series database for later querying and visualization.
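
For example, a minimal Prometheus scrape job for a Geth node started with --metrics might look like the following sketch (the job name and scrape interval are arbitrary choices; /debug/metrics/prometheus is Geth's Prometheus-format endpoint):

code
scrape_configs:
  - job_name: 'geth'                          # arbitrary label for this target
    metrics_path: /debug/metrics/prometheus   # Geth's Prometheus-format endpoint
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:6060']           # matches --metrics.port (default 6060)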

Once your data pipeline is established, you can begin to instrument specific propagation tracking. This often requires writing or deploying a monitoring agent that subscribes to your node's event stream. For instance, you can listen for new block announcements via the eth_subscribe JSON-RPC method to timestamp when a block is first seen. The agent calculates the latency by comparing this timestamp to the block's official timestamp or to when it was first seen by a reference node. These calculated latencies, along with peer connection counts and bandwidth usage, become the key metrics pushed to your dashboard, forming the basis of your block propagation visibility.
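
A minimal sketch of such an agent, assuming your node's WebSocket endpoint is enabled on the default port 8546 and that the third-party websockets and prometheus_client packages are installed (both are assumptions, not requirements of this guide):

python
import asyncio
import json
import time

import websockets
from prometheus_client import Gauge, start_http_server

LATENCY = Gauge('block_propagation_latency_seconds',
                'Local receipt time minus the block header timestamp')

async def watch(ws_url='ws://localhost:8546'):
    async with websockets.connect(ws_url) as ws:
        # Subscribe to new block headers via eth_subscribe
        await ws.send(json.dumps({'jsonrpc': '2.0', 'id': 1,
                                  'method': 'eth_subscribe',
                                  'params': ['newHeads']}))
        await ws.recv()  # discard the subscription confirmation
        async for msg in ws:
            head = json.loads(msg)['params']['result']
            # Header fields are hex-encoded; timestamp is Unix seconds
            latency = time.time() - int(head['timestamp'], 16)
            LATENCY.set(latency)
            print(f"block {int(head['number'], 16)}: {latency:.2f}s")

start_http_server(9101)  # expose the gauge for Prometheus to scrape
asyncio.run(watch())

Running the same agent next to a reference node and diffing the two series gives you the relative-latency comparison described above.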

BLOCKCHAIN FUNDAMENTALS

Key Concepts: Propagation, Latency, and Orphans

Understanding how blocks move through a network is critical for node operators and developers. This guide explains the core mechanics of block propagation, the resulting latency, and the creation of orphaned blocks.

Block propagation is the process by which a newly mined or validated block is transmitted across a peer-to-peer network. When a miner successfully creates a block, it broadcasts this block to its immediate peers. Each peer validates the block—checking the proof-of-work, transaction signatures, and state transitions—before forwarding it to its own peers. The speed and efficiency of this gossip protocol directly impact network health. Slow propagation increases the chance that two miners will produce competing blocks simultaneously, leading to network forks. Optimizing propagation involves efficient serialization formats and minimizing redundant data transmission.

Network latency is the delay between a block's creation and its reception by the majority of nodes. It is influenced by physical distance, internet bandwidth, node software performance, and the size of the block. High latency creates a window in which nodes hold different views of the blockchain's tip. For example, if a 2 MB Bitcoin block takes 10 seconds to reach 50% of the network, miners still working on the old chain tip waste computational power for that interval. Tools like bitcoin-cli getpeerinfo expose per-peer round-trip (ping) times, a useful proxy for relay delay between nodes. Reducing latency is a key goal of scaling work, driving innovations like compact block relay (BIP 152) and Graphene.
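
As a quick illustration, per-peer ping times can be pulled from a running node with a few lines of Python (a sketch assuming bitcoin-cli is on your PATH and pointed at your node):

python
import json
import subprocess

# Ask the local Bitcoin Core node for its current peer list
peers = json.loads(subprocess.check_output(['bitcoin-cli', 'getpeerinfo']))
for p in sorted(peers, key=lambda p: p.get('pingtime', float('inf'))):
    # pingtime is the last measured round-trip in seconds (absent until measured)
    print(f"peer {p['id']:>4}  ping={p.get('pingtime', 'n/a')}  addr={p['addr']}")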

An orphaned block (or stale block) is a valid block that is not part of the canonical chain. This occurs when two miners produce blocks at similar times, creating a temporary fork. The network eventually converges on one chain based on the most-work chain rule (or, in proof-of-stake, on the branch favored by the protocol's fork-choice rule), and the losing block is orphaned. The transactions in an orphaned block typically return to the mempool to be included in a future block. Orphan rates are a key network health metric; a high rate indicates propagation problems and wastes hashrate on stale blocks, reducing effective security.

The relationship between these concepts is direct: poor propagation increases latency, which in turn raises the orphan rate. For developers, understanding this is essential for designing robust applications. A DeFi protocol, for instance, should require multiple confirmations to ensure a transaction is buried deep enough in the canonical chain, protecting against chain reorganizations that could include orphaned blocks. Node operators can mitigate these issues by maintaining high-bandwidth connections and using relay networks to receive blocks faster than the standard P2P gossip allows.

BLOCK PROPAGATION

Essential Monitoring Tools

Monitor the health and performance of your blockchain node by tracking block propagation times, peer connections, and network latency.

NETWORK VISIBILITY

Step 1: Configure Geth for Ethereum Metrics

To analyze block propagation, you must first configure your Geth node to collect and expose the necessary performance data. This setup enables you to monitor the health and efficiency of your connection to the Ethereum network.

Geth, the official Go implementation of the Ethereum execution client, includes built-in metrics collection. By default, this feature is disabled to conserve resources. To activate it, start your Geth client with the --metrics flag. This exposes a local HTTP endpoint (typically on port 6060) that serves the raw metrics registry as JSON at /debug/metrics and in Prometheus text format at /debug/metrics/prometheus. For example, the command geth --metrics --metrics.addr 0.0.0.0 --metrics.port 6060 makes the metrics server reachable from other machines on your network, which is useful for remote monitoring tools; avoid exposing it to the public internet.

The metrics endpoint provides granular data critical for propagation analysis. Exact metric names vary between Geth versions, but key ones include chain/head/block (the latest block number), p2p/peers (connected peer count), and the chain/inserts timer, which records how long block insertion takes; during sync, the eth/downloader gauges for headers, bodies, and states are invaluable. Together, these metrics let you measure the time between receiving a block header and fully processing it, pinpointing delays in your node's synchronization pipeline.
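
A quick way to eyeball these values is to pull the JSON registry directly. This sketch assumes the third-party requests package and filters by substring precisely because metric names differ across Geth versions:

python
import requests

# Fetch Geth's full metrics registry as JSON (requires --metrics)
metrics = requests.get('http://localhost:6060/debug/metrics', timeout=5).json()

# Filter by substring, since exact metric names vary across Geth versions
for name, value in sorted(metrics.items()):
    if 'chain/head' in name or 'chain/inserts' in name or name == 'p2p/peers':
        print(f'{name}: {value}')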

For advanced analysis, you can configure Geth to export metrics to external systems like Prometheus, a popular monitoring toolkit. Use the flag --metrics.influxdb to send data directly to an InfluxDB instance, or scrape the HTTP endpoint with Prometheus. This setup enables long-term data retention, visualization in Grafana, and alerting. Configuring these tools allows you to create dashboards that track block arrival latency, peer connection stability, and resource usage over time, transforming raw data into actionable network intelligence.

DATA COLLECTION

Step 2: Configure Bitcoin Core for Propagation Data

Configure your Bitcoin Core node to log the detailed mempool and block propagation data required for network analysis.

The default Bitcoin Core configuration prioritizes performance and privacy. To collect the granular data needed to analyze block propagation, you must enable specific debug logging categories. This is done by adding directives to your bitcoin.conf file, typically located in the Bitcoin data directory (e.g., ~/.bitcoin/bitcoin.conf on Linux). The most critical settings control the verbosity of logging for the mempool, net, and validation subsystems.

Add the following lines to your bitcoin.conf to enable comprehensive propagation logging:

code
debug=net
debug=mempool
debug=validation
logips=1
logtimemicros=1
  • debug=net: Logs peer-to-peer network messages, including inv (inventory), getdata, block, and tx announcements.
  • debug=mempool: Records transactions entering and leaving the node's memory pool.
  • debug=validation: Provides detailed output on block and transaction validation checks.
  • logips=1: Includes peer IP addresses in logs (essential for network topology analysis).
  • logtimemicros=1: Adds microsecond-precision timestamps to log entries, crucial for measuring propagation latency.

After updating the configuration, restart your Bitcoin Core node for the changes to take effect. The debug output will be written to the debug.log file in your data directory. Be aware that enabling these flags significantly increases log file size and can impact disk I/O. It is recommended to have ample storage and to consider log rotation. For a long-running analysis node, you may want to pipe logs directly to a processing script or use the -printtoconsole flag for real-time ingestion.

To verify your configuration is working, grep the debug.log for key messages after a restart. Look for entries like received: block to confirm net logging and UpdateTip: lines to confirm validation logging. The timestamps should include microseconds (e.g., 2024-01-15T10:30:45.123456Z). Your node is now instrumented to capture the raw data feed of block and transaction propagation events as they occur on the Bitcoin peer-to-peer network.
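
From here, a small log follower can turn those entries into structured data. The sketch below tails debug.log and extracts block-arrival timestamps; the exact message format differs between Bitcoin Core versions, so treat the regex as illustrative:

python
import re
import time
from pathlib import Path

LOG = Path.home() / '.bitcoin' / 'debug.log'
# Matches lines like "2024-01-15T10:30:45.123456Z ... received: block ..."
PATTERN = re.compile(r'^(\S+Z).*received: block\b')

with LOG.open() as f:
    f.seek(0, 2)  # jump to the end of the file, like tail -f
    while True:
        line = f.readline()
        if not line:
            time.sleep(0.1)  # wait for Bitcoin Core to write more
            continue
        m = PATTERN.match(line)
        if m:
            print(f'block message received at {m.group(1)}')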

VISUALIZATION

Step 3: Build a Propagation Dashboard with Grafana

Transform raw block propagation data into actionable insights with a custom Grafana dashboard. This guide covers connecting Prometheus, building key visualizations, and interpreting network health metrics.

After configuring Prometheus to scrape metrics from your Chainscore node, the next step is to visualize this data. Grafana is the industry-standard tool for this, allowing you to create interactive dashboards that display block propagation times, peer performance, and network latency in real-time. You can install Grafana using Docker (docker run -d -p 3000:3000 grafana/grafana-oss) or via your system's package manager. Once running, log into the web interface (typically at http://localhost:3000) and add your Prometheus server as a data source. The connection URL will be http://<your-prometheus-host>:9090.

With the data source configured, you can begin building panels. Start with a Time Series graph for the core metric: chainscore_block_propagation_duration_seconds. This shows the latency for each block from proposal to your node's reception. Use PromQL queries like histogram_quantile(0.95, rate(chainscore_block_propagation_duration_seconds_bucket[5m])) to track the 95th percentile propagation time, a key indicator of network health. Create another panel for chainscore_peers_count to monitor your node's connectivity, and a stat panel for chainscore_head_block_number to track chain progression.
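
For copy-paste convenience, the example panel queries look like this (metric names follow the Chainscore agent naming used above):

code
# P95 block propagation latency over a 5-minute window
histogram_quantile(0.95, rate(chainscore_block_propagation_duration_seconds_bucket[5m]))

# Current peer count (time series or stat panel)
chainscore_peers_count

# Latest block number (stat panel)
chainscore_head_block_number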

Effective dashboards segment data for clarity. Create a row for Network Overview with high-level stats, and another for Peer Analysis. For peer analysis, use the by (remote_peer_id) modifier in your queries to break down propagation times per peer. This can identify slow or unreliable peers. For example, sum by (remote_peer_id) (rate(chainscore_block_propagation_duration_seconds_count[5m])) shows which peers are sending you the most blocks. You can set alert rules in Grafana to notify you via Slack or email if propagation latency exceeds a threshold (e.g., >2 seconds) or if peer count drops suddenly.
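
If you would rather keep alerting in Prometheus itself than in Grafana, an equivalent rule might look like the sketch below; the metric name and the 2-second threshold follow the examples above:

code
groups:
  - name: block-propagation
    rules:
      - alert: SlowBlockPropagation
        # Fire when the 5-minute P95 propagation latency stays above 2s for 10m
        expr: histogram_quantile(0.95, rate(chainscore_block_propagation_duration_seconds_bucket[5m])) > 2
        for: 10m
        labels:
          severity: warning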

To interpret the dashboard, focus on trends rather than individual spikes. Consistently high propagation percentiles may indicate network congestion or that your node needs better bandwidth or peer connections. A widening gap between the median and 95th percentile latency suggests inconsistent performance, often due to geographic peer distribution. Correlate propagation delays with chainscore_block_size_bytes metrics to see if larger blocks cause slowdowns. Regularly exporting and versioning your dashboard JSON configuration allows you to replicate setups across development, staging, and production environments.

BLOCK PROPAGATION

Key Metrics: Ethereum vs. Bitcoin

Comparison of core network metrics relevant to monitoring block propagation and network health.

Metric                  | Ethereum (Post-Merge)        | Bitcoin
----------------------- | ---------------------------- | ---------------------
Target Block Time       | 12 seconds                   | 10 minutes
Average Block Size      | ~80-150 KB                   | ~1.0-1.5 MB
Propagation Time (P95)  | < 1 second                   | 2-4 seconds
Uncle Rate / Stale Rate | ~1-2%                        | < 0.5%
Consensus Mechanism     | Proof-of-Stake (PoS)         | Proof-of-Work (PoW)
Peers (Typical Node)    | 50-100                       | 8-125
Propagation Protocol    | Ethereum Wire Protocol (ETH) | Bitcoin P2P Protocol
Mempool Behavior        | Local, non-global            | Global, consistent

BLOCK PROPAGATION

Troubleshooting Common Issues

Common challenges and solutions when setting up and monitoring block propagation across Ethereum nodes.

If your node is not receiving new blocks, it is typically a connectivity or peer issue. First, check your node's peer count using the admin RPC method admin_peers. A healthy Geth or Erigon node should have 50-100 peers. If the count is low, ensure your node is fully synced and that port 30303 (or your configured P2P port) is open and forwarded correctly on your firewall and router. Also verify your node is using the correct network ID (1 for mainnet). A stalled chain head can also result from a corrupted database; consider resyncing if the other checks pass.
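
A quick way to run this check programmatically is via Web3.py's Geth admin module. This is a sketch; it assumes your node exposes the admin namespace (e.g., --http.api eth,admin), which you should only enable on a trusted interface:

python
from web3 import Web3

# admin_* methods are Geth-specific and must be explicitly exposed by the node
w3 = Web3(Web3.HTTPProvider('http://localhost:8545'))
peers = w3.geth.admin.peers()
print(f'{len(peers)} peers connected')
for p in peers[:10]:  # print a sample of current connections
    print(p['network']['remoteAddress'], p['name'])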

BLOCK PROPAGATION

Frequently Asked Questions

Common questions and troubleshooting for setting up and understanding block propagation visibility with Chainscore.

Block propagation is the process by which a newly mined or validated block is transmitted across a peer-to-peer network to all participating nodes. For your node, fast and reliable propagation is critical for maintaining network health and your own operational efficiency.

Key reasons it matters:

  • Network Consensus: Slow propagation can lead to stale blocks (uncles/ommers), reducing network security and efficiency.
  • Validator Performance: As a validator or miner, slow outbound propagation increases your orphan rate, directly impacting rewards.
  • DEX/MEV Performance: For applications like decentralized exchanges or MEV searchers, being among the first to see a new block is a competitive advantage.

Monitoring propagation latency helps you diagnose peer quality, network connectivity issues, and optimize your node's role in the ecosystem.

IMPLEMENTATION SUMMARY

Conclusion and Next Steps

You have successfully configured a system to monitor block propagation across the Ethereum network. This guide covered the essential steps from setting up the infrastructure to analyzing the data.

The core setup involves running a Geth or Erigon node with metrics and WebSocket endpoints enabled (--metrics, --ws) and connecting it to a Chainscore monitoring agent. This agent collects chain-head metrics and the newHeads subscription stream, which are then visualized in the Chainscore dashboard. Key metrics to monitor include block propagation latency (the time between a block's creation and its arrival at your node) and orphaned block rates, which can indicate network instability or the presence of selfish mining.

For ongoing analysis, consider implementing automated alerts for significant latency spikes, which may signal network congestion or a peer connection issue. You can also track the block source distribution to see if you are receiving blocks primarily from a single relay (like Flashbots) or a diverse set of peers, which impacts your view of network health. Advanced users can export the raw timestamp data for custom analysis, such as correlating propagation times with gas price surges or MEV activity.
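
As one example of such custom analysis, suppose you export one CSV row per block containing the chain timestamp and your local receipt timestamp (hypothetical columns block_number, block_ts, receipt_ts); a few lines of pandas then give you the latency distribution:

python
import pandas as pd

# Hypothetical export: one row per block, timestamps in Unix seconds
df = pd.read_csv('propagation.csv')  # columns: block_number, block_ts, receipt_ts
df['latency'] = df['receipt_ts'] - df['block_ts']

# Median vs. tail latency; a widening gap points to inconsistent peers
print(df['latency'].quantile([0.5, 0.95, 0.99]))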

To deepen your understanding, explore the following resources: the official Ethereum Execution API specification for WebSocket subscriptions, Geth's metrics documentation for additional node telemetry, and research on network-layer security such as the literature on eclipse attacks against peer-to-peer networks. The next practical step is to configure monitoring for transaction propagation using the newPendingTransactions subscription, completing your visibility into the mempool lifecycle.