Setting Up a Cross-Chain MEV Monitoring Dashboard

A technical tutorial for developers to build a real-time dashboard tracking MEV activity across Ethereum, Arbitrum, and other EVM chains using public APIs and on-chain data.
INTRODUCTION

Setting Up a Cross-Chain MEV Monitoring Dashboard

A guide to building a real-time dashboard for tracking Maximal Extractable Value (MEV) activity across multiple blockchain networks.

Maximal Extractable Value (MEV) represents the profit that can be extracted by reordering, including, or censoring transactions within a block. While initially an Ethereum-centric phenomenon, MEV strategies now operate across numerous chains like Arbitrum, Optimism, and Base. Monitoring this activity in real time is critical for developers building DeFi applications, researchers analyzing market efficiency, and validators seeking to understand network dynamics. A cross-chain dashboard aggregates this data into a single view, revealing patterns and risks that are invisible when looking at individual networks.

This guide will walk through building a functional monitoring dashboard using Chainscore's MEV APIs and a simple frontend framework. We'll cover three core components: data ingestion from multiple blockchains, event processing to identify common MEV patterns like arbitrage and liquidations, and visualization of key metrics. You'll learn how to fetch real-time data streams, process them to detect sandwich attacks and back-running, and display the results in an interactive chart. The final dashboard will track metrics such as total extracted value per chain, top MEV bots by profit, and the most exploited protocols.

Before writing code, you'll need a basic development environment with Node.js (v18+) and a package manager like npm or yarn, plus Python 3.10+ for the data-pipeline scripts. You must also obtain API keys from Chainscore for data access and from a node provider like Alchemy or Infura for direct blockchain queries. We'll use Next.js for the frontend framework and Recharts for data visualization to keep the stack simple and modern. The data-pipeline examples in this guide are written in Python; on the frontend, TypeScript is recommended for type safety when handling complex blockchain data structures.

The architecture centers on a backend service that polls or subscribes to Chainscore's WebSocket endpoints for events like Bundle, Arbitrage, and Liquidation. This service will normalize data from different chains—each with its own gas token and block times—into a common format. For example, profits on Ethereum are in ETH, while on Polygon they are in MATIC; our processor will convert all values to USD using a price feed. The processed data is then stored in a local database or cache (like Redis) for the frontend to query via a simple REST API.

In the following sections, we will implement the data pipeline step-by-step. We'll start by setting up the API clients and defining TypeScript interfaces for MEV events. Then, we'll build the aggregation logic to calculate hourly and daily summaries. Finally, we'll create the dashboard UI with charts showing MEV volume trends, profit distribution across searchers, and a real-time feed of the largest recent transactions. By the end, you'll have a deployable tool that provides actionable insights into the cross-chain MEV landscape.

SETUP GUIDE

Prerequisites and Tech Stack

Before building a cross-chain MEV monitoring dashboard, you need the right tools and foundational knowledge. This guide outlines the essential software, libraries, and concepts required to track MEV activity across multiple blockchains.

To monitor MEV, you must first understand its core components. Maximal Extractable Value (MEV) refers to the profit that can be extracted by reordering, including, or censoring transactions within blocks. A monitoring dashboard tracks this activity by analyzing mempool data, block production, and on-chain events. You'll need to be familiar with concepts like arbitrage, liquidations, and sandwich attacks, as these are the primary strategies your dashboard will detect. Familiarity with blockchain fundamentals, including the transaction lifecycle and consensus mechanisms, is assumed.

Your development environment is critical. We recommend using Node.js (v18+) or Python 3.10+ as your primary runtime. For data persistence, you'll need a database; PostgreSQL or TimescaleDB are excellent choices for handling time-series blockchain data. Essential infrastructure includes a reliable RPC provider for each chain you intend to monitor (e.g., Alchemy, Infura, QuickNode) and an Etherscan-like API key for contract verification and label data. Containerization with Docker is highly recommended for consistent deployment of services like databases and indexers.

The core of your stack will be specialized data indexing and processing tools. For Ethereum and EVM chains, Erigon or Geth in archive mode provide deep historical data, but for real-time analysis, you'll likely rely on RPCs. Libraries like ethers.js v6 or web3.py are essential for interacting with nodes. To detect MEV bundles and transactions, you'll integrate with services like Flashbots Protect RPC or the Flashbots MEV-Share API. For non-EVM chains (e.g., Solana, Cosmos), you'll need their respective SDKs, such as @solana/web3.js or cosmjs.

Your dashboard's frontend will visualize this complex data. A modern framework like React with TypeScript ensures maintainability. For charting time-series data—showing MEV revenue over time or gas price spikes—libraries like Recharts or Chart.js are effective. You will need to build a backend API, for which Express.js (Node) or FastAPI (Python) are robust choices. This API will aggregate data from your database and external sources, serving processed insights to the frontend client in a secure and performant manner.
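
As a minimal sketch of that API layer with FastAPI (the fetch_hourly_mev helper, module name, and route shape are illustrative assumptions, not a fixed interface):

python
from fastapi import FastAPI

from your_db_module import fetch_hourly_mev  # hypothetical aggregation helper

app = FastAPI()

@app.get("/api/mev/hourly")
def hourly_mev(chain: str = "ethereum", hours: int = 24):
    # Serve pre-aggregated hourly MEV totals for one chain; the heavy lifting
    # happens in the database query, keeping this endpoint fast
    return {"chain": chain, "hours": hours, "data": fetch_hourly_mev(chain, hours)}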

Finally, consider operational tools. Use PM2 or systemd for process management in production. Logging with Winston or Pino and monitoring with Prometheus/Grafana will help you track the health of your data pipelines. Since you're handling sensitive financial data, implement security best practices: store API keys and RPC URLs in environment variables (using dotenv), use connection pooling for your database, and consider rate-limiting your public API endpoints to prevent abuse.
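
For example, a minimal configuration module using python-dotenv might look like this (the variable names are illustrative; match them to your own .env entries):

python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads key=value pairs from a local .env file into the environment

# Illustrative names -- adapt to your own .env file
ETH_RPC_URL = os.environ["ETH_RPC_URL"]
DATABASE_URL = os.environ["DATABASE_URL"]
ETHERSCAN_API_KEY = os.getenv("ETHERSCAN_API_KEY", "")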

SYSTEM ARCHITECTURE OVERVIEW

Core Components and Data Flow

This guide outlines the core components and data flow for building a dashboard to track MEV activity across multiple blockchains.

A cross-chain MEV monitoring dashboard aggregates and visualizes data related to Maximal Extractable Value extraction across different networks like Ethereum, Arbitrum, and Polygon. The primary goal is to provide a unified view of MEV activity, including metrics like arbitrage profits, liquidations, and sandwich attack volumes. The architecture must be modular to handle varying blockchain data structures and RPC endpoints. Key challenges include synchronizing block data in real-time, normalizing transaction formats, and calculating consistent profit metrics across chains with different native assets.

The backend system typically consists of three core services. First, a set of indexers or crawlers subscribe to new blocks via WebSocket connections to node providers like Alchemy or QuickNode. For each block, they extract and decode transactions, identifying MEV-related patterns using heuristics or pre-defined bundles. Second, a data processing pipeline (often using Apache Kafka or similar) streams this raw data to analytics engines that calculate profits in USD, identify participating searchers or bots via their address patterns, and tag specific MEV strategies. Third, a database (TimescaleDB for time-series, PostgreSQL for relational data) stores the processed results for querying.

The frontend dashboard connects to this processed data via a GraphQL or REST API. It should visualize trends over time, such as total daily MEV extracted per chain or the most profitable strategies. Interactive charts from libraries like Chart.js or D3.js can show profit distribution, while tables list recent large MEV transactions. Implementing address labeling—connecting anonymous wallets to known entities like Flashbots searchers or liquidity providers—adds significant analytical value. The dashboard must also handle real-time updates, using WebSockets or frequent polling to reflect new blocks as they are processed.

For developers, setting up the initial data ingestion is a critical first step. Here's a simplified Python example using Web3.py to listen for new blocks and extract basic transaction data:

python
import time

from web3 import Web3

# web3.py v6 provider; in web3.py v7 this class is renamed
# Web3.LegacyWebSocketProvider
w3 = Web3(Web3.WebsocketProvider('wss://eth-mainnet.g.alchemy.com/v2/YOUR_KEY'))

def handle_block(block_hash):
    # Fetch the full block, including complete transaction objects
    block = w3.eth.get_block(block_hash, full_transactions=True)
    for tx in block.transactions:
        # Analyze transaction for MEV patterns
        print(f"Tx: {tx['hash'].hex()} Value: {w3.from_wei(tx['value'], 'ether')}")

# Poll a filter that returns the hashes of newly mined blocks
new_block_filter = w3.eth.filter('latest')
while True:
    for block_hash in new_block_filter.get_new_entries():
        handle_block(block_hash)
    time.sleep(1)  # pause between polls to respect provider rate limits

This snippet captures the foundational step of streaming raw blockchain data, which subsequent services will analyze.

To make the dashboard actionable, focus on key performance indicators (KPIs) that matter to users: total value extracted, success rate of MEV attempts, gas costs as a percentage of profit, and cross-chain flow of arbitrage opportunities. Alerting mechanisms can be built on top of this architecture, notifying users when MEV activity spikes on a new chain or when a specific wallet executes a large, profitable trade. The entire system should be deployed using container orchestration (e.g., Kubernetes) to manage the separate indexers for each chain, ensuring scalability and resilience if one blockchain's RPC endpoint fails.

MONITORING INFRASTRUCTURE

Key Data Sources and APIs

Building a cross-chain MEV dashboard requires aggregating and analyzing data from specialized providers. These are the core data sources and APIs for tracking searchers, bundles, and arbitrage opportunities.

QUANTITATIVE ANALYSIS

Core MEV Metrics to Track and Calculate

Key performance indicators and risk metrics for monitoring cross-chain MEV activity.

Cross-Chain Arbitrage Profit
  Definition: Gross profit from arbitrage between two chains after gas costs.
  Calculation: Sum(Arbitrage Revenue) - Sum(Gas Costs on Both Chains)
  Target: $500 - $5,000 per profitable bundle

Bundle Success Rate
  Definition: Percentage of submitted bundles that land on-chain and are profitable.
  Calculation: (Successful Bundles / Total Submitted Bundles) * 100
  Target: 85%

Average Latency (Bridge Finality)
  Definition: Time from detecting an opportunity to finality on the destination chain.
  Calculation: Timestamp(Dest. Chain Finality) - Timestamp(Source Chain Detection)
  Target: < 2 minutes for optimistic bridges

Gas Cost per Bundle
  Definition: Total gas spent across all chains for a single MEV bundle.
  Calculation: Sum(Gas Used * Gas Price) for all txs in bundle
  Target: < 30% of estimated bundle profit

Searcher Profit Share
  Definition: Percentage of extracted MEV value retained by the searcher after paying builders/relays.
  Calculation: (Searcher Profit / Total Extracted Value) * 100
  Target: 60% - 80%

Cross-Chain Sandwich Attack ROI
  Definition: Return on investment for a sandwich attack spanning two DEXs on different chains.
  Calculation: (Profit from Attack / Capital Required) * 100
  Target: 5% per attack (high risk)

Failed Bundle Slippage
  Definition: Value lost due to failed transactions or partial fills in a cross-chain bundle.
  Calculation: Intended Swap Output - Actual Swap Output (for failed legs)
  Target: Monitor for spikes > $1,000

ARCHITECTURE

Step 1: Building the Data Ingestion Service

This step establishes the foundational pipeline that collects and processes raw blockchain data from multiple networks, which is essential for detecting MEV opportunities and suspicious activity across chains.

A cross-chain MEV monitoring dashboard requires a reliable, high-throughput data ingestion service. This service is responsible for connecting to various blockchain nodes—such as Ethereum, Arbitrum, and Polygon—and subscribing to real-time events. The core components are a node provider (like Infura, Alchemy, or a self-hosted node) and a streaming client that listens for new blocks, pending transactions, and finalized state. For performance, you should use the WebSocket JSON-RPC interface (eth_subscribe) to receive instant updates instead of polling with HTTP requests.

The ingestion logic must normalize data from different chains into a common schema. This involves extracting key fields from each transaction and block: sender (from), receiver (to), gas price, transaction hash, block number, and the input data (input) for contract interactions. For EVM chains, this process is similar, but you must handle chain-specific nuances like different base currencies (ETH vs. MATIC) and precompiled contracts. A robust service will also implement reconnection logic and fallback RPC endpoints to maintain data continuity during provider outages.

Here is a basic Python example using Web3.py to stream pending transactions from an Ethereum node. This snippet captures the raw data that will later be analyzed for MEV patterns like arbitrage or liquidations.

python
import json
import time

from web3 import Web3
from web3.exceptions import TransactionNotFound

# Connect to WebSocket endpoint (web3.py v6; v7 renames this class to
# Web3.LegacyWebSocketProvider)
w3 = Web3(Web3.WebsocketProvider('wss://mainnet.infura.io/ws/v3/YOUR_KEY'))

def handle_transaction(tx_hash):
    try:
        tx = w3.eth.get_transaction(tx_hash)
    except TransactionNotFound:
        # Pending transactions can be dropped or replaced before we fetch them
        return
    # Normalize and emit data
    normalized_tx = {
        'hash': tx_hash.hex(),
        'from': tx['from'],
        'to': tx['to'],
        'value': str(tx['value']),
        'gasPrice': str(tx['gasPrice']),
        'input': tx['input'].hex()
    }
    print(json.dumps(normalized_tx))

# Subscribe to new pending transactions
pending_filter = w3.eth.filter('pending')
while True:
    for tx_hash in pending_filter.get_new_entries():
        handle_transaction(tx_hash)
    time.sleep(0.5)  # brief pause between filter polls

After capturing the raw stream, the data should be placed into a durable message queue like Apache Kafka or Redis Pub/Sub. This decouples the ingestion service from the heavier analysis processes, ensuring the listener isn't blocked by downstream computation. Each transaction event can be published to a topic named for its chain of origin (e.g., eth-mainnet-pending). This queueing layer is critical for scaling and allows multiple consumer services—for risk analysis, profit calculation, and alerting—to process the same data stream independently.
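
A minimal sketch of that hand-off with Redis Pub/Sub, following the per-chain channel naming described above:

python
import json

import redis

r = redis.Redis(host="localhost", port=6379)

def publish_tx(chain: str, normalized_tx: dict):
    # One channel per chain (e.g. "eth-mainnet-pending") lets each consumer
    # subscribe only to the networks it cares about
    r.publish(f"{chain}-pending", json.dumps(normalized_tx))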

Finally, you must implement idempotency and deduplication. The same transaction can appear multiple times in a pending pool before being mined. Your ingestion service should track seen transaction hashes in a short-term cache (using Redis with a TTL) to avoid redundant processing. The output of this step is a clean, normalized, and queued stream of cross-chain transaction data, ready for the next stage: identifying MEV bundles and calculating potential profit.
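
In Redis, an atomic SET with the NX and EX flags gives exactly this behavior; the sketch below returns True only the first time a hash is seen within the TTL window:

python
import redis

r = redis.Redis()

def is_new_tx(tx_hash: str, ttl_seconds: int = 300) -> bool:
    # SET ... NX EX is atomic: the key is created only if absent, and the TTL
    # expires old hashes so the dedup cache stays small
    return bool(r.set(f"seen:{tx_hash}", 1, nx=True, ex=ttl_seconds))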

DATA PIPELINE

Step 2: Processing and Calculating MEV Metrics

Transform raw blockchain data into actionable insights by calculating key MEV metrics for your cross-chain dashboard.

Once you have ingested raw transaction and block data from multiple chains, the next step is to process it into standardized MEV metrics. This involves parsing transaction logs, identifying MEV-related patterns, and applying mathematical formulas. Common foundational metrics include extracted value (the profit an MEV searcher made from a transaction), gas used by MEV bundles, and latency (the time between transaction creation and inclusion in a block). You'll need to write scripts, often in Python or TypeScript, to filter and transform your data.

A critical processing step is classifying the type of MEV activity. This requires detecting specific on-chain signatures. For example, you can identify an arbitrage by finding a sequence where a user swaps Token A for Token B on one DEX and immediately swaps back on another at a profit. Liquidations are identified by calls to protocols like Aave or Compound that close an undercollateralized position. Sandwich attacks are spotted by analyzing the order of transactions within a block: a victim's swap is preceded and followed by the attacker's own swaps.
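
To make the sandwich heuristic concrete, here is a sketch that scans a block's ordered swaps for a victim transaction bracketed by two transactions from the same sender on the same pool. The from and pool fields are assumptions: they come from the normalized schema in Step 1 plus pool addresses derived during ABI decoding.

python
def find_sandwiches(block_txs: list[dict]) -> list[tuple]:
    # block_txs: decoded swaps in block order, each with 'from' and 'pool' keys
    sandwiches = []
    for i in range(len(block_txs) - 2):
        front, victim, back = block_txs[i], block_txs[i + 1], block_txs[i + 2]
        if (front["from"] == back["from"]              # same searcher on both sides
                and front["from"] != victim["from"]    # wrapping someone else's swap
                and front["pool"] == victim["pool"] == back["pool"]):
            sandwiches.append((front, victim, back))
    return sandwiches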

For accurate cross-chain comparison, you must normalize these metrics. Value extracted should be converted to a common denominator, typically USD, using historical price oracles from the specific block time. Gas costs, denominated in each chain's native token (like ETH, MATIC, or AVAX), must also be converted to USD to understand the real cost of executing the MEV. This allows you to answer questions like, "Was the same arbitrage opportunity more profitable on Arbitrum or Optimism after accounting for fees?"

Here is a simplified Python example for calculating arbitrage profit from two DEX swaps on Ethereum using the web3.py library (the historical price-feed helper is assumed):

python
from web3 import Web3

# tx1 and tx2 are receipts for the two swaps; log data is raw bytes, so real
# code should decode it with the DEX's event ABI rather than read it directly
amount_in = Web3.to_int(tx1["logs"][0]["data"])
amount_out = Web3.to_int(tx2["logs"][0]["data"])
gas_cost = tx1["gasUsed"] * tx1["effectiveGasPrice"] + tx2["gasUsed"] * tx2["effectiveGasPrice"]
profit_wei = amount_out - amount_in - gas_cost
# Convert Wei to ETH, then to USD via a historical price feed at that block
profit_eth = Web3.from_wei(profit_wei, 'ether')
profit_usd = profit_eth * get_eth_price_at_block(tx1["blockNumber"])

This calculation subtracts the total gas cost from the net token output to find the searcher's profit.

Finally, structure the processed metrics into a time-series database like TimescaleDB or ClickHouse. Each record should include a timestamp, chain ID, MEV type, extracted value (USD), gas cost (USD), and involved addresses (searcher, victim, contracts). This structured data layer is what your dashboard visualization will query to render charts and tables, enabling you to track MEV trends, volumes, and dominant strategies across all monitored chains in real time.
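
As a starting point, here is one possible TimescaleDB layout, written so the table and column names line up with the Grafana queries in Step 3; the connection string and exact column set are assumptions to adapt:

python
import psycopg2

SCHEMA = """
CREATE TABLE IF NOT EXISTS mev_opportunities (
    timestamp             TIMESTAMPTZ NOT NULL,
    chain_id              INTEGER     NOT NULL,
    mev_type              TEXT        NOT NULL,  -- 'arbitrage', 'liquidation', 'sandwich'
    estimated_profit_usd  NUMERIC     NOT NULL,
    gas_cost_usd          NUMERIC     NOT NULL,
    searcher_address      TEXT,
    victim_address        TEXT,
    tx_hash               TEXT        NOT NULL
);
-- Partition by time so range queries over recent hours stay fast
SELECT create_hypertable('mev_opportunities', 'timestamp', if_not_exists => TRUE);
"""

with psycopg2.connect("postgresql://user:pass@localhost/mev") as conn:
    with conn.cursor() as cur:
        cur.execute(SCHEMA)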

DATA VISUALIZATION

Step 3: Creating the Visualization Dashboard

Transform raw MEV data into actionable insights by building a custom dashboard with Grafana and Chainscore's data streams.

With your data pipeline operational, the next step is to build a visualization layer. We'll use Grafana, an open-source analytics platform, to create a dashboard that connects to your PostgreSQL database. This setup allows you to query and visualize cross-chain MEV activity in real-time. First, ensure Grafana is installed and running. Then, add your PostgreSQL database as a data source within Grafana's configuration menu, using the connection details from your .env file. This establishes the link between your visual panels and the processed MEV data.

The core of your dashboard will be a series of SQL queries against the mev_opportunities table. Start by creating key panels to monitor MEV health and volume: a time-series graph of opportunities per hour across chains, a stat panel showing total estimated profit (USD) in the last 24 hours, and a bar chart comparing volume by source chain (e.g., Ethereum, Arbitrum, Base). Use Grafana's query builder with statements like SELECT SUM(estimated_profit_usd) FROM mev_opportunities WHERE timestamp > NOW() - INTERVAL '24 HOURS' to power these visualizations.

To identify high-value patterns and potential risks, create more advanced panels. Build a table widget listing the top 10 most profitable opportunity types (like arbitrage or liquidations) in the last hour. Implement an alert panel that triggers when the sandwich_count metric exceeds a threshold, indicating potential malicious activity. You can also visualize the latency between block creation and your system's detection by plotting the block_timestamp against the created_at timestamp from your database.

For cross-chain analysis, leverage Grafana's transformation features. Create a panel that uses a merge transformation to combine time-series data from multiple chains into a single graph, allowing you to spot correlated MEV spikes. Another useful view is a geomap panel that plots transaction senders' geographic locations (derived from IP data, if collected) to analyze geographical clustering of MEV activity, which can be relevant for regulatory or infrastructure planning.

Finally, ensure your dashboard is actionable. Configure Grafana Alerts to send notifications to Slack or Discord when critical metrics breach defined thresholds, such as a sudden drop in detected opportunities (possible scraper failure) or a surge in profit from a single contract (potential exploit). Organize your panels logically into rows: Overview, Chain-Specific Metrics, Alerting & Anomalies, and Cross-Chain Correlation. Share the dashboard with your team via a stable URL to enable collaborative monitoring of the MEV landscape.

CROSS-CHAIN MEV MONITORING

Step 4: Implementing Alerting for Anomalous Activity

Configure real-time notifications to detect and respond to suspicious cross-chain MEV activity, such as sandwich attacks or liquidity drain attempts.

Effective MEV monitoring requires moving from passive dashboards to active alerting. The goal is to create a system that notifies you of anomalous activity—like a sudden spike in failed arbitrage transactions, a cluster of profitable sandwich attacks on a specific DEX, or suspicious liquidity movements across bridges—within seconds. This involves defining key heuristics and thresholds that, when breached, trigger an alert via email, Slack, Discord, or PagerDuty. For a cross-chain context, you must aggregate and correlate data from multiple blockchains and mempools to identify coordinated attacks.

Start by defining your alert rules based on the metrics you are already tracking. Key indicators for cross-chain MEV include: cross_chain_arbitrage_profit_usd exceeding a threshold (e.g., $10,000 profit in 5 minutes), sandwich_attack_count spiking on a specific token pair, or failed_tx_ratio increasing dramatically on a bridge, which may indicate a denial-of-service attack on pending transactions. Use time-series databases like TimescaleDB or Prometheus to store this data and tools like Grafana with its alerting engine or dedicated services like PagerDuty to manage the rules and notifications.

Here is a conceptual example of a Python script using the schedule library to periodically check a metric and send a Slack alert using webhooks. This script queries your analytics database for the total MEV profit extracted via a specific bridge in the last hour.

python
import schedule
import time
import requests
from your_analytics_module import query_database

def check_bridge_mev_profit():
    query = """
    SELECT SUM(profit_usd) as total_profit
    FROM cross_chain_mev_trades
    WHERE bridge = 'wormhole'
    AND timestamp > NOW() - INTERVAL '1 hour'
    """
    result = query_database(query)
    total_profit = result[0]['total_profit'] or 0
    
    ALERT_THRESHOLD = 50000  # $50,000
    if total_profit > ALERT_THRESHOLD:
        slack_webhook_url = "YOUR_SLACK_WEBHOOK_URL"
        # Implicit string concatenation builds one multi-line Slack message
        message = {
            "text": f"🚨 High MEV Alert on Wormhole Bridge!\n"
                    f"Profit in last hour: ${total_profit:,.2f}\n"
                    f"Threshold: ${ALERT_THRESHOLD:,}"
        }
        }
        requests.post(slack_webhook_url, json=message)

schedule.every(5).minutes.do(check_bridge_mev_profit)
while True:
    schedule.run_pending()
    time.sleep(1)

For production systems, consider more robust orchestration. You can implement alerting directly within your data pipeline using Apache Airflow or Prefect to run detection tasks, or use the native alerting features in Grafana or Datadog. A critical best practice is to avoid alert fatigue by implementing multi-stage alerts. A Stage 1 alert might be a Slack notification for a moderate anomaly, while a Stage 2 alert for a severe, confirmed threat could trigger a PagerDuty incident. Always include contextual data in alerts: transaction hashes, wallet addresses, profit amounts, and links to your dashboard for immediate investigation.
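
A small dispatch function can encode those stages; the webhook URL, PagerDuty routing key, and severity labels below are placeholders:

python
import requests

SLACK_WEBHOOK_URL = "YOUR_SLACK_WEBHOOK_URL"  # Stage 1: notify for human review
PAGERDUTY_EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"  # Stage 2: page on-call

def route_alert(severity: str, summary: str, context: dict):
    if severity == "moderate":
        requests.post(SLACK_WEBHOOK_URL, json={"text": f"⚠️ {summary}\n{context}"})
    elif severity == "severe":
        # PagerDuty Events API v2; routing_key is your integration key
        requests.post(PAGERDUTY_EVENTS_URL, json={
            "routing_key": "YOUR_PAGERDUTY_KEY",
            "event_action": "trigger",
            "payload": {"summary": summary, "severity": "critical",
                        "source": "mev-monitor", "custom_details": context},
        })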

Finally, integrate your alerting with on-chain response mechanisms where possible. For protocols or DAOs, this could mean having alert triggers that automatically pause a vulnerable bridge contract or increase slippage tolerance on a DEX pool via a governance multisig. Continuously refine your alert thresholds based on historical false-positive rates and the evolving MEV landscape. Document all alerts and responses to build a knowledge base, which will help you distinguish between a novel attack vector and normal, high-volume arbitrage activity across chains.

ARCHITECTURE

Chain-Specific Implementation Notes

Core Data Sources

MEV monitoring on Ethereum and EVM chains (Arbitrum, Optimism, Base) relies on a standard set of tools. The primary data source is the mempool, accessed via specialized RPC providers like Flashbots Protect, BloxRoute, or a local Geth/Erigon node with txpool APIs enabled.

Key Implementation Steps:

  1. Subscribe to newPendingTransactions via WebSocket for real-time data.
  2. Use the eth_getBlockByNumber RPC with fullTransactions: true to analyze included bundles.
  3. Decode transactions using the relevant ABI to identify MEV patterns (e.g., arbitrage, liquidations); see the decoding sketch below.
  4. Track priority fee (maxPriorityFeePerGas) and base fee trends to gauge network congestion and bid competitiveness.

Tooling: Use libraries like ethers.js or web3.py for RPC interaction and @flashbots/ethers-provider-bundle for Flashbots-specific endpoints.
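
As a sketch of the decoding step (item 3 above), web3.py can decode calldata once you load the target contract's ABI; the Uniswap V2 router is used here purely as an example, and the ABI file path and transaction hash are placeholders:

python
import json

from web3 import Web3

w3 = Web3(Web3.HTTPProvider('https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY'))

# Load the verified ABI of the contract you monitor (placeholder file path)
router = w3.eth.contract(
    address='0x7a250d5630B4cF539739dF2C5dAcb4c659F2488D',  # Uniswap V2 Router02
    abi=json.load(open('uniswap_v2_router_abi.json')),
)

tx = w3.eth.get_transaction('0x...')  # placeholder hash of a tx sent to the router
func, params = router.decode_function_input(tx['input'])
# func names the method called (e.g. swapExactTokensForTokens); params holds
# the decoded arguments, such as the token path and amountOutMin
print(func.fn_name, params)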

CROSS-CHAIN MEV DASHBOARD

Frequently Asked Questions

Common technical questions and troubleshooting steps for developers building real-time MEV monitoring across Ethereum, Arbitrum, and other EVM chains.

Why is my dashboard's data delayed?

Delayed data typically stems from RPC node latency or indexing lag. For real-time MEV monitoring, you need a low-latency connection to your RPC nodes.

Primary causes:

  • RPC Provider Limits: Free tier public RPCs (Infura, Alchemy) have rate limits and slower block propagation. Upgrade to a paid tier with WebSocket support.
  • Indexing Overhead: If you're processing transactions locally, complex event filtering can cause delays. Consider using a specialized indexer like The Graph or Chainscore's pre-processed streams.
  • Cross-Chain Sync Issues: Data from L2s like Arbitrum or Optimism has a finality delay. Use the chain's sequencer RPC for pre-confirmation data and a standard RPC for finalized blocks.

Quick fix: Implement a multi-RPC fallback system and subscribe to newHeads via WebSocket instead of polling eth_getBlockByNumber.
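
A minimal version of that fallback pattern, assuming an ordered list of your own endpoints:

python
from web3 import Web3

# Ordered by preference: paid, low-latency endpoint first, public fallbacks after
RPC_ENDPOINTS = [
    'https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY',
    'https://mainnet.infura.io/v3/YOUR_KEY',
]

def connect_with_fallback() -> Web3:
    for url in RPC_ENDPOINTS:
        w3 = Web3(Web3.HTTPProvider(url, request_kwargs={'timeout': 5}))
        if w3.is_connected():  # cheap liveness check before committing to this RPC
            return w3
    raise RuntimeError("All RPC endpoints are unreachable")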
