
Launching a Post-Upgrade Monitoring and Alert System

A step-by-step guide for developers to implement monitoring for key metrics and events after a smart contract upgrade, including code examples for Tenderly and OpenZeppelin Defender.
Chainscore © 2026
GUIDE

A practical guide to implementing automated monitoring for smart contract upgrades on Ethereum and EVM-compatible chains to ensure operational integrity and detect anomalies.

A post-upgrade monitoring and alert system is a critical component of a secure deployment pipeline. After a smart contract upgrade via a proxy pattern (like OpenZeppelin's Transparent or UUPS), the new implementation's on-chain behavior must be verified. This process moves beyond simple unit tests to real-time validation against the live network. The core goal is to detect deviations from expected state changes, transaction failures, or unexpected gas usage that could indicate a critical bug or vulnerability introduced by the upgrade.

To build this system, you need to define a set of post-upgrade invariants. These are conditions that must always hold true for your protocol. Examples include: the total supply of a token remaining constant after a migration, user balances summing to the total supply, vault assets being fully accounted for, or specific admin functions reverting for non-admin addresses. These invariants are codified into monitoring scripts that run at regular intervals, querying the blockchain via an RPC node or indexer.

A robust implementation involves three key components: a data fetching layer (using ethers.js, viem, or The Graph), a logic/checking layer that runs the invariant tests, and an alerting layer that triggers notifications on failure. For instance, after upgrading a lending pool's interest rate model, a monitor would check that the borrowRate function returns a non-negative value for all supported assets and that the calculated rates fall within predefined safe bounds. Alerts can be sent via PagerDuty, Slack webhooks, or Telegram bots.

Here is a simplified code example using ethers.js to monitor a token's total supply invariant after an upgrade:

javascript
const { ethers } = require('ethers');
const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
const tokenAddress = '0x...'; // Your proxy address
const tokenABI = ['function totalSupply() view returns (uint256)'];
const tokenContract = new ethers.Contract(tokenAddress, tokenABI, provider);

async function checkTotalSupply() {
    const expectedSupply = ethers.parseUnits('1000000', 18); // 1M tokens
    try {
        const actualSupply = await tokenContract.totalSupply();
        if (actualSupply !== expectedSupply) {
            // Trigger alert: SMS, email, or Slack
            console.error(`INVARIANT BROKEN: Supply mismatch. Expected ${expectedSupply}, got ${actualSupply}`);
            // Implement your alerting logic here (e.g., axios.post to webhook)
        } else {
            console.log('Total supply invariant holds.');
        }
    } catch (err) {
        // An RPC failure is itself alert-worthy: a monitor that cannot read
        // the chain cannot verify anything.
        console.error(`MONITOR ERROR: ${err.message}`);
    }
}
// Run check every 5 minutes
setInterval(checkTotalSupply, 5 * 60 * 1000);

This script periodically verifies that the total supply has not been corrupted by the upgrade.
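The alerting hook left as a comment in the script above can be filled in with a small webhook helper. This sketch assumes a Slack-style incoming webhook supplied via a hypothetical SLACK_WEBHOOK_URL environment variable; adapt the payload shape to whatever channel you actually use:

```javascript
// Minimal alerting helper, assuming a Slack-style incoming webhook whose URL
// is supplied via the (hypothetical) SLACK_WEBHOOK_URL environment variable.
function formatAlert(invariant, expected, actual) {
  return `INVARIANT BROKEN: ${invariant}. Expected ${expected}, got ${actual}`;
}

async function sendAlert(message) {
  const url = process.env.SLACK_WEBHOOK_URL;
  if (!url) {
    console.error(message); // fall back to logs if no webhook is configured
    return;
  }
  // Node 18+ ships a global fetch; earlier versions need a polyfill.
  await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: message }),
  });
}

// Example: await sendAlert(formatAlert('totalSupply', 1000000n, 999999n));
```

Keeping formatting separate from delivery makes the message logic trivially unit-testable, while the delivery function stays a thin, replaceable shim.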

For production systems, consider using dedicated monitoring services like Chainlink Functions for decentralized computation, Tenderly Alerts for event-based triggers, or OpenZeppelin Defender Sentinel for managed monitor creation. These platforms handle scheduling, retries, and provide richer alerting dashboards. The key is to start with a few critical, high-value invariants and expand coverage over time. Monitoring should be considered a non-negotiable part of the upgrade lifecycle, as critical bugs like the 2022 Nomad bridge exploit were triggered by initialization oversights post-upgrade.

PREREQUISITES AND SETUP

This guide covers the essential prerequisites and initial setup required to deploy a robust monitoring and alerting system for your blockchain node after a major network upgrade.

Before configuring alerts, you must have a running, fully synced node for the upgraded network. For Ethereum, this means running an execution client (like Geth, Nethermind, or Erigon) and a consensus client (like Lighthouse, Prysm, or Teku) that are both updated to the post-upgrade version. Ensure your node's API endpoints (e.g., the Execution client's JSON-RPC port and the Consensus client's Beacon API) are accessible. You will also need administrative access to a server or cloud instance to host your monitoring stack.

The core of the system is a time-series database and a monitoring agent. We recommend using the Prometheus and Grafana stack. First, install Prometheus to scrape metrics from your node clients. Each major client exposes a metrics endpoint; for example, Geth uses http://localhost:6060/debug/metrics/prometheus. You must configure Prometheus's scrape_configs in its prometheus.yml file to target these endpoints. Grafana is then installed to visualize the collected data through dashboards.
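A minimal scrape configuration for this setup might look as follows; the Lighthouse job and its default metrics port 5054 are assumptions to adapt to your own clients and flags:

```yaml
# prometheus.yml -- scrape both clients; ports are common defaults
# (Geth with --metrics, Lighthouse with --metrics) and may differ per setup.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: geth
    metrics_path: /debug/metrics/prometheus
    static_configs:
      - targets: ['localhost:6060']
  - job_name: lighthouse
    static_configs:
      - targets: ['localhost:5054']
```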

For effective alerting, you need to define Prometheus Alerting Rules. These rules are written in YAML and evaluate metric expressions. A critical rule for post-upgrade health is monitoring sync status. For a consensus client, you might alert if beacon_head_slot does not increase over a 5-minute period, indicating a stall. Another essential rule checks peer count (libp2p_peers) to ensure your node remains connected to the network. These rule files are loaded by Prometheus.
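The two rules just described might be written roughly as follows; exact metric names vary by client (beacon_head_slot and libp2p_peers are the Prysm/Lighthouse-style names used above), so verify them against your client's metrics endpoint:

```yaml
# rules/post_upgrade.yml -- loaded via rule_files in prometheus.yml
groups:
  - name: post-upgrade-health
    rules:
      - alert: BeaconChainStalled
        expr: increase(beacon_head_slot[5m]) == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Beacon head slot has not advanced in 5 minutes"
      - alert: LowPeerCount
        expr: libp2p_peers < 10
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Peer count below 10 for 10 minutes"
```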

Alerts from Prometheus are sent to an Alertmanager service, which handles deduplication, grouping, and routing to various receivers like email, Slack, or PagerDuty. You must install and configure Alertmanager with an alertmanager.yml file, defining your notification channels. A key setup step is configuring silence periods and inhibition rules to prevent alert storms during known maintenance windows or cascading failures.
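A minimal Alertmanager configuration wiring alerts to a Slack channel might look like this; the webhook URL and channel name are placeholders for your own values:

```yaml
# alertmanager.yml -- route alerts to Slack; URL and channel are placeholders.
route:
  receiver: team-slack
  group_by: ['alertname']
  group_wait: 30s
  repeat_interval: 4h

receivers:
  - name: team-slack
    slack_configs:
      - api_url: 'https://hooks.slack.com/services/XXX/YYY/ZZZ'
        channel: '#node-alerts'
        send_resolved: true
```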

Finally, validate your setup. Start all services, ensure Prometheus targets are "UP," and import a pre-built dashboard for your client (like the Grafana Ethereum Dashboard) to visualize key metrics: block production, attestation performance, and resource usage. Test your alerting pipeline by triggering a safe, simulated failure, such as stopping your consensus client temporarily, to confirm notifications are delivered correctly.

POST-DEPLOYMENT CHECKLIST

What to Monitor After an Upgrade

A successful mainnet upgrade is just the beginning. These are the critical systems and metrics you must monitor to ensure network stability and user safety.

02

Transaction Throughput and Gas

Track changes in the mempool and on-chain activity. A successful upgrade should not degrade performance. Set alerts for:

  • Average TPS (Transactions Per Second) vs. pre-upgrade baseline
  • Mempool backlog size and clearance rate
  • Average gas price spikes, which indicate congestion
  • Failed transaction rate, which can signal new contract incompatibilities

For example, an Ethereum client upgrade that inadvertently reduces gas throughput requires immediate rollback planning.
04

Economic Security and Slashing

For Proof-of-Stake networks, monitor the economic layer for unintended penalties. This is critical for validator trust.

  • Slashing events or unexpected inactivity leaks
  • Staking deposit contract withdrawal credential functionality
  • Reward issuance rate compared to the expected annual percentage rate (APR)
  • Total value staked (TVS) to detect rapid validator exits

A flaw in the reward mechanism can lead to mass unstaking within hours.
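The threshold comparisons in this checklist reduce to simple arithmetic. This sketch compares current throughput and failure rate against a pre-upgrade baseline; the 20% TPS-drop and 5% failure-rate thresholds are illustrative defaults, not standards:

```javascript
// Compare current throughput and failure rate against a pre-upgrade baseline.
// The 20% TPS-drop and 5% failure-rate thresholds are illustrative defaults.
function checkThroughput(baseline, current, { maxTpsDropPct = 20, maxFailRatePct = 5 } = {}) {
  const tpsDropPct = ((baseline.tps - current.tps) / baseline.tps) * 100;
  const failRatePct = (current.failedTxs / current.totalTxs) * 100;
  return {
    tpsDropPct,
    failRatePct,
    alert: tpsDropPct > maxTpsDropPct || failRatePct > maxFailRatePct,
  };
}

// Example: a 25% TPS drop after the upgrade should trip the alert.
// checkThroughput({ tps: 100 }, { tps: 75, failedTxs: 2, totalTxs: 100 })
```

Feeding this function from a block indexer or RPC poller keeps the decision logic pure and easy to test against recorded baselines.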
POST-UPGRADE MONITORING

How to Set Up Alerts with Tenderly

A step-by-step guide to creating a real-time monitoring and alert system for your smart contracts after a major upgrade using Tenderly.

After deploying a smart contract upgrade, the real work begins: ensuring it operates as intended. A post-upgrade monitoring system is critical for catching edge cases, unexpected user behavior, or subtle bugs that weren't apparent in testing. Tenderly provides a powerful suite of tools for this, allowing you to set up alerts based on specific on-chain events, transaction failures, or changes in contract state. This proactive approach is far more effective than relying on user reports or manual checks.

To start, you'll need a Tenderly account and a project. Navigate to the Alerts section in your Tenderly dashboard and click 'Create Alert'. You can define triggers from multiple sources:

  • Event Emission: Monitor for specific event logs from your upgraded contract.
  • Function Call: Watch for calls to critical functions.
  • Failed Transactions: Get notified of any transaction that reverts against your contract.
  • State Change: Track when a specific storage variable crosses a threshold.

For a post-upgrade scenario, monitoring failed transactions and specific event patterns is often the highest priority.

Configuring the alert requires precise details. For an event-based alert, you must specify the contract address (your newly upgraded one), the ABI, and the exact event signature. You can filter further using parameter conditions, like value > 1000. For a failed transaction alert, you can filter by the receiving contract address. It's crucial to use the correct ABI for your upgraded contract version to ensure the decoding and filtering work accurately. Mismatched ABIs are a common source of missed alerts.

Once the trigger is set, define the notification channels. Tenderly supports direct integrations with Discord, Slack, Telegram, and Webhooks. For a production system, setting up a dedicated incident channel in your team's communication platform is recommended. Each alert notification includes a link to the full transaction simulation on Tenderly, giving your team immediate context with gas usage, state diffs, and the exact revert reason, which drastically speeds up diagnosis.

For comprehensive coverage, create a suite of alerts. Key alerts post-upgrade might include:

  1. Upgrade Event: Alert when the Upgraded(address) event is emitted from your proxy.
  2. Initialization Failure: Alert on any failed call to your upgrade's initialize function.
  3. Critical Function Failure: Monitor core logic functions for reverts.
  4. Anomalous Volume: Use a state change alert if a daily volume variable spikes unexpectedly.

This layered approach creates a safety net around the new contract logic.

Finally, test your alert system before the upgrade goes live. Use Tenderly's Simulation feature or a testnet deployment to generate transactions that will trigger your alerts. Confirm that notifications arrive as expected and contain the necessary information. A well-configured Tenderly alert system transforms post-upgrade monitoring from a reactive chore into a proactive, automated safeguard, providing confidence that your new code is performing correctly in the wild.

POST-DEPLOYMENT MONITORING

How to Set Up Alerts with OpenZeppelin Defender

A guide to configuring automated monitoring and alerting for your smart contracts after an upgrade using OpenZeppelin Defender.

After deploying a smart contract upgrade, continuous monitoring is critical for security and operational awareness. OpenZeppelin Defender provides a managed service to automate this process. Its Sentinel module allows you to create custom monitors that watch your contracts for specific on-chain events, function calls, or state changes. Once a condition is met, Defender can trigger an Action—such as sending an email, Slack, or Discord notification—to alert your team instantly. This creates a robust, hands-off monitoring system that operates 24/7.

To begin, you need an OpenZeppelin Defender account and an API key with the appropriate permissions. Navigate to the Sentinel section in the Defender dashboard and click Create Sentinel. You'll configure several key parameters: the Network (e.g., Ethereum Mainnet, Arbitrum), the Address of your upgraded contract, and the ABI to decode transactions. The core of a Sentinel is its Conditions. You can monitor for specific events, failed transactions, high gas usage, or custom function calls using a Forta-compatible detection bot for complex logic.

For a post-upgrade scenario, a common alert monitors for unexpected calls to a newly added or modified function. You can set a condition to watch for the functionCalled event with a specific signature. For more granular control, use a Forta bot written in JavaScript. Here is a basic example that alerts on calls to a hypothetical adminWithdraw function:

javascript
// Simplified Forta-style handler. A production agent would use the
// forta-agent SDK's Finding helpers; this sketch returns plain objects.
const { ethers } = require('ethers'); // ethers v5 (ethers.utils API)

const iface = new ethers.utils.Interface(['function adminWithdraw(address,uint256)']);
const funcSig = iface.getSighash('adminWithdraw'); // 4-byte selector

module.exports = {
  handleTransaction: async (txEvent) => {
    const findings = [];
    if (txEvent.transaction.data.startsWith(funcSig)) {
      findings.push({
        alertId: 'ADMIN-WITHDRAW-CALLED',
        severity: 'HIGH',
        metadata: { from: txEvent.transaction.from }
      });
    }
    return findings;
  }
};

After defining the condition, you configure the Alert Channels. Defender integrates with email, Slack, Discord, Telegram, and webhooks. For a development team, connecting to a dedicated Slack #alerts channel is highly effective. You can set the alert frequency (e.g., for every occurrence or a daily summary) and severity level. Finally, review and activate the Sentinel. It will immediately begin scanning blocks and notifying your team of any matches, providing a real-time safety net for your upgraded contract.

Beyond basic function calls, consider setting up Sentinels for Failed Transactions to your contract, which can indicate user interface issues or malicious probing. Monitor for Event Emissions from critical state changes like ownership transfers or pause state toggles. You can also set Balance Monitors to alert if contract ETH or ERC-20 holdings fall below a safety threshold. Combining these monitors creates a comprehensive dashboard of your contract's health and activity post-upgrade.
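The balance-threshold idea can also be sketched without any SDK, using raw JSON-RPC over fetch. The RPC URL, address, and 1 ETH floor below are placeholder assumptions; in Defender itself you would configure this as a managed Balance Monitor rather than running your own poller:

```javascript
// Sketch of a standalone balance monitor using raw JSON-RPC (no extra deps).
// The 1 ETH threshold is an arbitrary example value.
const ONE_ETH = 10n ** 18n;

function isBelowThreshold(hexBalance, thresholdWei = ONE_ETH) {
  return BigInt(hexBalance) < thresholdWei; // eth_getBalance returns hex wei
}

async function checkBalance(rpcUrl, address) {
  const res = await fetch(rpcUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      jsonrpc: '2.0', id: 1,
      method: 'eth_getBalance', params: [address, 'latest'],
    }),
  });
  const { result } = await res.json();
  if (isBelowThreshold(result)) {
    console.error(`LOW BALANCE: ${address} holds ${BigInt(result)} wei`);
  }
}
```

Using bigint throughout avoids the precision loss you would get converting wei amounts to JavaScript numbers.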

Maintaining your alert system is straightforward. The Defender dashboard shows the history of all triggered alerts, which is useful for incident review and tuning false positives. As your protocol evolves, you can pause, edit, or duplicate existing Sentinels. For maximum reliability, ensure your API keys and connected alert channels are periodically verified. This proactive monitoring layer, built in minutes with Defender, significantly reduces operational risk and provides confidence that your upgrade is performing as intended in production.

PLATFORM COMPARISON

Tenderly vs. OpenZeppelin Defender for Monitoring

A feature and capability comparison of two leading platforms for post-upgrade smart contract monitoring and alerting.

| Feature / Metric | Tenderly | OpenZeppelin Defender |
| --- | --- | --- |
| Alert Trigger Types | Transaction events, custom errors, function calls, gas usage, state changes, failed txs | Transaction events, custom errors, function calls, failed txs, time-based (cron) |
| Alert Destination | Email, Slack, Discord, Webhook, PagerDuty | Email, Slack, Discord, Webhook, Telegram, PagerDuty |
| On-Chain Automation | Yes (Web3 Actions) | Yes (Actions and Relayers) |
| Simulation & Forking | Yes (Simulator and Forks) | Limited (transaction simulation) |
| Real-time Monitoring | Yes | Yes |
| Alert Latency | < 15 seconds | < 30 seconds |
| Free Tier Monitoring | Up to 5 alerts, 1 project | Up to 3 monitored contracts, 5 actions |
| Pricing Model (Pro) | Usage-based (monitored contracts, alerts) | Team-based seats + usage (actions, relays) |
| Multi-chain Support | EVM chains + Starknet, Solana | EVM chains only |

OPERATIONAL GUIDE

A step-by-step guide to implementing custom metrics and health checks to ensure your smart contract or protocol upgrade is functioning as intended.

A successful smart contract upgrade is only the beginning. The critical phase that follows requires a robust monitoring system to verify the new deployment's health and performance in real-time. This involves moving beyond basic RPC node uptime checks to track custom on-chain metrics and protocol-specific health signals. A well-designed post-upgrade monitoring stack acts as an early warning system, catching issues like incorrect state transitions, unexpected user behavior, or degraded performance before they escalate into significant financial loss or protocol downtime.

The foundation of this system is defining the right Key Performance Indicators (KPIs) for your protocol. For a lending protocol, this might include tracking the total value locked (TVL) growth rate, the health of collateralization ratios, or the frequency of liquidations. For a DEX, you would monitor slippage, pool imbalance, or failed transaction rates. These custom metrics are often not exposed by standard infrastructure providers and require you to write specific queries or indexers. Tools like The Graph for subgraph creation or Dune Analytics for custom dashboards are essential for aggregating and visualizing this on-chain data.

Implementing automated health checks is the next step. These are scripts or services that perform regular, automated validations of your protocol's core logic. A simple example is a script that simulates a user transaction—like a swap or a deposit—on a forked mainnet environment using Foundry or Hardhat to ensure the expected state changes occur. More advanced checks could involve monitoring for specific event emissions, verifying that oracle prices are updating within expected bounds, or ensuring that access control roles are correctly configured. These checks should run on a schedule (e.g., every 5 minutes) via a cron job or a service like GitHub Actions.
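As one way to schedule these checks, a GitHub Actions workflow on a five-minute cron could look like the following; the script path scripts/health-check.js and the RPC_URL secret name are placeholders for your own project layout:

```yaml
# .github/workflows/health-check.yml -- runs a (hypothetical)
# scripts/health-check.js every 5 minutes; RPC_URL is a repository secret.
name: protocol-health-check
on:
  schedule:
    - cron: '*/5 * * * *'
  workflow_dispatch: {}

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: node scripts/health-check.js
        env:
          RPC_URL: ${{ secrets.RPC_URL }}
```

Note that scheduled workflows on shared runners can be delayed under load, so for tight SLAs a dedicated cron host or monitoring service is more dependable.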

To make this system actionable, you must integrate alerting. When a health check fails or a KPI deviates from its expected range, your team needs to be notified immediately. Connect your monitoring logic to alerting platforms like PagerDuty, Opsgenie, or even a dedicated Discord/Slack webhook. The alert should be specific: "Health Check Failed: User deposit simulation on vault 0x... reverted with error 'InsufficientLiquidity'." This allows engineers to quickly diagnose whether the issue is a bug, an edge-case market condition, or a problem with the monitoring script itself.

Finally, establish a clear runbook and escalation protocol. Document the steps to take when each type of alert fires. Who is on-call? What is the first troubleshooting step? When should you consider pausing the protocol? Having this documented and tested before an incident occurs drastically reduces mean time to resolution (MTTR). A post-upgrade monitoring system transforms deployment from a high-anxiety event into a managed, observable process, providing the confidence and data needed to iterate safely.

GUIDE

A structured playbook for developers to establish effective monitoring and alerting after a smart contract or protocol upgrade. This guide covers common failure modes, alert configuration, and escalation procedures to ensure system health.

Immediately after an upgrade, monitor for deviations from expected state and behavior. Key failure modes include:

  • State Corruption: Mismatches in critical storage variables (e.g., total supply, contract owner). Use Foundry's invariant tests (forge test) together with storage-layout checks (forge inspect) or custom scripts.
  • Function Reversion: New or modified functions failing with high frequency. Track transaction revert rates via RPC node logs or a service like Tenderly.
  • Gas Consumption Spikes: A 20-30% increase in average gas for key functions can indicate inefficiencies or bugs in the new logic.
  • Event Emission Gaps: Missing expected events (e.g., Transfer, Deposit) signals broken hooks or conditional logic.
  • Oracle/Price Feed Staleness: If your upgrade interacts with oracles like Chainlink, monitor for stale price updates which can break DeFi logic.

Establish baselines from pre-upgrade data to detect anomalies.
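The gas-consumption heuristic above is straightforward to codify against a per-function baseline. In this sketch the 25% default threshold simply splits the 20-30% range mentioned and should be tuned per function:

```javascript
// Flag functions whose average gas rose more than `thresholdPct` over the
// pre-upgrade baseline. The 25% default is illustrative only.
function findGasSpikes(baseline, current, thresholdPct = 25) {
  const spikes = [];
  for (const [fn, baseGas] of Object.entries(baseline)) {
    const nowGas = current[fn];
    if (nowGas === undefined) continue; // function removed or renamed
    const increasePct = ((nowGas - baseGas) / baseGas) * 100;
    if (increasePct > thresholdPct) {
      spikes.push({ fn, baseGas, nowGas, increasePct });
    }
  }
  return spikes;
}

// findGasSpikes({ swap: 120000 }, { swap: 156000 }) flags swap at roughly +30%.
```

The per-function averages would come from your pre- and post-upgrade transaction data (e.g. an indexer or Dune query); the comparison itself stays a pure function.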

IMPLEMENTATION

After a successful smart contract upgrade, establishing a robust monitoring and alerting framework is critical for maintaining protocol health and user trust.

A comprehensive monitoring system should track both on-chain and off-chain metrics. Key on-chain data includes contract function call volumes, gas usage patterns, and token transfer anomalies. Off-chain, you should monitor your node's health, RPC endpoint latency, and the performance of any indexers or subgraphs powering your frontend. Tools like Tenderly Alerts, OpenZeppelin Defender Sentinel, and custom scripts using Ethers.js or Viem with The Graph can automate this data collection. Setting baseline metrics before the upgrade provides a crucial reference point for detecting deviations.

Effective alerting requires defining clear thresholds and escalation paths. Configure alerts for critical failures like failed transactions from admin wallets, significant deviations in expected TVL or volume, and any unauthorized ownership or role changes. Use a multi-channel approach: send immediate, high-priority alerts for critical issues to platforms like PagerDuty or Opsgenie, while routing informational alerts (e.g., gas price spikes) to Slack or Discord channels. The goal is to minimize alert fatigue while ensuring the team is notified of genuine threats in real-time.

Beyond immediate alerts, implement a dashboard for real-time visibility. Services like Dune Analytics, Flipside Crypto, or a custom Grafana setup can visualize key health indicators. This dashboard should be accessible to the core team and, where appropriate, the community, to foster transparency. Regularly scheduled post-mortem analyses of any triggered alerts, even false positives, help refine your monitoring rules and improve system resilience. This process turns monitoring from a passive activity into an active feedback loop for protocol improvement.

Your monitoring strategy must also account for the unique risks of the upgrade mechanism itself. If you used a Transparent Proxy pattern, monitor the proxy admin ownership. For UUPS upgrades, watch for any calls to the upgradeTo function. In diamond proxy (EIP-2535) architectures, track facet changes and diamondCut events. This layer of upgrade-specific surveillance ensures the security of the upgrade mechanism remains intact long after the initial deployment.
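For UUPS proxies specifically, upgrade calls can be flagged straight from raw calldata, because the function selectors are fixed by the standard; this sketch covers detection only and leaves alert delivery out:

```javascript
// Detect calls to UUPS upgrade functions by their 4-byte selectors:
// upgradeTo(address) = 0x3659cfe6, upgradeToAndCall(address,bytes) = 0x4f1ef286.
const UPGRADE_SELECTORS = ['0x3659cfe6', '0x4f1ef286'];

function isUpgradeCall(calldata) {
  const data = calldata.toLowerCase();
  return UPGRADE_SELECTORS.some((sel) => data.startsWith(sel));
}

// In a live monitor you would run this over pending/confirmed transactions
// sent to the proxy address and alert on any match.
// isUpgradeCall('0x3659cfe6' + '00'.repeat(32)) -> true
// isUpgradeCall('0xa9059cbb' + '00'.repeat(64)) -> false (ERC-20 transfer)
```

Selector matching is cheap enough to run on every transaction to the proxy, and it complements (rather than replaces) watching for the Upgraded event after the fact.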

Finally, document your monitoring runbooks and ensure they are integrated into your team's incident response protocol. Define clear roles: who acknowledges alerts, who investigates, and who executes fixes. Practice responding to simulated alerts to ensure the process is smooth. A well-monitored protocol builds confidence with users and stakeholders, demonstrating a long-term commitment to security and reliability beyond the initial upgrade event.