How to Prepare Benchmark Reports for Partners

A technical guide for developers on creating standardized, data-driven benchmark reports to evaluate and communicate blockchain infrastructure performance to partners.
INTRODUCTION

A guide to creating clear, data-driven reports that demonstrate protocol performance and value to stakeholders.

A benchmark report is a structured document that quantifies and communicates the performance, security, and reliability of a blockchain protocol or decentralized application. For partners—including investors, integrators, and governance participants—these reports provide objective, verifiable data to inform decisions. Unlike marketing materials, a benchmark report is grounded in on-chain metrics, network statistics, and comparative analysis, offering a transparent view of a protocol's health and operational efficiency. The goal is to build trust through evidence, not assertions.

Effective reports focus on key performance indicators (KPIs) relevant to the partner's interests. For a DeFi lending protocol, this might include total value locked (TVL) growth, borrow/lend utilization rates, and liquidation event frequency. For a Layer 1 or Layer 2 blockchain, core metrics include transaction throughput (TPS), average gas fees, time-to-finality, and validator/delegator decentralization. Selecting the right KPIs requires understanding what your partner values: is it user growth, network stability, capital efficiency, or security resilience?

Data collection is the foundation. You must gather information from reliable, transparent sources. Primary sources include on-chain data via nodes or indexers (e.g., using The Graph), block explorer APIs, and the protocol's own smart contract event logs. Secondary sources can include aggregated data from platforms like DeFiLlama or Token Terminal for comparative context. All data should be timestamped and include the methodology for its collection to ensure reproducibility and auditability. Never rely on estimates or unaudited internal dashboards for partner reports.

The structure of the report should guide the reader from high-level insights to granular detail. A standard flow includes: an Executive Summary with top-line findings, a Methodology section detailing data sources and timeframes, a Performance Analysis with charts and commentary on selected KPIs, a Comparative Benchmarking section (e.g., vs. mainnet or competing protocols), and a Risk & Outlook assessment. Use clear visualizations like time-series graphs for trends and bar charts for comparisons. Tools like Dune Analytics dashboards can be embedded or referenced.

Finally, contextualize the data with narrative insight. A spike in transaction volume is a data point; explaining it was driven by a specific new dApp launch or incentive program provides valuable context. Similarly, note any anomalies or incidents—such as a temporary spike in failed transactions due to network congestion—and explain the protocol's response. This demonstrates operational awareness and maturity. Conclude with actionable takeaways and, if applicable, proposed next steps for the partnership based on the performance data presented.

PREREQUISITES

Before generating a benchmark report, ensure you have the correct data access, tools, and understanding of the metrics you'll be analyzing.

To create a meaningful benchmark report, you first need data access. This typically involves obtaining a valid API key from Chainscore and configuring it in your environment. You'll use this key to programmatically fetch node performance data, including metrics like latency, uptime, and error rates. Ensure your key has the necessary permissions for the specific chains and endpoints you intend to benchmark, such as Ethereum Mainnet or Arbitrum.

Next, set up your development environment. You will need a working installation of Node.js (v18 or later) and a package manager like npm or yarn. Install the official @chainscore/node-sdk library, which provides the core functions for querying performance data. A basic project structure with a script to call the API is essential. For example, your script might initialize the SDK client with your API key and target a specific RPC provider URL.
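To make this concrete, here is a minimal sketch of such a script. The client class, constructor options, and getNodeMetrics call are illustrative placeholders rather than the SDK's confirmed API, and the provider URL is a dummy value; consult the @chainscore/node-sdk documentation for the real signatures.

```typescript
// Hypothetical sketch only: class, option, and method names are illustrative,
// not the confirmed @chainscore/node-sdk API.
import { ChainscoreClient } from "@chainscore/node-sdk";

const client = new ChainscoreClient({
  apiKey: process.env.CHAINSCORE_API_KEY ?? "", // key configured in your environment
});

async function main() {
  // Fetch recent node performance data for one RPC provider on Ethereum Mainnet.
  const metrics = await client.getNodeMetrics({
    chain: "ethereum-mainnet",
    providerUrl: "https://rpc.example-provider.io", // placeholder endpoint
    metrics: ["latency", "uptime", "errorRate"],
  });
  console.log(metrics);
}

main().catch(console.error);
```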

Understanding the key performance indicators (KPIs) is crucial before you start. A standard benchmark report should analyze several core metrics: latency (response time in milliseconds), success rate (percentage of successful requests), consistency (variation in performance over time), and block propagation speed. You should decide which metrics are most relevant to your partner's use case, whether it's for high-frequency trading, NFT minting, or general dApp interactions.

You must also identify your comparison set. A benchmark is only useful when measured against something. Define the RPC endpoints or node providers you will compare. This could include public endpoints, centralized providers like Infura or Alchemy, and other decentralized services. Clearly document the URLs and any authentication details for each endpoint you plan to test to ensure consistent and fair comparisons throughout your data collection period.

Finally, plan your data collection methodology. Determine the duration (e.g., 24 hours, 7 days), request volume, and types of calls (e.g., eth_blockNumber, eth_getLogs). Use the SDK's methods to schedule calls and log results to a database or file system. Consider using a tool like cron for periodic execution. Properly structured raw data is the foundation for the insightful charts and analysis you will present in the final report to your partners.
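As a starting point, the sketch below samples eth_blockNumber latency for a small comparison set and appends one JSON line per measurement. The endpoint URLs are placeholders to replace with the providers you documented above, and the scheduling (a single pass, repeated via cron) is an assumption about your setup.

```typescript
// Latency sampler for a defined comparison set of RPC endpoints.
// Endpoint URLs are placeholders; substitute the providers you plan to compare.
import { appendFileSync } from "node:fs";
import { performance } from "node:perf_hooks";

const endpoints = [
  { name: "provider-a", url: "https://eth-mainnet.example-a.io/v2/YOUR_KEY" },
  { name: "provider-b", url: "https://eth-mainnet.example-b.io/YOUR_KEY" },
  { name: "public-rpc", url: "https://cloudflare-eth.com" },
];

async function sampleOnce(name: string, url: string): Promise<void> {
  const start = performance.now();
  let ok = false;
  try {
    const res = await fetch(url, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
    });
    const body = (await res.json()) as { result?: string };
    ok = res.ok && body.result !== undefined;
  } catch {
    ok = false; // network error or timeout counts as a failed request
  }
  const latencyMs = performance.now() - start;
  // One JSON line per sample: easy to load into a dataframe or database later.
  appendFileSync(
    "samples.jsonl",
    JSON.stringify({ ts: new Date().toISOString(), endpoint: name, ok, latencyMs }) + "\n"
  );
}

async function run(): Promise<void> {
  for (const e of endpoints) {
    await sampleOnce(e.name, e.url);
  }
}

run(); // schedule periodic passes with cron for a multi-day collection window
```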

ACTIONABLE GUIDE

A structured approach to creating clear, data-driven benchmark reports that effectively communicate blockchain performance to external stakeholders.

A partner-facing benchmark report is a formal deliverable that translates raw performance data into actionable business intelligence. Its primary goal is to provide a transparent, verifiable, and objective assessment of a blockchain's capabilities, such as transaction throughput (TPS), finality times, gas costs, and network reliability. Unlike internal reports, these documents must be accessible to a non-technical audience while maintaining the technical rigor required for validation. Key elements include an executive summary, methodology, results, and clear conclusions that directly address the partner's use case requirements.

The foundation of a credible report is a rigorous and documented methodology. You must explicitly define the test environment, including the network configuration (mainnet, testnet, private network), client software and versions (e.g., Geth v1.13.0, Erigon v2.60), and hardware specifications. Detail the benchmarking tools used, such as Hyperledger Caliper, Blockbench, or custom scripts, and justify their selection. Crucially, document the workload profile: the mix of transaction types (simple transfers, ERC-20 transfers, NFT mints), the transaction send rate, and the duration of the test run. This transparency allows partners to replicate results and understand the context of the data.

Data presentation is critical for clarity. Use a combination of tables for raw metrics and graphs for trends. Essential graphs include throughput vs. latency (showing the point of network saturation), latency distribution (e.g., a histogram showing 95th percentile confirmation times), and gas cost over time. Annotate graphs to highlight key observations, such as "Network saturates at ~2,500 TPS, with latency increasing exponentially beyond this point." Always present data with proper units and confidence intervals where applicable. Include a dedicated section for anomalies and edge cases, explaining any observed spikes in latency or failed transactions, as this builds trust through honesty.

The report must contextualize the results. Don't just state that a chain achieved 5,000 TPS. Explain what this means for the partner's application: "This throughput comfortably covers the roughly 125,000 daily user transactions projected for your loyalty token system, with an average confirmation time of 2.1 seconds." Compare results against relevant benchmarks, such as the partner's current infrastructure, industry averages, or competitor chains, but ensure comparisons are fair and like-for-like. Conclude with a clear, objective summary of strengths (e.g., low-cost batch transactions) and limitations (e.g., higher latency during peak load) to set realistic expectations for the partnership's technical feasibility.

PERFORMANCE & SECURITY

Core Benchmarking Metrics and Targets

Key quantitative and qualitative metrics to evaluate and compare blockchain infrastructure providers.

| Metric / Target | Tier 1 (Enterprise) | Tier 2 (Production) | Tier 3 (Development) |
|---|---|---|---|
| RPC Request Success Rate | 99.9% | 99.5% | 98.0% |
| P95 Latency (ms) | < 100 ms | < 250 ms | < 500 ms |
| Concurrent Connections | Unlimited | Up to 500 | Up to 100 |
| Historical Data Access | | | |
| Archive Node Access | | | |
| MEV Protection | | | |
| SLA & Uptime Guarantee | 99.95% | 99.5% | |
| Monthly Request Limit | Unmetered | 10M requests | 1M requests |
| Priority Support | | | |

PREPARATION

Step 1: Define Scope and Set Up Environment

Effective benchmark reports start with a clear definition of what you're measuring and a properly configured development environment. This step ensures your analysis is relevant, reproducible, and actionable for partners.

Before writing a single line of code, you must define the scope of your benchmark. This involves specifying the exact components you will test. For a blockchain benchmark, this typically includes the smart contract (e.g., a Uniswap V3 pool, an ERC-20 token), the network (e.g., Ethereum Mainnet, Arbitrum, a local testnet), and the operations (e.g., swapExactTokensForTokens, mint, transfer). Clearly document the contract addresses, ABI, and the specific function signatures you will be calling. This clarity prevents scope creep and ensures your partner knows precisely what is being evaluated.
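One lightweight way to capture this scope is a small, version-controlled configuration file checked in alongside the benchmark code. The sketch below is illustrative only: the addresses are placeholders and the field names are simply one reasonable layout, not a required schema.

```typescript
// Illustrative scope definition, version-controlled next to the benchmark code.
// Addresses and signatures are placeholders; record the real values you test.
interface BenchmarkScope {
  network: string;                                  // e.g. "ethereum-mainnet", "arbitrum-one"
  contracts: { label: string; address: string }[];  // exact contracts under test
  operations: string[];                             // exact function signatures exercised
}

export const scope: BenchmarkScope = {
  network: "ethereum-mainnet",
  contracts: [
    { label: "ERC-20 token under test", address: "0xYourTokenAddress" },
    { label: "DEX router under test", address: "0xYourRouterAddress" },
  ],
  operations: [
    "transfer(address,uint256)",
    "swapExactTokensForTokens(uint256,uint256,address[],address,uint256)",
  ],
};
```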

Next, set up a robust and isolated development environment. We recommend using a framework like Hardhat or Foundry for local testing and simulation. Install the necessary dependencies: @nomicfoundation/hardhat-toolbox, ethers.js v6, and any protocol-specific SDKs. Configure your hardhat.config.js to connect to both a forked mainnet node (using a service like Alchemy or Infura) for realistic gas and state simulation, and a local Hardhat Network for rapid iteration. This dual setup allows you to develop and debug against real-world conditions without spending gas.
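A minimal hardhat.config.ts for this dual setup might look like the following, assuming the forking URL is supplied through a MAINNET_RPC_URL environment variable (an assumed name) pointing at your Alchemy or Infura endpoint.

```typescript
// hardhat.config.ts: minimal sketch of the dual setup described above.
// MAINNET_RPC_URL is an assumed environment variable holding your Alchemy/Infura URL.
import { HardhatUserConfig } from "hardhat/config";
import "@nomicfoundation/hardhat-toolbox";

const config: HardhatUserConfig = {
  solidity: "0.8.24",
  networks: {
    // Local Hardhat Network forked from mainnet for realistic state and gas behaviour.
    hardhat: {
      forking: {
        url: process.env.MAINNET_RPC_URL ?? "",
        // blockNumber: 19000000, // optionally pin a block for reproducible runs
      },
    },
  },
};

export default config;
```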

A critical part of environment setup is data collection tooling. You will need libraries to capture metrics. For on-chain metrics, use ethers Provider methods to fetch block numbers, gas prices, and transaction receipts. For performance timing within your test scripts, use Node.js's native performance.now() or console.time() functions. Structure your project with separate directories for scripts/ (benchmark runners), utils/ (metric calculators), and artifacts/ (saved results). This organization makes your benchmark suite maintainable and easy for partners to audit.
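The sketch below shows one way to combine these pieces with ethers v6: it times an operation with performance.now(), then records the current block number and gas price alongside the result. RPC_URL is an assumed environment variable, and the artifacts/ output path follows the project layout suggested above.

```typescript
// Sketch of a metric collector built on ethers v6 and performance.now().
import { JsonRpcProvider } from "ethers";
import { writeFileSync } from "node:fs";
import { performance } from "node:perf_hooks";

const provider = new JsonRpcProvider(process.env.RPC_URL);

interface Sample {
  op: string;
  latencyMs: number;
  blockNumber: number;
  gasPriceWei: string;
}

async function timeOperation(op: string, fn: () => Promise<unknown>): Promise<Sample> {
  const start = performance.now();
  await fn();                                   // the operation being benchmarked
  const latencyMs = performance.now() - start;
  const blockNumber = await provider.getBlockNumber();
  const feeData = await provider.getFeeData();  // current gas price for context
  return { op, latencyMs, blockNumber, gasPriceWei: (feeData.gasPrice ?? 0n).toString() };
}

async function main(): Promise<void> {
  const samples: Sample[] = [];
  samples.push(await timeOperation("getBlock(latest)", () => provider.getBlock("latest")));
  writeFileSync("artifacts/latest-run.json", JSON.stringify(samples, null, 2));
}

main().catch(console.error);
```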

Finally, establish your success metrics and baselines. What defines a "good" result? Common metrics include average gas cost per operation, transaction latency from broadcast to confirmation, and throughput (operations per second) under load. If possible, gather baseline metrics from the live network or a previous report. For example, note that a simple ERC-20 transfer costs approximately 45,000 gas on Ethereum. Defining these targets upfront gives your analysis a clear frame of reference and makes the final report's conclusions much more impactful.

METHODOLOGY

Step 2: Design Representative Workloads

A benchmark is only as useful as the workloads it tests. This step defines the specific, realistic transaction patterns used to evaluate a partner's node performance.

The core objective is to simulate the real-world traffic a node will encounter in production. Generic tests like simple balance checks are insufficient. Instead, you must model the specific interactions of your application's users. For a DeFi protocol, this means workloads heavy in eth_call queries for price data and eth_sendRawTransaction for swaps. An NFT marketplace's workload would prioritize eth_getLogs for event filtering and transaction submissions for minting and trading. The key is to answer: what RPC methods, at what frequency, and in what sequences do your dApp's smart contracts and frontend actually generate?

A representative workload is defined by several key parameters. Request Mix specifies the percentage distribution of different JSON-RPC calls (e.g., 60% eth_call, 30% eth_getBlockByNumber, 10% eth_sendRawTransaction). Load Pattern determines the request rate: constant, spiky (to simulate user activity peaks), or increasing ramps. Data Complexity involves the specificity of queries, such as calling a contract with a large state or filtering logs across a 10,000-block range. Bespoke scripts built on standard web3 libraries can replay recorded request streams against the target endpoint to create highly authentic load.
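One way to pin these parameters down is a declarative specification that travels with both the benchmark scripts and the final report. The structure and numbers below are purely illustrative; derive your own mix from recorded production traffic.

```typescript
// Declarative workload specification mirroring the parameters described above.
// All numbers are illustrative; derive your own mix from recorded production traffic.
interface WorkloadSpec {
  requestMix: Record<string, number>;   // fraction of total requests per RPC method
  loadPattern: {
    shape: "constant" | "spike" | "ramp";
    targetRps: number;                  // requests per second at peak
    durationSec: number;
  };
  dataComplexity: {
    logRangeBlocks: number;             // widest eth_getLogs window exercised
    heavyStateContracts: string[];      // placeholder addresses of large-state contracts
  };
}

export const defiWorkload: WorkloadSpec = {
  requestMix: {
    eth_call: 0.6,
    eth_getBlockByNumber: 0.3,
    eth_sendRawTransaction: 0.1,
  },
  loadPattern: { shape: "spike", targetRps: 200, durationSec: 3600 },
  dataComplexity: {
    logRangeBlocks: 10_000,
    heavyStateContracts: ["0xYourLargeStateContract"],
  },
};
```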

For actionable benchmarks, you must instrument your workload to collect the metrics that matter. Beyond simple latency, track tail latency (p95, p99), which often reveals instability that average latency hides. Measure error rates per RPC method and consistency across identical test runs. A critical, often-missed metric is state growth impact: how does performance degrade as the node's historical state increases after processing thousands of benchmark transactions? Documenting the exact workload configuration—including chain ID, starting block, contract addresses used, and the raw script—is essential for reproducibility and partner review.
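A short sketch of the aggregation helpers follows, using the nearest-rank method for percentiles; the sample latencies at the bottom are illustrative.

```typescript
// Aggregation helpers for tail latency and per-method error rate.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) return NaN;
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: smallest value with at least p% of samples at or below it.
  const idx = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.min(idx, sorted.length - 1)];
}

function errorRate(results: { ok: boolean }[]): number {
  if (results.length === 0) return 0;
  return results.filter((r) => !r.ok).length / results.length;
}

// Illustrative latencies (ms) for one RPC method during one run.
const latenciesMs = [42, 38, 55, 47, 1200, 44, 51, 39, 61, 48];
console.log("p95 latency (ms):", percentile(latenciesMs, 95)); // the tail exposes the outlier
console.log("p99 latency (ms):", percentile(latenciesMs, 99));
console.log("error rate:", errorRate([{ ok: true }, { ok: true }, { ok: false }]));
```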

Finally, validate that your designed workload aligns with the Service Level Objectives (SLOs) you aim to test. If your SLO requires 99.9% uptime, the workload must run long enough to detect potential outages. If you need sub-200ms response for eth_call, ensure your test environment's baseline network latency is accounted for. Share the finalized workload specification with your node partner before testing begins. This transparency ensures they understand the evaluation criteria and can optimize their infrastructure for your specific use case, turning the benchmark from a generic check into a valuable collaboration tool.

BENCHMARKING WORKFLOW

Step 3: Execute Tests and Collect Data

This step details the practical execution of your benchmark tests and the systematic collection of performance data.

With your test environment configured and your scenarios defined, you can begin the execution phase. The goal is to run your test suite against the target blockchain network or protocol and gather raw performance data. This involves triggering the pre-defined transactions—such as token swaps, NFT mints, or contract deployments—and recording the system's response. Use your chosen tool (e.g., a custom script, Hyperdrive, or Tatum) to automate this process, ensuring each test run is isolated and repeatable. It is critical to run tests during periods of representative network activity to capture realistic performance, not just during off-peak times.

Data collection must be comprehensive and structured. For each transaction, you should capture key metrics including: latency (time from broadcast to finality), throughput (transactions per second, TPS), success rate, gas fees (or equivalent), and error types. Store this data in a time-series database or structured log files, tagging each entry with metadata like the test scenario ID, network conditions (e.g., current base fee), and a timestamp. This granularity is essential for later analysis, allowing you to correlate performance dips with specific network events or test parameters.
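One possible shape for a single collected data point, carrying the metadata tags described above, is sketched below; the field names are one reasonable layout rather than a required schema.

```typescript
// One possible shape for a collected data point, tagged with run metadata.
// Field names are illustrative, not a required schema.
interface TxSample {
  scenarioId: string;     // e.g. "swap-load-50tps"
  txHash: string;
  broadcastAt: string;    // ISO timestamp when the transaction was sent
  finalizedAt: string;    // ISO timestamp when finality (or target confirmation) was reached
  latencyMs: number;      // finalizedAt minus broadcastAt
  success: boolean;
  gasUsed: string;
  baseFeeGwei: number;    // network condition at send time
  error?: string;         // populated only for failed or dropped transactions
}
```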

To ensure statistical significance, you must run each test scenario multiple times. A single run can be skewed by a transient network spike or a mempool anomaly. Establish a clear run protocol: for example, execute each load level (e.g., 10, 50, 100 TPS) for a fixed duration (like 5 minutes) and repeat this cycle 3-5 times. Calculate average values and standard deviations for your core metrics from these repeated runs. This rigorous approach transforms anecdotal observations into reliable, quantitative data that forms the foundation of a credible benchmark report for your partners.
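The aggregation itself is straightforward: compute the mean and sample standard deviation of each core metric across the repeated runs. A minimal sketch, with illustrative per-run TPS values:

```typescript
// Mean and sample standard deviation across repeated runs of one load level.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function stdDev(xs: number[]): number {
  const m = mean(xs);
  // Sample standard deviation (n - 1 in the denominator).
  return Math.sqrt(xs.reduce((acc, x) => acc + (x - m) ** 2, 0) / (xs.length - 1));
}

// Illustrative: average TPS measured in five repeated 5-minute runs at the 50 TPS level.
const tpsPerRun = [48.2, 49.1, 47.8, 48.9, 48.5];
console.log(`TPS: ${mean(tpsPerRun).toFixed(1)} ± ${stdDev(tpsPerRun).toFixed(1)}`);
```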

REPORT COMPONENTS

Standard Report Structure Template

Essential sections for a comprehensive blockchain network benchmark report.

| Section | Purpose | Key Metrics | Recommended Depth |
|---|---|---|---|
| Executive Summary | High-level findings and recommendations for leadership | TL;DR performance score, top 3 risks, main recommendation | 1 page |
| Methodology & Scope | Defines how and what was measured | Test duration, node count, network conditions, tooling version | Detailed |
| Network Performance | Core throughput and latency analysis | TPS, block time, finality time, P2P latency | Granular, with charts |
| Node Performance | Resource utilization and stability | CPU/RAM usage, disk I/O, sync time, crash count | Per-node breakdown |
| Smart Contract Execution | EVM/WASM runtime efficiency | Gas costs for standard ops, contract deployment time, error rate | Benchmark-specific |
| Economic Analysis | Cost structure for validators/users | Staking APR, transaction fees, slashing conditions, inflation rate | Model with projections |
| Security & Decentralization | Network resilience and trust assumptions | Validator distribution, governance participation, client diversity | Risk-focused |
| Comparative Analysis | Context vs. other Layer 1/Layer 2 networks | Relative TPS, cost, finality, using normalized benchmarks | Data table summary |
| Conclusion & Recommendations | Actionable insights for the partner | Priority areas for improvement, upgrade suggestions, risk mitigation | Bulleted list |

ANALYSIS & COMMUNICATION

Step 4: Visualize Data and Draft the Report

Transform raw blockchain metrics into actionable insights through effective visualization and structured reporting for partners.

Effective data visualization is critical for translating complex on-chain metrics into clear, actionable insights for partners. Start by selecting the right chart types for your data: use line charts for trends over time (e.g., daily active addresses), bar charts for comparisons (e.g., transaction volume per chain), and pie charts for composition (e.g., protocol market share). Tools like Dune Analytics dashboards, Flipside Crypto, or custom libraries like Plotly and Chart.js are industry standards. Ensure all visualizations include clear labels, a consistent color scheme, and highlight key data points, such as significant spikes in volume or sudden changes in user behavior.
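As a small illustration, the following Chart.js sketch renders a P95 latency time series. It assumes a browser context with a canvas element whose id is latency-chart; the labels, data points, and color are placeholders.

```typescript
// Minimal Chart.js sketch for a P95 latency time series. Assumes a browser
// context with <canvas id="latency-chart">; labels and data are placeholders.
import { Chart, registerables } from "chart.js";

Chart.register(...registerables);

const ctx = document.getElementById("latency-chart") as HTMLCanvasElement;

new Chart(ctx, {
  type: "line",
  data: {
    labels: ["Mon", "Tue", "Wed", "Thu", "Fri"],
    datasets: [
      {
        label: "P95 latency (ms)",
        data: [120, 135, 128, 410, 140], // the Thursday spike deserves an annotation
        borderColor: "#2563eb",
      },
    ],
  },
  options: {
    scales: { y: { title: { display: true, text: "Latency (ms)" } } },
  },
});
```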

When drafting the report, structure it to tell a coherent story. Begin with an executive summary that outlines the core findings and their business implications. Follow with a methodology section detailing your data sources (e.g., specific subgraphs, RPC nodes), timeframes, and any assumptions made. The main analysis section should pair each visualization with concise commentary. For example, a chart showing a decline in Total Value Locked (TVL) should be accompanied by an analysis of potential causes, such as a competing protocol launch or a broader market downturn. Use inline code formatting for specific metric names like daily_tx_count or contract addresses.

Incorporate comparative benchmarks to provide context. Instead of stating "Protocol X processed 50,000 transactions," frame it as "Protocol X processed 50,000 transactions, a 15% increase from last quarter but 40% below the leading competitor, Protocol Y." Reference external data from sources like DeFi Llama or Token Terminal to validate your findings. This demonstrates a comprehensive market understanding and strengthens the report's authority. Always cite your data sources with direct links, such as a Dune query or an Etherscan contract page.

The final report should be both a diagnostic tool and a strategic document. Conclude with a key takeaways section that distills insights into bullet points for quick reference, and an actionable recommendations section. Recommendations might include optimizing gas costs for users, exploring integration with a new Layer 2 to reduce fees, or adjusting incentive structures based on user retention data. Deliver the report in a partner-friendly format, typically a PDF or a shared Notion/GDoc, ensuring all charts are high-resolution and interactive links are preserved for deeper exploration.

BENCHMARK REPORTS

Frequently Asked Questions

Common questions and technical guidance for creating and interpreting blockchain performance reports for partners.

What data sources should a benchmark report draw on?

A robust benchmark report requires data from multiple, verifiable sources to ensure accuracy and prevent manipulation. Primary sources should include:

  • On-chain data: Direct queries to node RPC endpoints (e.g., Ethereum, Arbitrum, Polygon) for block times, gas fees, and transaction finality.
  • Indexed data: Services like The Graph, Dune Analytics, or Covalent for historical and aggregated metrics (e.g., TVL, daily active addresses).
  • Node provider APIs: Data from infrastructure providers like Alchemy, Infura, or QuickNode for reliability and latency metrics.
  • Explorers: Block explorers (Etherscan, Arbiscan) for transaction success rates and contract interactions.

Always cite your sources and timestamps. For example, rather than quoting TPS from a chain's marketing materials, calculate it yourself as (total transactions across a block window) / (elapsed time between the first and last block) over a 24-hour period using raw RPC data.
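A minimal sketch of that calculation against a raw JSON-RPC endpoint is shown below. RPC_URL is an assumed environment variable, and the 300-block window is illustrative; roughly 7,200 blocks would cover 24 hours on Ethereum Mainnet.

```typescript
// Average TPS over a recent block window, calculated from raw JSON-RPC data.
// RPC_URL is an assumed environment variable pointing at an Ethereum endpoint.
async function rpc(method: string, params: unknown[]): Promise<any> {
  const res = await fetch(process.env.RPC_URL ?? "", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  return ((await res.json()) as { result: any }).result;
}

async function averageTps(blockWindow: number): Promise<number> {
  const latest = parseInt(await rpc("eth_blockNumber", []), 16);
  const first = latest - blockWindow;

  // Sum transaction counts for every block in the window.
  let txCount = 0;
  for (let n = first + 1; n <= latest; n++) {
    const hexCount = await rpc("eth_getBlockTransactionCountByNumber", ["0x" + n.toString(16)]);
    txCount += parseInt(hexCount, 16);
  }

  // Elapsed wall-clock time between the first and last block in the window.
  const firstBlock = await rpc("eth_getBlockByNumber", ["0x" + first.toString(16), false]);
  const latestBlock = await rpc("eth_getBlockByNumber", ["0x" + latest.toString(16), false]);
  const elapsedSec = parseInt(latestBlock.timestamp, 16) - parseInt(firstBlock.timestamp, 16);

  return txCount / elapsedSec;
}

averageTps(300).then((tps) => console.log(`Average TPS over last 300 blocks: ${tps.toFixed(2)}`));
```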

IMPLEMENTATION

Conclusion and Next Steps

This guide has outlined the core methodology for creating actionable blockchain benchmark reports. The next step is to operationalize this framework for your partners.

To begin, formalize your reporting process by creating a standardized template. This template should include sections for the executive summary, methodology (detailing the testnet, RPC endpoints, and load testing tools used), performance metrics (TPS, latency, gas costs), and actionable recommendations. Using a consistent format ensures partners can easily compare reports over time and builds trust through transparency. Tools like Notion, Google Docs, or a custom dashboard can serve as effective delivery platforms.

Next, establish a clear data pipeline. Automate data collection where possible using scripts that interact with node JSON-RPC or WebSocket APIs. For example, a script could periodically call eth_blockNumber and eth_getBlockByNumber to calculate real-time TPS and block time. Store this data in a time-series database like TimescaleDB or InfluxDB. This automation reduces manual error and provides a reliable dataset for generating the charts and graphs that make your report visually compelling and easy to digest.
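For example, a lightweight poller along these lines can emit one JSON line per minute with the current block number, block time, and approximate TPS, ready to be loaded into TimescaleDB or InfluxDB. RPC_URL is an assumed environment variable and the output file name is arbitrary.

```typescript
// Periodic sampler: every minute, record block number, block time, and the
// approximate TPS of the latest block as one JSON line for later ingestion.
import { appendFileSync } from "node:fs";
import { JsonRpcProvider } from "ethers";

const provider = new JsonRpcProvider(process.env.RPC_URL);

let previous: { number: number; timestamp: number } | null = null;

async function sample(): Promise<void> {
  const block = await provider.getBlock("latest");
  if (!block) return;
  if (previous && block.number > previous.number) {
    const avgBlockTimeSec = (block.timestamp - previous.timestamp) / (block.number - previous.number);
    const latestBlockTps = block.transactions.length / avgBlockTimeSec; // approximate
    appendFileSync(
      "metrics.jsonl",
      JSON.stringify({
        ts: new Date().toISOString(),
        blockNumber: block.number,
        avgBlockTimeSec,
        latestBlockTps: Number(latestBlockTps.toFixed(2)),
      }) + "\n"
    );
  }
  previous = { number: block.number, timestamp: block.timestamp };
}

setInterval(() => void sample(), 60_000); // or run one-shot from cron
```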

Finally, schedule regular review sessions with your partners. A benchmark report is not a static document but the starting point for a technical dialogue. Present your findings, focusing on the key bottlenecks identified—such as high gas volatility during congestion or increased latency with certain RPC providers—and discuss potential solutions. This collaborative approach transforms raw data into a strategic roadmap, helping partners optimize their dApp deployment, choose the right infrastructure, and ultimately build more resilient and performant applications on-chain.
