
How to Mitigate Abuse of Public RPC Endpoints

A technical guide for node operators on implementing rate limiting, authentication, and monitoring to protect public RPC endpoints from spam, DDoS, and resource exhaustion attacks.
Chainscore © 2026
introduction
SECURITY & RELIABILITY

The Problem with Public RPC Endpoints

Public RPC endpoints are convenient but expose your application to significant security and performance risks. This guide explains the vulnerabilities and provides concrete mitigation strategies.

Public Remote Procedure Call (RPC) endpoints are the default connection point for most Web3 applications. Services like Infura, Alchemy, and public nodes provided by chains like Ethereum and Polygon offer these endpoints for free. While they lower the barrier to entry, they introduce critical weaknesses: shared rate limits, censorship risk, and single points of failure. An attacker can easily scrape a public endpoint URL from your frontend code and abuse it, leading to throttled requests for all your users or exhausting your allotted request quota, which can cripple your application's functionality.

The core issue is the lack of authentication. Anyone with the endpoint URL can send requests, making it trivial to launch Denial-of-Service (DoS) attacks or spam the network with computationally expensive calls like eth_getLogs. Furthermore, relying on a single public provider creates a centralization vector; if that provider experiences downtime or decides to censor certain transactions, your application goes down with it. For production applications handling real value, this is an unacceptable risk profile that compromises reliability and user trust.

To mitigate these risks, you must implement access control. The most effective method is to proxy requests through your own backend server. This keeps your primary RPC URL secret. A simple Node.js/Express proxy looks like this:

javascript
app.use(express.json()); // parse incoming JSON-RPC bodies before proxying

app.post('/rpc-proxy', async (req, res) => {
  try {
    // Forward the JSON-RPC payload to the private upstream endpoint
    const response = await fetch('https://your-secret-rpc-endpoint', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(req.body)
    });
    const data = await response.json();
    res.json(data);
  } catch (err) {
    // Surface upstream failures as a 502 instead of hanging the client
    res.status(502).json({ error: 'upstream RPC request failed' });
  }
});

Your frontend then calls /rpc-proxy instead of the public endpoint directly, allowing you to implement rate limiting, API keys, or IP whitelisting on your server.
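Once the proxy is the single choke point, a rate limit can be bolted on in front of the proxy handler. The sketch below is illustrative (the in-memory window store, limits, and `createIpLimiter` name are assumptions, not part of the original proxy); production deployments usually reach for a library like express-rate-limit or a shared store such as Redis so limits survive restarts and scale across instances.

```javascript
// Fixed-window limiter: at most `limit` requests per IP per `windowMs`.
// In-memory only -- assumes a single proxy instance.
function createIpLimiter({ windowMs = 60_000, limit = 100 } = {}) {
  const hits = new Map(); // ip -> { count, windowStart }

  return function ipLimiter(req, res, next) {
    const ip = req.ip || req.socket.remoteAddress;
    const now = Date.now();
    const entry = hits.get(ip);

    if (!entry || now - entry.windowStart >= windowMs) {
      // New IP, or the previous window has elapsed: start a fresh window
      hits.set(ip, { count: 1, windowStart: now });
      return next();
    }
    if (entry.count >= limit) {
      return res.status(429).json({ error: 'rate limit exceeded' });
    }
    entry.count += 1;
    next();
  };
}

// Hypothetical usage with the proxy route:
// app.post('/rpc-proxy', createIpLimiter({ limit: 50 }), proxyHandler);
```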

For enhanced reliability, implement RPC failover and load balancing. Use multiple providers (e.g., Alchemy, Infura, Chainstack, or your own node) and configure your client, like Ethers.js or Viem, to rotate between them if one fails. The Viem client supports this natively:

typescript
import { createPublicClient, fallback, http } from 'viem';
import { mainnet } from 'viem/chains';

const client = createPublicClient({
  chain: mainnet,
  transport: fallback([
    http('https://eth-mainnet.g.alchemy.com/v2/KEY_1'),
    http('https://mainnet.infura.io/v3/KEY_2'),
  ])
});

This setup provides redundancy, improving uptime and distributing request load.

For the highest security tier, especially for transaction submission, consider running your own full node or validator client. While resource-intensive, it gives you complete control over your RPC endpoint, eliminates third-party dependency, and ensures maximum censorship resistance. Services like Chainscore provide a middle ground, offering managed, authenticated endpoints with enhanced performance monitoring and reliability guarantees, abstracting away the complexity of node operations while providing enterprise-grade security.

In summary, treat your RPC layer as critical infrastructure. Move away from exposed public endpoints by: 1) Proxying requests through an authenticated backend, 2) Implementing multi-provider failover, and 3) Monitoring performance and errors. These steps will secure your application from abuse, ensure consistent performance for your users, and protect your project from preventable downtime.

prerequisites
PREREQUISITES

Public RPC endpoints are essential for blockchain access but are vulnerable to spam, rate limiting attacks, and data scraping. This guide outlines practical strategies to protect your infrastructure.

Public Remote Procedure Call (RPC) endpoints provide the primary interface for applications to interact with blockchain networks like Ethereum, Polygon, and Solana. While essential for development and user accessibility, these open endpoints are frequent targets for abuse. Common attack vectors include spamming requests to exhaust rate limits, Sybil attacks that simulate thousands of users, and data scraping for front-running or analytics. Unmitigated, this abuse can lead to degraded performance, inflated infrastructure costs, and service disruption for legitimate users.

The first line of defense is implementing robust authentication and authorization. While public endpoints are open by design, adding a simple API key system can deter casual abuse. For more sensitive operations or higher rate limits, consider using JWT tokens or OAuth. Services like Chainscore provide enterprise-grade RPC endpoints with built-in key management, request signing, and usage analytics, moving the authentication burden away from your application layer. This allows you to offer tiered access without maintaining complex user management systems internally.

Rate limiting is critical but must be applied intelligently. A naive global limit per IP address is easily bypassed. Implement multi-dimensional rate limiting based on: the requesting IP address, the method being called (e.g., eth_getLogs is more expensive than eth_blockNumber), and a user session or API key. Use a token bucket or sliding window algorithm. For Ethereum JSON-RPC, prioritize limiting complex calls like eth_getLogs and debug_traceTransaction. Tools like Nginx with the limit_req module or cloud provider WAFs can enforce these rules at the edge.
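The method-aware dimension can be modeled as a token bucket where expensive calls consume more tokens. This is a minimal sketch; the per-method costs, capacity, and refill rate below are illustrative assumptions, not canonical values.

```javascript
// Token-bucket limiter where expensive JSON-RPC methods consume more tokens.
// Costs are illustrative -- tune them to your node's observed resource usage.
const METHOD_COST = {
  eth_blockNumber: 1,
  eth_call: 5,
  eth_getLogs: 25,
  debug_traceTransaction: 50,
};

function createBucket({ capacity = 100, refillPerSec = 10 } = {}) {
  let tokens = capacity;
  let last = Date.now();

  return function allow(method) {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at capacity
    tokens = Math.min(capacity, tokens + ((now - last) / 1000) * refillPerSec);
    last = now;
    const cost = METHOD_COST[method] ?? 10; // unknown methods get a default cost
    if (tokens < cost) return false; // reject: caller should return HTTP 429
    tokens -= cost;
    return true;
  };
}
```

In practice you would keep one bucket per IP or API key, giving the multi-dimensional limit described above.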

Monitoring and analytics transform raw data into actionable security insights. You must track metrics like requests per second (RPS) per IP, error rates, and endpoint-specific consumption. Sudden spikes in eth_call requests might indicate a bot probing smart contracts. Set up alerts for anomalous behavior. Using a service like Chainscore provides detailed dashboards showing traffic sources, method popularity, and cost attribution, making it easier to identify and block bad actors before they impact service quality.

For production applications, relying solely on a single public endpoint is a risk. Implement fallback RPC providers and load balancing to ensure reliability. Architect your application to switch providers if latency spikes or error rates increase. Furthermore, consider using specialized providers for specific tasks: a primary provider for general queries and a separate, secured endpoint for sending transactions. This compartmentalization limits the blast radius of an attack on any single service and is a core principle of resilient Web3 infrastructure.

key-concepts-text
SECURITY GUIDE

Public RPC endpoints are critical infrastructure but are vulnerable to spam, DDoS attacks, and API key scraping. This guide outlines practical strategies to protect your endpoints.

Public RPC endpoints are essential for blockchain accessibility but present a significant attack surface. Common abuse vectors include spam requests that degrade performance, Distributed Denial-of-Service (DDoS) attacks aiming to take the service offline, and API key scraping where bots harvest your endpoint URL for unauthorized use in wallets or dApps. Unmitigated, this abuse leads to increased infrastructure costs, degraded service for legitimate users, and potential security breaches if the endpoint grants access to sensitive chain data or archival nodes.

The first line of defense is implementing robust rate limiting. Instead of a global limit, apply tiered rules based on request type and origin. For example, computationally expensive calls like eth_getLogs with large block ranges should have stricter limits than simple eth_blockNumber queries. Use middleware like NGINX or Cloudflare Workers to enforce these policies at the edge. A basic NGINX configuration might limit requests to 10 per second per IP for standard calls, returning a 429 Too Many Requests status when exceeded, protecting your backend nodes from being overwhelmed.

To combat scraping and unauthorized redistribution, consider using authenticated access. Issue API keys or JWT tokens, even for free tiers, to track usage per user. Services like Infura and Alchemy use this model. For public-facing endpoints, implement a proof-of-work (PoW) challenge for suspicious IPs. A light client library can require the requester to solve a simple cryptographic puzzle before processing heavy debug_* or trace_* calls. This makes large-scale abuse economically impractical for attackers while adding minimal overhead for legitimate, sporadic users.

Advanced mitigation involves request analysis and filtering. Deploy systems to detect and block patterns indicative of abuse, such as rapid-fire calls to eth_getBalance for many different addresses (a common wallet-scanner pattern) or repeated failed eth_call attempts. Tools like Prometheus and Grafana can monitor metrics like requests-per-second, error rates, and top call methods. Setting alerts for anomalous spikes allows for proactive response. Furthermore, consider using a blockchain-specific Web Application Firewall (WAF) that understands JSON-RPC payloads to filter malicious input before it reaches your node client.
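The wallet-scanner pattern mentioned above can be detected with a simple counter of distinct addresses per IP. A minimal sketch, assuming an in-memory store and an illustrative threshold; a real deployment would add window expiry and feed the result into your blocking layer.

```javascript
// Flags IPs that query balances for many distinct addresses -- a common
// wallet-scanner signature. The threshold is illustrative.
function createScanDetector({ maxDistinctAddresses = 50 } = {}) {
  const seen = new Map(); // ip -> Set of queried addresses

  return function record(ip, method, params) {
    if (method !== 'eth_getBalance') return false; // only track balance scans
    let addrs = seen.get(ip);
    if (!addrs) seen.set(ip, (addrs = new Set()));
    addrs.add(String(params[0]).toLowerCase());
    return addrs.size > maxDistinctAddresses; // true => candidate for blocking
  };
}
```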

Finally, architect your infrastructure for resilience. Separate public endpoints from your core infrastructure using a gateway layer. This gateway handles authentication, rate limiting, and caching for common requests like the latest block number. Use load balancers to distribute traffic across multiple node instances and automated scaling to handle legitimate surges. For Ethereum, caching responses for immutable data (e.g., historical blocks) at the CDN level can drastically reduce load. Regularly audit your access logs and revoke compromised API keys. A layered approach combining rate limiting, authentication, monitoring, and resilient architecture is essential for maintaining a secure and reliable public RPC service.
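The gateway-level caching idea can be reduced to a small TTL cache: immutable data (historical blocks) gets a long TTL, fast-moving data (latest block number) a short one. A sketch under those assumptions; the API and TTL values are illustrative.

```javascript
// Minimal TTL cache for idempotent RPC reads. Keyed by method + params;
// TTLs are chosen by the caller (long for immutable data, short for tips).
function createRpcCache() {
  const store = new Map(); // key -> { value, expires }

  return {
    get(key) {
      const entry = store.get(key);
      if (!entry || Date.now() > entry.expires) return undefined; // miss or stale
      return entry.value;
    },
    set(key, value, ttlMs) {
      store.set(key, { value, expires: Date.now() + ttlMs });
    },
  };
}
```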

THREAT ANALYSIS

Common RPC Abuse Vectors and Impact

A breakdown of prevalent attack methods targeting public RPC endpoints, their operational impact, and typical attacker objectives.

Attack Vector                   | Primary Impact        | Typical Scale       | Attacker Goal
Spam/DoS Requests               | Endpoint Degradation  | 10k+ RPM            | Disrupt Service
API Key Harvesting              | Credential Theft      | Sustained Scanning  | Free Quota Abuse
State Exhaustion                | Node Memory/CPU Crash | Complex Query Flood | Infrastructure Takeover
Gas Price Manipulation          | Network Spam          | Spike to 1000+ Gwei | Front-Running / MEV
Historical Data Scraping        | Bandwidth Saturation  | TB+ Data Extraction | Off-Chain Analysis
Smart Contract Simulation Abuse | High Compute Load     | 10k+ eth_call/min   | Exploit Testing
Sybil Attacks                   | Rate Limit Evasion    | 100s of IPs         | Sustain High-Volume Abuse

implement-rate-limiting
FOUNDATION

Step 1: Implement Granular Rate Limiting

Granular rate limiting is the first line of defense against API abuse, preventing a single user or bot from overwhelming your public RPC endpoint.

A public RPC endpoint without rate limiting is an open invitation for abuse. Malicious actors can launch Denial-of-Service (DoS) attacks, spam the network with low-value transactions, or scrape data at unsustainable speeds, degrading performance for all legitimate users. Implementing a blanket rate limit is a start, but it's insufficient. Granular rate limiting applies different rules based on the request type, user identity, or method being called, allowing you to protect critical functions while maintaining usability.

The core principle is to define rate limit policies per method or endpoint. For example, you should allow more frequent requests for eth_blockNumber (a lightweight call) than for eth_getLogs with a large block range (a computationally heavy call). Similarly, eth_sendRawTransaction should be strictly limited to prevent transaction spam. Tools like Nginx's limit_req module, Cloudflare Rate Limiting, or dedicated API gateways like Kong or Tyk can enforce these rules at the infrastructure layer before requests even reach your node software.

For more sophisticated control, implement rate limiting based on the requester's IP address or API key. While IP-based limits are common, they can be circumvented by distributed botnets. A more robust approach is to require free API keys for higher-tier access, allowing you to track usage per user. The JSON-RPC specification itself doesn't handle authentication, so this layer is typically added by your proxy or gateway. A basic policy might look like: 100 requests/minute per IP for read calls, 10 requests/minute for eth_estimateGas, and 2 requests/minute for eth_sendRawTransaction.

Here is a conceptual example of defining policies in an Nginx configuration file for a Geth node:

nginx
http {
    limit_req_zone $binary_remote_addr zone=rpc_read:10m rate=100r/m;
    limit_req_zone $binary_remote_addr zone=rpc_send:10m rate=2r/m;

    server {
        # Stock NGINX cannot branch on the JSON-RPC body at this phase
        # (limit_req zones are fixed per location, and $request_body is not
        # yet populated), so transaction submission is routed to a dedicated
        # path where the stricter zone applies.
        location /send {
            limit_req zone=rpc_send burst=2 nodelay;
            proxy_pass http://geth:8545/;
        }

        location / {
            limit_req zone=rpc_read burst=5 nodelay;
            proxy_pass http://geth:8545;
        }
    }
}

This configuration creates two "zones": one for general reading and a stricter one for sending transactions. Because stock NGINX cannot inspect the JSON-RPC request body when applying limit_req, the split is made per path: have your client or an upstream proxy send eth_sendRawTransaction calls to /send. Body-aware routing requires a scripting module such as njs or OpenResty/Lua.

Effective rate limiting requires monitoring and iteration. Use your gateway's logging or a metrics system like Prometheus to track rate limit hits (HTTP 429 responses), request volumes by method, and top-consuming IPs. This data will show you if your limits are too strict, blocking real users, or too loose, allowing abuse. Adjust your policies based on this real-world traffic. The goal is not to block users but to ensure fair resource allocation and service stability, making your endpoint resilient against the most common forms of volumetric abuse.

implement-authentication
SECURITY BEST PRACTICE

Step 2: Add Authentication and API Keys

Public RPC endpoints are vulnerable to spam and abuse. This guide explains how to secure your connection using API keys and authentication.

Public RPC endpoints are a critical but vulnerable infrastructure component. Without protection, they are subject to Sybil attacks, spam requests, and rate limit abuse from bots, which can degrade performance for legitimate users and increase operational costs. The primary defense is to move away from a completely open endpoint. Adding an authentication layer ensures that only authorized applications and users can access your node's services, allowing you to monitor usage, enforce quotas, and revoke access if needed.

The most common and effective method is to use an API key or JWT (JSON Web Token). When you make a request, you include this key in the HTTP header. The RPC provider validates the key before processing the request. For example, a request to an Ethereum node might include the header Authorization: Bearer YOUR_API_KEY. Most node client libraries, like ethers.js and web3.py, support custom headers. This simple step transforms your endpoint from a public good into a controlled, accountable service.
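The Authorization-header pattern can be shown with a plain fetch-style JSON-RPC request. A minimal sketch; the endpoint URL and key are placeholders, and the helper name is hypothetical.

```javascript
// Builds a JSON-RPC request carrying a bearer API key in the
// Authorization header. URL and key are placeholders.
function buildAuthedRpcRequest(url, apiKey, method, params = []) {
  return {
    url,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ jsonrpc: '2.0', id: 1, method, params }),
    },
  };
}

// Usage: const { url, options } = buildAuthedRpcRequest(RPC_URL, KEY, 'eth_blockNumber');
//        const result = await fetch(url, options).then(r => r.json());
```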

Implementation typically involves generating a secure key through your node provider's dashboard (e.g., Alchemy, Infura, Chainstack) or your own proxy server. For a custom setup, you can use a reverse proxy like NGINX or middleware in a service like Cloudflare Workers to inspect incoming requests. The code snippet below shows a basic ethers.js provider configured with an API key:

javascript
import { ethers } from 'ethers';
const provider = new ethers.JsonRpcProvider('https://eth-mainnet.g.alchemy.com/v2/your-api-key');

Note that the key is embedded in the URL for many providers, acting as both the endpoint identifier and authentication token.

Beyond simple keys, consider implementing request signing for higher security. This involves cryptographically signing each request with a private key, which the RPC server verifies with a corresponding public key. This pattern, used by services like AWS, prevents key interception and reuse. For team environments, use a secret management system (e.g., HashiCorp Vault, AWS Secrets Manager) to store and rotate keys instead of hardcoding them in application source files, which is a major security risk.

Finally, authentication enables detailed usage analytics and rate limiting. You can track which API key is making requests, set daily request caps, and throttle traffic based on key-tier (e.g., free vs. paid). This allows you to offer tiered services, identify malfunctioning applications, and cut off bad actors without impacting other users. Always pair authentication with HTTPS to encrypt the key in transit, and regularly audit and rotate keys as part of your security hygiene.

configure-node-parameters
NODE HARDENING

Step 3: Configure Node Client Parameters

Public RPC endpoints are critical infrastructure but are often exploited for spam, arbitrage bots, and denial-of-service attacks. This guide details practical steps to secure your node.

A public RPC endpoint allows anyone to submit read and write requests to your blockchain node. While essential for network accessibility, an unprotected endpoint is a major vulnerability. Common forms of abuse include spam transactions that fill mempools, high-frequency API calls from trading bots that degrade performance, and expensive JSON-RPC method calls (like eth_getLogs on large ranges) that can cripple node resources. The goal of hardening is to serve legitimate users while filtering out malicious or resource-intensive traffic.

The first line of defense is implementing rate limiting. Using a reverse proxy like Nginx or a dedicated middleware, you can restrict requests per IP address. For example, an Nginx configuration can limit connections to 10 requests per second from a single IP for the / location, returning a 429 Too Many Requests status for violators. It's crucial to apply stricter limits to computationally expensive methods. You should also whitelist and blacklist IPs at the proxy level to block known bad actors or allow trusted services.

Next, restrict access to specific JSON-RPC methods. By default, many clients enable the eth, net, and web3 API namespaces. You should disable dangerous or unnecessary methods. For a Geth node, use the --http.api flag to expose only what's needed, like eth,net,web3. Always disable the personal and admin namespaces on public endpoints, as they allow account and node management. For methods that remain public, consider implementing request timeouts and maximum result size limits (e.g., for eth_getLogs) to prevent a single query from consuming all resources.
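At the proxy layer, the same namespace restriction can be enforced as a method allowlist before requests ever reach the node. A minimal sketch; the prefix list mirrors the eth/net/web3 guidance above, and the extra block of eth_sendTransaction (which requires unlocked accounts) is an assumption you should adjust per deployment.

```javascript
// Rejects JSON-RPC requests whose method is outside the allowed namespaces.
const ALLOWED_PREFIXES = ['eth_', 'net_', 'web3_'];
// eth_sendTransaction needs an unlocked account on the node -- never public.
const BLOCKED_METHODS = new Set(['eth_sendTransaction']);

function isMethodAllowed(method) {
  if (BLOCKED_METHODS.has(method)) return false;
  return ALLOWED_PREFIXES.some((prefix) => method.startsWith(prefix));
}

// In a proxy handler: if (!isMethodAllowed(req.body.method)) return res.status(403).end();
```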

For advanced protection, deploy a specialized RPC gateway like Chainscore Sentinel or similar services. These gateways act as a managed proxy, offering features like automated bot detection, geographic filtering, API key authentication, and cost-based rate limiting that charges users for heavy computational requests. They provide analytics dashboards to monitor traffic patterns and identify abuse in real-time, which is difficult to achieve with basic proxy rules. This shifts the operational burden from your node infrastructure to a dedicated security layer.

Finally, monitor and log all RPC traffic. Use your node's logging (e.g., Geth's --log.vmodule rpc=5) or proxy access logs to track request volumes, methods called, and source IPs. Set up alerts for sudden traffic spikes or repeated errors. Combine this with system monitoring for CPU, memory, and disk I/O. Regular review of these logs will help you fine-tune your rate limits and filtering rules, adapting to new attack vectors. A hardened configuration is not a one-time setup but requires ongoing observation and adjustment.

COMPARISON

Node Client Security Configuration Flags

Key security flags for Geth and Erigon clients to restrict public RPC endpoint access.

Configuration Flag    | Geth                                | Erigon                              | Purpose
HTTP RPC Port         | --http.port 8545                    | --http.port 8545                    | Defines the listening port for HTTP JSON-RPC
Allowed HTTP Origins  | --http.corsdomain "<origin>"        | --http.corsdomain "<origin>"        | Restricts CORS to specific domains
RPC API Modules       | --http.api eth,net,web3             | --http.api eth,net,web3             | Exposes only specified APIs; omit debug, txpool
RPC Gas Cap           | --rpc.gascap 50000000               | --rpc.gascap 50000000               | Maximum gas limit for eth_call/estimateGas
RPC Request Timeout   | --rpc.evmtimeout 5s                 | --rpc.evmtimeout 5s                 | Timeout for eth_call execution
Max HTTP Request Size | --rpc.maxrequestcontentlength 10240 | --rpc.maxrequestcontentlength 10240 | Limits the maximum request body size
Personal Namespace    | omit personal from --http.api       | omit personal from --http.api       | Keeps account management APIs (personal_*) disabled
Admin Namespace       | omit admin from --http.api          | omit admin from --http.api          | Keeps node control APIs (admin_*) disabled

monitor-and-alert
OPERATIONAL SECURITY

Step 4: Set Up Monitoring and Alerting

Proactive monitoring is essential to detect and respond to abuse of your public RPC endpoints before it impacts service quality or leads to financial loss.

Effective monitoring begins with defining key metrics that signal potential abuse. Track request volume per IP, error rates, and gas consumption patterns. A sudden, sustained spike in requests from a single IP or a small cluster of IPs is a primary indicator of a scraping bot or a denial-of-service attempt. Similarly, monitor for a high frequency of eth_estimateGas or eth_call requests, which are computationally expensive and often abused for tasks like frontrunning or spam. Tools like Prometheus for metric collection and Grafana for visualization are industry standards for this purpose.
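The per-IP spike check can be prototyped with a sliding window of request timestamps. A sketch only (the window length and threshold are illustrative, and an in-memory Map stands in for Prometheus counters); it shows the shape of the "requests per IP over a window" metric before you wire it into real tooling.

```javascript
// Tracks request timestamps per IP over a sliding window and flags IPs
// whose rate exceeds a threshold. Values are illustrative.
function createSpikeDetector({ windowMs = 60_000, threshold = 1000 } = {}) {
  const log = new Map(); // ip -> array of request timestamps

  return function record(ip, now = Date.now()) {
    const cutoff = now - windowMs;
    // Drop timestamps that have aged out of the window, then add this one
    const recent = (log.get(ip) ?? []).filter((t) => t > cutoff);
    recent.push(now);
    log.set(ip, recent);
    return recent.length > threshold; // true => raise an alert
  };
}
```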

Setting up intelligent alerting transforms raw metrics into actionable insights. Configure alerts based on thresholds, such as "IP X makes >10,000 requests in 5 minutes" or "global error rate exceeds 5%." Use tools like Alertmanager (paired with Prometheus) or PagerDuty to route these alerts to your team. For more sophisticated detection, implement anomaly detection algorithms that learn your endpoint's normal traffic patterns and flag deviations. This can catch novel attack vectors that simple threshold-based rules might miss.

Beyond infrastructure metrics, implement application-level logging to understand the intent behind requests. Log details like the JSON-RPC method called, the parameters (e.g., contract addresses for eth_call), and the requesting IP. This log data is invaluable for forensic analysis after an incident. Centralize these logs using the ELK Stack (Elasticsearch, Logstash, Kibana) or a managed service like Datadog. Correlating log events with metric spikes allows you to reconstruct an attack and refine your mitigation rules.

For automated response, integrate your monitoring stack with mitigation systems. When an alert fires for a malicious IP, you can automatically trigger an API call to your rate limiter (e.g., Nginx, Cloudflare) or firewall to temporarily block the offending address. This creates a feedback loop where detection leads directly to containment. However, ensure these automated rules have safety mechanisms, such as short block durations and manual oversight, to avoid accidentally blocking legitimate users during traffic surges.
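The "short block durations" safety mechanism can be modeled as a blocklist whose entries expire automatically. A minimal in-memory sketch; the default duration is illustrative, and a real setup would push these entries to the rate limiter or firewall rather than keep them in process memory.

```javascript
// Temporary blocklist with automatic expiry, so an automated block cannot
// permanently lock out an IP. Default duration is illustrative.
function createBlocklist({ blockMs = 5 * 60_000 } = {}) {
  const blocked = new Map(); // ip -> timestamp when the block lifts

  return {
    block(ip, now = Date.now()) {
      blocked.set(ip, now + blockMs);
    },
    isBlocked(ip, now = Date.now()) {
      const until = blocked.get(ip);
      if (until === undefined) return false;
      if (now >= until) {
        blocked.delete(ip); // expired -- lift the block automatically
        return false;
      }
      return true;
    },
  };
}
```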

Finally, establish a clear incident response playbook. Define roles, communication channels, and escalation procedures for when a major abuse event is detected. The playbook should answer: Who investigates the alert? How is the team notified? What are the steps to deploy a temporary block or adjust rate limits? Regularly review alert effectiveness and false-positive rates to keep your monitoring system tuned and reliable, ensuring your RPC endpoint remains robust and available for legitimate users.

tools-and-middleware
DEVELOPER RESOURCES

Tools and Middleware for RPC Protection

Public RPC endpoints are vulnerable to spam and abuse. These tools and concepts help developers secure their infrastructure and manage traffic.

Adopt a Multi-Layered Security Strategy

No single tool is sufficient. A defense-in-depth approach combines multiple techniques.

  1. Edge Layer (Cloud/WAF): DDoS protection and IP reputation filtering (e.g., Cloudflare, AWS Shield).
  2. Proxy/Middleware Layer: Rate limiting, method filtering, and request validation.
  3. Authentication Layer: API keys for trusted users and services.
  4. Node Client Layer: P2P configuration and client-specific security settings.
  5. Monitoring Layer: Real-time alerts and log analysis for all layers.

This strategy ensures that if one layer is bypassed, others provide continued protection.

PUBLIC RPC SECURITY

Frequently Asked Questions

Common questions and solutions for developers dealing with rate limits, downtime, and security risks when using public RPC endpoints.

Public RPC endpoints are shared resources with strict rate limits (e.g., 10-100 requests per second) to prevent abuse and manage infrastructure costs. When traffic spikes from popular dApps or bots, these limits are quickly exhausted, causing 429 Too Many Requests errors or timeouts.

Primary causes include:

  • High-frequency polling: Frontends or scripts that poll for new blocks or account states every second.
  • Batch request abuse: Sending large eth_getLogs queries over wide block ranges.
  • MEV bot traffic: Searchers and arbitrage bots flooding the endpoint with transaction simulations.

To diagnose, check your error logs for HTTP status codes 429, 503, or ETIMEDOUT socket errors. The cause is almost always transient rate limiting triggered by client behavior rather than permanent provider downtime.
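On the client side, the standard remedy for 429 responses is retry with exponential backoff. A sketch under stated assumptions: `rpcCall` is any async function, the error is assumed to carry a `status` property, and the delays are illustrative.

```javascript
// Retries an async JSON-RPC call with exponential backoff when the
// provider signals rate limiting (HTTP 429). Delays are illustrative.
async function withBackoff(rpcCall, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await rpcCall();
    } catch (err) {
      const rateLimited = err && err.status === 429;
      if (!rateLimited || attempt >= retries) throw err; // give up
      const delay = baseDelayMs * 2 ** attempt; // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Pair this with reduced polling frequency; backoff alone only smooths over spikes, it does not fix a client that polls every second.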

conclusion
SECURING YOUR DAPP

Conclusion and Next Steps

Protecting your application from RPC abuse is an ongoing process that requires a multi-layered security strategy.

Mitigating abuse of public RPC endpoints is not a one-time fix but a continuous security posture. The strategies discussed—from implementing rate limiting and authentication to using private endpoints and load balancers—form a defense-in-depth approach. The most effective solution is often a combination of these methods tailored to your application's specific traffic patterns and threat model. For high-value or high-traffic dApps, relying solely on free, public RPC providers is a significant operational risk.

Your immediate next steps should involve auditing your current RPC usage. Use your provider's analytics or your gateway's access logs to identify unusual patterns, such as spikes from single IPs or repetitive calls to expensive operations like eth_getLogs. Tools like Chainscore provide detailed metrics on request volume, method distribution, and error rates, giving you the visibility needed to detect and respond to abuse before it impacts service quality or costs.

For development, consider using services like Alchemy's Supernode, Infura's Dedicated Plans, or QuickNode's Elastic Endpoints, which offer enhanced reliability and abuse protection over their free tiers. In production, architect your backend to use a fallback RPC provider strategy. This involves having a primary private endpoint and one or more secondary providers (which could be another paid service or a self-hosted node) to switch to if the primary is rate-limited or fails. Libraries like ethers.js (FallbackProvider) and viem (fallback transport) support this pattern natively.
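Stripped of library specifics, the fallback strategy is just ordered failover. A minimal sketch where each provider is modeled as an async function taking (method, params); the function name and error handling are illustrative, and real implementations add health checks and latency-based ranking.

```javascript
// Tries providers in priority order, falling back on failure.
// Each provider is any async function taking (method, params).
async function callWithFailover(providers, method, params) {
  let lastError;
  for (const provider of providers) {
    try {
      return await provider(method, params);
    } catch (err) {
      lastError = err; // remember the failure and try the next provider
    }
  }
  throw lastError ?? new Error('no providers configured');
}
```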

For teams with the resources, self-hosting an Ethereum node client like Geth, Erigon, or Nethermind provides the ultimate control. While this introduces operational overhead, it eliminates dependency on third-party providers and associated usage limits. Services like Chainstack, Blockdaemon, or AWS Managed Blockchain can reduce this burden by offering managed node hosting. Remember to configure your self-hosted node's RPC API securely, disabling public access to sensitive methods.

Finally, stay informed about evolving best practices. The Ethereum ecosystem continuously develops new solutions, such as JSON-RPC batch request limits and more sophisticated DoS protection modules for node clients. Engage with provider documentation, follow security announcements, and consider implementing a Web3 gateway or API management layer for enterprise-grade traffic shaping, caching, and security policies. Proactive management of your RPC layer is essential for building robust, user-friendly decentralized applications.