
Launching a Compliance Reporting Dashboard

A technical guide for developers building an internal dashboard to aggregate KYC, transaction monitoring, and sanctions data for regulatory reporting.
Chainscore © 2026
INTRODUCTION

Launching a Compliance Reporting Dashboard

A guide to building a dashboard for monitoring and reporting on-chain compliance requirements.

A compliance reporting dashboard is a centralized interface for monitoring wallet activity against regulatory frameworks like the Travel Rule (FATF Recommendation 16), Anti-Money Laundering (AML) directives, and sanctions screening. In Web3, this involves programmatically querying blockchain data—transaction histories, counterparty addresses, and asset flows—to generate auditable reports. Unlike traditional finance, the pseudonymous and cross-chain nature of crypto assets requires tools that can aggregate data from multiple sources, including block explorers, indexers, and intelligence platforms like Chainalysis or TRM Labs.

The core technical challenge is data ingestion and normalization. Your dashboard's backend must connect to various data providers: RPC nodes for real-time state, The Graph for indexed historical data, and specialized AML API endpoints. You'll need to structure this data around key entities: wallets, transactions, counterparties, and risk flags. A common architecture uses a PostgreSQL database with tables for raw logs and a separate schema for enriched compliance data, where addresses are tagged with risk scores and jurisdiction information.

For developers, implementing a basic dashboard involves several steps. First, set up listeners for on-chain events using Ethers.js or Viem. For example, to track ERC-20 transfers for a list of monitored addresses, you would listen for the Transfer event. Second, enrich each transaction by querying a service like Chainabuse for address reputation or using Elliptic's API to check for sanctions exposure. Finally, you must calculate and expose key metrics, such as total volume from high-risk jurisdictions or the number of transactions with unverified counterparties, through a React or Vue.js frontend.
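
As a minimal sketch of the watchlist step, assuming Transfer events have already been decoded upstream by Ethers.js or Viem (the `TransferEvent` shape, the addresses, and the `MONITORED` set below are illustrative, not a standard):

```typescript
// Hypothetical shape of a decoded ERC-20 Transfer event, as produced
// by an Ethers.js or Viem log decoder upstream of this code.
interface TransferEvent {
  token: string;
  from: string;
  to: string;
  value: bigint; // raw token units
  txHash: string;
}

// Watchlist of monitored addresses, normalized to lowercase.
const MONITORED = new Set(
  ["0x1111111111111111111111111111111111111111"].map((a) => a.toLowerCase())
);

// Keep only transfers that touch a monitored address; these are the
// candidates for enrichment (risk scoring, sanctions checks).
function filterMonitored(events: TransferEvent[]): TransferEvent[] {
  return events.filter(
    (e) =>
      MONITORED.has(e.from.toLowerCase()) || MONITORED.has(e.to.toLowerCase())
  );
}
```

In a live system this filter sits inside the event handler, so only relevant transfers are sent to the enrichment APIs.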

Effective dashboards provide actionable intelligence, not just raw data. This means implementing alerting rules (e.g., flag transactions over $10,000 to wallets in sanctioned countries) and report generation in standard formats like CSV or PDF. Using a framework like Apache Superset or Metabase can accelerate development for visualization, but custom code is often needed for blockchain-specific logic. The goal is to create a system where compliance officers can audit activity, generate reports for regulators like FinCEN, and demonstrate a proactive Risk-Based Approach (RBA) to oversight.
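
The alerting rule described above can be sketched as a pure function over an enriched transaction; the jurisdiction codes, field names, and the $10,000 threshold are illustrative assumptions:

```typescript
// Minimal enriched view of a transaction after AML enrichment.
interface EnrichedTx {
  usdValue: number;
  counterpartyJurisdiction: string; // ISO country code from enrichment
}

const SANCTIONED = new Set(["IR", "KP"]); // illustrative list only
const HIGH_VALUE_USD = 10_000;

// Return the alert labels that apply to a transaction, mirroring the
// "flag transactions over $10,000 to sanctioned countries" rule.
function evaluateAlerts(tx: EnrichedTx): string[] {
  const alerts: string[] = [];
  if (tx.usdValue > HIGH_VALUE_USD) alerts.push("HIGH_VALUE");
  if (SANCTIONED.has(tx.counterpartyJurisdiction)) {
    alerts.push("SANCTIONED_JURISDICTION");
  }
  return alerts;
}
```

The returned labels can feed both the dashboard's alert table and the CSV/PDF report generator.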

When launching, start with a focused Minimum Viable Product (MVP). Monitor a single chain (e.g., Ethereum Mainnet) for a specific asset standard (ERC-20). Define clear Key Risk Indicators (KRIs), such as the percentage of transactions involving Virtual Asset Service Providers (VASPs) without Travel Rule compliance. Use open-source tools like Ethereum ETL for initial data pipelines. This phased approach allows you to validate the data model and reporting logic before scaling to support multiple networks and more complex regulatory requirements, ensuring the dashboard is both useful and maintainable.

LAUNCHING A COMPLIANCE REPORTING DASHBOARD

Prerequisites and System Architecture

Before deploying a dashboard, you need the right infrastructure and data sources. This guide covers the essential components and their interactions.

A compliance dashboard is a data pipeline that aggregates, processes, and visualizes on-chain activity. The core prerequisites are a reliable blockchain data indexer and a secure backend. For Ethereum and EVM chains, services like The Graph for subgraphs or Chainscore's API for pre-built compliance metrics are foundational. You'll also need a database (PostgreSQL or TimescaleDB) for storing processed data and a frontend framework like React or Next.js for the user interface. Ensure your development environment has Node.js v18+ and a package manager like npm or yarn installed.

The system architecture follows a modular design. The data ingestion layer pulls raw transaction and log data from RPC nodes or indexed services. This data flows into a processing engine, often built with Node.js or Python, where business logic applies compliance rules (e.g., tagging transactions from sanctioned addresses using the OFAC SDN list). Processed results are stored in the database. The API layer (built with Express.js or FastAPI) serves this data to the frontend dashboard, which renders charts, tables, and alerts. Each module should be containerized using Docker for consistent deployment.

Key architectural decisions involve data freshness and scalability. For real-time alerts, you need a WebSocket connection to your data source or a service that polls for new blocks frequently. Batch processing is sufficient for daily reports. Consider using a message queue like RabbitMQ or Apache Kafka to decouple data ingestion from processing, ensuring the system can handle peak loads during market volatility. All sensitive data, such as API keys and database credentials, must be managed via environment variables or a secrets manager, never hardcoded.

Your dashboard's effectiveness depends on data quality. Integrate multiple sources for robustness: use a primary RPC provider (Alchemy, Infura), a secondary for redundancy, and an indexing service for complex queries. Implement data validation checks to flag missing blocks or inconsistent token prices from your oracle (Chainlink or Pyth). The backend should expose metrics endpoints for monitoring pipeline health, tracking block processing latency, and error rates. This observability is crucial for maintaining a reliable compliance tool.
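
One such validation check, detecting gaps in the ingested block numbers so they can be backfilled, might look like this (a sketch; the function name and range representation are hypothetical):

```typescript
// Given the block numbers the pipeline has ingested (in any order),
// return the inclusive [start, end] ranges missing from [from, to],
// so a backfill job can re-fetch them.
function findMissingRanges(
  ingested: number[],
  from: number,
  to: number
): Array<[number, number]> {
  const seen = new Set(ingested);
  const gaps: Array<[number, number]> = [];
  let start: number | null = null;
  for (let b = from; b <= to; b++) {
    if (!seen.has(b)) {
      if (start === null) start = b; // gap begins
    } else if (start !== null) {
      gaps.push([start, b - 1]); // gap ends
      start = null;
    }
  }
  if (start !== null) gaps.push([start, to]); // trailing gap
  return gaps;
}
```

Exposing the gap count on a metrics endpoint gives you the "pipeline health" signal described above.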

Finally, plan the deployment environment. For development, local Docker Compose setups are ideal. For production, use a cloud provider (AWS, GCP, Azure) with managed services for databases and compute. Implement CI/CD pipelines using GitHub Actions or GitLab CI to automate testing and deployment. Security is paramount: enable HTTPS, use API rate limiting, and implement authentication (JWT tokens or OAuth) for dashboard access. The architecture should be documented using diagrams (e.g., with Mermaid.js) in your repository's README.

ARCHITECTURE

Step 1: Building the Data Aggregation Backend

The foundation of any compliance dashboard is a robust backend that aggregates, normalizes, and structures on-chain data from multiple sources. This step focuses on designing the data ingestion pipeline.

A compliance reporting dashboard requires raw data from blockchains, which you can source via RPC nodes or specialized data providers. For Ethereum, you might use Alchemy or Infura; for Solana, Helius or Triton. The first architectural decision is choosing between a full node you operate (high control, high cost) and a managed node service (lower overhead, potential vendor lock-in). Your backend must connect to these nodes to fetch transaction logs, event emissions, and state data for the addresses and protocols you monitor.

Once data is ingested, it must be normalized into a consistent schema. A transaction on Ethereum uses logs, while Solana uses instructions and innerInstructions. Your aggregation layer must parse these differing structures into a unified format. For example, you might create a Transaction model with fields for chain_id, block_number, from_address, to_address, value, and a standardized array of events. This allows your analysis engine to process data from Ethereum, Polygon, and Arbitrum with the same logic, which is critical for cross-chain compliance.
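
A sketch of this normalization, assuming a hypothetical raw EVM shape from your indexer (all field names below are illustrative, not a fixed standard):

```typescript
// Unified cross-chain transaction model described above.
interface UnifiedTx {
  chainId: number;
  blockNumber: number;
  fromAddress: string;
  toAddress: string;
  value: string; // decimal string to avoid BigInt/JSON issues
  events: { name: string; args: Record<string, string> }[];
}

// Hypothetical raw EVM transaction shape as returned by an indexer.
interface RawEvmTx {
  chainId: number;
  blockNumber: number;
  from: string;
  to: string;
  value: bigint;
  logs: { event: string; params: Record<string, string> }[];
}

// Normalize an EVM transaction into the unified model; a Solana
// adapter would map instructions/innerInstructions the same way.
function normalizeEvm(tx: RawEvmTx): UnifiedTx {
  return {
    chainId: tx.chainId,
    blockNumber: tx.blockNumber,
    fromAddress: tx.from.toLowerCase(),
    toAddress: tx.to.toLowerCase(),
    value: tx.value.toString(),
    events: tx.logs.map((l) => ({ name: l.event, args: l.params })),
  };
}
```

With one adapter per chain, everything downstream of this point operates on `UnifiedTx` only.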

For performance and cost efficiency, implement event-driven ingestion rather than polling. Use WebSocket connections to nodes to listen for new blocks in real-time. When a block is finalized, your service should fetch it, extract relevant transactions based on a watchlist of addresses, and queue them for processing. This approach minimizes API calls and reduces latency. Tools like Apache Kafka or cloud-native message queues (AWS SQS, Google Pub/Sub) are ideal for decoupling the ingestion and processing stages, ensuring the system can handle volume spikes.
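
The decoupling can be illustrated with an in-memory stand-in for the queue; in production this role is played by Kafka, SQS, or Pub/Sub, and the `SimpleQueue` and `BlockMsg` names here are hypothetical:

```typescript
type Handler<T> = (msg: T) => void;

// In-memory stand-in for a message queue, used only to show the
// publish/subscribe boundary between ingestion and processing.
class SimpleQueue<T> {
  private handlers: Handler<T>[] = [];
  subscribe(h: Handler<T>): void {
    this.handlers.push(h);
  }
  publish(msg: T): void {
    this.handlers.forEach((h) => h(msg));
  }
}

interface BlockMsg {
  blockNumber: number;
  txHashes: string[];
}

const queue = new SimpleQueue<BlockMsg>();
const processed: number[] = [];

// Processing stage: consumes block messages independently of ingestion.
queue.subscribe((b) => processed.push(b.blockNumber));

// Ingestion stage: the WebSocket block listener publishes here.
queue.publish({ blockNumber: 19_000_000, txHashes: [] });
```

Because the two stages only share the message contract, either side can be scaled or restarted without touching the other.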

The processed data should be stored in a query-optimized database. Time-series databases like TimescaleDB (PostgreSQL extension) or InfluxDB are excellent for blockchain data, which is inherently chronological. For complex relational queries across entities (e.g., "show all transactions between these two DAO treasuries"), a traditional PostgreSQL database may be preferable. Index your tables on block_timestamp, from_address, and to_address to ensure sub-second query performance for your dashboard's filters and time-range selectors.

Finally, implement data validation and integrity checks. Cross-reference hash calculations, verify receipt statuses, and add sanity checks for token amounts. Log all ingestion errors and set up alerts for prolonged data stream interruptions. The output of this backend is a clean, reliable, and queryable data lake that serves as the single source of truth for all subsequent compliance analysis and reporting modules.

DATA LAYER

Core Data Schema for Compliance Reporting

Comparison of data schema approaches for structuring on-chain and off-chain compliance data.

| Data Field & Type          | Relational Database (SQL) | Time-Series Database | Decentralized Storage (IPFS/Arweave) |
|----------------------------|---------------------------|----------------------|--------------------------------------|
| Transaction Hash (bytes32) |                           |                      |                                      |
| Wallet Address (address)   |                           |                      |                                      |
| Block Timestamp (uint256)  |                           |                      |                                      |
| Token Transfers (array)    |                           |                      | CID Reference                        |
| Smart Contract Logs (JSON) |                           |                      | CID Reference                        |
| Compliance Score (float)   |                           |                      |                                      |
| AML Risk Flag (boolean)    |                           |                      |                                      |
| Data Retention Policy      | 7 years                   | 30 days              | Permanent                            |
| Query Latency for 1M rows  | < 100 ms                  | < 50 ms              | 2-5 sec                              |
| Schema Modification        | Migration Required        | Flexible             | Immutable                            |

DASHBOARD IMPLEMENTATION

Step 2: Visualizing Key Risk Indicators (KRIs)

This guide details how to build a dashboard for monitoring on-chain compliance risks using real-time data from protocols like Aave and Compound.

A Key Risk Indicator (KRI) dashboard transforms raw blockchain data into actionable compliance insights. For DeFi protocols, common KRIs include Total Value Locked (TVL) concentration, borrow utilization rates, collateralization ratios, and governance proposal activity. Visualizing these metrics allows compliance teams to monitor protocol health, identify concentration risks, and detect anomalous behavior that may signal regulatory or operational issues. Tools like Dune Analytics or Flipside Crypto are often used to query and visualize this data.

To build a dashboard, you first need to define and query the specific metrics. For example, monitoring collateral risk on Aave involves tracking the Weighted Average Health Factor across all users. A health factor below 1.0 indicates undercollateralized positions eligible for liquidation. You can query this using Dune's SQL interface on the Aave v3 Ethereum dataset. The query would calculate the total borrowed amount, total collateral value, and derive the health factor for specific asset pools like USDC or WETH.

Here is a simplified SQL example for querying health factor data from Aave:

```sql
SELECT
    DATE_TRUNC('day', evt_block_time) AS day,
    AVG(
        (total_collateral_usd / NULLIF(total_debt_usd, 0))
    ) AS avg_health_factor
FROM aave_v3_ethereum.Pool_evt_ReserveDataUpdated
WHERE reserve = '0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48' -- USDC contract
GROUP BY 1
ORDER BY 1 DESC;
```

This query provides a daily average health factor for the USDC pool, which can be charted to spot trends toward increased risk.

After querying the data, the next step is visualization. Effective dashboards use a combination of time-series charts (for trends), gauge charts (for threshold alerts), and data tables (for detailed inspection). Set clear visual thresholds: for example, color-code a health factor gauge red below 1.1, yellow between 1.1 and 1.5, and green above 1.5. This allows for immediate risk assessment. Dashboards should be updated in near real-time, leveraging the subgraph or indexer's refresh rate, to ensure compliance monitoring is proactive.
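
The color thresholds above map directly to a small helper (a sketch; the boundaries simply follow the example values in this section):

```typescript
// Map a health factor to a gauge color: red below 1.1, yellow between
// 1.1 and 1.5, green above 1.5, per the thresholds described above.
function healthFactorColor(hf: number): "red" | "yellow" | "green" {
  if (hf < 1.1) return "red";
  if (hf <= 1.5) return "yellow";
  return "green";
}
```

The same function can drive both the gauge widget and row highlighting in the detail table, keeping the two views consistent.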

Finally, integrate alerts based on KRI thresholds. Using a platform like Dune, you can set up email or Slack alerts triggered when a metric breaches a defined limit. For instance, an alert can fire if the borrow utilization rate for a stablecoin pool on Compound exceeds 85%, indicating high demand and potential liquidity stress. Automating these alerts ensures that compliance and risk teams are notified of critical issues without manual dashboard monitoring, enabling faster response to emerging on-chain risks.

BUILDING THE DASHBOARD

Step 3: Automating Regulatory Report Generation

This guide explains how to build an automated dashboard that generates compliance reports for blockchain transactions, focusing on data aggregation, rule application, and secure document creation.

A compliance reporting dashboard automates the collection of on-chain and off-chain data to generate required regulatory documents like Transaction Monitoring Reports (TMR) and Suspicious Activity Reports (SAR). The core architecture involves three layers: a data ingestion layer that pulls raw transaction data from node RPCs and indexed services like The Graph, a processing engine that applies compliance rules (e.g., tagging transactions over $10,000 or involving sanctioned addresses), and a reporting interface that formats the results into PDFs or CSVs for auditors. Automation here replaces error-prone manual spreadsheet tracking.

The first technical step is setting up the data pipeline. You'll need to connect to data sources for the chains your protocol operates on. For Ethereum, this might involve using an Etherscan API for confirmed transactions and an Alchemy Enhanced API for internal traces. For Solana, the Helius or Triton RPC services provide comprehensive transaction decoding. A robust implementation uses a message queue (like RabbitMQ or AWS SQS) to handle the stream of incoming transactions, ensuring the system can scale during high network activity. Each transaction object should be enriched with wallet labels from services like Chainalysis or TRM Labs before moving to the rules engine.

The rules engine is where compliance logic is executed. This is typically a separate service that consumes enriched transactions. Rules are defined in a human-readable format (like YAML) or via a UI for compliance officers. For example, a rule might flag any transaction where transaction.value > 10000 AND destination_wallet.risk_score > 75. The engine evaluates each transaction against all active rules, appending metadata tags (e.g., "flag": "HIGH_VALUE_SUSPECT"). These tagged transactions are then stored in a query-optimized database like TimescaleDB or ClickHouse, which is essential for generating time-bound reports.
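
A minimal sketch of such a rules engine, with rules expressed as plain data (in practice loaded from the YAML files mentioned above); the `Rule` fields and flag names are illustrative assumptions:

```typescript
// Declarative rule a compliance officer could author.
interface Rule {
  flag: string;
  minValueUsd?: number;
  minRiskScore?: number;
}

// Minimal view of an enriched transaction for rule evaluation.
interface TxView {
  valueUsd: number;
  destRiskScore: number;
}

// A rule matches only if every condition it defines holds, mirroring
// "transaction.value > 10000 AND destination_wallet.risk_score > 75".
function applyRules(tx: TxView, rules: Rule[]): string[] {
  return rules
    .filter(
      (r) =>
        (r.minValueUsd === undefined || tx.valueUsd > r.minValueUsd) &&
        (r.minRiskScore === undefined || tx.destRiskScore > r.minRiskScore)
    )
    .map((r) => r.flag);
}

const activeRules: Rule[] = [
  { flag: "HIGH_VALUE_SUSPECT", minValueUsd: 10_000, minRiskScore: 75 },
];
```

The returned flags are the metadata tags stored alongside the transaction in the reporting database.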

Finally, the reporting module queries the tagged database to generate official documents. A common approach is to use a templating engine like Jinja2 or PDFKit to populate a standard report template with the aggregated data. For instance, a monthly TMR might query for all HIGH_VALUE transactions from the past 30 days, sum volumes by jurisdiction, and list originating VASPs. The generated report should include a cryptographic hash (like a SHA-256 checksum) of its contents, which is then anchored on-chain—perhaps via a low-cost transaction on a chain like Polygon—to provide an immutable audit trail proving the report has not been altered post-generation.
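
The checksum step can be sketched with Node's built-in crypto module; anchoring the resulting hex digest on-chain is left to your chosen chain's client:

```typescript
import { createHash } from "node:crypto";

// SHA-256 checksum of a generated report's contents. Anchoring this
// digest on-chain later proves the report was not altered.
function reportChecksum(reportContents: string): string {
  return createHash("sha256").update(reportContents, "utf8").digest("hex");
}
```

Store the digest next to the report record so auditors can recompute and compare it against the on-chain anchor.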

Key operational considerations include scheduling (using cron jobs or Apache Airflow DAGs to run reports daily/weekly/monthly), access control (ensuring only authorized personnel can generate or view reports via role-based permissions), and data retention policies to comply with regulations like GDPR or FINRA's 6-year rule. Logging all report generation events with user IDs and timestamps is critical for internal audits. The final output is a system that turns raw blockchain data into actionable, regulator-ready documentation with minimal manual intervention.

DATA AGGREGATION

Step 4: Implementing an Audit Findings Tracker

A centralized tracker is essential for managing the lifecycle of security vulnerabilities from discovery to resolution. This step details how to build a structured database for audit findings.

The core of a compliance dashboard is a structured audit findings tracker. This is not a simple spreadsheet; it's a relational database that links vulnerabilities to specific smart contracts, assigns severity, tracks remediation status, and logs evidence. A typical schema includes tables for Findings, Contracts, RemediationActions, and AuditReports. Each finding should have a unique ID, a standardized status (e.g., Open, In Progress, Resolved, Acknowledged Risk), a severity level (Critical, High, Medium, Low), and timestamps for discovery and closure. This structure enables programmatic querying and reporting.

To implement this, you can use a backend service (like a Node.js API with PostgreSQL) or a managed solution like Supabase or Airtable. The key is creating a secure API to ingest findings. For example, you could write a script that parses audit report PDFs from firms like OpenZeppelin or ChainSecurity, extracting findings into a structured JSON payload. A more automated approach integrates directly with on-chain monitoring tools like Forta or Tenderly, which can emit events when suspicious transactions are detected, creating findings automatically.

Here's a simplified example of a Finding model schema in TypeScript and a function to create a new entry:

```typescript
interface AuditFinding {
  id: string; // UUID
  contractAddress: string;
  reportId: string; // Links to the source audit
  title: string;
  description: string;
  severity: 'Critical' | 'High' | 'Medium' | 'Low' | 'Informational';
  category: 'Access Control' | 'Reentrancy' | 'Logic Error' | 'Oracle';
  status: 'Open' | 'In Progress' | 'Resolved' | 'Acknowledged';
  discoveredAt: Date;
  resolvedAt?: Date;
  evidenceLinks: string[]; // Links to code snippets, transactions
}

async function createFinding(finding: Omit<AuditFinding, 'id'>): Promise<AuditFinding> {
  // Implementation to save to your database
  const newFinding = { ...finding, id: crypto.randomUUID() };
  // ... database insert logic
  return newFinding;
}
```

Effective tracking requires establishing a clear workflow. When a new finding is logged, it should trigger notifications to the relevant development team via Slack, Discord, or email integrations. The status should update as work progresses: moving from Open to In Progress when assigned, and to Resolved only after the fix is verified on-chain. This verification can be automated by linking the finding to a specific transaction hash of the contract upgrade or patch deployment, which your tracker can confirm via an RPC call.
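
The workflow's status transitions can be enforced with a small guard; the exact transition map below is an assumption about your process, not a standard:

```typescript
type Status = "Open" | "In Progress" | "Resolved" | "Acknowledged";

// Allowed lifecycle transitions, per the workflow above: findings move
// Open -> In Progress -> Resolved (after on-chain verification), and
// may be Acknowledged instead; terminal states allow no further moves.
const TRANSITIONS: Record<Status, Status[]> = {
  Open: ["In Progress", "Acknowledged"],
  "In Progress": ["Resolved", "Acknowledged", "Open"],
  Resolved: [],
  Acknowledged: [],
};

function canTransition(from: Status, to: Status): boolean {
  return TRANSITIONS[from].includes(to);
}
```

Rejecting invalid transitions at the API layer keeps the tracker's history auditable, since every status change then corresponds to a legitimate workflow step.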

Finally, this tracker becomes the single source of truth for compliance reporting. You can generate metrics like mean time to resolution (MTTR), open critical issues count, and remediation rate over time. These metrics are crucial for internal reviews and for demonstrating due diligence to regulators or partners. By implementing a robust tracker, you transform ad-hoc security responses into a measurable, accountable process that directly enhances your protocol's security posture and operational transparency.
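
A sketch of the MTTR metric over resolved findings, assuming the `discoveredAt`/`resolvedAt` fields from the schema above:

```typescript
// A finding that has both discovery and resolution timestamps.
interface ClosedFinding {
  discoveredAt: Date;
  resolvedAt: Date;
}

// Mean time to resolution in hours across resolved findings.
function mttrHours(findings: ClosedFinding[]): number {
  if (findings.length === 0) return 0;
  const totalMs = findings.reduce(
    (sum, f) => sum + (f.resolvedAt.getTime() - f.discoveredAt.getTime()),
    0
  );
  return totalMs / findings.length / 3_600_000; // ms per hour
}
```

Computed per severity level, this metric shows whether critical findings are actually being prioritized.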

SECURITY, ACCESS CONTROLS, AND DATA RETENTION

Step 5: Security, Access Controls, and Data Retention

A secure, permissioned dashboard is essential for managing regulatory reporting, internal audits, and stakeholder transparency in Web3 projects.

A compliance dashboard centralizes critical on-chain and off-chain data for reporting obligations like the EU's Markets in Crypto-Assets (MiCA) regulation or the Financial Action Task Force (FATF) Travel Rule. Core data sources include:

  • Transaction logs from your smart contracts or node infrastructure
  • Know Your Customer (KYC) verification status from providers like Veriff or Sumsub
  • Wallet screening results from services like Chainalysis or TRM Labs
  • Internal audit trails of administrative actions

The dashboard must present this data in a clear, auditable format, often aggregating it into standardized reports.

Implementing granular role-based access control (RBAC) is non-negotiable. Define roles such as Compliance Officer, Auditor, and View-Only Analyst. Enforce these roles at the application layer and, where possible, integrate with enterprise identity providers using protocols like OAuth 2.0 or OpenID Connect. For on-chain components, consider using multi-signature wallets or access control contracts like OpenZeppelin's AccessControl to gate sensitive functions. All user sessions should be logged, and failed access attempts must trigger alerts.

Data retention policies must align with jurisdictional requirements, which often mandate keeping records for 5-7 years. For on-chain data, this involves maintaining access to a full archive node or using a blockchain indexing service like The Graph or Covalent. Off-chain data (KYC documents, internal logs) should be stored in encrypted, immutable storage. A practical implementation involves using IPFS with Filecoin for decentralized archival, with cryptographic hashes of the stored data recorded on-chain to prove integrity over time.

The dashboard's backend must be architected for security. Use environment variables for API keys, implement rate limiting, and employ a Web Application Firewall (WAF). All data in transit should use TLS 1.3. For database queries, use parameterized statements to prevent SQL injection. Here's a basic example of a secure API endpoint using Node.js and Express that checks a user's role before returning compliance data:

```javascript
app.get('/api/compliance/report', authenticateToken, async (req, res) => {
  if (!req.user.roles.includes('compliance-officer')) {
    return res.status(403).json({ error: 'Insufficient permissions' });
  }
  // Fetch and return report data from a secure service
  const reportData = await fetchComplianceData(req.user.orgId);
  res.json(reportData);
});
```

Automate report generation and delivery to reduce operational risk. Use scheduled tasks (e.g., Cron jobs, AWS Lambda) to compile daily or weekly reports, which can then be encrypted and sent to designated emails or uploaded to a secure portal. For blockchain-specific reporting, tools like Etherscan's API or Blocknative can be used to monitor and report on large or suspicious transactions automatically. Ensure you have a clear data deletion procedure for when the retention period expires, documenting the method and proof of deletion.

Finally, regular security audits and penetration testing are critical. Engage third-party firms to test the dashboard application and its infrastructure. Maintain an incident response plan for potential data breaches. By building with these security and compliance fundamentals, your dashboard becomes a trusted source of truth that satisfies regulators, protects user data, and streamlines internal governance processes.

COMPLIANCE REPORTING

Frequently Asked Questions

Common questions and solutions for developers building and integrating on-chain compliance dashboards using Chainscore's APIs and data infrastructure.

What data sources does the dashboard use?

The dashboard aggregates and analyzes data from multiple on-chain and off-chain sources to provide a comprehensive compliance view.

Primary sources include:

  • On-chain data: Transaction histories, wallet interactions, token transfers, and smart contract calls sourced directly from node providers and indexed by Chainscore.
  • Attribution data: Entity and protocol labels from our proprietary attribution engine, which maps wallet addresses to known entities (e.g., exchanges, mixers, DeFi protocols).
  • Risk intelligence: Threat feeds and risk scores derived from analyzing patterns associated with sanctions, hacks, scams, and money laundering.

Data is normalized and enriched via our APIs, such as getWalletRiskScore and getEntityTransactions, to create a unified reporting layer.

IMPLEMENTATION

Conclusion and Next Steps

Your compliance dashboard is now operational. This section outlines how to maintain its effectiveness and expand its capabilities.

Launching your dashboard is the beginning of an ongoing process. To ensure it remains a valuable tool, establish a regular review cadence. Schedule weekly checks for data ingestion failures from sources like Chainalysis or TRM Labs and monthly audits of your alert logic and risk scoring thresholds. Document all configuration changes and maintain a version history for your compliance rules, treating them with the same rigor as your smart contract code. This operational discipline transforms the dashboard from a static report into a dynamic compliance engine.

With core monitoring in place, consider advanced integrations to enhance functionality. Connect the dashboard to your internal ticketing system (e.g., Jira, Linear) to automatically create issues for high-risk alerts. Implement webhook notifications to Slack or Discord channels for real-time team awareness. For deeper analysis, export aggregated, anonymized data to business intelligence tools like Metabase or Looker to track trends in user behavior, transaction volumes per jurisdiction, and the effectiveness of your screening processes over time.

The regulatory landscape for DeFi and digital assets is evolving rapidly. Proactively plan for future requirements by designing a modular architecture. This allows you to plug in new data providers or regulatory rule sets without overhauling the entire system. Monitor developments like the EU's Markets in Crypto-Assets (MiCA) regulation and the Financial Action Task Force (FATF) guidance to anticipate new reporting obligations. Your dashboard should be built to adapt, ensuring long-term compliance as both your protocol and the legal framework mature.

Finally, leverage the insights your dashboard generates. Use the data to refine your product's user onboarding flow, adjust risk parameters for certain geographic regions, or inform the development of new, compliant features. A well-maintained compliance reporting system is not just a defensive tool; it provides a strategic advantage by building trust with users, partners, and regulators, ultimately contributing to the sustainable growth of your Web3 project.