How to Architect a Non-Custodial Solution with Institutional Controls
A technical guide to designing non-custodial systems that meet institutional requirements for security, compliance, and operational control.
Introduction: The Need for Controlled Self-Custody
The core promise of self-custody is user sovereignty over assets, removing reliance on a centralized third party. However, for institutions—such as funds, DAOs, or corporations—pure, individual self-custody introduces unacceptable risks. These include single points of failure (a lost private key), lack of internal governance, and an inability to enforce compliance policies like transaction approvals or spending limits. The challenge is to preserve the cryptographic security of non-custodial wallets while layering on the operational controls institutions require.
Controlled self-custody is the architectural paradigm that solves this. It uses smart contract accounts, often called smart accounts or account abstraction (AA), as the foundational layer instead of traditional Externally Owned Accounts (EOAs). A smart account is a programmable wallet whose logic is defined by its on-chain contract code. This allows developers to embed rules directly into the wallet itself, such as requiring multiple signatures (multi-sig), setting daily transfer limits, defining authorized spenders, or adding transaction time-locks. The private keys controlling the account are still held by the institution's members, not a custodian.
The technical shift is from key-centric security to policy-centric security. In an EOA, security is the private key; whoever holds it has absolute, irrevocable control. In a controlled self-custody setup, security is defined by the smart contract's policy engine. For example, a treasury contract might be configured so any transaction over 1 ETH requires 3-of-5 designated officers to sign, while payments to a pre-approved vendor address under that limit only need 1 signature. Frameworks like Safe{Wallet} (formerly Gnosis Safe), ERC-4337 for account abstraction, and OpenZeppelin's Governor provide the modular components to build these systems.
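To make this concrete, the sketch below encodes that example policy as a simple rule evaluation. The helper name, addresses, and thresholds are illustrative only and not tied to any particular framework:

```javascript
// Illustrative sketch only: encodes the example policy above (3-of-5 officers
// above 1 ETH, a single signature for small payments to approved vendors).
// Addresses and thresholds are placeholders, not tied to any framework.
const ONE_ETH = 10n ** 18n; // 1 ETH in wei

const APPROVED_VENDORS = new Set([
  '0x1111111111111111111111111111111111111111', // hypothetical vendor address
]);

function requiredSignatures(tx) {
  const isUnderLimit = BigInt(tx.value) <= ONE_ETH;
  const isApprovedVendor = APPROVED_VENDORS.has(tx.to.toLowerCase());
  return isUnderLimit && isApprovedVendor ? 1 : 3;
}
```

In a smart account this rule lives in the on-chain policy module; the snippet simply shows the decision the module must make.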
Implementing this requires careful architecture. Key decisions involve selecting a signature scheme (e.g., ECDSA, Schnorr, BLS), designing the access control logic, and integrating off-chain signing services for key management. A typical stack might use Safe{Wallet} contracts as the base account, with a custom security policy module that enforces rules. Signing can be orchestrated by an off-chain signing service that holds private keys in secure enclaves (HSMs) and only produces signatures for transactions that pass the policy check, ensuring keys never touch an internet-connected server.
This guide will detail how to architect such a solution. We'll cover: setting up a multi-signature smart account, writing and attaching custom policy modules for spend limits and role-based access, integrating real-world identity for compliance, and building the off-chain infrastructure for secure key management and transaction relay. The outcome is a non-custodial system where assets are secured on-chain, controlled by the institution's own keys, yet governed by enforceable, transparent rules fit for enterprise use.
Prerequisites and Core Technologies
Building a non-custodial system with institutional-grade controls requires a deliberate selection of foundational technologies. This section outlines the core components and knowledge required before implementation.
A non-custodial architecture ensures users retain exclusive control of their private keys, while institutional controls enforce governance, compliance, and security policies. The core technology stack typically involves smart contract wallets (like Safe or Argent), multi-party computation (MPC) for key management, and access control layers that define permission rules. Understanding the trade-offs between these technologies is the first step. For instance, a smart contract wallet offers programmable recovery and transaction logic but operates at the speed and cost of its underlying blockchain, while an MPC-based solution can provide faster signing but with different trust assumptions regarding the key-share custodians.
Before architecting your solution, you must define the specific controls required. Common institutional requirements include: - Transaction policies (spending limits, whitelisted addresses) - Approval workflows (M-of-N multisig, time-locks) - Compliance integrations (travel rule, sanctions screening) - Audit and reporting (immutable logs, real-time monitoring). These controls are implemented via a combination of on-chain smart contracts and off-chain services. The Safe{Wallet} documentation provides a comprehensive reference for programmable account abstraction, which is a leading framework for building such systems.
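Before any code is written, it helps to capture these requirements as a single declarative policy document that both on-chain modules and off-chain services can be checked against. The shape below is a hypothetical example, not a schema from any specific product:

```javascript
// Hypothetical policy document shared by the on-chain module configuration
// and the off-chain approval service; all field names and values are illustrative.
const treasuryPolicy = {
  approvals: { threshold: 2, owners: 3 },          // 2-of-3 multisig
  limits: { dailyUsd: 250_000, perTxUsd: 50_000 }, // enforced via a price oracle
  allowlist: [
    '0x0000000000000000000000000000000000000001', // placeholder: DEX router
    '0x0000000000000000000000000000000000000002', // placeholder: payroll vendor
  ],
  timelockSeconds: 86_400,                         // 24h delay for high-value transactions
  compliance: { sanctionsScreening: true, travelRule: true },
  audit: { immutableLogs: true, realTimeAlerts: true },
};
```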
Your development environment must be configured for the target blockchain. For Ethereum and EVM-compatible chains, this means setting up Hardhat or Foundry, understanding ERC-4337 for account abstraction, and being proficient with a language like Solidity or Vyper. You'll also need to integrate oracles (like Chainlink) for the price feeds used in limit validation, and identity attestation services (like Gitcoin Passport or Verite) for compliance checks. A typical test setup involves deploying mock policy contracts and simulating multi-signature proposals to validate the governance flow before committing to mainnet.
Security is paramount. The architecture must be designed with the principle of least privilege, where each component has only the permissions necessary to perform its function. This involves rigorous testing, including static analysis with Slither, formal verification for critical logic, and audits from specialized firms. Furthermore, you must plan for key lifecycle events: generation (distributed key generation for MPC), rotation, recovery (via social or hardware-backed methods), and revocation. Tools like OpenZeppelin's Contracts Wizard can help bootstrap secure, standard-compliant access control logic for your policies.
Architectural Overview: MPC vs. Smart Contract Wallets
This guide compares Multi-Party Computation (MPC) and Smart Contract Wallets for building non-custodial solutions with enterprise-grade controls, detailing their architectures, trade-offs, and implementation considerations.
When architecting a non-custodial solution for institutions, the choice between Multi-Party Computation (MPC) and Smart Account Abstraction (AA) wallets defines the security model, user experience, and feature set. MPC wallets, like those from Fireblocks or Coinbase's wallet-as-a-service, rely on cryptographic secret sharing across multiple parties or devices to sign transactions. Smart contract wallets, such as Safe (formerly Gnosis Safe) or ERC-4337 account abstraction accounts, are programmable smart contracts that hold assets and execute logic on-chain. The core distinction is trust placement: MPC secures the private key off-chain, while smart contracts manage assets on-chain via immutable code.
MPC architecture excels at providing granular, policy-based controls before a transaction is signed. A typical setup involves distributing key shares among client devices, cloud HSM services, and backup providers. A 2-of-3 threshold scheme is common, requiring two signatures to authorize a transaction. Policies can enforce rules like transaction limits, whitelisted addresses, and time-locks at the signing layer. This happens off-chain, so policy checks and complex approvals incur no gas fees. However, MPC's logic is proprietary and centralized within the vendor's infrastructure, creating a dependency on their service availability and security audits.
Smart contract wallet architecture moves access control logic onto the blockchain itself. The singleton ERC-4337 EntryPoint contract bundles user operations, and each wallet's own account contract holds the assets and validation logic. Controls like multi-signature requirements, spending limits, and social recovery are implemented as verifiable, on-chain smart contract functions. For example, a validateUserOp function can check a signature against a list of owners and enforce a daily spendLimit. This model is transparent and composable with other DeFi protocols but requires paying gas for all operations, including policy evaluation. It also introduces smart contract risk, though audited, battle-tested contracts like Safe mitigate this.
The integration and extensibility paths differ significantly. MPC solutions typically offer REST APIs and SDKs (e.g., fireblocks-sdk) for integration, providing a streamlined but vendor-locked experience. Adding a new blockchain often just requires the vendor to add support. Smart contract wallets are extended by deploying new modules or plugins to the account contract. An institution could add a transaction simulation module to pre-check for exploits or a delegated signing module for specific roles. This on-chain programmability is powerful but requires Solidity development and introduces upgradeability complexities.
For institutions, the decision often hinges on asset type, regulatory requirements, and desired blockchain interoperability. MPC is dominant for trading desks managing native assets (BTC, ETH) across many chains due to its speed and lack of chain-specific deployment. Smart contract wallets are ideal for DeFi-native operations on EVM chains, where programmable spending rules, batched transactions, and gas sponsorship (paymasters) are critical. A hybrid approach is emerging: using MPC to secure a signer for a smart contract wallet, combining off-chain policy enforcement with on-chain programmability. This leverages MPC for key management while using the smart account for its rich feature set and DeFi composability.
MPC vs. Smart Contract Wallet: Technical Comparison
A technical breakdown of two dominant non-custodial wallet architectures for institutional use cases, focusing on security, flexibility, and operational trade-offs.
| Feature / Metric | Multi-Party Computation (MPC) Wallet | Smart Contract Wallet (e.g., Safe) |
|---|---|---|
| Custodial Model | Non-custodial (distributed key) | Non-custodial (on-chain account) |
| Core Security Primitive | Cryptographic secret sharing (e.g., GG20, FROST) | On-chain smart contract logic & multisig |
| Key Management | Private key is never assembled; held as shares | Private keys for signers are standard EOA keys |
| On-Chain Footprint | Single EOA address for all transactions | Unique smart contract address per wallet |
| Inheritance/Recovery | Protocol-level share refresh/redistribution | Requires pre-configured on-chain guardian logic |
| Gas Overhead per Tx | Standard EOA gas cost | ~40k-100k+ extra gas for contract execution |
| Native Chain Support | Any EVM/non-EVM chain with standard signatures | Requires contract deployment & verification per chain |
| Approval Flexibility | M-of-N thresholds via protocol | Complex logic (time-locks, spending limits) via Solidity |
Step 1: Implementing an MPC Wallet with Threshold Signatures
This guide details the foundational architecture for a non-custodial wallet using Multi-Party Computation (MPC) and threshold signatures, enabling institutional-grade security and operational controls.
A Multi-Party Computation (MPC) wallet replaces a single private key with a cryptographic secret that is split into multiple key shares, distributed among different parties or devices. No single party ever has access to the complete key. To authorize a transaction, a predetermined threshold of parties (e.g., 2-of-3) must collaborate using a threshold signature scheme (TSS) to produce a valid signature, without ever reconstructing the full private key. This architecture fundamentally eliminates single points of failure and provides a non-custodial foundation with built-in redundancy.
The core technical choice is the signature scheme. ECDSA is the standard for Ethereum and Bitcoin, while EdDSA (like Ed25519) is common for Solana and other chains. Libraries such as Binance's tss-lib, ZenGo's multi-party-ecdsa, or Coinbase's Kryptology provide production-tested threshold-signing implementations. Your architecture must decide on the threshold parameters (m-of-n), key generation ceremony security, and whether to use a centralized coordinator or a peer-to-peer network for signing rounds. Each participant runs a client that manages their key share and participates in the distributed signing protocol.
For institutional controls, the wallet backend must manage policy engines and approval workflows. A typical setup involves: a user client (initiates), a backend policy service (validates against rules like whitelists, limits), and multiple signer nodes (hold key shares). The policy service checks the transaction request; if approved, it orchestrates the MPC signing ceremony among the signer nodes. This separation ensures operational governance (handled by policy) is distinct from cryptographic security (handled by MPC).
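A minimal sketch of that separation is shown below. The policyStore and coordinator interfaces are hypothetical placeholders for your own services; the point is that signer nodes only participate once the policy check passes:

```javascript
// Hypothetical policy service: validates a transaction request against
// institutional rules before any MPC signing ceremony is started.
// policyStore and coordinator are illustrative interfaces, not a real SDK.
async function handleTransactionRequest(request, policyStore, coordinator) {
  const policy = await policyStore.getPolicy(request.vaultId);

  if (!policy.whitelist.includes(request.to.toLowerCase())) {
    return { approved: false, reason: 'destination not whitelisted' };
  }
  if (BigInt(request.value) > BigInt(policy.perTxLimitWei)) {
    return { approved: false, reason: 'per-transaction limit exceeded' };
  }

  // Governance passed: only now do the signer nodes take part in the
  // threshold-signing ceremony (see the signing flow below).
  const signature = await coordinator.runSigningCeremony(request);
  return { approved: true, signature };
}
```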
Implement key generation and storage carefully. The initial Distributed Key Generation (DKG) ceremony is critical. Each participant generates a secret share, and through a secure multi-party protocol, they collectively derive a public address without any party knowing the full private key. Shares must be stored securely: in Hardware Security Modules (HSMs), on secure enclaves (like Intel SGX or Apple Secure Element), or encrypted with strong hardware-bound keys. Never store a share in plaintext on a server.
Here is a simplified conceptual flow for a 2-of-3 ECDSA signing using a coordinator, inspired by tss-lib patterns:
```javascript
// 1. Initiate signing
const signingMessage = hashTransaction(tx);
// 2. Coordinator requests partial signatures from signer nodes
const party1SigShare = await signerNode1.createSignatureShare(signingMessage, contextId);
const party2SigShare = await signerNode2.createSignatureShare(signingMessage, contextId);
// 3. Coordinator aggregates shares into a final signature
const finalSignature = await coordinator.aggregateSignatureShares([party1SigShare, party2SigShare]);
// 4. Broadcast the valid signature to the network
await broadcastTransaction(tx, finalSignature);
```
The contextId ensures all parties are signing the same data, preventing replay attacks.
Auditing and monitoring are essential. Log all ceremony events (key generation, signing rounds) for non-repudiation. Implement key share rotation protocols to proactively refresh shares without changing the wallet's public address, limiting the blast radius of a potential compromise. By combining MPC-TSS cryptography with a policy-driven backend, you achieve a non-custodial system that meets institutional requirements for security, control, and operational transparency.
Step 2: Building a Smart Contract Wallet with ERC-4337
This guide details the implementation of a non-custodial smart contract wallet using the ERC-4337 standard, focusing on core components like the Account Abstraction EntryPoint, UserOperation handling, and modular security controls suitable for institutional use.
The foundation of an ERC-4337 wallet is a smart contract account that implements the IAccount interface. Unlike an Externally Owned Account (EOA), this contract holds user assets and defines the logic for transaction validation and execution. A minimal implementation includes a validateUserOp function, which verifies signatures and checks custom rules (like daily spend limits or multi-signature requirements) before a UserOperation is executed. The contract's execute function then performs the intended actions, such as token transfers or contract calls. This separation of validation and execution is central to Account Abstraction.
To interact with the broader ecosystem, your wallet must integrate with the ERC-4337 EntryPoint contract. This singleton system contract, deployed on each supporting chain (e.g., 0x5FF137D4b0FDCD49DcA30c7CF57E578a026d2789 on Ethereum Mainnet), is the trusted verifier and orchestrator. It receives bundled UserOperation objects, calls your account's validateUserOp, and if successful, pays the gas fees (potentially in ERC-20 tokens via a Paymaster) and triggers execution. Your development and testing must target the specific EntryPoint version (e.g., v0.6) to ensure compatibility.
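For reference, a v0.6 UserOperation submitted to that EntryPoint has the shape below. All values are placeholders; callData would encode a call to your account's execute function:

```javascript
// Shape of an ERC-4337 v0.6 UserOperation as consumed by the EntryPoint.
// All values below are placeholders for illustration.
const userOp = {
  sender: '0x0000000000000000000000000000000000000001', // placeholder smart account address
  nonce: '0x0',
  initCode: '0x',            // factory address + calldata on first deployment, else empty
  callData: '0x',            // encoded call to the account's execute function
  callGasLimit: '0x30000',
  verificationGasLimit: '0x60000',
  preVerificationGas: '0xC000',
  maxFeePerGas: '0x59682F00',
  maxPriorityFeePerGas: '0x59682F00',
  paymasterAndData: '0x',    // paymaster address + data if gas is sponsored
  signature: '0x',           // filled in after signing the userOp hash
};
```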
Institutional controls are implemented as validation logic within the validateUserOp function. Common patterns include:
- Multi-signature policies: Requiring M-of-N signatures from a set of administrator keys.
- Transaction limits: Enforcing daily volume caps in USD value via an oracle.
- Allowlisting: Restricting interactions to pre-approved smart contract addresses.
- Time-locks: Delaying execution of high-value transactions for a cooling-off period.

These rules are enforced on-chain, providing transparent and non-custodial security without relying on a third party's backend.
A critical design choice is between a monolithic wallet contract and a modular, upgradeable architecture using proxies. For institutional use, an upgradeable design (like the Transparent Proxy or UUPS pattern) is recommended to patch vulnerabilities or add features without migrating assets. Key logic for validation and execution can be separated into modules, allowing a multi-signature module to be swapped for a role-based one. The OpenZeppelin Contracts library provides standard implementations for UUPS upgradeability and access control, which form a robust starting point.
Finally, you must implement a Bundler client or integrate with an existing service like Stackup, Biconomy, or Alchemy to submit UserOperation objects to the network. The bundler packages user intents, estimates gas, and sends transactions to the EntryPoint. For development, you can run the eth-infinitism reference bundler locally. Thorough testing with tools like Hardhat or Foundry is essential, simulating the complete flow from user intent to on-chain execution to ensure your security logic and gas estimations work correctly under all conditions.
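The sketch below shows the submission path in isolation, using the eth_estimateUserOperationGas and eth_sendUserOperation JSON-RPC methods defined by ERC-4337. The bundler URL is a placeholder, and in practice you would sign the final operation with your institution's signer set before sending it:

```javascript
// Submit a UserOperation through a bundler's ERC-4337 JSON-RPC endpoint.
// BUNDLER_URL is a placeholder; the two RPC methods are defined by the ERC-4337 spec.
const BUNDLER_URL = 'https://bundler.example.com/rpc';
const ENTRY_POINT = '0x5FF137D4b0FDCD49DcA30c7CF57E578a026d2789'; // v0.6 EntryPoint

async function rpc(method, params) {
  const res = await fetch(BUNDLER_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ jsonrpc: '2.0', id: 1, method, params }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message);
  return result;
}

async function submitUserOperation(userOp) {
  const gas = await rpc('eth_estimateUserOperationGas', [userOp, ENTRY_POINT]);
  const filled = { ...userOp, ...gas }; // apply the bundler's gas estimates
  // ...sign `filled` with the institution's signer(s) before submission...
  return rpc('eth_sendUserOperation', [filled, ENTRY_POINT]);
}
```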
Step 3: Designing Recovery and Social Login Fallbacks
This section details the critical fallback mechanisms for institutional non-custodial wallets, focusing on secure key recovery and user-friendly access methods.
A robust non-custodial architecture must plan for key loss. Institutional-grade solutions implement multi-party computation (MPC) for recovery, which is superior to simple seed phrase backups. In an MPC-based recovery scheme, a user's private key is never stored whole. Instead, it is split into cryptographic shares distributed among trusted entities—such as the user's device, a cloud backup encrypted with a personal PIN, and a designated institutional custodian. The key can only be reconstructed when a predefined threshold of these parties (e.g., 2-of-3) collaborates, ensuring no single point of failure or compromise.
For user onboarding and accessibility, integrating social login fallbacks like Google Sign-In or Apple ID provides a familiar entry point without sacrificing security. This is achieved by using the OAuth token from the social provider to decrypt a user's encrypted key share stored on a secure backend. Crucially, the social provider never has access to the key material; it only provides an authentication signal. This method reduces friction for non-technical users while maintaining the wallet's non-custodial property, as the institution cannot unilaterally access funds without the user's authenticated session.
The technical implementation involves several steps. First, during wallet creation, an MPC protocol like GG20 generates the key shares. One share is stored locally on the user's device, another is encrypted with a key derived from the social login (e.g., using Web3Auth's infrastructure), and a third may be held by the institution. A recovery transaction requires cryptographic signatures from the threshold number of shares. Code for initiating recovery often involves calling a startRecovery function on a smart contract or backend service, which then coordinates the signing ceremony among the share holders.
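The sketch below illustrates one possible client-side recovery step under simplified assumptions: the backend releases the encrypted backup share only to an OAuth-authenticated session, and the share is decrypted locally with a key derived from a user PIN via WebCrypto. The endpoint and payload shape are hypothetical, and production systems such as Web3Auth use their own distributed protocols rather than this simple scheme:

```javascript
// Hypothetical browser-side recovery step. Endpoint, payload, and PIN-based
// wrapping are illustrative assumptions, not a specific vendor's protocol.
async function recoverBackupShare(oauthAccessToken, userPin) {
  // 1. Authenticated fetch of the wrapped share (base64 ciphertext, salt, IV assumed).
  const res = await fetch('https://recovery.example.com/share', {
    headers: { Authorization: `Bearer ${oauthAccessToken}` },
  });
  const { ciphertext, salt, iv } = await res.json();

  const dec = (b64) => Uint8Array.from(atob(b64), (c) => c.charCodeAt(0));

  // 2. Derive an AES-GCM key from the PIN with PBKDF2 (WebCrypto).
  const baseKey = await crypto.subtle.importKey(
    'raw', new TextEncoder().encode(userPin), 'PBKDF2', false, ['deriveKey']);
  const aesKey = await crypto.subtle.deriveKey(
    { name: 'PBKDF2', salt: dec(salt), iterations: 310000, hash: 'SHA-256' },
    baseKey, { name: 'AES-GCM', length: 256 }, false, ['decrypt']);

  // 3. Decrypt the key share; it is then fed into the MPC signing client.
  const shareBytes = await crypto.subtle.decrypt(
    { name: 'AES-GCM', iv: dec(iv) }, aesKey, dec(ciphertext));
  return new Uint8Array(shareBytes);
}
```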
Security considerations are paramount. The social login flow must use PKCE (Proof Key for Code Exchange) to prevent interception attacks. The encrypted share stored in the backend should be tied to the user's specific OAuth sub (subject) identifier and require recent authentication. Furthermore, institutions should implement rate-limiting and anomaly detection on recovery attempts. It's also advisable to include a time-delay for recovery operations, allowing users a window to cancel fraudulent requests, a concept borrowed from smart contract wallets like Safe{Wallet}.
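Generating the PKCE verifier and challenge is a small, self-contained piece of that flow; a minimal WebCrypto sketch following RFC 7636:

```javascript
// PKCE helper: generates the code_verifier and S256 code_challenge that the
// social login flow sends with the authorization request (RFC 7636).
function base64url(bytes) {
  return btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

async function createPkcePair() {
  const verifierBytes = crypto.getRandomValues(new Uint8Array(32));
  const codeVerifier = base64url(verifierBytes);             // kept client-side
  const digest = await crypto.subtle.digest(
    'SHA-256', new TextEncoder().encode(codeVerifier));
  const codeChallenge = base64url(new Uint8Array(digest));   // sent to the identity provider
  return { codeVerifier, codeChallenge };
}
```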
Ultimately, these fallback mechanisms create a balanced system. They provide the user experience and recovery safety nets expected in traditional finance while preserving the core tenet of self-custody. The architecture ensures that asset control remains distributed, aligning with the principles of decentralized ownership while meeting the practical demands of institutional and mainstream adoption.
Step 4: Building a Client-Side and On-Chain Policy Engine
This section details the dual-layer policy engine that enforces institutional controls in a non-custodial wallet, balancing security with user autonomy.
A non-custodial solution with institutional controls requires a policy engine that operates on two distinct layers: a client-side policy enforcer and an on-chain policy contract. The client-side layer, embedded in the wallet application (e.g., a browser extension or mobile app), acts as the first line of defense. It intercepts transaction requests, validates them against a predefined rule set—such as spending limits, allowed token types, or destination address whitelists—and blocks non-compliant actions before they ever reach the blockchain. This provides immediate user feedback and prevents unnecessary on-chain gas expenditure for invalid operations.
The on-chain layer serves as the ultimate source of truth and enforcement mechanism. It consists of smart contracts that codify the institution's governance policies. When a compliant transaction passes the client-side check, it is submitted to a policy contract for final verification. This contract, often acting as a relayer or transaction guard, can check signatures from authorized administrators, validate complex multi-signature requirements, or enforce time-locks. Only after this on-chain validation does the transaction execute. This architecture ensures that even a compromised client application cannot bypass the core institutional rules.
Key to this design is the policy synchronization between layers. The client-side rules must be a subset or a reflection of the on-chain contract logic to prevent conflicts. For example, a rule like "maximum daily transfer: 10 ETH" is enforced client-side for UX, but the on-chain contract must also reject any transaction exceeding this limit. This is often managed by having the client fetch and cache the current policy parameters from the on-chain contract or a trusted API, ensuring the user interface reflects the enforceable reality.
Implementing the client-side engine involves hooking into the wallet's transaction construction flow. Using a framework like WalletConnect or extending an EIP-1193 provider, you can inject validation logic. A simplified code snippet for a client-side rule check might look like:
```javascript
async function validateTransaction(tx) {
  const policy = await fetchPolicyFromChain();
  if (tx.value > policy.dailyLimit) {
    throw new Error('Transaction exceeds daily limit');
  }
  if (!policy.allowedTokens.includes(tx.to)) {
    throw new Error('Destination not whitelisted');
  }
  return true;
}
```
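The fetchPolicyFromChain helper used above would typically read the policy contract's public parameters and cache them briefly. A sketch with ethers v6 follows; the contract address and its dailyLimit/isAllowed interface are hypothetical:

```javascript
// Sketch of client-side policy synchronization with ethers v6. The policy
// contract interface (dailyLimit, isAllowed) and address are hypothetical;
// the pattern is simply "read on-chain parameters, cache them with a short TTL".
import { Contract, JsonRpcProvider } from 'ethers';

const POLICY_ABI = [
  'function dailyLimit() view returns (uint256)',
  'function isAllowed(address target) view returns (bool)',
];

const provider = new JsonRpcProvider('https://rpc.example.com');
const policyContract = new Contract(
  '0x0000000000000000000000000000000000000001', // replace with the deployed policy contract
  POLICY_ABI, provider);

let cache = null;
async function fetchPolicyFromChain() {
  if (cache && Date.now() - cache.at < 60_000) return cache.value; // 60s TTL
  const dailyLimit = await policyContract.dailyLimit();
  cache = {
    at: Date.now(),
    value: { dailyLimit, isAllowed: (addr) => policyContract.isAllowed(addr) },
  };
  return cache.value;
}
```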
The on-chain policy contract is typically built using a modular system like OpenZeppelin's AccessControl for permissions and may implement interfaces like EIP-2771 for meta-transactions or use a registry pattern to manage updatable rules. A critical consideration is gas efficiency; policy checks must be optimized to avoid making simple transactions prohibitively expensive. Furthermore, the contract must include secure upgrade mechanisms or policy adjustment functions that are gated by the institution's multi-signature wallet or DAO, allowing rules to evolve without migrating assets.
This dual-engine architecture achieves the core promise of non-custodial institutional DeFi: users retain possession of their private keys, while the organization can enforce necessary compliance and risk management frameworks. The client-side provides a seamless and responsive experience, while the immutable on-chain layer guarantees that the established guardrails cannot be subverted, creating a trust-minimized system for regulated participation.
Implementation Examples by Use Case
Multi-Signature Treasury Management
Institutional funds often require multi-signature (multisig) controls for treasury management. A common pattern uses a Gnosis Safe wallet as the primary non-custodial vault, with transaction execution gated by a policy engine.
Key Components:
- Gnosis Safe: Serves as the asset vault (e.g., on Ethereum mainnet).
- Policy Contract: An on-chain smart contract that defines spending limits, authorized destinations (e.g., whitelisted DEX routers like Uniswap, 1inch), and cooldown periods.
- Off-Chain Signer Service: A backend service that monitors for policy-compliant proposals, collects signatures from designated officers, and submits the batch transaction to the Safe.
Flow: A trader submits a swap proposal. The policy contract validates it against rules (limit, destination). Approved proposals are queued. The signer service prompts officers to sign via Safe UI. Once the threshold (e.g., 2-of-3) is met, the service executes the transaction.
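A condensed sketch of the signer service's execution step is shown below. It assumes Safe's Protocol Kit (@safe-global/protocol-kit), whose method names and signatures change between major versions, so verify the exact calls against the current Safe documentation; the policy contract interface is hypothetical:

```javascript
// Sketch of the off-chain signer service executing an approved proposal.
// The @safe-global/protocol-kit calls below are assumptions to verify against
// the current Safe docs; policyContract.isCompliant is a hypothetical interface.
import Safe from '@safe-global/protocol-kit';

async function executeApprovedSwap(proposal, config) {
  // 1. Re-check the proposal against the on-chain policy contract before execution.
  const ok = await config.policyContract.isCompliant(proposal.to, proposal.value, proposal.data);
  if (!ok) throw new Error('Proposal no longer policy-compliant');

  // 2. Connect to the treasury Safe as the executor.
  const safe = await Safe.init({
    provider: config.rpcUrl,
    signer: config.executorPrivateKey,
    safeAddress: config.safeAddress,
  });

  // 3. Build and execute the transaction. In practice the officer-signed
  //    transaction is fetched from the Safe Transaction Service once the
  //    2-of-3 threshold is met, rather than rebuilt here.
  const safeTx = await safe.createTransaction({
    transactions: [{ to: proposal.to, value: proposal.value, data: proposal.data }],
  });
  return safe.executeTransaction(safeTx);
}
```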
Essential Resources and Tools
These resources cover the core primitives required to design a non-custodial architecture with institutional-grade controls. Each resource focuses on a specific layer you can integrate into production systems without introducing custody risk.
Transaction Monitoring and Simulation
Pre-execution transaction simulation is critical for preventing loss in non-custodial systems. Before signatures are collected, transactions should be simulated against the current blockchain state.
Best practices include:
- Static call simulation to detect reverts or unexpected state changes (see the sketch after this list)
- Asset delta analysis to show exact token movements per transaction
- Contract interaction labeling to identify unknown or risky targets
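The static-call check from the list above can be done with a plain eth_call before any signature is collected; richer asset-delta and trace analysis requires a simulation service such as Tenderly. A minimal sketch with ethers v6:

```javascript
// Minimal static-call simulation with ethers v6: run the exact transaction
// through eth_call at the latest block before signatures are collected.
// A revert here surfaces the failure (and often the revert reason) at zero cost.
import { JsonRpcProvider } from 'ethers';

async function simulate(tx, rpcUrl = 'https://rpc.example.com') {
  const provider = new JsonRpcProvider(rpcUrl);
  try {
    const returnData = await provider.call({
      from: tx.from, to: tx.to, data: tx.data, value: tx.value,
    });
    return { ok: true, returnData };
  } catch (err) {
    // ethers often decodes revert reasons into err.reason or err.shortMessage
    return { ok: false, reason: err.reason ?? err.shortMessage ?? String(err) };
  }
}
```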
Safe’s transaction service and ecosystem tooling allow teams to preview:
- Token outflows and approvals
- Delegate calls and module interactions
- Gas usage and execution paths
Institutions typically require simulations to be reviewed by multiple approvers before signing. This reduces the risk of phishing, malicious calldata injection, or operator error while preserving a fully non-custodial signing flow.
Frequently Asked Questions for Developers
Common technical questions and solutions for building non-custodial systems with enterprise-grade controls for asset management and transaction security.
What is the difference between a custodial and a non-custodial architecture?
The fundamental difference is private key management. In a custodial model, a third-party service holds the keys, acting as a centralized intermediary. A non-custodial architecture ensures end-users retain sole control of their keys, eliminating counterparty risk.
However, "non-custodial" doesn't mean uncontrolled. The challenge is layering institutional controls—like multi-signature wallets, transaction policy engines, and role-based access—on top of this self-custody foundation. This is achieved through smart contract logic (e.g., Safe{Wallet} modules) or specialized protocols (like MPC-TSS), which enforce rules without a single entity holding the key.
Conclusion and Next Steps
This guide has outlined the core principles for building a non-custodial solution with institutional-grade controls. The next step is to integrate these components into a production-ready system.
Architecting a non-custodial solution with institutional controls requires balancing self-custody with operational security. The core components we've covered—multi-party computation (MPC) for key management, policy engines for transaction validation, and secure off-chain infrastructure for signing—create a robust foundation. This architecture ensures that while users retain ultimate asset ownership, all actions are governed by pre-defined, auditable rules. The separation of duties between policy definition, approval, and execution is critical for mitigating internal and external threats.
For implementation, start by selecting and integrating a production-ready MPC provider like Fireblocks, Qredo, or Coinbase MPC. These platforms provide SDKs and APIs for generating wallets, proposing transactions, and gathering approvals. Your application's backend must act as the policy orchestrator, checking proposed transactions against business logic (e.g., whitelists, limits) before forwarding them to the MPC network for signature. Always use a hardened, air-gapped server or a trusted execution environment (TEE) to host the policy engine to prevent tampering.
The next phase involves rigorous testing and auditing. Deploy the system on a testnet with simulated attack vectors: test policy bypass attempts, simulate key share compromise, and stress-test the approval workflows. Engage a third-party security firm to audit the entire stack, from your smart contracts and backend logic to the integration with the MPC service. Document all controls, failure modes, and recovery procedures. This due diligence is non-negotiable for institutional adoption, as it provides the evidence of security and operational maturity that stakeholders and regulators require.
Looking forward, consider advanced features to enhance the system. Implement transaction simulation using services like Tenderly or OpenZeppelin Defender to preview outcomes before signing. Integrate real-time monitoring and alerting for anomalous activity. Explore zero-knowledge proofs (ZKPs) for creating privacy-preserving compliance reports that prove adherence to policies without revealing sensitive on-chain data. The ecosystem for institutional crypto infrastructure is rapidly evolving, so staying current with new cryptographic primitives and custody models is essential.
To continue your learning, explore the documentation for the MPC providers mentioned, study ERC-4337 (account abstraction) for smart contract wallet design patterns, and review security best practices from the NIST Cybersecurity Framework and SOC 2 controls. Building this solution is an iterative process—start with a minimum viable policy set, deploy in a controlled environment, and gradually expand functionality based on real-world use and feedback.