ARCHITECTURE GUIDE

How to Design an Interoperable Social Feed Aggregator

A technical guide for developers building a cross-platform social feed that unifies content from protocols like Farcaster, Lens, and Bluesky.

An interoperable social feed aggregator is a client application that fetches, normalizes, and displays social posts from multiple decentralized social (DeSo) protocols. Unlike a single-protocol client, its core challenge is creating a unified experience from disparate data models and APIs. Key architectural components include a protocol adapter layer to handle API differences, a normalization engine to create a common data schema, and a ranking algorithm to merge feeds from sources like Farcaster (on Optimism), Lens (on Polygon), and Bluesky (AT Protocol).

The first step is designing the normalized data schema. Your internal Post object must abstract away protocol-specific fields. For example, a Farcaster cast has a hash and author.fid, a Lens publication has a pubId and profile.id, and a Bluesky skeet has a uri and author.did. Your schema should map these to common fields like id, authorId, content, timestamp, and protocol. You'll also need to handle different content types: text, images, embeds, and references (replies, recasts/reposts).
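
A minimal TypeScript sketch of such a schema follows. The field names are illustrative, not a standard; NormalizedThread is included because the adapter interface below returns it.

typescript
// Illustrative normalized schema; one possible design, not a standard.
type SourceProtocol = 'farcaster' | 'lens' | 'bluesky';

interface NormalizedPost {
  id: string;             // composite key, e.g. "farcaster:<hash>"
  protocol: SourceProtocol;
  authorId: string;       // FID, Lens profile ID, or Bluesky DID, stringified
  content: string;        // plain-text body
  timestamp: number;      // Unix milliseconds, normalized across sources
  embeds: string[];       // image, link, or frame URLs
  replyTo?: string;       // normalized id of the parent post, if any
  raw: unknown;           // original protocol payload, kept for debugging
}

interface NormalizedThread {
  root: NormalizedPost;
  replies: NormalizedPost[];
}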

Next, implement the adapter layer. Each protocol connector is a module that queries its native API and transforms responses into your normalized schema. For Farcaster, use the Hub API or Neynar. For Lens, use the Lens API or index data from the blockchain. For Bluesky, use the AT Protocol's XRPC. Use batch requests and caching to manage rate limits. Here's a simplified TypeScript interface for an adapter:

typescript
// Each protocol connector implements this interface and returns data
// already mapped into the normalized schema defined above.
interface ProtocolAdapter {
  fetchUserFeed(userId: string, limit: number): Promise<NormalizedPost[]>;
  fetchPostThread(postId: string): Promise<NormalizedThread>;
}
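
As a concrete illustration, here is a hedged sketch of a Farcaster adapter backed by a hosted indexer in the Neynar mold. The endpoint URL, query parameters, and response shape are assumptions for illustration only; consult your provider's documentation for the real API.

typescript
// Sketch of a Farcaster adapter. The base URL below is deliberately
// hypothetical; the response shape (casts, hash, author.fid, ...) is an
// assumption modeled on typical hosted Farcaster indexers.
class FarcasterAdapter implements ProtocolAdapter {
  constructor(private apiKey: string) {}

  async fetchUserFeed(userId: string, limit: number): Promise<NormalizedPost[]> {
    const res = await fetch(
      `https://api.neynar.example/feed?fid=${userId}&limit=${limit}`, // hypothetical URL
      { headers: { 'x-api-key': this.apiKey } },
    );
    const { casts } = await res.json();
    return casts.map((cast: any): NormalizedPost => ({
      id: `farcaster:${cast.hash}`,
      protocol: 'farcaster',
      authorId: String(cast.author.fid),
      content: cast.text ?? '',
      timestamp: Date.parse(cast.timestamp),
      embeds: (cast.embeds ?? []).map((e: any) => e.url),
      replyTo: cast.parent_hash ? `farcaster:${cast.parent_hash}` : undefined,
      raw: cast,
    }));
  }

  async fetchPostThread(postId: string): Promise<NormalizedThread> {
    // Same pattern: fetch the conversation for postId, normalize the root
    // cast and its replies. Omitted here for brevity.
    throw new Error('not implemented in this sketch');
  }
}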

The aggregation and ranking logic determines how posts from different networks are merged into a single feed. A simple chronological merge often isn't sufficient due to varying network activity levels. Consider a scoring system based on factors like post age, engagement metrics (likes, recasts), the user's social graph affinity, and protocol weighting. You might use a time-decay algorithm (like Hacker News' ranking) applied across the unified dataset. This ensures a relevant, blended feed rather than a protocol-by-protocol concatenation.
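
A minimal sketch of such a time-decay ranker over the normalized dataset; the gravity exponent and the engagement weighting are assumptions to tune.

typescript
// Hacker News-style time-decay scoring applied to the merged dataset.
// GRAVITY and the engagement term are illustrative hyperparameters.
function rankMergedFeed(
  posts: (NormalizedPost & { engagement?: number })[],
  now = Date.now(),
): NormalizedPost[] {
  const GRAVITY = 1.8;
  const score = (p: NormalizedPost & { engagement?: number }) => {
    const ageHours = (now - p.timestamp) / 3_600_000;
    const engagement = p.engagement ?? 0; // likes + recasts/reposts, pre-aggregated
    return (engagement + 1) / Math.pow(ageHours + 2, GRAVITY);
  };
  return [...posts].sort((a, b) => score(b) - score(a));
}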

Finally, address identity mapping and user experience. Users need a way to link their identities across protocols (e.g., proving they own both a Farcaster FID and a Lens Profile ID). This can be done via signed messages or by verifying ownership of connected wallet addresses. The UI should clearly indicate the source protocol for each post (e.g., with a small icon) and handle network-specific actions—like recasting a Farcaster cast versus collecting a Lens post—through the appropriate adapter. The end goal is a seamless client where the underlying protocol complexity is abstracted from the end-user.
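
One way to verify such a link, sketched with viem's verifyMessage. The message format here is an assumption; a production flow should also bind a nonce and domain to prevent replay.

typescript
import { verifyMessage } from 'viem';

// Minimal check that one wallet claims two protocol identities.
// The message wording is illustrative, not a standard.
async function verifyIdentityLink(
  address: `0x${string}`,
  fid: number,
  lensProfileId: string,
  signature: `0x${string}`,
): Promise<boolean> {
  const message = `I control Farcaster FID ${fid} and Lens profile ${lensProfileId}`;
  return verifyMessage({ address, message, signature });
}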

ARCHITECTURE FOUNDATION

Prerequisites and Core Technologies

Building an interoperable social feed aggregator requires a robust technical stack that bridges Web2 APIs and Web3 protocols. This section outlines the core technologies and foundational knowledge needed to design a scalable, decentralized social graph.

The foundation of an interoperable aggregator is a unified data model. You must design a schema that can normalize disparate data from sources like Farcaster casts, Lens Protocol publications, and traditional platforms via their APIs (e.g., Twitter/X). This involves creating abstract types for core entities: User, Post, Reaction, and Connection. A post, for instance, must encapsulate fields for on-chain provenance (like a Lens Publication Id or Farcaster hash), content URI, and the author's decentralized identifier (DID). Tools like Ceramic Network or Tableland can be used to create and manage this composable data layer.

Cross-chain data access is unavoidable, because social actions live on different networks. Reading is the easy part: to aggregate a user's Lens posts from Polygon and their Farcaster registrations from OP Mainnet, your backend can query each chain directly through RPC providers and indexers. Cross-chain messaging protocols like LayerZero or Axelar only become necessary if your own contracts must react on one chain to social activity on another. For identity, implementing EIP-4361 (Sign-In with Ethereum) and ERC-6551 (Token Bound Accounts) allows for unified identity, letting users interact with their aggregated feed via a single wallet, regardless of the underlying source network.
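
A minimal server-side SIWE verification sketch using the siwe library; nonce issuance and session handling are omitted.

typescript
import { SiweMessage } from 'siwe';

// Verify a Sign-In with Ethereum (EIP-4361) message on the backend and
// return the authenticated wallet that anchors the unified identity.
async function verifySiwe(rawMessage: string, signature: string): Promise<string> {
  const message = new SiweMessage(rawMessage);       // parses the EIP-4361 text
  const { data } = await message.verify({ signature }); // rejects if invalid
  return data.address;
}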

The indexing and query layer is what makes the aggregated feed fast and usable. You cannot rely on direct RPC calls for this. Instead, you must use or build a dedicated indexer that subscribes to events from the relevant smart contracts (Lens's PostCreated, Farcaster's IdRegistry) and API streams. This indexer populates a database with the normalized data model. For querying, consider using GraphQL endpoints powered by The Graph subgraphs (e.g., the existing Lens API or a custom subgraph) or an Apollo Server instance. This setup enables complex queries like "fetch all posts from accounts I follow, sorted by timestamp, across all integrated protocols."
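
A sketch of querying such an endpoint over plain HTTP; both the endpoint URL and the GraphQL schema shown here are hypothetical stand-ins for whatever your indexer exposes.

typescript
// Query the aggregator's own GraphQL endpoint (schema is illustrative).
const FEED_QUERY = `
  query Feed($viewer: String!, $limit: Int!) {
    posts(followedBy: $viewer, orderBy: TIMESTAMP_DESC, first: $limit) {
      id
      protocol
      authorId
      content
      timestamp
    }
  }
`;

async function fetchAggregatedFeed(viewer: string, limit = 50) {
  const res = await fetch('https://indexer.example/graphql', { // hypothetical endpoint
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ query: FEED_QUERY, variables: { viewer, limit } }),
  });
  const { data } = await res.json();
  return data.posts;
}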

Finally, the client application must handle wallet connectivity and state management for a seamless user experience. Integrate a library like wagmi or ethers.js to connect to Ethereum Virtual Machine (EVM) compatible chains. Use SIWE for authentication sessions. For state management, consider a library that can handle the asynchronous nature of cross-chain data; React Query or SWR are excellent for fetching, caching, and synchronizing the aggregated feed data from your GraphQL endpoint. The frontend should render posts by parsing the content URI (often pointing to IPFS or Arweave) and displaying the associated on-chain interactions (likes, recasts) in real-time.
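
For example, a small React Query hook built on the GraphQL helper sketched earlier; the option values are illustrative defaults, not recommendations.

typescript
import { useQuery } from '@tanstack/react-query';

// Client-side fetching and caching of the aggregated feed.
// `fetchAggregatedFeed` is the hypothetical helper sketched above.
function useAggregatedFeed(viewer: string | undefined) {
  return useQuery({
    queryKey: ['feed', viewer],
    queryFn: () => fetchAggregatedFeed(viewer!),
    enabled: Boolean(viewer), // wait until the wallet is connected
    staleTime: 30_000,        // treat data as fresh for 30s
    refetchInterval: 60_000,  // poll as a fallback to live updates
  });
}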

SYSTEM ARCHITECTURE OVERVIEW

Designing the System Architecture

This guide outlines the core architectural patterns for building a decentralized social feed aggregator that unifies content from protocols like Farcaster, Lens, and Bluesky.

An interoperable social feed aggregator is a middleware application that queries, filters, and presents content from multiple decentralized social graphs. Unlike a traditional aggregator, it must handle disparate data models, authentication methods, and indexing strategies. The primary goal is to create a unified user experience from fragmented sources, enabling features like a single feed, cross-protocol notifications, and composite user profiles. Key challenges include data normalization, real-time updates, and maintaining the cryptographic verifiability inherent to Web3 social data.

The architecture typically follows a modular, event-driven pattern. A core Aggregator Service acts as the orchestrator, communicating with specialized Protocol Adapters for each supported network (e.g., Farcaster's Hubs, Lens's API, AT Protocol's XRPC). Each adapter is responsible for translating protocol-specific queries and data structures into a common internal schema. This design allows new protocols to be integrated with minimal impact on the core logic. An indexing layer, which may use services like The Graph or custom indexers, is crucial for efficient historical data retrieval and complex filtering.

Data flow begins with a user query through a frontend client. The aggregator service authenticates the user (often via Sign-In with Ethereum) and parses the request. It then fans out parallel queries to the relevant protocol adapters. For example, a request for "posts from my follows" would query Farcaster for casts, Lens for publications, and Bluesky for skeets. The adapters fetch data, the aggregator normalizes it into a standard Post object—mapping fields like content, author, timestamp, and protocol—and applies business logic (ranking, filtering). The normalized posts are merged, sorted, and returned.
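
A sketch of that fan-out step; in a real system the per-protocol user identifiers would first be resolved through the identity mapping described below, and ranking would run on the merged result.

typescript
// Fan out to all protocol adapters in parallel; one failing network
// should degrade the feed, not break it.
async function fanOutFeed(
  adapters: ProtocolAdapter[],
  userId: string,
  limit: number,
): Promise<NormalizedPost[]> {
  const results = await Promise.allSettled(
    adapters.map((a) => a.fetchUserFeed(userId, limit)),
  );
  return results
    .filter((r): r is PromiseFulfilledResult<NormalizedPost[]> => r.status === 'fulfilled')
    .flatMap((r) => r.value)
    .sort((a, b) => b.timestamp - a.timestamp); // ranking logic is applied afterwards
}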

A critical component is the unified identity layer. Users have different identifiers on each protocol (Farcaster FID, Lens Profile ID, Bluesky DID). The aggregator must maintain a mapping, often in a user-controlled data store like Ceramic or a smart contract, linking these identities to a primary wallet address. This enables features like showing a user's activity across all their connected profiles. Privacy considerations are paramount; this mapping should be opt-in and user-consent driven.

For real-time capabilities, the system must subscribe to events from each protocol. This involves listening to on-chain events (e.g., Lens's PostCreated event) or websocket streams from protocol nodes (e.g., Farcaster Hub subscriptions). When a new event is detected, the relevant adapter processes it, and the aggregator updates its cache and pushes the update to subscribed clients via a WebSocket connection. This ensures feeds remain live without constant polling.
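
A minimal push sketch using the ws library; broadcast is a hypothetical function that each adapter would call when its upstream subscription emits a new post.

typescript
import WebSocket, { WebSocketServer } from 'ws';

// Push normalized events to subscribed clients. Each protocol adapter
// calls `broadcast` when its upstream source (hub stream, on-chain event,
// firehose) emits a new post.
const wss = new WebSocketServer({ port: 8080 });

function broadcast(post: NormalizedPost): void {
  const frame = JSON.stringify({ type: 'new_post', post });
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(frame);
  }
}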

Finally, the architecture must be decentralization-forward. While the aggregator service may be centrally hosted for performance, its design should allow for alternative clients or even a peer-to-peer mesh of aggregators. All data remains verifiable; posts can include signatures or on-chain proofs. The Farcaster Frames specification is an example of a portable, aggregator-friendly content type. By adhering to these principles, the aggregator enhances, rather than replaces, the underlying decentralized networks.

TUTORIAL

Creating a Unified Data Schema

A practical guide to designing a canonical data model for aggregating social activity across Web3 protocols like Farcaster, Lens, and ENS.

A unified data schema is the foundational layer for any social feed aggregator. It defines a single, canonical data model that can represent user posts, interactions, and profiles originating from disparate protocols. Without this, your application logic becomes a tangled mess of conditional checks for each source's unique data format. The goal is to create an abstraction layer that normalizes data from protocols like Farcaster (casts), Lens (publications), and ENS (text records) into a common set of fields your frontend can consistently render.

Start by identifying the core entities. For a social feed, this typically includes a Post, a Profile, and an Interaction (like, recast, comment). For each, define the essential fields that are common across most sources. A Post schema, for instance, might include: id (a unique composite key), author, content, timestamp, sourceProtocol (e.g., 'farcaster'), sourceUrl (link to original), and metadata (a flexible JSON field for protocol-specific data). Use a TypeScript interface or GraphQL schema to formally define this structure.
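
A sketch of that Post schema as a TypeScript interface; the names follow this section's text, and the exact shape is a design choice rather than a standard.

typescript
// Canonical Post record for the aggregator; fields mirror the text above.
interface Post {
  id: string;                        // unique composite key, e.g. "farcaster:<hash>"
  author: string;                    // unified author identifier
  content: string;
  timestamp: number;                 // Unix ms, so any source sorts consistently
  sourceProtocol: 'farcaster' | 'lens' | 'ens';
  sourceUrl: string;                 // link back to the original post
  metadata: Record<string, unknown>; // protocol-specific fields (see below)
  _raw?: unknown;                    // verbatim source payload for debugging
}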

The metadata field is crucial for handling protocol-specific data you don't want to lose. For a Farcaster cast, you might store the hash and parentHash here. For a Lens publication, you could store the pubId and profileId. This approach keeps your core schema clean and extensible. When designing, prioritize fields needed for ranking, filtering, and display. Can you sort posts by timestamp from any source? Can you filter by a specific author's addresses across protocols? Your schema must enable these operations.

Implementation requires a data transformation layer, often called a normalizer. This is a set of functions that map the raw API response from each protocol to your unified schema. For example, a Farcaster normalizer would extract the text from a cast, the fid and connected addresses for the author, and format the timestamp. Always include the original source data verbatim in a field like _raw for debugging and future feature support. Tools like Zod for runtime validation are invaluable here.
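
A hedged normalizer sketch for Farcaster casts using Zod, building on the Post interface above. The raw cast shape assumed here must be aligned with whatever API you actually index from, and the sourceUrl pattern is a placeholder.

typescript
import { z } from 'zod';

// Runtime validation of a raw cast before normalization; the shape below
// is an assumption for illustration.
const RawCast = z.object({
  hash: z.string(),
  text: z.string().default(''),
  timestamp: z.string(),
  author: z.object({ fid: z.number() }),
  parent_hash: z.string().nullable().optional(),
});

function normalizeCast(input: unknown): Post {
  const cast = RawCast.parse(input); // throws a descriptive error on mismatch
  return {
    id: `farcaster:${cast.hash}`,
    author: String(cast.author.fid),
    content: cast.text,
    timestamp: Date.parse(cast.timestamp),
    sourceProtocol: 'farcaster',
    sourceUrl: `https://client.example/cast/${cast.hash}`, // hypothetical client URL
    metadata: { parentHash: cast.parent_hash ?? null },
    _raw: input, // keep the verbatim source payload
  };
}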

Finally, consider the storage and indexing strategy. Will you use a traditional database, a decentralized storage network, or an indexer? The schema influences this choice. For high-performance feeds, you'll need to index fields like timestamp and author. Remember that user identity is fragmented; a single user may have a Farcaster fid, a Lens profileId, and an ENS name. Your Profile schema should link these via a list of verified addresses or proofs to create a unified cross-protocol identity for aggregation.

SOCIAL GRAPH & CONTENT LAYERS

Protocol Feature and API Comparison

Comparison of core protocols for building an interoperable social feed aggregator, focusing on data access and composability.

| Feature / Metric | Lens Protocol | Farcaster Frames | Cross-Chain Smart Posts |
| --- | --- | --- | --- |
| Primary Data Type | On-chain social graph & publications | Cast embeds & client-side actions | Portable posts with on-chain verification |
| Content Storage | Arweave / IPFS (decentralized) | Centralized servers (for frame logic) | Any L1/L2 (state stored on-chain) |
| Read API Access | GraphQL endpoint (The Graph) | Farcaster Hub & Neynar APIs | Direct contract calls & indexers |
| Write API / Client SDK | Lens Client SDK v2.1+ | Frames SDK & Signers | EIP-712 signing libraries |
| Cross-Chain Native | | | |
| Feed Algorithm Control | Fully customizable by aggregator | Limited (client determines frame display) | Deterministic by source chain state |
| Typical Latency for Reads | < 2 sec (indexed) | < 1 sec (cached) | 2-5 sec (varies by chain) |
| Developer Cost to Integrate | Gas fees for profile minting | Hosting costs for frame server | Gas fees for post publication |

TUTORIAL

Designing a Cross-Protocol Ranking Algorithm

Learn how to build a ranking system that aggregates and scores content from disparate social protocols like Farcaster, Lens, and Bluesky to create a unified, high-quality feed.

A cross-protocol social feed aggregator must ingest content from multiple, often incompatible, decentralized networks. The first challenge is data normalization. Each protocol has its own data model: Farcaster has casts and reactions, Lens has publications and mirrors, and Bluesky has posts (skeets) served through feeds. Your algorithm needs a unified schema, mapping these to core attributes like author, content, timestamp, engagement_metrics, and original_protocol. This abstraction layer is critical for applying a consistent ranking logic across all sources.

The ranking algorithm itself should be a weighted scoring function that evaluates each normalized post. Key signals include: on-chain reputation (e.g., Farcaster FID age, Lens profile NFT ownership), cross-protocol engagement (aggregating likes/recasts across platforms), temporal decay (prioritizing recent content), and social graph proximity (weighting content from followed users or their engagements higher). A simple implementation might look like:

python
score = (author_reputation * 0.3) + (engagement_score * 0.4) + (recency_decay * 0.2) + (graph_weight * 0.1)

The weights are hyperparameters that must be tuned based on desired feed quality.

To prevent spam and manipulation, incorporate sybil resistance and cost-of-attack metrics. Leverage on-chain proof-of-personhood systems like World ID, stake-weighted reputation from protocols like Lens, or transaction history. Additionally, implement negative signals for accounts exhibiting bot-like behavior—such as extremely high post frequency or identical content cross-posting. These signals should dynamically reduce an item's ranking score or filter it out entirely.
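
A small sketch of how such negative signals might damp a score; both thresholds are assumptions that need tuning against real traffic.

typescript
// Illustrative spam penalty: damp the score for high-frequency authors
// and exact-duplicate cross-posts. Thresholds are assumed values.
function applySpamPenalty(
  score: number,
  authorPostsLast24h: number,
  duplicateCount: number, // identical content seen across protocols
): number {
  let penalty = 1;
  if (authorPostsLast24h > 100) penalty *= 0.25;          // bot-like frequency
  if (duplicateCount > 1) penalty *= 1 / duplicateCount;  // downweight cross-spam
  return score * penalty;
}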

For dynamic personalization, the algorithm must process implicit feedback. Track user interactions (dwell time, likes, hides) within your aggregator to create a user-specific model. This can be a simple collaborative filtering approach or a more complex model using embeddings for content similarity. The final feed score can blend the global ranking (for discovery and quality) with the personalization score (for relevance), ensuring users see both high-signal network content and items tailored to their interests.
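
The blend itself can be a single tunable line; alpha here is an assumed hyperparameter.

typescript
// Blend the global quality score with the per-user personalization score.
function blendScore(globalScore: number, personalScore: number, alpha = 0.6): number {
  return alpha * globalScore + (1 - alpha) * personalScore;
}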

Finally, continuous evaluation is essential. Define quantitative metrics like user engagement rate, time spent, and spam reports. Use A/B testing to compare different weighting schemes or signal combinations. Because underlying protocols evolve—new features, governance changes, spam tactics—your ranking algorithm must be modular and regularly updated. The goal is a feed that feels coherent and valuable, despite sourcing from the fragmented landscape of decentralized social media.

TUTORIAL

Implementing User Preference Portability

This guide explains how to design a social feed aggregator that respects user sovereignty by porting preferences and social graphs across platforms using decentralized protocols.

A traditional social feed algorithm is a black box controlled by a central platform, locking your preferences—your likes, follows, and engagement history—within its walled garden. An interoperable aggregator flips this model. It uses decentralized identity (like Ethereum's ERC-725/ERC-735 or Ceramic's ComposeDB) to give users a portable profile. Your social graph (who you follow) and preference signals (content types you engage with) are stored in a verifiable credential or a decentralized data graph that you own. The aggregator queries this user-controlled data layer to personalize feeds from multiple sources, breaking platform dependency.

The core technical challenge is standardizing the preference schema. You need a shared data model that different social protocols (like Farcaster, Lens Protocol, or Nostr) can map onto. A practical approach is to define a JSON-LD context or use a Tableland table schema that describes key attributes: contentType (e.g., text, image, video), topicTags (e.g., #defi, #governance), interactionType (like, recast, reply), and sourceProtocol. Your aggregator's indexing service would listen for on-chain or off-chain interactions, normalize the data against this schema, and write it to the user's decentralized storage, such as an IPFS bucket referenced by their DID document.
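
As a sketch, the normalized preference record described above might look like this in TypeScript; the names are illustrative, not a shared standard.

typescript
// One possible normalized preference record, mirroring the schema
// attributes described in the text.
interface PreferenceSignal {
  contentType: 'text' | 'image' | 'video';
  topicTags: string[];              // e.g. ['#defi', '#governance']
  interactionType: 'like' | 'recast' | 'reply';
  sourceProtocol: 'farcaster' | 'lens' | 'nostr';
  observedAt: number;               // Unix ms, when the aggregator indexed it
}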

Here's a simplified code snippet showing how an aggregator backend might fetch and apply ported preferences to filter a raw feed. It assumes the user's preferences are stored in a composable database under their did:key identifier.

javascript
// Pseudocode for preference-based feed filtering.
// `ComposeDB.get` stands in for a real ComposeDB query (the actual client,
// @composedb/client, executes GraphQL); the weights are illustrative.
async function generatePersonalizedFeed(userDID, rawFeedItems) {
  // Fetch the user's portable preference graph from their data store
  const preferenceGraph = await ComposeDB.get(userDID, 'PreferenceGraph');

  // Score each feed item against the stored preferences
  const scoredItems = rawFeedItems.map(item => {
    let score = 0;
    // Match against followed profiles
    if (preferenceGraph.followedProfiles.includes(item.creatorDID)) score += 10;
    // Match against preferred topics (items may carry no tags)
    for (const tag of item.tags ?? []) {
      if (preferenceGraph.preferredTopics.includes(tag)) score += 5;
    }
    // Weight by the user's historically most-engaged content type
    if (item.type === preferenceGraph.mostEngagedContentType) score += 7;
    return { ...item, score };
  });

  // Return the top 50 items, highest score first
  return scoredItems.sort((a, b) => b.score - a.score).slice(0, 50);
}

For the aggregator frontend, you must implement a transparent preference dashboard. This interface allows users to view, edit, and export their preference graph. Crucially, it should show the provenance of each preference—which protocol and interaction generated it (e.g., "Liked a post on Lens about DAOs"). Use SIWE (Sign-In with Ethereum) for authentication, and encrypt sensitive preferences (like muted accounts) using the user's public key before storage. The dashboard should also let users set cross-platform rules, such as "always prioritize posts from @user.eth, regardless of source protocol."

The final architectural component is a decentralized recommendation engine. Instead of a central server, you can run user-triggered compute jobs (for example, a Lit Action on Lit Protocol or a Bacalhau job) that let users process their own data. This function consumes the user's private preference graph and public on-chain social data to generate a feed. Users can choose or even stake on different curation algorithms (e.g., a "community-trending" algo vs. a "close-friends-only" algo), fostering an open market for transparent feed ranking. This shifts control from the aggregator developer to the user, fully realizing preference portability.

Successful implementation requires careful consideration of privacy and spam. Storing detailed interaction data on-chain can be expensive and public. A hybrid model is often best: store lightweight attestations (like "liked 5 posts about DeFi this month") on-chain via EAS (Ethereum Attestation Service) for portability, while keeping granular data in encrypted, user-controlled storage like Lit-encrypted IPFS or GunDB. Furthermore, to prevent sybil attacks on recommendations, integrate proof-of-personhood protocols like Worldcoin or BrightID when weighting certain social signals in your algorithm.

DEVELOPER FAQ

Frequently Asked Questions

Common technical questions and troubleshooting for building a cross-chain social feed aggregator using protocols like Farcaster, Lens, and XMTP.

An interoperable social feed aggregator is a unified interface that pulls social data from multiple, isolated Web3 social protocols like Farcaster, Lens Protocol, and XMTP. It works by querying each protocol's decentralized data layer—such as Farcaster's Hubs or The Graph subgraphs for Lens—and normalizing the data into a single feed.

Core components include:

  • Indexer/API Layer: Fetches casts, posts, and messages from protocol-specific APIs.
  • Identity Resolution: Maps user addresses to protocol-specific handles (e.g., @alice.lens, @bob.farcaster).
  • Aggregation Logic: Applies ranking, filtering, and deduplication rules to merge feeds.
  • Wallet Integration: Uses libraries like WalletConnect or wagmi to authenticate users and sign actions across chains.

The aggregator itself is typically a stateless backend service or a smart contract that references off-chain data, avoiding the need to store all content on-chain.

BUILDING THE FUTURE OF SOCIAL

Conclusion and Next Steps

This guide has outlined the architectural blueprint for a decentralized, interoperable social feed. The next steps involve implementation, iteration, and community building.

Building an interoperable social feed aggregator is a multi-layered challenge that bridges protocol design, data indexing, and user experience. The core architecture rests on three pillars: a decentralized identity layer (like Sign-In with Ethereum or Lens Protocol handles), a flexible data aggregation engine that queries multiple sources (on-chain actions, Farcaster casts, Lens posts), and a composable front-end that can be embedded across the web. Success is measured not by building a walled garden, but by creating an open utility that enhances existing social graphs.

For developers, the immediate next step is to build a minimal viable aggregator (MVA). Start by integrating a single protocol, such as Farcaster's Warpcast API or the Lens API, to fetch and display a basic feed. Implement a simple ERC-4337 account abstraction wallet connection for seamless onboarding. Use a Graph Protocol subgraph or a Covalent Unified API to efficiently index and query on-chain activity like NFT mints or token transfers related to followed addresses. This focused prototype validates the data pipeline and user flow.

The subsequent phase involves cross-protocol aggregation and algorithm design. This requires normalizing data from disparate sources (e.g., a Farcaster 'cast' and a Lens 'post') into a unified schema. You must then design a sovereign algorithm—a set of transparent, user-configurable rules for ranking content. This could prioritize signals like: social proof (likes/recasts), financial weight (governance token holdings), temporal decay, and direct connection strength. Avoid opaque, centralized ranking models.

Long-term growth depends on ecosystem composability. Publish your aggregation logic as a smart contract or an open API so other dApps can use your feed as a module. Consider issuing a non-transferable reputation token (a Soulbound Token) to reward active curators or data validators within your network. The goal is to transition from a standalone application to a foundational primitive in the decentralized social stack, where the feed itself becomes a platform for further innovation.

Finally, engage with the communities you aim to serve. Deploy testnets on Optimism or Base for low-cost interaction, gather feedback, and iterate. The technical vision must be matched by genuine utility. An interoperable feed that reduces fragmentation and returns control to users isn't just another app; it's a critical piece of infrastructure for the next generation of the social web.
