
A conversation with Dmitry Zhelezov, Co-Founder of SQD

Illustration: Andrés Tapia; Photo: Courtesy of SQD

How does SQD ensure its data indexing platform remains decentralised without sacrificing scalability and security?

Our approach to decentralisation involves several key strategies. Firstly, we ensure efficient storage by compressing data and distributing it across the network, making data from multiple chains readily accessible. This setup supports scalability and aligns with web3 values by being permissionless and decentralised. Our network architecture is designed to scale linearly: each new worker node adds capacity and bandwidth.
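
As a rough illustration of that linear scaling, the TypeScript sketch below spreads compressed data chunks across worker nodes and sums the bandwidth they contribute. The round-robin assignment, field names, and figures are assumptions made purely for illustration, not SQD's actual scheduling logic.

```typescript
// Illustrative only: how linear scaling can emerge when compressed data
// chunks are spread across worker nodes. Names and numbers are hypothetical.
interface Worker {
  id: string;
  chunks: string[];          // compressed chunk IDs this worker serves
  bandwidthMbps: number;     // bandwidth this worker contributes
}

// Assign chunks round-robin so each new worker takes a share of the data.
function assignChunks(chunkIds: string[], workers: Worker[]): void {
  workers.forEach((w) => (w.chunks = []));
  chunkIds.forEach((chunk, i) => {
    workers[i % workers.length].chunks.push(chunk);
  });
}

// Aggregate query bandwidth grows with every worker that joins.
function totalBandwidth(workers: Worker[]): number {
  return workers.reduce((sum, w) => sum + w.bandwidthMbps, 0);
}

const workers: Worker[] = [
  { id: "w1", chunks: [], bandwidthMbps: 500 },
  { id: "w2", chunks: [], bandwidthMbps: 500 },
];
assignChunks(["eth-0000", "eth-0001", "base-0000", "base-0001"], workers);
console.log(totalBandwidth(workers)); // 1000; doubles again if two more workers join
```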

Nodes are rewarded in SQD tokens, and we’ve structured the rewards system to encourage decentralisation by requiring delegators to allocate their funds among top-performing nodes. We’ve also ensured that running a node is accessible to a broad range of participants, even during market downturns, which is crucial for maintaining a decentralised network.
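
A simplified sketch of how performance-weighted delegation rewards might look follows below; the formula, scores, and field names are illustrative assumptions, not SQD's actual tokenomics.

```typescript
// Hypothetical sketch of delegation-weighted rewards for an epoch.
interface NodeStake {
  id: string;
  performance: number;   // e.g. uptime / query-success score in [0, 1]
  delegatedSQD: number;  // SQD delegated to this node
}

// Split an epoch's reward pool in proportion to performance-weighted stake,
// nudging delegators toward well-performing nodes.
function epochRewards(nodes: NodeStake[], rewardPool: number): Map<string, number> {
  const weights = nodes.map((n) => n.performance * n.delegatedSQD);
  const total = weights.reduce((a, b) => a + b, 0);
  const rewards = new Map<string, number>();
  nodes.forEach((n, i) => rewards.set(n.id, (weights[i] / total) * rewardPool));
  return rewards;
}

const rewards = epochRewards(
  [
    { id: "node-a", performance: 0.99, delegatedSQD: 100_000 },
    { id: "node-b", performance: 0.90, delegatedSQD: 250_000 },
  ],
  10_000,
);
console.log(rewards); // node-b earns more overall, but at a lower rate per delegated SQD
```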

Finally, we’ve modularised and optimised our tech stack to address scalability and security concerns without compromising performance, leveraging the latest innovations in data management.

How does SQD balance high-performance data indexing with the scalability requirements of web3 applications?

The primary challenge for web3 applications is accessing data efficiently, as traditional RPC nodes, which are optimised for writing to the chain, do not scale well for the frequent read operations needed by decentralised applications (dApps). This challenge is magnified for multichain dApps that require data from various chains.

SQD addresses this by offering access to raw data at scale without relying on RPC providers or centralised APIs. Our network’s bandwidth increases with each additional node, allowing for more efficient block synchronisation and faster data delivery compared to traditional methods. By placing all raw data in a decentralised data lake and providing tools for client-side processing, SQD offers developers a familiar, scalable, and cost-effective alternative to traditional RPCs.
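
To make the contrast with RPC-based access concrete, here is a hypothetical TypeScript sketch of client-side processing against a locally running gateway. The endpoint, request shape, and field names are placeholders rather than SQD's actual API; in practice developers would typically work through SQD's SDK tooling instead of raw HTTP.

```typescript
// Hypothetical sketch: stream raw blocks from a local gateway and
// filter them on the client. Endpoint and payload shapes are assumed.
interface RawLog { address: string; topics: string[]; data: string }
interface RawBlock { number: number; logs: RawLog[] }

const GATEWAY = "http://localhost:8000/query"; // local gateway, not a fixed provider URL
const TRANSFER_TOPIC =
  "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"; // ERC-20 Transfer

async function* streamBlocks(from: number, to: number): AsyncGenerator<RawBlock> {
  for (let start = from; start <= to; start += 1000) {
    const res = await fetch(GATEWAY, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ fromBlock: start, toBlock: Math.min(start + 999, to) }),
    });
    const blocks: RawBlock[] = await res.json();
    yield* blocks;
  }
}

// Client-side processing: count Transfer events without touching an RPC node.
async function countTransfers(from: number, to: number): Promise<number> {
  let count = 0;
  for await (const block of streamBlocks(from, to)) {
    count += block.logs.filter((l) => l.topics[0] === TRANSFER_TOPIC).length;
  }
  return count;
}
```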

What is a decentralised data lake, and what is it used for?

A decentralised data lake has two key components: decentralisation and the data lake itself. Decentralisation ensures there is no single point of failure, with data flowing through peer-to-peer connections. Data consumers access information through a local gateway rather than a specific URL.

The data lake serves as a repository where various types of data are stored, offering a unified interface for extraction and querying. It functions as a mega-database, optimised for high-throughput selects over vast amounts of data, while abstracting the complexity of managing large datasets. Developers can use familiar tools, with the data lake handling the heavy lifting.
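
For intuition, the sketch below treats locally exported chunk files as that mega-database and runs a familiar SQL select over them with DuckDB. The Parquet layout, paths, and column names are assumptions made purely for illustration.

```typescript
// Illustrative only: high-throughput SELECTs over many chunk files at once,
// with the query engine handling the heavy lifting of large-scale scans.
import duckdb from "duckdb";

const db = new duckdb.Database(":memory:");

db.all(
  `SELECT address, COUNT(*) AS transfer_count
   FROM read_parquet('chunks/ethereum/*/logs.parquet')
   WHERE topic0 = '0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef'
   GROUP BY address
   ORDER BY transfer_count DESC
   LIMIT 10`,
  (err, rows) => {
    if (err) throw err;
    console.table(rows);
  },
);
```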

How does SQD’s vision for ‘SQLite’ succeed where other data indexer competitors fail?

SQLite, a well-established database management tool in web2, is at the core of our vision for SQD’s Light Clients. Although SQLite is widely used on mobile devices for managing data locally, it has yet to be applied in web3.

SQD Light Clients aim to bring the benefits of SQLite to web3, offering a verifiable, decentralised, and permissionless data management experience. Users will be able to run an indexer against a local clone of the onchain database on their own devices. This allows them to enrich it with private data, use secure enclaves, and process everything locally without latency, making the experience smooth and efficient.
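
A minimal sketch of that idea using the better-sqlite3 library: a local SQLite clone of indexed onchain data joined with a private, device-only table. The schema and table names are hypothetical, not SQD's actual data model.

```typescript
// Hypothetical Light Client sketch: onchain data mirrored into local SQLite,
// enriched with private data that never leaves the device.
import Database from "better-sqlite3";

const db = new Database("local-indexer.db");

db.exec(`
  CREATE TABLE IF NOT EXISTS transfers (
    tx_hash TEXT, sender TEXT, recipient TEXT, amount TEXT, block_number INTEGER
  );
  CREATE TABLE IF NOT EXISTS private_labels (
    address TEXT PRIMARY KEY, label TEXT
  );
`);

// Join the onchain mirror with private labels entirely on-device,
// with no round-trip latency to a remote indexer.
const rows = db
  .prepare(`
    SELECT t.tx_hash, t.amount, p.label
    FROM transfers t
    JOIN private_labels p ON p.address = t.recipient
    ORDER BY t.block_number DESC
    LIMIT 20
  `)
  .all();

console.log(rows);
```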

How does SQD’s Gateway 2.0 plan to replace RPCs for data access with a more permissionless and trustless method, enhancing web3 scalability?

Gateway 2.0, now branded as SQD Portal, is designed to build a comprehensive ecosystem of data tools on top of the SQD Network’s low-level interface. The Portal is lightweight software that developers can install locally to access data more ergonomically than through direct SQD Network queries. It intelligently manages concurrent requests and caching, ensuring a seamless data stream while masking the complexities of node connectivity within the SQD Network. We plan to expand the Portal’s ecosystem with plugins and tools, making it a versatile solution for streaming, decoding, and aggregating data, thereby unlocking the full potential of the SQD Network.
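
The sketch below illustrates two of the behaviours described above, response caching and coalescing of concurrent requests, in a few lines of TypeScript. It is an illustrative sketch, not the Portal's actual implementation, and all names are hypothetical.

```typescript
// Illustrative sketch: cache responses and coalesce concurrent requests
// for the same data so only one upstream query reaches the worker nodes.
const cache = new Map<string, unknown>();
const inFlight = new Map<string, Promise<unknown>>();

async function cachedQuery<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  if (cache.has(key)) return cache.get(key) as T;               // serve from cache
  if (inFlight.has(key)) return inFlight.get(key) as Promise<T>; // coalesce duplicates

  const p = fetcher()
    .then((result) => {
      cache.set(key, result);
      return result;
    })
    .finally(() => inFlight.delete(key));

  inFlight.set(key, p);
  return p;
}

// Usage: many concurrent callers asking for the same block range trigger
// a single upstream query, e.g.
// await cachedQuery("eth:18000000-18000999", () => fetchFromWorkers(/* ... */));
```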