DAS API methods return at most 1,000 records per call. To retrieve more than 1,000 items, you must paginate the records by making multiple API calls and crawling through multiple “pages” of data. We support two mechanisms: page-based and keyset pagination.
We recommend page-based pagination for beginners. It is the simplest and best way to get started. Keyset pagination is recommended for advanced users who need to query across large (500k+) datasets efficiently.
With page-based pagination, the user specifies the page number and the number of items they want per page. To iterate to the next page, increment the page number. This is easy, intuitive, and fast for most use cases.
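As a minimal sketch, the loop below pages through a collection by incrementing the `page` parameter until an empty page comes back. It assumes an `<api_key>` placeholder and reuses the Tensorian collection address from the keyset examples later in this guide.

```typescript
const url = `https://mainnet.helius-rpc.com/?api-key=<api_key>`;

const pagedExample = async () => {
  let page = 1; // pages are 1-indexed
  const items: any[] = [];
  while (true) {
    const response = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        jsonrpc: '2.0',
        id: 'my-id',
        method: 'searchAssets',
        params: {
          grouping: ['collection', '5PA96eCFHJSFPY9SWFeRJUHrpoNF5XZL6RrE1JADXhxf'],
          limit: 1000,
          page: page, // increment to move to the next page
        },
      }),
    });
    const { result } = await response.json();
    if (result.items.length === 0) break; // an empty page means we're done
    items.push(...result.items);
    page += 1;
  }
  console.log(`Got ${items.length} total items`);
};
pagedExample();
```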
Using pages requires the database to crawl across all items until it reaches the requested page. For example, if you ask for page 100 with a page size of 1,000, the database must traverse the first 99,000 records before returning your data.
For this reason, page-based pagination is not recommended for large datasets. Keyset pagination is far better suited for these types of workloads.
With keyset pagination, the user defines pages by providing conditions that filter the dataset. For example, you can say, “Get me all assets with an ID > X but an ID < Y”. The user can traverse the entire dataset by modifying X or Y on each call. We provide two methods of keyset pagination:
Cursor-based – Easier to use but less flexible (a sketch follows below).
Range-based – More complex but very flexible.
Keyset pagination is only supported when sorting by id.
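As a minimal sketch of the cursor-based method, the loop below omits `cursor` on the first request and then passes back the `cursor` value that, by assumption, each `searchAssets` response returns for the next page.

```typescript
const url = `https://mainnet.helius-rpc.com/?api-key=<api_key>`;

const cursorExample = async () => {
  let cursor: string | undefined = undefined;
  const items: any[] = [];
  while (true) {
    const response = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        jsonrpc: '2.0',
        id: 'my-id',
        method: 'searchAssets',
        params: {
          grouping: ['collection', '5PA96eCFHJSFPY9SWFeRJUHrpoNF5XZL6RrE1JADXhxf'],
          limit: 1000,
          // Omit cursor on the first call; afterwards, pass back the
          // cursor from the previous response. Cursor-based pagination
          // sorts by id, per the note above.
          ...(cursor ? { cursor } : {}),
        },
      }),
    });
    const { result } = await response.json();
    items.push(...result.items);
    if (!result.cursor) break; // no cursor in the response means no pages remain
    cursor = result.cursor;
  }
  console.log(`Got ${items.length} total items`);
};
cursorExample();
```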
With range-based pagination, you can specify before and/or after to query across a range. The query is essentially “get me all assets after X but before Y”. You can traverse the dataset by updating each call’s before or after parameter.
```javascript
const url = `https://mainnet.helius-rpc.com/?api-key=<api_key>`;

const example = async () => {
  // Two NFTs from the Tensorian collection.
  // The "start" item has a lower asset ID (in binary) than the "end" item.
  // We will traverse in ascending order.
  let start = '6CeKtAYX5USSvPCQicwFsvN4jQSHNxQuFrX2bimWrNey';
  let end = 'CzTP4fUbdfgKzwE6T94hsYV7NWf1SzuCCsmJ6RP1xsDw';
  let sortDirection = 'asc';
  let after = start;
  let before = end;
  let items = [];

  while (true) {
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        jsonrpc: '2.0',
        id: 'my-id',
        method: 'searchAssets',
        params: {
          grouping: ['collection', '5PA96eCFHJSFPY9SWFeRJUHrpoNF5XZL6RrE1JADXhxf'],
          limit: 1000,
          after: after,
          before: before,
          sortBy: { sortBy: 'id', sortDirection: sortDirection },
        },
      }),
    });
    const { result } = await response.json();

    if (result.items.length === 0) {
      console.log('No items remaining');
      break;
    } else {
      console.log(`Processing results with (after: ${after}, before: ${before})`);
      after = result.items[result.items.length - 1].id;
      items.push(...result.items);
    }
  }
  console.log(`Got ${items.length} total items`);
};
example();
```
Advanced users needing to query large datasets (e.g., entire compressed NFT collections) must use keyset-based pagination for performance reasons. The following example shows how to query in parallel by partitioning the Solana address range and leveraging the before/after parameters. This method is fast, efficient, and safe. If you have any questions or need help, don’t hesitate to reach out on Discord!
In the example below, we scan the entire Tensorian collection (~10k records). It partitions the Solana address space into 8 ranges and scans them concurrently. This approach is far faster than paginating through the collection sequentially.
```typescript
import bs58 from 'bs58';

const url = `https://mainnet.helius-rpc.com/?api-key=<api_key>`;

const main = async () => {
  let numPartitions = 8;
  let partitions = partitionAddressRange(numPartitions);
  let promises = [];

  for (const [i, partition] of partitions.entries()) {
    let [s, e] = partition;
    let start = bs58.encode(s);
    let end = bs58.encode(e);
    console.log(`Partition: ${i}, Start: ${start}, End: ${end}`);

    let promise: Promise<number> = new Promise(async (resolve, reject) => {
      let current = start;
      let totalForPartition = 0;
      while (true) {
        const response = await fetch(url, {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({
            jsonrpc: '2.0',
            id: 'my-id',
            method: 'searchAssets',
            params: {
              grouping: ['collection', '5PA96eCFHJSFPY9SWFeRJUHrpoNF5XZL6RrE1JADXhxf'],
              limit: 1000,
              after: current,
              before: end,
              sortBy: { sortBy: 'id', sortDirection: 'asc' },
            },
          }),
        });
        const { result } = await response.json();
        totalForPartition += result.items.length;
        console.log(`Found ${totalForPartition} total items in partition ${i}`);

        if (result.items.length === 0) {
          break;
        } else {
          current = result.items[result.items.length - 1].id;
        }
      }
      resolve(totalForPartition);
    });
    promises.push(promise);
  }

  let results = await Promise.all(promises);
  let total = results.reduce((a, b) => a + b, 0);
  console.log(`Got ${total} total items`);
};

// Function to convert a BigInt to a 32-byte array
function bigIntToByteArray(bigInt: bigint): Uint8Array {
  const bytes = [];
  let remainder = bigInt;
  while (remainder > 0n) {
    // use 0n for bigint literal
    bytes.unshift(Number(remainder & 0xffn));
    remainder >>= 8n;
  }
  while (bytes.length < 32) bytes.unshift(0); // pad with zeros to get 32 bytes
  return new Uint8Array(bytes);
}

function partitionAddressRange(numPartitions: number) {
  let N = BigInt(numPartitions);

  // Largest and smallest Solana addresses in integer form.
  // Solana addresses are 32-byte arrays.
  const start = 0n;
  const end = 2n ** 256n - 1n;

  // Calculate the partition size
  const range = end - start;
  const partitionSize = range / N;

  // Calculate partition ranges
  const partitions: Uint8Array[][] = [];
  for (let i = 0n; i < N; i++) {
    const s = start + i * partitionSize;
    const e = i === N - 1n ? end : s + partitionSize;
    partitions.push([bigIntToByteArray(s), bigIntToByteArray(e)]);
  }
  return partitions;
}

main();
```
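This partitioning works because asset IDs are 32-byte values and keyset pagination sorts by id in binary order: each base58-encoded boundary cleanly bounds one slice of the ID space, so every worker can page through its own before/after window independently, and the per-partition counts can simply be summed at the end.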