Introduction to gRPC Streams

Yellowstone gRPC streams (often referred to as Geyser gRPC streams) offer a high-performance, efficient method for streaming real-time Solana blockchain data. By tapping directly into Solana leaders, our RPC nodes receive shreds as they are produced, delivering ultra-low latency data to your application.

With gRPC, you can subscribe to various data types, including:

  • Blocks
  • Slots
  • Transactions
  • Account Updates

These subscriptions are highly configurable, allowing you to precisely filter and limit the data you receive. Because the stream is bidirectional, you can create or cancel subscriptions at any time by writing a new request to the open connection, and the change takes effect immediately.

Accessing Yellowstone gRPC stream capabilities is flexible, with options tailored to different needs:

  • LaserStream: Our highly available, multi-tenant gRPC service. LaserStream is designed for robust, real-time data delivery, leveraging a distributed architecture for enhanced durability. It’s an excellent choice for most applications requiring reliable Solana data streams without managing infrastructure.

  • Dedicated Nodes: For users who require maximum control, custom configurations, guaranteed resource isolation, or have very high, specific throughput demands, a dedicated node provides an exclusive gRPC endpoint. This option offers deep customization and performance tuning capabilities.

You can provision a dedicated node via the Helius Dashboard. Learn more about Dedicated Nodes or explore the LaserStream documentation.

For more in-depth examples and implementation details, please refer to the Yellowstone gRPC source repository.

Understanding the Subscribe Request

To initiate a subscription, your request must include several key parameters. You will also specify filters to tailor the data stream to your needs.

Core Subscription Parameters

These parameters are fundamental to any subscribe request:

commitment
string
required

Specifies the commitment level for the data. Valid options are:

  • processed: The block containing the data has been processed by the node, but may still be skipped by the cluster.
  • confirmed: The block has received votes from a supermajority of the cluster.
  • finalized: The block has been finalized by the cluster and cannot be rolled back.
accounts_data_slice
array

An array of objects, each specifying an offset (uint64) and length (uint64). This allows you to request only specific byte ranges from account data, optimizing data transfer.

[
  { "offset": 0, "length": 100 },
  { "offset": 200, "length": 50 }
]
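As an illustration (the offsets and lengths below are arbitrary, not tied to any real account layout), each update then carries only the requested ranges, so the payload size per account update is simply the sum of the slice lengths:

```typescript
// Hypothetical data slices: the first 8 bytes (e.g. a u64 amount field) plus
// a 32-byte field starting at byte 64. Offsets and lengths are illustrative.
const accountsDataSlice = [
  { offset: 0, length: 8 },
  { offset: 64, length: 32 },
];

// Each account update delivers only these ranges instead of the full
// account data, so the bytes per update is the sum of the slice lengths.
const bytesPerUpdate = accountsDataSlice.reduce((sum, s) => sum + s.length, 0);
console.log(bytesPerUpdate); // 40
```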
ping
boolean

Set to true to keep the gRPC connection alive, especially if you are behind a load balancer or proxy that might close idle streams (e.g., Cloudflare). When enabled, the server will send a Pong message every 15 seconds. This avoids the need to resend filters upon reconnection.

Data-Specific Filters

After setting the core parameters, you’ll define filters for the specific types of data you wish to receive.

filter_by_commitment
boolean
default:"false"

By default, slot updates are sent for all commitment levels. Set to true to receive slot updates only for the commitment level specified in the main commitment parameter.
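A minimal sketch of a subscribe request using this flag, written as a plain object. Field names follow the camelCase form used by the TypeScript client; the filter key ("slot") is an arbitrary label you choose, and the numeric commitment value assumes the client's CommitmentLevel enum, where CONFIRMED = 1:

```typescript
// Sketch: subscribe to slot updates, restricted to the requested commitment.
const slotSubscribeRequest = {
  slots: {
    // The map key is a client-chosen label for this filter.
    slot: { filterByCommitment: true },
  },
  commitment: 1, // CommitmentLevel.CONFIRMED
  // Remaining subscription types left empty (not subscribed).
  accounts: {},
  accountsDataSlice: [],
  transactions: {},
  transactionsStatus: {},
  blocks: {},
  blocksMeta: {},
  entry: {},
};
```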

Code Examples

The following examples demonstrate how to subscribe to gRPC streams using TypeScript.

The GRPC_URL and X_TOKEN in these examples are placeholders. Replace them with your actual gRPC endpoint (LaserStream or dedicated node) and API token.

import Client, {
  CommitmentLevel,
  SubscribeRequest,
} from "@triton-one/yellowstone-grpc";

const GRPC_URL = "your-geyser-grpc-endpoint";
const X_TOKEN = "your-api-token";
const PING_INTERVAL_MS = 30_000; // 30 seconds

async function main() {
  // 1. Open Connection
  const client = new Client(GRPC_URL, X_TOKEN, {
    "grpc.max_receive_message_length": 64 * 1024 * 1024, // 64MiB
  });

  // 2. Subscribe to Events
  const stream = await client.subscribe();

  // 3. Handle Stream Closure and Errors
  const streamClosed = new Promise<void>((resolve, reject) => {
    stream.on("error", (error) => {
      console.error("Stream error:", error);
      reject(error);
      stream.end(); // Ensure stream is closed on error
    });
    stream.on("end", () => {
      console.log("Stream ended.");
      resolve();
    });
    stream.on("close", () => {
      console.log("Stream closed.");
      resolve();
    });
  });

  // 4. Handle Incoming Data
  stream.on("data", (data) => {
    const ts = new Date().toUTCString();
    if (data.slot) { // Check if it's a slot update
      console.log(
        `${ts}: Received slot update: ${data.slot.slot}, Commitment: ${data.slot.status}`
      );
    } else if (data.pong) {
      console.log(`${ts}: Received pong (ping response id: ${data.pong.id})`);
    } else {
      // console.log(`${ts}: Received other data:`, data); // For debugging other message types
    }
  });

  // 5. Define Slot Subscription Request
  const slotRequest: SubscribeRequest = {
    slots: {
      // A named filter entry is required to receive slot updates; an empty
      // map subscribes to nothing. The key ("slot") is an arbitrary label.
      // filterByCommitment: true restricts updates to the commitment level
      // requested below; omit it to receive updates at all levels.
      slot: { filterByCommitment: true },
    },
    commitment: CommitmentLevel.CONFIRMED, // Requesting CONFIRMED slots

    // Other subscription types (set to empty objects if not used)
    accounts: {},
    accountsDataSlice: [],
    transactions: {},
    transactionsStatus: {},
    blocks: {},
    blocksMeta: {},
    entry: {},
  };

  // 6. Send Subscribe Request
  try {
    await new Promise<void>((resolve, reject) => {
      stream.write(slotRequest, (err) => {
        if (err) {
          console.error("Failed to send slot subscription request:", err);
          reject(err);
        } else {
          console.log("Slot subscription request sent successfully.");
          resolve();
        }
      });
    });
  } catch (error) {
    console.error("Error in sending slot subscription:", error);
    client.close(); // Close client if initial subscription fails
    return;
  }


  // 7. Implement Ping to Keep Connection Alive
  const pingRequest: SubscribeRequest = {
    ping: { id: Math.floor(Math.random() * 1000000) }, // Use a unique ID for pings
    // All other filter fields must be present but can be empty
    accounts: {},
    accountsDataSlice: [],
    transactions: {},
    transactionsStatus: {},
    blocks: {},
    blocksMeta: {},
    entry: {},
    slots: {},
  };

  const pingInterval = setInterval(async () => {
    try {
      await new Promise<void>((resolve, reject) => {
        console.log(`${new Date().toUTCString()}: Sending ping (id: ${pingRequest.ping?.id})`);
        stream.write(pingRequest, (err) => {
          if (err) {
            console.error("Failed to send ping:", err);
            reject(err);
          } else {
            resolve();
          }
        });
      });
       // Update ping ID for next ping
      pingRequest.ping = { id: Math.floor(Math.random() * 1000000) };
    } catch (error) {
      console.error("Error sending ping:", error);
      // Consider logic to handle persistent ping failures (e.g., close stream, reconnect)
    }
  }, PING_INTERVAL_MS);

  // 8. Wait for Stream to Close
  try {
    await streamClosed;
  } catch (error) {
    console.error("Stream closed due to an error:", error);
  } finally {
    clearInterval(pingInterval); // Stop sending pings
    client.close(); // Close the gRPC client
    console.log("Client closed, ping interval cleared.");
  }
}

main().catch(console.error);

Additional Resources & Best Practices

  • Language Examples: Official examples for other languages are available in the Yellowstone gRPC repository.

  • Connection Persistence: gRPC connections, especially when routed through load balancers or proxies, may be terminated after a period of inactivity (often around 10 minutes).

    Always implement a ping mechanism in your client application (as shown in the examples) to send periodic messages to the gRPC server. This keeps the connection active and prevents unexpected termination.

  • Error Handling and Reconnection: Robust applications should implement comprehensive error handling and reconnection logic. If a stream errors or closes unexpectedly, your application should attempt to re-establish the connection and re-subscribe to the necessary data feeds. Consider implementing exponential backoff for reconnection attempts.
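As a sketch (the helper names here are illustrative; re-creating the client, opening a new stream, and resending your SubscribeRequest after a successful connect is application-specific), exponential backoff can be as simple as doubling a base delay and capping it:

```typescript
// Delay doubles on each attempt and is capped at maxDelayMs.
function backoffDelayMs(attempt: number, baseMs = 500, maxDelayMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, maxDelayMs);
}

// Illustrative reconnect loop: `connect` should re-create the gRPC client,
// open a new stream, and resend the original SubscribeRequest.
async function reconnectWithBackoff(
  connect: () => Promise<void>,
  maxAttempts = 5
): Promise<void> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      await connect();
      return; // Connected; subscriptions have been re-established.
    } catch {
      const delay = backoffDelayMs(attempt);
      console.error(`Reconnect attempt ${attempt + 1} failed; retrying in ${delay} ms`);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw new Error("Exhausted reconnection attempts");
}
```

Adding random jitter to each delay further reduces the chance of many clients reconnecting in lockstep after an outage.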

  • Resource Management: Be mindful of the volume of data you are subscribing to. Use filters effectively to request only the data your application requires. Unnecessarily broad subscriptions can lead to high bandwidth usage and processing overhead on both the client and server.
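For example (the filter label and address below are placeholders), a transactions filter can be narrowed to a single account of interest while excluding vote and failed transactions:

```typescript
// Sketch of a narrow transactions filter. "myAccountTxs" is an arbitrary
// label; the address is a placeholder, not a real account.
const transactionsFilter = {
  transactions: {
    myAccountTxs: {
      vote: false,   // exclude vote transactions
      failed: false, // exclude failed transactions
      accountInclude: ["ReplaceWithYourAccountAddress"],
      accountExclude: [],
      accountRequired: [],
    },
  },
};
```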