
Agave v2.3 Update: All You Need to Know


Many thanks to 0xIchigo, Kirill Lykov, and Greg Cusack for reviewing earlier versions of this work.

Introduction

The v2.3 release of the Agave validator client marks another significant advancement for Solana. As with past updates, this new version introduces crucial improvements designed to enhance both network performance and the developer experience.

Notable Agave 2.3 Updates 

  • New TPU Client tpu-client-next
  • Optimizations to AccountsDB with reductions in disk I/O usage
  • Leader schedule now keyed by validator vote accounts
  • Slashable event verification
  • Greedy scheduler is enabled by default
  • Snapshot enhancements
  • Upgrades to gossip
  • Faster epoch transitions

Each section of this article is self-contained, allowing readers to easily skip to the topics most relevant to them. Whether you're a validator operator, developer, or engaged user, this comprehensive guide to Agave 2.3 should provide you with the key insights needed to fully leverage the latest improvements.

Anza is accelerating the pace of Agave releases. Less than three months after 2.2, version 2.3 is already live, with 13% of total stake currently running versions of the new client. Adoption is expected to grow rapidly in the coming weeks. In the meantime, mainnet feature gate activations have been temporarily paused during the rollout and will resume soon as part of the planned activation sequence.

New TPU Client

Agave 2.3 introduces a new implementation of the Transaction Processing Unit (TPU) client, replacing the previous ConnectionCache. This TPU client is responsible for sending serialized transactions to validators over the network using the QUIC protocol. The new design, known as tpu-client-next, is a complete rework aimed at significantly improving performance, reducing resource usage, and simplifying the overall architecture.

The TPU client is used in two main scenarios: in the ForwardingStage, where validators forward transactions to the upcoming leader, and in the SendTransactionService, which is used by RPCs to relay transactions to the leader.

The previous implementation, ConnectionCache, was built to support both UDP and QUIC protocols, which made it unnecessarily complex. ConnectionCache relied on an internal asynchronous queue to store transactions, rather than an explicit channel, and included cache warm-up logic that sent empty packets. It also had persistent issues related to endpoint management in Quinn (a Rust implementation of QUIC).

The new tpu-client-next resolves these issues with a streamlined, asynchronous design. Internally, it follows an agent-based model in which individual worker tasks handle each QUIC connection. These workers communicate with a centralized ConnectionWorkersScheduler using asynchronous channels. 

When the scheduler receives a transaction batch, it broadcasts the batch to the appropriate set of workers according to the configured strategy. The architecture eliminates multistreaming entirely, thereby reducing traffic fragmentation. Connections are also pre-established, which eliminates the latency associated with opening new streams at send time.
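The worker/scheduler split described above can be illustrated with a small sketch using OS threads and channels. All names and types here are hypothetical simplifications: the real client uses asynchronous Tokio tasks and pre-established QUIC connections, not threads sending to a counter.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for a batch of serialized transactions.
type TxBatch = Vec<u8>;

// Each worker owns one (simulated) connection and drains its channel.
fn spawn_worker(id: usize, rx: mpsc::Receiver<TxBatch>) -> thread::JoinHandle<usize> {
    thread::spawn(move || {
        let mut bytes_sent = 0;
        for batch in rx {
            // In the real client, this is where the batch would be written
            // onto the worker's pre-established QUIC connection.
            bytes_sent += batch.len();
            let _ = id;
        }
        bytes_sent
    })
}

fn main() {
    // Scheduler side: one channel per connection worker.
    let mut senders = Vec::new();
    let mut handles = Vec::new();
    for id in 0..3 {
        let (tx, rx) = mpsc::channel::<TxBatch>();
        senders.push(tx);
        handles.push(spawn_worker(id, rx));
    }

    // Broadcast one batch to every worker (one possible strategy).
    let batch: TxBatch = vec![0u8; 128];
    for tx in &senders {
        tx.send(batch.clone()).unwrap();
    }
    drop(senders); // close the channels so workers exit cleanly

    let total: usize = handles.into_iter().map(|h| h.join().unwrap()).sum();
    println!("bytes handed to workers: {total}");
}
```

The key design point this mirrors is that the scheduler never touches a connection directly; it only pushes work into channels, which keeps the send path free of connection-management stalls.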

The tpu-client-next demonstrates clear performance gains. 

  • In testnet RPC experiments, both the old and new clients achieved similar mean TPS under load; however, tpu-client-next displayed noticeably lower jitter. 
  • In validator use cases, ForwardingStage achieved a 10% increase in forwarded transaction volume, accompanied by more stable traffic patterns and a 30% reduction in CPU usage.
  • Anza’s closed-source stress testing tool, transaction-bench, is built using the latest client. It generates twice as many transactions per second as the previous bench testing tool, bench-tps.

tpu-client-next is fully backward-compatible and is now the default implementation for Agave nodes. If issues arise, operators can revert to the old implementation by launching their node with the `--use-connection-cache` flag.

In summary, tpu-client-next delivers:

  • An async-friendly, agent-based architecture
  • Consistent traffic with reduced jitter
  • Reduced CPU and memory usage
  • Configurable scheduling and queuing policies
  • Simplified integration and cleaner API surface

Leader Schedule Keyed by Vote Accounts

As part of the Agave 2.3 release cycle, Solana will activate SIMD-0180: Use Vote Account Address To Key Leader Schedule. This change alters how the network determines which validators are scheduled to produce blocks, shifting from using validator identity addresses to using vote account addresses as the primary key in the leader schedule.

This migration addresses a longstanding ambiguity in the Solana protocol: the inability to reliably associate a block-producing validator with a specific stake. Under the current design, the leader schedule is keyed by validator identity addresses. However, multiple vote accounts may delegate to the same validator identity, making it difficult to trace a particular slot's block production back to a specific set of delegated stake.

By instead keying the leader schedule to vote account addresses, this change establishes a clear and direct link between delegated stake and the validator's leadership role. This seemingly minor modification unlocks several important capabilities.
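To make the keying change concrete, here is a toy schedule builder in which each slot maps to a vote-account address. The addresses, stake amounts, and pseudo-random pick are all illustrative; the real protocol draws stake-weighted samples from an epoch-seeded PRNG.

```rust
use std::collections::BTreeMap;

// Illustrative sketch: a leader schedule keyed by vote-account address,
// so each slot traces directly back to a specific pool of delegated stake.
fn leader_schedule(stakes: &BTreeMap<&'static str, u64>, slots: u64) -> Vec<&'static str> {
    let total: u64 = stakes.values().sum();
    let mut schedule = Vec::new();
    for slot in 0..slots {
        // Toy deterministic "randomness"; the real schedule uses a
        // proper PRNG seeded per epoch.
        let mut point = (slot * 7919) % total;
        for (vote_account, stake) in stakes {
            if point < *stake {
                schedule.push(*vote_account);
                break;
            }
            point -= stake;
        }
    }
    schedule
}

fn main() {
    let stakes = BTreeMap::from([("vote_acct_A", 700u64), ("vote_acct_B", 300)]);
    // Every slot now maps to a vote account, so block rewards for that
    // slot can be attributed to that account's delegators.
    println!("{:?}", leader_schedule(&stakes, 8));
}
```

Under the old identity-keyed scheme, two vote accounts delegating to the same identity would collapse into one schedule entry; keying by vote account keeps them distinct.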

First, it provides the necessary foundation for block reward distribution, as specified in SIMD-0123, which passed a formal governance vote in March. This enables validators to set a commission rate on block fees and distribute the remaining revenue proportionally among delegators. This system mirrors the reward-sharing model already used for inflationary staking rewards, bringing greater alignment of economic incentives between validators and their stakers.

Second, using vote account addresses to key the leader schedule is essential for enabling programmatic slashing, which penalizes validators who violate network rules by submitting duplicate blocks or casting votes on multiple forks. This leads us nicely to the next update.

Slashable Event Verification

Slashing is a mechanism for penalizing malicious validators by verifying misconduct on-chain and burning a portion of their delegated stake. It serves as a key deterrent against behavior that threatens the network’s security or stability.

There are two primary slashing models:

Social Slashing (Current System on Solana)

Solana currently relies on a manual, community-driven consensus approach, referred to as social slashing. Under this model, if a validator behaves maliciously, for example by compromising network liveness or safety, honest participants can coordinate off-chain to initiate a hard fork, restarting the network and slashing the offender’s stake. While this method enables flexible, case-by-case judgment, it carries significant coordination overhead and is inherently reactive.

Programmatic Slashing (In-Protocol Slashing)

In contrast, programmatic slashing is enforced entirely on-chain. If a validator violates protocol rules, a cryptographic proof of the infraction can be submitted to a dedicated program, which then automatically triggers slashing. This model reduces reliance on human coordination and enables enforcement for minor infractions without disrupting network operations, paving the way for scalable, decentralized accountability.

Slashing involves two key steps:

  • Fault detection and attribution: identifying the misbehavior and the validator responsible.
  • Penalty enforcement: holding the offender accountable by slashing a portion of their stake.

As part of the Agave 2.3 release cycle, Solana is set to activate a feature gate for the Slashing Program as outlined in SIMD: SIMD-0204: Slashable Event Verification. This marks the first step toward enabling programmatic slashing on Solana, with a focus on fault detection and attribution. It introduces an on-chain program that allows anyone to report and log slashable behavior, laying the groundwork for future automated enforcement.

This program does not alter stakes or rewards; it solely verifies and logs infractions, serving as an on-chain record of validator misbehavior. An early prototype of the program has been deployed on Testnet, where a sample DuplicateBlockProof transaction can already be observed.

The program will initially focus on detecting and logging duplicate block production, with future support planned for additional violations, such as double voting. Crucially, programmatic slashing is limited to clearly provable misbehavior. As a result, enforcement of more subjective or systemic issues, such as deliberate slow block production or MEV extraction, is significantly more challenging and is unlikely to be addressed by the slashing program.

A submitted proof consists of two conflicting shreds for the same slot, both signed by the same validator. The slashing program verifies the proof by ensuring the shreds form a valid duplicate block proof, confirming they belong to the same slot and are correctly signed by the offending validator. This logic mirrors the approach used in Solana's gossip protocol for handling duplicate block proofs in the fork choice process.
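The verification rule can be sketched as follows. The struct fields and the boolean signature stand-in are hypothetical simplifications: the real program parses signed shred payloads and performs actual ed25519 signature verification.

```rust
// Hypothetical, simplified shred representation.
#[derive(Clone)]
struct Shred {
    slot: u64,
    payload_hash: [u8; 32], // stands in for the shred's conflicting contents
    signer: [u8; 32],       // validator identity that signed the shred
    signature_valid: bool,  // stands in for real ed25519 verification
}

// A duplicate block proof holds only if both shreds are for the same slot,
// both carry valid signatures from the same validator, and their contents
// actually conflict.
fn verify_duplicate_proof(a: &Shred, b: &Shred) -> bool {
    a.slot == b.slot
        && a.signer == b.signer
        && a.signature_valid
        && b.signature_valid
        && a.payload_hash != b.payload_hash
}

fn main() {
    let base = Shred { slot: 42, payload_hash: [1; 32], signer: [7; 32], signature_valid: true };
    let conflict = Shred { payload_hash: [2; 32], ..base.clone() };
    assert!(verify_duplicate_proof(&base, &conflict));
    // Two identical shreds are not evidence of duplication.
    assert!(!verify_duplicate_proof(&base, &base.clone()));
    println!("duplicate proof verified");
}
```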

Once a proof is successfully verified, the results are stored in a Program Derived Address (PDA) for future reference. This makes it easy to build dashboards that surface slashing-related data by simply running getProgramAccounts on the slashing program. Validators can use this to check if they've been reported for violations and take corrective action as needed.

A future SIMD will address the economic enforcement of slashing, including parameters such as the stake penalty for various offenses. Since these decisions impact the economics of running a Solana validator, any proposed changes will require approval through a full governance vote.

Faster Epoch Transitions

Agave 2.3 introduces a major improvement to the speed of epoch transitions. Epoch reward calculations are now completed in under 500 milliseconds. This results in fewer skipped slots and more reliable transaction landing around the epoch boundary.

Additionally, if the first leader slot of a new epoch is skipped, Agave 2.3 ensures that reward calculations are not rerun. Instead, the previously computed results are reused, eliminating redundant computation and enabling a smoother start to the new epoch.

AccountsDB Optimizations

Storage efficiency has seen significant improvements in this release. Disk I/O usage has dropped by approximately 75%, and the volume of repair requests has been reduced by around 85%. Together, these optimizations result in more consistent and reliable node performance, particularly during periods of high network load.

Greedy Scheduler Enabled by Default

In Agave 2.3, the greedy scheduler is now enabled by default. The previous central scheduler often became a bottleneck under heavy network load due to the time required to sort transactions and construct a dependency graph. The newer greedy approach significantly accelerates transaction scheduling through simplified logic and smaller batch sizes.

Readers can learn more about the greedy scheduler in our previous Helius blog post.

Gossip Shred Version

Agave 2.3 introduces stricter enforcement of shred version matching in the gossip network. Nodes are now only allowed to establish inbound gossip connections if their shred version matches the cluster’s, helping reject misconfigured nodes early.

Previously, spy nodes could join the network without matching the cluster’s shred version. With this change, all nodes, including those in spy mode, must obtain the correct shred version, either from a cluster entrypoint or by explicitly setting it via the command line.
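As a minimal sketch of the admission rule (the function name and its shape are illustrative, not the actual Agave API):

```rust
// Illustrative sketch of the stricter gossip admission rule: an inbound
// peer is accepted only when its shred version matches the cluster's.
fn accept_inbound(cluster_shred_version: u16, peer_shred_version: Option<u16>) -> bool {
    match peer_shred_version {
        // Matching version: the peer is configured for this cluster.
        Some(v) => v == cluster_shred_version,
        // Previously, spy nodes could join with no shred version at all;
        // under 2.3 they must obtain or explicitly set one first.
        None => false,
    }
}

fn main() {
    // The shred version values here are arbitrary examples.
    assert!(accept_inbound(50093, Some(50093)));
    assert!(!accept_inbound(50093, Some(4711)));
    assert!(!accept_inbound(50093, None));
    println!("shred-version gate behaves as expected");
}
```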

This update builds on recent efforts to reduce gossip overhead. In recent months, ingress gossip traffic has dropped by approximately 61%, thanks to the deprecation of three gossip message types and the removal of epoch slot advertising from unstaked validators.

RPC Simulations Include Resource Usage

To improve visibility into transaction resource usage, a new field has been added to the default `simulateTransaction` RPC method: `loadedAccountsDataSize`. This field reports the total number of bytes of account data loaded during simulation.

This addition enables developers to estimate transaction costs and fine-tune priority fees with greater accuracy. Loading account data consumes compute units (CUs) at a rate of 8 CUs per 32 KB, which is based on Solana’s heap page allocation size. By surfacing this metric, developers can better optimize for cost efficiency when building and sending transactions.
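For example, the CU charge implied by `loadedAccountsDataSize` can be computed directly. This sketch assumes that a partially used 32 KiB page is charged as a full page, per the heap-page model described above.

```rust
// Cost model described above: loading account data is charged at
// 8 CUs per 32 KiB heap page; a partial page is assumed to be charged
// as a full page.
const HEAP_PAGE_BYTES: u64 = 32 * 1024;
const CU_PER_PAGE: u64 = 8;

fn loaded_data_cu(loaded_accounts_data_size: u64) -> u64 {
    loaded_accounts_data_size.div_ceil(HEAP_PAGE_BYTES) * CU_PER_PAGE
}

fn main() {
    // A transaction whose simulation reported 200 KiB of loaded data:
    let size = 200 * 1024;
    println!("{} bytes -> {} CUs", size, loaded_data_cu(size));
}
```

Feeding the simulated value into a calculation like this lets a client set its compute budget and priority fee from measured usage rather than a guess.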

Snapshot Enhancements

Snapshots serve as periodic save points, allowing nodes to restore their state. Nodes continuously generate these snapshots, replacing older ones with newer versions over time.

This release introduces several quality-of-life improvements to snapshot behavior:

  • Default Interval Updated: The default full snapshot interval has been increased from 25,000 to 50,000 slots, reducing the frequency of snapshot creation.
  • New Flag for Disabling Snapshots: A new `--no-snapshots` flag has been introduced to disable snapshot generation explicitly. The previous method of using `--snapshot-interval-slots 0` is now deprecated.
  • Improved Geyser Behavior: When restoring from a snapshot, account notifications sent through Geyser are no longer deduplicated.

An added benefit of extending the snapshot interval is smoother disk performance, reducing spikes in IOPS (input/output operations per second).

SBPF Toolchain Improvements

Agave 2.3 introduces several quality-of-life upgrades for developers working with the SBPF toolchain:

  • Version Targeting: Developers can now explicitly target specific BPF VM versions (v0–v3) when compiling programs, offering greater control.
  • Rust-Only for SBPFv3: Starting with SBPFv3, only the Rust-based toolchain is supported. The legacy C toolchain is no longer compatible with future versions.
  • New Optimization Flag: A new `--optimize-size` build flag has been added to generate smaller program binaries for deployment. This can help reduce storage, though it may slightly increase compute unit (CU) usage.

Other Changes

Additional updates included in this release:

  • Automatic Cluster Recovery: A new cluster recovery feature, `wen-restart`, has been introduced to automatically trigger a cluster restart in the event of a chain crash.
  • Updated Logging ABI: The `TimedTracedEvent` logging ABI has been updated to include some new diagnostics. As a result, validators must update any external tracing or analytics tools that rely on these logs to ensure compatibility. Existing trace data should be cleared following the upgrade to avoid format mismatches.
  • CLI Improvements: Added `withdraw-stake AVAILABLE` to simplify withdrawing all unstaked lamports, and updated `solana-test-validator` to bind RPC services to localhost (127.0.0.1) by default for improved security.
  • Faster Startup: Validator startup times have been significantly reduced. Nodes now take approximately 3 minutes to start and load the ledger, and about 5 minutes to catch up to the tip of the chain. However, fastboot now requires a graceful shutdown using the new `--wait-for-exit` flag. Operators must allow the validator process to fully terminate on its own before restarting, rather than issuing an immediate restart command.

Conclusion

Agave 2.3 marks another key milestone for the Solana protocol. Key highlights include the launch of a new TPU client (`tpu-client-next`), significant reductions in disk I/O through optimizations to AccountsDB, enhanced snapshot performance, improvements to the gossip network, and faster epoch transitions and startup times. Together, these updates make the network more robust while improving the experience for both developers and validator operators.

Solana continues to make steady progress toward a robust multi-client network, with over 8% of total stake operating Firedancer and climbing. Nearly a year and a half of uninterrupted uptime reflects the growing maturity and stability of the network’s core software. Meanwhile, the pace of both major and minor releases is accelerating.

Up next: Agave 3.0!
