Constellation: A Proposal For Multiple Concurrent Proposers on Solana


Many thanks to Matt, Nick, Alessandro, Brennan, and Max for reviewing earlier versions of this work.

Actionable Insights

  • Constellation is the first formal, protocol-level proposal to implement Multiple Concurrent Proposers (MCP) on a production blockchain at scale.
  • Constellation introduces two new roles (i.e., proposers and attesters) that constrain the leader’s discretion over block construction. Approximately 16 proposers operate concurrently on a 50ms cycle, assembling transactions into erasure-coded pslices distributed to 256 attesters. The attestation record cryptographically binds the leader to the set of transactions it includes. If a pslice is attested to by a sufficient number of attesters, the leader cannot exclude its transactions without producing an invalid block that the network will reject.
  • Constellation has the selective censorship-resistance property: in any given cycle, either all fee-competitive transactions are included or none are.
  • Content-visible ordering and timing-manipulation attacks remain unsolved. Under Constellation, a transaction is visible at submission time to every proposer that receives it. Because of MCP’s multi-proposer architecture, this may widen these attack surfaces rather than narrow them. Time-based latency games are acknowledged as unpunishable under the current design.
  • Constellation restructures existing fees—the inclusion fee maps to today’s base fee, and the ordering fee maps to the existing priority fee. The more significant economic shift is that activity currently flowing through out-of-protocol landing services and off-chain fee arrangements should return to the protocol. Stake-weighted role selection means existing concentration dynamics carry over, and the net impact on individual validators cannot be modeled until Constellation’s eventual SIMD.
  • MCP increases sequence latency but decreases inclusion latency. The attester round, 50ms cycle window, and batch assembly all add time relative to today’s direct TPU submission path. Today, inclusion latency varies with leader behavior: lower under validators that pack TPU transactions immediately, higher under those that delay them. Under Constellation, valid transactions gain a bounded, protocol-enforced guarantee of inclusion.
  • Constellation is explicitly incompatible with Proposer-Builder Separation (PBS) models. Once the attestation record constrains the leader’s discretion, there is nothing left for a specialized builder to sell. This approach represents a fundamentally different philosophy from Ethereum’s current approach to MEV.
  • Empirical benchmarks under realistic network conditions do not yet exist. The single most important data point Anza can provide is comparative latency projections for 200ms slots under the current protocol versus 200ms slots under Constellation. Until this data is available, the community is debating tradeoffs it cannot quantify. 
  • Constellation builds on top of Alpenglow, which is targeting a Q3 2026 mainnet launch.

Introduction

Despite a stunning lack of agave plants, Brennan Watt, CEO of Anza, went to the California desert to unveil Constellation—a proposal to bring Multiple Concurrent Proposers (MCP) to Solana. This is the most structurally ambitious upgrade and, arguably, the most consequential protocol-level MCP proposal any production blockchain has put forward thus far. It seeks to resolve the leader’s temporary monopoly over transaction ordering and the extractable value it creates. Constellation is the democratization of blockspace on Solana.

This article is a critical analysis of Constellation: what it solves, what it knowingly defers, and what remains genuinely unresolved. We introduce a three-layer framework for assessing censorship resistance, compare Constellation against the current MCP landscape, and examine whether the tradeoffs it introduces are compatible with the performance identity Solana has built.

Prior knowledge of Alpenglow is assumed. 

The Problem Constellation Solves

Transactions are the lifeblood of Solana. They are grouped together and written permanently to the network in the form of blocks. But the process of deciding which transactions make it into those blocks, and in what order, is not neutral.

The Leader’s Monopoly on Block Production

Block production rotates according to a leader schedule, in which one validator at a time is responsible for producing blocks in a given window.

During this time, transactions are forwarded directly to the leader’s Transaction Processing Unit (TPU), where the leader typically receives them before anyone else.

The leader occupies a position of unusual power. That is, they can observe incoming transactions before they are publicly visible.

The leader can decide not to include some transactions, reorder them arbitrarily, or introduce their own.

This is a structural feature of how single-leader consensus currently works, and is present to varying degrees in virtually every production Proof of Stake blockchain today.

Solana’s absence of a public mempool sharpens this asymmetry rather than reducing it. Ethereum’s public mempool gives participants some visibility into pending transactions, creating a level playing field of sorts among sophisticated actors competing to exploit transaction ordering.

On Solana, the leader’s informational advantage is less contestable given the nature of transaction forwarding. 

Maximal Extractable Value (MEV)

This power goes largely unexploited under normal conditions and with honest validators. However, the problem is that validators are rational economic actors. As Solana matures and financial activity continues to grow, the profit available from exploiting the leader’s temporary monopoly increases accordingly. A validator who chooses not to exploit this position is simply leaving money on the table. Correctly behaving nodes are left at an economic disadvantage—a disadvantage that incentivizes them to undermine the quality of the very system they participate in.

This extractable profit is known as Maximal Extractable Value (MEV), a term first formalized by Daian et al. in Flash Boys 2.0 as Miner Extractable Value before its application to Proof of Stake networks. It encompasses everything from arbitrage and frontrunning to sandwich attacks, selective censorship, and any strategy that exploits the leader’s informational and positional advantage over the users whose transactions they process.

The industry’s primary response to MEV has been the Proposer-Builder Separation (PBS) model, implemented on Ethereum via MEV-Boost. Under PBS, specialized builders compete to construct blocks that maximize extractable value, and proposers simply select the most profitable block to be produced. Because this model assumes MEV extraction is inevitable, it pragmatically reframes the issue: democratize access to MEV and redistribute its proceeds across the validator set, rather than concentrating them among the most sophisticated actors.

The issue with this framing is that extraction still occurs; only the beneficiaries have changed. PBS addresses some of MEV’s negative effects for network nodes, but it does not reduce harm to network users.

Solana has its own evolving relationship with MEV. The combination of fast block times, direct TPU submission, and a competitive validator set has produced a distinct MEV landscape characterized by spam, priority-fee auctions, and validator-level transaction reordering. Jito’s block engine can be seen as partially analogous to MEV-Boost, in that it provides an off-chain auction mechanism in which searchers bid on transaction ordering, with the proceeds shared between validators and stakers. That is, like PBS, Jito manages and redistributes MEV in a more democratic manner instead of eliminating it outright.

Constellation seeks to remedy this. Rather than accepting the leader’s monopoly and managing its consequences, it seeks to structurally contain the monopoly, making the most harmful forms of MEV impossible by design. Constellation’s whitepaper frames this ambition as the “infrastructure of Internet Capital Markets, a universal venue for economic activity in which users can trust that the market structure is fair.”

Traditional financial markets attempt to enforce similar protections through regulation and jurisdictional oversight. These protections are reactive and uneven, and have repeatedly been shown to be insufficient. Constellation’s ambition is to enforce fairness at the protocol level such that it cannot be circumvented or selectively applied. Constellation seeks to do this by implementing Multiple Concurrent Proposers (MCP) on Solana.

Multiple Concurrent Proposers

In a traditional single-leader blockchain, one validator is responsible for producing each block. This validator (i.e., the leader) temporarily holds exclusive control over transaction inclusion and ordering. While this validator is authorized to produce blocks, all other participants in the network are passive observers during that window. The leader ultimately decides what transactions are included, and in what order.

This design is appealing because of its simplicity. A single validator overseeing block production means no coordination overhead, no conflicting proposals to resolve, and a clean accountability model. However, it also means a single point of exploitation. The leader’s temporary monopoly is the root cause of MEV, and every major mitigation thus far has accepted this structure and sought to manage its consequences.

Multiple Concurrent Proposers (MCP) is a class of protocol design that breaks this monopoly at the structural level. Rather than rotating to a single leader who holds exclusive block production rights, MCP allows multiple nodes to propose transactions simultaneously. No single proposer controls the full transaction set. Instead, their proposals are combined, typically by a constrained assembler role, in accordance with the protocol's rules.

A user who submits a transaction to several proposers at once is no longer dependent on a single node, as they now have multiple independent paths to inclusion. A leader who attempts to exclude their transaction must contend with the fact that other proposers have already seen it, and attesters have already attested to it. A single leader assembles the final block, but their discretion is tightly constrained.

MCP’s main tradeoff is coordination complexity. Allowing multiple nodes to propose transactions simultaneously raises questions that single-leader designs avoid entirely. How are conflicting transactions resolved when two proposers include the same transaction? How is ordering determined across proposals? How do you prevent a sophisticated proposer from gaming combination rules? This adds considerable protocol complexity—teams must navigate coordination challenges involving new node roles, scheduling logic, cryptographic assumptions, and failure modes, all of which require rigorous testing.

It is worth being precise here, because MCP is used loosely throughout the industry to refer to a range of designs with meaningfully different properties. At its low end, MCP provides probabilistic censorship resistance—a transaction submitted to multiple proposers is harder to censor, as doing so requires coordination among multiple nodes. At the higher end, MCP can provide structural censorship resistance—it becomes mathematically impossible for the leader to produce a block that censors a transaction attested to by a sufficient quorum. This is what Constellation targets, and the difference matters enormously for financial applications that require hard guarantees.

Constellation: How It Works

Constellation is a protocol for implementing MCP on Solana. It is complementary to Alpenglow: Alpenglow handles consensus (i.e., safety, liveness, and finality), whereas Constellation handles market structure—who gets to propose transactions, how those proposals are acknowledged, and what the leader is permitted to do with them. Constellation produces the payload that Alpenglow finalizes.

Architecture

Constellation introduces two new roles to Solana’s protocol stack, each with a distinct responsibility, while modifying the roles of leaders and validators.

Proposers are the entry point for transactions. At any given time, approximately 16 proposers are active concurrently, selected randomly by stake weight and rotated every 32 cycles (i.e., ~1.6 seconds). Users submit their transactions directly to one or more proposers of their choosing. A proposer is free to accept or reject any transaction, subject to the constraint that accepted transactions must be valid. No protocol rule enforces transaction inclusion at this stage; the censorship-resistance guarantee comes later in the pipeline. Each proposer operates on a 50-millisecond cycle. Within each cycle, a proposer assembles its accepted transactions into a structure called a pslice—the “p” prefix is silent and merely differentiates it from Alpenglow’s slices. The pslice is erasure-coded into 256 smaller pieces, called pshreds, and one pshred is distributed to each of the 256 active attesters. The erasure coding uses a recovery threshold of 64, meaning that any 64 of the 256 pshreds are sufficient to reconstruct the full pslice. Each pshred contains a cryptographic hash commitment to the full transaction list, ensuring that the leader cannot substitute different transactions or alter the ordering within a pslice after attesters have signed off.
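To make the commitment binding concrete, here is a minimal Python sketch of how each pshred might carry a hash commitment to the full, ordered transaction list. The `pslice_commitment` scheme, `Pshred` structure, and `make_pshreds` helper are illustrative assumptions rather than details from the whitepaper, and the erasure coding itself is elided:

```python
import hashlib
from dataclasses import dataclass

def pslice_commitment(txs: list[bytes]) -> bytes:
    """Hash commitment over the ordered transaction list.
    (Illustrative scheme; the whitepaper does not specify the construction.)"""
    h = hashlib.sha256()
    for tx in txs:
        h.update(hashlib.sha256(tx).digest())  # order-sensitive: position matters
    return h.digest()

@dataclass
class Pshred:
    proposer: int
    index: int          # which of the 256 erasure-coded pieces this is
    commitment: bytes   # binds this piece to the full transaction list

def make_pshreds(proposer: int, txs: list[bytes], n: int = 256) -> list[Pshred]:
    """Erasure coding elided; every pshred carries the same commitment."""
    c = pslice_commitment(txs)
    return [Pshred(proposer, i, c) for i in range(n)]

shreds = make_pshreds(7, [b"tx-a", b"tx-b"])
# The leader cannot substitute or reorder transactions without breaking the commitment:
assert pslice_commitment([b"tx-a", b"tx-b"]) == shreds[0].commitment
assert pslice_commitment([b"tx-b", b"tx-a"]) != shreds[0].commitment
```

Because the commitment is order-sensitive, reordering within a pslice is as detectable as substitution, which is the property the attestation record relies on.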

Attesters receive pshreds from a proposer and immediately forward them to the next ~2 leaders, to account for any faults or absences, and record the commitment hash of the pslice they received. At the end of each cycle, the attester signs an attestation—a cryptographically binding statement listing every pslice commitment hash it observed during that cycle. This attestation is sent to the leader and serves as the evidentiary record that constrains which transactions the leader may include. The record is stake-weighted and signed, meaning that it cannot be forged or silently ignored. 

The leader in Constellation is the same as Alpenglow’s leader—the node responsible for producing the final block that enters consensus. The difference under Constellation is that the leader’s discretion is tightly constrained by the attestation record. Constellation enforces two distinct thresholds. For the aggregate attestation to be valid, at least 60% of attesters must participate. If this threshold is not met, the block is skipped entirely. Within that, any pslice that has been attested to by at least 40% of attesters must be included by the leader. Failure to do so produces an invalid block that the network will reject. This two-threshold design separates block-level validity from per-proposer inclusion. That is, the leader can produce a valid block even if some proposers’ data did not reach enough attesters, but cannot selectively exclude proposers whose data did. Once the leader has compiled all attested pslices into a batch, it transmits that batch to validators via Alpenglow’s Rotor. 
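The two thresholds can be sketched as a simple check over the attestation record. This is a hedged illustration: `block_constraints` and its input shape are hypothetical, and the exact aggregation rules await Constellation’s eventual SIMD:

```python
def block_constraints(attestations: dict[str, set[int]],
                      total_attesters: int = 256,
                      participation_threshold: float = 0.60,
                      inclusion_threshold: float = 0.40):
    """attestations maps a pslice commitment to the set of attester ids
    that signed it. Returns (block_valid, must_include):
      - block_valid: at least 60% of attesters participated at all;
      - must_include: pslices attested by at least 40% of attesters,
        which the leader cannot exclude without invalidating the block."""
    participants = set().union(*attestations.values()) if attestations else set()
    block_valid = len(participants) >= participation_threshold * total_attesters
    must_include = {c for c, signers in attestations.items()
                    if len(signers) >= inclusion_threshold * total_attesters}
    return block_valid, must_include

# Pslice "A" reached 160 attesters (>60% participation, >40% inclusion);
# "B" reached only 100 (<40%), so the leader may omit it without penalty.
valid, must = block_constraints({"A": set(range(160)), "B": set(range(100))})
assert valid and must == {"A"}
```

This separation is what lets the leader produce a valid block even when some proposers’ data failed to propagate, while making selective exclusion of well-attested pslices impossible.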

Validators receive batches from the leader via Rotor and perform pipelined execution as the batches arrive. Once the full block is received, validators check it against the attestation record to confirm that every attested pslice has a corresponding submission in the block. Only if all checks pass does the validator vote to finalize it; if the checks fail, then validators will vote to skip the leader’s entire window via the TrySkipWindow call. 

Cycles and Blocks

A cycle is Constellation’s fundamental unit of time. It is a 50-millisecond window derived from UTC wall-clock time by dividing the Unix nanosecond timestamp by 50,000,000. Critically, cycles are not aligned with Alpenglow’s slots. A slot contains multiple cycles, and the batches produced across those cycles constitute the payload of the leader’s block. This distinction matters because the 50ms cycle is the economic tick (i.e., the window within which censorship resistance is enforced) while the slot remains Alpenglow’s unit of consensus.
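The cycle derivation is simple enough to state directly. A minimal sketch, assuming only the integer division described above, which also shows how a small clock skew near a boundary can shift the computed cycle:

```python
def cycle_index(unix_ns: int, cycle_ns: int = 50_000_000) -> int:
    """Cycle = Unix nanosecond timestamp integer-divided by 50 ms."""
    return unix_ns // cycle_ns

t = 1_700_000_000_000_000_000
# Timestamps 50 ms apart land in adjacent cycles:
assert cycle_index(t + 50_000_000) == cycle_index(t) + 1
# Near a boundary, a 5 ms-skewed clock can disagree on the current cycle:
boundary = (cycle_index(t) + 1) * 50_000_000
assert cycle_index(boundary - 1) != cycle_index(boundary - 1 + 5_000_000)
```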

The whitepaper specifies a tolerance for clock skew between proposers and attesters, and adjusts the attestation window accordingly. To illustrate why this matters, suppose a proposer’s clock runs 5ms ahead of the attester set. This proposer’s pshreds may arrive at attesters earlier than expected relative to the cycle boundary, giving transactions in that pslice a slightly larger window to accumulate attestations. Conversely, a proposer whose clock drifts behind may find its pshreds arriving late enough that they fall out of the attestation window entirely, even though the proposer submitted them “on time.” In data center environments that use tools such as chrony or GPS receivers, clock synchronization is routine, and drift is typically sub-millisecond, which is well within Constellation’s tolerance bounds. The concern is that Constellation introduces a new variable that did not exist under Alpenglow’s purely logical time model, and one that Constellation’s eventual SIMD should specify monitoring bounds for.

As Constellation approaches the end of an epoch, proposers and attesters may briefly be uncertain whether the next epoch has started. Constellation operates in both epochs concurrently during this window with two sets of proposers and attesters active simultaneously. Alpenglow’s consensus naturally resolves which cycles belong to which epoch.

Transaction Lifecycle and Fees

A transaction must pass through four gates before it executes:

  • It must be accepted and included in a pslice by a proposer.
  • That pslice must accumulate sufficient attestations to be included in the leader's batch.
  • The transaction must have a high enough bid to be selected for execution within the batch’s compute limit.
  • The block containing the batch must be confirmed by Alpenglow’s consensus.

Each transaction carries a bid (i.e., the execution fee per compute unit) that determines its ordering within the same batch. Higher bids are executed first.
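A minimal sketch of bid-based selection and ordering within a batch, under an assumed compute limit. The greedy admission and the tie-breaking rule are illustrative assumptions; the whitepaper does not specify them:

```python
from typing import NamedTuple

class Tx(NamedTuple):
    sig: str
    compute_units: int
    bid: float  # execution fee per compute unit

def order_batch(txs: list[Tx], compute_limit: int) -> list[Tx]:
    """Sort by bid (highest first) and admit transactions greedily until
    the batch's compute limit is exhausted. Ties broken by signature here
    purely for determinism in this sketch."""
    selected, used = [], 0
    for tx in sorted(txs, key=lambda t: (-t.bid, t.sig)):
        if used + tx.compute_units <= compute_limit:
            selected.append(tx)
            used += tx.compute_units
    return selected

txs = [Tx("a", 100_000, 0.00002),
       Tx("b", 150_000, 0.00001),
       Tx("c", 100_000, 0.00003)]
# "c" bids highest and executes first; "b" no longer fits under the limit.
assert [t.sig for t in order_batch(txs, 250_000)] == ["c", "a"]
```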

Constellation splits the cost of a transaction into two distinct fees:

  • An inclusion fee.
  • An ordering fee.

The inclusion fee is a small, fixed charge based on the transaction’s size and number of signatures. It is paid to the proposer who included the transaction in their pslice, and is charged the moment the transaction crosses the attestation threshold, regardless of whether it ultimately executes. This is analogous to the base fee in Solana’s current system, but with an important caveat: if a user submits the same transaction to n proposers for redundancy, they pay the inclusion fee n times (i.e., once per proposer), since each proposer independently performed the work of including it.

The ordering fee is the larger, priority-based component, which is the transaction’s total compute units multiplied by its bid. This is charged only once because the transaction can be executed only once, regardless of how many proposers include it. For example, a transaction requesting 200,000 compute units at a bid of 0.00001 SOL per compute unit pays an ordering fee of 2 SOL. If that same transaction was submitted to four proposers for redundancy, the user would pay four inclusion fees plus a single 2 SOL ordering fee. The ordering fee is returned to the ecosystem proportional to node stake—the whitepaper leaves the design of this mechanism for its eventual SIMD.
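The fee split can be sketched as follows. The inclusion-fee value used here is a placeholder, not a number from the whitepaper; only the structure (n inclusion fees plus one ordering fee) follows the text:

```python
def total_fees(compute_units: int, bid_per_cu: float,
               num_proposers: int, inclusion_fee: float = 0.000005) -> float:
    """Inclusion fee (placeholder amount) is paid once per proposer that
    included the transaction; the ordering fee (compute_units * bid) is
    paid once, since the transaction executes only once. All values in SOL."""
    ordering_fee = compute_units * bid_per_cu
    return num_proposers * inclusion_fee + ordering_fee

# Worked example from the text: 200,000 CU at 0.00001 SOL/CU is a 2 SOL
# ordering fee, plus four inclusion fees when submitted to four proposers.
fees = total_fees(200_000, 0.00001, num_proposers=4)
assert abs(fees - (2.0 + 4 * 0.000005)) < 1e-9
```

Note that only the inclusion component scales with redundancy, which is what makes multi-proposer submission cheap in absolute terms but proportional in exposure.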

Every fee-payer account must maintain a minimum reserve balance of approximately 0.001 SOL to prevent fee manipulation among concurrent proposers. This ensures that inclusion fees can always be paid, even when multiple proposers concurrently include transactions that touch the same account.

Constellation and Alpenglow

Alpenglow is Solana’s consensus protocol. It determines which blocks are valid, the order in which they are finalized, and how the network recovers from faults. Its Votor and Rotor components replace Tower BFT and gossip-based vote propagation, delivering a significant reduction in time to finality. Alpenglow says nothing about who proposes transactions or how ordering is determined within a block.

Constellation is a market-structure layer that constrains what the Alpenglow leader is permitted to do with the blocks it assembles—it defines who proposes transactions and how ordering is determined within each block. The batches Constellation produces become the payload of Alpenglow’s blocks. Alpenglow’s Votor then notarizes those blocks in accordance with its predefined voting rules. The two protocols are composed such that Alpenglow provides safety and liveness, while Constellation provides order fairness.

This composability means that Constellation also inherits Alpenglow’s security assumptions without weakening them. Constellation does not touch Alpenglow’s guarantees. Instead, it introduces new guarantees and assumptions for the new proposer and attester roles, as well as UTC wall-clock synchronization, in its new cycle-based timing model.

Constellation is the first formal protocol-level proposal for implementing MCP on a scalable, production blockchain. It is a meaningful step toward introducing censorship resistance to Solana—the next chapter in the protocol roadmap that Alpenglow opens.

What Censorship Resistance Actually Requires

The MEV literature has historically approached the problem through several distinct lenses that, taken together, map onto a more unified picture of what any censorship-resistance proposal actually needs to solve. Drawing on Eskandari et al.’s foundational taxonomy of front-running attacks, Garimidi et al.’s formal two-property framework for MCP protocols, and Landers and Marsh’s analysis of MCP-specific MEV channels, we propose organizing the attack surface into three distinct layers, each requiring a different class of solution. This framework is our own synthesis, introduced here to evaluate Constellation’s design.

Layer 1: Hard Censorship

Hard censorship refers to a leader or proposer's ability to refuse to include a transaction they have identified. This is the most legible form of manipulation and one that Constellation solves structurally. Under Constellation, it becomes cryptographically impossible for a leader to produce a valid block that excludes a fee-competitive transaction attested to by a sufficient quorum of attesters. Slashing is not required for this, as the enforcement is architectural.

Layer 2: Content-Visible Ordering

The second layer is harder to address. Although proposers cannot censor outright, they can still observe transaction content before the final ordering and attempt to exploit that visibility (e.g., sandwiching a large trade). This is what Garimidi et al. formalize as the hiding property—an adversary must not be able to see the contents of transactions before they are confirmed. Constellation implements partial hiding. That is, a transaction is visible only to the proposer who receives it—not all proposers—and the leader sees transaction content only after the cycle deadline has passed. This is better than full visibility but does not fully satisfy Garimidi et al.’s hiding property, which requires a transaction’s contents to remain invisible to all parties before confirmation. The receiving proposer can still observe and exploit the contents of transactions it receives.

The deeper concern with Constellation is that MCP with public transaction submission may amplify content-visible exploitation. This introduces a system in which each proposer observes the transactions it receives and can exploit that visibility within its own pslice. The attack surface differs from that of the single-leader model, as multiple entities can each see a subset. A user who submits to a single proposer exposes their transaction only to that proposer. But a user who submits to multiple proposers for redundancy broadens their exposure proportionally. Landers and Marsh formalize this dynamic: concurrent block production creates timing games, same-tick duplication opportunities, and a structural absence of a single-builder chokepoint that currently limits the number of extraction attempts that can land per victim transaction. Decentralizing the proposer set without addressing content visibility multiplies the MEV attack surface rather than reducing it.

Landers and Marsh’s analysis of MCP-specific MEV channels assumes broader content visibility than Constellation provides. Under Constellation’s partial hiding, the amplification of content-visible exploitation is a function of user submission strategy rather than an architectural inevitability. A user who submits to a single trusted proposer has roughly the same content-exposure profile as under today’s single-leader model. The tradeoff is that a single-proposer submission sacrifices the redundancy that censorship resistance depends on. 

Layer 3: Timing and Latency Manipulation

The subtlest and hardest-to-punish layer pertains to timing and latency manipulation. Under Constellation, a proposer can delay forwarding pshreds to attesters by just enough that a competitor’s transaction falls outside of the attestation window, or exploit UTC clock skew to manipulate which transactions accumulate sufficient attestations. This gap is acknowledged directly by the Constellation whitepaper: late message delivery “cannot be punished,” since it is indistinguishable from genuine network delay. This is the layer where slashing might become relevant, and the primary open question Constellation’s eventual SIMD will need to address. 

A fundamental assumption underlying Constellation’s design is that the proposer-user relationship is not anonymous—it is a repeated interaction in which trust is measurable and reputation matters. With approximately 16 proposers active at any given time, a user who experiences consistently poor treatment from a proposer can start submitting to one of the other 15. While this doesn’t create any onchain artifact that can be used to punish a bad actor directly, it creates economic consequences for proposers who exploit their position. The extent to which this reputational pressure is sufficient to deter timing manipulation, compared to a solution such as slashing, is an open question that will likely depend on how transparent proposer behavior becomes to users over time. 

| Layer | Attack Type | Constellation Coverage | Solution Class |
| --- | --- | --- | --- |
| Hard Censorship (1) | Suppression attack (i.e., the leader or proposer outright blocks a transaction) | Fully solved (i.e., block validity rules and validator rejection) | Cryptographic enforcement |
| Content-Visible Ordering (2) | Frontrunning / sandwich (i.e., proposer sees transaction content and exploits ordering) | Partially addressed (i.e., transaction content is visible to the receiving proposer(s) and leader post-deadline) | Async execution or hiding |
| Timing and Latency Manipulation (3) | PoA-latency timing race (i.e., soft delay of pshreds, clock skew) | Open gap (i.e., unpunishable, and the paper acknowledges this) | Slashing for detectable cases, and hiding for the rest |

Impact on Validators and Users

Validators

Constellation redistributes MEV opportunities among validators rather than eliminating them entirely. The most legible and directly extractable revenue source available to leaders today (i.e., hard censorship) is, by design, impossible to achieve. However, what replaces it is a set of subtler, harder-to-punish timing channels that favor validators with latency advantages, precise clock synchronization, and the sophistication to consistently exploit pshred forwarding windows. The net effect is a shift in how validators extract value rather than a reduction in the total extraction surface. The caveat is that what remains is significantly harder to extract.

Constellation does not introduce fundamentally new fee streams so much as restructure existing ones. The inclusion fee is analogous to today’s base fee, and the ordering fee maps to the existing priority fee. The splits are expected to look similar to what validators earn today. The difference is largely operational. That is, good operators should win more inclusion fees as proposers, creating a performance-based gradient within the existing economics rather than a separate revenue category. The attester role is not separately compensated in the current design. The rationale is that, like current participation in Turbine, it is expected to be performed because it is net-positive for the network. The more significant economic shift is the activity currently flowing through out-of-protocol landing services, market-based auctions, and off-chain fee arrangements, which should come back in-protocol to benefit validators more directly. However, until the SIMD specifies the exact mechanics, the net economic impact on individual validators—particularly smaller ones, where stake-weighted selection reduces proposer frequency and infrastructure overhead raises the cost floor—remains an open question.

Proposers and attesters are selected by stake weight, meaning the same concentration dynamics that shape validator economics also shape participation in these roles. If a small number of high-stake validators dominate proposer selection, the censorship-resistance guarantee remains formally intact, but the practical diversity of the proposer set narrows, despite still being an improvement over what Solana has today (i.e., n choose 1 versus n choose 16). The independence assumption starts to weaken in practice, even if it holds in theory. This is a concern worth noting, given the emergence of Validator-as-a-Service (VaaS) offerings, such that a single entity may operate multiple high-staked validators. Whether the SIMD introduces any anti-concentration mechanisms or incentives for proposer selection is a design question with direct implications for the strength of the guarantees Constellation advertises.

Users

For users, a fee-competitive transaction submitted to a sufficient number of proposers is, for the first time, protected by a hard protocol guarantee against selective exclusion. Financial applications can now be built with guarantees that simply did not exist before, expanding what is possible on Solana.

High-frequency and price-sensitive users must now submit transactions to multiple proposers for redundancy. The concern is that decentralizing the proposer set without addressing content visibility may increase exposure to sandwich attacks—each proposer can observe and act on the transactions submitted to it, so users who submit to multiple proposers for redundancy proportionally increase the number of parties who see their transaction content. Multi-submission trades the single-leader chokepoint for broadcasting transaction intent to a wider set of potential adversaries. The practical implication is that sophisticated users will need submission strategies that balance redundancy against exposure, likely involving selective proposer targeting based on reputation or stake rather than broad multi-submission.

For market makers specifically, Constellation’s inclusion guarantee eliminates the risk that adversarial infrastructure determines execution quality. What remains is pure information asymmetry, which is the same risk profile that market makers face on the best traditional venues today. This convergence is what makes the argument for reduced inclusion latency concrete rather than aspirational, and it is discussed more thoroughly in our “Open Questions” section.

The net change in the average user's perceived experience is likely marginal, but the inclusion guarantee is a meaningful improvement in reliability. Pending future benchmarks, the net impact on sequence latency remains an open empirical question.

Landscape: How Constellation Compares

Sei Giga

Sei Giga is the closest analog to Constellation in the current MCP landscape: a production-grade blockchain pursuing MCP as a first-order architectural priority rather than a future research goal. Comparing the two is instructive insofar as they make different tradeoffs at the same layer.

Giga’s consensus foundation is called Autobahn, a multi-proposer BFT protocol in which every validator operates its own continuous “lane” of proposals in parallel. Rather than relying on a single leader, every node continuously disseminates its own stream of data proposals in independent lanes, and the consensus layer periodically commits a “tip cut,” which is a compact snapshot aggregating the latest proposals from every lane. This is architecturally different from Constellation’s model, where approximately 16 selected proposers operate on a fixed 50ms cycle. Autobahn’s lane-based model allows any validator to maintain a continuous proposal lane, rather than being selected from a rotating, stake-weighted subset, thereby dramatically broadening participation in block production.

The most consequential difference is with respect to content-visible ordering. Autobahn enables asynchronous execution by decoupling transaction ordering from execution, a design choice that Constellation defers. As discussed in the following section, async execution narrows the attack surface of content-visible ordering by preventing proposers from simulating execution outcomes against a known final state at ordering time.

Giga offers probabilistic censorship resistance, whereas Constellation provides structural guarantees. The belief underlying Giga is that a transaction submitted to multiple proposers is harder to censor, since each proposer operates with incomplete information, and the utility of censoring may be lost if another proposer includes the transaction in the same tick. By comparison, a leader who excludes a sufficiently attested transaction produces an invalid block under Constellation. Probabilistic resistance raises the cost of censorship, whereas structural resistance makes censorship cryptographically impossible. For the financial applications both protocols are trying to enable, the difference matters.

It is worth being candid about the difference in ambition between the two designs. Constellation is a protocol specification that proves correctness properties, defines fault conditions, specifies quorum thresholds, and is intended to be submitted as a formal proposal to a scalable, production network with billions in staked value. The Sei Giga whitepaper reads differently, oriented toward throughput claims and EVM compatibility, with MEV and censorship resistance treated as emergent benefits of the multiple-proposer architecture rather than as formally specified guarantees. Async execution is directionally correct, but Giga does not provide the same level of formal guarantees regarding ordering constraints, attester quorums, or fault conditions as Constellation does. This is not a criticism of Giga’s sequencing choices so much as a reflection of different contexts—Constellation is being proposed on top of the world's highest-throughput production blockchain, which demands and delivers a correspondingly higher standard of specification.

The Academic Ideal

The theoretical benchmark for MCP design is Multiple Concurrent Proposers: Why and How (2025) by Garimidi and Neu of a16z Crypto Research and Max Resnick of Anza. The paper proposes an MCP protocol that offers two properties it argues any censorship-resistant design must satisfy: selective-censorship resistance and hiding. The former guarantees that an adversary cannot selectively delay transactions, whereas the latter guarantees that transaction contents remain invisible before confirmation. It is the only MCP design in the current literature that formally achieves both simultaneously.

The mechanism that enables hiding is HECC—Hiding Erasure-Correcting Code. It is parameterized such that any T shreds reveal no information about the underlying transaction batch, while K + T shreds allow full reconstruction. The critical detail is that relays broadcast their stored shreds only after consensus has confirmed which batches are included. This prevents any pre-confirmation observation of transaction content, completely closing out the content-visible ordering attack surface as an information-theoretic guarantee. 
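As an analogy for HECC’s threshold behavior, a toy Shamir-style sharing scheme exhibits the same hide-below-threshold, reconstruct-above-threshold structure. This is only an illustration of the information-theoretic property, not HECC’s actual construction:

```python
# Toy Shamir-style threshold sharing over GF(p): any t shares reveal nothing
# about the secret, while t + 1 shares reconstruct it exactly. This is an
# analogy for HECC's thresholds (any T shreds hide; K + T reconstruct), not
# the actual HECC code.
import random

P = 2**61 - 1  # prime field modulus

def split(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Embed `secret` as the constant term of a random degree-t polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def poly(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate the polynomial at x = 0 to recover the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=42, t=3, n=8)
assert reconstruct(shares[:4]) == 42  # t + 1 shares suffice
assert reconstruct(shares[:3]) != 42  # t shares interpolate the wrong value (w.h.p.)
```

With uniformly random coefficients, any t shares are statistically independent of the secret, which is the same kind of guarantee HECC extends to shred batches.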

Mapping against the framework developed earlier, this protocol design is the only one that addresses Layer 1 with structural censorship resistance, Layer 2 with hiding, and bounds Layer 3 due to the censorship-resistance guarantees provided by hiding. Neither Constellation nor Giga achieves this in full.

What makes this comparison particularly interesting is that Resnick, a co-author of the theoretical ideal, is also a co-author of Constellation, a protocol that knowingly departs from it. This reflects a deliberate judgment that the full HECC-based design is not yet deployable on a production network at Solana’s scale, and that structural censorship resistance is the more urgent problem to solve first. The paper serves as Constellation’s north star: a formal specification of what the protocol is building toward, even if it cannot deliver it all at once. 

Ethereum Braid

Braid is Ethereum’s primary MCP proposal, introduced by Max Resnick, and is currently under consideration as part of Ethereum’s Scourge roadmap alongside the competing FOCIL inclusion list design. Its inclusion here is less about technical comparison and more about context; the entire industry is trying to work through the same structural problems but from different starting points.

Braid implements MCP by allowing multiple proposers to build blocks across parallel chains simultaneously within the same slot, with the execution layer aggregating, deduplicating, and sorting transactions according to predetermined rules. It does not introduce additional protocol roles. The most consequential difference is that Braid’s safety depends heavily on encrypted mempools, making hiding a prerequisite rather than a deferral. Braid remains an undeployed research proposal, and the Ethereum community has not yet reached consensus on whether to pursue it over FOCIL.

What Braid ultimately confirms is that the structural argument for MCP transcends any single chain. It is also worth noting that Resnick has worked on three of the four entries in this section, which is perhaps the clearest signal that Constellation is the product of sustained, cross-context, academically rigorous thinking about a problem that has resisted easy solutions.

A Note on PBS

Proposer-Builder Separation (PBS) is worth mentioning here as a foil, rather than a comparable design. Where every entry in this section attempts to structurally constrain the leader’s temporary monopoly, PBS accepts it and optimizes around it to redistribute MEV proceeds. Constellation is explicitly incompatible with PBS—once a leader’s discretion is constrained by the attestation record, there is nothing left for a specialized builder to sell. The fact that PBS has become the dominant MEV mitigation on Ethereum, despite doing nothing to reduce harm to users, is precisely the failure mode that MCP is designed to avoid.

The Off-Protocol Precedent

Before Constellation ships, Solana’s ecosystem is already approximating aspects of MCP off-protocol. Harmonic, for example, is an open block-building aggregation layer that continuously collects and evaluates block proposals from multiple independent builders, presenting them to validators for competitive selection in real time. This is not MCP in the formal sense, as there is no protocol-enforced censorship resistance, attestation quorum, or cryptographic constraint on the leader’s discretion. However, validators running Harmonic are already choosing between multiple concurrent block proposals, which is the core mechanic that MCP seeks to enshrine. Together with BAM, the two represent the ecosystem’s attempt to solve market-structure problems without waiting for protocol-level enforcement. These off-protocol systems demonstrate that the demand for MCP-like properties is real enough that builders aren’t waiting for Constellation to ship.

Open Questions

Constellation’s whitepaper is a protocol specification. It proves correctness properties under the stated assumptions and correctly defers everything else, which is appropriate for a v0.9 proposal. What follows is not a list of Constellation’s failures, but a map of what its eventual SIMD and future iterations will need to address in order to effectively bring MCP to Solana. 

These questions are not equally hard. Some are specification work: design decisions that Anza can and should resolve through the normal SIMD process. Others are genuinely open problems that the broader MCP research community has not yet solved, but that we must be cognizant of as we trailblaze toward becoming the first blockchain at scale to implement MCP. No SIMD alone can resolve them. The distinction matters because conflating the two risks either overstating Constellation’s gaps or underestimating how much work remains.

The straightforward SIMD work includes:

  • Fee Distribution: how priority fees are split between proposers, attesters, and the broader validator set is sketched but not fully specified. The whitepaper states that priority fees are returned to the ecosystem in proportion to stake, but the exact distribution mechanism between proposers, attesters, and validators isn’t defined.
  • Validator Reward Structure: how proposers are compensated relative to existing validator rewards, and whether the removal of vote transaction fees under Alpenglow changes the calculus for smaller validators.
  • Deployment Sequencing: Constellation depends on Alpenglow, which is slated for Q3 2026. The SIMD needs to specify the dependency explicitly and address what happens during the transition window from vanilla Alpenglow to Constellation+Alpenglow. Are there any prerequisite SIMDs that should hit mainnet first?
  • Governance of Role Parameters: the proposer count (p ≈ 16), attester count (q ≈ 256), cycle duration (△cycle = 50ms), and other parameters from Table 1 of the Constellation whitepaper are presented merely as suggestions. The SIMD needs to specify how these parameters should be set, governed, and potentially changed over time.
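As one illustration of the underspecified fee-distribution item above, a stake-proportional split might look like the following. This is entirely hypothetical; the SIMD will define the real mechanism:

```python
# Hypothetical stake-proportional fee split (the whitepaper says only that
# priority fees return to the ecosystem in proportion to stake; this concrete
# rounding scheme is an invented illustration, not the specified mechanism).

def distribute_fees(total_fees: int, stakes: dict[str, int]) -> dict[str, int]:
    """Split fees pro-rata by stake (integer lamports), no lamport lost."""
    total_stake = sum(stakes.values())
    shares = {v: total_fees * s // total_stake for v, s in stakes.items()}
    # hand leftover lamports to the highest-stake validators deterministically
    leftover = total_fees - sum(shares.values())
    for v in sorted(stakes, key=stakes.get, reverse=True)[:leftover]:
        shares[v] += 1
    return shares

split = distribute_fees(1_000, {"A": 50, "B": 30, "C": 20})
assert split == {"A": 500, "B": 300, "C": 200}
assert sum(split.values()) == 1_000  # conservation of fees
```

Even this trivial sketch surfaces a real SIMD question: how rounding residue and the proposer/attester/validator split interact with stake weighting.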

The harder questions (e.g., asynchronous execution, slashing, submission-layer privacy) are addressed in the following subsections. These are questions that the MCP research community is actively working on that we must consider, as Constellation’s design choices can either narrow or widen the path to certain eventual solutions.

Asynchronous Execution

Under synchronous execution, a proposer who receives a transaction in plaintext, or who can decode it early under a naive submission model, knows the transaction’s content and can simulate its outcome. The proposer can run the transaction against the current state to compute exactly what the execution result will be, including swap prices, account balance changes, and downstream arbitrage opportunities. This is what makes sandwich attacks mechanically precise: attackers can see large swaps and calculate exactly how much to frontrun them by to maximize profit.

Asynchronous execution removes the second half of that advantage, but only when combined with MCP. If consensus commits to transaction ordering before execution, a proposer who can see the transaction content cannot simulate execution outcomes against a known final state, because that state does not exist at ordering time. The information advantage effectively narrows from “I know what this transaction does and in what order it executes” to “I know what this transaction is, but not what it will do relative to the final ordered set.” This benefit depends on the proposer not controlling the final ordering. Under a single-leader model, async execution alone does not provide this protection, as the leader still has full discretion over ordering and can place their own transactions advantageously regardless of when execution occurs. It is a combination of constrained ordering and deferred execution that narrows the attack surface.

Note that async execution does not fully close Layer 2. A sophisticated proposer can still perform categorical inference. That is, a proposer can still see that a transaction touches a specific liquidity pool, for example, and may infer the likely direction without knowing the exact outcome. This raises the bar meaningfully against the most mechanical and profitable forms of exploitation, and represents the clearest architectural path to narrowing Layer 2 without requiring cryptographic hiding at the consensus layer. Notably, this is the approach Sei Giga chose, pursuing async execution as a first-order architectural priority alongside MCP.

This raises a natural question worth sitting with: Why wasn’t async execution pursued first? It narrows Layer 2 and could, in principle, have been pursued as a contained execution-layer change without introducing MCP’s additional protocol complexity (i.e., new node roles, UTC clock synchronization requirements, an unresolved slashing design, and a doubled shredding overhead). 

The strongest case for this sequencing is that async execution and MCP are solving different problems. MCP provides structural ordering constraints that async execution alone cannot—a validator that cannot simulate execution outcomes but can still see transaction content can exercise discretion over ordering within the protocol-allowed window. The case for pursuing MCP first is that it constrains Layer 1 at the structural level, and Layer 1 is the more legible and economically immediate threat. Retrofitting asynchronous execution into Solana’s existing synchronous execution model, given its composability assumptions and program architecture, is a harder engineering problem than building it into a new chain from scratch. Sei Giga has the luxury of designing for async execution from day one, while Solana does not. This practical asymmetry may matter as much as the theoretical priority argument in explaining why MCP was pursued first. Alpenglow’s architecture also makes MCP more tractable than it would have been under Tower BFT, which we explore in the following subsection on protocol complexity.

Whether that sequencing is correct is a reasonable open question. Constellation leaves Layer 2 intact, which is problematic given the new attack vectors MCP introduces in Layer 3. In turn, this can make Layer 2 attacks more profitable, since the two attack surfaces are complementary. For example, a proposer who can see a large DEX transaction can delay its pshreds to push it out of the current batch window while frontrunning it with their own transaction in the same window. Layer 2 visibility and Layer 3 timing games are complementary weapons on the same attack surface, and they will only grow more sophisticated as Solana matures.

Slashing

Cryptographic enforcement works well when an actor’s misbehavior produces a verifiable artifact (e.g., conflicting signatures, failed validity checks, provably malformed commitments). Constellation solves Layer 1 censorship-resistance concerns so cleanly because a leader who excludes an attested transaction produces an invalid block, and that invalidity is mathematically demonstrable. Timing games and latency manipulation produce no such artifact. The difficulty is that this kind of misbehavior is indistinguishable from honest network delay at the level of individual acts. The misbehavior exists as an absence, and an absence cannot be cryptographically proven. The only lever available is economic deterrence, which requires slashing.

The challenge with slashing is that it traditionally requires a provable offense. Constellation’s fault witness mechanisms handle the case where a proposer who signs two conflicting pshreds can be identified and excluded. However, strategic latency manipulation produces no fault witness. No equivocation, no double-signing, no onchain fingerprint. A proposer who simply delays forwarding pshreds by a few milliseconds consistently and selectively leaves nothing to slash. 

If a proposer’s pshreds consistently arrive at attesters in the last few milliseconds of the cycle window—across many cycles, for transactions that happen to be competitors to the proposer’s own submissions—a well-specified slashing mechanism could treat this pattern as evidence of systematic manipulation without a single provably malicious act. Traditional slashing cannot address this directly. Slashing in its canonical form requires an unambiguous, self-contained proof, and no such proof exists for a proposer who simply delayed forwarding by a few milliseconds. What differs is the pattern.

This is where traditional finance may be instructive for decentralized finance: regulators already confront latency-based manipulation statistically. Spoofing enforcement under the Dodd-Frank Act, for example, relies on detecting statistical patterns (e.g., cancel-to-fill ratios, cancellation timing distributions, price impact correlations) instead of proving intent for any individual order. No single instance is provably intentional, but the pattern is. The same logic applies onchain because statistical regularity is objective even when individual acts are not, and the economic incentive to manipulate is as present in permissionless proposer sets as it is among high-frequency traders in traditional finance. Where the analogy breaks is enforcement: Dodd-Frank relies on a regulator with subpoena power, whereas a trustless context requires that the detection and penalty mechanisms be enshrined in the protocol itself.

We propose adapting fisherman nodes as a candidate mechanism to address this gap. Originally introduced in Vitalik’s research on data availability, fisherman nodes could be adapted as a class of observers that watch attestation data across many cycles and submit statistical fraud proofs to an enshrined arbitration protocol. The individual late arrival is subjective. However, the pattern computed deterministically across n cycles is objective. This is the same insight that underlies fraud proofs in optimistic rollups, but applied to timing behavior instead of state transitions. Also, an enshrined arbitration protocol for statistical fraud proofs is not categorically different in kind from the new governance tooling currently under development, which would allow stakers to override their validator’s votes in future governance proposals. If Solana is prepared to have infrastructure for staker vote overrides, the technical foundations for a fisherman-based arbitration system may be closer than they appear.
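A minimal sketch of the statistical test a fisherman might run, assuming a hypothetical honest-jitter base rate `p0` for "late-window" arrivals and an arbitrary significance threshold (both parameters are invented; specifying them rigorously is exactly the open problem):

```python
# Fisherman-style check (hypothetical parameters). Null hypothesis: a
# proposer's pshreds land in the final milliseconds of a cycle no more often
# than honest network jitter predicts (rate p0). A one-sided binomial tail
# test over many cycles flags systematic lateness without proving any single
# act malicious.
from math import comb

def binomial_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def flag_proposer(late_cycles: int, total_cycles: int,
                  p0: float = 0.1, alpha: float = 1e-6) -> bool:
    """Flag only if observed lateness is wildly improbable under honest jitter."""
    return binomial_tail(total_cycles, late_cycles, p0) < alpha

# 400 late arrivals in 1,000 cycles when honest jitter predicts ~10% is flagged:
assert flag_proposer(400, 1000)
# 100 late arrivals in 1,000 cycles is within honest variance:
assert not flag_proposer(100, 1000)
```

The individual late arrival proves nothing; the deterministic tail probability across n cycles is the objective artifact an arbitration protocol could adjudicate.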

This exploration into fisherman nodes represents a more credible direction than extending canonical slashing to cover acts that leave no single onchain fingerprint, and one that the protocol’s current development of governance infrastructure may already be positioned to support.

That said, three limitations would need to be addressed in any concrete specification. First, conducting a statistical analysis of attestation timing data across thousands of cycles is non-trivial and could easily raise validator hardware requirements, increasing operational costs and risking concentration of the fraud-detection role among a select group of sophisticated actors. Second, any threshold specification must be robust enough to distinguish genuine network variance from strategic manipulation without being so sensitive that it produces false positives. Third, the arbitration protocol itself introduces a new attack surface: a fisherman-based system could be gamed via coordinated reporting. The design of any arbitration protocol would need to account for this, potentially through incentive mechanisms that make fisherman operation viable for smaller participants, or aggregation schemes that distribute compute across the fisherman set.

The concrete research question we are posing is this: Can a statistical fraud-proof framework be specified (defining threshold parameters, accounting for network variance, and determining how penalties scale) that is simultaneously robust enough to deter systemic latency manipulation, conservative enough to avoid penalizing honest variance, and simple enough to resist adversarial gaming by sophisticated operators? This is among the most technically demanding open problems in the MCP literature, and one that the broader research community has yet to resolve.

Hiding

Slashing is not the only possible response to timing and latency-manipulation games. Hiding addresses both content-visible ordering and timing manipulation, which neither asynchronous execution nor slashing can address on their own. Constellation implements partial hiding (i.e., transaction content is visible only to the receiving proposer(s) and to the leader after the cycle deadline), but does not achieve the full hiding property: the attack surface scales with the number of proposers a user chooses to submit to. Partial hiding narrows the attack surface relative to full visibility, but it does not close it. Full hiding, in which no party observes transaction content before confirmation, remains an open problem for Constellation.

The theoretical ideal is the approach outlined in Garimidi et al., which uses Hiding Erasure-Correcting Code (HECC) as a primitive. Unlike Constellation’s standard Reed-Solomon, HECC provides an information-theoretic guarantee that an adversary collecting fewer than a threshold of shreds learns nothing about transaction content. Constellation chose to continue with the erasure coding currently live on Solana via Turbine.

The most relevant recent development is Jito’s Block Assembly Marketplace (BAM), which uses Trusted Execution Environments (TEEs) to create an encrypted mempool where transactions remain private until execution. BAM demonstrates material, growing demand for content privacy on Solana. However, TEE-based hiding has limitations: it shifts trust assumptions to hardware manufacturers, a meaningful constraint for a protocol that aspires to trustless operation. Threshold encryption, as a more principled alternative, should be thoroughly explored.

BAM is significant insofar as it represents an off-protocol attempt to solve the content-visibility problem that Constellation defers. Jito is operationally positioned to provide transaction privacy at scale via BAM before Constellation even ships. This raises the question of whether protocol-level hiding remains urgent if an application-layer solution can already provide it. The answer entirely depends on trust assumptions and whether hardware manufacturers are trusted “enough” compared to what the protocol could theoretically guarantee. Nonetheless, this cannot be treated as a permanent solution. The likely path is that BAM will provide near-term privacy while eventual iterations of Constellation will explore threshold encryption as a longer-term alternative.

For a protocol that aspires to be the infrastructure of Internet Capital Markets, partial hiding is a meaningful step forward, but not the destination. The remaining gap is the difference between a market structure that is fair and one that is merely less unfair than the one that exists today. 

Protocol Complexity

Constellation is the most structurally ambitious upgrade proposed to Solana since its inception. It introduces new node roles, a new timing model that depends on UTC wall-clock synchronization, new erasure-coding passes, new message types, and new failure modes, all layered on top of Alpenglow, which is not yet live on mainnet. The question of whether this is the right moment to take on this complexity is a serious one that warrants more than mere optimism.

Solana garnered a notorious reputation for the outages that plagued the network during 2021 and 2022. Those outages share a common theme: they were caused by the inherent difficulty of reasoning about edge cases in a novel, high-throughput protocol under real-world load. The more than two years of uninterrupted uptime the network has since achieved is a genuine milestone, one forged in the painful fires of iteration. That track record argues for confidence in Solana’s maturity. 

Now, any protocol-level change of Constellation’s magnitude requires simultaneous implementation in both Agave and Firedancer. This requires alignment between two independent development teams on protocol semantics, edge cases, and timing assumptions that are novel to both. The complexity of achieving this for Alpenglow alone is already significant, and Constellation will only compound this. This is not an argument against proceeding. Rather, it is an argument that Constellation’s eventual SIMD must include a clear plan for multi-client implementation.

Financial institutions are beginning to come onchain. The stakes of a significant outage are materially higher than they were in 2021, both in terms of reputational and economic harm. We, as a community, need to be honest about the complexity that Constellation introduces. The eventual SIMD must be approached with the rigor that a global financial system warrants. 

Naturally, the question of why now arises. Do we really want to take on the risk of such an upgrade? Are there not incremental upgrades we could make over time to soften this change? The aforementioned explorations of async execution and slashing, for example, suggest that an incremental alternative is possible: a version of Constellation’s roadmap in which the ecosystem benefits from staged deployments of complementary upgrades. Whether this is seen as prudent or decelerationist is as much a values question as a technical one, and reasonable people can disagree. We are as much building a system of meaning as we are building a system for finance.

This is already happening in practice. Anza has confirmed that 200ms slots and two-slot leader windows will ship before Constellation. Solana will therefore see meaningful performance improvements that address some of the sequence-latency concerns the community has raised, which we discuss in the following section, without requiring the full complexity of MCP. If 200ms slots bring Solana’s confirmation path close enough to Constellation’s projected overhead that the marginal cost of MCP is small, the political case becomes substantially easier to make. However, if 200ms slots are perceived as “good enough” by current trading incumbents, the urgency for Constellation diminishes. Comparative latency projections, 200ms slots alone versus 200ms slots with Constellation, would help the community evaluate the incremental cost against the incremental guarantee. Of course, we will need to wait for Constellation’s eventual SIMD and proposed implementation.

The strongest argument for proceeding is the window that Alpenglow creates. Constellation inherits Alpenglow’s security model, uses Rotor as its data dissemination layer, and benefits from the removal of Tower BFT’s complexity. The marginal cost of adding MCP on top of Alpenglow is lower than the cost of starting from scratch with a future consensus design. Waiting isn’t cost-free, given our competitors’ explorations into bringing MCP onchain, and because deferring censorship resistance to a future upgrade cycle will inevitably bring its own complexity and political headwinds.

If not now, when?

The complexity is justified, but that justification needs to be earned through a rigorous specification, staged deployments, and empirical validation of the latency and bandwidth claims the community is currently debating on the basis of theory alone. 

Is Constellation IBRL-Aligned?

Sequence versus Inclusion Latency

The Solana community’s initial reaction to Constellation has been polarizing, to say the least. This reaction has surfaced an important debate that deserves a precise answer rather than a diplomatic one. The sharpest formulation came from Cavey’s tweet, stating that “MCP and IBRL are fundamentally incompatible.” He argues that MCP directly and unquestionably decreases bandwidth and increases latency in an attempt to improve market structure. Toly’s response to this was equally direct: “You are wrong. There is no way to reduce inclusion latency without MCP.”

Both are technically correct; they are measuring different things.

MCP decreases inclusion latency and increases sequence latency. They are not the same property, and conflating them is the source of most of the confusion in the current debate.

Sequence latency is the time from when a transaction is submitted to when it executes. This inherently increases under MCP, as the attester round, 50ms cycle window, and batch assembly step all add time that isn’t present in the current TPU submission path to a cooperative leader. The critique that rational agents sending to multiple proposers consume more bandwidth is correct. The coalesce window adding latency is correct. These are real costs that should be measured and presented to the community as trade-offs for removing hard censorship.
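A back-of-envelope accounting of the added sequence latency might look like this. Only the 50ms cycle duration comes from the whitepaper; the attestation and assembly figures are hypothetical placeholders awaiting empirical benchmarks:

```python
# Back-of-envelope sequence-latency accounting. Only the 50ms cycle duration
# is a whitepaper figure; every other number is an illustrative placeholder.
CYCLE_MS = 50  # Constellation's fixed proposer cycle

def worst_case_coalesce_wait(cycle_ms: int = CYCLE_MS) -> int:
    """A tx arriving just after a cycle opens waits nearly a full cycle."""
    return cycle_ms

def added_sequence_latency(attester_round_ms: int, batch_assembly_ms: int) -> int:
    """Extra time vs. today's direct-to-leader TPU path (placeholder inputs)."""
    return worst_case_coalesce_wait() + attester_round_ms + batch_assembly_ms

# e.g., with hypothetical 20ms attestation and 10ms assembly overheads:
print(added_sequence_latency(attester_round_ms=20, batch_assembly_ms=10))  # 80
```

Whatever the real component values turn out to be, this is the sum that must be benchmarked against the latency users currently lose to holding, scheduling, and timing games.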

Inclusion latency is the guaranteed time window within which a valid, fee-competitive transaction will be included. Today’s single-leader model offers no such bound: a leader who wants to delay or exclude a given transaction can do so, and there is no protocol mechanism to stop this. The latency users already experience includes all the friction from holding, scheduling, and timing games, as well as the selective ordering imposed by leaders. The counter, that real-world confirmation times include latency from these games and so the net user experience could improve, is directionally correct under this framing and is supported by the community discussions currently unfolding on X. 

The real question is which latency to optimize for.

FIFO versus FCFS versus FBO

Before examining which latency to optimize for, it is worth understanding a related debate that the community has been working through simultaneously: whether MCP is compatible with FIFO.

FIFO (First In, First Out) is a general ordering principle where transactions are processed in the order in which they arrive. Umberto argued at length that the answer is inherently nuanced. MCP can produce what is called “probabilistic FIFO,” but only under specific infrastructure conditions. Essentially, if a user is geographically close to enough proposers to avoid censorship, and those proposers are close enough to attesters to rapidly hit the 40% attestation threshold for guaranteed inclusion, then the user pragmatically experiences FIFO inclusion. That is, their transaction is included before any competitor has time to observe and react to it. The race ends at inclusion rather than execution. MCP approximates FIFO under those conditions as an emergent property rather than as a protocol rule.
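The quorum-formation race behind probabilistic FIFO can be sketched as follows. The 40% threshold is from the discussion above; every latency and stake weight below is invented for illustration:

```python
# Hypothetical model of attestation quorum formation: each attester is a
# (stake_weight, one-way latency in ms) pair. The 40% threshold comes from
# the probabilistic-FIFO discussion; the numbers below are illustrative only.

def time_to_quorum(attesters: list[tuple[float, float]],
                   threshold: float = 0.40) -> float:
    """Earliest time at which cumulative attested stake reaches the threshold."""
    total = sum(w for w, _ in attesters)
    acc = 0.0
    for weight, latency in sorted(attesters, key=lambda a: a[1]):
        acc += weight
        if acc / total >= threshold:
            return latency  # quorum forms when this attestation lands
    return float("inf")

# A dense nearby stake pocket forms quorum quickly; dispersed stake is slow:
concentrated = [(30, 5), (25, 6), (45, 80)]
dispersed = [(5, 5), (10, 40), (10, 55), (10, 70), (65, 90)]
assert time_to_quorum(concentrated) == 6    # 55% of stake within 6ms
assert time_to_quorum(dispersed) == 90      # quorum waits for the far pocket
```

The gap between the two scenarios is exactly the window in which a geographically advantaged observer can react to an in-flight transaction, which is why infrastructure distribution matters as much as the protocol rule.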

The problem is that Solana’s current infrastructure doesn’t meet those conditions. Stake is concentrated in a handful of regions, which means quorum formation is bottlenecked by the need to reach dense stake pockets. This concentration creates a window in which a “geographically advantaged” observer can frontrun a transaction in flight. Whether Constellation’s deployment is accompanied by the geographic distribution and attester density that probabilistic FIFO requires is as consequential as the protocol design itself. A protocol that guarantees censorship resistance but enables latency-based frontrunning due to sparsely located infrastructure will not deliver the market fairness its whitepaper promises.

A related but distinct question is whether Constellation could have implemented FCFS, but chose not to. While FIFO is an emergent infrastructure property, FCFS (First Come, First Served) is a specific protocol rule that ensures the first transaction to arrive is processed deterministically. It is worth noting that Constellation does order deterministically—transactions are sorted by priority fee within each batch—so the question is not whether ordering is protocol-enforced, but whether arrival time should govern that ordering relative to priority fees.

Recent debate has surfaced a more fundamental objection than those initially raised: FCFS may be entirely unenforceable in a trustless context. Validators can simply misrepresent the order in which transactions arrived without leaving any onchain artifacts. This is the same absence-of-evidence problem that makes timing manipulation unslashable in the traditional sense, and it may require more creative solutions (e.g., the statistical pattern-detection approach developed in the slashing subsection). A protocol rule that honest validators would follow and dishonest ones can silently ignore is not a meaningful guarantee. This reframes Constellation’s omission of FCFS: it is not a design preference that warrants explanation but a recognition that FCFS, in a permissionless validator set, may not yet be implementable on Solana as a hard protocol property, at least under current assumptions. If so, the SIMD’s task is to state this constraint explicitly and justify priority-fee ordering as the correct default. Leaving the community to debate FCFS as if it were a viable alternative that Constellation chose not to implement, rather than a property that may not be implementable at all, will only generate more friction.

Priority-fee ordering within a fixed time window is not a novel compromise. It is known as a frequent batch auction (FBA), a market microstructure design with significant academic support. Budish, Cramton, and Shim, for example, argue in The High-Frequency Trading Arms Race: Frequent Batch Auctions as a Market Design Response (2015) that discrete-time batch auctions with uniform clearing prices eliminate the arms race for speed that continuous-time markets create, replacing latency-based competition with price-based competition. Constellation draws on this design to introduce Fixed Batch Ordering (FBO) based on priority fees: within each 50ms cycle, transactions compete on fees rather than arrival time, and all transactions in the same batch receive the same ordering treatment. Batch-auction markets with uniform clearing prices are precisely the class of applications identified earlier as not yet existing at scale on Solana.
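The FBO rule can be sketched in a few lines. The field names, the `CYCLE_MS` constant, and the signature tiebreak are assumptions for illustration, not Constellation’s specification:

```python
from dataclasses import dataclass

CYCLE_MS = 50  # hypothetical batch window, matching the 50ms cycle

@dataclass(frozen=True)
class Tx:
    sig: str           # transaction signature
    priority_fee: int  # fee bid, in lamports
    arrival_ms: int    # when the proposer received it

def batch_of(tx: Tx) -> int:
    """Every tx arriving in the same 50ms window lands in the same batch:
    arrival *within* the window confers no ordering advantage."""
    return tx.arrival_ms // CYCLE_MS

def order_batch(txs: list[Tx]) -> list[Tx]:
    """Within a batch, order by priority fee (highest first); break ties
    deterministically by signature so all nodes agree on the result."""
    return sorted(txs, key=lambda t: (-t.priority_fee, t.sig))

txs = [
    Tx("aa", priority_fee=100, arrival_ms=3),   # fastest, lowest fee
    Tx("bb", priority_fee=500, arrival_ms=48),  # slowest, highest fee
    Tx("cc", priority_fee=500, arrival_ms=20),  # ties on fee with "bb"
]
assert {batch_of(t) for t in txs} == {0}        # all in the same batch
print([t.sig for t in order_batch(txs)])        # ['bb', 'cc', 'aa']
```

Note that the earliest arrival, "aa", orders last: inside the window, only the fee bid matters, which is exactly the shift from latency-based to price-based competition that the FBA literature describes.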

The FIFO, FCFS, and FBO debates are largely about the same underlying concern: who controls ordering once censorship resistance is guaranteed, and whether the market structure Constellation seeks to create is actually fair or merely less unfair than what exists today. Constellation closes the most legible form of manipulation. What replaces it depends on the choices the whitepaper defers to its SIMD to make.

So, What Are We Optimizing For?

Sequence latency matters most for existing trading applications (i.e., AMMs, prop desks, CLOBs). These applications are designed around the assumption that whoever is fastest and most competitive on fees wins, and they have built their infrastructure accordingly. On this view, trading should be bound only by the laws of physics, delivering an unparalleled user experience on Solana. For some of these users, Constellation is a step backward in the metric that matters most to them. The concern is that Solana could be making the same fatal mistake Ethereum once made: prioritizing market structure over performance, which could drive execution off the chain. This is a legitimate risk worth discussing.

For the financial applications Constellation is designed to enable (i.e., onchain auctions, order books with reliable inclusion guarantees, censorship-resistant DeFi protocols), inclusion latency is the right metric. A limit order that can be frontrun or selectively delayed provides weaker guarantees than an exchange-style order, regardless of how fast the confirmation nominally is. Solana can now support batch-auction trading applications with uniform clearing prices, where sequencing shouldn’t affect the execution price. This is a class of applications that largely doesn’t exist on Solana today, precisely because inclusion guarantees aren’t available. Constellation arguably over-indexes on a design optimized for users who don’t yet exist at scale on Solana, at the expense of users who do, an argument core contributors are already making.

The sequence latency concern is contextualized by the fact that Solana is currently losing meaningful ground on perpetuals trading to Hyperliquid, a purpose-built, centralized-sequencer perps exchange that makes no pretense of decentralization but offers the sub-millisecond execution experience that sophisticated traders and applications require. Hyperliquid made a deliberate choice to build a product professionals actually want to use, sacrificing core tenets of what makes crypto “crypto.” The implicit risk is that Constellation, by adding communication overhead and attestation rounds to the confirmation path, is making the same trade as Hyperliquid, but in the wrong direction. Critics are vocal that sacrificing the performance edge that currently keeps Solana competitive with centralized trading applications, without yet having the financial applications to justify the trade, is the wrong call.

This is not a concern that should be dismissed lightly. We can reframe our earlier question of whether to optimize for sequence or inclusion latency as whether Solana can afford to make that trade, given where its current competition is coming from.

There is a deeper issue here, though. The Hyperliquid comparison makes clear that the meaning of IBRL may have shifted over time. Toly and Raj’s original motivation for building Solana was censorship resistance: “In order to allow DeFi products to attract billions of users and devices, we need to scale censorship resistance….It is the single most important problem to be solving, and our entire motivation for building Solana.” IBRL emerged much later as the engineering articulation of that mission: build fast enough that a decentralized network could outcompete centralized infrastructure on the metrics that matter. Since then, IBRL has taken on a techno-optimist meaning of its own, becoming ubiquitous in Solana’s cultural zeitgeist. It is equal parts engineering diktat, cultural shibboleth, and secular prayer. For many, it has become the goal rather than the means, with sequence latency minimization as an end in itself, decoupled from the censorship-resistance objective it was designed to serve.

If this drift has occurred, which it seemingly has, Constellation will face staunch cultural resistance. This is not a new dynamic, given the community’s rejection of SIMD-228, a contentious inflation-reduction proposal that failed to pass governance voting. Even the most broadly beneficial proposals can fail when they conflict with entrenched community priors. Constellation is more nuanced because its bandwidth and latency costs are real, but their net effect on user experience is unquantified. Reaching firm conclusions in either direction is premature without the data. What the community lacks, and what Anza will need to provide for a convincing SIMD, is empirical data on what the confirmation path looks like under realistic network conditions. Alpenglow did not encounter this because of its IBRL alignment: a 100x reduction in time to transaction finality and streamlined consensus. Constellation’s case is harder to make instinctively, but no less real if future benchmarks support it.

It appears that the community will judge a censorship-resistant upgrade by a performance metric it was never designed to optimize. The more productive question is whether the trade is worth it. There is a real, measurable cost: some additional sequence latency and bandwidth in exchange for a hard, protocol-enforced guarantee that no leader can selectively exclude a transaction. This guarantee is a prerequisite for the financial applications Solana is currently trying to attract.

Our view is that Constellation is IBRL-aligned under the correct interpretation of what IBRL was always meant to aim for. Whether the community arrives at the same conclusion will depend less on technical merits than on whether the empirical case gets made and the design choices get explained.  

Conclusion

Constellation is the first formal, protocol-level proposal to bring MCP to a production blockchain at scale. It solves hard censorship structurally: fee-competitive transactions attested to by a sufficient quorum cannot be excluded from a valid block. This cryptographic guarantee changes what can be built on Solana.

What Constellation knowingly defers is equally important. Content-visible ordering is partially mitigated by Constellation’s submission model, in which only the receiving proposer sees transaction content, but the residual attack surface scales with the number of proposers a user submits to. Timing and latency manipulation remain the largest unsolved problem and are unpunishable under the current design. The potential paths forward (i.e., asynchronous execution, slashing, hiding) are identified but unspecified, and each introduces its own complexities. Constellation’s whitepaper is honest about these boundaries, and its eventual SIMD should be equally so.

The hardest question Constellation poses is whether the tradeoffs it introduces are ones Solana can afford to make. There is a real, measurable cost in sequencing latency and bandwidth that has yet to be quantified. Equally, there is a real, unmeasured benefit in inclusion guarantees that do not yet have an application base to justify them at scale. The community is being asked to invest in infrastructure for financial applications that largely do not exist on Solana today, at the potential expense of trading applications that do. Whether this is visionary or premature depends on data that the community does not yet have.

Constellation’s case will ultimately be made or broken by its future empirical benchmarks under realistic conditions. What the confirmation path looks like with 200ms slots alone versus 200ms slots under Constellation is the single most important number Anza can provide. Until then, the community is debating tradeoffs it cannot quantify.

Our view is that Constellation represents the correct next step in the protocol roadmap Alpenglow has opened. It is IBRL-aligned under the interpretation of IBRL that Solana was initially built to serve. However, this view is contingent on the SIMD earning the rigor that a global financial system warrants—in its specification, staged deployment, testing, and the empirical case it presents to the community it asks to adopt.
