Intention’s network is organized as a set of stake-weighted validators that run consensus and execute state transitions, plus clients that submit transactions and consume state through RPC. This page describes the operator-level picture: who does what, how data flows, and how external consumers hook into it.

Participants

Validators. Intention is operated by a set of n stake-weighted validators with total stake S = Σᵢ sᵢ. A validator combines four responsibilities that general-purpose chains often split across separate roles:
  • Consensus participation — voting in the HotStuff-family pipeline described in IntentionBFT.
  • Execution — running IntentionKernel against the canonical transaction order to produce the block’s outputs.
  • Price observation — subscribing to approved external venues and producing signed price claims for the in-consensus price quorum.
  • Data dissemination — propagating transaction batches to peers through the batch availability layer.
A validator that cannot perform all four does not qualify to propose blocks: price-aware leader eligibility is a hard precondition.

Clients. Clients submit transactions and consume state via RPC; they do not participate in consensus. A client can be a browser wallet, an API bot, an institutional custodian, an indexer service, or an autonomous agent subscribing to the Verifiable Execution Stream.

Full nodes and light clients. Full nodes replay the chain against the same execution engine validators use, without voting, to serve high-volume RPC and archive queries. Light clients verify block headers and quorum certificates without replaying state.
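The light-client check described above can be sketched as follows. This is a minimal illustration, not Intention's actual API: the `Header` and `QuorumCert` shapes, the field names, and the `f_stake` parameter are all hypothetical, and individual signature verification is elided.

```python
# Hypothetical sketch of a light client's header check. A light client
# accepts a header once its quorum certificate carries acknowledgments
# from validators holding at least 2f+1 stake; it never replays state.
from dataclasses import dataclass

@dataclass
class QuorumCert:
    block_hash: str
    signer_stakes: dict[str, int]   # validator id -> stake that signed

@dataclass
class Header:
    block_hash: str
    parent_hash: str

def verify_header(header: Header, qc: QuorumCert, f_stake: int) -> bool:
    """Accept the header if the certificate matches it and the signing
    stake reaches the 2f+1 quorum threshold (per-signature cryptographic
    checks are elided in this sketch)."""
    if qc.block_hash != header.block_hash:
        return False
    return sum(qc.signer_stakes.values()) >= 2 * f_stake + 1
```

The point of the sketch is that the light client's trust reduces to one stake-weighted threshold comparison over a certificate, with no execution or state replay involved.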

Batch dissemination in the Narwhal / Bullshark family

In a naive consensus protocol, a leader proposes a block whose payload contains all of the transactions for the round, and validators must download the entire payload before they can vote. This couples consensus message size to throughput and becomes a bottleneck at high load. IntentionBFT decouples data dissemination from consensus ordering using a construction in the Narwhal / Bullshark / Quorum Store family. The mechanism works in two layers that run in parallel:
  1. Background batching. Validators continuously broadcast batches of transactions to one another. Each validator’s receipt of a batch is acknowledged so that the originator can prove the batch is held by a 2f+1 stake-weighted set before it is referenced in a block proposal.
  2. Consensus over digests. When a leader proposes a block, the proposal references batches by their cryptographic digests rather than carrying the full transaction contents. Validators verify that the referenced batches are held by a 2f+1 stake-weighted set and can vote without re-downloading the transactions.
The result is that consensus messages remain small and bounded regardless of throughput, while the actual transactions flow through the high-bandwidth dissemination layer in parallel with consensus.
The 2f+1 availability requirement prevents the “unreplayable committed block” failure mode, in which only f Byzantine validators hold a referenced batch and subsequently go offline. Because at most f of the 2f+1 acknowledging validators can be Byzantine, at least f+1 honest validators hold every referenced batch, so the honest majority can always reconstruct a committed block’s contents.
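The two layers above can be sketched in a few lines. This is an illustrative model, not Intention's implementation: `batch_digest` and `make_proposal` are hypothetical names, SHA-256 stands in for whatever hash the protocol actually uses, and acknowledgment tracking is reduced to a digest-to-stake map.

```python
# Illustrative sketch of digest-only proposals: the leader references
# batches by hash and includes only those whose acknowledged stake has
# reached the 2f+1 availability threshold.
import hashlib

def batch_digest(txs: list[bytes]) -> str:
    """Digest a batch of raw transactions (SHA-256 here is an assumption)."""
    h = hashlib.sha256()
    for tx in txs:
        h.update(tx)
    return h.hexdigest()

def make_proposal(acked_stake: dict[str, int], f_stake: int) -> list[str]:
    """acked_stake maps a batch digest to the total stake that has
    acknowledged holding the batch; only digests at or past 2f+1 are
    eligible to be referenced in the block proposal."""
    threshold = 2 * f_stake + 1
    return [d for d, stake in acked_stake.items() if stake >= threshold]
```

Note that the proposal is a list of fixed-size digests, so its size is independent of how many transactions the referenced batches contain: that is the decoupling the section describes.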

Transaction lifecycle

A transaction flows through a validator in five stages that pipeline across blocks:
  1. Submission and batching. A client submits the transaction to an entry node, which adds it to a batch and disseminates the batch to peer validators.
  2. Availability. Once 2f+1 stake-weighted validators have acknowledged the batch, it is eligible for inclusion in a block proposal.
  3. Consensus ordering. A block leader selects available batches and proposes a block. The three-phase HotStuff pipeline certifies the block, committing simultaneously to the ordered transaction list (Canonical Sequencing Commitment) and to the round’s price quorum.
  4. Sequential execution. Once a block is finalized, validators execute it against their in-memory state using the byte-determinism discipline of OTD. Every validator produces the same per-transaction output array.
  5. Attribution and publishing. The per-transaction output array, the system-write channel, and the quorum certificate are published to the Verifiable Execution Stream, where clients, indexers, and autonomous agents consume them.
Batch dissemination, consensus ordering, sequential execution, per-transaction attribution, and ledger certification are independent stages that pipeline across blocks. A validator is simultaneously disseminating batch B+3, running consensus on block B+2, executing block B+1, and publishing VES for block B.
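The pipelining claim at the end of the lifecycle can be made concrete with a toy model. The stage names and the `active_work` helper are hypothetical, purely to show the offset arithmetic: when block B is being published, each earlier stage is one block further ahead.

```python
# Toy model of stage pipelining across blocks (illustrative only):
# the four stages named below mirror the lifecycle description, and each
# stage works on a consecutively later block than the one after it.
STAGES = ["disseminate", "consensus", "execute", "publish"]

def active_work(publishing_block: int) -> dict[str, int]:
    """Map each pipeline stage to the block it is processing while
    `publishing_block` is being published: publish handles B, execute
    handles B+1, consensus B+2, dissemination B+3."""
    depth = len(STAGES) - 1
    return {stage: publishing_block + (depth - i)
            for i, stage in enumerate(STAGES)}
```

For example, `active_work(100)` places dissemination at block 103, consensus at 102, execution at 101, and publishing at 100, matching the B / B+1 / B+2 / B+3 picture above.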

Client and indexer access

Clients subscribe to the Verifiable Execution Stream via RPC. Each block on the stream carries a 2f+1 aggregate signature that any consumer can check independently, so VES is the authoritative data surface for every downstream consumer:
  • Trading clients and wallets read order-book state, account state, and fill events from VES.
  • Risk and compliance systems receive every state change already attributed to its originating transaction, with cryptographic provenance — no indexer reconstruction required.
  • Autonomous agents subscribe directly to VES and act at block cadence without trusting an intermediary.
  • Third-party indexers may exist as value-added layers (dashboards, analytics, historical queries) but they are no longer infrastructure-critical. The chain is the index.
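A consumer loop over VES can be sketched as follows. This is a hedged sketch under stated assumptions: `verify_aggregate` is a placeholder for whatever aggregate-signature scheme Intention actually uses, and the `signed_stake` field on stream blocks is hypothetical.

```python
# Hedged sketch of a VES consumer: every block is checked against its
# quorum certificate before any downstream logic acts on it, so a
# consumer never processes an uncertified block. Names are illustrative.
from typing import Callable, Iterable, Iterator

def consume_ves(stream: Iterable,
                verify_aggregate: Callable[[object], bool],
                f_stake: int) -> Iterator:
    """Yield only blocks whose aggregate signature verifies and whose
    signing stake reaches the 2f+1 threshold; skip everything else."""
    threshold = 2 * f_stake + 1
    for block in stream:
        if not verify_aggregate(block):
            continue        # bad certificate: drop (or alarm) and move on
        if block.signed_stake < threshold:
            continue        # insufficient quorum stake behind the block
        yield block
```

Because the check is purely local, a wallet, risk system, or autonomous agent can apply it without trusting the RPC endpoint that served the stream.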

Operational resilience

Each validator runs a snapshot manager that persists state to NVMe with double-buffering so the hot path does not stall on snapshot I/O. On restart, a validator loads the most recent snapshot and replays committed blocks to catch up before rejoining consensus. If a minority of validators are unavailable (at most f in stake), IntentionBFT continues producing blocks. The reputation heuristic demotes unresponsive validators from leader rotation until they recover, so the chain does not stall on a failed leader.
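The double-buffering idea can be illustrated with a minimal sketch. This is not Intention's snapshot manager: the class name, the JSON encoding, and the two-slot file layout are assumptions chosen to show the core invariant, namely that the previous snapshot stays intact on disk until the new one is durable.

```python
# Minimal sketch of double-buffered snapshotting (illustrative only):
# writes alternate between two file slots, so a crash mid-write can at
# worst lose the in-flight snapshot, never the last completed one.
import json
import os

class SnapshotManager:
    def __init__(self, directory: str):
        self.dir = directory
        self.slot = 0  # next slot to write into (0 or 1)

    def _path(self, slot: int) -> str:
        return os.path.join(self.dir, f"snapshot_{slot}.json")

    def persist(self, height: int, state: dict) -> None:
        """Write the new snapshot to the inactive slot, fsync it, then
        flip slots; the previous snapshot is untouched until the new
        one is fully on disk."""
        with open(self._path(self.slot), "w") as fh:
            json.dump({"height": height, "state": state}, fh)
            fh.flush()
            os.fsync(fh.fileno())
        self.slot ^= 1

    def latest(self):
        """On restart, pick the readable snapshot with the greatest
        height; a torn or missing slot is simply skipped."""
        best = None
        for slot in (0, 1):
            try:
                with open(self._path(slot)) as fh:
                    snap = json.load(fh)
            except (FileNotFoundError, json.JSONDecodeError):
                continue
            if best is None or snap["height"] > best["height"]:
                best = snap
        return best
```

A restarting validator would call `latest()`, load that state, and then replay committed blocks from the snapshot height forward before rejoining consensus, as described above.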