Introduction
On March 9th, 2026, zkSecurity was commissioned to perform a security audit of Aptos Labs’ encrypted mempool. The encrypted mempool allows users to encrypt their transactions before submission so that transaction contents remain private until they are finalized in a block, protecting against MEV (Maximal Extractable Value) attacks such as front-running and sandwich attacks.
The audit was conducted by two consultants over four weeks (40 engineering days). A number of observations and findings were reported to the Aptos Labs team and are detailed in the latter sections of this report.
Scope
The audit covered the Aptos encrypted mempool implementation in the Aptos Core repository (aptos-labs/aptos-core) and was divided into three phases, each targeting a distinct component.
Phase 1: Batch threshold encryption (commit ba538036b4d1ba44a9dcff7d3c17bc3d861981b5)
- Batched IBE threshold encryption scheme and its underlying components
- Hybrid encryption and key management (KEM and data encapsulation)
- Symmetric encryption, key derivation, randomness, and nonce handling
- Rust and TypeScript implementation of the scheme
Phase 2: Chunky PVSS and underlying components (commit a8b7a672fefd4fd55c72eccf2880ce158893ed26)
- Chunky PVSS scheme and its underlying cryptographic components
- DeKART zero-knowledge range proof
- Rust implementation of these schemes
Phase 3: Encrypted mempool integration (commit e582a7599c8a8880bda3fa887a6b89142fbfabf4)
- Integration of the batched IBE threshold encryption
- Integration of Chunky DKG
- Integration of the decryption secret key share derivation and aggregation
- Transaction encryption, proposal, validation, decryption, and execution with respect to the encrypted mempool
Gas metering for encrypted transactions is out of scope for this audit.
Security Goals
The audit evaluated the following security properties of the Aptos encrypted mempool.
Cryptographic soundness. The core cryptographic building blocks must be secure:
- The batched IBE threshold encryption scheme (BIBE) must provide CCA security.
- The Chunky PVSS scheme must correctly distribute secret shares among validators.
- The DeKART zero-knowledge range proof must be sound (and zero-knowledge).
Transaction confidentiality. A transaction’s content must remain private until it is finalized in a block and decrypted immediately before execution. Specifically:
- An unfinalized encrypted transaction must not be decryptable by any party.
- The decryption secret for a round must only be derivable after the round is finalized.
- A transaction encrypted in one epoch must not be decryptable using a key from a different epoch.
Transaction integrity. The encrypted mempool must prevent adversarial manipulation of transactions. Specifically:
- An encrypted transaction must not be decryptable to a different plaintext than what the sender intended.
- An adversary must not be able to copy, replay, or malleate another user’s encrypted transaction.
- A transaction that is decrypted should be executed. An adversary should not be able to cause a transaction to be decrypted without execution, leaking the sender’s intent without their consent.
Liveness. The integration must not introduce new liveness vulnerabilities. Specifically:
- A malicious validator must not be able to stall DKG, block secret share reconstruction, or prevent the network from progressing.
- A malicious validator must not be able to cause the DKG to deal an inconsistent secret, or cause the derived decryption key in a round to be inconsistent.
- A malicious user must not be able to submit a transaction that causes other decryptions to fail or crash validators.
- A malicious user or validator must not be able to consistently spam the network to prevent other encrypted transactions from being executed.
Overview
The Aptos encrypted mempool allows users to encrypt their transactions locally, keeping transaction contents encrypted while in the mempool. Transactions are only decrypted after being included in a finalized block, immediately before execution. The system uses a threshold encryption scheme in which only a threshold of validators (by stake) can decrypt a transaction. If a transaction is not finalized, it will not be decrypted under the honest-majority validator assumption. By keeping transaction contents private until finalization, the system aims to mitigate some MEV attacks such as sandwich attacks and front-running.
The two core cryptographic components of Aptos’s encrypted mempool are the batch threshold encryption scheme and the distributed key generation (DKG). We first describe the batch threshold encryption scheme, then the distributed key generation process. The DKG relies on DeKART range proofs and a Σ-protocol signature of knowledge, before being integrated into consensus.
Overview of the Batch Threshold Encryption Scheme
The batch threshold encryption scheme allows users to encrypt their transactions individually, while a threshold of validators can efficiently decrypt them together in a batch. The scheme is described in the TrX paper.
Threshold Encryption
To achieve threshold encryption, the scheme combines BLS threshold signatures with witness encryption. A BLS threshold signature requires $t$ out of $n$ validators to produce a valid aggregate signature. The user encrypts their transaction so that only someone who holds the BLS signature (the witness) over a specific message can decrypt it. To decrypt, each validator produces a partial signature; once $t$ partial signatures are collected, the aggregate signature is derived and the transaction is decrypted.
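For intuition, the aggregate signature comes from Lagrange interpolation in the exponent. A minimal sketch in our notation (not taken from the TrX paper), assuming validator $i$ holds a Shamir share $sk_i$ of the joint key $sk$ and the signed message is a group element $M$:

$$\sigma_i = M^{sk_i}, \qquad \sigma = \prod_{i \in S} \sigma_i^{\lambda_{i,S}} = M^{\sum_{i \in S} \lambda_{i,S} \cdot sk_i} = M^{sk},$$

where $S$ is any signer set meeting the threshold and $\lambda_{i,S}$ are the Lagrange coefficients for interpolating at zero.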
A naive implementation would have each user sample a random message and encrypt their transaction to the BLS threshold signature over that message. The transaction carries the message so that validators can sign it. However, this does not scale: different transactions require different signatures, so each validator’s signing work grows proportionally with the number of encrypted transactions.
An alternative approach is to have all users encrypt their transactions to the signature over the same message, so that validators only need to sign once. For example, users could encrypt to a signature over the target round number (or block height). However, there is no guarantee that a transaction is included in a specific round. If a transaction misses its target round but validators have already signed over that round number, the transaction would be decryptable even though it is never finalized.
Aptos mitigates this by having each user encrypt their transaction to the threshold signature over a KZG commitment to its random tag. Since a KZG commitment can commit to multiple tags at once, a single signature over one commitment can serve to decrypt an entire batch of transactions. As described in the TrX paper, a ciphertext with tag $t$ is encrypted so that it can be decrypted if and only if the decryptor knows a signature $\sigma$ under public key $pk$ over a KZG commitment $C$ to a vector of tags such that $t$ is one of the committed tags. Below we describe how this batch witness encryption scheme is constructed.
The Batch Witness Encryption
A batch of $B$ ciphertexts has tags $t_1, \ldots, t_B$. Define $f(X) = \prod_{i=1}^{B} (X - t_i)$ and the KZG commitment $C = g_1^{f(\tau)}$, where $\tau$ is the trapdoor of the powers-of-tau CRS. The user wants to encrypt to the signature $\sigma = (g_1 \cdot C)^{sk}$ under public key $pk = g_2^{sk}$, where $C$ commits to a polynomial that has the ciphertext's tag $t$ as one of its roots.
The relationship between the signature and the commitment can be verified using two pairing product equations (PPEs):
PPE 1 (signature): $e(g_1 \cdot C,\ pk) = e(\sigma,\ g_2)$
PPE 2 (KZG opening): $e(C,\ g_2) = e(\pi,\ g_2^{\tau - t})$
The witness for decrypting a ciphertext with tag $t$ has three elements:
- $\sigma = (g_1 \cdot C)^{sk}$: threshold BLS signature (shared across batch).
- $C = g_1^{f(\tau)}$: KZG commitment (shared across batch).
- $\pi = g_1^{q(\tau)}$ where $q(X) = f(X) / (X - t)$: KZG opening proof (per-tag).
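To see why PPE 2 holds for a valid witness: $t$ is a root of $f$, so $f(X) = q(X) \cdot (X - t)$, and the exponents on both sides agree:

$$e(C, g_2) = e(g_1, g_2)^{f(\tau)} = e(g_1, g_2)^{q(\tau)(\tau - t)} = e(g_1^{q(\tau)}, g_2^{\tau - t}) = e(\pi, g_2^{\tau - t}).$$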
Building the Encryption
We want to encrypt a message so that only someone who possesses all three witness elements ($\sigma$, $C$, and $\pi$) can decrypt it.
At encryption time, we have none of the witness. The only computable pairing value in the left-hand side of PPE 1 is $e(g_1, pk)$. We sample a random field element $\rho$ and set

$$k = e(g_1, pk)^{\rho}$$

as the encapsulation key. The goal is that anyone with all three witness elements can derive the same encapsulation key from the two PPEs.
Releasing $\rho$ directly would allow anyone to compute $k$. Instead, we release ciphertext elements (hints) that allow someone with the full witness to recover $k$, but reveal nothing to anyone else.
Step 1: Using PPE 1 alone
Expand $k$ using PPE 1:

$$k = e(g_1, pk)^{\rho} = \left( \frac{e(\sigma, g_2)}{e(C, pk)} \right)^{\rho} = e(\sigma, g_2^{\rho}) \cdot e(C, pk^{-\rho}).$$

If we release $g_2^{\rho}$ and $pk^{-\rho}$ as hints, anyone with $\sigma$ and $C$ can pair them and recover $k$.
The problem: PPE 2 is not enforced. The decryptor does not need $\pi$ at all. This means anyone with a valid signature and any commitment (not necessarily one where $f(t) = 0$) could decrypt. We need to bind decryption to knowledge of a valid KZG opening proof.
Step 2: Entangling PPE 2
The idea is to hide the $pk^{-\rho}$ hint behind PPE 2, so that extracting it requires $\pi$.
Sample a second random scalar $\beta$ and release $pk^{-\rho} \cdot g_2^{\beta}$ instead of $pk^{-\rho}$ alone. Now the decryptor can pair $C$ with this combined hint, but gets an extra term that they need to cancel:

$$e(\sigma, g_2^{\rho}) \cdot e(C, pk^{-\rho} \cdot g_2^{\beta}) = k \cdot e(C, g_2)^{\beta}.$$

The second factor $e(C, g_2)^{\beta}$ is unknown to the decryptor. But PPE 2 provides exactly the bridge to compute it if and only if the decryptor knows $\pi$:

$$e(C, g_2)^{\beta} = e(C, g_2^{\beta}) = e(\pi, (g_2^{\tau - t})^{\beta}).$$

The first equality is just exponentiation. The second follows from PPE 2. So we release $(g_2^{\tau - t})^{-\beta}$ as a third hint. Anyone who knows $\pi$ can pair it with this hint and cancel the missing term. Without $\pi$, they're stuck. Note that the user can compute this hint without knowing $\tau$ directly, given the CRS element $g_2^{\tau}$: $(g_2^{\tau - t})^{-\beta} = (g_2^{\tau} \cdot g_2^{-t})^{-\beta}$.
Step 3: The ciphertext
There are three hints, matching the implementation:

$$ct_1 = g_2^{\rho}, \qquad ct_2 = pk^{-\rho} \cdot g_2^{\beta}, \qquad ct_3 = (g_2^{\tau - t})^{-\beta}.$$

The encapsulation key $k = e(g_1, pk)^{\rho}$ depends only on $\rho$. The scalar $\beta$ is structural: it creates the entanglement with PPE 2 and cancels out completely during decryption.
We can verify that anyone with the witness can recover $k$ from the hints:
Expand using PPE 1: $k = e(\sigma, g_2)^{\rho} \cdot e(C, pk)^{-\rho}$.
Pull the scalars into the second argument: $k = e(\sigma, g_2^{\rho}) \cdot e(C, pk^{-\rho})$.
Substitute $pk^{-\rho} = ct_2 \cdot g_2^{-\beta}$ to split the second term: $k = e(\sigma, ct_1) \cdot e(C, ct_2) \cdot e(C, g_2^{-\beta})$.
Apply PPE 2 to the last term: $e(C, g_2^{-\beta}) = e(\pi, (g_2^{\tau - t})^{-\beta}) = e(\pi, ct_3)$:

$$k = e(\sigma, ct_1) \cdot e(C, ct_2) \cdot e(\pi, ct_3).$$
Every factor is a pairing of a witness element with a hint.
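As a sanity check of this algebra, the following toy Rust program (our own illustration, not the audited code) replays the derivation with group elements replaced by their discrete logs modulo a prime, so that a "pairing" is just a product of exponents. It checks the equations, not the cryptography; all parameter values are arbitrary.

```rust
// Toy check of the three-hint KEM algebra: group elements are represented by
// their exponents mod P, and "pairing" two elements multiplies exponents.
const P: u128 = 2_147_483_647; // 2^31 - 1, standing in for the scalar field

fn mul(a: u128, b: u128) -> u128 { a * b % P }
fn add(a: u128, b: u128) -> u128 { (a + b) % P }
fn neg(a: u128) -> u128 { (P - a % P) % P }

fn main() {
    // toy trapdoor tau, BLS secret key sk, and tags t (ours) and u (another user's)
    let (tau, sk, t, u) = (123_456u128, 987_654u128, 424_242u128, 777_777u128);

    // f(X) = (X - t)(X - u), so C = g1^{f(tau)} and pi = g1^{q(tau)} with q(X) = X - u
    let f_tau = mul(add(tau, neg(t)), add(tau, neg(u)));
    let q_tau = add(tau, neg(u));
    // sigma = (g1 * C)^{sk}, i.e. exponent (1 + f(tau)) * sk
    let sigma = mul(add(1, f_tau), sk);

    // encryption: sample rho and beta; hints ct1, ct2, ct3 as in Step 3
    let (rho, beta) = (31_337u128, 55_555u128);
    let ct1 = rho;                                // g2^{rho}
    let ct2 = add(mul(neg(sk), rho), beta);       // pk^{-rho} * g2^{beta}
    let ct3 = mul(neg(beta), add(tau, neg(t)));   // (g2^{tau - t})^{-beta}
    let k_enc = mul(sk, rho);                     // k = e(g1, pk)^{rho}

    // decryption: pair each witness element with its hint and multiply the results
    let k_dec = add(add(mul(sigma, ct1), mul(f_tau, ct2)), mul(q_tau, ct3));
    assert_eq!(k_enc, k_dec);
    println!("toy KEM algebra checks out: k = {k_enc}");
}
```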
Decryption
We can split the decryption process into public precomputation (pipelined during voting) and a final step on the critical path.
Phase 1: Precomputation (public, per-ciphertext). This is where the decryptor “peels off” the blinding using $C$ and $\pi$:

$$\theta = e(C, ct_2) \cdot e(\pi, ct_3).$$

The two pairings correspond to the two sides of the entanglement from Step 2. No threshold signature is needed. This phase runs during voting rounds, once the list of tags is fixed, at which point $C$ and $\pi$ can be computed by the validator.
Phase 2: Final decryption (requires $\sigma$, critical path):

$$k = e(\sigma, ct_1) \cdot \theta.$$
This is computed after the threshold signature is successfully generated.
3-Element vs 2-Element
The TrX paper also introduces a 2-element scheme that merges the two PPEs into one (raising PPE 2 by $sk$ and substituting), eliminating $C$ as a witness element:

$$e(g_1, pk) = e(\sigma, g_2) \cdot e(\pi, pk^{\tau - t})^{-1}.$$

The implementation uses the 3-element scheme because it does not require $pk^{\tau}$ as part of the encryption key, which simplifies the PVSS setup.
| | 3-element | 2-element |
|---|---|---|
| Ciphertext | 3 elements | 2 elements |
| Random scalars | 2 ($\rho$, $\beta$) | 1 ($\rho$) |
| Encryption key | $pk$, $g_2^{\tau}$ | $pk$, $pk^{\tau}$ |
| Precomputation | 2 pairings / ct | 1 pairing / ct |
| Critical path | 1 pairing / ct | 2 pairings / ct |
Protection Against Linear Malleability Attacks
Because KZG commitments and BLS signatures are both linear, an adversary who obtains signatures for two batches can combine them to forge a valid witness for a tag that was never finalized. Given signatures $\sigma_A, \sigma_B$ on commitments $C_A, C_B$, they can pick coefficients with $\alpha_A + \alpha_B = 1$ and compute $\sigma^* = \sigma_A^{\alpha_A} \cdot \sigma_B^{\alpha_B}$, which is a valid signature on $C^* = C_A^{\alpha_A} \cdot C_B^{\alpha_B}$. By choosing $\alpha_A$ to make a target tag a root of the combined polynomial $\alpha_A f_A(X) + \alpha_B f_B(X)$, the adversary forges a full witness for that tag.
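Concretely, planting a root at a target tag $t^*$ is a single linear equation in the coefficients:

$$\alpha_A f_A(t^*) + \alpha_B f_B(t^*) = 0, \quad \alpha_A + \alpha_B = 1 \ \Longrightarrow\ \alpha_A = \frac{f_B(t^*)}{f_B(t^*) - f_A(t^*)},$$

which the adversary can evaluate because both batches' tags, and hence $f_A$ and $f_B$, are public.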
TrX fixes this with a per-batch randomizer $r$ baked into the CRS, so commitments become $C = g_1^{r \cdot f(\tau)}$, computed from published powers $g_1^{r \tau^i}$ without $r$ itself ever being revealed. Planting a root in the combined commitment now requires knowing the ratio $r_A / r_B$, which the adversary cannot compute from the CRS group elements alone.
Earlier schemes bound each batch to a specific block height $h$, making the attack fail because a linear combination of two signed messages for heights $h_A \neq h_B$ is not a valid message for any block height. However, this forced users to encrypt to a specific block height at encryption time. If the transaction missed that block, the ciphertext had to be re-encrypted. On Aptos with sub-second blocks, this is impractical. TrX's approach removes the block-height dependency entirely.
For cross-epoch and cross-chain separation, the public key $pk$ in the signature PPE binds witnesses to the epoch's aggregate public key, which changes each epoch, preventing replay across epochs or networks.
CPA-Secure to CCA-Secure
The witness encryption scheme above is only CPA-secure. TrX upgrades it to CCA security using the standard Boneh-Canetti-Halevi-Katz (BCHK) transformation, which turns any CPA-secure IBE into CCA-secure PKE by combining it with a one-time signature.
The tag is derived from a fresh ephemeral verification key $vk$ and the associated data $ad$:

$$t = H(vk, ad),$$

where $ad$ is the sender address in the Aptos integration. The full encryption procedure is:
- Sample a fresh one-time signature key pair $(sk_{ot}, vk)$
- Compute the tag $t = H(vk, ad)$
- Produce the witness encryption ciphertext $ct$ under tag $t$
- Sign the full ciphertext: $\sigma_{ot} = \mathsf{Sign}(sk_{ot}, ct)$
- Output $(vk, ct, \sigma_{ot})$
This provides two properties. Non-malleability: the ephemeral key signs the whole ciphertext, so any modification is caught by verify_ct, and if the adversary strips the signature and re-signs under their own key, the tag changes and the ciphertext no longer decrypts to the victim's plaintext. Tag uniqueness: since each ciphertext samples a fresh $vk$, tags are distinct with overwhelming probability, preventing cross-batch reuse; in particular, an attacker who observes a tag cannot craft a ciphertext with the same tag in order to front-run it. A user who reuses their own ephemeral key across two transactions will derive the same tag for both, causing them to be decrypted together as a single batch; this is a protocol violation that only harms the sender.
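The following hedged Rust sketch shows the shape of the BCHK wrapper, with ed25519 standing in for the one-time signature and SHA-256 for the tag hash (both our assumptions, not necessarily the primitives used in the implementation); we_encrypt is a placeholder for the 3-element witness encryption above. It assumes the ed25519-dalek (with the rand_core feature), rand, and sha2 crates.

```rust
use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
use rand::rngs::OsRng;
use sha2::{Digest, Sha256};

// Placeholder: the real scheme produces the 3-element ciphertext under `_tag`.
fn we_encrypt(_tag: &[u8; 32], msg: &[u8]) -> Vec<u8> {
    msg.to_vec()
}

struct CcaCiphertext {
    vk: VerifyingKey, // fresh ephemeral verification key, part of the ciphertext
    we_ct: Vec<u8>,   // inner (CPA-secure) witness-encryption ciphertext
    sig: Signature,   // one-time signature over the inner ciphertext
}

fn encrypt(msg: &[u8], sender_address: &[u8]) -> CcaCiphertext {
    let sk = SigningKey::generate(&mut OsRng); // fresh one-time key pair per ciphertext
    let vk = sk.verifying_key();
    // tag = H(vk, ad) with ad = sender address, as in the BCHK transform above
    let tag: [u8; 32] = Sha256::new()
        .chain_update(vk.as_bytes())
        .chain_update(sender_address)
        .finalize()
        .into();
    let we_ct = we_encrypt(&tag, msg);
    let sig = sk.sign(&we_ct); // binds the whole ciphertext to the ephemeral key
    CcaCiphertext { vk, we_ct, sig }
}

// Any mauling of `we_ct` (or a swapped `vk`, which also changes the tag) fails here.
fn verify_ct(ct: &CcaCiphertext) -> bool {
    ct.vk.verify(&ct.we_ct, &ct.sig).is_ok()
}

fn main() {
    let ct = encrypt(b"transfer 1 APT", b"0xsender");
    assert!(verify_ct(&ct));
}
```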
Overview of the Distributed Key Generation (Chunky)
The batch threshold encryption scheme requires validators to jointly hold a secret field element so that only a coalition above a threshold can reconstruct it. The process of generating and dealing this shared secret is called Distributed Key Generation (DKG). The common process is:
- Each validator generates a random secret, splits it into shares, and distributes the shares to the other validators.
- Each validator sums the shares it received to get its final key share.
- Each validator now holds a share of the global secret key.
Since each validator only contributes one piece of the secret, no single validator knows or controls the final secret.
Note that Aptos already has a DKG for its randomness beacon, but that one operates over elliptic curve group elements. The batch threshold encryption scheme requires the secret to be a field element in order to support BLS signing, which is why a new DKG scheme (Chunky) is needed.
How to Share a Secret With a Threshold
This is weighted Shamir secret sharing. There are $n$ validators and validator $i$ has stake weight $w_i$. We want any subset with combined weight above a threshold $t$ to reconstruct the secret. The dealer constructs a degree-$t$ polynomial $p$ with $p(0) = s$ as the secret, and gives validator $i$ exactly $w_i$ evaluations: $p(e_{i,1}), \ldots, p(e_{i,w_i})$, proportional to its stake. Any subset whose combined weight exceeds $t$ has enough points to interpolate $p$ and recover $s$.
How to Generate the Secret So No One Knows It
If one validator generates the whole polynomial, it knows the secret. Instead, each validator generates its own random degree-$t$ polynomial $p_i$ and deals it independently. The final shared polynomial is $p = \sum_i p_i$, so the secret is $s = p(0) = \sum_i p_i(0)$. Each validator distributes evaluations of its own $p_i$ to all others, and each recipient sums what it receives to get its share of $p$.
Not every validator needs to participate. A qualifying set $Q$ with sufficient combined stake is enough. As long as $Q$ contains at least one honest validator, the final polynomial is uniformly random and unknown to any single party.
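To make the two steps concrete, here is a toy Rust example (our construction, over the small prime field $2^{31} - 1$ rather than the BLS12-381 scalar field) in which two dealers each share a degree-3 polynomial, and the combined secret is interpolated from a weight-4 set of evaluation points:

```rust
const P: u128 = 2_147_483_647;

fn pow(mut b: u128, mut e: u128) -> u128 {
    let mut acc = 1u128;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

fn inv(a: u128) -> u128 { pow(a, P - 2) } // Fermat inverse, valid since P is prime

fn eval(poly: &[u128], x: u128) -> u128 {
    poly.iter().rev().fold(0, |acc, &c| (acc * x + c) % P) // Horner evaluation
}

fn main() {
    // two dealers, each with a degree-3 polynomial; the constant terms 11 and 55
    // are the dealt secrets
    let p_a = [11u128, 22, 33, 44];
    let p_b = [55u128, 66, 77, 88];

    // four evaluation points of combined weight 4 (> threshold weight 3):
    // enough to interpolate the degree-3 summed polynomial
    let xs: Vec<u128> = (1..=4).collect();
    let shares: Vec<u128> = xs.iter().map(|&x| (eval(&p_a, x) + eval(&p_b, x)) % P).collect();

    // Lagrange interpolation at 0 recovers s = 11 + 55 = 66,
    // which neither dealer knows on its own
    let mut secret = 0u128;
    for (j, &xj) in xs.iter().enumerate() {
        let mut lj = 1u128; // Lagrange coefficient l_j(0)
        for &xm in xs.iter() {
            if xm != xj {
                lj = lj * xm % P * inv((xm + P - xj) % P) % P;
            }
        }
        secret = (secret + shares[j] * lj) % P;
    }
    assert_eq!(secret, 66);
    println!("recovered combined secret: {secret}");
}
```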
Publicly-Verifiable Secret Sharing
In Aptos, DKG happens between epochs: current-epoch validators deal secrets to next-epoch validators over a public channel with no direct secret communication. This requires the dealing process to be public: the dealer encrypts each secret share to its recipient, and everyone else must be able to verify that the encrypted shares are valid without decrypting them. Chunky is a Publicly-Verifiable Secret Sharing (PVSS) scheme that satisfies both requirements.
A dealing validator samples the secret $s$, picks a random degree-$t$ polynomial $p$ with $p(0) = s$, and evaluates $p$ at each validator's points to get shares $s_{i,j} = p(e_{i,j})$.
Step 1: Low-Degree Test
The dealer must prove that the shares are evaluations of a degree-$t$ polynomial, without revealing them. It commits each share $s_{i,j}$ in the exponent as $V_{i,j} = g_2^{s_{i,j}}$ and publishes the $V_{i,j}$. The SCRAPE low-degree test then verifies that these commitments are consistent with a degree-$t$ polynomial using Reed-Solomon dual codewords. Anyone can run this check without learning the shares.
Step 2: Encryption
The dealer encrypts each share to its intended recipient. Each validator $i$ has a known encryption public key $ek_i = g_1^{dk_i}$.
Since shares are full field elements (~255 bits for BLS12-381), decrypting ElGamal requires solving a discrete log, which is only feasible for small values. To handle this, each share is split into $m$ chunks of $b$ bits each (e.g., $b = 16$, $m = 16$):

$$s_{i,j} = \sum_{k=0}^{m-1} 2^{bk} \cdot s_{i,j,k}.$$

Each chunk is small enough for brute-force discrete log. The dealer ElGamal-encrypts each chunk:

$$(c_k,\ \hat{c}_{i,j,k}) = (g_1^{r_k},\ ek_i^{r_k} \cdot g_1^{s_{i,j,k}}).$$

To decrypt, validator $i$ computes $\hat{c}_{i,j,k} / c_k^{dk_i} = g_1^{s_{i,j,k}}$, solves the small discrete log, and recombines the chunks to recover $s_{i,j}$. Note that all the receivers share the same randomness values $r_k$ in the ciphertext.
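A toy Rust illustration of the chunking arithmetic (parameters and the 128-bit stand-in share are ours; the real scheme chunks 255-bit scalars, and each chunk is recovered by brute-forcing a 16-bit discrete log after ElGamal decryption):

```rust
fn main() {
    // split a 128-bit stand-in share into m = 8 chunks of b = 16 bits
    let (b, m) = (16u32, 8u32);
    let s: u128 = 0x0123_4567_89ab_cdef_fedc_ba98_7654_3210;
    let chunks: Vec<u128> = (0..m).map(|k| (s >> (b * k)) & ((1u128 << b) - 1)).collect();

    // each chunk is < 2^16, so g^{chunk} can be brute-forced after ElGamal decryption
    assert!(chunks.iter().all(|&c| c < 1u128 << b));

    // recombination: s = sum_k 2^{bk} * s_k
    let recombined = chunks.iter().rev().fold(0u128, |acc, &c| (acc << b) | c);
    assert_eq!(recombined, s);
    println!("chunks: {chunks:?}");
}
```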
Step 3: Validity Checks
We must ensure the ciphertexts actually encrypt the committed shares and not garbage. Three things are checked.
Check 1 — Correct encryption format. A Σ-protocol signature of knowledge (ZKSoK) proves that the dealer knows the randomness and plaintext used in each ElGamal ciphertext.
Check 2 — Chunks recombine to the committed share. The dealer picks randomness with a correlation constraint:

$$\sum_{k=0}^{m-1} 2^{bk} \cdot r_k = 0.$$

This ensures the randomness cancels when chunks are recombined:

$$\prod_{k=0}^{m-1} \hat{c}_{i,j,k}^{\,2^{bk}} = ek_i^{\sum_k 2^{bk} r_k} \cdot g_1^{\sum_k 2^{bk} s_{i,j,k}} = g_1^{s_{i,j}}.$$

So the recombined ciphertext is a deterministic commitment to the full share. Consistency with $V_{i,j}$ is then verified with a pairing:

$$e\left(\prod_{k} \hat{c}_{i,j,k}^{\,2^{bk}},\ g_2\right) = e(g_1,\ V_{i,j}).$$
This can be batched across all $(i, j)$ pairs via random linear combinations, collapsing into a single two-pairing check.
Check 3 — Chunks are in range. Each chunk must be in $[0, 2^b)$, otherwise a malicious dealer could encode values that fail to reconstruct. The dealer commits all chunks into a single hiding KZG commitment $V$, and uses a DeKART range proof to batch-prove all chunks are $b$-bit. The Σ-protocol from Check 1 is extended to also prove that the values in $V$ match those encrypted in the ciphertexts. This is called the “ElGamal-to-KZG” relation.
Step 4: Non-Malleability
Without non-malleability, a malicious validator could take an honest validator's transcript (which deals secret $s$), modify it to deal $\delta - s$ for some known $\delta$, sign it under their own identity, and submit it. The combined DKG secret would then be $s + (\delta - s) = \delta$, which the attacker fully controls.
To prevent this, the Σ-protocol is made into a zero-knowledge signature of knowledge (ZKSoK) that signs over the dealer's public key and the epoch number. This binds each transcript to its specific dealer; substituting a different identity invalidates the proof.
Step 5: Aggregation
The components of a subtranscript (the share commitments $V_{i,j}$, the randomness commitments $c_k$, and the chunk ciphertexts $\hat{c}_{i,j,k}$) are all group elements and can be added pointwise across transcripts. This means individual transcripts can be aggregated: combining transcripts produces a single subtranscript of the same size, representing the combined secret $s = \sum_d s_d$. Each validator only needs to decrypt once from the aggregated subtranscript.
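Concretely, in the notation of the encryption step above, aggregation is a pointwise product of group elements, which yields a well-formed transcript for the summed shares:

$$\hat{c}^{\mathrm{agg}}_{i,j,k} = \prod_{d \in Q} \hat{c}^{(d)}_{i,j,k} = ek_i^{\sum_d r^{(d)}_k} \cdot g_1^{\sum_d s^{(d)}_{i,j,k}}, \qquad V^{\mathrm{agg}}_{i,j} = \prod_{d \in Q} V^{(d)}_{i,j} = g_2^{\sum_d s^{(d)}_{i,j}}.$$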
Putting It Together: the DKG
The process runs as follows:
- Dealing phase. Each validator $d$ picks a random secret $s_d$, runs Chunky's dealing algorithm to produce a signed PVSS transcript, and broadcasts it.
- Agreement phase. Validators agree on a qualifying set $Q$ (with sufficient combined stake) of valid transcripts and aggregate them pointwise into a compact subtranscript for the combined secret.
- Commit phase. A leader proposes the aggregated subtranscript. Once enough validators attest to it, the aggregated subtranscript is posted on-chain and each validator decrypts its final shares.
Overview of DeKART Range Proof
In the Chunky DKG, each Shamir share that a dealer distributes is a field element around 255 bits wide, which is too large to decrypt directly under ElGamal (decryption requires solving a discrete log). To work around this, the dealer splits each share into small chunks of $b$ bits each ($b = 16$, $m = 16$ in the system) and ElGamal-encrypts each chunk separately. A chunk is only decryptable if it actually fits in $[0, 2^b)$: a malicious dealer who encodes an out-of-range chunk could make recombined shares fail to match the committed share, breaking reconstruction.
This is why the dealer must additionally prove that every chunk lies in $[0, 2^b)$. With each of the $W$ weighted shares split into $m$ chunks, the dealer needs to range-prove $N = W \cdot m$ values at once. Aptos uses DeKART, a batched range proof, which produces a single short proof covering all chunks. A separate Σ-protocol binds the values inside DeKART's commitment to the ones encrypted in the chunked ElGamal ciphertexts; DeKART itself is only concerned with the range claim.
What Is Being Proved
The dealer has a single hiding KZG commitment

$$V = h^{\hat{r}} \cdot g_1^{\sum_{i=1}^{N} z_i \cdot \ell_i(\tau)}$$

over a Lagrange basis of size $N + 1$ at positions $\omega^0, \omega^1, \ldots, \omega^N$. Position $\omega^0$ carries the value $0$ and will later be filled with a blinder; the chunks $z_1, \ldots, z_N$ occupy positions $\omega^1, \ldots, \omega^N$. DeKART convinces the verifier that every $z_i$ sits in $[0, 2^b)$, without revealing anything about the $z_i$ themselves.
The Polynomial Encoding
Let $H = \{\omega^0, \omega^1, \ldots, \omega^N\}$ be the evaluation domain and $H^* = H \setminus \{\omega^0\}$ the “data” positions. The prover works with two families of polynomials over $H$.
- $f$: the degree-$N$ polynomial whose evaluations over $H^*$ are the chunks $z_1, \ldots, z_N$, and whose value at $\omega^0$ is a fresh random blinder $\hat{z}$. Its commitment $V'$ is defined below.
- $g_j$ for $j = 0, \ldots, b-1$: degree-$N$ polynomials whose evaluations over $H^*$ are the $j$-th bit of each chunk, with a fresh blinder at position $\omega^0$.
Two polynomial identities must hold at every $x \in H^*$:
Radix decomposition. $f(x) = \sum_{j=0}^{b-1} 2^j \cdot g_j(x)$. Each chunk is the radix-2 recombination of its bits.
Bit constraint. $g_j(x) \cdot (g_j(x) - 1) = 0$ for every $j$. Each bit value is $0$ or $1$.
These identities together imply that every chunk is a sum of bits times powers of two, i.e. in $[0, 2^b)$.
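Spelled out, for each data position $\omega^i$:

$$z_i = f(\omega^i) = \sum_{j=0}^{b-1} 2^j \cdot g_j(\omega^i), \qquad g_j(\omega^i) \in \{0, 1\} \ \Longrightarrow\ 0 \le z_i \le \sum_{j=0}^{b-1} 2^j = 2^b - 1.$$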
Both identities are enforced over $H^*$, not over all of $H$. This is what makes zero-knowledge possible: the blinder $\hat{z}$ and the bit-polynomial blinders at position $\omega^0$ are free to be random, so the committed polynomials (and one KZG opening) carry no information about the $z_i$ beyond what is already in $V$.
Collapsing to a Single Quotient
Instead of proving each identity separately, the verifier derives Fiat-Shamir challenges $\beta_0, \beta_1, \ldots, \beta_b$ and the prover combines them into one polynomial identity. Let

$$Z_{H^*}(X) = \prod_{i=1}^{N} (X - \omega^i)$$

be the vanishing polynomial of $H^*$ (it vanishes on every $\omega^i$ for $i \geq 1$ but not at $\omega^0$). Define

$$p(X) = \beta_0 \left( f(X) - \sum_{j=0}^{b-1} 2^j g_j(X) \right) + \sum_{j=0}^{b-1} \beta_{j+1} \cdot g_j(X) (g_j(X) - 1).$$

By the two identities above, $p$ vanishes on every point of $H^*$, so $Z_{H^*} \mid p$. The prover computes the quotient

$$q(X) = \frac{p(X)}{Z_{H^*}(X)}$$

and commits to it with another hiding KZG commitment $Q$. A valid $q$ exists if and only if both identities hold over $H^*$.
Re-randomizing the Committed Polynomial
The original commitment $V$ (the one fed into the sigma protocol) puts zero at position $\omega^0$. To give DeKART freedom to hide the data behind a random $\hat{z}$ at that slot, the dealer samples fresh $(\hat{z}, \hat{r}')$ and publishes

$$V' = V \cdot \left(g_1^{\ell_0(\tau)}\right)^{\hat{z}} \cdot h^{\hat{r}'}.$$

This is the commitment of $f$. To prove that $V'$ is a legitimate re-randomization (in particular, that only the blinding slot changed), the dealer runs a two-term Okamoto Σ-protocol for the statement

$$V' / V = \left(g_1^{\ell_0(\tau)}\right)^{\hat{z}} \cdot h^{\hat{r}'},$$

proving knowledge of $(\hat{z}, \hat{r}')$. This proof is included in the DeKART transcript and verified against the two fixed base points $g_1^{\ell_0(\tau)}$ (Lagrange basis at position $\omega^0$) and $h$ (KZG hiding base). It uses its own DST, separate from the outer sigma protocol.
Opening at a Random Point
Rather than check the polynomial identity everywhere, the verifier samples a random challenge $\gamma$ (via Fiat-Shamir, resampled until it lands outside the roots of unity). By Schwartz-Zippel, if $p(\gamma) = q(\gamma) \cdot Z_{H^*}(\gamma)$ then the identity holds as polynomials with overwhelming probability.
The prover evaluates

$$y_f = f(\gamma), \qquad y_{g_j} = g_j(\gamma) \ \ (j = 0, \ldots, b-1), \qquad y_q = q(\gamma),$$

sends these scalars to the verifier, and the verifier checks the identity in scalar form:

$$\beta_0 \left( y_f - \sum_{j=0}^{b-1} 2^j y_{g_j} \right) + \sum_{j=0}^{b-1} \beta_{j+1} \cdot y_{g_j} (y_{g_j} - 1) = y_q \cdot Z_{H^*}(\gamma).$$

The verifier still has to be convinced that $y_f$, $y_{g_j}$, and $y_q$ really are $f$, $g_j$, and $q$ evaluated at $\gamma$. This is what the hiding KZG opening does.
Batching Openings Into One
A separate opening per committed polynomial would cost $b + 2$ pairing checks. Instead, the verifier samples Fiat-Shamir challenges $\delta_0, \ldots, \delta_{b+1}$ and the prover opens the random linear combination

$$u(X) = \delta_0 \cdot f(X) + \sum_{j=0}^{b-1} \delta_{j+1} \cdot g_j(X) + \delta_{b+1} \cdot q(X)$$

at $\gamma$, with claimed value $y_u = \delta_0 y_f + \sum_j \delta_{j+1} y_{g_j} + \delta_{b+1} y_q$. The corresponding commitment is an MSM over the per-polynomial commitments:

$$U = (V')^{\delta_0} \cdot \prod_{j=0}^{b-1} G_j^{\delta_{j+1}} \cdot Q^{\delta_{b+1}}.$$

A single hiding KZG opening proof (a pair of group elements for the quotient polynomial and its hiding blinder) discharges all $b + 2$ evaluations simultaneously.
Fiat-Shamir Transcript
The transcript is bound to a dedicated DST (APTOS_UNIVARIATE_DEKART_V2_RANGE_PROOF_DST) and proceeds through the protocol in order, with the verifier's public inputs (the dimensions $N$ and $b$, and the original commitment $V$) absorbed first. Challenges are derived in this sequence:
- Append $V'$; run the Okamoto sub-protocol and append its proof.
- Append the bit-polynomial commitments $G_0, \ldots, G_{b-1}$. Squeeze $\beta_0, \ldots, \beta_b$.
- Append $Q$. Squeeze $\gamma$, rejecting it if it collides with a root of unity.
- Append the evaluations $y_f, y_{g_0}, \ldots, y_{g_{b-1}}, y_q$. Squeeze $\delta_0, \ldots, \delta_{b+1}$.
Binding the challenges in this order is what lets the combined check stand in for the per-point identities.
Proof Structure
The proof sent by the dealer is:
- $V'$: re-randomized commitment to $f$.
- The Okamoto proof of knowledge of $(\hat{z}, \hat{r}')$ for $V'/V$ (one group element and two scalars).
- $G_0, \ldots, G_{b-1}$: hiding commitments to the bit polynomials $g_j$.
- $Q$: hiding commitment to the quotient $q$.
- $y_f, y_{g_0}, \ldots, y_{g_{b-1}}, y_q$: evaluations at $\gamma$.
- The hiding KZG opening proof for $u$ (two group elements).
Total: $b + 5$ group elements and $b + 4$ field elements, independent of $N$.
Verification
The verifier recomputes every Fiat-Shamir challenge and then runs three checks:
- Re-randomization. Verify the Okamoto proof against $V'/V$.
- Scalar identity. Check $\beta_0 \left( y_f - \sum_j 2^j y_{g_j} \right) + \sum_j \beta_{j+1} \cdot y_{g_j}(y_{g_j} - 1) = y_q \cdot Z_{H^*}(\gamma)$, where $Z_{H^*}(\gamma)$ is evaluated directly.
- Batched opening. Compute $U$ and $y_u$ by MSM over the proof commitments and evaluations, then run one hiding KZG pairing check for the opening of $U$ at $\gamma$.
If all three pass, every chunk committed in the original $V$ is guaranteed to be in $[0, 2^b)$.
How DeKART Plugs Into Chunky
DeKART’s $V$ is the same HKZG commitment that appears in the tuple homomorphism of the outer sigma protocol. The outer Σ-protocol proves that the chunks committed in $V$ are the same chunks encrypted in the chunked ElGamal ciphertexts $(c_k, \hat{c}_{i,j,k})$; DeKART proves those chunks are in range. The two proofs share $V$ as their sole common handle and are otherwise verified independently, each with its own DST and Fiat-Shamir transcript.
Overview of the Σ-Protocol (Signature of Knowledge)
The DKG overview described three validity checks on a dealing transcript: correct encryption format, chunk-to-share consistency, and chunk range. The first and third of these rely on a Σ-protocol proof that ties these concerns together into a single non-interactive proof and, at the same time, prevents transcript malleability by binding the proof to the dealer's identity. This section explains how that proof works.
What Is a Σ-Protocol?
A Σ-protocol is a three-move proof of knowledge. The prover knows a secret witness $w$ and wants to convince a verifier that a public statement $X$ satisfies $X = \phi(w)$ for some known homomorphism $\phi$, without revealing $w$. The three moves are:
- Commit. The prover samples random $r$ and sends $A = \phi(r)$.
- Challenge. The verifier sends a random scalar $c$.
- Respond. The prover sends $z = r + c \cdot w$.
The verifier accepts if $\phi(z) = A \cdot X^{c}$. Because $\phi$ is a homomorphism, this equation holds if and only if the prover knew $w$.
In Chunky, the protocol is made non-interactive using the Fiat-Shamir transform: the challenge $c$ is derived by hashing the protocol context, the homomorphism description, the public statement, and the prover's commitment into a Merlin transcript. The resulting proof consists of $(A, z)$.
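For concreteness, here is a toy Schnorr instance of this three-move pattern in Rust, over the multiplicative group modulo a small prime (our illustration; Chunky's homomorphisms are MSMs over BLS12-381 and the challenge comes from a Merlin transcript):

```rust
const P: u128 = 2_147_483_647; // group modulus (prime)
const Q: u128 = P - 1;         // exponent modulus
const G: u128 = 7;             // toy base element

fn pow(mut b: u128, mut e: u128) -> u128 {
    let mut acc = 1u128;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

fn main() {
    let w = 123_456_789u128; // witness
    let x = pow(G, w);       // statement X = phi(w) = G^w

    // Commit: A = phi(r) for random r (fixed here for reproducibility)
    let r = 987_654_321u128;
    let a = pow(G, r);

    // Challenge: in the real protocol, c = H(context, phi, X, A) via Fiat-Shamir
    let c = 42u128;

    // Respond: z = r + c*w (mod the group exponent)
    let z = (r + c % Q * (w % Q)) % Q;

    // Verify: phi(z) == A * X^c
    assert_eq!(pow(G, z), a * pow(x, c) % P);
    println!("sigma protocol verifies");
}
```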
The Tuple Homomorphism
Recall from the DKG overview that the dealer produces two kinds of public output from the same secret data: an HKZG commitment (used by the DeKART range proof) and the chunked ElGamal ciphertexts. The Σ-protocol must prove that both outputs are consistent with the same underlying witness. This is achieved with a tuple homomorphism in the implementation that maps a single witness to a pair of outputs:

$$\phi(w) = \left(\phi_{\mathrm{KZG}}(w),\ \phi_{\mathrm{EG}}(w)\right).$$
The witness has three parts:
- $\hat{r}$: the blinding scalar for the hiding KZG commitment.
- $s = (s_{i,j,k})$: the $b$-bit chunks of each Shamir share (the same chunks encrypted in the ciphertexts $\hat{c}_{i,j,k}$).
- $r = (r_k)$: the correlated ElGamal randomness (satisfying $\sum_k 2^{bk} r_k = 0$ as described earlier).
The first component of the tuple ignores $r$ and computes the HKZG commitment:

$$\phi_{\mathrm{KZG}}(\hat{r}, s) = h^{\hat{r}} \cdot \prod_{i,j,k} \left(g_1^{\ell_{i,j,k}(\tau)}\right)^{s_{i,j,k}},$$

where $h$ is a hiding base from the SRS and the $\ell_{i,j,k}(\tau)$ are Lagrange basis evaluations at the SRS trapdoor. This is the same commitment that enters the DeKART range proof.
The second component ignores $\hat{r}$ and computes the chunked ElGamal ciphertexts and randomness commitments:

$$\phi_{\mathrm{EG}}(s, r) = \left(\{g_1^{r_k}\}_k,\ \{ek_i^{r_k} \cdot g_1^{s_{i,j,k}}\}_{i,j,k}\right).$$
Each component is a “lifted” homomorphism: a projection extracts the relevant fields from the full witness, then the inner homomorphism is applied. The tuple construction ensures that a single proof with a single Fiat-Shamir challenge covers both components, guaranteeing the same witness underlies both the KZG commitment and the ElGamal ciphertexts.
Non-Malleability via Signature of Knowledge
As described in Step 4 of the DKG overview, the Σ-protocol must be non-malleable to prevent an adversary from re-purposing an honest dealer's transcript. This is achieved by turning the proof into a Signature of Knowledge (SoK): the dealer's identity is hashed into the Fiat-Shamir challenge, so the proof is bound to a specific dealer and session.
Concretely, the SoK context hashed into the transcript consists of:
- The dealer’s BLS12-381 signing public key
- The session/epoch identifier
- The dealer’s index in the validator set
- A domain-separation tag (DST)
If any of these fields changes (for example, if an attacker substitutes their own public key), the recomputed challenge will differ and the proof becomes invalid.
Fiat-Shamir Transcript
The Fiat-Shamir challenge is derived via a Merlin transcript with the following binding order:
- DST: the protocol’s domain-separation tag, written as the Merlin transcript label.
- SoK context: the dealer’s public key, session ID, dealer index, and DST, serialized with BCS.
- Homomorphism bases: all MSM base points (the generators $g_1$ and $h$, the HKZG SRS elements, and all encryption keys $ek_i$).
- Public statement: the tuple $(V, \{c_k\}, \{\hat{c}_{i,j,k}\})$ of the HKZG commitment and all ciphertexts and randomness commitments.
- Prover's commitment: the first message $A$.
The challenge $c$ is squeezed as a full-size field element, sampled at the field bit-length for statistical uniformity.
Verification
Both components of the tuple homomorphism are multi-scalar multiplications (MSMs), so the verification equation reduces to checking that an MSM evaluates to the identity:

$$\phi(z) \cdot X^{-c} \cdot A^{-1} = 1.$$
The two components of the tuple can be batched into a single MSM using a random scalar.
The DeKART range proof also contains its own inner Σ-protocol (an Okamoto proof showing knowledge of the blinding randomness), which uses a separate DST and Fiat-Shamir transcript. Both the outer SoK and the DeKART proof are stored together in the dealing transcript and verified independently, but they share the same HKZG commitment $V$ as the link between them.
Overview of Consensus Integration
DKG Integration
At a high level, the consensus integration turns Chunky DKG into an epoch-handoff pipeline: during epoch reconfiguration, validators jointly produce one certified aggregated DKG output, commit it on chain, and carry it into the next epoch as the canonical encryption material for encrypted mempool operations. This work is intentionally outside the per-block consensus critical path, and its output is buffered in the reconfiguration flow and consumed when the next epoch starts.
Concretely, when the chain transitions from epoch $e$ to epoch $e+1$, and the relevant on-chain feature flags are enabled (vtxn_enabled and chunky_dkg_enabled), Move starts both the existing randomness DKG and the new Chunky DKG. Each current-epoch validator runs a local ChunkyDKGManager, produces one signed Chunky PVSS transcript, and broadcasts it off-chain via ReliableBroadcast. Validators verify incoming transcripts independently and keep collecting them until the dealer set has enough voting power.
Once a validator reaches quorum, it pointwise-aggregates the accepted dealer transcripts into a single AggregatedSubtranscript. This compression step is important: instead of carrying one transcript per dealer into the next epoch, the protocol reduces the whole qualified dealer set into one compact object from which each validator can later recover its own final secret-share material.
The aggregation covers only the subtranscripts, not their proofs. As a result, there is no direct way for other validators to verify the validity of an AggregatedSubtranscript. To resolve this, the protocol adds a certification step. A validator that has formed an aggregate sends a ChunkyDKGSubtranscriptSignatureRequest containing the dealer list, the aggregate hash, and per-dealer transcript hashes. A recipient will fetch and verify any missing transcripts, recompute the aggregate, and sign only if all individual transcripts are valid. In effect, consensus is not certifying individual transcripts, but one exact aggregated output.
After a validator collects quorum signatures over that aggregate, it emits a ValidatorTransaction::ChunkyDKGResult. On chain, finish_with_chunky_dkg_result() verifies the quorum signature, stores the certified aggregated subtranscript, and publishes the derived encryption key for epoch $e+1$. Reconfiguration only fully completes once both the randomness DKG path and the Chunky DKG path have finished, so the next epoch starts with one certified DKG result that all honest validators can consume consistently.
Round Decryption Key Derivation
At epoch start, each validator uses its private key to decrypt its own secret share from the certified Chunky transcript, obtaining a SecretKeyShare that it holds for the duration of the epoch.
Decryption keys are derived per round. Once a block reaches a quorum certificate, each validator computes the round digest, which is a commitment to the IBE tags of the encrypted transactions in that block. Using the digest and its SecretKeyShare, the validator produces a per-round decryption key share.
The key share is only released after the block is finalized. This ordering is critical: releasing a key share before finalization would allow the encrypted transactions to be decrypted while they are still in the mempool, breaking confidentiality. Once the block is finalized, each validator broadcasts its key share via SecretShareManager. When enough shares are collected to meet the reconstruction threshold, any validator can locally reconstruct the full decryption key. The threshold secret sharing scheme guarantees that all honest validators reconstruct the same key, regardless of which subset of shares they use.
SecretShareManager uses an optimistic verification strategy to reduce per-share verification overhead. Initially, it accepts incoming shares without cryptographic verification, assuming all validators are honest. When the reconstruction threshold is reached, it attempts to aggregate the collected shares. If aggregation fails, it falls back to verifying each share individually, evicts the invalid ones, and moves the offending validator to a pessimistic set. All future shares from a pessimistic validator are verified before acceptance for the rest of the epoch.
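The following Rust sketch captures the optimistic/pessimistic state machine described above; all types and method names are illustrative stand-ins, not the actual SecretShareManager API, and aggregate is a placeholder for threshold reconstruction.

```rust
use std::collections::{HashMap, HashSet};

struct ShareStore {
    shares: HashMap<u64, Vec<u8>>, // validator index -> collected share bytes
    pessimistic: HashSet<u64>,     // validators whose future shares are always verified
    threshold: usize,
}

impl ShareStore {
    fn add_share(&mut self, from: u64, share: Vec<u8>, verify: impl Fn(&[u8]) -> bool) {
        // optimistic path: skip verification unless the sender is already flagged
        if self.pessimistic.contains(&from) && !verify(&share) {
            return; // drop an invalid share from a flagged validator
        }
        self.shares.insert(from, share);
    }

    fn try_aggregate(&mut self, verify: impl Fn(&[u8]) -> bool) -> Option<Vec<u8>> {
        if self.shares.len() < self.threshold {
            return None;
        }
        if let Some(key) = aggregate(self.shares.values()) {
            return Some(key);
        }
        // fallback: verify individually, evict invalid shares, flag their senders
        let bad: Vec<u64> = self
            .shares
            .iter()
            .filter(|(_, s)| !verify(s))
            .map(|(&v, _)| v)
            .collect();
        for v in bad {
            self.shares.remove(&v);
            self.pessimistic.insert(v);
        }
        None
    }
}

// Stand-in for threshold reconstruction plus the aggregate validity check.
fn aggregate<'a>(_shares: impl Iterator<Item = &'a Vec<u8>>) -> Option<Vec<u8>> {
    None
}

fn main() {
    let mut store = ShareStore {
        shares: HashMap::new(),
        pessimistic: HashSet::new(),
        threshold: 2,
    };
    let ok = |_: &[u8]| true;
    store.add_share(1, vec![1], ok);
    store.add_share(2, vec![2], ok);
    assert!(store.try_aggregate(ok).is_none()); // the toy aggregate() never succeeds
}
```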
Once the decryption key is reconstructed, it is passed to the decryption pipeline to decrypt the encrypted transactions in the finalized block.
Encrypted Transaction Flow
When the certified Chunky DKG output settles at epoch start, the derived aggregate encryption key is published on chain by the Move decryption module and exposed through the ledger state. Clients fetch this key, encrypt their transaction payload locally, and submit a signed transaction whose payload variant is EncryptedPayload::Encrypted. The transaction signature covers the encrypted form, and the ciphertext is bound to the sender’s address as associated data, so a ciphertext cannot be replayed under a different sender after submission. Admission is gated both locally, by the node’s allow_encrypted_txns_submission flag, and globally, by the on-chain ENCRYPTED_TRANSACTIONS feature flag, and the API rejects any payload that arrives in a non-Encrypted state.
Inside QuorumStore, encrypted transactions flow through a track parallel to that of regular transactions. Batch V2 carries a BatchKind discriminator, Normal or Encrypted, and the batch generator buckets transactions by kind, applying gas-bucketing and per-sender size limits independently per kind. Encrypted batches are required to be homogeneous: an encrypted batch contains only encrypted transactions, and a normal batch contains none. This separation matters because every later stage treats the two kinds differently. In particular, proposal pulls apply a separate per-kind block limit encrypted_txn_limit, sourced from SecretShareConfig and defaulting to zero when no decryption configuration is available, so a validator that cannot participate in decryption is also prevented from packing encrypted batches into the blocks it proposes.
Once consensus orders a block, the pipeline materializes payload references and partitions transactions into encrypted and regular sets. Materialization and decryption precomputation start optimistically on the QC path, before final ordering completes; this overlaps the expensive eval-proof and ciphertext-preparation work with consensus latency rather than serializing them. The final decryption pass, however, waits on the aggregated per-round decryption key from SecretShareManager. This dependency is the structural enforcement of the “decrypt only after finalization” invariant: precomputation can run early, but the pairing that actually recovers each plaintext is gated on the aggregated key, which only becomes available once the ordered-block path has produced the block.
After decryption, each encrypted transaction transitions to one of two terminal states. Decrypted carries the recovered executable along with the optional claimed-entry-function check; FailedDecryption carries a reason, which is one of CryptoFailure, BatchLimitReached, ConfigUnavailable, DecryptionKeyUnavailable, or ClaimedEntryFunctionMismatch. The prepare stage concatenates the decrypted partition with the regular partition and hands the combined block to execution. At the VM, only BatchLimitReached is retryable; all other failure reasons run the failure epilogue, which charges gas and increments the sender’s sequence number, so a failed decryption is a final on-chain outcome rather than a free retry. This ensures that an encrypted transaction either executes or is permanently rejected, and that a sender cannot use ciphertext failures to obtain repeated admission at no cost.