Introduction

On March 9th, 2026, zkSecurity was commissioned to perform a security audit of Aptos Labs’ encrypted mempool. The encrypted mempool allows users to encrypt their transactions before submission so that transaction contents remain private until they are finalized in a block, protecting against MEV (Maximal Extractable Value) attacks such as front-running and sandwich attacks.

The audit was conducted by two consultants over four weeks (40 engineering days). A number of observations and findings were reported to the Aptos Labs team and are detailed in the latter sections of this report.

Scope

The audit covered the Aptos encrypted mempool implementation in the Aptos Core repository (aptos-labs/aptos-core) and was divided into three phases, each targeting a distinct component.

Phase 1: Batch threshold encryption (commit ba538036b4d1ba44a9dcff7d3c17bc3d861981b5)

  • Batched IBE threshold encryption scheme and its underlying components
  • Hybrid encryption and key management (KEM and data encapsulation)
  • Symmetric encryption, key derivation, randomness, and nonce handling
  • Rust and TypeScript implementation of the scheme

Phase 2: Chunky PVSS and underlying components (commit a8b7a672fefd4fd55c72eccf2880ce158893ed26)

  • Chunky PVSS scheme and its underlying cryptographic components
  • DeKART zero-knowledge range proof
  • Rust implementation of these schemes

Phase 3: Encrypted mempool integration (commit e582a7599c8a8880bda3fa887a6b89142fbfabf4)

  • Integration of the batched IBE threshold encryption
  • Integration of Chunky DKG
  • Integration of the decryption secret key share derivation and aggregation
  • Transaction encryption, proposal, validation, decryption, and execution with respect to the encrypted mempool

Gas metering for encrypted transactions is out of scope for this audit.

Security Goals

The audit evaluated the following security properties of the Aptos encrypted mempool.

Cryptographic soundness. The core cryptographic building blocks must be secure:

  • The batched IBE threshold encryption scheme (BIBE) must provide CCA security.
  • The Chunky PVSS scheme must correctly distribute secret shares among validators.
  • The DeKART zero-knowledge range proof must be sound (and zero-knowledge).

Transaction confidentiality. A transaction’s content must remain private until it is finalized in a block and decrypted immediately before execution. Specifically:

  • An unfinalized encrypted transaction must not be decryptable by any party.
  • The decryption secret for a round must only be derivable after the round is finalized.
  • A transaction encrypted in one epoch must not be decryptable using a key from a different epoch.

Transaction integrity. The encrypted mempool must prevent adversarial manipulation of transactions. Specifically:

  • An encrypted transaction must not be decryptable to a different plaintext than what the sender intended.
  • An adversary must not be able to copy, replay, or malleate another user’s encrypted transaction.
  • A transaction that is decrypted should be executed. An adversary should not be able to cause a transaction to be decrypted without execution, leaking the sender’s intent without their consent.

Liveness. The integration must not introduce new liveness vulnerabilities. Specifically:

  • A malicious validator must not be able to stall DKG, block secret share reconstruction, or prevent the network from progressing.
  • A malicious validator must not be able to cause the DKG to deal an inconsistent secret, or cause the derived decryption key in a round to be inconsistent.
  • A malicious user must not be able to submit a transaction that causes other decryptions to fail or crash validators.
  • A malicious user or validator must not be able to consistently spam the network to prevent other encrypted transactions from being executed.

Overview

The Aptos encrypted mempool allows users to encrypt their transactions locally, keeping transaction contents encrypted while in the mempool. Transactions are only decrypted after being included in a finalized block, immediately before execution. The system uses a threshold encryption scheme in which only a threshold of validators (by stake) can decrypt a transaction. If a transaction is not finalized, it will not be decrypted under the honest-majority validator assumption. By keeping transaction contents private until finalization, the system aims to mitigate some MEV attacks such as sandwich attacks and front-running.

The two core cryptographic components of Aptos’s encrypted mempool are the batch threshold encryption scheme and the distributed key generation (DKG). We first describe the batch threshold encryption scheme, then the distributed key generation process. The DKG relies on DeKART range proofs and a Σ-protocol signature of knowledge, before being integrated into consensus.

Overview of the Batch Threshold Encryption Scheme

The batch threshold encryption allows users to encrypt their transactions individually, while a threshold of validators can efficiently decrypt them together in a batch. The scheme is described in the TrX paper.

Threshold Encryption

To achieve threshold encryption, the scheme combines BLS threshold signatures with witness encryption. A BLS threshold signature requires t out of n validators to produce a valid aggregate signature. The user encrypts their transaction so that only someone who holds the BLS signature (the witness) over a specific message can decrypt it. To decrypt, each validator produces a partial signature; once t partial signatures are collected, the aggregate signature is derived and the transaction is decrypted.

A naive implementation would have each user sample a random message and encrypt their transaction to the BLS threshold signature over that message. The transaction carries the message so that validators can sign it. However, this does not scale: different transactions require different signatures, so each validator’s signing work grows proportionally with the number of encrypted transactions.

An alternative approach is to have all users encrypt their transactions to the signature over the same message, so that validators only need to sign once. For example, users could encrypt to a signature over the target round number (or block height). However, there is no guarantee that a transaction is included in a specific round. If a transaction misses its target round but validators have already signed over that round number, the transaction would be decryptable even though it is never finalized.

Aptos mitigates this by having each user encrypt their transaction to the threshold signature over a KZG commitment to its random tag. Since a KZG commitment can commit to multiple tags at once, a single signature over one commitment can serve to decrypt an entire batch of transactions. As described in the TrX paper, a ciphertext is encrypted so that it can be decrypted if and only if the decryptor knows a signature σ under public key pk over a vector of tags (tg_1, …, tg_B) such that its own tag tg ∈ {tg_1, …, tg_B}. Below we describe how this batch witness encryption scheme is constructed.

The Batch Witness Encryption

A batch of B ciphertexts has tags tg_1, …, tg_B. Define f(X) = ∏_{j=1}^{B} (X − tg_j) and the KZG commitment com = g^{f(τ)}. The user wants to encrypt to the signature σ under public key pk, where com commits to a polynomial that has tg as one of its roots.

The relationship between the signature and the commitment can be verified using two pairing product equations (PPEs):

PPE 1 (signature): e(H1(pk), pk) · e(com, pk) = e(σ, h)

PPE 2 (KZG opening): e(π, h^{τ−tg}) = e(com, h)

The witness for decrypting a ciphertext with tag tg has three elements:

  • σ = (H1(pk) · com)^{sk} ∈ 𝔾1: threshold BLS signature (shared across the batch).
  • com ∈ 𝔾1: KZG commitment (shared across the batch).
  • π = g^{q(τ)} ∈ 𝔾1 where q(X) = f(X)/(X − tg): KZG opening proof (per-tag).
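
To make the per-tag opening concrete, here is a toy sketch (not the audited implementation) of the polynomial arithmetic behind com and π, over a small made-up prime field instead of the BLS12-381 scalar field: the quotient q(X) = f(X)/(X − tg) exists exactly when tg is one of the batch tags.

```python
# Toy sketch of the batch polynomial and its per-tag opening quotient.
# The modulus P and the tags are illustrative, not real parameters.
P = 101

def poly_mul(a, b):
    # Multiply two polynomials given as low-to-high coefficient lists.
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def batch_poly(tags):
    # f(X) = prod_j (X - tg_j)
    f = [1]
    for tg in tags:
        f = poly_mul(f, [(-tg) % P, 1])
    return f

def divide_by_linear(f, tg):
    # Synthetic division of f by (X - tg): returns (q, rem), rem = f(tg).
    n = len(f) - 1
    q = [0] * n
    q[n - 1] = f[n]
    for i in range(n - 1, 0, -1):
        q[i - 1] = (f[i] + tg * q[i]) % P
    rem = (f[0] + tg * q[0]) % P
    return q, rem

f = batch_poly([2, 3, 5])
q, rem = divide_by_linear(f, 3)
assert rem == 0            # tg = 3 is in the batch: a valid opening exists
_, rem = divide_by_linear(f, 7)
assert rem != 0            # tg = 7 is not a batch tag: no valid quotient
```

In the real scheme the quotient is never published in the clear; only its commitment π = g^{q(τ)} is, but the divisibility logic is the same.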
Building the Encryption

We want to encrypt a message so that only someone who possesses all three witness elements (σ, π, and com) can decrypt it.

At encryption time, we have none of the witness elements. The only pairing value computable at this point is e(H1(pk), pk), from the left-hand side of PPE 1. We sample a random field element r1 ∈ 𝔽p and set

K = e(H1(pk), pk)^{r1}

as the encapsulation key. The goal is that anyone with all three witness elements can derive the same encapsulation key K from the two PPEs.

Releasing r1 directly would allow anyone to compute K. Instead, we release ciphertext elements (hints) that allow someone with the full witness to recover K, but reveal nothing to anyone else.

Step 1: Using PPE 1 alone

Expand K using PPE 1:

K = e(σ, h)^{r1} · e(com, pk)^{−r1} = e(σ, h^{r1}) · e(com, pk^{−r1})

If we release h^{r1} and pk^{−r1} as hints, anyone with σ and com can pair them and recover K.

The problem: PPE 2 is not enforced. The decryptor does not need π at all. This means anyone with a valid signature σ and any commitment com (not necessarily one where f(tg)=0) could decrypt. We need to bind decryption to knowledge of a valid KZG opening proof.

Step 2: Entangling PPE 2

The idea is to hide the hint pk^{−r1} behind PPE 2, so that extracting it requires π.

Sample a second random scalar r0 ∈ 𝔽p and release pk^{−r1} · h^{r0} instead of pk^{−r1} alone. Now the decryptor can pair com with this combined hint, but gets an extra e(com, h^{r0}) factor that they need to cancel:

e(com, pk^{−r1}) = e(com, pk^{−r1} · h^{r0}) · e(com, h^{−r0})

where the first factor on the right pairs com with the released hint. The second factor e(com, h^{−r0}) is unknown to the decryptor. But PPE 2 provides exactly the bridge to compute it if and only if the decryptor knows π:

e(com, h^{−r0}) = e(com, h)^{−r0} = e(π, h^{r0(tg−τ)})

The first equality is just exponentiation. The second follows from PPE 2. So we release h^{r0(tg−τ)} as a third hint. Anyone who knows π can pair it with this hint and recover the missing factor. Without π, they are stuck. Note that the user can compute this hint without knowing τ directly, given h^τ: h^{r0(tg−τ)} = h^{r0·tg} · (h^τ)^{−r0}.
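
The last step is plain exponent arithmetic, which can be checked directly. A toy sketch (made-up modulus and scalars, not real parameters) showing that the hint is computable from h and h^τ alone, without τ:

```python
# Toy multiplicative group mod a small prime; all parameters are made up.
q = 1019                       # toy prime modulus
h, tau, r0, tg = 2, 123, 45, 67
h_tau = pow(h, tau, q)         # published in the encryption key; tau stays secret

# hint computed WITHOUT tau, using only h and h^tau:
hint = pow(h, r0 * tg, q) * pow(h_tau, -r0, q) % q
# equals h^{r0(tg - tau)} computed directly with tau:
assert hint == pow(h, (r0 * (tg - tau)) % (q - 1), q)
```

Python's three-argument `pow` with a negative exponent computes the modular inverse first, which stands in for group-element inversion here.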

Step 3: The ciphertext

There are three hints, matching the implementation:

ct[0] = h^{r0} · pk^{−r1}    ct[1] = h^{r0(tg−τ)}    ct[2] = h^{r1}

The encapsulation key depends only on r1. The scalar r0 is structural: it creates the entanglement with PPE 2 and cancels out completely during decryption.

We can verify that anyone with the witness can recover K from the hints:

K = e(H1(pk), pk)^{r1}

Expand using PPE 1:

= e(σ, h)^{r1} · e(com, pk)^{−r1}

Pull r1 into the second argument:

= e(σ, h^{r1}) · e(com, pk^{−r1})

Substitute pk^{−r1} = (pk^{−r1} · h^{r0}) · h^{−r0} to split the second term:

= e(σ, h^{r1}) · e(com, h^{r0} · pk^{−r1}) · e(com, h^{−r0}) = e(σ, ct[2]) · e(com, ct[0]) · e(com, h^{−r0})

Apply PPE 2 to the last term, e(com, h^{−r0}) = e(com, h)^{−r0} = e(π, h^{r0(tg−τ)}):

= e(σ, ct[2]) · e(com, ct[0]) · e(π, ct[1])

Every factor is a pairing of a witness element with a hint.

Decryption

We can split the decryption process into public precomputation (pipelined during voting) and a final step on the critical path.

Phase 1: Precomputation (public, per-ciphertext). This is where the decryptor “peels off” the r0 blinding using π and com:

pairing_output = e(π, ct[1]) · e(com, ct[0])

The two pairings correspond to the two sides of the entanglement from Step 2. No threshold signature is needed. This phase runs during voting rounds, once the list of tags is fixed, at which point com and π can be computed by the validator.

Phase 2: Final decryption (requires σ, critical path):

K=e(σ,ct[2])·pairing_output

This is computed after the threshold signature σ is successfully generated.

3-Element vs 2-Element

The TrX paper also introduces a 2-element scheme that merges the two PPEs into one (raising PPE 2 by sk and substituting), eliminating com as a witness element:

e(H1(pk), pk) = e(σ, h) · e(π, pk^{tg−τ})

The implementation uses the 3-element scheme because it does not require pk^τ as part of the encryption key, which simplifies the PVSS setup.

                     3-element           2-element
  Ciphertext         3 𝔾2 elements       2 𝔾2 elements
  Random scalars     2 (r0, r1)          1 (α)
  Encryption key     (pk, h^τ)           (pk, h^τ, pk^τ)
  Precomputation     2 pairings / ct     1 pairing / ct
  Critical path      1 pairing / ct      2 pairings / ct

Protection Against Linear Malleability Attacks

Because KZG commitments and BLS signatures are both linear, an adversary who obtains signatures for two batches can combine them to forge a valid witness for a tag that was never finalized. Given signatures σ1, σ2 on commitments com1, com2, they can pick a + b = 1 and compute σ = σ1^a · σ2^b, which is a valid signature on com = com1^a · com2^b. By choosing a/b so that a target tag tg* is a root of the combined polynomial, the adversary forges a full witness for tg*.
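
The coefficient choice reduces to solving one linear equation. A scalar sketch (toy field and tags, purely illustrative): a + b = 1 together with a·f1(tg*) + b·f2(tg*) = 0 plants tg* as a root of the combined polynomial.

```python
# Toy demonstration of the linear-combination attack on unrandomized
# KZG commitments; the modulus and tags are made up.
P = 101

def f_eval(tags, x):
    # f(x) = prod_j (x - tg_j) mod P
    out = 1
    for tg in tags:
        out = out * (x - tg) % P
    return out

tags1, tags2, tg_star = [2, 3], [5, 7], 11
y1, y2 = f_eval(tags1, tg_star), f_eval(tags2, tg_star)
# solve a + b = 1 with a*y1 + b*y2 = 0  =>  a = y2 / (y2 - y1)
a = y2 * pow(y2 - y1, P - 2, P) % P
b = (1 - a) % P
assert (a * y1 + b * y2) % P == 0   # tg* is a root of a*f1 + b*f2
```

The per-batch randomizer κ described below blocks exactly this computation: the adversary can still form com1^a · com2^b, but can no longer predict which polynomial it commits to.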

TrX fixes this with a per-batch randomizer κ_i baked into the CRS, so commitments become com_i = g^{κ_i·f_i(τ)}. Planting a root in the combined commitment now requires knowing the ratio κ1/κ2, which the adversary cannot compute from the CRS group elements alone.

Earlier schemes bound each batch to a specific block height r, making the attack fail because H1(r1)^a · H1(r2)^b is not a valid hash for any block height. However, this forced users to encrypt to a specific block height at encryption time. If the transaction missed that block, the ciphertext had to be re-encrypted. On Aptos with sub-second blocks, this is impractical. TrX’s κ approach removes the block-height dependency entirely.

For cross-epoch and cross-chain separation, H1(pk) in the signature binds witnesses to the epoch’s aggregate public key, which changes each epoch, preventing replay across epochs or networks.

CPA-Secure to CCA-Secure

The witness encryption scheme above is only CPA-secure. TrX upgrades it to CCA security using the standard Boneh-Canetti-Halevi-Katz (BCHK) transformation, which turns any CPA-secure IBE into CCA-secure PKE by combining it with a one-time signature.

The tag is derived from a fresh ephemeral verification key and the associated data:

tg=HF(vkSig,ad)

where ad is the sender address in the Aptos integration. The full encryption procedure is:

  1. Sample a fresh key pair (vkSig, skSig) ← Sig.KeyGen
  2. Compute tg = HF(vkSig, ad)
  3. Produce the witness encryption ciphertext under tg
  4. Sign the full ciphertext: φ ← Sig.Sign(skSig; vkSig, ad, ct(1), ct(2), ct(3))
  5. Output (ct(1), ct(2), ct(3), vkSig, φ)

This provides two properties. Non-malleability: the ephemeral key signs the whole ciphertext, so any modification is caught by verify_ct. If the adversary re-signs with their own key, the tag changes and the ciphertext encrypts their own message. Tag uniqueness: since each ciphertext samples a fresh vkSig, tags are distinct with overwhelming probability, preventing cross-batch reuse. In particular, an attacker who observes a tag cannot craft a ciphertext with the same tag in order to front-run. A user who reuses their own ephemeral key across two transactions will derive the same tag for both, causing them to be decrypted together as a single batch; this is a protocol violation that only harms the sender.

Overview of the Distributed Key Generation (Chunky)

The batch threshold encryption scheme requires validators to jointly hold a secret field element so that only a coalition above a threshold can reconstruct it. The process of generating and dealing this shared secret is called Distributed Key Generation (DKG). The common process is:

  1. Each validator generates a random secret, splits it into shares, and distributes the shares to the other validators.
  2. Each validator sums the shares it received to get its final key share.
  3. Each validator now holds a share of the global secret key.

Since each validator only contributes one piece of the secret, no single validator knows or controls the final secret.

Note that Aptos already has a DKG for its randomness beacon, but that one operates over elliptic curve group elements. The batch threshold encryption scheme requires the secret to be a field element in order to support BLS signing, which is why a new DKG scheme (Chunky) is needed.

How to Share a Secret With a Threshold

This is weighted Shamir secret sharing. There are n validators and validator i has stake weight w_i. We want any subset with combined weight > t_W to be able to reconstruct the secret. The dealer constructs a degree-t_W polynomial f with f(0) = a_0 as the secret, and gives validator i exactly w_i evaluations f(χ_{i,1}), …, f(χ_{i,w_i}), proportional to its stake. Any subset whose combined weight exceeds t_W has enough points to interpolate f and recover a_0.
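
A minimal sketch of weighted Shamir sharing over a toy field (the weights, threshold, and modulus are illustrative, not Aptos parameters):

```python
import random

P = 2**61 - 1  # toy prime field

def deal(secret, degree, xs):
    # random degree-`degree` polynomial with f(0) = secret, evaluated at xs
    coeffs = [secret] + [random.randrange(P) for _ in range(degree)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in xs}

def reconstruct(points):
    # Lagrange interpolation at x = 0
    secret = 0
    for xi, yi in points.items():
        num, den = 1, 1
        for xj in points:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# three validators with stake weights 3, 2, 1 get that many evaluation points
weights = [3, 2, 1]
points_of, x = [], 1
for w in weights:
    points_of.append(list(range(x, x + w)))
    x += w
t_W = 3  # weight threshold: reconstruction needs weight > 3, i.e. 4+ points
shares = deal(42, t_W, [pt for pts in points_of for pt in pts])
# validators 0 and 1 hold combined weight 5 > t_W: they can reconstruct
coalition = points_of[0] + points_of[1]
assert reconstruct({pt: shares[pt] for pt in coalition}) == 42
# validator 0 alone (weight 3, only 3 points of a degree-3 polynomial) cannot
```

A degree-t_W polynomial needs t_W + 1 points to interpolate, which is exactly what "combined weight exceeds t_W" guarantees.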

How to Generate the Secret So No One Knows It

If one validator generates the whole polynomial, it knows the secret. Instead, each validator i generates its own random degree-t_W polynomial f_i and deals it independently. The final shared polynomial is f = Σ_{i∈Q} f_i, so the secret is a_0 = Σ_{i∈Q} f_i(0). Each validator distributes evaluations of its own f_i to all others, and each recipient sums what it receives to get its share of f.

Not every validator needs to participate. A qualifying set Q with sufficient combined stake is enough. As long as Q contains at least one honest validator, the final polynomial is uniformly random and unknown to any single party.
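
A sketch of the additive combination (toy field, three dealers, all parameters illustrative): each dealer contributes f_i(0), and the recipients' summed shares interpolate to the summed secret.

```python
import random

P = 97                          # toy field
deg, xs = 2, [1, 2, 3]          # degree-2 polynomials, three share points

def rand_poly(secret):
    return [secret] + [random.randrange(P) for _ in range(deg)]

def ev(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def lag0(pts):
    # Lagrange interpolation at x = 0
    s = 0
    for xi, yi in pts.items():
        num = den = 1
        for xj in pts:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s

dealers = [rand_poly(random.randrange(P)) for _ in range(3)]
# each recipient sums the evaluations it received from every dealer
summed = {x: sum(ev(c, x) for c in dealers) % P for x in xs}
# the summed shares reconstruct the summed secret; no dealer knows it alone
assert lag0(summed) == sum(c[0] for c in dealers) % P
```

The key property is that the sum of random polynomials is a random polynomial, so one honest dealer in Q suffices for the combined secret to be unknown to everyone.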

Publicly-Verifiable Secret Sharing

In Aptos, DKG happens between epochs: current-epoch validators deal secrets to next-epoch validators over a public channel with no direct secret communication. This requires the dealing process to be public: the dealer encrypts each secret key share to its receiver, and, to ensure the encrypted share is valid, anyone must be able to verify it without decryption. Chunky is a Publicly-Verifiable Secret Sharing (PVSS) scheme that satisfies both requirements.

A dealing validator samples the secret a_0, picks a random degree-t_W polynomial f with f(0) = a_0, and evaluates f at each validator’s points to get shares s_{i,j} = f(χ_{i,j}).

Step 1: Low-Degree Test

The dealer must prove that the shares are evaluations of a degree-t_W polynomial, without revealing them. It commits each share in 𝔾2 as Ṽ_{i,j} = [s_{i,j}]·G̃ and publishes Ṽ_0 = [a_0]·G̃. The SCRAPE low-degree test then verifies that these commitments are consistent with a degree-t_W polynomial using Reed-Solomon dual codewords. Anyone can run this check without learning the shares.

Step 2: Encryption

The dealer encrypts each share to its intended recipient. Each validator i has a known encryption public key eki=[dki]H.

Since shares are full field elements (~255 bits for BLS12-381), decrypting ElGamal requires solving a discrete log, which is only feasible for small values. To handle this, each share is split into m chunks of ℓ bits each (e.g., ℓ = 32, m = 8):

s_{i,j} = Σ_{k=1}^{m} B^{k−1} · s_{i,j,k},    B = 2^ℓ

Each chunk s_{i,j,k} ∈ [0, B) is small enough for brute-force discrete log. The dealer ElGamal-encrypts each chunk:

C_{i,j,k} = [s_{i,j,k}]·G + [r_{j,k}]·ek_i,    R_{j,k} = [r_{j,k}]·H

To decrypt, validator i computes C_{i,j,k} − [dk_i]·R_{j,k} = [s_{i,j,k}]·G, solves the small discrete log, and recombines the chunks to recover s_{i,j}. Note that all receivers share the same randomness values r_{j,k} in the ciphertext.
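
A toy end-to-end sketch of the chunked ElGamal (made-up group modulus, ℓ = 4, m = 3, 0-based chunk indices; the real scheme uses elliptic-curve groups with ℓ = 32, m = 8):

```python
import random

q = 10007                      # toy prime; group is Z_q^* written multiplicatively
G, Hgen = 5, 3                 # toy bases
dk = 123                       # receiver's decryption key
ek = pow(Hgen, dk, q)          # encryption key ek = [dk]H

ell, m = 4, 3
B = 2**ell
s = 0b1011_0110_0010           # 12-bit toy share
chunks = [(s >> (ell * k)) & (B - 1) for k in range(m)]

# encrypt each chunk: C_k = [s_k]G + [r_k]ek, R_k = [r_k]H (multiplicative here)
r = [random.randrange(q - 1) for _ in range(m)]
C = [pow(G, chunks[k], q) * pow(ek, r[k], q) % q for k in range(m)]
R = [pow(Hgen, r[k], q) for k in range(m)]

# decrypt: strip [dk]R_k, then brute-force the small discrete log
recovered = 0
for k in range(m):
    Mk = C[k] * pow(R[k], -dk, q) % q          # = [s_k]G
    sk = next(v for v in range(B) if pow(G, v, q) == Mk)
    recovered += sk * B**k
assert recovered == s
```

With ℓ = 32 the brute-force search is replaced in practice by a baby-step giant-step style lookup, but the structure is the same.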

Step 3: Validity Checks

We must ensure the ciphertexts actually encrypt the committed shares and not garbage. Three things are checked.

Check 1 — Correct encryption format. A Σ-protocol signature of knowledge (ZKSoK) proves that the dealer knows the randomness and plaintext used in each ElGamal encryption ciphertext.

Check 2 — Chunks recombine to the committed share. The dealer picks randomness with a correlation constraint:

Σ_{k=1}^{m} B^{k−1} · r_{j,k} = 0

This ensures the randomness cancels when chunks are recombined:

Σ_k B^{k−1} · C_{i,j,k} = [s_{i,j}]·G + [Σ_k B^{k−1} · r_{j,k}]·ek_i = [s_{i,j}]·G

since the bracketed sum is 0. So the recombined ciphertext is a deterministic commitment to the full share. Consistency with Ṽ_{i,j} = [s_{i,j}]·G̃ is then verified with a pairing:

e(Σ_k B^{k−1} · C_{i,j,k}, G̃) = e(G, Ṽ_{i,j})

This can be batched across all (i,j) pairs via random linear combinations, collapsing into a single two-pairing check.
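
The cancellation can be checked with scalars alone. A sketch in an exponent-level model where a group element [x]G is represented by the scalar x (toy field, 0-based indices so the weights are B^k; all parameters illustrative):

```python
import random

P = 2**61 - 1                  # toy scalar field
ell, m = 8, 4
B = 2**ell
s = random.randrange(B**m)     # toy share
chunks = [(s >> (ell * k)) & (B - 1) for k in range(m)]

# correlated randomness: free for k < m-1, last one forces the weighted sum to 0
r = [random.randrange(P) for _ in range(m - 1)]
forced = -sum(B**k * r[k] for k in range(m - 1)) * pow(B**(m - 1), P - 2, P) % P
r.append(forced)
assert sum(B**k * r[k] for k in range(m)) % P == 0

# model the group additively in the exponent: [x]G ~ x, ek_i ~ e
e = random.randrange(P)
C = [(chunks[k] + r[k] * e) % P for k in range(m)]
assert sum(B**k * C[k] for k in range(m)) % P == s  # the ek-term cancels
```

Because the dealer gives up one degree of freedom in the randomness, the recombined ciphertext is deterministic in the share, which is exactly what the pairing check exploits.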

Check 3 — Chunks are in range. Each chunk must be in [0, 2^ℓ), otherwise a malicious dealer could encode values that fail to reconstruct. The dealer commits all chunks into a single hiding KZG commitment C, and uses a DeKART range proof to batch-prove that all chunks are ℓ-bit. The Σ-protocol from Check 1 is extended to also prove that the values in C match those encrypted in the ciphertexts. This is called the “ElGamal-to-KZG” relation.

Step 4: Non-Malleability

Without non-malleability, a malicious validator j could take an honest validator i’s transcript (which deals secret z_i), modify it to deal −z_i + r for some known r, sign it under their own identity, and submit it. The combined DKG secret would then be z_i + (−z_i + r) = r, which j fully controls.

To prevent this, the Σ-protocol is made into a zero-knowledge signature of knowledge (ZKSoK) that signs over the dealer’s public key and the epoch number. This binds each transcript to its specific dealer, and substituting a different identity invalidates the proof.

Step 5: Aggregation

The components of a subtranscript (Ṽ_0, Ṽ_{i,j}, C_{i,j,k}, R_{j,k}) are all group elements and can be added pointwise across transcripts. This means individual transcripts can be aggregated: combining |Q| transcripts produces a single subtranscript of the same size, representing the combined secret z = Σ_{i∈Q} z_i. Each validator only needs to decrypt once, from the aggregated subtranscript.

Putting It Together: the DKG

The process runs as below:

  1. Dealing phase. Each validator i picks a random secret zi, runs Chunky’s Deal to produce a signed PVSS transcript, and broadcasts it.

  2. Agreement phase. Validators agree on a qualifying set Q (with sufficient combined stake) of valid transcripts and aggregate them pointwise into a compact subtranscript for the combined secret.

  3. Commit phase. A leader proposes (Q,hash_of_aggregated_subtranscript). Once enough validators attest to it, the aggregated subtranscript is posted on-chain and each validator decrypts its final shares.

Overview of DeKART Range Proof

In the Chunky DKG, each Shamir share that a dealer distributes is a field element around 255 bits wide, which is too large to decrypt directly under ElGamal (decryption requires solving a discrete log). To work around this, the dealer splits each share into m small chunks of ℓ bits each (ℓ = 32, m = 8 in the system) and ElGamal-encrypts each chunk separately. A chunk is only decryptable if it actually fits in [0, 2^ℓ): a malicious dealer who encodes an out-of-range chunk could make recombined shares fail to match the committed share, breaking reconstruction.

This is why the dealer must additionally prove that every chunk lies in [0, 2^ℓ). With n validators each receiving m chunks per share, the dealer needs to range-prove N = n·m values at once. Aptos uses DeKART, a batched range proof, which produces a single short proof covering all N chunks. A separate Σ-protocol binds the values inside DeKART’s commitment to the ones encrypted in the chunked ElGamal ciphertexts; DeKART itself is only concerned with the range claim.

What Is Being Proved

The dealer has a single hiding KZG commitment

C = ρ·[ξ]₁ + Σ_{i=1}^{N} s_i·[L_i(τ)]₁

over a Lagrange basis of size N+1 at positions {ω^0, ω^1, …, ω^N}. Position 0 carries the value 0 and will later be filled with a blinder; the N chunks occupy positions 1, …, N. DeKART convinces the verifier that every s_i sits in [0, 2^ℓ), without revealing anything about the s_i themselves.

The Polynomial Encoding

Let H = {1, ω, ω², …, ω^N} be the evaluation domain and S = H ∖ {1} = {ω, …, ω^N} the “data” positions. The prover works with two families of polynomials over H.

  • f̂(X): the degree-N polynomial whose evaluations over S are the chunks s_i, and whose value at ω^0 = 1 is a fresh random blinder r. Its commitment Ĉ is defined below.
  • f_j(X) for j ∈ [0, ℓ): degree-N polynomials whose evaluations over S are the j-th bit of each chunk, with a fresh blinder r_j at position ω^0.

Two polynomial identities must hold at every X ∈ S:

Radix decomposition. f̂(X) − Σ_{j=0}^{ℓ−1} 2^j·f_j(X) = 0. Each chunk is the radix-2 recombination of its bits.

Bit constraint. f_j(X)·(f_j(X) − 1) = 0 for every j. Each f_j is 0/1 on S.

These identities together imply that every chunk is a sum of bits times powers of two, i.e. in [0, 2^ℓ).

Both identities are enforced over S, not over all of H. This is what makes zero-knowledge possible: the blinders r and r_j at position ω^0 = 1 are free to be random, so the committed polynomials (and one KZG opening) carry no information about the s_i beyond what is already in Ĉ.
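
A pointwise sketch of the two identities on toy 4-bit chunks (evaluations over the data positions only; the blinder slot, commitments, and field size are illustrative):

```python
P = 97
ell = 4
chunks = [5, 12, 0, 15]                      # in-range 4-bit chunks
bits = [[(c >> j) & 1 for c in chunks] for j in range(ell)]
for x in range(len(chunks)):
    # radix decomposition: fhat(x) - sum_j 2^j f_j(x) = 0
    assert (chunks[x] - sum(2**j * bits[j][x] for j in range(ell))) % P == 0
    # bit constraint: f_j(x) * (f_j(x) - 1) = 0
    for j in range(ell):
        assert bits[j][x] * (bits[j][x] - 1) % P == 0

# an out-of-range chunk (17 needs 5 bits) breaks the radix identity
bad = 17
bad_bits = [(bad >> j) & 1 for j in range(ell)]
assert (bad - sum(2**j * bad_bits[j] for j in range(ell))) % P != 0
```

Only ℓ bit polynomials exist, so any value needing more than ℓ bits cannot satisfy both identities simultaneously.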

Collapsing to a Single Quotient

Instead of proving each identity separately, the verifier derives Fiat-Shamir challenges β, β_0, …, β_{ℓ−1} and the prover combines them into one polynomial identity. Let

V_S(X) = (X^{N+1} − 1)/(X − 1)

be the vanishing polynomial of S (it vanishes on every ω^i for i ≥ 1 but not at X = 1). Define

P(X) = β·(f̂(X) − Σ_j 2^j·f_j(X)) + Σ_j β_j·f_j(X)·(f_j(X) − 1).

By the two identities above, P vanishes on every point of S, so V_S(X) divides P(X). The prover computes the quotient

h(X) = P(X)/V_S(X)

and commits to it with another hiding KZG commitment D. A valid h exists if and only if both identities hold over S.

Re-randomizing the Committed Polynomial

The original commitment C (the one fed into the sigma protocol) puts zero at position ω^0. To give DeKART freedom to hide the data behind a random r at that slot, the dealer samples fresh (r, Δρ) and publishes

Ĉ = C + r·[L_0(τ)]₁ + Δρ·[ξ]₁.

This Ĉ is the commitment of f̂. To prove that Ĉ is a legitimate re-randomization, and in particular that only the blinding slot changed, the dealer runs a two-term Okamoto Σ-protocol for the statement

Ĉ − C = r·[L_0(τ)]₁ + Δρ·[ξ]₁

proving knowledge of (r, Δρ). This proof π_PoK is included in the DeKART transcript and verified against the two fixed base points [L_0(τ)]₁ (Lagrange basis at position 0) and [ξ]₁ (KZG hiding base). It uses its own DST, separate from the outer sigma protocol.

Opening at a Random Point

Rather than check the polynomial identity everywhere, the verifier samples a random challenge γ ∉ H (via Fiat-Shamir, resampled until it lands outside the roots of unity). By Schwartz-Zippel, if P(γ) = V_S(γ)·h(γ) then the identity holds as polynomials with overwhelming probability.
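
A toy sketch of this divisibility-at-a-random-point idea (illustrative field; V_S here is the degree-3 vanishing polynomial of a size-4 domain minus the point 1): when P = V_S·h the scalar check passes at any γ, while a polynomial that is not divisible by V_S fails at a random γ with overwhelming probability.

```python
import random

p = 97                          # toy field

def pmul(a, b):
    # multiply coefficient lists (low-to-high)
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out

def pev(c, x):
    return sum(ci * pow(x, i, p) for i, ci in enumerate(c)) % p

VS = [1, 1, 1, 1]               # V_S(X) = (X^4 - 1)/(X - 1) = X^3 + X^2 + X + 1
h = [random.randrange(p) for _ in range(3)]   # some quotient polynomial
P_ok = pmul(VS, h)              # identity holds: P is divisible by V_S
gamma = random.randrange(1, p)
assert pev(P_ok, gamma) == pev(VS, gamma) * pev(h, gamma) % p

P_bad = P_ok[:]                 # tamper: P_bad = P_ok + 1, no longer divisible
P_bad[0] = (P_bad[0] + 1) % p
assert pev(P_bad, gamma) != pev(VS, gamma) * pev(h, gamma) % p
```

In DeKART the verifier of course never sees the polynomials themselves; the KZG opening described next certifies the claimed evaluations at γ.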

The prover evaluates

a = f̂(γ),    a_h = h(γ),    a_j = f_j(γ) for j ∈ [0, ℓ),

sends these scalars to the verifier, and the verifier checks the identity in scalar form:

a_h·V_S(γ) =? β·(a − Σ_j 2^j·a_j) + Σ_j β_j·a_j·(a_j − 1).

The verifier still has to be convinced that (a, a_h, a_j) really are f̂, h, f_j evaluated at γ. This is what the hiding KZG opening does.

Batching Openings Into One

A separate opening per committed polynomial would cost ℓ+2 pairings. Instead, the verifier samples Fiat-Shamir challenges μ, μ_h, μ_0, …, μ_{ℓ−1} and the prover opens the random linear combination

u(X) = μ·f̂(X) + μ_h·h(X) + Σ_j μ_j·f_j(X)

at γ, with claimed value a_u = μ·a + μ_h·a_h + Σ_j μ_j·a_j. The corresponding commitment is an MSM over the per-polynomial commitments:

U = μ·Ĉ + μ_h·D + Σ_j μ_j·C_j.

A single hiding KZG opening proof π_γ (a pair of 𝔾1 elements for the quotient polynomial and its hiding blinder) discharges all ℓ+2 evaluations simultaneously.

Fiat-Shamir Transcript

The transcript is bound to a dedicated DST (APTOS_UNIVARIATE_DEKART_V2_RANGE_PROOF_DST) and proceeds through the protocol in order, with the verifier’s public inputs (the dimensions n, ℓ and the original commitment C) absorbed first. Challenges are derived in this sequence:

  1. Append Ĉ; run the Okamoto sub-protocol and append π_PoK.
  2. Append the chunk commitments C_0, …, C_{ℓ−1}. Squeeze β, β_0, …, β_{ℓ−1}.
  3. Append D. Squeeze γ, rejecting it if it collides with H.
  4. Append the evaluations (a, a_h, a_0, …, a_{ℓ−1}). Squeeze μ, μ_h, μ_0, …, μ_{ℓ−1}.

Binding the challenges in this order is what lets the combined check stand in for the per-point identities.

Proof Structure

The proof sent by the dealer is:

  • Ĉ ∈ 𝔾1: re-randomized commitment to f̂.
  • π_PoK: Okamoto proof of (r, Δρ) for Ĉ − C (one 𝔾1 point and two scalars).
  • C_0, …, C_{ℓ−1} ∈ 𝔾1: hiding commitments to the bit polynomials f_j.
  • D ∈ 𝔾1: hiding commitment to the quotient h.
  • a, a_h, a_0, …, a_{ℓ−1} ∈ 𝔽: evaluations at γ.
  • π_γ: hiding KZG opening proof for u(γ) = a_u (two 𝔾1 points).

Total: ℓ+5 group elements and ℓ+4 field elements, independent of n.

Verification

The verifier recomputes every Fiat-Shamir challenge and then runs three checks:

  1. Re-randomization. Verify π_PoK against (Ĉ − C, [L_0(τ)]₁, [ξ]₁).
  2. Scalar identity. Check a_h·V_S(γ) = β·(a − Σ_j 2^j·a_j) + Σ_j β_j·a_j·(a_j − 1), where V_S(γ) = (γ^{N+1} − 1)/(γ − 1).
  3. Batched opening. Compute U and a_u by MSM over the proof commitments, then run one hiding KZG pairing check for u(γ) = a_u.

If all three pass, every chunk committed in the original C is guaranteed to be in [0, 2^ℓ).

How DeKART Plugs Into Chunky

DeKART’s Ĉ is the same HKZG commitment that appears in the tuple homomorphism of the outer sigma protocol. The outer Σ-protocol proves that the chunks committed in Ĉ are the same chunks encrypted in the chunked ElGamal ciphertexts C_{i,j,k}; DeKART proves those chunks are in range. The two proofs share Ĉ as their sole common handle and are otherwise verified independently, each with its own DST and Fiat-Shamir transcript.

Overview of the Σ-Protocol (Signature of Knowledge)

The DKG overview described three validity checks on a dealing transcript: correct encryption format, chunk-to-share consistency, and chunk range. The first and third of these rely on a Σ-protocol proof that ties these concerns together into a single non-interactive proof and, at the same time, prevents transcript malleability by binding the proof to the dealer’s identity. This section explains how that proof works.

What Is a Σ-Protocol?

A Σ-protocol is a three-move proof of knowledge. The prover knows a secret witness w and wants to convince a verifier that a public statement Y satisfies Y=Ψ(w) for some known homomorphism Ψ, without revealing w. The three moves are:

  1. Commit. The prover samples random r and sends A=Ψ(r).
  2. Challenge. The verifier sends a random scalar c.
  3. Respond. The prover sends z=r+c·w.

The verifier accepts if Ψ(z)=A+c·Y. Because Ψ is a homomorphism, this equation holds if and only if the prover knew w.

In Chunky, the protocol is made non-interactive using the Fiat-Shamir transform: the challenge c is derived by hashing the protocol context, the homomorphism description, the public statement, and the prover’s commitment A into a Merlin transcript. The resulting proof consists of (A,z).
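
A minimal sketch of the three moves with Fiat-Shamir, for the simplest homomorphism Ψ(w) = g^w over a toy group (the modulus, generator, hash construction, and context strings are made up; the real implementation uses BLS12-381 MSMs and a Merlin transcript):

```python
import hashlib
import random

p = 2**61 - 1      # toy prime modulus (Mersenne), not a real parameter
g = 3              # toy generator

def challenge(*parts):
    # toy Fiat-Shamir: hash the context, statement, and commitment
    data = b"".join(str(x).encode() + b"|" for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1)

def prove(w, context):
    Y = pow(g, w, p)                       # public statement Y = Psi(w)
    r = random.randrange(p - 1)
    A = pow(g, r, p)                       # 1. commit: A = Psi(r)
    c = challenge(context, g, Y, A)        # 2. challenge (non-interactive)
    z = (r + c * w) % (p - 1)              # 3. respond: z = r + c*w
    return Y, (A, z)

def verify(Y, proof, context):
    A, z = proof
    c = challenge(context, g, Y, A)
    return pow(g, z, p) == A * pow(Y, c, p) % p   # Psi(z) == A + c*Y

Y, proof = prove(w=123456, context="dealer-pk|epoch-7")
assert verify(Y, proof, "dealer-pk|epoch-7")
assert not verify(Y, proof, "attacker-pk|epoch-7")  # SoK: identity is bound
```

Hashing the context into the challenge is what makes this a signature of knowledge: the same proof verified under a different dealer identity yields a different challenge and fails.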

The Tuple Homomorphism

Recall from the DKG overview that the dealer produces two kinds of public output from the same secret data: an HKZG commitment (used by the DeKART range proof) and the chunked ElGamal ciphertexts. The Σ-protocol must prove that both outputs are consistent with the same underlying witness. This is achieved with a tuple homomorphism in the implementation that maps a single witness to a pair of outputs:

Ψ(ρ, {s_{i,j,k}}, {r_{j,k}}) = (Ĉ, {C_{i,j,k}}, {R_{j,k}})

where Ĉ is the HKZG commitment and ({C_{i,j,k}}, {R_{j,k}}) is the chunked ElGamal output.

The witness has three parts:

  • ρ: the blinding scalar for the hiding KZG commitment.
  • s_{i,j,k}: the ℓ-bit chunks of each Shamir share (the same chunks encrypted in the ciphertexts C_{i,j,k}).
  • r_{j,k}: the correlated ElGamal randomness (satisfying Σ_k B^{k−1}·r_{j,k} = 0 as described earlier).

The first component of the tuple ignores r_{j,k} and computes the HKZG commitment:

HKZG(ρ, {s_{i,j,k}}) = ρ·[ξ]₁ + Σ_{i,j,k} s_{i,j,k}·[L_{i·m+j+1}(τ)]₁

where [ξ]₁ is a hiding base from the SRS and the [L_·(τ)]₁ are Lagrange basis polynomials evaluated at the SRS trapdoor. This is the same commitment that enters the DeKART range proof.

The second component ignores ρ and computes the chunked ElGamal ciphertexts and randomness commitments:

C_{i,j,k} = s_{i,j,k}·G + r_{j,k}·ek_i,    R_{j,k} = r_{j,k}·H

Each component is a “lifted” homomorphism: a projection extracts the relevant fields from the full witness, then the inner homomorphism is applied. The tuple construction ensures that a single proof with a single Fiat-Shamir challenge covers both components, guaranteeing the same witness underlies both the KZG commitment and the ElGamal ciphertexts.

Non-Malleability via Signature of Knowledge

As described in Step 4 of the DKG overview, the Σ-protocol must be non-malleable to prevent an adversary from re-purposing an honest dealer’s transcript. This is achieved by turning the proof into a Signature of Knowledge (SoK): the dealer’s identity is hashed into the Fiat-Shamir challenge, so the proof is bound to a specific dealer and session.

Concretely, the SoK context hashed into the transcript consists of:

  • The dealer’s BLS12-381 signing public key
  • The session/epoch identifier
  • The dealer’s index in the validator set
  • A domain-separation tag (DST)

If any of these fields changes (for example, if an attacker substitutes their own public key), the recomputed challenge differs and the proof becomes invalid.
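A sketch of how the dealer identity is folded into the challenge (hashlib stands in for the Merlin transcript, and the field encoding here is illustrative, not the actual BCS serialization):

```python
# Sketch: the SoK context is hashed into the Fiat-Shamir challenge, so a
# transcript copied by another party yields a different challenge.
import hashlib

def sok_challenge(dealer_pk, epoch, dealer_index, dst, statement, commitment):
    h = hashlib.sha256()
    for part in (dst, dealer_pk, epoch.to_bytes(8, "little"),
                 dealer_index.to_bytes(8, "little"), statement, commitment):
        h.update(len(part).to_bytes(4, "little"))  # length-prefix each field
        h.update(part)
    return int.from_bytes(h.digest(), "little")

honest = sok_challenge(b"dealer-pk-A", 7, 3, b"CHUNKY_DKG_SOK", b"stmt", b"A")
forged = sok_challenge(b"attacker-pk", 7, 3, b"CHUNKY_DKG_SOK", b"stmt", b"A")
# Substituting the public key changes the challenge, so a response copied
# from the honest proof no longer verifies.
assert honest != forged
```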

Fiat-Shamir Transcript

The Fiat-Shamir challenge is derived via a Merlin transcript with the following binding order:

  1. DST: the protocol’s domain-separation tag, written as the Merlin transcript label.
  2. SoK context: the dealer’s public key, session ID, dealer index, and DST, serialized with BCS.
  3. Homomorphism bases: all MSM base points (the generators $G$ and $H$, the HKZG SRS elements, and all encryption keys $ek_i$).
  4. Public statement: the tuple $\Psi(w)$ (the HKZG commitment, all ciphertexts $C_{i,j,k}$, and all randomness commitments $R_{j,k}$).
  5. Prover’s commitment: the first message $A = \Psi(r)$.

The challenge is squeezed as a full-size field element: the transcript outputs 2× the field bit-length of hash output, which is then reduced modulo the field order, so the result is statistically close to uniform.
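A sketch of the oversample-and-reduce squeeze, assuming the BLS12-381 scalar field; hashlib stands in for the Merlin transcript here:

```python
# Sketch: squeeze a statistically uniform field element by sampling about
# 2x the field bit-length and reducing mod the field order.
import hashlib

# BLS12-381 scalar field order (~255 bits).
R = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001

def squeeze_challenge(transcript_state: bytes) -> int:
    # Expand to 64 bytes (512 bits, about 2x the 255-bit field) via two
    # domain-separated hash calls, then reduce; bias is ~2^-257.
    wide = (hashlib.sha256(transcript_state + b"\x00").digest()
            + hashlib.sha256(transcript_state + b"\x01").digest())
    return int.from_bytes(wide, "little") % R

c = squeeze_challenge(b"example-state")
assert 0 <= c < R
```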

Verification

Both components of the tuple homomorphism are multi-scalar multiplications (MSMs), so the verification equation $\Psi(z) = A + c\cdot Y$ reduces to checking that a single MSM evaluates to the group identity:

$$\sum_i b_i\cdot z_i - A - c\cdot Y = \mathcal{O}$$

The two components of the tuple can be batched into a single MSM using a random scalar.
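The batching trick can be sketched over a toy group: two Schnorr-style component checks, each of the form $z\cdot B - A - c\cdot Y = 0$, are combined with a random scalar so one equation covers both (all values illustrative):

```python
# Sketch of random-scalar batching of two MSM identity checks in a toy
# additive group Z_q (stand-in for BLS12-381 G1).
import secrets

q = 2**61 - 1  # toy group order

# One Schnorr-style relation per component: Y = x * B (base B, witness x).
B1, B2 = 5, 7
x1, x2 = 1111, 2222
Y1, Y2 = (x1 * B1) % q, (x2 * B2) % q

# Prover: commitments A = r * B, responses z = r + c * x.
r1, r2 = 3333, 4444
A1, A2 = (r1 * B1) % q, (r2 * B2) % q
c = 987654321  # Fiat-Shamir challenge, fixed here for the sketch
z1 = (r1 + c * x1) % q
z2 = (r2 + c * x2) % q

# Verifier: each check is z*B - A - c*Y == 0; a random alpha folds both
# checks into one combination that must equal the identity.
alpha = secrets.randbelow(q)
check1 = (z1 * B1 - A1 - c * Y1) % q
check2 = (z2 * B2 - A2 - c * Y2) % q
batched = (check1 + alpha * check2) % q
assert batched == 0
```

A cheating prover who satisfies only one component check passes the batched equation with probability at most 1/q over the choice of alpha.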

The DeKART range proof also contains its own inner Σ-protocol (an Okamoto proof showing knowledge of the blinding randomness), which uses a separate DST and Fiat-Shamir transcript. Both the outer SoK and the DeKART proof are stored together in the dealing transcript and verified independently, but they share the same HKZG commitment $\hat{C}$ as the link between them.
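For reference, an Okamoto proof shows knowledge of two exponents opening a Pedersen-style commitment. A minimal sketch in a toy additive group (illustrative values, not the DeKART implementation):

```python
# Sketch of an Okamoto-style proof: knowledge of (s, rho) opening
# C = s*G + rho*H, in a toy group Z_q standing in for BLS12-381 G1.
q = 2**61 - 1
G, H = 5, 7

s, rho = 31337, 2718         # witness
C = (s * G + rho * H) % q    # commitment being opened

# Commit, challenge, respond (one response per witness component).
rs, rr = 1010, 2020
A = (rs * G + rr * H) % q
chal = 424242                # Fiat-Shamir challenge, fixed for the sketch
zs = (rs + chal * s) % q
zr = (rr + chal * rho) % q

# Verify: zs*G + zr*H == A + chal*C.
assert (zs * G + zr * H) % q == (A + chal * C) % q
```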

Overview of Consensus Integration

DKG Integration

At a high level, the consensus integration turns Chunky DKG into an epoch-handoff pipeline: during epoch N reconfiguration, validators jointly produce one certified aggregated DKG output, commit it on chain, and carry it into epoch N+1 as the canonical encryption material for encrypted mempool operations. This work is intentionally outside the per-block consensus critical path, and its output is buffered in the reconfiguration flow and consumed when the next epoch starts.

Concretely, when the chain transitions from epoch N to epoch N+1, and the relevant on-chain feature flags are enabled (vtxn_enabled and chunky_dkg_enabled), Move starts both the existing randomness DKG and the new Chunky DKG. Each current-epoch validator runs a local ChunkyDKGManager, produces one signed Chunky PVSS transcript, and broadcasts it off-chain via ReliableBroadcast. Validators verify incoming transcripts independently and keep collecting them until the dealer set has enough voting power.

Once a validator reaches quorum, it pointwise-aggregates the accepted dealer transcripts into a single AggregatedSubtranscript. This compression step is important: instead of carrying one transcript per dealer into the next epoch, the protocol reduces the whole qualified dealer set into one compact object from which each validator can later recover its own final secret-share material.
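Pointwise aggregation itself is simple: corresponding group elements across dealer transcripts are summed position by position. A toy sketch (integers mod q stand in for curve points; the function name is illustrative):

```python
# Sketch of pointwise aggregation of dealer transcripts. Each transcript
# is a list of group elements; the aggregate sums them position-wise.
q = 2**61 - 1  # toy group order

def aggregate(transcripts):
    return [sum(col) % q for col in zip(*transcripts)]

dealer_a = [1, 2, 3]
dealer_b = [10, 20, 30]
dealer_c = [100, 200, 300]
# Three per-dealer transcripts compress into one object of the same shape.
assert aggregate([dealer_a, dealer_b, dealer_c]) == [111, 222, 333]
```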

The aggregation covers only the subtranscripts, not the accompanying proofs. As a result, other validators have no direct way to verify the validity of an AggregatedSubtranscript. To resolve this, the protocol adds a certification step. A validator that has formed an aggregate sends a ChunkyDKGSubtranscriptSignatureRequest containing the dealer list, the aggregate hash, and per-dealer transcript hashes. A recipient fetches and verifies any missing transcripts, recomputes the aggregate, and signs only if all individual transcripts are valid. In effect, consensus certifies not individual transcripts but one exact aggregated output.

After a validator collects quorum signatures over that aggregate, it emits a ValidatorTransaction::ChunkyDKGResult. On chain, finish_with_chunky_dkg_result() verifies the quorum signature, stores the certified aggregated subtranscript, and publishes the derived encryption key for epoch N+1. Reconfiguration only fully completes once both the randomness DKG path and the Chunky DKG path have finished, so the next epoch starts with one certified DKG result that all honest validators can consume consistently.

Round Decryption Key Derivation

At epoch start, each validator uses its private key to decrypt its own secret share from the certified Chunky transcript, obtaining a SecretKeyShare that it holds for the duration of the epoch.

Decryption keys are derived per round. Once a block reaches a quorum certificate, each validator computes the round digest, which is a commitment to the IBE tags of the encrypted transactions in that block. Using the digest and its SecretKeyShare, the validator produces a per-round decryption key share.

The key share is only released after the block is finalized. This ordering is critical: releasing a key share before finalization would allow the encrypted transactions to be decrypted while they are still in the mempool, breaking confidentiality. Once the block is finalized, each validator broadcasts its key share via SecretShareManager. When enough shares are collected to meet the reconstruction threshold, any validator can locally reconstruct the full decryption key. The threshold secret sharing scheme guarantees that all honest validators reconstruct the same key, regardless of which subset of shares they use.
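The reconstruction step can be sketched with standard Shamir interpolation at zero; the snippet uses a small prime field and illustrative parameters rather than the production threshold configuration:

```python
# Sketch of threshold reconstruction via Lagrange interpolation at x=0.
P = 2**61 - 1  # toy prime field (the real scheme uses the BLS12-381 scalar field)

def eval_poly(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def reconstruct(shares):
    # shares: list of (x, f(x)); returns f(0) by Lagrange interpolation.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Modular inverse via Fermat's little theorem (P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

secret = 123456789
poly = [secret, 17, 42]  # degree 2 -> reconstruction threshold 3
shares = [(x, eval_poly(poly, x)) for x in range(1, 6)]  # 5 validators
# Any 3-of-5 subset reconstructs the same secret.
assert reconstruct(shares[:3]) == secret
assert reconstruct(shares[2:]) == secret
```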

SecretShareManager uses an optimistic verification strategy to reduce per-share verification overhead. Initially, it accepts incoming shares without cryptographic verification, assuming all validators are honest. When the reconstruction threshold is reached, it attempts to aggregate the collected shares. If aggregation fails, it falls back to verifying each share individually, evicts the invalid ones, and moves the offending validator to a pessimistic set. All future shares from a pessimistic validator are verified before acceptance for the rest of the epoch.
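The optimistic strategy can be sketched as follows; the class, method names, and toy validity checks are illustrative, not the actual SecretShareManager API:

```python
# Sketch of optimistic share collection: accept without verification,
# aggregate at threshold, and fall back to per-share checks on failure.
class ShareCollector:
    def __init__(self, threshold, verify_share, verify_aggregate):
        self.threshold = threshold
        self.verify_share = verify_share
        self.verify_aggregate = verify_aggregate
        self.shares = {}          # validator -> share
        self.pessimistic = set()  # validators whose shares are pre-verified

    def add_share(self, validator, share):
        if validator in self.pessimistic and not self.verify_share(share):
            return  # drop invalid share from a known-bad validator
        self.shares[validator] = share

    def try_aggregate(self):
        if len(self.shares) < self.threshold:
            return None
        agg = sum(self.shares.values())  # optimistic: no per-share checks
        if self.verify_aggregate(agg):
            return agg
        # Fallback: verify individually, evict bad shares, mark senders.
        for v, s in list(self.shares.items()):
            if not self.verify_share(s):
                del self.shares[v]
                self.pessimistic.add(v)
        return None

# Toy instantiation: valid shares are even, so a valid aggregate is even.
col = ShareCollector(3, lambda s: s % 2 == 0, lambda a: a % 2 == 0)
col.add_share("v1", 2)
col.add_share("v2", 4)
col.add_share("v3", 7)  # invalid share from a misbehaving validator
assert col.try_aggregate() is None and "v3" in col.pessimistic
col.add_share("v4", 6)  # an honest replacement share arrives
assert col.try_aggregate() == 12
```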

Once the decryption key is reconstructed, it is passed to the decryption pipeline to decrypt the encrypted transactions in the finalized block.

Encrypted Transaction Flow

When the certified Chunky DKG output settles at epoch start, the derived aggregate encryption key is published on chain by the Move decryption module and exposed through the ledger state. Clients fetch this key, encrypt their transaction payload locally, and submit a signed transaction whose payload variant is EncryptedPayload::Encrypted. The transaction signature covers the encrypted form, and the ciphertext is bound to the sender’s address as associated data, so a ciphertext cannot be replayed under a different sender after submission. Admission is gated both locally, by the node’s allow_encrypted_txns_submission flag, and globally, by the on-chain ENCRYPTED_TRANSACTIONS feature flag, and the API rejects any payload that arrives in a non-Encrypted state.
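The binding of a ciphertext to the sender's address can be illustrated with a stdlib encrypt-then-MAC construction standing in for the real AEAD; the key, keystream, and addresses below are purely illustrative:

```python
# Sketch: the sender address enters the MAC as associated data, so a
# ciphertext replayed under a different sender fails authentication.
import hashlib, hmac

def keystream(key, n):
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "little")).digest()
        ctr += 1
    return out[:n]

def encrypt(key, sender, plaintext):
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))
    tag = hmac.new(key, sender + ct, hashlib.sha256).digest()
    return ct, tag

def decrypt(key, sender, ct, tag):
    expected = hmac.new(key, sender + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("associated data mismatch")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))

key = b"k" * 32
ct, tag = encrypt(key, b"0xalice", b"transfer 10 APT")
assert decrypt(key, b"0xalice", ct, tag) == b"transfer 10 APT"
try:
    decrypt(key, b"0xmallory", ct, tag)  # replay under a different sender
    raise AssertionError("should have been rejected")
except ValueError:
    pass
```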

Inside QuorumStore, encrypted transactions flow through a track parallel to, and separate from, regular transactions. Batch V2 carries a BatchKind discriminator, Normal or Encrypted, and the batch generator buckets transactions by kind, applying gas-bucketing and per-sender size limits independently per kind. Encrypted batches are required to be homogeneous: an encrypted batch contains only encrypted transactions, and a normal batch contains none. This separation matters because every later stage treats the two kinds differently. In particular, proposal pulls apply a separate per-kind block limit encrypted_txn_limit, sourced from SecretShareConfig and defaulting to zero when no decryption configuration is available, so a validator that cannot participate in decryption is also prevented from packing encrypted batches into the blocks it proposes.
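The homogeneity invariant can be sketched as a simple bucketing pass (the batch size, field names, and structure are illustrative, not the QuorumStore data model):

```python
# Sketch: bucket transactions by kind before batching, so no batch ever
# mixes Normal and Encrypted transactions.
def form_batches(txns, max_batch_size=2):
    buckets = {"Normal": [], "Encrypted": []}
    for txn in txns:
        buckets[txn["kind"]].append(txn)
    batches = []
    for kind, bucket in buckets.items():
        # Split each kind's bucket into size-limited homogeneous batches.
        for i in range(0, len(bucket), max_batch_size):
            batches.append({"kind": kind, "txns": bucket[i:i + max_batch_size]})
    return batches

txns = [{"kind": "Normal", "id": 1}, {"kind": "Encrypted", "id": 2},
        {"kind": "Normal", "id": 3}, {"kind": "Encrypted", "id": 4},
        {"kind": "Normal", "id": 5}]
batches = form_batches(txns)
# Every batch is homogeneous in kind.
assert all(all(t["kind"] == b["kind"] for t in b["txns"]) for b in batches)
```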

Once consensus orders a block, the pipeline materializes payload references and partitions transactions into encrypted and regular sets. Materialization and decryption precomputation start optimistically on the QC path, before final ordering completes; this overlaps the expensive eval-proof and ciphertext-preparation work with consensus latency rather than serializing them. The final decryption pass, however, waits on the aggregated per-round decryption key from SecretShareManager. This dependency is the structural enforcement of the “decrypt only after finalization” invariant: precomputation can run early, but the pairing that actually recovers each plaintext is gated on the aggregated key, which only becomes available once the ordered-block path has produced the block.

After decryption, each encrypted transaction transitions to one of two terminal states. Decrypted carries the recovered executable along with the optional claimed-entry-function check; FailedDecryption carries a reason, which is one of CryptoFailure, BatchLimitReached, ConfigUnavailable, DecryptionKeyUnavailable, or ClaimedEntryFunctionMismatch. The prepare stage concatenates the decrypted partition with the regular partition and hands the combined block to execution. At the VM, only BatchLimitReached is retryable; all other failure reasons run the failure epilogue, which charges gas and increments the sender’s sequence number, so a failed decryption is a final on-chain outcome rather than a free retry. This ensures that an encrypted transaction either executes or is permanently rejected, and that a sender cannot use ciphertext failures to obtain repeated admission at no cost.
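The retryability rule reduces to a single predicate over the failure reasons listed above; a minimal sketch (the enum mirrors the reasons named in this report, while the predicate name is illustrative):

```python
# Sketch: only BatchLimitReached is retryable; every other reason runs
# the failure epilogue (charges gas, bumps the sequence number) and is final.
from enum import Enum, auto

class FailureReason(Enum):
    CryptoFailure = auto()
    BatchLimitReached = auto()
    ConfigUnavailable = auto()
    DecryptionKeyUnavailable = auto()
    ClaimedEntryFunctionMismatch = auto()

def is_retryable(reason: FailureReason) -> bool:
    return reason is FailureReason.BatchLimitReached

assert is_retryable(FailureReason.BatchLimitReached)
assert not is_retryable(FailureReason.CryptoFailure)
```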