# Audit of Aptos Encrypted Mempool

- **Date**: March 13th, 2026
- **Tags**: Encrypted Mempool, Aptos, Witness Encryption, Threshold Encryption, BLS, KZG, DKG, Trusted Setup

## Introduction

On March 9th, 2026, zkSecurity was commissioned to perform a security audit of Aptos Labs' encrypted mempool. The encrypted mempool allows users to encrypt their transactions before submission so that transaction contents remain private until they are finalized in a block, protecting against MEV (Maximal Extractable Value) attacks such as front-running and sandwich attacks.

The audit was conducted by two consultants over four weeks (40 engineering days). A number of observations and findings were reported to the Aptos Labs team and are detailed in the later sections of this report.

### Scope

The audit covered the Aptos encrypted mempool implementation in the Aptos Core repository ([aptos-labs/aptos-core](https://github.com/aptos-labs/aptos-core)) and was divided into three phases, each targeting a distinct component.

**Phase 1: Batch threshold encryption** (commit `ba538036b4d1ba44a9dcff7d3c17bc3d861981b5`)

- Batched IBE threshold encryption scheme and its underlying components
- Hybrid encryption and key management (KEM and data encapsulation)
- Symmetric encryption, key derivation, randomness, and nonce handling
- Rust and TypeScript implementation of the scheme

**Phase 2: Chunky PVSS and underlying components** (commit `a8b7a672fefd4fd55c72eccf2880ce158893ed26`)

- Chunky PVSS scheme and its underlying cryptographic components
- DeKART zero-knowledge range proof
- Rust implementation of these schemes

**Phase 3: Encrypted mempool integration** (commit `e582a7599c8a8880bda3fa887a6b89142fbfabf4`)

- Integration of the batched IBE threshold encryption
- Integration of Chunky DKG
- Integration of the decryption secret key share derivation and aggregation
- Transaction encryption, proposal, validation, decryption, and execution with respect to the encrypted mempool

Gas metering for encrypted transactions is out of scope for this audit.

### Security Goals

The audit evaluated the following security properties of the Aptos encrypted mempool.

**Cryptographic soundness.** The core cryptographic building blocks must be secure:

- The batched IBE threshold encryption scheme (BIBE) must provide CCA security.
- The Chunky PVSS scheme must correctly distribute secret shares among validators.
- The DeKART zero-knowledge range proof must be sound (and zero-knowledge).

**Transaction confidentiality.** A transaction's content must remain private until it is finalized in a block and decrypted immediately before execution. Specifically:

- An unfinalized encrypted transaction must not be decryptable by any party.
- The decryption secret for a round must only be derivable after the round is finalized.
- A transaction encrypted in one epoch must not be decryptable using a key from a different epoch.

**Transaction integrity.** The encrypted mempool must prevent adversarial manipulation of transactions. Specifically:

- An encrypted transaction must not be decryptable to a different plaintext than what the sender intended.
- An adversary must not be able to copy, replay, or malleate another user's encrypted transaction.
- A transaction that is decrypted should be executed. An adversary should not be able to cause a transaction to be decrypted without execution, leaking the sender's intent without their consent.

**Liveness.** The integration must not introduce new liveness vulnerabilities. Specifically:

- A malicious validator must not be able to stall DKG, block secret share reconstruction, or prevent the network from progressing.
- A malicious validator must not be able to cause the DKG to deal an inconsistent secret, or cause the derived decryption key in a round to be inconsistent.
- A malicious user must not be able to submit a transaction that causes other decryptions to fail or crash validators.
- A malicious user or validator must not be able to consistently spam the network to prevent other encrypted transactions from being executed.

<div style="page-break-after: always;"></div>

## Overview

The Aptos encrypted mempool allows users to encrypt their transactions locally, keeping transaction contents encrypted while in the mempool. Transactions are only decrypted after being included in a finalized block, immediately before execution. The system uses a threshold encryption scheme in which only a threshold of validators (by stake) can decrypt a transaction. If a transaction is not finalized, it will not be decrypted under the honest-majority validator assumption. By keeping transaction contents private until finalization, the system aims to mitigate some MEV attacks such as sandwich attacks and front-running.

The two core cryptographic components of Aptos's encrypted mempool are the batch threshold encryption scheme and the distributed key generation (DKG). We first describe the [batch threshold encryption scheme](#overview-batch-threshold-encryption), then the [distributed key generation](#overview-dkg) process. The DKG relies on [DeKART range proofs](#overview-dekart-range-proof) and a [$\Sigma$-protocol signature of knowledge](#overview-sigma-protocol), before being integrated into [consensus](#overview-consensus-integration).

<a id="overview-batch-threshold-encryption"></a>
### Overview of the Batch Threshold Encryption Scheme

The batch threshold encryption allows users to encrypt their transactions individually, while a threshold of validators can efficiently decrypt them together in a batch. The scheme is described in the [TrX paper](https://eprint.iacr.org/2025/2032).

#### Threshold Encryption

To achieve threshold encryption, the scheme combines BLS threshold signatures with witness encryption. A BLS threshold signature requires $t$ out of $n$ validators to produce a valid aggregate signature. The user encrypts their transaction so that only someone who holds the BLS signature (the witness) over a specific message can decrypt it. To decrypt, each validator produces a partial signature; once $t$ partial signatures are collected, the aggregate signature is derived and the transaction is decrypted.

A naive implementation would have each user sample a random message and encrypt their transaction to the BLS threshold signature over that message. The transaction carries the message so that validators can sign it. However, this does not scale: different transactions require different signatures, so each validator's signing work grows proportionally with the number of encrypted transactions.

An alternative approach is to have all users encrypt their transactions to the signature over the same message, so that validators only need to sign once. For example, users could encrypt to a signature over the target round number (or block height). However, there is no guarantee that a transaction is included in a specific round. If a transaction misses its target round but validators have already signed over that round number, the transaction would be decryptable even though it is never finalized.

Aptos mitigates this by having each user encrypt their transaction to the threshold signature over a KZG commitment to its random tag. Since a KZG commitment can commit to multiple tags at once, a single signature over one commitment can serve to decrypt an entire batch of transactions. As described in the [TrX paper](https://eprint.iacr.org/2025/2032), a ciphertext is encrypted so that it can be decrypted if and only if the decryptor knows a signature $\sigma$ under public key $\mathsf{pk}$ over a vector of tags $(\mathsf{tg}_1, \ldots, \mathsf{tg}_B)$ such that the ciphertext's own tag satisfies $\mathsf{tg} \in \{\mathsf{tg}_1, \ldots, \mathsf{tg}_B\}$. Below we describe how this batch witness encryption scheme is constructed.

#### The Batch Witness Encryption

A batch of $B$ ciphertexts has tags $\mathsf{tg}_1, \ldots, \mathsf{tg}_B$. Define $f(X) = \prod_{j=1}^{B}(X - \mathsf{tg}_j)$ and the KZG commitment $\mathsf{com} = g^{f(\tau)}$. The user wants to encrypt to the signature $\sigma$ under public key $\mathsf{pk}$, where $\mathsf{com}$ commits to a polynomial that has $\mathsf{tg}$ as one of its roots.

The relationship between the signature and the commitment can be verified using two pairing product equations (PPEs):

**PPE 1** (signature): $\quad e(H_1(\mathsf{pk}),\; \mathsf{pk})  \cdot e(\mathsf{com},\; \mathsf{pk}) = e(\sigma,\; h)$

**PPE 2** (KZG opening): $\quad e(\pi,\; h^{\tau - \mathsf{tg}}) = e(\mathsf{com},\; h)$

The witness for decrypting a ciphertext with tag $\mathsf{tg}$ has three elements:

- $\sigma = (H_1(\mathsf{pk}) \cdot \mathsf{com})^{\mathsf{sk}} \in \mathbb{G}_1$: threshold BLS signature (shared across batch).
- $\mathsf{com} \in \mathbb{G}_1$: KZG commitment (shared across batch).
- $\pi = g^{q(\tau)} \in \mathbb{G}_1$ where $q(X) = f(X)/(X - \mathsf{tg})$: KZG opening proof (per-tag).
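
As a sanity check, PPE 1 follows from the definition of $\sigma$ and bilinearity, using $\mathsf{pk} = h^{\mathsf{sk}}$:

$$e(\sigma,\; h) = e\big((H_1(\mathsf{pk}) \cdot \mathsf{com})^{\mathsf{sk}},\; h\big) = e(H_1(\mathsf{pk}),\; \mathsf{pk}) \cdot e(\mathsf{com},\; \mathsf{pk})$$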

##### Building the Encryption

We want to encrypt a message so that only someone who possesses all three witness elements ($\sigma$, $\pi$, and $\mathsf{com}$) can decrypt it.

At encryption time, we have none of the witness elements. The only pairing value computable at this point is the first factor on the left-hand side of PPE 1: $e(H_1(\mathsf{pk}), \mathsf{pk})$. We sample a random field element $r_1 \xleftarrow{\$} \mathbb{F}_p$ and set

$$K = e(H_1(\mathsf{pk}),\; \mathsf{pk})^{-r_1}$$

as the encapsulation key. The goal is that anyone with all three witness elements can derive the same encapsulation key $K$ from the two PPEs.

Releasing $r_1$ directly would allow anyone to compute $K$. Instead, we release ciphertext elements (hints) that allow someone with the full witness to recover $K$, but reveal nothing to anyone else.

#### Step 1: Using PPE 1 alone

Expand $K$ using PPE 1:

$$K = e(\sigma,\; h)^{-r_1} \cdot e(\mathsf{com},\; \mathsf{pk})^{r_1} = e(\sigma,\; h^{-r_1}) \cdot e(\mathsf{com},\; \mathsf{pk}^{r_1})$$

If we release $h^{-r_1}$ and $\mathsf{pk}^{r_1}$ as hints, anyone with $\sigma$ and $\mathsf{com}$ can pair them and recover $K$.

**The problem:** PPE 2 is not enforced. The decryptor does not need $\pi$ at all. This means anyone with a valid signature $\sigma$ and *any* commitment $\mathsf{com}'$ (not necessarily one where $f(\mathsf{tg}) = 0$) could decrypt. We need to bind decryption to knowledge of a valid KZG opening proof.

#### Step 2: Entangling PPE 2

The idea is to hide the hint $\mathsf{pk}^{r_1}$ behind PPE 2, so that extracting it requires $\pi$.

Sample a second random scalar $r_0 \xleftarrow{\$} \mathbb{F}_p$ and release $\mathsf{pk}^{r_1} \cdot h^{r_0}$ instead of $\mathsf{pk}^{r_1}$ alone. Now the decryptor can pair $\mathsf{com}$ with this combined hint, but gets an extra $e(\mathsf{com}, h^{r_0})$ term that they need to cancel:

$$e(\mathsf{com},\; \mathsf{pk}^{r_1}) = e(\mathsf{com},\; \underbrace{\mathsf{pk}^{r_1} \cdot h^{r_0}}_{\text{released hint}}) \cdot e(\mathsf{com},\; h^{-r_0})$$

The second factor $e(\mathsf{com}, h^{-r_0})$ is unknown to the decryptor. But PPE 2 provides exactly the bridge to compute it if and only if the decryptor knows $\pi$:

$$e(\mathsf{com},\; h^{-r_0}) = e(\mathsf{com},\; h)^{-r_0} = e(\pi,\; h^{r_0(\mathsf{tg} - \tau)})$$

The first equality is just exponentiation. The second follows from PPE 2. So we release $h^{r_0(\mathsf{tg} - \tau)}$ as a third hint. Anyone who knows $\pi$ can pair it with this hint and recover the missing term. Without $\pi$, they're stuck. Note that the user can compute this hint without knowing $\tau$ directly, given $h^\tau$: $h^{r_0(\mathsf{tg} - \tau)} = h^{r_0\mathsf{tg}} \cdot (h^{\tau})^{-r_0}$.

#### Step 3: The ciphertext

There are three hints, matching the implementation:

$$\mathsf{ct}[0] = h^{r_0} \cdot \mathsf{pk}^{r_1}$$

$$\mathsf{ct}[1] = h^{r_0(\mathsf{tg} - \tau)}$$

$$\mathsf{ct}[2] = h^{-r_1}$$

The encapsulation key depends only on $r_1$. The scalar $r_0$ is structural: it creates the entanglement with PPE 2 and cancels out completely during decryption.

We can verify that anyone with the witness can recover $K$ from the hints:

$$K = e(H_1(\mathsf{pk}),\; \mathsf{pk})^{-r_1}$$

Expand using PPE 1:

$$= e(\sigma,\; h)^{-r_1} \cdot e(\mathsf{com},\; \mathsf{pk})^{r_1}$$

Pull $r_1$ into the second argument:

$$= e(\sigma,\; h^{-r_1}) \cdot e(\mathsf{com},\; \mathsf{pk}^{r_1})$$

Substitute $\mathsf{pk}^{r_1} = (\mathsf{pk}^{r_1} \cdot h^{r_0}) \cdot h^{-r_0}$ to split the second term:

$$= e(\sigma,\; \underbrace{h^{-r_1}}_{\mathsf{ct}[2]}) \cdot e(\mathsf{com},\; \underbrace{h^{r_0} \cdot \mathsf{pk}^{r_1}}_{\mathsf{ct}[0]}) \cdot e(\mathsf{com},\; h^{-r_0})$$

Apply PPE 2 to the last term: $e(\mathsf{com}, h^{-r_0}) = e(\mathsf{com}, h)^{-r_0} = e(\pi, h^{r_0(\mathsf{tg} - \tau)})$:

$$= e(\sigma,\; \underbrace{h^{-r_1}}_{\mathsf{ct}[2]}) \cdot e(\mathsf{com},\; \underbrace{h^{r_0} \cdot \mathsf{pk}^{r_1}}_{\mathsf{ct}[0]}) \cdot e(\pi,\; \underbrace{h^{r_0(\mathsf{tg} - \tau)}}_{\mathsf{ct}[1]})$$

Every factor is a pairing of a witness element with a hint.
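
To make the construction concrete, here is a minimal sketch of the encapsulation and decapsulation arithmetic using arkworks (function names and shapes are illustrative, not the aptos-core implementation). Note that arkworks writes all groups, including the pairing target group, additively, so $K = e(H_1(\mathsf{pk}), \mathsf{pk})^{-r_1}$ becomes a scalar multiplication:

```rust
use ark_bls12_381::{Bls12_381 as E, Fr, G1Projective as G1, G2Projective as G2};
use ark_ec::pairing::{Pairing, PairingOutput};
use ark_ff::UniformRand;

/// The three hints ct[0], ct[1], ct[2], all in G2.
struct Hints { ct0: G2, ct1: G2, ct2: G2 }

/// Encapsulation: given h, pk = h^sk, h_tau = h^tau (from the CRS), the
/// hash point h1 = H_1(pk) in G1, and the tag tg, derive K and the hints.
fn encapsulate(h: G2, pk: G2, h_tau: G2, h1: G1, tg: Fr) -> (PairingOutput<E>, Hints) {
    let mut rng = ark_std::test_rng(); // deterministic rng, for the sketch only
    let (r0, r1) = (Fr::rand(&mut rng), Fr::rand(&mut rng));
    let k = E::pairing(h1, pk) * (-r1); // K = e(H_1(pk), pk)^{-r1}
    let hints = Hints {
        ct0: h * r0 + pk * r1,           // ct[0] = h^{r0} * pk^{r1}
        ct1: h * (r0 * tg) - h_tau * r0, // ct[1] = h^{r0(tg - tau)}, no tau needed
        ct2: -(h * r1),                  // ct[2] = h^{-r1}
    };
    (k, hints)
}

/// Decapsulation: anyone holding the full witness (sigma, com, pi)
/// recovers K as the product of three pairings, one per hint.
fn decapsulate(sigma: G1, com: G1, pi: G1, ct: &Hints) -> PairingOutput<E> {
    E::pairing(sigma, ct.ct2) + E::pairing(com, ct.ct0) + E::pairing(pi, ct.ct1)
}
```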

#### Decryption

We can split the decryption process into public precomputation (pipelined during voting) and a final step on the critical path.

**Phase 1: Precomputation** (public, per-ciphertext). This is where the decryptor "peels off" the $r_0$ blinding using $\pi$ and $\mathsf{com}$:

$$\mathsf{pairing\_output} = \bigl[e(\pi,\; \mathsf{ct}[1]) \cdot e(\mathsf{com},\; \mathsf{ct}[0])\bigr]$$

The two pairings correspond to the two sides of the entanglement from Step 2. No threshold signature is needed. This phase runs during voting rounds, once the list of tags is fixed, at which point $\mathsf{com}$ and $\pi$ can be computed by the validator.

**Phase 2: Final decryption** (requires $\sigma$, critical path):

$$K = e(\sigma,\; \mathsf{ct}[2]) \cdot \mathsf{pairing\_output}$$

This is computed after the threshold signature $\sigma$ is successfully generated.

#### 3-Element vs 2-Element

The TrX paper also introduces a 2-element scheme that merges the two PPEs into one (raising PPE 2 by $\mathsf{sk}$ and substituting), eliminating $\mathsf{com}$ as a witness element:

$$e(H_1(\mathsf{pk}),\; \mathsf{pk}) \cdot e(\pi,\; \mathsf{pk}^{\tau - \mathsf{tg}}) = e(\sigma,\; h)$$

The implementation uses the 3-element scheme because it does not require $\mathsf{pk}^\tau$ as part of the encryption key, which simplifies the PVSS setup.

| | 3-element | 2-element |
|---|---|---|
| Ciphertext | 3 $\mathbb{G}_2$ elements | 2 $\mathbb{G}_2$ elements |
| Random scalars | 2 ($r_0, r_1$) | 1 ($\alpha$) |
| Encryption key | $(\mathsf{pk}, h^\tau)$ | $(\mathsf{pk}, h^\tau, \mathsf{pk}^\tau)$ |
| Precomputation | 2 pairings / ct | 1 pairing / ct |
| Critical path | 1 pairing / ct | 2 pairings / ct |

#### Protection Against Linear Malleability Attacks

Because KZG commitments and BLS signatures are both linear, an adversary who obtains signatures for two batches can combine them to forge a valid witness for a tag that was never finalized. Given signatures $\sigma_1, \sigma_2$ on commitments $\mathsf{com}_1, \mathsf{com}_2$, they can pick scalars $a, b$ with $a + b = 1$ and compute $\sigma' = \sigma_1^a \cdot \sigma_2^b$, which is a valid signature on $\mathsf{com}' = \mathsf{com}_1^a \cdot \mathsf{com}_2^b$. By choosing the ratio $a/b$ so that a target tag $\mathsf{tg}^*$ becomes a root of the combined polynomial, the adversary forges a full witness for $\mathsf{tg}^*$.

TrX fixes this with a per-batch randomizer $\kappa_i$ baked into the CRS, so commitments become $\mathsf{com}_i = g^{\kappa_i \cdot f_i(\tau)}$. Planting a root in the combined commitment now requires knowing the ratio $\kappa_1/\kappa_2$, which the adversary cannot compute from the CRS group elements alone.

Earlier schemes bound each batch to a specific block height, making the attack fail because $H_1(r)^a \cdot H_1(r')^b$, for distinct heights $r \neq r'$, is not a valid hash of any block height. However, this forced users to encrypt to a specific block height at encryption time. If the transaction missed that block, the ciphertext had to be re-encrypted. On Aptos with sub-second blocks, this is impractical. TrX's $\kappa$ approach removes the block-height dependency entirely.

For cross-epoch and cross-chain separation, $H_1(\mathsf{pk})$ in the signature binds witnesses to the epoch's aggregate public key, which changes each epoch, preventing replay across epochs or networks.

#### CPA-Secure to CCA-Secure

The witness encryption scheme above is only CPA-secure. TrX upgrades it to CCA security using the standard Boneh-Canetti-Halevi-Katz (BCHK) transformation, which turns any CPA-secure IBE into CCA-secure PKE by combining it with a one-time signature.

The tag is derived from a fresh ephemeral verification key and the associated data:

$$\mathsf{tg} = H_F(\mathsf{vk}_{\mathsf{Sig}},\; \mathsf{ad})$$

where $\mathsf{ad}$ is the sender address in the Aptos integration. The full encryption procedure is:

1. Sample fresh $(\mathsf{vk}_{\mathsf{Sig}}, \mathsf{sk}_{\mathsf{Sig}}) \leftarrow \mathsf{Sig.KeyGen}$
2. Compute $\mathsf{tg} = H_F(\mathsf{vk}_{\mathsf{Sig}}, \mathsf{ad})$
3. Produce the witness encryption ciphertext under $\mathsf{tg}$
4. Sign the full ciphertext: $\varphi \leftarrow \mathsf{Sig.Sign}(\mathsf{sk}_{\mathsf{Sig}};\; \mathsf{vk}_{\mathsf{Sig}}, \mathsf{ad}, \mathsf{ct}^{(1)}, \mathsf{ct}^{(2)}, \mathsf{ct}^{(3)})$
5. Output $(\mathsf{ct}^{(1)}, \mathsf{ct}^{(2)}, \mathsf{ct}^{(3)}, \mathsf{vk}_{\mathsf{Sig}}, \varphi)$

This provides two properties. **Non-malleability:** the ephemeral key signs the whole ciphertext, so any modification is caught by `verify_ct`. If the adversary re-signs with their own key, the tag changes, so the result is effectively a fresh ciphertext under the adversary's own identity rather than a malleation of the victim's. **Tag uniqueness:** since each ciphertext samples a fresh $\mathsf{vk}_{\mathsf{Sig}}$, tags are distinct with overwhelming probability, preventing cross-batch reuse. In particular, an attacker who observes a tag cannot craft a new ciphertext with that same tag in order to front-run it. A user who reuses their own ephemeral key across two transactions derives the same tag for both, causing them to be decrypted together as a single batch; this is a protocol violation that only harms the sender.
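
A hedged sketch of the wrapper logic, with `witness_encrypt` standing in for the CPA-secure core above, Ed25519 playing the one-time signature, and SHA-512 standing in for $H_F$ (all illustrative substitutions; the real scheme hashes into the scalar field):

```rust
use ed25519_dalek::{Signature, Signer, SigningKey, VerifyingKey};
use rand::rngs::OsRng;
use sha2::{Digest, Sha512};

/// BCHK-style wrapper: fresh one-time key, tag from (vk, ad), then sign
/// the full ciphertext so any modification invalidates it.
fn encrypt_cca(
    msg: &[u8],
    sender_addr: &[u8; 32], // associated data `ad`
    witness_encrypt: impl Fn(&[u8], &[u8]) -> Vec<u8>, // (msg, tag) -> ct
) -> (Vec<u8>, VerifyingKey, Signature) {
    // 1. Sample a fresh ephemeral one-time key pair.
    let sk = SigningKey::generate(&mut OsRng);
    let vk = sk.verifying_key();
    // 2. tg = H_F(vk_Sig, ad).
    let tag = Sha512::digest([vk.as_bytes().as_slice(), sender_addr.as_slice()].concat());
    // 3. Produce the witness encryption ciphertext under the tag.
    let ct = witness_encrypt(msg, tag.as_slice());
    // 4. Sign the whole ciphertext with the ephemeral key.
    let phi = sk.sign(&[vk.as_bytes().as_slice(), sender_addr.as_slice(), &ct].concat());
    (ct, vk, phi)
}
```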

<a id="overview-dkg"></a>
### Overview of the Distributed Key Generation (Chunky)

The batch threshold encryption scheme requires validators to jointly hold a secret field element so that only a coalition above a threshold can reconstruct it. The process of generating and dealing this shared secret is called Distributed Key Generation (DKG). The common process is:

1. Each validator generates a random secret, splits it into shares, and distributes the shares to the other validators.
2. Each validator sums the shares it received to get its final key share.
3. Each validator now holds a share of the global secret key.

Since each validator only contributes one piece of the secret, no single validator knows or controls the final secret.

Note that Aptos already has a DKG for its randomness beacon, but that one operates over elliptic curve group elements. The batch threshold encryption scheme requires the secret to be a field element in order to support BLS signing, which is why a new DKG scheme ([Chunky](https://alinush.github.io/chunky)) is needed.

#### How to Share a Secret With a Threshold

Chunky uses weighted Shamir secret sharing. There are $n$ validators and validator $i$ has stake weight $w_i$. We want any subset with combined weight $> t_W$ to be able to reconstruct the secret. The dealer constructs a degree-$t_W$ polynomial $f$ with $f(0) = a_0$ as the secret, and gives validator $i$ exactly $w_i$ evaluations, proportional to its stake: $f(\chi_{i,1}), \ldots, f(\chi_{i,w_i})$. Any subset whose combined weight exceeds $t_W$ has enough points to interpolate $f$ and recover $a_0$.
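
A minimal sketch of the dealing arithmetic (the consecutive abscissae are an illustrative stand-in for the designated evaluation points $\chi_{i,j}$):

```rust
use ark_bls12_381::Fr;
use ark_ff::UniformRand;

/// Evaluate f at x by Horner's rule; coeffs[0] = f(0) is the secret.
fn eval(coeffs: &[Fr], x: Fr) -> Fr {
    coeffs.iter().rev().fold(Fr::from(0u64), |acc, c| acc * x + c)
}

/// Weighted dealing: sample a random degree-t_w polynomial and hand
/// validator i exactly weights[i] evaluations.
fn deal(t_w: usize, weights: &[usize]) -> (Fr, Vec<Vec<Fr>>) {
    let mut rng = ark_std::test_rng();
    let coeffs: Vec<Fr> = (0..=t_w).map(|_| Fr::rand(&mut rng)).collect();
    let mut x = 0u64;
    let shares = weights
        .iter()
        .map(|&w| (0..w).map(|_| { x += 1; eval(&coeffs, Fr::from(x)) }).collect())
        .collect();
    (coeffs[0], shares) // (secret a_0, per-validator share vectors)
}
```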

#### How to Generate the Secret So No One Knows It

If one validator generates the whole polynomial, it knows the secret. Instead, each validator $i$ generates its own random degree-$t_W$ polynomial $f_i$ and deals it independently. The final shared polynomial is $f = \sum_{i \in Q} f_i$, so the secret is $a_0 = \sum_{i \in Q} f_i(0)$. Each validator distributes evaluations of its own $f_i$ to all others, and each recipient sums what it receives to get its share of $f$.

Not every validator needs to participate. A qualifying set $Q$ with sufficient combined stake is enough. As long as $Q$ contains at least one honest validator, the final polynomial is uniformly random and unknown to any single party.

#### Publicly-Verifiable Secret Sharing

In Aptos, DKG happens between epochs: current-epoch validators deal secrets to next-epoch validators over a public channel with no direct secret communication. This requires the dealing process to be **public**: the dealer encrypts each secret share to its receiver, and, to ensure the encrypted shares are valid, anyone must be able to **verify** them without decrypting. Chunky is a Publicly-Verifiable Secret Sharing (PVSS) scheme that satisfies both requirements.

A dealing validator samples the secret $a_0$, picks a random degree-$t_W$ polynomial $f$ with $f(0) = a_0$, and evaluates $f$ at each validator's points to get shares $s_{i,j} = f(\chi_{i,j})$.

#### Step 1: Low-Degree Test

The dealer must prove that the shares are evaluations of a degree-$t_W$ polynomial, without revealing them. It commits to each share in $\mathbb{G}_2$ as $\widetilde{V}_{i,j} = [s_{i,j}]\widetilde{G}$ and publishes $\widetilde{V}_0 = [a_0]\widetilde{G}$. The SCRAPE low-degree test then verifies that these commitments are consistent with a degree-$t_W$ polynomial using Reed-Solomon dual codewords. Anyone can run this check without learning the shares.

#### Step 2: Encryption

The dealer encrypts each share to its intended recipient. Each validator $i$ has a known encryption public key $ek_i = [dk_i]H$.

Since shares are full field elements (~255 bits for BLS12-381), decrypting ElGamal requires solving a discrete log, which is only feasible for small values. To handle this, each share is split into $m$ chunks of $\ell$ bits each (e.g., $\ell = 32$, $m = 8$):

$$s_{i,j} = \sum_{k=1}^{m} B^{k-1} \cdot s_{i,j,k}, \quad B = 2^\ell$$

Each chunk $s_{i,j,k} \in [0, B)$ is small enough for brute-force discrete log. The dealer ElGamal-encrypts each chunk:

$$C_{i,j,k} = [s_{i,j,k}]G + [r_{j,k}]\,ek_i, \quad R_{j,k} = [r_{j,k}]H$$

To decrypt, validator $i$ computes $C_{i,j,k} - [dk_i]R_{j,k} = [s_{i,j,k}]G$, solves the small discrete log, and recombines the chunks to recover $s_{i,j}$. Note that all the receivers share the same randomness values $r_{j,k}$ in the ciphertext.
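
The chunk arithmetic can be sketched as follows for $\ell = 32$, $m = 8$ (the byte-window decomposition is an illustrative choice; `recombine(&split(s)) == s` for any `s`):

```rust
use ark_bls12_381::Fr;
use ark_ff::{BigInteger, PrimeField};

/// Split a scalar into m = 8 little-endian chunks of ell = 32 bits each.
fn split(s: Fr) -> [u64; 8] {
    let bytes = s.into_bigint().to_bytes_le(); // 32 bytes for the BLS12-381 scalar field
    core::array::from_fn(|k| {
        u32::from_le_bytes(bytes[4 * k..4 * k + 4].try_into().unwrap()) as u64
    })
}

/// Recombine: s = sum_k B^{k-1} * s_k with B = 2^32.
fn recombine(chunks: &[u64; 8]) -> Fr {
    let b = Fr::from(1u64 << 32);
    chunks.iter().rev().fold(Fr::from(0u64), |acc, &c| acc * b + Fr::from(c))
}
```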

#### Step 3: Validity Checks

We must ensure the ciphertexts actually encrypt the committed shares and not garbage. Three things are checked.

**Check 1 — Correct encryption format.** A [$\Sigma$-protocol signature of knowledge](#overview-sigma-protocol) (ZKSoK) proves that the dealer knows the randomness and plaintext used in each ElGamal encryption ciphertext.

**Check 2 — Chunks recombine to the committed share.** The dealer picks randomness with a correlation constraint:

$$\sum_{k=1}^{m} B^{k-1} \cdot r_{j,k} = 0$$

This ensures the randomness cancels when chunks are recombined:

$$\sum_{k} B^{k-1} \cdot C_{i,j,k} = [s_{i,j}]G + \underbrace{\left(\sum_k B^{k-1} \cdot r_{j,k}\right)}_{= \, 0} \cdot ek_i = [s_{i,j}]G$$

So the recombined ciphertext is a deterministic commitment to the full share. Consistency with $\widetilde{V}_{i,j} = [s_{i,j}]\widetilde{G}$ is then verified with a pairing:

$$e\!\left(\sum_k B^{k-1} C_{i,j,k},\; \widetilde{G}\right) = e\!\left(G,\; \widetilde{V}_{i,j}\right)$$

This can be batched across all $(i,j)$ pairs via random linear combinations, collapsing into a single two-pairing check.
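
Concretely, the verifier samples fresh random scalars $\rho_{i,j}$ and checks the single equation

$$e\!\left(\sum_{i,j} \rho_{i,j} \sum_k B^{k-1} C_{i,j,k},\; \widetilde{G}\right) = e\!\left(G,\; \sum_{i,j} \rho_{i,j} \widetilde{V}_{i,j}\right)$$

which, except with negligible probability over the $\rho_{i,j}$, holds only if every per-pair check holds.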

**Check 3 — Chunks are in range.** Each chunk must be in $[0, 2^\ell)$, otherwise a malicious dealer could encode values that fail to reconstruct. The dealer commits all chunks into a single hiding KZG commitment $C$, and uses a [DeKART range proof](#overview-dekart-range-proof) to batch-prove all chunks are $\ell$-bit. The [$\Sigma$-protocol](#overview-sigma-protocol) from Check 1 is extended to also prove that the values in $C$ match those encrypted in the ciphertexts. This is called the "ElGamal-to-KZG" relation.

#### Step 4: Non-Malleability

Without non-malleability, a malicious validator $j$ could take an honest validator $i$'s transcript (which deals secret $z_i$), modify it to deal $-z_i + r$ for some known $r$, sign it under their own identity, and submit it. The combined DKG secret would then be $z_i + (-z_i + r) = r$, which $j$ fully controls.

To prevent this, the $\Sigma$-protocol is made into a **zero-knowledge signature of knowledge (ZKSoK)** that signs over the dealer's public key and the epoch number. This binds each transcript to its specific dealer; substituting a different identity invalidates the proof.

#### Step 5: Aggregation

The components of a subtranscript ($\widetilde{V}_0$, $\widetilde{V}_{i,j}$, $C_{i,j,k}$, $R_{j,k}$) are all group elements and can be added pointwise across transcripts. This means individual transcripts can be **aggregated**: combining $|Q|$ transcripts produces a single subtranscript of the same size, representing the combined secret $z = \sum_{i \in Q} z_i$. Each validator only needs to decrypt once from the aggregated subtranscript.

#### Putting It Together: the DKG

The process runs as below:

1. **Dealing phase.** Each validator $i$ picks a random secret $z_i$, runs Chunky's $\mathsf{Deal}$ to produce a signed PVSS transcript, and broadcasts it.

2. **Agreement phase.** Validators agree on a qualifying set $Q$ (with sufficient combined stake) of valid transcripts and aggregate them pointwise into a compact subtranscript for the combined secret.

3. **Commit phase.** A leader proposes $(Q, \mathsf{hash})$, where $\mathsf{hash}$ is the hash of the aggregated subtranscript. Once enough validators attest to it, the aggregated subtranscript is posted on-chain and each validator decrypts its final shares.

<a id="overview-dekart-range-proof"></a>
### Overview of DeKART Range Proof

In the Chunky DKG, each Shamir share that a dealer distributes is a field element around 255 bits wide, which is too large to decrypt directly under ElGamal (decryption requires solving a discrete log). To work around this, the dealer splits each share into $m$ small chunks of $\ell$ bits each ($\ell = 32$, $m = 8$ in the system) and ElGamal-encrypts each chunk separately. A chunk is only decryptable if it actually fits in $[0, 2^\ell)$: a malicious dealer who encodes an out-of-range chunk could make recombined shares fail to match the committed share, breaking reconstruction.

This is why the dealer must additionally prove that every chunk lies in $[0, 2^\ell)$. With $n$ validators each receiving $m$ chunks per share, the dealer needs to range-prove $N = n \cdot m$ values at once. Aptos uses [DeKART](https://alinush.github.io/dekart), a batched range proof, which produces a single short proof covering all $N$ chunks. A separate [$\Sigma$-protocol](#overview-sigma-protocol) binds the values inside DeKART's commitment to the ones encrypted in the chunked ElGamal ciphertexts; DeKART itself is only concerned with the range claim.

#### What Is Being Proved

The dealer has a single hiding KZG commitment

$$C \;=\; \rho \cdot [\xi]_1 \;+\; \sum_{i=1}^{N} s_i \cdot [\ell_i(\tau)]_1$$

over a Lagrange basis of size $N + 1$ at positions $\{\omega^0, \omega^1, \ldots, \omega^{N}\}$. Position $0$ carries the value $0$ and will later be filled with a blinder; the $N$ chunks occupy positions $1, \ldots, N$. DeKART convinces the verifier that every $s_i$ sits in $[0, 2^\ell)$, without revealing anything about the $s_i$ themselves.

#### The Polynomial Encoding

Let $H = \{1, \omega, \omega^2, \ldots, \omega^N\}$ be the evaluation domain and $S = H \setminus \{1\} = \{\omega, \ldots, \omega^N\}$ the "data" positions. The prover works with two families of polynomials over $H$.

- $\hat{f}(X)$: the degree-$N$ polynomial whose evaluations over $S$ are the chunks $s_i$, and whose value at $\omega^0 = 1$ is a fresh random blinder $r$. Its commitment $\hat{C}$ is defined below.
- $f_j(X)$ for $j \in [0, \ell)$: degree-$N$ polynomials whose evaluations over $S$ are the $j$-th bit of each chunk, with a fresh blinder $r_j$ at position $\omega^0$.

Two polynomial identities must hold at every $X \in S$:

**Radix decomposition.** $\hat{f}(X) - \sum_{j=0}^{\ell-1} 2^j \cdot f_j(X) = 0$. Each chunk is the radix-2 recombination of its bits.

**Bit constraint.** $f_j(X) \cdot (f_j(X) - 1) = 0$ for every $j$. Each $f_j$ is $0/1$ on $S$.

These identities together imply that every chunk is a sum of $\ell$ bits times powers of two, i.e. in $[0, 2^\ell)$.

Both identities are enforced over $S$, not over all of $H$. This is what makes zero-knowledge possible: the blinders $r$ and $r_j$ at position $\omega^0 = 1$ are free to be random, so the committed polynomials (and one KZG opening) carry no information about the $s_i$ beyond what is already in $\hat{C}$.

#### Collapsing to a Single Quotient

Instead of proving each identity separately, the verifier derives Fiat-Shamir challenges $\beta, \beta_0, \ldots, \beta_{\ell-1}$ and the prover combines them into one polynomial identity. Let

$$V_S(X) = \frac{X^{N+1} - 1}{X - 1}$$

be the vanishing polynomial of $S$ (it vanishes on every $\omega^i$ for $i \geq 1$ but not at $X = 1$). Define

$$P(X) \;=\; \beta \cdot \Big(\hat{f}(X) - \sum_{j} 2^j f_j(X)\Big) \;+\; \sum_{j} \beta_j \cdot f_j(X) \big(f_j(X) - 1\big).$$

By the two identities above, $P$ vanishes on every point of $S$, so $V_S(X) \mid P(X)$. The prover computes the quotient

$$h(X) \;=\; P(X) \;/\; V_S(X)$$

and commits to it with another hiding KZG commitment $D$. A valid $h$ exists if and only if both identities hold over $S$.
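
A sketch of the prover's division step using ark-poly, assuming $P$ has already been assembled from the Fiat-Shamir challenges (note $V_S(X) = 1 + X + \cdots + X^N$):

```rust
use ark_bls12_381::Fr;
use ark_ff::Zero;
use ark_poly::polynomial::univariate::{DenseOrSparsePolynomial, DensePolynomial};
use ark_poly::DenseUVPolynomial;

/// Divide P by V_S(X) = 1 + X + ... + X^N and return the quotient h,
/// or None if the remainder is nonzero (an identity fails somewhere on S).
fn quotient(p: &DensePolynomial<Fr>, big_n: usize) -> Option<DensePolynomial<Fr>> {
    let v_s = DensePolynomial::from_coefficients_vec(vec![Fr::from(1u64); big_n + 1]);
    let (h, rem) = DenseOrSparsePolynomial::from(p.clone())
        .divide_with_q_and_r(&DenseOrSparsePolynomial::from(v_s))?;
    rem.is_zero().then_some(h)
}
```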

#### Re-randomizing the Committed Polynomial

The original commitment $C$ (the one fed into the $\Sigma$-protocol) puts zero at position $\omega^0$. To give DeKART freedom to hide the data behind a random $r$ at that slot, the dealer samples fresh $(r, \Delta\rho)$ and publishes

$$\hat{C} \;=\; C \;+\; r \cdot [\ell_0(\tau)]_1 \;+\; \Delta\rho \cdot [\xi]_1.$$

This $\hat{C}$ is the commitment of $\hat{f}$. To prove that $\hat{C}$ is a legitimate re-randomization — and in particular that only the blinding slot changed — the dealer runs a two-term Okamoto $\Sigma$-protocol for the statement

$$\hat{C} - C \;=\; r \cdot [\ell_0(\tau)]_1 \;+\; \Delta\rho \cdot [\xi]_1$$

proving knowledge of $(r, \Delta\rho)$. This proof $\pi_{\mathsf{PoK}}$ is included in the DeKART transcript and verified against the two fixed base points $[\ell_0(\tau)]_1$ (Lagrange basis at position $0$) and $[\xi]_1$ (KZG hiding base). It uses its own DST, separate from the outer $\Sigma$-protocol.

#### Opening at a Random Point

Rather than check the polynomial identity everywhere, the verifier samples a random challenge $\gamma \notin H$ (via Fiat-Shamir, resampled until it lands outside the roots of unity). By Schwartz-Zippel, if $P(\gamma) = V_S(\gamma) \cdot h(\gamma)$ then the identity holds as polynomials with overwhelming probability.

The prover evaluates

$$a = \hat{f}(\gamma), \qquad a_h = h(\gamma), \qquad a_j = f_j(\gamma) \text{ for } j \in [0, \ell),$$

sends these scalars to the verifier, and the verifier checks the identity in scalar form:

$$a_h \cdot V_S(\gamma) \; \mathrel{\overset{?}{=}} \; \beta \cdot \Big(a - \sum_j 2^j a_j\Big) \;+\; \sum_j \beta_j \cdot a_j (a_j - 1).$$

The verifier still has to be convinced that $(a, a_h, a_j)$ really are $\hat{f}, h, f_j$ evaluated at $\gamma$. This is what the hiding KZG opening does.

#### Batching Openings Into One

A separate opening per committed polynomial would require $\ell + 2$ opening proofs and pairing checks. Instead, the verifier samples Fiat-Shamir challenges $\mu, \mu_h, \mu_0, \ldots, \mu_{\ell-1}$ and the prover opens the random linear combination

$$u(X) \;=\; \mu \cdot \hat{f}(X) \;+\; \mu_h \cdot h(X) \;+\; \sum_{j} \mu_j \cdot f_j(X)$$

at $\gamma$, with claimed value $a_u = \mu \cdot a + \mu_h \cdot a_h + \sum_j \mu_j \cdot a_j$. The corresponding commitment is an MSM over the per-polynomial commitments:

$$U \;=\; \mu \cdot \hat{C} \;+\; \mu_h \cdot D \;+\; \sum_{j} \mu_j \cdot C_j.$$

A single hiding KZG opening proof $\pi_\gamma$ (a pair of $\mathbb{G}_1$ elements for the quotient polynomial and its hiding blinder) discharges all $\ell + 2$ evaluations simultaneously.

#### Fiat-Shamir Transcript

The transcript is bound to a dedicated DST (`APTOS_UNIVARIATE_DEKART_V2_RANGE_PROOF_DST`) and proceeds through the protocol in order, with the verifier's public inputs (the dimensions $n, \ell$ and the original commitment $C$) absorbed first. Challenges are derived in this sequence:

1. Append $\hat{C}$; run the Okamoto sub-protocol and append $\pi_{\mathsf{PoK}}$.
2. Append the chunk commitments $C_0, \ldots, C_{\ell-1}$. Squeeze $\beta, \beta_0, \ldots, \beta_{\ell-1}$.
3. Append $D$. Squeeze $\gamma$, rejecting it if it collides with $H$.
4. Append the evaluations $(a, a_h, a_0, \ldots, a_{\ell-1})$. Squeeze $\mu, \mu_h, \mu_0, \ldots, \mu_{\ell-1}$.

Binding the challenges in this order is what lets the combined check stand in for the per-point identities.

#### Proof Structure

The proof sent by the dealer is:

- $\hat{C} \in \mathbb{G}_1$: re-randomized commitment to $\hat{f}$.
- $\pi_{\mathsf{PoK}}$: Okamoto proof of $(r, \Delta\rho)$ for $\hat{C} - C$ (one $\mathbb{G}_1$ point and two scalars).
- $C_0, \ldots, C_{\ell-1} \in \mathbb{G}_1$: hiding commitments to the bit polynomials $f_j$.
- $D \in \mathbb{G}_1$: hiding commitment to the quotient $h$.
- $a, a_h, a_0, \ldots, a_{\ell-1} \in \mathbb{F}$: evaluations at $\gamma$.
- $\pi_\gamma$: hiding KZG opening proof for $u(\gamma) = a_u$ (two $\mathbb{G}_1$ points).

Total: $\ell + 5$ group elements and $\ell + 4$ field elements, independent of $n$.

#### Verification

The verifier recomputes every Fiat-Shamir challenge and then runs three checks:

1. **Re-randomization.** Verify $\pi_{\mathsf{PoK}}$ against $(\hat{C} - C,\; [\ell_0(\tau)]_1,\; [\xi]_1)$.
2. **Scalar identity.** Check $a_h \cdot V_S(\gamma) = \beta (a - \sum_j 2^j a_j) + \sum_j \beta_j a_j (a_j - 1)$, where $V_S(\gamma) = (\gamma^{N+1} - 1)/(\gamma - 1)$.
3. **Batched opening.** Compute $U$ and $a_u$ by MSM over the proof commitments, then run one hiding KZG pairing check for $u(\gamma) = a_u$.

If all three pass, every chunk committed in the original $C$ is guaranteed to be in $[0, 2^\ell)$.

#### How DeKART Plugs Into Chunky

DeKART's $\hat{C}$ is the same HKZG commitment that appears in the tuple homomorphism of [the outer sigma protocol](#overview-sigma-protocol). The outer $\Sigma$-protocol proves that the chunks committed in $\hat{C}$ are the same chunks encrypted in the chunked ElGamal ciphertexts $C_{i,j,k}$; DeKART proves those chunks are in range. The two proofs share $\hat{C}$ as their sole common handle and are otherwise verified independently, each with its own DST and Fiat-Shamir transcript.

<a id="overview-sigma-protocol"></a>
### Overview of the $\Sigma$-Protocol (Signature of Knowledge)

The DKG overview described three validity checks on a dealing transcript: correct encryption format, chunk-to-share consistency, and chunk range. The first and third of these rely on a $\Sigma$-protocol proof that ties these concerns together into a single non-interactive proof and, at the same time, prevents transcript malleability by binding the proof to the dealer's identity. This section explains how that proof works.

#### What Is a $\Sigma$-Protocol?

A $\Sigma$-protocol is a three-move proof of knowledge. The prover knows a secret **witness** $w$ and wants to convince a verifier that a public **statement** $Y$ satisfies $Y = \Psi(w)$ for some known homomorphism $\Psi$, without revealing $w$. The three moves are:

1. **Commit.** The prover samples random $r$ and sends $A = \Psi(r)$.
2. **Challenge.** The verifier sends a random scalar $c$.
3. **Respond.** The prover sends $z = r + c \cdot w$.

The verifier accepts if $\Psi(z) = A + c \cdot Y$. An honest prover always passes, since $\Psi$ is a homomorphism: $\Psi(z) = \Psi(r + c \cdot w) = \Psi(r) + c \cdot \Psi(w) = A + c \cdot Y$. Conversely, a prover who can answer two distinct challenges for the same commitment $A$ can be used to extract $w$ (special soundness), so passing the check demonstrates knowledge of $w$.

In Chunky, the protocol is made non-interactive using the **Fiat-Shamir transform**: the challenge $c$ is derived by hashing the protocol context, the homomorphism description, the public statement, and the prover's commitment $A$ into a Merlin transcript. The resulting proof consists of $(A, z)$.
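
As a toy instance of the pattern, here is the simplest homomorphism $\Psi(w) = [w]G$ (a Schnorr proof) over $\mathbb{G}_1$, with an interactive random challenge standing in for the Merlin transcript (arkworks naming assumed):

```rust
use ark_bls12_381::{Fr, G1Projective as G1};
use ark_ec::PrimeGroup;
use ark_ff::UniformRand;

fn main() {
    let mut rng = ark_std::test_rng();
    let g = G1::generator();
    let w = Fr::rand(&mut rng); // witness
    let y = g * w;              // statement Y = Psi(w)

    let r = Fr::rand(&mut rng);
    let a = g * r;              // move 1: commitment A = Psi(r)
    let c = Fr::rand(&mut rng); // move 2: challenge (Merlin-derived in Chunky)
    let z = r + c * w;          // move 3: response

    // Verify: Psi(z) = A + c*Y, since Psi(r + c*w) = Psi(r) + c*Psi(w).
    assert_eq!(g * z, a + y * c);
}
```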

#### The Tuple Homomorphism

Recall from the DKG overview that the dealer produces two kinds of public output from the same secret data: an HKZG commitment (used by the DeKART range proof) and the chunked ElGamal ciphertexts. The $\Sigma$-protocol must prove that both outputs are consistent with the same underlying witness. This is achieved with a **tuple homomorphism** in the implementation that maps a single witness to a pair of outputs:

$$\Psi(\rho,\; \{s_{i,j,k}\},\; \{r_{j,k}\}) \;=\; \big(\;\underbrace{\hat{C}}_{\text{HKZG}},\;\; \underbrace{\{C_{i,j,k}\},\; \{R_{j,k}\}}_{\text{Chunked ElGamal}}\;\big)$$

The witness has three parts:

- $\rho$: the blinding scalar for the hiding KZG commitment.
- $s_{i,j,k}$: the $\ell$-bit chunks of each Shamir share (the same chunks encrypted in the ciphertexts $C_{i,j,k}$).
- $r_{j,k}$: the correlated ElGamal randomness (satisfying $\sum_k B^{k-1} \cdot r_{j,k} = 0$ as described earlier).

The first component of the tuple ignores $r_{j,k}$ and computes the HKZG commitment:

$$\text{HKZG}(\rho, \{s_{i,j,k}\}) = \rho \cdot [\xi]_1 + \sum_{i,j,k} s_{i,j,k} \cdot [\ell_{i \cdot m + j + 1}(\tau)]_1$$

where $[\xi]_1$ is a hiding base from the SRS and $[\ell_{\cdot}(\tau)]_1$ are Lagrange basis evaluations at the SRS trapdoor. This is the same commitment that enters the DeKART range proof.

The second component ignores $\rho$ and computes the chunked ElGamal ciphertexts and randomness commitments:

$$C_{i,j,k} = s_{i,j,k} \cdot G + r_{j,k} \cdot ek_i, \qquad R_{j,k} = r_{j,k} \cdot H$$

Each component is a "lifted" homomorphism: a projection extracts the relevant fields from the full witness, then the inner homomorphism is applied. The tuple construction ensures that a single proof with a single Fiat-Shamir challenge covers both components, guaranteeing the same witness underlies both the KZG commitment and the ElGamal ciphertexts.

#### Non-Malleability via Signature of Knowledge

As described in Step 4 of the DKG overview, the $\Sigma$-protocol must be non-malleable to prevent an adversary from re-purposing an honest dealer's transcript. This is achieved by turning the proof into a **Signature of Knowledge (SoK)**: the dealer's identity is hashed into the Fiat-Shamir challenge, so the proof is bound to a specific dealer and session.

Concretely, the SoK context hashed into the transcript consists of:

- The dealer's BLS12-381 signing public key
- The session/epoch identifier
- The dealer's index in the validator set
- A domain-separation tag (DST)

If any of these fields changes (for example, if an attacker substitutes their own public key), the recomputed challenge will differ and the proof becomes invalid.

#### Fiat-Shamir Transcript

The Fiat-Shamir challenge is derived via a Merlin transcript with the following binding order:

1. **DST**: the protocol's domain-separation tag, written as the Merlin transcript label.
2. **SoK context**: the dealer's public key, session ID, dealer index, and DST, serialized with BCS.
3. **Homomorphism bases**: all MSM base points (generators $G$, $H$, the HKZG SRS elements, and all encryption keys $ek_i$).
4. **Public statement**: the tuple $\Psi(w)$ (the HKZG commitment and all ciphertexts $C_{i,j,k}$ and randomness commitments $R_{j,k}$).
5. **Prover's commitment**: the first message $A = \Psi(r)$.

The challenge is squeezed as a full-size field element, sampled with $2\times$ the field bit-length for statistical uniformity.

#### Verification

Both components of the tuple homomorphism are multi-scalar multiplications (MSMs), so the verification equation $\Psi(z) = A + c \cdot Y$ reduces to checking that an MSM evaluates to the identity:

$$\sum_i b_i \cdot z_i - A - c \cdot Y = \mathcal{O}$$

The two components of the tuple can be batched into a single MSM using a random scalar.

The DeKART range proof also contains its own inner $\Sigma$-protocol (an Okamoto proof showing knowledge of the blinding randomness), which uses a separate DST and Fiat-Shamir transcript. Both the outer SoK and the DeKART proof are stored together in the dealing transcript and verified independently, but they share the same HKZG commitment $\hat{C}$ as the link between them.

<a id="overview-consensus-integration"></a>
### Overview of Consensus Integration

#### DKG Integration

At a high level, the consensus integration turns [Chunky DKG](#overview-dkg) into an epoch-handoff pipeline: during epoch $N$ reconfiguration, validators jointly produce one certified aggregated DKG output, commit it on chain, and carry it into epoch $N+1$ as the canonical encryption material for encrypted mempool operations. This work is intentionally outside the per-block consensus critical path, and its output is buffered in the reconfiguration flow and consumed when the next epoch starts.

Concretely, when the chain transitions from epoch $N$ to epoch $N+1$, and the relevant on-chain feature flags are enabled (`vtxn_enabled` and `chunky_dkg_enabled`), Move starts both the existing randomness DKG and the new Chunky DKG. Each current-epoch validator runs a local `ChunkyDKGManager`, produces one signed Chunky PVSS transcript, and broadcasts it off-chain via `ReliableBroadcast`. Validators verify incoming transcripts independently and keep collecting them until the dealer set has enough voting power.

Once a validator reaches quorum, it pointwise-aggregates the accepted dealer transcripts into a single `AggregatedSubtranscript`. This compression step is important: instead of carrying one transcript per dealer into the next epoch, the protocol reduces the whole qualified dealer set into one compact object from which each validator can later recover its own final secret-share material.

The aggregation covers only the subtranscript, not the accompanying proofs. As a result, other validators have no direct way to verify the validity of an `AggregatedSubtranscript`. To resolve this, the protocol adds a certification step. A validator that has formed an aggregate sends a `ChunkyDKGSubtranscriptSignatureRequest` containing the dealer list, the aggregate hash, and per-dealer transcript hashes. A recipient will fetch and verify any missing transcripts, recompute the aggregate, and sign only if all individual transcripts are valid. In effect, consensus is not certifying individual transcripts, but one exact aggregated output.

After a validator collects quorum signatures over that aggregate, it emits a `ValidatorTransaction::ChunkyDKGResult`. On chain, `finish_with_chunky_dkg_result()` verifies the quorum signature, stores the certified aggregated subtranscript, and publishes the derived encryption key for epoch $N+1$. Reconfiguration only fully completes once both the randomness DKG path and the Chunky DKG path have finished, so the next epoch starts with one certified DKG result that all honest validators can consume consistently.

#### Round Decryption Key Derivation

At epoch start, each validator uses its private key to decrypt its own secret share from the certified Chunky transcript, obtaining a `SecretKeyShare` that it holds for the duration of the epoch.

Decryption keys are derived per round. Once a block reaches a quorum certificate, each validator computes the round digest, which is a commitment to the IBE tags of the encrypted transactions in that block. Using the digest and its `SecretKeyShare`, the validator produces a per-round decryption key share.

The key share is only released after the block is finalized. This ordering is critical: releasing a key share before finalization would allow the encrypted transactions to be decrypted while they are still in the mempool, breaking confidentiality. Once the block is finalized, each validator broadcasts its key share via `SecretShareManager`. When enough shares are collected to meet the reconstruction threshold, any validator can locally reconstruct the full decryption key. The threshold secret sharing scheme guarantees that all honest validators reconstruct the same key, regardless of which subset of shares they use.

`SecretShareManager` uses an optimistic verification strategy to reduce per-share verification overhead. Initially, it accepts incoming shares without cryptographic verification, assuming all validators are honest. When the reconstruction threshold is reached, it attempts to aggregate the collected shares. If aggregation fails, it falls back to verifying each share individually, evicts the invalid ones, and moves the offending validator to a pessimistic set. All future shares from a pessimistic validator are verified before acceptance for the rest of the epoch.
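
The strategy can be sketched with hypothetical types and callbacks as follows (the real `SecretShareManager` differs structurally; this only captures the optimistic/pessimistic state machine):

```rust
use std::collections::HashSet;

/// Optimistic share collection: accept without verification, verify lazily
/// when aggregation fails, then verify eagerly for flagged validators.
struct ShareStore<S> {
    shares: Vec<(usize, S)>,     // (validator index, key share)
    pessimistic: HashSet<usize>, // validators whose shares are pre-verified
}

impl<S> ShareStore<S> {
    fn add(&mut self, v: usize, share: S, verify: impl Fn(&S) -> bool) {
        // Eager verification only for validators previously caught cheating.
        if self.pessimistic.contains(&v) && !verify(&share) {
            return;
        }
        self.shares.push((v, share)); // optimistic path: no per-share check
    }

    fn reconstruct(
        &mut self,
        aggregate: impl Fn(&[(usize, S)]) -> Option<Vec<u8>>,
        verify: impl Fn(&S) -> bool,
    ) -> Option<Vec<u8>> {
        if let Some(key) = aggregate(&self.shares) {
            return Some(key); // fast path: every collected share was valid
        }
        // Fallback: verify individually, evict invalid shares, flag senders.
        let Self { shares, pessimistic } = self;
        shares.retain(|(v, s)| verify(s) || { pessimistic.insert(*v); false });
        aggregate(&self.shares)
    }
}
```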

Once the decryption key is reconstructed, it is passed to the decryption pipeline to decrypt the encrypted transactions in the finalized block.

#### Encrypted Transaction Flow

When the certified Chunky DKG output settles at epoch start, the derived aggregate encryption key is published on chain by the Move `decryption` module and exposed through the ledger state. Clients fetch this key, encrypt their transaction payload locally, and submit a signed transaction whose payload variant is `EncryptedPayload::Encrypted`. The transaction signature covers the encrypted form, and the ciphertext is bound to the sender's address as associated data, so a ciphertext cannot be replayed under a different sender after submission. Admission is gated both locally, by the node's `allow_encrypted_txns_submission` flag, and globally, by the on-chain `ENCRYPTED_TRANSACTIONS` feature flag, and the API rejects any payload that arrives in a non-`Encrypted` state.

Inside QuorumStore, encrypted transactions flow through a track separate from that of regular transactions. Batch V2 carries a `BatchKind` discriminator, `Normal` or `Encrypted`, and the batch generator buckets transactions by kind, applying gas-bucketing and per-sender size limits independently per kind. Encrypted batches are required to be homogeneous: an encrypted batch contains only encrypted transactions, and a normal batch contains none. This separation matters because every later stage treats the two kinds differently. In particular, proposal pulls apply a separate per-kind block limit `encrypted_txn_limit`, sourced from `SecretShareConfig` and defaulting to zero when no decryption configuration is available, so a validator that cannot participate in decryption is also prevented from packing encrypted batches into the blocks it proposes.

Once consensus orders a block, the pipeline materializes payload references and partitions transactions into encrypted and regular sets. Materialization and decryption precomputation start optimistically on the QC path, before final ordering completes; this overlaps the expensive eval-proof and ciphertext-preparation work with consensus latency rather than serializing them. The final decryption pass, however, waits on the aggregated per-round decryption key from `SecretShareManager`. This dependency is the structural enforcement of the "decrypt only after finalization" invariant: precomputation can run early, but the pairing that actually recovers each plaintext is gated on the aggregated key, which only becomes available once the ordered-block path has produced the block.

After decryption, each encrypted transaction transitions to one of two terminal states. `Decrypted` carries the recovered executable along with the optional claimed-entry-function check; `FailedDecryption` carries a reason, which is one of `CryptoFailure`, `BatchLimitReached`, `ConfigUnavailable`, `DecryptionKeyUnavailable`, or `ClaimedEntryFunctionMismatch`. The prepare stage concatenates the decrypted partition with the regular partition and hands the combined block to execution. At the VM, only `BatchLimitReached` is retryable; all other failure reasons run the failure epilogue, which charges gas and increments the sender's sequence number, so a failed decryption is a final on-chain outcome rather than a free retry. This ensures that an encrypted transaction either executes or is permanently rejected, and that a sender cannot use ciphertext failures to obtain repeated admission at no cost.

## Findings

### The Batch Pairing Check in Chunky Is Not Sound Due to Missing Random Linear Combination

- **Severity**: High
- **Location**: chunky/weighted_transcript.rs

**Description**. The `verify` function in Chunky performs two independent pairing checks: (1) the pairing check for the HKZG opening in DeKART, and (2) the pairing check to ensure the encrypted chunks sum to the secret share. These two checks are batched into a single `multi_pairing` call at the end of verification:

```rust
fn verify<A: Serialize + Clone, R: RngCore + CryptoRng>(
    &self,
    sc: &Self::SecretSharingConfig,
    pp: &Self::PublicParameters,
    spks: &[Self::SigningPubKey],
    eks: &[Self::EncryptPubKey],
    sid: &A,
    rng: &mut R,
) -> anyhow::Result<()> {
    let sok_cntxt = verify_weighted_preamble(
        sc,
        pp,
        &self.subtrs,
        &self.dealer,
        spks,
        eks,
        sid,
        <Self as traits::Transcript>::dst(),
    )?;
    [...]
    // Step 2: Verify the range proof
    let (g1_terms, g2_terms) = self.sharing_proof.range_proof.pairing_for_verify(
        &pp.pk_range_proof.vk,
        sc.get_total_weight() * num_chunks_per_scalar::<E::ScalarField>(pp.ell) as usize,
        pp.ell,
        &self.sharing_proof.range_proof_commitment,
        rng,
    )?;
    [...]
    // g1_terms and g2_terms are from DeKART range check
    // The others are for chunk check
    let res = E::multi_pairing(
        g1_terms.iter().copied().chain([
            combined_G1.into_affine(),
            *pp.get_encryption_public_params().message_base(),
        ]),
        g2_terms
            .iter()
            .copied()
            .chain([pp.get_commitment_base(), (-combined_G2).into_affine()]),
    );
    if PairingOutput::<E>::ZERO != res {
        bail!("Expected zero during multi-pairing check");
    }

    Ok(())
}
```

Here, `g1_terms`/`g2_terms` correspond to the DeKART range check, and `combined_G1`/`combined_G2` correspond to the chunk sum check. Both groups of terms are concatenated and passed to `multi_pairing` together. The result being zero only guarantees that the *sum* of all pairing outputs is zero, not that each individual check passes.

A malicious prover can exploit this by crafting a proof where the DeKART range check pairing evaluates to some non-zero value $X$ and the chunk sum check pairing evaluates to $-X$, so the combined output is zero and the verification passes despite both individual checks being invalid.

**Impact**. This is a soundness issue that allows a malicious prover to forge a valid-looking Chunky proof. An attacker can bypass both the range check and the chunk sum check simultaneously by making their pairing outputs cancel each other out.

**Recommendation**. The two pairing checks should be combined using a random linear combination. Before batching, sample a random scalar $r$ and scale one group of pairing terms by $r$. This ensures that a cancellation between the two checks is computationally infeasible.
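
One possible shape of the fix, sketched against the snippet above (the actual fix is in the PR linked below): scale the DeKART terms by a fresh random scalar so that a forged cancellation would have to hold for a scalar the prover cannot predict.

```rust
// Sample a fresh random scalar and scale the DeKART G1 terms by it: each
// e(a_i, b_i) becomes e(a_i, b_i)^r, so the two sub-products can only
// cancel if each is individually the identity.
let r = E::ScalarField::rand(rng);
let res = E::multi_pairing(
    g1_terms
        .iter()
        .map(|p| (*p * r).into_affine())
        .chain([
            combined_G1.into_affine(),
            *pp.get_encryption_public_params().message_base(),
        ]),
    g2_terms
        .iter()
        .copied()
        .chain([pp.get_commitment_base(), (-combined_G2).into_affine()]),
);
```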

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19296.

### The Batch KZG Opening In DeKART Is Not Sound Due to Fiat-Shamir Issue

- **Severity**: High
- **Location**: dekart_univariate_v2.rs

**Description**. In Chunky, DeKART is used to verify that all chunks are within the correct range. The final step of DeKART evaluates KZG openings of multiple polynomials at the same point, using a batch KZG opening scheme to reduce multiple openings to one. Unfortunately, this batch KZG opening scheme is not sound due to a Fiat-Shamir bug.

DeKART requires the evaluation of committed functions ($h(X), \hat{f}(X), f_j(X)$) at a random point $\gamma$ and checks the relations among them. The current approach is:

1. Verifier samples random linear combination coefficients ($\mu_j$) to combine the commitments of these functions into $U$.
2. Verifier samples a random point $\gamma$.
3. Prover provides the evaluations of these functions at $\gamma$ (i.e., $h(\gamma), \hat{f}(\gamma), f_j(\gamma)$).
4. Prover provides the KZG opening proof of $U$ at $\gamma$.
5. Verifier verifies the KZG opening proof of $U$.
6. Verifier checks that the openings satisfy the required equations.

The issue is that the verifier does not absorb the prover's evaluation values at $\gamma$ into the Fiat-Shamir transcript. This allows an attacker to choose these evaluation values after observing $\mu_j$. Since the attacker only needs to satisfy two constraint equations but has multiple free variables, it is straightforward to find evaluation values that pass the check. The attacker can therefore submit forged openings of $h(\gamma), \hat{f}(\gamma), f_j(\gamma)$ that pass verification, bypassing the quotient check and ultimately forging a proof for the range check.

**Impact**. This Fiat-Shamir bug allows a malicious prover to forge range proofs in DeKART.

**Recommendation**. As a rule of thumb, all messages sent from the prover should be absorbed into the Fiat-Shamir transcript. To fix this issue, the evaluation values should be absorbed before sampling $\mu_j$, following this order:

1. Verifier samples $\gamma$.
2. Prover provides the evaluation values for these functions at $\gamma$.
3. Verifier absorbs these evaluation values into the Fiat-Shamir transcript.
4. Verifier samples the linear combination coefficients $\mu_j$ and computes the commitment $U$.
5. Prover provides the KZG opening proof for $U$ at $\gamma$.
6. Verifier verifies the KZG opening proof of $U$.
7. Verifier checks that the openings satisfy the required equations.

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19135.

### Epoch Not Bound to `AggregatedSubtranscript` Signature Enables Stale Transcript Replay

- **Severity**: High
- **Location**: validator_txns/chunky_dkg.rs

**Description**. In ChunkyDKG, a validator collects a quorum of signatures on its `AggregatedSubtranscript` and submits a validator transaction to finalize the DKG result on-chain. The on-chain verification in `process_chunky_dkg_result_inner` checks that `metadata.epoch` matches the current epoch, then verifies the quorum signature over `trx`:

```rust
fn process_chunky_dkg_result_inner(
    &self,
    resolver: &impl AptosMoveResolver,
    module_storage: &impl AptosModuleStorage,
    log_context: &AdapterLogSchema,
    session_id: SessionId,
    dkg_output: CertifiedChunkyDKGOutput,
) -> Result<(VMStatus, VMOutput), ExecutionFailure> {
    let CertifiedChunkyDKGOutput {
        certified_transcript,
        encryption_key,
    } = dkg_output;

    let CertifiedAggregatedChunkySubtranscript {
        metadata,
        transcript_bytes,
        signature,
    } = certified_transcript;

    let config_resource = ConfigurationResource::fetch_config(resolver).ok_or(
        ExecutionFailure::Expected(ExpectedFailure::MissingResourceConfiguration),
    )?;
    if metadata.epoch != config_resource.epoch() {
        return Err(ExecutionFailure::Expected(ExpectedFailure::EpochNotCurrent));
    }
    [...]
    let trx: AggregatedSubtranscript = bcs::from_bytes(&transcript_bytes).map_err(|_| {
        ExecutionFailure::Expected(ExpectedFailure::TranscriptDeserializationFailed)
    })?;
    verifier
        .verify_multi_signatures(&trx, &signature)
        .map_err(|_| ExecutionFailure::Expected(ExpectedFailure::MultiSigVerificationFailed))?;
    [...]
}
```

The epoch check is applied to `metadata`, but the multi-signature is verified only over `trx` (the `AggregatedSubtranscript`), which does not include the epoch. Since `metadata` is not covered by the signature, a submitter can craft a `CertifiedAggregatedChunkySubtranscript` with `metadata.epoch` set to the current epoch while reusing an `AggregatedSubtranscript` and its quorum signatures collected in a previous epoch. The epoch check passes and the signature check passes, allowing a stale transcript to be accepted on-chain.

Two consequences follow depending on whether the validator set changed between epochs:

1. If the validator set changed, the stale transcript was produced for the previous validator set. New validators would receive invalid key shares, breaking decryption.
2. If the validator set is unchanged, the same key share is installed again, reusing the encryption/decryption key across epochs. In the batch threshold encryption scheme, reusing the encryption/decryption key across epochs renders all the encrypted transactions in the new epoch decryptable.

Additionally, the submitter's identity is not included in the signed content, so any validator can take a certified transcript produced by another validator and post it on-chain.

**Impact**. An attacker can replay a certified transcript from a previous epoch to either install invalid key shares when the validator set changes, or force encryption key reuse across epochs when the validator set is unchanged. Key reuse breaks the confidentiality guarantee of the encrypted mempool.

**Recommendation**. It is recommended to include the epoch in the signed `AggregatedSubtranscript` so that signatures are bound to a specific epoch and cannot be replayed across epoch boundaries. The submitter's identity should also be included in the signed content to prevent transcript theft.
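
As an illustration, the quorum-signed message could be a wrapper that carries the epoch and submitter alongside the transcript. The struct below is a hypothetical shape, not the codebase's type:

```rust
use serde::Serialize;

// Hypothetical signed-message shape: because `epoch` and `submitter` are part
// of the BCS-serialized bytes that validators sign, a signature collected in
// epoch N cannot be replayed in epoch N+1, and a transcript certified for one
// submitter cannot be posted on-chain by another validator.
#[derive(Serialize)]
struct EpochBoundSubtranscript {
    epoch: u64,
    submitter: AccountAddress,           // the submitting validator
    transcript: AggregatedSubtranscript, // the existing transcript type
}
```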

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19544.

### Encrypted Transactions Are Vulnerable to Malleability Attack via `TransactionAuthenticator`

- **Severity**: High
- **Location**: transaction/authenticator.rs

**Description**. In the Aptos network, `TransactionAuthenticator` carries the sender's public key material, from which the `AuthenticationKey` is derived, along with a signature. The `AuthenticationKey`-to-account binding is verified at the VM level, after decryption. The batch threshold encryption scheme uses `PayloadAssociatedData` to bind the ciphertext to the transaction sender, but it only includes the sender's `AccountAddress`:

```rust
impl EncryptedPayload {
    pub fn verify(&self, sender: AccountAddress) -> anyhow::Result<()> {
        let associated_data = PayloadAssociatedData::new(sender);
        self.ciphertext().verify(&associated_data)
    }
}
```

Since the `AuthenticationKey` is not included in the associated data, an attacker can copy a victim's encrypted transaction, replace the `TransactionAuthenticator` with their own key and a valid self-signature, and submit it. The ciphertext check passes (the sender address is unchanged), the signature check passes (the attacker signed with their own key), and the transaction is decrypted. Only then does the VM reject it because the attacker's `AuthenticationKey` does not match the account's stored key. By that point, the victim's plaintext transaction has already been revealed.

```rust
/// Each transaction submitted to the Aptos blockchain contains a `TransactionAuthenticator`. During
/// transaction execution, the executor will check if every `AccountAuthenticator`'s signature on
/// the transaction hash is well-formed and whether the sha3 hash of the
/// `AccountAuthenticator`'s `AuthenticationKeyPreimage` matches the `AuthenticationKey` stored
/// under the participating signer's account address.
#[derive(Clone, Debug, Eq, PartialEq, Hash, Serialize, Deserialize)]
pub enum TransactionAuthenticator {
    /// Single Ed25519 signature
    Ed25519 {
        public_key: Ed25519PublicKey,
        signature: Ed25519Signature,
    },
    /// K-of-N multisignature
    MultiEd25519 {
        public_key: MultiEd25519PublicKey,
        signature: MultiEd25519Signature,
    },
    /// Multi-agent transaction.
    MultiAgent {
        sender: AccountAuthenticator,
        secondary_signer_addresses: Vec<AccountAddress>,
        secondary_signers: Vec<AccountAuthenticator>,
    },
    /// Optional Multi-agent transaction with a fee payer.
    FeePayer {
        sender: AccountAuthenticator,
        secondary_signer_addresses: Vec<AccountAddress>,
        secondary_signers: Vec<AccountAuthenticator>,
        fee_payer_address: AccountAddress,
        fee_payer_signer: AccountAuthenticator,
    },
    SingleSender {
        sender: AccountAuthenticator,
    },
}
```

A second attack applies to accounts using keyless or account abstraction authentication schemes. For these schemes, the authentication check runs after decryption. An attacker can submit a malformed authenticator for such an account: the ciphertext integrity check passes and the transaction is decrypted, before the authentication failure is caught.

```rust
/// An `AccountAuthenticator` is an abstraction of a signature scheme. It must know:
/// (1) How to check its signature against a message and public key
/// (2) How to convert its public key into an `AuthenticationKeyPreimage` structured as
/// (public_key | signature_scheme_id).
/// Each on-chain `Account` must store an `AuthenticationKey` (computed via a sha3 hash of `(public
/// key bytes | scheme as u8)`).
#[derive(Clone, Debug, Eq, PartialEq, Hash, Serialize, Deserialize)]
pub enum AccountAuthenticator {
    /// Ed25519 Single signature
    Ed25519 {
        public_key: Ed25519PublicKey,
        signature: Ed25519Signature,
    },
    /// Ed25519 K-of-N multisignature
    MultiEd25519 {
        public_key: MultiEd25519PublicKey,
        signature: MultiEd25519Signature,
    },
    SingleKey {
        authenticator: SingleKeyAuthenticator,
    },
    MultiKey {
        authenticator: MultiKeyAuthenticator,
    },
    NoAccountAuthenticator,
    Abstract {
        authenticator: AbstractAuthenticator,
    }, // ... add more schemes here
}
```

**Impact**. An attacker can force decryption of any encrypted transaction by either replacing the `TransactionAuthenticator` with their own valid credentials or submitting a malformed authenticator (for keyless and account abstraction accounts). In both cases, the plaintext transaction is revealed before the invalid authentication is rejected, breaking the confidentiality guarantee of the encrypted mempool.

**Recommendation**. It is recommended to include the `AuthenticationKey` in `PayloadAssociatedData` so that the ciphertext is cryptographically bound to the sender's signing key. This prevents an attacker from substituting a different authenticator without invalidating the ciphertext check. Additionally, encrypted transactions should not be supported for keyless and account abstraction accounts if their authentication checks are deferred to the VM and cannot be enforced prior to decryption.
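
For illustration, the verification entry point could take the authentication key and fold it into the associated data. The two-argument `PayloadAssociatedData::new` below is hypothetical:

```rust
impl EncryptedPayload {
    // Hypothetical extension of the existing verify(): binding the
    // AuthenticationKey into the associated data means any authenticator
    // substitution invalidates the ciphertext check before decryption.
    pub fn verify(
        &self,
        sender: AccountAddress,
        auth_key: AuthenticationKey,
    ) -> anyhow::Result<()> {
        let associated_data = PayloadAssociatedData::new(sender, auth_key);
        self.ciphertext().verify(&associated_data)
    }
}
```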

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19460 and https://github.com/aptos-labs/aptos-core/pull/19546.

### Missing Check in `SecretKeyShare` May Cause Aggregation to Fail and Leave the Network Stuck

- **Severity**: High
- **Location**: secret_sharing/verifier.rs

**Description**. In each round, `SecretShareManager` collects `SecretKeyShare` values from validators and aggregates them to reconstruct the decryption key. It uses an optimistic verifier that skips individual share verification and instead tries to aggregate first, falling back to individual checks only when aggregation fails. A `SecretKeyShare` is defined as:

```rust
pub type WeightedBIBEDecryptionKeyShare = (Player, Vec<BIBEDecryptionKeyShareValue>);
```

In the weighted scheme, the `Vec` must have exactly `weights[player]` elements, and the `Player` field must correspond to the authenticated sender. Both invariants are unverified, leading to two independent issues.

**Issue 1: Share vector length is not validated.**
For validators in the optimistic set, the length of the share vector is not checked before aggregation. In `reconstruct`, each sub-share is mapped to a virtual player via `get_virtual_player(player, pos)`, which asserts `pos < weights[player.id]`:

```rust
pub fn get_virtual_player(&self, player: &Player, j: usize) -> Player {
    assert_lt!(j, self.weights[player.id]);
    let id = self.get_share_index(player.id, j).unwrap();
    Player { id }
}
```

If a malicious validator sends a vector longer than its assigned weight, `pos` will eventually exceed the weight bound, triggering a panic. Since aggregation runs inside `spawn_blocking!`, the panic does not crash the validator, but `try_aggregate` returns an error. Because aggregation never succeeds, `evict_bad_shares` is never called and the offending share is never removed, permanently stalling key reconstruction for that round.

**Issue 2: The `Player` field is not validated against the sender's identity.**
The `Player` field identifies the validator's position in the weighted scheme and is used to index shares during reconstruction. Even in the individual share verification path, the `Player` value is never checked against the authenticated sender. A malicious validator can send a share with a valid vector of secret share elements but an incorrect `Player` index. The individual check passes, but during aggregation the shares are assigned to wrong positions, producing an incorrect aggregated key. `verify_decryption_key` then fails, triggering the eviction path. However, since individual verification passes for each share, no bad share is evicted, and aggregation is permanently stuck.

**Impact**. A single malicious validator can permanently block decryption key reconstruction for a round by exploiting either issue. Since key reconstruction is required to decrypt the mempool, this is a liveness vulnerability: one adversarial validator can stall the entire network.

**Recommendation**. Before processing any share in the optimistic path, validate that the share vector length equals the weight of the player and that the `Player` field matches the expected index for the authenticated sender. Both checks should be applied regardless of whether the share is on the optimistic or individual verification path.
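
A sketch of the two checks, assuming access to the threshold config's weights and the `Player` index derived from the authenticated sender (the helper itself is hypothetical):

```rust
// Hypothetical pre-aggregation validation for a WeightedBIBEDecryptionKeyShare.
fn validate_share(
    share: &WeightedBIBEDecryptionKeyShare,
    expected_player: &Player, // derived from the authenticated sender
    weights: &[usize],        // from the threshold config
) -> anyhow::Result<()> {
    let (player, sub_shares) = share;
    // Issue 2: the Player field must match the authenticated sender.
    anyhow::ensure!(
        player.id == expected_player.id,
        "share claims player {} but sender maps to player {}",
        player.id,
        expected_player.id
    );
    // Issue 1: the share vector must have exactly weights[player] entries.
    anyhow::ensure!(
        sub_shares.len() == weights[player.id],
        "share has {} sub-shares, expected {}",
        sub_shares.len(),
        weights[player.id]
    );
    Ok(())
}
```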

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19475.

### Missing Subtype Check in `EncryptedPayload` Allows Malicious Validator to Crash All Validators

- **Severity**: High
- **Location**: types/src/transaction/mod.rs

**Description**. `EncryptedPayload` has three variants representing different stages of the transaction lifecycle:

```rust
pub enum EncryptedPayload {
    Encrypted {
        ciphertext: Ciphertext,
        extra_config: TransactionExtraConfig,
        payload_hash: HashValue,
        claimed_entry_fun: Option<ClaimedEntryFunction>,
    },
    FailedDecryption {
        ciphertext: Ciphertext,
        extra_config: TransactionExtraConfig,
        payload_hash: HashValue,
        claimed_entry_fun: Option<ClaimedEntryFunction>,
        eval_proof: Option<EvalProof>,
        reason: DecryptionFailureReason,
    },
    Decrypted {
        ciphertext: Ciphertext,
        extra_config: TransactionExtraConfig,
        payload_hash: HashValue,
        claimed_entry_fun: Option<ClaimedEntryFunction>,
        eval_proof: EvalProof,
        executable: TransactionExecutable,
        decryption_nonce: u64,
    },
}
```

Only `Encrypted` is a valid user-submitted variant. `FailedDecryption` and `Decrypted` are internal states set by validators during block processing and should never appear in an incoming transaction.

However, the check used to validate transactions in quorum store batches and block proposals only verifies that the transaction has an `EncryptedPayload`, without checking which variant it is:

```rust
pub fn is_encrypted_variant(&self) -> bool {
    matches!(self, Self::EncryptedPayload(_))
}
```

This allows `FailedDecryption` and `Decrypted` variants to pass validation and be included in a batch or block proposal. During the subsequent decryption step, `entry_fun_matches` asserts that the payload is in the `Encrypted` state:

```rust
pub fn entry_fun_matches(&self, decrypted: &DecryptedPayload) -> anyhow::Result<bool> {
    let Self::Encrypted { claimed_entry_fun, .. } = self
    else {
        bail!("Payload is not in Encrypted state");
    };
    [...]
}
```

This result is unwrapped with `.expect("must be encrypted")` at the call site. If the payload is `FailedDecryption` or `Decrypted`, `entry_fun_matches` returns an error and the `expect` call panics, crashing the validator.

**Impact**. A malicious validator can craft a transaction with a `FailedDecryption` or `Decrypted` payload and include it in a quorum store batch or block proposal. This passes the existing validation check and triggers a panic in all validators that attempt to decrypt the block, crashing them. This is a liveness attack that can be mounted by any single validator in the network.

**Recommendation**. It is recommended to add a subtype check in the batch and block proposal validation logic to reject any `EncryptedPayload` that is not in the `Encrypted` variant.
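
For example, the validation predicate could match on the variant rather than on the payload type alone; the function below is a hypothetical stricter replacement for `is_encrypted_variant`:

```rust
// Hypothetical stricter check: only the user-submittable Encrypted variant
// passes; the internal FailedDecryption and Decrypted states are rejected.
pub fn is_user_submitted_encrypted(&self) -> bool {
    matches!(
        self,
        Self::EncryptedPayload(EncryptedPayload::Encrypted { .. })
    )
}
```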

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19546.

### No Dealer-Set Validation in DKG Signature Requests Enables Invalid Certified Outputs

- **Severity**: High
- **Location**: dkg_manager/mod.rs

**Description**. During Chunky DKG, validators receiving a `SubtranscriptSignatureRequest` are expected to validate the requested dealer set before signing the corresponding `AggregatedSubtranscript`. However, `handle_subtranscript_signature_request()` does not do so. The function converts `aggregated_subtrx_dealers` into validator addresses, checks only basic length consistency and transcript/hash availability, aggregates the referenced subtranscripts, and signs the result. It never validates that the dealer set itself is acceptable for certification.

The missing checks in `handle_subtranscript_signature_request()` are:

1. **Duplicate-dealer rejection is missing.** The function does not enforce uniqueness of `aggregated_subtrx_dealers`. A malicious validator can include the same dealer multiple times, and the request still reaches aggregation and signing.

2. **Dealer-set quorum/threshold validation is missing.** The function does not enforce that `aggregated_subtrx_dealers` has sufficient voting power before signing. A malicious validator can therefore use an undersized set, including a single-dealer set.

Passing `dkg_config.threshold_config` into `ChunkySubtranscript::aggregate(...)` does not close this gap. There are no checks inside `aggregate()` that the provided dealer set meets the threshold requirements.

As a result, honest validators can be induced to sign an `AggregatedSubtranscript` built from an invalid dealer set (duplicate or underpowered). Once quorum signatures are collected, downstream verification focuses on signature validity and signer quorum, not dealer-set validity, so the invalid certified transcript can still be accepted on-chain.

**Impact**. A single malicious validator can obtain honest quorum signatures over an invalid DKG output.

Consequences include:
- Violation of DKG integrity: the certified transcript may not represent a valid threshold aggregation of an authorized dealer quorum.
- On-chain acceptance of invalid transcript metadata: duplicate or low-power dealer sets can pass end-to-end if signatures are gathered.
- Potential confidentiality degradation: in the extreme single-dealer case, the final encryption key can be derived from attacker-controlled material, defeating the intended distributed trust assumptions of encrypted mempool key generation.

**Recommendation**. In `handle_subtranscript_signature_request()`, reject the request before aggregation/signing unless all dealer-set invariants hold (a sketch follows the list):
- Uniqueness: `aggregated_subtrx_dealers` must contain no duplicate players.
- Threshold compliance: dealer-set voting power must satisfy the DKG threshold.
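
A sketch of both checks, assuming hypothetical accessors for per-dealer voting power and the DKG threshold:

```rust
use std::collections::HashSet;

// Hypothetical dealer-set validation, run before aggregation and signing.
fn validate_dealer_set(
    dealers: &[Player],
    voting_power: impl Fn(&Player) -> u64, // hypothetical accessor
    threshold_power: u64,                  // from the DKG config
) -> anyhow::Result<()> {
    // Uniqueness: reject duplicate dealers.
    let unique: HashSet<usize> = dealers.iter().map(|p| p.id).collect();
    anyhow::ensure!(unique.len() == dealers.len(), "duplicate dealers in request");

    // Threshold compliance: the set must carry enough voting power.
    let total: u64 = dealers.iter().map(&voting_power).sum();
    anyhow::ensure!(total >= threshold_power, "dealer set below DKG threshold");
    Ok(())
}
```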

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19544.

### Chunky Public Parameters Use Target-Epoch Validator Count for `max_aggregation` Instead of Dealer-Epoch Count

- **Severity**: Medium
- **Location**: consensus/src/epoch_manager.rs

**Description**. At each epoch transition, validators decrypt their secret shares from the Chunky transcript using Chunky public parameters (PP). These parameters are constructed from `ChunkyDKGSessionMetadata`, which contains validator committee information for both the dealer (current) epoch and the target (next) epoch.

In Chunky, `max_aggregation` represents the maximum number of subtranscripts that can be aggregated, which equals the number of dealers, i.e., the number of validators in the dealer epoch. However, `ChunkyDKGSession::new` derives `max_aggregation` from the target epoch's validator set instead:

```rust
pub fn new(dkg_session_metadata: &ChunkyDKGSessionMetadata) -> Arc<ChunkyDKGSession> {
    [...]
    let target_validators = dkg_session_metadata.target_validator_consensus_infos_cloned();
    let validator_stakes: Vec<u64> =
        target_validators.iter().map(|vi| vi.voting_power).collect();
    [...]
    let threshold_config = ChunkyDKGThresholdConfig::new(
        profile.reconstruct_threshold_in_weights as usize,
        profile.validator_weights.iter().map(|w| *w as usize).collect(),
    )
    .expect("Failed to create WeightedConfigArkworks");

    [...]
    let public_parameters = PublicParameters::new_for_testing(
        total_weight as usize,
        aptos_dkg::pvss::chunky::DEFAULT_ELL_FOR_DEPLOYMENT,
        threshold_config.get_total_num_players(), // should be dealer-epoch count
        G2Affine::generator(),
        &mut rng_aptos,
    );
    [...]
}
```

Since `max_aggregation` is used to compute the Dlog range bound in `get_dlog_range_bound`, passing the wrong value causes the bound to be computed over the wrong range, which leads to decryption failure.

**Impact**. Validators may fail to decrypt their secret shares when the validator count differs between the dealer and target epochs. This is a liveness issue that can silently break epoch transitions when validator set membership changes.

**Recommendation**. It is recommended to derive the threshold config from the dealer epoch's validator set rather than the target epoch's, so that `max_aggregation` correctly reflects the number of dealers.
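
A sketch of the intended derivation; the dealer-epoch accessor name is hypothetical, but `ChunkyDKGSessionMetadata` carries both committees:

```rust
// Hypothetical: size max_aggregation by the dealer epoch's committee, since
// only dealer-epoch validators produce subtranscripts to aggregate.
let dealer_validators = dkg_session_metadata.dealer_validator_consensus_infos_cloned();
let max_aggregation = dealer_validators.len();
```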

**Client Response**. https://github.com/aptos-labs/aptos-core/pull/19291.

### Encrypted Transactions Can Be Decrypted Without Being Executed at Epoch Boundaries

- **Severity**: Medium
- **Location**: aptos-vm/src/aptos_vm.rs

**Description**. A security property of the encrypted mempool is that a valid encrypted transaction, once decrypted, should be executed. This property can be violated at epoch boundaries.

In the Aptos network, validators end an epoch immediately after seeing the `NewEpochEvent`. Transactions appearing after the `NewEpochEvent` are skipped by the VM, even though they have been ordered. Since the `NewEpochEvent` is emitted during execution, after the whole block has already been decrypted, an encrypted transaction that lands after this event can be decrypted but not executed.

This can happen naturally at an epoch boundary without any malicious intent, or a malicious block proposer can deliberately position a target encrypted transaction after the epoch-ending event. Since the decryption key rotates each epoch, the transaction cannot be replayed in the next epoch, so we do not see a direct MEV opportunity. However, the sender's intent is permanently revealed without the transaction being committed.

**Impact**. A sender's transaction intent can be leaked without execution. This can occur accidentally at an epoch boundary, with or without a malicious proposer.

**Recommendation**. It is recommended to disallow encrypted transactions from being decrypted after the epoch-end event, preventing any decrypted transaction from being skipped at an epoch boundary.

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19634.

### 64-bit `decryption_nonce` Is Insufficient to Hide `payload_hash`

- **Severity**: Medium
- **Location**: transaction/encrypted_payload.rs

**Description**. The `payload_hash` field in `EncryptedPayload` commits to the plaintext transaction payload. To make this commitment hiding, a random `decryption_nonce` is mixed into the hash so that an attacker cannot determine the preimage from the commitment alone. However, the nonce is only 64 bits:

```rust
pub struct DecryptedPayload {
    executable: TransactionExecutable,
    decryption_nonce: u64,
}
```

A 64-bit nonce provides only $2^{64}$ possible values, which is below the standard 128-bit security level required for computational hiding. An attacker who observes the `payload_hash` can enumerate all $2^{64}$ nonce candidates, hash each one against the known transaction template, and recover the committed plaintext.

**Impact**. The `payload_hash` commitment does not achieve computational hiding at an acceptable security level. An attacker with sufficient resources can brute-force the `decryption_nonce` and recover the plaintext transaction from the commitment, breaking the pre-decryption confidentiality that `payload_hash` is intended to provide.

**Recommendation**. It is recommended to increase `decryption_nonce` to at least 128 bits, for example by changing its type to `u128` or a 16-byte array, to ensure the commitment is computationally hiding at the standard security level.

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19461.

### Off-by-One in `get_dlog_range_bound` Can Cause Decryption Failure

- **Severity**: Medium
- **Location**: chunky/public_parameters.rs

**Description**. In Chunky, secret share chunks are encrypted using an ElGamal scheme, and decryption requires solving a discrete logarithm (Dlog) within a specific range. The `get_dlog_range_bound` function computes this range bound as the range of a single chunk multiplied by the maximum number of aggregated transcripts. Chunky can aggregate up to `max_aggregation + 1` transcripts. However, the implementation does not account for this off-by-one, computing the bound as if there were at most `max_aggregation` transcripts:

```rust
pub(crate) fn get_dlog_range_bound(&self) -> u64 {
    1u64 << (self.ell as u64 + log2(self.max_aggregation) as u64)
}
```

The correct bound should use `max_aggregation + 1`:

```rust
1u64 << (self.ell as u64 + log2(self.max_aggregation + 1) as u64)
```

The return value of `get_dlog_range_bound` is passed as `table_dlog_range_bound` to the `decrypt_chunked_scalars` function. When the true aggregation count reaches `max_aggregation + 1`, the bound used for decryption is smaller than the actual range, causing decryption to fail.

Note that this bug is hard to trigger in practice. All of the following conditions must hold simultaneously:

1. `max_aggregation` is a power of two (so that `log2(max_aggregation + 1) > log2(max_aggregation)`).
2. All validators produce their subshares (i.e., aggregation reaches the maximum).
3. Each chunked share is close to the maximum chunk value $2^{\ell}$ (i.e., $2^{32}$ for the production `ell = 32`), so their sum exceeds the underestimated bound.

The scenario becomes more likely when the validator set is small.

**Impact**. Under the conditions above, `decrypt_chunked_scalars` will fail due to the lookup table being too small, causing decryption to produce incorrect results or abort. This is a liveness issue: a well-formed aggregation can fail to decrypt.

**Recommendation**. Replace `log2(self.max_aggregation)` with `log2(self.max_aggregation + 1)` in `get_dlog_range_bound` to correctly account for the full range of possible aggregation counts:

```rust
pub(crate) fn get_dlog_range_bound(&self) -> u64 {
    1u64 << (self.ell as u64 + log2(self.max_aggregation + 1) as u64)
}
```

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19295.

### BIBE ID Derivation Does Not Include Associated Data, Diverging From the Paper

- **Severity**: Low
- **Location**: shared/ids.rs

**Description**. In the batch IBE (BIBE) scheme, the paper specifies that the tag (ID) used during encryption should be derived as $tg = H(vk_{sig}, ad)$, binding the ID to both the verification key and the associated data. However, `Id::from_verifying_key` in `ids.rs` computes the ID as $tg = H(vk_{sig})$, omitting the associated data entirely.

**Impact**. Since the ID does not include the associated data, the ciphertext is not directly bound to the AD through the IBE tag. In principle, this could allow an attacker to swap or strip the AD without affecting the IBE decryption check. However, the practical impact is limited: the AD and ciphertext are also covered by an ephemeral signature, so a tampered AD would fail `verify_ct`. The finding is primarily a divergence from the paper's security model, which could undermine the formal security argument for the scheme.

**Recommendation**. It is recommended to update `Id::from_verifying_key` to include the associated data in the ID derivation, matching the paper's specification:

$$tg = H(vk_{sig}, ad)$$

This ensures the ciphertext is bound to the AD at the IBE layer, keeping the implementation consistent with the proven security model.
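
A sketch of the corrected derivation; the hash function and byte encodings are illustrative, not the codebase's exact choices:

```rust
use sha3::{Digest, Sha3_256};

// Hypothetical: derive the IBE tag over both the verification key and the
// associated data, matching the paper's tg = H(vk_sig, ad).
pub fn id_from_verifying_key_and_ad(vk_bytes: &[u8], ad: &[u8]) -> [u8; 32] {
    let mut hasher = Sha3_256::new();
    hasher.update(vk_bytes);
    hasher.update(ad);
    hasher.finalize().into()
}
```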

**Client Response**. https://github.com/aptos-labs/aptos-core/pull/19055.

### `ell` Range Checked Only via `debug_assert!`

- **Severity**: Low
- **Location**: chunky/hkzg_chunked_elgamal.rs

**Description**. The chunk bit-size parameter `ell` is validated with `debug_assert!((8..=63).contains(&pp.ell))`, which is stripped in release builds. Several crash sites exist for out-of-range `ell`:

| `ell` | Crash site | Expression | Result |
|---|---|---|---|
| 64 | `compute_powers_of_radix` | `1u64 << 64` | Overflow panic |
| ≥48 | `build_dlog_table` | `1u32 << table_size_exp` (exp ≥ 32) | Overflow panic |
| ≥56 | `get_dlog_range_bound` | `1u64 << (ell + ceil_log2(N))` | Overflow panic |

The upper bounding constraint is `build_dlog_table`, making the actual safe upper bound `ell ≤ 47` for typical parameters, not 63 as the `debug_assert` suggests: `ell=48` with `max_aggregation=150` panics in `build_dlog_table` because `table_size_exp = 4 + ((48 + 8) / 2) = 32`, and `1u32 << 32` overflows.

**Impact**. The practical impact is limited: production hardcodes `ell = 32`, `PublicParameters` is never received from the network, and `overflow-checks = true` in release builds ensures a panic rather than silent wraparound.

**Recommendation**. Replace `debug_assert!` with a runtime `assert!` or return `Err` for out-of-range `ell` to prevent crashes in release builds. Consider adding unit tests for boundary values of `ell`.
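
A minimal version of the runtime check, using the tighter bound derived above (47 holds for typical parameters such as `max_aggregation = 150`):

```rust
// Enforce the ell bound in release builds as well; see the crash-site table
// above for why 47, not 63, is the safe upper bound.
anyhow::ensure!(
    (8..=47).contains(&pp.ell),
    "ell = {} outside supported range [8, 47]",
    pp.ell
);
```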

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19311.

### Encrypted Payload Does Not Bind to an Epoch, Causing Cross-Epoch Decryption Failure

- **Severity**: Low
- **Location**: transaction/encrypted_payload.rs

**Description**. Transactions are encrypted using the current epoch's encryption key, derived from the Chunky DKG output. However, `EncryptedPayload` does not include an epoch identifier, so a transaction encrypted in epoch $N$ may be included in a block during epoch $N+1$, at which point a different decryption key is in use.

When validators in epoch $N+1$ attempt to decrypt such a stale ciphertext, the AES-GCM authentication tag check will likely fail, since the epoch $N+1$ key is derived independently of epoch $N$'s key. In addition, the `payload_hash` commitment would reject any plaintext mismatch. The practical consequence is a liveness issue: a legitimately encrypted transaction that crosses an epoch boundary fails to decrypt, yet the sender's nonce is incremented and gas fees are charged.

**Impact**. A sender whose encrypted transaction is not included before an epoch transition will have their transaction dropped at decryption time in the future epoch. The sender pays the gas cost and loses the nonce slot with no executed transaction.

**Recommendation**. It is recommended to add an `epoch_id` field to `EncryptedPayload`. Validators can then reject stale-epoch transactions before attempting decryption, providing a clear error and possibly avoiding unnecessary gas charges for the sender.

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19461.

### Minor DoS and Robustness Issues in `ChunkyDKGManager` Signature Request Handling

- **Severity**: Low
- **Location**: dkg/src/chunky/mod.rs

**Description**. Three related issues affect the robustness and resource usage of `ChunkyDKGManager` when processing subtranscript signature requests.

**Issue 1: No rate limiting on signature requests per sender.**
`process_subtranscript_signature_request_rpc` spawns a task to fetch and cryptographically verify the transcript for each incoming request. The existing guard only skips re-spawning if the same sender's handler is still in-flight with the same `subtranscript_hash`:

```rust
if let Some((existing_hash, handle)) = self.rpc_handler_guards.get(&sender) {
    if *existing_hash == req.subtranscript_hash && !handle.is_finished() {
        response_sender.send(Err(anyhow!("handler already in-flight for sender {}", sender)));
        return Ok(());
    }
}
```

A malicious validator can bypass this guard by sending requests with different `subtranscript_hash` values, each of which spawns a new verification task. Since transcript verification is computationally expensive, this can exhaust task resources and delay DKG progress.

**Issue 2: `transcript.verify` runs on the async executor.**
The cryptographic transcript verification is CPU-intensive but is called directly on the async executor thread:

```rust
monitor!(
    "chunky_validate_transcript_verify",
    transcript.verify(
        &dkg_config.threshold_config,
        &dkg_config.public_parameters,
        signing_pubkeys,
        &dkg_config.eks,
        &dkg_config.session_metadata,
        rng,
    )
)
.context("chunky transcript verification failed")?;
```

Running heavy CPU work on the async executor blocks the runtime from scheduling other tasks, degrading overall responsiveness during DKG.

**Issue 3: Signature requests are rejected in all states except `AwaitAggregatedSubtranscriptCertification`.**
The handler only processes requests in one state and returns an error for all others:

```rust
let (aggregated_transcript, dkg_config) = match &self.state {
    InnerState::AwaitAggregatedSubtranscriptCertification { .. } => { [...] },
    _ => {
        response_sender.send(Err(anyhow!(
            "[ChunkyDKG] not ready for signature requests in state {:?}",
            self.state.variant_name()
        )));
        return Ok(());
    },
};
```

A validator in the `AwaitSubtranscriptAggregation` or `Finished` state will reject all incoming signature requests. Accepting requests in these states would make the certification process more robust against validators at different stages of DKG progress.

**Impact**. Issue 1 allows a malicious validator to trigger excessive verification work. Issue 2 can slow down the async runtime during DKG. Issue 3 reduces the resilience of the certification process when validators are not all in the same state.

**Recommendation**. For Issue 1, it is recommended to add a per-sender rate limit to bound the number of concurrent verification tasks spawned per validator. For Issue 2, it is recommended to wrap `transcript.verify` in `spawn_blocking` to move the heavy computation off the async executor. For Issue 3, it is recommended to accept signature requests in the `AwaitSubtranscriptAggregation` and `Finished` states to improve DKG robustness.
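
For Issue 2, a sketch of the `spawn_blocking` wrapping, assuming a tokio runtime and that the verification inputs can be moved or cloned into the closure:

```rust
use anyhow::Context;

// Run the CPU-heavy verification on tokio's blocking pool so the async
// executor keeps scheduling other DKG tasks in the meantime.
tokio::task::spawn_blocking(move || {
    transcript.verify(
        &dkg_config.threshold_config,
        &dkg_config.public_parameters,
        signing_pubkeys,
        &dkg_config.eks,
        &dkg_config.session_metadata,
        rng,
    )
})
.await
.context("verification task panicked")?
.context("chunky transcript verification failed")?;
```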

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19544.

### Optimistic Aggregation Can Accept Invalid Individual Shares That Cancel in the Linear Combination

- **Severity**: Low
- **Location**: secret_sharing/verifier.rs

**Description**. `SecretShareStore` uses an optimistic verification strategy: it attempts aggregation first and only falls back to individual share verification when aggregation fails. The aggregation reconstructs the shared secret as a weighted sum of G1 elements using Lagrange coefficients, where the coefficients are determined by the positions and indices of the contributing shares:

```rust
impl<T: WeightedSum> Reconstructable<ShamirThresholdConfig<T::Scalar>> for T {
    type ShareValue = T;

    fn reconstruct(
        sc: &ShamirThresholdConfig<T::Scalar>,
        shares: &[ShamirShare<Self::ShareValue>],
    ) -> Result<Self> {
        [...]
        let (roots_of_unity_indices, bases): (Vec<usize>, Vec<Self::ShareValue>) = shares
            [..sc.t]
            .iter()
            .map(|(p, g_y)| (p.get_id(), g_y))
            .collect();

        let lagrange_coeffs = sc.lagrange_for_subset(&roots_of_unity_indices);

        Ok(T::weighted_sum(&bases, &lagrange_coeffs))
    }
}
```

Because the aggregated result is a linear combination, a malicious validator can craft a share with invalid individual G1 elements whose errors cancel out in the weighted sum, leaving the aggregated value unchanged and correct. This requires the attacker to know or predict which shares from other validators will be selected for reconstruction (the first `sc.t` shares). When aggregation succeeds with such a crafted share, the optimistic path skips individual verification entirely, so the malformed share is accepted without ever being detected.

**Impact**. No direct security impact was identified. However, the optimistic verifier provides a weaker soundness guarantee than expected: a validator's accepted share may not be individually valid even though the aggregate passes. Components that assume individual share validity after a successful aggregation may be vulnerable in the future.

**Recommendation**. It is recommended to document explicitly that the optimistic verifier does not guarantee individual share validity. If future components rely on individual share correctness, individual verification should be enforced regardless of whether aggregation succeeds.

**Client Response**. Acknowledged.

### Missing `payload_hash` Verification After Decryption

- **Severity**: Low
- **Location**: transaction/encrypted_payload.rs

**Description**. In `EncryptedPayload`, the `payload_hash` field is intended to serve as a commitment to the plaintext transaction payload:

```rust
pub enum EncryptedPayload {
    Encrypted {
        ciphertext: Ciphertext,
        extra_config: TransactionExtraConfig,
        payload_hash: HashValue,
        claimed_entry_fun: Option<ClaimedEntryFunction>,
    },
    [...]
}
```

However, after decryption, the implementation does not verify that the hash of the decrypted payload matches `payload_hash`. Because `payload_hash` is a separate field outside the ciphertext, any party can set it to an arbitrary value without affecting the ciphertext check. Any downstream code that relies on `payload_hash` as a trusted commitment to the payload will therefore receive an untrusted value.

Note that the confidentiality and authenticity of the payload itself are already protected by the CCA-secure batch threshold encryption scheme. This finding concerns only the integrity of the `payload_hash` field as a commitment.

**Impact**. Any component that depends on `payload_hash` as a binding commitment to the plaintext payload (such as deduplication, ordering, or pre-execution filtering logic) can be misled by an attacker who sets `payload_hash` to an arbitrary value.

**Recommendation**. It is recommended to verify that the hash of the decrypted payload matches the `payload_hash` field after decryption, and reject the transaction if the check fails.
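
A sketch of the post-decryption check; `hash_with_nonce` is a hypothetical helper that recomputes the commitment from the decrypted executable and its `decryption_nonce`:

```rust
// Recompute the commitment from the decrypted payload and compare it to the
// payload_hash carried in the EncryptedPayload; reject on mismatch.
anyhow::ensure!(
    decrypted_payload.hash_with_nonce() == *payload_hash,
    "decrypted payload does not match payload_hash commitment"
);
```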

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19461.

### Discouraged Transaction Submission Patterns Can Cause Decryption Without Execution

- **Severity**: Low
- **Location**: aptos-vm/src/aptos_vm.rs

**Description**. In the normal Aptos transaction flow, the prologue check validates preconditions such as expiration time and sequence number before a transaction is executed. In the encrypted mempool flow, this check occurs after decryption. As a result, a transaction whose prologue would fail is still decrypted and its plaintext revealed before being rejected. This may lead to a few issues when users do not follow the recommended transaction-submission behavior.

**Issue 1: Expired transactions can be decrypted.**
A block proposer can include an already-expired encrypted transaction in a block. The transaction is decrypted before the expiration check runs, revealing the sender's intent for a transaction that would never execute. The reverse also applies: a sender who sets a future expiration time expecting their transaction to remain private until then will find it decrypted immediately, since the expiration bound is not enforced before decryption. Note that the future-expiration use case is not supported at the mempool level.

**Issue 2: No safe cancellation via nonce reuse.**
A user may try to replace a pending transaction by submitting a new transaction with the same sequence/nonce number. With encrypted transactions, this does not prevent the original transaction from being decrypted. A block proposer can still include the original encrypted transaction and decrypt it, even after the sender has submitted a replacement. The replacement takes effect for execution, but the original plaintext is already revealed.

**Issue 3: Sequence number reordering enables MEV.**
If a user submits two consecutive transactions with sequence numbers $N$ and $N+1$, a malicious block proposer can reorder them and place the $N+1$ transaction first. This transaction is decrypted but fails the prologue check because sequence number $N$ has not yet been executed. The proposer now knows the content of the $N+1$ transaction. Once $N$ is committed in a later block, the $N+1$ transaction becomes valid again and can be front-run using the previously revealed plaintext.

**Impact**. A malicious block proposer can force decryption of encrypted transactions that would fail the prologue check, revealing sender intent without execution. This enables MEV via sequence number reordering and removes the sender's ability to cancel an encrypted transaction by nonce replacement. The expiration time, which normally bounds transaction validity, also does not bound when a transaction is decrypted. Note that these are generally not recommended use cases for the Aptos network and thus should not occur when users follow the recommended transaction-submission flow.

**Recommendation**. It is recommended to clearly document that these discouraged behaviors may cause the transaction to be decrypted without execution. Alternatively, it is recommended to move as many prologue checks as possible to before decryption.

**Client Response**. Acknowledged.

### Encrypted Transactions Can Be Decrypted Without Being Executed When Block Limits Are Exhausted

- **Severity**: Low
- **Location**: aptos-vm/src/aptos_vm.rs

**Description**. A security property of the encrypted mempool is that a valid encrypted transaction, once decrypted, should be executed. This property can be violated due to block limit exhaustion.

In the Aptos network, a block may exhaust its execution limits, such as gas or [block output size](https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-057-block-output-size-limit-and-conflict-aware-block.md). Once a limit is reached, the VM can skip the remaining transactions in the block. If those remaining transactions include encrypted transactions, they may already have been decrypted before being skipped for execution.

Due to the shuffle phase, encrypted transactions are not guaranteed to be ordered before regular transactions in a block. In normal operation, a block may exhaust its limits before reaching all encrypted transactions, causing them to be decrypted but discarded without execution. A malicious block proposer can also trigger this deliberately by filling the block with expensive transactions ahead of a target encrypted transaction. In either case, the plaintext is revealed without the transaction being executed.

Note that a deliberate attacker must find or create, and pay gas for, enough large filler transactions. The attack is therefore practically constrained by transaction shuffling, the higher gas cost of encrypted transactions, and the need to pay this up-front cost without a guaranteed return.

**Impact**. A sender's transaction intent can be leaked without execution. If the plaintext remains actionable, it can be used for MEV. This can occur accidentally when block limits are exhausted, or deliberately when a malicious proposer can place enough expensive transactions before a target encrypted transaction.

**Recommendation**. It is recommended to treat an encrypted transaction that is decrypted and cannot be executed due to block limit exhaustion as a block-level failure rather than silently discarding it. Alternatively, it is recommended to monitor for this behavior on-chain and respond by adjusting encrypted-transaction gas parameters or other block-admission limits if it becomes a practical attack vector.

**Client Response**. Partially fixed in https://github.com/aptos-labs/aptos-core/pull/19562.

### `total_weight as u32` Silent Truncation in `generate_config()`

- **Severity**: Low
- **Location**: types/src/dkg/chunky_dkg.rs

**Description**. `let total_weight: u32 = profile.validator_weights.iter().sum::<u64>() as u32;` silently truncates the `u64` sum to its low 32 bits if it exceeds `u32::MAX`. The truncated value would create a mismatch between SRS sizing and the actual total weight.

**Impact**. Not exploitable in practice: with production parameters (~150 validators), the total weight is approximately $3n + 12 \approx 462$, far below `u32::MAX`.

**Recommendation**. It is recommended to replace the `as u32` cast with a checked conversion, e.g. `u32::try_from(...).expect(...)`, so that an out-of-range sum fails loudly instead of truncating.

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19291.

### Pairing Computations in BIBE Can Be Batched for Better Efficiency

- **Severity**: Informational
- **Location**: aptos-dkg/src/pvss/bibe/

**Description**. Several functions in the BIBE implementation compute multiple pairings sequentially, missing opportunities to use batched pairing APIs that are significantly more efficient. Pairing operations are expensive: batching multiple pairs avoids redundant final exponentiations by computing all Miller loop products first and applying a single final exponentiation.

**Case 1: `prepare_individual`.**
Two pairings are computed and added separately:

```rust
let pairing_output = PairingSetting::pairing(digest.as_g1(), self.ct_g2[0])
    + PairingSetting::pairing(**eval_proof, self.ct_g2[1]);
```

This can be replaced with a single `multi_pairing` call:

```rust
let pairing_output = PairingSetting::multi_pairing(
    [digest.as_g1(), **eval_proof],
    [self.ct_g2[0], self.ct_g2[1]],
);
```

**Case 2: `verify_shifted_bls`.**
Two pairings are computed separately for BLS signature verification:

```rust
if PairingSetting::pairing(digest.as_g1() + hashed_offset, verification_key_g2)
    == PairingSetting::pairing(signature, G2Affine::generator())
```

These can be batched by checking the product-of-pairings equals the identity, avoiding a redundant final exponentiation.
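
Concretely, $e(A, B) = e(C, D)$ holds exactly when $e(A, B) \cdot e(C, D)^{-1}$ is the identity, and inverting one pairing amounts to negating one of its G1 inputs. A sketch of the batched check follows; the pairing output is written additively here, matching the `+` notation above, so the identity is `zero()`, and the group-element types may need normalizing to a common representation:

```rust
use ark_ff::Zero;

// One Miller loop product and a single final exponentiation replace the two
// separate pairings: the result is the identity iff the pairings are equal.
let check = PairingSetting::multi_pairing(
    [digest.as_g1() + hashed_offset, -signature],
    [verification_key_g2, G2Affine::generator()],
);
anyhow::ensure!(check.is_zero(), "invalid shifted BLS signature");
```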

**Case 3: `verify_pf`.**
Two pairings are computed separately for evaluation proof verification:

```rust
PairingSetting::pairing(pf, self.tau_g2 - G2Projective::from(G2Affine::generator() * id.x()))
    == PairingSetting::pairing(digest.as_g1(), G2Affine::generator())
```

This can similarly be expressed as a single batched pairing check.

**Case 4: `verify_decryption_key_share`.**
Each weighted key share is verified individually in a loop, each requiring its own pairing:

```rust
self.vks_g2.iter()
    .zip(&dk_share.1)
    .try_for_each(|(vk, dk_share)| {
        vk.verify_decryption_key_share(digest, &(self.weighted_player, dk_share.clone()))
    })
```

This can be made significantly more efficient by using a random linear combination to collapse all individual pairing checks into a single multi-pairing verification.

**Impact**. This is an informational finding. There is no security impact. The current implementation is correct but suboptimal.

**Recommendation**. It is recommended to replace sequential pairing calls with `multi_pairing` in `prepare_individual`, `verify_shifted_bls`, and `verify_pf`. For `verify_decryption_key_share`, it is recommended to use a random linear combination to batch all individual pairing checks into a single multi-pairing verification.

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19039.

### Missing Subgroup Check When Deserializing BLS12-381 Points in TypeScript Client

- **Severity**: Informational
- **Location**: typescript/bibe.ts

**Description**. The TypeScript functions `bytesToG1` and `bytesToG2` deserialize BLS12-381 curve points without verifying that the resulting points are torsion-free (i.e., in the correct prime-order subgroup):

```typescript
export function bytesToG1(bytes: Uint8Array): WeierstrassPoint<bigint> {
  return bls12_381.G1.Point.fromBytes(bytes);
}

export function bytesToG2(bytes: Uint8Array): WeierstrassPoint<Fp2> {
  return bls12_381.G2.Point.fromBytes(bytes);
}
```

These functions are called when clients fetch and deserialize `EncryptionKey` values. Without a subgroup check, a maliciously crafted point that lies on the curve but outside the prime-order subgroup could be accepted as a valid group element.

**Impact**. This is an informational finding. The risk is low if clients fully trust the source of the encryption key. However, if the source is not fully trusted or the key is fetched over an unauthenticated channel, a small-subgroup point could be used in a cofactor attack to leak information about the client's private inputs.

**Recommendation**. It is recommended to add a torsion check after deserialization in both functions.

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19081.

### `h_denom_eval[0]` Is Used as a Hacky Way to Recover `num_omegas` During Serialization

- **Severity**: Informational
- **Location**: dekart_univariate_v2.rs

**Description**. In Chunky, `h_denom_eval[0]` is never used in any cryptographic computation. Instead, it is repurposed as an indirect way to recover `num_omegas` during serialization of `ProverPrecomputed`. The serialization code inverts `h_denom_eval[0]` to obtain a triangular number, then applies `floored_triangular_root` to recover `num_omegas`:

```rust
impl<E: Pairing> CanonicalSerialize for ProverPrecomputed<E> {
    fn serialize_with_mode<W: Write>(
        &self,
        mut writer: W,
        compress: Compress,
    ) -> Result<(), SerializationError> {
        self.powers_of_two
            .len()
            .serialize_with_mode(&mut writer, compress)?;
        let triangular_number = self.h_denom_eval[0]
            .inverse()
            .expect("Could not invert h_denom_eval[0]");
        let num_omegas = floored_triangular_root(
            arkworks::scalar_to_u32(&triangular_number)
                .expect("triangular number did not fit in u32") as usize,
        ) + 1;
        num_omegas.serialize_with_mode(&mut writer, compress)?;

        Ok(())
    }
}
```

This approach is fragile. The call to `scalar_to_u32` will panic with the message `"triangular number did not fit in u32"` if the triangular number exceeds `u32::MAX`, which is a silent assumption that is not enforced anywhere in the struct's construction.

**Impact**. This is a code quality issue. The logic is difficult to follow since `h_denom_eval[0]` appears to serve a dual, undocumented purpose. Additionally, if `num_omegas` grows large enough that its triangular number exceeds `u32::MAX`, serialization will panic at runtime.

**Recommendation**. It is recommended to add a dedicated `num_omegas` field to `ProverPrecomputed` and serialize it directly. Alternatively, if `num_omegas` always equals `h_denom_eval.len()`, that length can be used directly without any inverse or triangular root computation.

### Privacy Limitations of the Encrypted Mempool

- **Severity**: Informational
- **Location**: *

**Description**. The encrypted mempool provides transaction confidentiality by encrypting the transaction payload before submission. However, several pieces of information remain visible to observers and represent inherent limitations of the current design.

**Transaction metadata is public.** The `RawTransaction` struct exposes several fields outside the encrypted payload:

```rust
pub struct RawTransaction {
    sender: AccountAddress,
    sequence_number: u64,
    payload: TransactionPayload,
    max_gas_amount: u64,
    gas_unit_price: u64,
    expiration_timestamp_secs: u64,
    chain_id: ChainId,
}
```

The sender address, sequence number, gas limit, gas price, and expiration time are all visible before decryption. These fields can reveal the sender's identity, transaction urgency, and rough complexity even when the payload is encrypted.

**`claimed_entry_fun` leaks the target contract.** The `EncryptedPayload` includes a `claimed_entry_fun` field that optionally identifies the target module and function before decryption. When present, an observer can determine which contract a transaction is calling before it is finalized, partially defeating the confidentiality goal for contract interactions.

**Ciphertext length leaks transaction type.** The size of the encrypted payload is observable. Since different transaction types (e.g., simple transfers vs. complex contract calls) produce ciphertexts of characteristic sizes, an observer can infer the likely transaction type from the ciphertext length alone.

**Validators can censor encrypted transactions.** Block proposers have discretion over which transactions to include. A malicious or colluding set of validators can choose to exclude all encrypted transactions from their blocks. There is no consensus-level enforcement guaranteeing that encrypted transactions will be included, so the privacy guarantee is conditional on proposers not actively censoring them.

**Impact**. These are informational findings representing known, inherent limitations of the encrypted mempool's privacy model. They do not constitute bugs. Users should be aware that transaction confidentiality is partial: metadata, contract target, and ciphertext size remain observable, and inclusion of encrypted transactions depends on proposer cooperation.

**Recommendation**. It is recommended to clearly document these limitations in the user-facing documentation so that senders understand what information is and is not protected by the encrypted mempool.

**Client Response**. Acknowledged.

### Early Returns in `add_share_with_metadata` Leave `SecretShareItem` in a Corrupted State

- **Severity**: Informational
- **Location**: secret_sharing/secret_share_store.rs

**Description**. `add_share_with_metadata` uses `std::mem::replace` to move `self` into a local variable, temporarily leaving `self` as a dummy value `Self::new(Author::ONE)`. The function is expected to write the updated state back to `self` before returning:

```rust
fn add_share_with_metadata(
    &mut self,
    share: SecretShare,
    share_weights: &HashMap<Author, u64>,
) -> anyhow::Result<()> {
    let item = std::mem::replace(self, Self::new(Author::ONE));
    [...]
    let new_item = match item {
        SecretShareItem::PendingMetadata(mut share_aggregator) => {
            [...]
            SecretShareItem::PendingDecision { metadata, share_aggregator }
        },
        SecretShareItem::PendingDecision { .. } => {
            bail!("Cannot add self share in PendingDecision state");
        },
        SecretShareItem::Aggregating { .. } | SecretShareItem::Decided { .. } => return Ok(()),
    };
    let _ = std::mem::replace(self, new_item);
    Ok(())
}
```

The `PendingDecision` and `Aggregating | Decided` arms both return early before the final `std::mem::replace(self, new_item)` is reached. In both cases, `self` is left as the dummy `Self::new(Author::ONE)` instead of its original state, silently corrupting the item.

**Impact**. This is an informational finding. The corrupted state paths are not expected to be reachable under normal operation. However, if they are triggered, the `SecretShareItem` is silently left in a dummy state, which could cause incorrect behavior in subsequent operations on the item.

**Recommendation**. It is recommended to restore `self` before returning early in the non-`PendingMetadata` arms. The original `item` should be written back to `self` before calling `bail!` or returning `Ok(())`.
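
A sketch of the fix for the two early-return arms; since neither pattern moves out of `item`, it can simply be written back to `self` before returning:

```rust
SecretShareItem::PendingDecision { .. } => {
    // Restore the original state so `self` is not left as the dummy value.
    *self = item;
    bail!("Cannot add self share in PendingDecision state");
},
SecretShareItem::Aggregating { .. } | SecretShareItem::Decided { .. } => {
    *self = item;
    return Ok(());
},
```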

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19476.

### Error Cases in `RequestShare` Handler Drop the RPC Sender, Causing Requester to Wait Until Timeout

- **Severity**: Informational
- **Location**: secret_sharing/secret_share_manager.rs

**Description**. In `handle_incoming_msg`, when a validator receives a `RequestShare` RPC, it looks up its own share and sends it back via `response_sender`. However, in both error cases (`Ok(None)` and `Err`), the handler only logs a warning and returns, dropping `response_sender` without sending any response:

```rust
SecretShareMessage::RequestShare(request) => {
    let result = self
        .secret_share_store
        .lock()
        .get_self_share(request.metadata());
    match result {
        Ok(Some(share)) => {
            self.process_response(
                protocol,
                response_sender,
                SecretShareMessage::Share(share),
            );
        },
        Ok(None) => {
            warn!(
                "Self secret share could not be found for RPC request {}",
                request.metadata().round
            );
        },
        Err(e) => {
            warn!("[SecretShareManager] Failed to get share: {}", e);
        },
    }
},
```

When `response_sender` is dropped without a reply, the requesting validator receives no response and must wait until the RPC times out (10 seconds) before it can retry. Since collecting enough secret shares to reconstruct the decryption key is on the critical path for transaction decryption and execution, this unnecessary delay extends the decryption phase for the round.

**Impact**. When a validator fails to look up its own share, the requesting validator stalls for the full RPC timeout (10 seconds) before retrying. This unnecessarily delays decryption key reconstruction and transaction execution for the round.

**Recommendation**. It is recommended to send an explicit error response via `response_sender` in both the `Ok(None)` and `Err` cases, so that the requester can fail fast and retry immediately rather than waiting for the timeout.
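
A sketch of the failure arms, assuming a new unavailable/error variant is added to the protocol (`SecretShareMessage::ShareUnavailable` below is hypothetical; the current enum has no such variant):

```rust
Ok(None) | Err(_) => {
    warn!(
        "Self secret share could not be found for RPC request {}",
        request.metadata().round
    );
    // Reply explicitly so the requester fails fast and retries immediately
    // instead of waiting out the 10-second RPC timeout.
    self.process_response(
        protocol,
        response_sender,
        SecretShareMessage::ShareUnavailable(request.metadata().clone()),
    );
},
```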

### Target List for Share Requests Is Computed Before the Delay, Missing Shares Received During the Wait

- **Severity**: Informational
- **Location**: secret_sharing/secret_share_manager.rs

**Description**. `SecretShareManager` proactively requests missing secret shares from other validators after a configurable delay. In `spawn_share_requester_task`, the target list is computed by filtering out validators whose shares are already present in `secret_share_store`. This list is then passed to `spawn_share_requester_for_targets`, which sleeps for `delay_ms` before broadcasting the request:

```rust
fn spawn_share_requester_task(&self, metadata: SecretShareMetadata) -> DropGuard {
    let existing_shares: Option<std::collections::HashSet<aptos_types::PeerId>> =
        secret_share_store.lock().get_all_shares_authors(&metadata);
    let targets: Vec<Author> = match existing_shares {
        Some(existing) => self
            .epoch_state
            .verifier
            .get_ordered_account_addresses_iter()
            .filter(|author| !existing.contains(author))
            .collect(),
        None => return DropGuard::new(AbortHandle::new_pair().0),
    };
    self.spawn_share_requester_for_targets(
        metadata,
        targets,
        self.secret_share_request_delay_ms,
    )
}
```

The purpose of the delay is to allow share responses to arrive passively before proactively requesting them. However, since the target list is computed before the delay, any shares received during the wait are not reflected. The node ends up sending redundant requests to validators from whom it already received shares during the wait period.

**Impact**. The node sends unnecessary share requests, wasting network bandwidth. This is an optimization issue with no security consequence.

**Recommendation**. It is recommended to move the target list computation to after the delay, so that shares received during the wait are excluded before broadcasting the request.
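
A sketch of the reordering, with the store lookup moved behind the delay (`broadcast_share_requests` is a hypothetical stand-in for the existing request-sending path):

```rust
// Hypothetical restructuring: sleep first, then compute the target list, so
// shares that arrived passively during the wait are excluded.
async fn request_missing_shares(self: Arc<Self>, metadata: SecretShareMetadata) {
    tokio::time::sleep(std::time::Duration::from_millis(
        self.secret_share_request_delay_ms,
    ))
    .await;
    let Some(existing) = self
        .secret_share_store
        .lock()
        .get_all_shares_authors(&metadata)
    else {
        return;
    };
    let targets: Vec<Author> = self
        .epoch_state
        .verifier
        .get_ordered_account_addresses_iter()
        .filter(|author| !existing.contains(author))
        .collect();
    self.broadcast_share_requests(metadata, targets);
}
```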

**Client Response**. Fixed in https://github.com/aptos-labs/aptos-core/pull/19476.

### `panic!()` in KZG `open` at Root of Unity

- **Severity**: Informational
- **Location**: src/pcs/univariate_hiding_kzg.rs

**Description**. The `open()` function panics if the evaluation point $x$ coincides with a root of unity in the SRS domain (division by zero in the quotient polynomial).

**Impact**. Unreachable in Chunky v1 when $\gamma$ is sampled honestly: `get_gamma_challenge` loops until the sampled point is not a root of unity, and the collision probability per iteration is $m / |F|$, negligible for 256-bit fields.

**Recommendation**. It is recommended to return a `Result::Err` instead of panicking when the evaluation point coincides with a root of unity.

### Ciphertext Integrity Check Missing for Inline Batch Transactions in Block Proposal

- **Severity**: Known Issue
- **Location**: consensus-types/src/payload.rs

**Description**. In the encrypted mempool scheme, validators must verify the integrity of every encrypted transaction in a block proposal before attempting decryption. If an invalid ciphertext is accepted and decrypted, the decryption operation can leak the tag-specific decryption key, allowing the attacker to decrypt other users' transactions that share the same tag.

A block payload is structured as follows:

```rust
pub struct OptQuorumStorePayloadV1<T: TBatchInfo> {
    inline_batches: InlineBatches<T>,
    opt_batches: OptBatches<T>,
    proofs: ProofBatches<T>,
    execution_limits: PayloadExecutionLimit,
}
```

Validators correctly check ciphertext integrity for transactions in `opt_batches` (Quorum Store batches) and `proofs`, but the same check is absent for `inline_batches`. A malicious block proposer can therefore include a crafted invalid ciphertext in the inline batch. When the network processes it, the decryption leaks the decryption key for the targeted tag, which the attacker can then use to decrypt other users' encrypted transactions associated with that tag.

**Impact**. A malicious block proposer can recover the tag-specific decryption key by submitting a crafted invalid ciphertext in an inline batch. This breaks the confidentiality of the encrypted mempool.

**Recommendation**. Apply the same ciphertext integrity check to all transaction sources in a block proposal, including `inline_batches`. Additionally, if `DirectMempool` is re-enabled in the future, the same integrity check should be applied there as well.
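
A hypothetical sketch of the shape of the fix; `verify_ciphertext_integrity` and the iteration methods are illustrative names, not the real aptos-core API:

```rust
// Hypothetical sketch: route every transaction source in the payload through
// the same ciphertext integrity check before any decryption is attempted.
fn verify_encrypted_payload<T: TBatchInfo>(
    payload: &OptQuorumStorePayloadV1<T>,
) -> anyhow::Result<()> {
    // opt_batches and proofs are assumed to be checked elsewhere already;
    // the fix is to cover inline_batches with the identical check.
    for batch in payload.inline_batches.iter() {
        for txn in batch.transactions() {
            verify_ciphertext_integrity(txn)?;
        }
    }
    Ok(())
}
```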

**Client Response**. This issue was identified and fixed by the client during the audit.

### Sigma Protocol Proof Dimensions Are Not Validated, Allowing Verification to Be Silently Truncated

- **Severity**: Known Issue
- **Location**: chunky/weighted_transcript.rs

**Description**. In Chunky, the sigma protocol proof consists of multiple elements held in nested vectors, but the dimensions of the proof are never validated. A malicious prover can therefore submit a proof with mismatched dimensions and cause verification to be silently truncated.

For example, the sigma protocol verifier in `CurveGroupTrait::msm_terms_for_verify_with_challenge` (`sigma_protocol/traits.rs:221–234`) constructs the MSM verification equation by zipping three iterators:

```rust
let msm_terms = msm_terms_for_prover_response
    .into_iter()
    .zip(prover_first_message.clone().into_iter())
    .zip(public_statement.clone().into_iter())
    .map(|((term, A), P)| { ... })
    .collect::<Result<Vec<_>, _>>()?;
```

`Iterator::zip` in Rust silently truncates to the length of the shortest iterator. The number of MSM terms produced by `msm_terms(&prover_response)` is determined entirely by the dimensions of the proof response `z`, which is attacker-controlled and deserialized from the wire without any shape validation.
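
A standalone illustration of the footgun (not code from the repository):

```rust
fn main() {
    let proof_terms = vec![1, 2]; // attacker-controlled: one term too few
    let statement = vec![10, 20, 30]; // honest, fixed by the public statement

    // zip stops at the shorter iterator: only two pairs are produced, and the
    // constraint corresponding to `30` is silently dropped.
    let pairs: Vec<_> = proof_terms.iter().zip(statement.iter()).collect();
    assert_eq!(pairs.len(), 2);
}
```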

In the Chunky PVSS SoK proof, the witness type `HkzgElgamalWitness<F>` contains `chunked_plaintexts: Vec<Vec<Vec<Scalar<F>>>>` (one outer entry per player) and `elgamal_randomness: Vec<Vec<Scalar<F>>>`. A malicious prover can submit a proof with fewer player entries in `chunked_plaintexts` (e.g., 0 entries). `chunked_elgamal::Homomorphism::msm_terms` iterates `input.plaintext_chunks`, producing fewer MSM terms than there are entries in `prover_first_message` and `public_statement`. The zip then silently drops all constraints for the omitted players' ciphertexts.

The verifier in `verify_weighted_preamble` validates the shape of the statement (`subtrs.Cs`, `subtrs.Vs`, `subtrs.Rs`) but not the shape of the proof response `z`. There is no check that the dimensions of `z.chunked_plaintexts` or `z.elgamal_randomness` match the number of players and shares defined by the weighted config.

Note that the `two_term_msm` sigma protocol used in the DeKART range proof (`pi_PoK`) is not affected, because its witness is a fixed-size 2-scalar struct that always produces exactly one MSM term regardless of attacker input.

**Impact**. A malicious dealer can submit a PVSS transcript whose SoK proof response `z` contains an empty or truncated `chunked_plaintexts` vector. The verifier accepts the proof because the zip produces zero or fewer MSM equations than required, skipping the binding check for all omitted players. The prover is then no longer bound to a consistent relationship between the committed shares (`Cs`) and the sigma protocol witness. In effect, the SoK ceases to be a proof of knowledge for the omitted players' ciphertexts.

**Recommendation**. It is recommended to validate that the proof response `z` has the expected shape matching the homomorphism's parameters. Concretely, after `verify_weighted_preamble` returns the `SokContext`, add checks that (a sketch follows the list):

- `z.chunked_plaintexts.len() == sc.get_total_num_players()`
- `z.chunked_plaintexts[i].len() == sc.get_player_weight(&player_i)` for each player `i`
- For each player `i`'s secret share `j`, the inner vec of `z.chunked_plaintexts[i][j]` has length `num_chunks_per_scalar(pp.ell)`
- `z.elgamal_randomness.len() == sc.get_max_weight()`
- Each inner vec of `z.elgamal_randomness` has length `num_chunks_per_scalar(pp.ell)`
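
A hypothetical sketch of these checks; the accessor names follow the report's notation, and the real config and witness APIs may differ:

```rust
// Hypothetical sketch: reject the transcript when the proof response `z` does
// not match the dimensions fixed by the weighted config and public parameters.
fn validate_response_shape<F>(
    z: &HkzgElgamalWitness<F>,
    sc: &WeightedConfig,
    pp: &PublicParameters,
) -> anyhow::Result<()> {
    let chunks = num_chunks_per_scalar(pp.ell);

    anyhow::ensure!(
        z.chunked_plaintexts.len() == sc.get_total_num_players(),
        "wrong number of player entries"
    );
    for (i, per_player) in z.chunked_plaintexts.iter().enumerate() {
        let player = sc.get_player(i); // assumed accessor
        anyhow::ensure!(
            per_player.len() == sc.get_player_weight(&player),
            "wrong share count for player {i}"
        );
        for share in per_player {
            anyhow::ensure!(share.len() == chunks, "wrong chunk count for player {i}");
        }
    }

    anyhow::ensure!(
        z.elgamal_randomness.len() == sc.get_max_weight(),
        "wrong randomness length"
    );
    for r in &z.elgamal_randomness {
        anyhow::ensure!(r.len() == chunks, "wrong chunk count in randomness");
    }
    Ok(())
}
```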

**Client Response**. This issue was identified and fixed by the client during the audit.

---

This report was published on the [zkSecurity Audit Reports](https://reports.zksecurity.xyz) site by [ZK Security](https://www.zksecurity.xyz), a leading security firm specialized in zero-knowledge proofs, MPC, FHE, and advanced cryptography.
