Still, why endorse, and in practice make everyone implement, an algorithm that only the NSA wants, when a superset is already standardised?
This is about the known bad actor NSA forcing through their own special version of a crypto building block that they might downgrade-attack me to.
I pay on the order of 1% overhead to also do ECC, while renegotiating down to the non-hybrid costs 2x plus an extra round trip. This makes no sense except as an enabler of downgrade attacks.
If it turns out ECC is completely broken, we can add the PQ-only suite then.
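For scale, a rough byte count (my illustration, assuming the standard sizes: 1568-byte ML-KEM-1024 keys/ciphertexts, 32-byte X25519 shares):

    # Rough hybrid-vs-pure key-share overhead (illustrative sketch).
    # Sizes assumed: ML-KEM-1024 encapsulation key = 1568 B, ciphertext
    # = 1568 B; X25519 public key = 32 B in each direction.
    mlkem_ek, mlkem_ct, x25519 = 1568, 1568, 32

    hybrid = (mlkem_ek + x25519) + (mlkem_ct + x25519)  # both flights
    pure = mlkem_ek + mlkem_ct
    extra = hybrid - pure
    print(f"hybrid costs {extra} extra bytes ({100 * extra / pure:.1f}%)")
    # -> hybrid costs 64 extra bytes (2.0%)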
tptacek 11 days ago
Nobody has to implement the algorithm only the NSA wants! That's not how RFCs work.
> The problem is that PQ signatures are large. If the certificate chain is small that could be acceptable, but if the chain is large, it can be expensive in terms of bandwidth and computation during the TLS handshake. That is, the exchange sends many certificates, each embedding a signature and a large (PQ) public key.
> Merkle Tree Certificates ensure that an up-to-date client only needs 1 signature, 1 public key, and 1 Merkle tree witness.
> Looking at an MTC-generated certificate, they've replaced the traditional signing algorithm and signature with a witness.
> That means all a client needs is a signed Merkle root, which comes from an expanding Merkle tree signed by the MTCA (Merkle Tree CA) and is delivered somehow out of band.
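For intuition, a minimal sketch of the witness idea (hypothetical encoding, not MTC's actual format): the client holds a signed root and checks a certificate's inclusion with a log(n)-length hash path instead of a per-certificate signature.

    # Minimal Merkle-witness check (illustrative; not MTC's real encoding).
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def verify_witness(leaf: bytes, path: list[tuple[bytes, str]], root: bytes) -> bool:
        """Recompute the root from a leaf and its authentication path."""
        node = h(leaf)
        for sibling, side in path:  # side says where the sibling sits
            node = h(sibling + node) if side == "left" else h(node + sibling)
        return node == root

    # Tiny 4-leaf tree standing in for a batch of certificates.
    leaves = [b"certA", b"certB", b"certC", b"certD"]
    lvl1 = [h(h(leaves[0]) + h(leaves[1])), h(h(leaves[2]) + h(leaves[3]))]
    root = h(lvl1[0] + lvl1[1])  # this root is what the MTCA would sign

    # Witness for "certC": its sibling leaf hash, then the left subtree hash.
    path = [(h(leaves[3]), "right"), (lvl1[0], "left")]
    assert verify_witness(b"certC", path, root)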
From "Keeping the Internet fast and secure: introducing Merkle Tree Certificates" (2025-10) https://blog.cloudflare.com/bootstrap-mtc/ :
> The central problem is the sheer size of these new algorithms: signatures for ML-DSA-44, one of the most performant PQ algorithms standardized by NIST, are 2,420 bytes long, compared to just 64 bytes for ECDSA-P256, the most popular non-PQ signature in use today; and its public keys are 1,312 bytes long, compared to just 64 bytes for ECDSA. That's a roughly 20-fold increase in size. Worse yet, the average TLS handshake includes a number of public keys and signatures, adding up to 10s of kilobytes of overhead per handshake. This is enough to have a noticeable impact on the performance of TLS.
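Back-of-envelope arithmetic from those quoted sizes; the handshake composition below (1 CertificateVerify signature, 2 chain signatures, 2 SCTs, and 2 public keys) is my assumption, not from the post:

    # Handshake overhead from the sizes quoted above (composition assumed:
    # 5 signatures = CertificateVerify + 2 chain sigs + 2 SCTs; 2 public
    # keys = leaf + intermediate; the root key is already on the client).
    SIG = {"ML-DSA-44": 2420, "ECDSA-P256": 64}
    PUB = {"ML-DSA-44": 1312, "ECDSA-P256": 64}

    for alg in SIG:
        total = 5 * SIG[alg] + 2 * PUB[alg]
        print(f"{alg}: ~{total} bytes")
    # ML-DSA-44: ~14724 bytes vs ECDSA-P256: ~448 bytes (roughly 33x)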
Are ML-KEM certs impractically large too?
durumcrustulum 12 days ago
ML-KEM is a key establishment scheme, not a signature scheme.
/? AuthKEM:
kemtls/draft-celi-wiggers-tls-authkem: https://github.com/kemtls/draft-celi-wiggers-tls-authkem
"KEM-based Authentication for TLS 1.3" https://kemtls.org/draft-celi-wiggers-tls-authkem/draft-celi... :
> Table 1. Size comparison of public-key cryptography in TLS 1.3 and AuthKEM handshakes.
"KEM-based pre-shared-key handshakes for TLS 1.3" > "2.2. Key Encapsulation Mechanisms", "3. Abbreviated AuthKEM with pre-shared public KEM keys": https://kemtls.org/draft-celi-wiggers-tls-authkem/draft-wigg...
> If you tried to make "ML-KEM Certificates" (using a newer mechanism called AuthKEM, where you authenticate by proving you can decrypt a challenge rather than signing), you would replace the ~2.4 KB ML-DSA signature with a ~1 KB ML-KEM ciphertext. This saves about 50% of the bandwidth compared to ML-DSA, but it is still roughly 35x larger than a traditional ECC certificate chain.
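To make the mechanism concrete, a toy sketch of KEM-based implicit authentication. This is not the AuthKEM wire format, and X25519 (as a DH-KEM) stands in for ML-KEM just to keep it runnable with the common `cryptography` package:

    # Toy KEM-based implicit authentication (sketch, not AuthKEM itself).
    # The server's certificate carries a KEM public key; the client
    # encapsulates to it, and the server authenticates implicitly because
    # only the certified private key can decapsulate the shared secret.
    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey,
    )

    RAW = (serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    def kem_encap(server_pub: X25519PublicKey) -> tuple[bytes, bytes]:
        """Client: (ciphertext, shared secret); the ct replaces a signature."""
        eph = X25519PrivateKey.generate()
        ct = eph.public_key().public_bytes(*RAW)  # DH-KEM "ciphertext"
        ss = hashlib.sha256(eph.exchange(server_pub)).digest()
        return ct, ss

    def kem_decap(server_priv: X25519PrivateKey, ct: bytes) -> bytes:
        """Server: recovering ss proves possession of the certified key."""
        peer = X25519PublicKey.from_public_bytes(ct)
        return hashlib.sha256(server_priv.exchange(peer)).digest()

    server_priv = X25519PrivateKey.generate()            # key in the cert
    ct, client_ss = kem_encap(server_priv.public_key())  # client -> server
    assert kem_decap(server_priv, ct) == client_ss       # implicit auth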
durumcrustulum 12 days ago
What "the thing"? AuthKEM isn't being deployed anywhere.
westurner 11 days ago
Is the difference much more complex than ~2.4 KB w/ ML-DSA vs ~1 KB w/ ML-KEM?
durumcrustulum 8 days ago
I'm sorry, I don't understand what you're asking.
westurner 7 days ago
Though there is a difference between a cert signature (ML-DSA) and a challenge (ML-KEM), ultimately and fundamentally, isn't real key size still a relevant metric for comparison?
(Everyone downvoted this to like -6/-7. I guess they didn't understand the relevance.)
IDK, a terse analogy then:
MerkleCerts + ML-DSA : ML-DSA :: Challenge (ML-KEM) : ____ (ML-DSA)
Merkle-signing cert trust roots is a security/bytes-transferred efficiency tradeoff.
The difference in number of bytes seemed usefully relevant, to me at least.
digitalPhonix 11 days ago
> Well it turns out there is one customer who really really hates hybrids, and only wants to use ML-KEM1024 for all their systems. And that customer happens to be the NSA. And honestly, I do not see a problem with that.
Isn’t the problem (having only read a little about the controversy) that the non-hybrid appears to be strictly worse, except for the (~10%) decrease in transmission size; and that no one has articulated why that’s a desirable tradeoff?
On the face of it, I don't see a problem with the choice existing (in both directions, that is). I expect smarter people than me to have reasons one way or the other, but I haven't seen anyone articulate a concrete use case where saving that bandwidth actually makes a difference.
> There is no backdoor in ML-KEM, and I can prove it. For something to be a backdoor, specifically a “Nobody but us backdoor” (NOBUS), you need some way to ensure that nobody else can exploit it, otherwise it is not a backdoor, but a broken algorithm
Isn’t a broken algorithm also a valid thing for NSA/whoever to want?
Them saying they want to use it themselves doesn’t actually mean much?
digitalPhonix 10 days ago
Actually, thinking about this a bit more: saying there's no "nobody but us" backdoor in order to prove there's no backdoor at all is a poor argument.
As an example: if there's a weakness that affects 50% of keys (substitute whatever hypothetical number), the NSA can make sure it never uses affected keys itself while retaining the ability to decrypt 50% of everyone else's communications. And by the entropy analysis in this post, that would require only 1 bit hidden in the parameters, which is clearly within the entropy budget.
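A toy model of that argument (the predicate, the key format, and the 50% rate are all hypothetical):

    # Toy weak-key-class model: a hidden predicate marks half of all keys
    # as breakable. An insider who knows the predicate rejection-samples
    # its own keys; everyone else generates weak keys half the time.
    import secrets

    def is_weak(key: bytes) -> bool:
        # Hypothetical hidden criterion known only to the designer;
        # choosing which half is weak costs log2(2) = 1 bit of the
        # parameter-selection entropy budget.
        return key[0] % 2 == 0

    def insider_keygen() -> bytes:
        while True:
            key = secrets.token_bytes(32)
            if not is_weak(key):  # insider silently avoids the weak class
                return key

    others = [secrets.token_bytes(32) for _ in range(10_000)]
    print(sum(map(is_weak, others)) / len(others))  # ~0.5 remain attackable
    print(is_weak(insider_keygen()))                # always False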
contact9879 12 days ago
thanks sophie. now if only this would get as many eyeballs as the inciting one
sigh
sebstefan 12 days ago
> much in line with my reasoning, 0x11EC is the default key exchange algorithm used by Chrome, Firefox, and pretty much all other TLS clients that currently support PQC. So what's the point of MLKEM1024? Well it turns out there is one customer who really really hates hybrids, and only wants to use ML-KEM1024 for all their systems. *And that customer happens to be the NSA.* And honestly, I do not see a problem with that.
...Really, you don't? I can hardly imagine anything more suspicious
> the US plans to use ML-KEM themselves, [a "Nobody but us backdoor"] would be the only backdoor they could reasonably insert into a standard.
Is that really convincing?
And secondly, would we really know in advance? They can say that and then just use X25519MLKEM768 exclusively for stuff that matters.
I'm convinced they would love a broken algorithm in the IETF standard.