Score:1

Are lattice-based cryptography and error-correcting codes mathematically unsound?

cx flag

From Ronald de Wolf's The potential impact of quantum computers on society:

The first is so-called post-quantum cryptography. This is classical cryptography, based on computational problems that are easy to compute in one direction but hard to compute in the other direction even by quantum computers. Factoring does not fit this bill because of Shor’s quantum algorithm, but there have been proposals for using other computational problems, for instance based on lattices or on error-correcting codes. The problem with such schemes is that they have barely been tested. We are not able to prove that factoring is a hard problem for classical computers, but at least one good piece of evidence for such computational hardness is that many sharp mathematicians have tried for decades to find efficient factoring algorithms, and failed. The alternative computational problems that have been suggested for post-quantum cryptography, have not yet undergone such scrutiny and there may well exist an efficient quantum (or even classical!) algorithm for breaking them.

The author did not provide any references, nor could I find any. Does this mean that they are mathematically/cryptographically unsound? If so, could you please provide a reference?

kodlu avatar
sa flag
Please clarify your question. It's hard to provide evidence for a negative. It is unclear from your question in what context the author is making this statement and which schemes count as "tested" according to him. "Not sound" and "untested" are not the same thing. The author is making an unsupported statement as far as I can tell. Schemes based on lattices certainly, and on error-correcting codes to a lesser extent, have been around for a long time and subjected to scrutiny.
HarryFoster1812 avatar
cx flag
@kodlu The author makes this statement when discussing the replacement algorithms that follow Shor's algorithm. The question I am asking is whether the author is commenting on how these algorithms are untested and therefore not as secure to use as RSA, where we have proof that it is secure.
SAI Peregrinus avatar
si flag
When did RSA get proved secure? RSA's security relies on the assumption that the RSA problem is hard, but there's no proof of that. Indeed, with Shor's algorithm it's quite easy! We *assume* RSA is secure classically because no publicly known classical attacks exist, and people have been searching for years. So RSA's security is also just a tested assumption.
Score:1
ru flag

The updated question, containing the quote, is much less polemical than the original. Indeed, it's just an issue of the general impossibility of proving a negative (in this case: "can certain PQC problems be solved efficiently?"). It's easy to prove that yes, they can be solved efficiently, by simply exhibiting an algorithm that solves them efficiently. Proving that no such algorithm can exist is much more difficult. In certain cases, not even a proof that $P \neq NP$ would be sufficient, as it would only establish hardness in the worst case, while the specific instances used in cryptography might still admit efficient algorithms.

So if we can't prove that efficient solutions don't exist, the best we can do is show how hard we have tried to find them, and failed.

It is a fact that PQC schemes, including those based on lattice and code problems, have been less scrutinized than RSA. The Rainbow and SIKE breaks (both PQC schemes, though neither is based on the two problems mentioned above: Rainbow relies on multivariate polynomials and SIKE on supersingular isogenies of elliptic curves) do not exactly inspire further confidence in PQC.

On the other hand, it bears mentioning that both lattice and code problems are not exactly new: McEliece (code-based) was proposed in 1978, NTRU (lattice-based) in 1996 and the learning with errors framework, used by Kyber and Dilithium, in 2005. For reference, Diffie-Hellman was published in 1976, RSA in 1977 and elliptic curves in 1985. Of course, because DH, RSA and ECC were actually adopted in the real world, they were much more scrutinized than their PQC counterparts.
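For readers unfamiliar with "learning with errors", the following toy sketch (in the spirit of Regev's 2005 single-bit scheme, not Kyber's or Dilithium's actual construction) shows the basic idea: the public key hides the secret behind noisy linear equations, and decryption works because the accumulated noise stays small. The parameters are illustrative only and far too small to be secure.

```python
# Toy single-bit LWE encryption, Regev-style. Illustrative only: the
# parameters below are tiny and offer no real security.
import random

n, m, q = 16, 64, 3329   # secret dimension, number of samples, modulus

def small_error():
    return random.randint(-2, 2)   # narrow "error" distribution

# Key generation: secret s; public key is (A, b = A*s + e mod q),
# i.e. noisy linear equations in s.
s = [random.randrange(q) for _ in range(n)]
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
b = [(sum(A[i][j] * s[j] for j in range(n)) + small_error()) % q
     for i in range(m)]

def encrypt(bit):
    # Combine a random subset of the public samples and add bit * q/2.
    r = [random.randint(0, 1) for _ in range(m)]
    u = [sum(r[i] * A[i][j] for i in range(m)) % q for j in range(n)]
    v = (sum(r[i] * b[i] for i in range(m)) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    # v - <u, s> = (small noise) + bit * q/2, so round towards 0 or q/2.
    d = (v - sum(u[j] * s[j] for j in range(n))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0

assert all(decrypt(*encrypt(bit)) == bit for bit in (0, 1) for _ in range(50))
```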

The bright side is that PQC has been an active area of research for over 15 years, and has received considerably more attention since NIST launched their PQC contest in 2016. So it's not like cryptographers just came up with new schemes and are throwing them out there, untested (as much as Rainbow and SIKE look like exactly that). Unfortunately, there's no substitute for the decades of scrutiny that the classical algorithms have received at this point. In 10 years we'll probably be more certain of the security of Kyber, Dilithium, NTRU, McEliece and other schemes. In 20 years, much more so.

Thus, if you are choosing between classical and PQC schemes today, you should ask yourself: what threat worries me most? Is it that my protocol will certainly be attacked within a few decades, when quantum computers should become practical (and that includes ciphertexts generated today and saved for later decryption)? Or is it that my protocol is subject to an overnight break, however less likely that becomes with each day of further cryptanalysis, and I don't care too much if my data is decrypted decades from now? If the former, go with PQC; if the latter, with classical cryptosystems.

Polytropos avatar
pl flag
It's worth noting that classical or PQC is not an either/or decision. I'd go hybrid as a default whenever handling any data that I want to remain confidential into the quantum computing era.
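To make the hybrid approach concrete, here is a minimal sketch of the usual combiner pattern: derive the session key from both a classical shared secret and a post-quantum one, so an attacker has to break both. The two input secrets below are placeholders (in practice they would come from, e.g., an X25519 exchange and a PQC KEM); the key derivation is an HKDF-style construction built from Python's standard library.

```python
# Minimal sketch of a hybrid key-derivation step. The two "shared secrets"
# are placeholders standing in for the outputs of a classical key exchange
# and a post-quantum KEM.
import hashlib, hmac, os

classical_secret = os.urandom(32)   # stand-in for an ECDH (e.g. X25519) shared secret
pq_secret = os.urandom(32)          # stand-in for a PQC KEM shared secret

def hybrid_key(classical_ss: bytes, pq_ss: bytes, context: bytes) -> bytes:
    """HKDF-style extract-then-expand over the concatenation of both secrets."""
    prk = hmac.new(b"hybrid-kdf-salt", classical_ss + pq_ss, hashlib.sha256).digest()
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()

session_key = hybrid_key(classical_secret, pq_secret, b"example-protocol-v1")
# An attacker must recover BOTH input secrets to reconstruct session_key.
```

The point of the combiner is that the derived key is at least as hard to recover as the stronger of the two components, so a future break of either the classical or the post-quantum scheme alone does not expose the traffic.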
Score:0
ng flag

It is worth mentioning that most cryptographic hardness assumptions (with the exception of ECC stuff, imo) can be painted as somewhat suspect. For example, one way to describe RSA is as follows.

RSA is a rather ad hoc assumption, which proponents like to compare to the standard problem of factoring integers, despite no proof of equivalence being known half a century after the assumption was introduced. The problem is known to be quite brittle: even small parts of the secret key leaking can lead to devastating attacks on the scheme. While many number theorists working on the problem have not yet produced a polynomial-time algorithm, there is no known fundamental obstacle to one existing: analogues of factoring in other domains (polynomials) are quite easy, and problems closely related to factoring (small-characteristic DLP) have recently seen massive algorithmic improvements. Even a relatively small part of these results transferring to factoring itself would break most deployed RSA instances (cf. Nadia Heninger). Other computational number theorists have expressed similar doubts about the hardness of RSA (cf. Henry Cohn), and analytic number theorists have their doubters as well (cf. Peter Sarnak). Other experts (Coppersmith, Buhler) have not publicly stated their opinion, as they have worked for some decades at this point for the Center for Communications Research, an NSA subcontractor.
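As a small illustration of the gap described above: with the factorisation of $N$ in hand, the private exponent is trivial to compute, but no reduction in the other direction (from inverting RSA to factoring) is known. A textbook sketch with deliberately tiny, insecure parameters and no padding:

```python
# Textbook RSA with toy parameters (insecure, no padding), illustrating that
# factoring N immediately yields the private key, while the converse
# (breaking RSA implies factoring) remains unproven.
p, q = 61, 53                 # the secret primes
N, e = p * q, 17              # public modulus and exponent
phi = (p - 1) * (q - 1)       # only computable if you know p and q
d = pow(e, -1, phi)           # private exponent: easy once N is factored (Python 3.8+)

m = 42
c = pow(m, e, N)              # the "RSA problem": recover m from (N, e, c)
assert pow(c, d, N) == m      # trivial with the factorisation in hand
```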

Therefore, the state of cryptographic usage of RSA is that it is a long-used problem that is tied to factoring mainly by the hopes and dreams of cryptographers, who justify its hardness by appealing to the expertise of number theorists, who themselves are not particularly convinced of its difficulty, and several of whom stopped publicly disseminating their research on the topic.

This has completely ignored the existence of efficient attacks against RSA using quantum computers. While such computers don't exist yet, in recent years world governments have funneled tens of billions into their development, and there are strong reasons to believe they are already storing ciphertexts to decrypt later. Communications done now may be safe for the moment, but could well be decryptable within a decade or two.

Being able to write a paragraph of the above form doesn't immediately mean RSA should be removed from cryptographic use. But all hardness assumptions are just that: assumptions. And assumptions can fail. So while it is useful to be skeptical of new assumptions, old assumptions can still fail too (again, as small-characteristic DLP did ~10-15 years ago).
