Security Cryptography Whatever
Attacking Lattice-based Cryptography with Martin Albrecht
Returning champion Martin Albrecht joins us to help explain how we measure the security of lattice-based cryptosystems like Kyber and Dilithium against attackers. QRAM, BKZ, LLL, oh my!
Transcript: https://securitycryptographywhatever.com/2023/11/13/lattice-attacks/
Links:
- https://pq-crystals.org/kyber/index.shtml
- https://pq-crystals.org/dilithium/index.shtml
- https://eprint.iacr.org/2019/930.pdf
- https://en.wikipedia.org/wiki/Short_integer_solution_problem
- Frodo: https://eprint.iacr.org/2016/659
- https://csrc.nist.gov/CSRC/media/Events/third-pqc-standardization-conference/documents/accepted-papers/ribeiro-saber-pq-key-pqc2021.pdf
- https://en.wikipedia.org/wiki/Hermite_normal_form
- https://en.wikipedia.org/wiki/Wagner%E2%80%93Fischer_algorithm
- https://www.math.auckland.ac.nz/~sgal018/crypto-book/ch18.pdf
- https://eprint.iacr.org/2019/1161
- QRAM: https://arxiv.org/abs/2305.10310
- https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%93Lov%C3%A1sz_lattice_basis_reduction_algorithm
- MATZOV improved dual lattice attack: https://zenodo.org/records/6412487
- https://eprint.iacr.org/2008/504.pdf
- https://eprint.iacr.org/2023/302.pdf
"Security Cryptography Whatever" is hosted by Deirdre Connolly (@durumcrustulum), Thomas Ptacek (@tqbf), and David Adrian (@davidcadrian)
Hello, welcome to Security Cryptography Whatever. I'm Deirdre,
Thomas:I'm Thomas.
Deirdre:and we have a returning champion back on the pod today. We have Martin Albrecht back. Hi, Martin. How are you?
Martin:happy to be here.
Deirdre:Great! Thank you. We promise to send you your N-timers club jacket in the mail. We have another one that we have to mail out in the future. FYI, myself and Martin both work for Sandbox AQ, at least for part of our time. And we invited Martin back to talk about one of his other areas of expertise, which is lattice cryptography and how to analyze the security of these schemes against attacks. Because we wanted to understand more about some of the new NIST schemes that have been selected for post quantum cryptography. And that includes a lattice-based KEM, key encapsulation mechanism, called Kyber, and at least one of the signature schemes, called Dilithium. Falcon's also lattice-based, right?
Martin:Correct.
Deirdre:But it's different for reasons, and it uses floating point things, and I don't like them. Um, but there's been some discussion about, like, how do we analyze the security levels of not just lattice schemes, but kind of post quantum schemes in general, which are supposed to be resilient against a classical attacker with a regular computer, and theoretically against a non-noisy... there's some acronym for how it's got enough logical qubits and enough error correction and enough, uh, you know, handling of the noise that happens with quantum computers in the real world to be an actual threat against these cryptographic schemes, and actually run things like Shor's algorithm, or some of these other attacks, efficiently to target, you know, some quantum, uh, sorry, cryptography scheme. So we're here to pick Martin's brain, and maybe he can help us, in the fun medium of... not being able to use visual aids to show things like lattices and vectors and bases. So first off, Martin, can you give us a high level of why we kind of like these schemes like Kyber and Dilithium that are based on lattice assumptions for post quantum cryptography?
Martin:Right. So after you do your intro that you don't really like them, let me tell you why you should like them. Okay. So, I mean, one thing is they are fairly efficient, right? For post quantum schemes. So that's one of the reasons. They are based on some problems that we believe to be hard, even on a quantum computer, as you just said. And given that, their sizes and computation time are somewhat good, while convincing at least many people that they are safe. And really, what we're here to talk about, I guess, is their security, right? So, um, what grounds this? And there's the first question of, like, structurally, why do we think it makes sense to base security on them? And then there's the question of parameters, and, like, you know, what is the security level? Which I think is the focus of today's episode, if I gather this correctly. So first, the structural thing, roughly speaking, is: if you can break Kyber, then you can solve a problem that is called module learning with errors.
Deirdre:We've heard about that
Martin:And we believe that this is a hard problem even for a quantum computer. So, let me unpack that, because the module learning with errors problem will also reappear in Dilithium, so it makes sense to spend a bit more time on that. So, essentially, you do linear algebra mod q, so, uh, nothing too fancy there, but you add some small noise. So, instead of having a proper solution to a linear system, you have a solution that is offset by some noise, and small here means it doesn't hit 0, but it hits 5, or it hits minus 3, or something like that. Something small relative to the size of q. And then the module in there means that we're actually not doing this just over the integers mod q, but we do this as matrices over polynomial rings mod q. And that kind of then gives you module learning with errors, and it's assumed, we have, uh, some reasons to believe, that this problem is hard, also on a quantum computer. And then Dilithium, in addition to being based on this module LWE problem, is also based on the module SIS problem. Which is a very simple-to-state problem. So I give you a wide matrix, with many columns but few rows, mod q. It's uniformly random. And all you have to do is find me a short solution that sums the columns up to zero. So you sum up the columns to hit zero everywhere. And again, short is something, you know, maybe the solution is between 20 and minus 20. Like, these are not Dilithium parameters, but to give you a sense. And then q is a few thousand. Of course, the problem is really easy if you allow a big solution, but it's considered to be a hard problem if the solution is small. So that's the structural reason. These two problems are believed to be hard on a quantum computer, and if you can break either of those two schemes, you can solve these problems for some parameter selection. So structurally, that's why we believe this is hard. It's the same thing as if we had a reduction that if you could solve Diffie-Hellman, you could compute discrete logs.
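[A minimal sketch in Python of the plain LWE problem Martin describes, with toy, illustrative dimensions; only q = 3329 is actually Kyber's modulus:]

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 16, 3329                  # toy dimensions; q = 3329 as in Kyber

A = rng.integers(0, q, size=(m, n))    # uniform public matrix mod q
s = rng.integers(-2, 3, size=n)        # small secret
e = rng.integers(-2, 3, size=m)        # small noise, e.g. in {-2, ..., 2}

c = (A @ s + e) % q                    # an LWE sample: c = A*s + e mod q

# Without e, recovering s from (A, c) is first-week linear algebra.
# With e, and suitable parameters, it is believed hard even quantumly.
```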
Thomas:So, like, the basic learning with errors problem, I imagined it as, like, you know, taking the standard problem of using Gaussian elimination to find a solution for a system of linear equations, right? Which is, like, first week of first semester of linear algebra, easy problem to do, right? But if you mix a small error term in with the solution you're trying to find, it's a hard problem. And that's because when you multiply out that whole matrix times the error, that error term blows up, right? That doesn't involve polynomials, that doesn't involve, you know, anything really complicated at all, maybe except for doing everything mod q, right? And you can build a cryptosystem off of just normal integer LWE. But nobody does, right? It's all module LWE, it's all over polynomials at this point. I have, like, sort of a basic intuition for how that kind of module LWE works. I have no intuition for why we do that. Like, why do we complexify it in that manner?
Martin:Yeah, so first, I mean, there was Frodo that was submitted, and some people really like Frodo because they're conservative, and that is based just on the plain LWE problem. So it's not quite right to say no one does it. But why do people like module LWE? So, before module LWE came ring LWE, which is essentially: you replace the matrix that you do your linear algebra on by a polynomial. And then, if you know the trick that when you multiply a polynomial by another polynomial, you can phrase this as a matrix multiplication, then you can say: instead of really having a uniformly random matrix, what I have is one that is derived from this polynomial. So now, instead of having to store n squared coefficients or entries, I only have to store n. So that is the reason: you immediately get from a quadratic size to a linear size, or a quasi-linear size. So that's nice. And then there are some rings that are nice, some degrees that are nice, and some that are not so nice. And you might also think, like, oh, maybe it's a little bit much structure. But, mostly for performance reasons: what if, I hear you like matrices, I hear you like polynomials, what if I put matrices on top of your polynomials? So what you have, in the case of Kyber768, is a 3x3 matrix, and each of the, kind of like, nine blocks that you have is a little polynomial, where the matrix block is derived from this polynomial. So that still gives you a saving and some flexibility of parameter choices.
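[A sketch of the trick Martin mentions: multiplication by a fixed polynomial in Z_q[x]/(x^n + 1), the kind of ring Kyber uses, is a matrix multiplication, so n stored coefficients stand in for an n-by-n block. Toy Python, assuming nothing beyond the episode:]

```python
import numpy as np

def negacyclic_matrix(a, q):
    """n x n matrix M such that M @ b gives the coefficients of
    a(x) * b(x) mod (x^n + 1), with entries reduced mod q."""
    n = len(a)
    M = np.zeros((n, n), dtype=int)
    for k in range(n):          # row k produces coefficient k of the product
        for j in range(n):
            i = k - j
            # x^n = -1, so coefficients that wrap around pick up a minus sign
            M[k, j] = a[i] % q if i >= 0 else (-a[n + i]) % q
    return M

print(negacyclic_matrix(np.array([1, 2, 0, 5]), q=17))
```

[One row of coefficients per block instead of a full random matrix is the quadratic-to-linear saving; Kyber768's module structure is a 3x3 grid of such blocks with n = 256, and 3 times 256 = 768.]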
Deirdre:And so the nice and not so nice, is that both in terms of security or performance or size or all of the above?
Martin:So it's very nice in terms of performance, and it might actually be faster, more efficient, typically, to consider this module LWE, at least in the case here. Because we like powers of two, for reasons that are a bit too boring to go into here, for the dimensions of these rings, and we really like 768 as a dimension. It just hits a security parameter sweet spot, but it's not a power of two, is it? And that's why you actually get a nice security versus performance trade-off with these parameters. The reason why you might not like it is, well, I'm adding more structure, right? And, you know, the rules of the game are more or less: the more structure you add to a cryptographic primitive, the more you worry that maybe an adversary can exploit the structure. But I should say, when we think about how we pick parameters, we just ignore the structure, because we know of no way of using the structure to even give us a factor that is linear in n. Like, it seems we don't know how to do much better, even for these more structured instances.
Deirdre:And so, this is a bit of a tangent, but the parameters are powers of two, and they work nicely for these kinds of polynomials you use in module LWE. The first thing I hear when, like, "we like powers of two" is: because our regular digital computers are all in binary, and they really like computing over powers of two. It seems that it is a, quote, happy accident, not quite an accident, that the cryptosystem based on module LWE also likes those things, and maybe that is partially why they're, quote, so efficient on our modern digital computers.
Martin:Not quite, because
Deirdre:Cool, okay.
Martin:so the power of two here is a dimension of your matrix, right? So the 768 is, like, how big is your matrix? And there, you know, we're not talking about bits. But there was a finalist for NIST that actually did this, because there's still the question of what Q is. Everything is mod Q, and what's Q? And you would typically choose a prime because, you know,
Deirdre:hmm.
Martin:makes sense. It's easy to reason about things are fields. But Saber was a finalist that said like, no, actually we use powers of two for the reason that you just mentioned.
Deirdre:Um.
Martin:But if you use that, then actually our mathematics doesn't like it so much, because if you pick the prime right, then you can use the NTT, the number theoretic transform, to do your polynomial multiplications. And that gives you better performance than just doing general polynomial multiplication over arbitrary moduli.
Deirdre:Got it. Okay, so ignore what I said. But picking parameters, of course, is this balance between performance, maybe a little bit of how these parameters affect complexity, of both the structure but maybe also implementation complexity, and what we know, or what the designers know, of the best attacks. And generally, if I say best attacks to people, they may think purely in terms of computational complexity of an attack algorithm, parameterized by some security level parameter like lambda or whatever. But there are more dimensions to this, especially when you have a quantum attack. Where you're trying to design a scheme that is resilient against a quantum attack, you have to consider adversaries that have classical, modern-day computing capabilities, and quantum adversaries that have access to an efficient, large quantum computer. And apparently, well, not apparently, but there is more to consider than just the algorithmic complexity of the attack. So can you tell us a little bit more about what is in the space that you're considering that influences choosing parameters for these LWE-based schemes, or module LWE and module SIS, whatever?
Martin:Yeah, so that is then kind of completing the circle, because we know that if you find a magic way of breaking Kyber, then you can also solve some module LWE instance. So what is the best way that we know how to break Kyber? Actually, it's attacking the module LWE instance. So you don't really think about the encryption scheme anymore, you think of this hard computational problem. Which is essentially, well, it's a noisy linear system: can you find a solution, please? And then you asked, um, how would you go about solving module LWE? And that is a thing that proceeds in, you know, three levels. One is an overall attack strategy, which is known as the primal or the dual attack. Then, in both of these, you run a lattice reduction algorithm, which more or less means people consider the BKZ algorithm. And this algorithm in turn calls a subroutine, a shortest vector problem solver, and for this shortest vector problem solver the fastest algorithm is a lattice sieve. And then the key question is: well, how expensive is it to run a lattice sieve in a given dimension? Because we have a pretty good idea of what parameters we need to run this BKZ algorithm with in order to make progress, you know, to have a solution with either the primal or the dual attack. That gives us something called a block size. And the block size essentially tells us the dimension in which we have to solve the actually hard problem.
Deirdre:Okay,
Martin:And so in the case of Kyber512, the block size is something in the ballpark of 400. So you have to find shortest vectors in a lattice of dimension 400. And the question is, how long does this take, with a quantum computer, considering memory, and so on. And that's the key datum. And there you want to hit whatever magical threshold you pick, for saying, you know, my estimate for this cost needs to be higher than that. And then maybe you take the polynomial overhead of running the BKZ algorithm and so on into account, and that tells you how small you're allowed to make it.
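[A back-of-the-envelope sketch of this "core-SVP" style of accounting, in Python. The exponents 0.292 (classical BDGL sieve) and 0.265 (quantum sieve) are the constants commonly used in such estimates; real parameter analyses, like the lattice-estimator or the Kyber spec, are far more careful than this:]

```python
beta = 400  # ballpark block size Martin quotes for Kyber512

# one SVP call in dimension beta is assumed to dominate the attack cost
print(f"classical sieve: ~2^{0.292 * beta:.0f} operations")  # ~2^117
print(f"quantum sieve:   ~2^{0.265 * beta:.0f} operations")  # ~2^106
```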
Deirdre:cool. I'm writing down each of these steps, and it goes deeper and deeper and deeper. It's like, okay, this is the hot path, this is the heavy hitter way down here, parameterized by dimension approximately 400, and you just crank. Now, is this the best known, uh, attack, classical and quantum? Like, where is the advantage for a quantum attacker to run, say, a lattice sieve, versus on classical?
Martin:Yeah, so maybe it makes sense to talk a little bit about the algorithm that we use to find short vectors in lattices. So, what is a lattice in the context here? Think of it really as a matrix, but a matrix over the integers, so not mod q. And then you're saying: I'm going to do integer linear combinations of, say, the rows of this matrix, and I'm going to add them up. And what combination of them gives me a vector that is the shortest, which has the smallest entries, or Euclidean norm, but, more or less, just something that gives me short entries. So that's the key computational task, right? That's all that's required of us. And so how do we go about this? Like, what's the best known algorithm, the most efficient algorithm that we know of?
Thomas:Before you, before you say that, just, just so I'm, just so I'm sure of what I think I understand here. On a normal episode of this podcast, I put on a, a delicate balancing act of play acting ignorance about all of these things. Um, in this case, I won't need to be acting so much. Actually, I'm, I'm, I'm never acting. But, um, we're talking here, when we're talking about finding short, uh, short vectors in a lattice, this is from a starting point of a random lattice. Where the basis of that lattice is, like, the row space of the matrix of the... It's the vectors that make up the lattice, right? And if it's a, if it's a random... lattice, then those are all going to be of like weird random big size or whatever. And we're looking for short vectors, which we don't immediately have in front of us. We have to somehow compute them from this random lattice. How much of that was crazy?
Martin:Uh, no, this is fine. And, pedantically, it's a bit difficult to define what a random lattice is, because it's an infinite space, but that is maybe not the thing that we should dive into now. But what you want to distinguish: there's the question of which lattice you pick, so that is essentially the span of all these vectors, what can you produce. And then the question is, what's my input, how do I actually encode it, what's the basis of this lattice? And the key thing is that the input that you get is essentially a random basis of this lattice, and that is a notion where it's a bit clearer what we mean by random. You more or less compute the Hermite normal form of the integer matrix, and because that is canonical, that is a perfectly good input. Well, this is the thing that I had prepared, so let's see if my analogy lands. So how do we go about finding these short vectors? And it's basically just brute force, and I've just offended, uh, all my friends. So, how do we go about this? We essentially say: well, I have my input, right, some basis, and then I'm going to just produce many more vectors. I'm going to sample many vectors that are just linear combinations of the basis, right? Pick some random linear combinations, and you probably pick small coefficients for your rows, to make it not blow up, because we only care about small stuff. But you sample, like, a lot of them. And then you just check them pairwise: look, if I subtract these two, do I get something shorter? Oh yes, I do! Let me keep that. No, I don't. Let me not keep that. And you keep doing this pairwise comparison until you're done. And by the end, you have a new list, but they're all a little bit shorter. You do the same thing again. Oh yes, these are shorter, let me keep them. And you keep on going, until eventually, under some heuristics, you hit the shortest thing you can make this way. Right? It's no different from: how do you reduce the Hamming weight of a bunch of bit strings? Well, if I add these two, does it reduce the Hamming weight? Oh yeah, let me keep them.
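[A toy, heuristic Python sketch of that sample-and-subtract sieve. Everything here is illustrative; real sieves are enormously more engineered:]

```python
import numpy as np

def toy_sieve(B, n_samples=500, rounds=5, seed=1):
    """B: integer basis matrix, rows are basis vectors.
    Returns the shortest nonzero vector this crude sieve finds."""
    rng = np.random.default_rng(seed)
    # sample lattice points: small random combinations of the basis rows
    coeffs = rng.integers(-2, 3, size=(n_samples, B.shape[0]))
    L = coeffs @ B
    L = L[np.any(L != 0, axis=1)]            # drop zero vectors
    for _ in range(rounds):
        norms = np.einsum('ij,ij->i', L, L)  # squared lengths
        order = np.argsort(norms)
        L, norms = L[order], norms[order]
        for i in range(len(L)):
            for j in range(i):
                d = L[i] - L[j]              # does subtracting make it shorter?
                dd = d @ d
                if 0 < dd < norms[i]:
                    L[i], norms[i] = d, dd   # keep the shorter difference
    return L[np.argmin(np.einsum('ij,ij->i', L, L))]

B = np.random.default_rng(0).integers(-50, 50, size=(10, 10))
print(toy_sieve(B))
```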
Thomas:This is amazing. I could code
Deirdre:I'd really love this. Yeah,
Martin:The whole thing that I just described is the Gauss, or NV, sieve, and that is kind of the first of the heuristic sieves. By now we have something slightly better, and this will maybe be relevant when we talk about memory. And the slightly better idea, I think, is also fairly quickly explained. Again, let me do Hamming distance, because I assume that is what listeners here are familiar with. If I have three strings, and the first string is close in Hamming distance to some string that I shall call the center, and the second string is also close to that string that I call the center, then probably the two strings that are close to the center are also somewhat close to each other. If they share many bits in common with the center, they will also share, you know, some number of bits with each other. That is the key trick for the asymptotically faster sieves that we use. So we just designate some vectors as centers and say, like, yeah, let's just pick some random ones, you know, that are somewhat nicely distributed. And what do we mean by somewhat nicely distributed? Oh, might as well be random. And then we say, okay, now we're very generous: is it kind of close? You know, if I subtract these, is this kind of small? But with a very loose definition of small. And then we put them in buckets. And then I do the pairwise comparison within these buckets. So now this quadratic search only happens on the things that are close to the center, and I've improved the asymptotic complexity. So that is, roughly speaking, the BGJ1 sieve, which is implemented in G6K, which is this open source implementation of sieving.
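[A sketch of that bucketing idea, continuing the toy Python from above. "Closeness" here is by angle to random center directions; this only mirrors the shape of the BGJ1-style trick, not its actual parameters:]

```python
import numpy as np

def bucketed_pairs(L, n_centers=16, seed=2):
    """Group vectors by the random center direction they correlate with
    most, so the quadratic pairwise search runs per bucket, not globally."""
    rng = np.random.default_rng(seed)
    centers = rng.standard_normal((n_centers, L.shape[1]))
    labels = np.argmax(np.abs(L.astype(float) @ centers.T), axis=1)
    for b in range(n_centers):
        yield L[labels == b]   # pairwise-compare within each bucket only

# cost becomes the sum of bucket sizes squared, instead of len(L) squared
```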
Deirdre:Hmm.
Martin:And then, if you want to go to the asymptotically fastest of these algorithms: instead of picking these random centers, you pick some centers that essentially come from error-correcting codes, so that you can find out very easily which of them you're close to, right? You specifically chose these centers so that, instead of having to check all of them, am I close to this one, there's an algorithm that is somewhat better, that tells you if you're close to one of these centers.
Deirdre:Yeah, yeah. It was cool.
Martin:That is the BDGL sieve, and that is the state of the art for classical sieves. So, classical meaning not on a quantum computer.
Deirdre:Right. So do we get any speedups on a quantum computer for any, any of it, part of it? I mean, the whole, like, comparing all these little vectors and seeing which one is shortest with a little bit of, like, ways of doing bucketing comparisons and stuff like that, it almost smells like you could get a quantum speedup on some of that stuff, but how much do you get?
Martin:So, the quadratic search that I described: I hold on to one string and I compare it against all the other strings, right? That's an exhaustive search. And for an exhaustive search, we have an improvement using Grover's algorithm. So we can just use Grover's algorithm for this part, the quadratic search. That means you can at best hope for a quadratic speedup, right? A square root. But the speedup is not that impressive, because we have just reduced the quadratic part of the sieve dramatically by using these funky centers, and only doing the quadratic pairwise search in these smaller buckets. So you get a quadratic speedup in the running time of the search within a bucket, which is quadratic in the size of the bucket, and Grover more or less turns it down to linear in the size of these buckets. But these buckets are not the whole database, they're smaller. And so the advantage that you have on a quantum computer is much less than what you would expect from a Grover speedup. Because you always assume, like, well, AES-128,
Deirdre:Mm
Martin:you would hope that Grover gives you something like 2^64 quantum operations, whatever that means exactly. And then a quantum operation is much more expensive than a classical operation, but, yeah, whatever. And we are much further away from this kind of square root speedup in the lattice setting.
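[Rough arithmetic for why the Grover win is muted, as a sketch. The ~2^(0.2*beta) database size is the figure Martin gives later in the episode; free QRAM is assumed throughout:]

```python
beta = 400                 # ballpark block size for Kyber512
log_db = 0.2 * beta        # log2 of sieve database size: ~2^80 vectors
log_bucket = log_db / 2    # buckets ~ sqrt(database):    ~2^40 vectors

print(f"classical search within a bucket: ~2^{log_bucket:.0f}")      # 2^40
print(f"Grover search within a bucket:    ~2^{log_bucket / 2:.0f}")  # 2^20

# Naive Grover over the whole database would promise 2^40 from 2^80, but
# bucketing already banked most of that win with classical techniques.
```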
Deirdre:I know isogenies best. Before everything got broken in SIDH land, the best classical attack was, like, fourth root. And then if you throw Grover's on that, or you apply Grover's as part of what was called a claw attack, you would get sixth root. And so some of the sizes were scaled for that reason. And yeah, it sounds like you don't even have to worry about it that much in the lattice world. But then it got broken on the, uh, the construction basis of torsion points. And everyone was worried about the torsion points for a long time, so we don't talk about that anymore. Analysis thrown in the bin. Uh, not really.
Martin:There's another reason why you'd be skeptical of what I've just described in the quantum setting, this Grover speedup. The database that we're searching in the Grover setting here is an exponentially large database of actual vectors.
Deirdre:Oh,
Martin:So it's different from an AES key search, where, you know, it's some uniform space of bit strings. No, it's not all vectors that exist in the world, but it's an exponentially large collection of vectors. So you actually need to produce these vectors first. And then, in your Grover algorithm, you need something called QRAM. So you need quantumly accessible classical RAM. You have RAM, but you can now query it in superposition.
Deirdre:Do we have that yet? Has anyone built that yet?
Martin:No. So QRAM is equivalent to a full scale quantum computer. And there is some debate, that I'm not an expert on, about what the cost of running QRAM actually is. Roughly speaking, an intuition for why this is much more expensive than in the classical setting: you know, if I have a very large amount of data, I could in principle, if I don't need some data for a while, put it in stone, or put it on some magnetic tape and bury that somewhere, and leave it there for 200 years and then come back to it. I don't need to keep the power on for this thing that whole while. But if I want to have superpositions over all my data, I need to have it powered on the entire time, for my entire computation. Because I'm not querying this particular memory cell, I'm querying some superposition of all the memory cells. Again, quantum computing people are probably a bit upset with me about how I butchered that analogy. But that is a reason why the question of memory cost matters in the quantum setting: that small speedup that we get from Grover, it would be very surprising if that would even hold, because
Deirdre:Ah,
Martin:you need this QRAM, and that seems to be a very expensive resource. And all the cost estimates that we have just assume you get QRAM for free. Because otherwise there just is no advantage,
Deirdre:Now, this kind of goes to the point of, like, how do you model your adversary? And one kind of nice thing about designing cryptographic schemes that will theoretically be resilient against a quantum adversary is that you ideally want to be resilient against an incredibly powerful quantum adversary. You want to give your adversary all the advantages that you can kind of get away with, because if you can stand up to that, then you've got a lot of margin. But then, on the other side, if you want your thing to be adopted, you need to be able to make it performant and usable, and that means picking parameters that are resilient enough against the adversary that you have modeled, but small enough to make it performant and small and, you know, all that sort of stuff. So, I'm guessing the basic answer is, like: my sort of ideal world is, if we modeled the quantum adversary running these attacks against module LWE as having perfect QRAM, how does that totally bone the parameter sizes?
Martin:not at all.
Deirdre:Hey, then why don't we just assume they have, like, perfect tons of QRAM and then just... Just move on with their day.
Martin:Uh, yeah, and this is what we do. I mean, we wrote a paper where we tried to estimate that, and we had to set QRAM as unit cost. And even then, I think, for the largest Kyber parameters, the speedup was something that you wouldn't care about. I'm trying to open it live while I'm talking to you, but it was very, very small in large dimensions. And then, I don't know, maybe I'm leaning out of the window a bit too much, but I don't think that's going to be an issue. And the shouting matches... people are not shouting at each other about quantum adversaries at the moment. That's not the issue that's at stake. I think more or less everybody at the moment is like: yeah, the costs, as we understand what they can do, are so far from something that we would consider a threat. Quantum adversaries, for parameter selection, you can more or less ignore. Because of this kind of weird thing: you need to have an exponentially large database, you produce that first, and then the speedup is in this quadratic part of the sieve, and we learned how to make that a lot smaller using classical techniques.
Deirdre:Cool.
Thomas:So when we're talking about the strength of these algorithms, the security levels of these algorithms, we're talking about classical attacks. We're talking about, like: if you swap out curve25519 or the NIST curves or RSA for an LWE problem, are we getting comparable levels of security against kind of classical attackers, like the attackers that actually exist in the real world?
Martin:Yeah. So that's, uh, I think, what's at stake. Mm hm.
Thomas:so like, the lowest level of the stack of this attack is the sieving stuff, and it turns out the sieving is both cool and also not as hard to understand as I expected it to be, right? And the sieving that we're doing, we're using that as kind of like a plug-in for the BKZ algorithm. Am I right so far? All right. So I have a sort of intuition for lattice reduction, because it comes up in some basic cryptanalytic attacks for public key stuff and for GCM and stuff. But, like, lattice reduction, LLL, I sort of get. Like, I get Gram-Schmidt because, you know, I did undergrad linear algebra with my daughter, right? I mentally understand LLL as, like, okay, LLL is like Gram-Schmidt if you had Gram-Schmidt to call as a subroutine, right? But it's comparably as complicated as Gram-Schmidt is. BKZ, in my head, I have, like: okay, BKZ is a better version of LLL. And that's as much as I know. So the idea of a lattice reduction thing that has an SVP oracle as a plug-in, I have no intuition for. How complicated is the BKZ part of this attack?
Martin:I think, if you're comfortable with LLL, then it's quite easy. Um, so let me first try to do it through the medium of LLL, and then let me try to do it without the medium of LLL. So first, through the medium of LLL: LLL is BKZ with block size 2. Because what LLL does is, it does the Gauss reduction, and that means it looks at two vectors, and those span a little lattice, a little projected sublattice, and you just find the shortest vector there. And now, instead of looking at two vectors and finding the shortest vector in there, you have some parameter beta, and you look at that lattice, and there your oracle finds the shortest vector.
Thomas:Ah,
Martin:It proceeds exactly like the LLL algorithm. Okay, not quite exactly, but it's more or less the same.
Thomas:I'm the one person that's listening to this that got anything out of that probably, but that all made perfect sense to me. That's great.
Deirdre:ha! Yay!
Thomas:have at least benefited from this. This is awesome.
Martin:Let me try to do it without LLL. So LLL is a very famous algorithm for doing lattice reduction. And you can, um, and this might not be the most intuitive, but you can think of this essentially as a sand pile. This is if you know Gram-Schmidt orthogonalization: you can do that for a matrix, and then you can plot the norms of these Gram-Schmidt vectors. You probably want to take the logs of these, and then what you get after LLL reduction is essentially a downward sloping line. The model that we use to analyze this is literally called the sand pile model, so that's the analogy I'm going to use. So essentially, what this algorithm does: you start with a very steep sand pile, and what you're trying to do is flatten it. Ideally, at the end of the day, you would want something that's level, that's flat, right? Just a flat line. So how do you do that? Well, you pick a little part of your pile and say: I want to flatten that, just locally. I'm going to locally make this flat. So I take the peak and I make it flat, and that means you push the beginning down, and, because the sand has to go somewhere, the rest is pushed up. Then I move on, next to the peak, and now I try to make that flat, and you keep on going until the end. Oops, I have nothing left to do, you know, I've started from the peak and I'm at the bottom. Let me go back to the peak and see what's happened there, right? And then you flatten that again, that again pushes some sand out, and that modifies some stuff later. And you keep on doing that, going round and round and round, flattening it in little steps. And the flattening operation is more or less this SVP oracle, right? It does that for you, one little block, then you do the flattening, and you just keep on doing that. And then suddenly all of these steps together give you something that is a lot flatter. And as much progress as you can make is as much stuff as you look at in each little step. And so the bigger the block size, the more you flatten it in one go.
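[The "sand pile" is the plot of log Gram-Schmidt norms of the basis. A small Python sketch of computing that profile; the BKZ commentary in the comments paraphrases Martin's description, it is not runnable reduction code:]

```python
import numpy as np

def profile(B):
    """Log2 norms of the Gram-Schmidt vectors of the rows of B.
    QR of B^T puts the i-th Gram-Schmidt norm at |R[i, i]|."""
    _, R = np.linalg.qr(B.astype(float).T)
    return np.log2(np.abs(np.diag(R)))

B = np.random.default_rng(3).integers(-50, 50, size=(8, 8))
print(profile(B))

# BKZ with block size beta repeatedly picks a window of beta consecutive
# indices and "flattens" the profile there with one SVP call, touring
# until the whole pile is as flat as that block size can make it.
```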
Deirdre:Awesome. Ha ha ha ha ha! Thomas, this is, this is
Thomas:I agree. No, it's not. I agree that that was awesome.
Deirdre:Oh, that's
Thomas:Okay, so I'm back to trying to chart my position in the whole attack, right? So I have an idea of what we've done with BKZ to the original lattice. I have an idea of how we use the sieving thing as kind of a plug-in to BKZ, to kind of locally sort out the shortest vector from a block. But I've lost track of where we are in the overall attack.
Martin:Yeah. So, okay, so now let's say we have a BKZ algorithm. How do I use this in the most straightforward way to solve module LWE? And I already mentioned that we just ignore all the polynomial structure. We just pretend it's just some matrix, right? Because that's actually how we cost these algorithms, 'cause we don't know how to do better. So then, what you do is: in LWE I essentially have a matrix A, a random matrix, and I multiply it by some vector S, and then I add some small error term E. And let's call the result, A times S plus E, C. It's some vector. So how would I go about solving it? Well, one thing I can do is, you know, try all possible S's, to see if, when I multiply them by A and subtract from C, I get something small. Right? That would be kind of like: give me integer linear combinations of A, and when I subtract them from C, give me something small.
Thomas:We're trying to find, like, the specific error term there, then. And we know the error term is small, because, like, it's a parameter of the algorithm. Like, the rate of the, uh, of whatever the construction is, is... Like, by design, it's a small band of possible errors.
Martin:yeah, and the parameters are chosen so that there is no other S that also gives you something small. So if you found it, then it's unique. And that already should smell a little bit like lattice reduction, because I have some vectors, and I do some integer combination of them, and if I do the correct integer combination, then suddenly everything collapses and everything is very small. And that is known as the unique shortest vector problem. So it's a situation where, like: ooh, I have this matrix here over the integers, I do some integer linear combination of its rows, and there's one combination in there that is really quite small. And this BKZ algorithm is really good at finding these short vectors. So I more or less take my LWE instance, apply my BKZ algorithm to it, and I have the primal attack. So the primal attack is, in a way, very direct. It's tackling the problem head on: there exists a linear combination of the rows of this matrix that gives something very small, and we know a class of algorithms that do exactly that, so let's run that. That's it.
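[A sketch of one standard way to set up that "unique shortest vector" lattice from an LWE instance, a Kannan-style embedding; details vary between variants of the primal attack. You would then hand B to a lattice reduction library, e.g. fpylll, which is not shown here:]

```python
import numpy as np

def primal_basis(A, c, q):
    """Rows generate the lattice of (x, y, w) with A@x + y = w*c (mod q),
    which contains the unusually short vector (s, e, 1)."""
    m, n = A.shape
    B = np.zeros((n + m + 1, n + m + 1), dtype=int)
    B[:n, :n] = np.eye(n, dtype=int)                 # choose x = s freely
    B[:n, n:n + m] = (-A.T) % q                      # ties y to -A@x mod q
    B[n:n + m, n:n + m] = q * np.eye(m, dtype=int)   # the mod-q freedom
    B[-1, n:n + m] = c                               # the sample itself
    B[-1, -1] = 1                                    # embedding coefficient
    return B

# Reducing B (with BKZ at a big enough block size) should surface
# (s, e, 1), which is far shorter than anything else in the lattice.
```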
Thomas:Got it. Okay. And that's actually, that feels kind of similar to the way, like, LLL gets applied to existing crypto problems, right? It's like, it's set up so, like, the smallest vector is going to be, is like the likely candidate solution or whatever, right?
Martin:So, one intuition: maybe people have also, you know, heard of Bleichenbacher's attack, where you solve the hidden number problem for poor randomness in ECDSA. The hidden number problem is one-dimensional LWE. So if you understand the hidden number problem, and have an idea of how LLL solves that, it's literally the same thing as the primal attack on LWE.
Thomas:Neat. Okay.
Deirdre:I really love this. Just sort of like: hey, this other attack that you know from, like, 30 years of classical asymmetric cryptography is, in fact, a small-dimension instance of these bigger ones. That's going to help me a lot to learn these a little bit more in depth.
Thomas:I'm assuming that the reason everyone talks about the dual attack is because the primal attack is not the way you would really do this.
Martin:Yeah, I would say broadly the primal attack should be the way to do this, because it really solves the underlying problem. The dual attack, and that is the segue, I guess, is actually solving the SIS problem, the module SIS problem that we mentioned: finding a short linear combination that sums to zero. That is actually how the dual attack proceeds. So we know that A times S plus E equals C. And now, say I have a lot of rows of A, so it's not just a square matrix, but I have quite a few rows. That means there exists a short linear combination of the rows that sums to zero, because these are linearly dependent. And all I have to do is find a short linear combination that sums to zero. So now, if I apply this short linear combination to the rows of A, well, that makes it zero. And if I now apply it to C, I've just killed A, so all that's left over is a short linear combination of the error term. But the error term itself is small, so after I've done that, I get an element out that is much smaller than Q. So what I do is, I do this many times, and then I ask: is what I got out a short element all the time, or is it a uniform element? That's the dual attack. And then, how do I find short linear combinations of A? Well, the BKZ algorithm finds short vectors in a lattice, and it turns out the kernel of A is a lattice. So I compute the kernel of A, and that gives me an integer matrix. And then I say: BKZ, would you please be so kind and give me some short vectors in this lattice? I would really like to multiply them by my C.
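[A sketch of the dual-attack distinguisher in Python: given short vectors v with v @ A = 0 (mod q), the products v @ c are biased toward small values mod q when c is an LWE sample, and uniform when c is random. Finding those short v is itself a BKZ job on the kernel lattice, elided here:]

```python
import numpy as np

def looks_like_lwe(vs, c, q, threshold=None):
    """vs: matrix of short dual vectors with vs @ A = 0 mod q.
    Returns the fraction of v @ c values that land 'small' mod q; expect
    roughly 0.5 for uniform c, noticeably more for LWE c."""
    x = (vs @ c) % q
    x = np.where(x > q // 2, x - q, x)   # center into (-q/2, q/2]
    if threshold is None:
        threshold = q // 4
    return float(np.mean(np.abs(x) < threshold))
```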
Deirdre:hm, uh, he,
Martin:And then you actually do something slightly more complex, called the normal form of the dual attack. The reason why I'm mentioning this: the lattice that you fundamentally reduce, in the dual and the primal attack, is, one has A in it, the LWE A, and the other one has A transpose in it. So, more or less, the difference in the BKZ tasks that you're running is whether you reduce A or A transpose, which, reminder, A is a random matrix. The only difference is, in the one case, at some point you unlock the secret E, because, you know, you do a linear combination and suddenly everything collapses. In the other case, there is nothing to collapse. You just do your reduction, and later on take your short vector, multiply it by C, and check whether something small comes out. And the dual attack is, in a way, quite a roundabout way of attacking it, right? You're saying: oh, I find something short in A that doesn't have anything to do with A times S plus E, and then I multiply it by A times S plus E, and suddenly I can distinguish. It feels like a less direct way. So, morally, you would say: I don't think this should be better than just tackling the problem head on. You're even running the same BKZ algorithm on it. But it turns out, as far as we understand these attacks, you can play some tricks there in how you go about it, and that currently makes these attacks a little bit faster than the primal attack, at least as far as our estimates are concerned. So it seems like, yeah, indeed, these algorithms perform slightly better. And whether that's an artifact of, like, we just haven't really found out how to analyze either of them properly, or there's actually something deeper happening, where this roundabout way is more efficient, is open.
Thomas:Okay. So that mostly made sense to me, and I won't torture you by having you explain that further. Um, I'll torture you a different way, which is: so for the PQC stuff, we have target security levels, right? Which is, like, roughly matching the strength of the asymmetric algorithm, the key exchange or whatever, to whatever bulk encryption we're doing, like 128 bits of security, whatever it is, right? So, like, I guess there was news last year, I think it was 2022, right? Where, like, the Israeli equivalent of the NSA published a technical report where they had brought the cost of, I think, the dual attack down to below the target security levels. I don't know how meaningful that reduction of security was. It was, like, on the order of, like, 10 or 15 bits or something like that, and the number of bits we had was already higher than AES or something like that. I don't know, right? But I'm more interested in what the meaning of that technical report was. Like, what was the interesting bit of that?
Martin:They did two things. So let me start with the numbers. Yeah, so the target security level was meant to be something in the ballpark of 143 bits, and they reduced it to 138 or so.
Deirdre:mm hm, so five bits.
Martin:Uh, something like that for the smallest parameter set, and it's bigger for Kyber1024: it went from 272 to 257, so that's a bit better, right? And what they did is two things. On the one hand, they improved, so, they build on some model of how you cost the sub-exponential part of this algorithm. You know, I mentioned this decoding in the sieving earlier, and that is a sub-exponential part, so that's great, but it's still expensive. And in this paper that I put in the meeting notes, we gave some costs for that, and they said: actually, you can do better than this. So they improved the sub-exponential factor. That is one of the big savings. The other one is they are building on, or maybe it might have been independent work, but there was a paper slightly earlier by Guo and Johansson at Asiacrypt. And they essentially introduced an FFT-based approach to do a part of the guessing in the dual attack. So let me unpack that. The way I've described the dual attack now, it was a distinguishing attack: do I get something small or large? But what you can do is say: how about I guess some components of the secret? Then, if I guessed incorrectly, I assume that the distribution of what I'm left with is uniform. If I guess correctly, I get a smaller-dimensional LWE instance. If I can distinguish those two, I can confirm or deny my guess. And this is useful because now the lattice problem that you have to solve is smaller, right? And the nice thing about this attack, from an attacker perspective, is that the costs are more or less additive. You can run the lattice part of the attack once, and then you do the guessing stage on this preprocessed data. And they showed a way of making this guessing step cheaper using an FFT-based approach, by essentially targeting only some lower-order bits. And that allowed them to increase the dimension of the stuff that you guess, which decreased the dimension of the lattice problem, and then all attack parameters were a bit smaller and everything was a bit faster. And in this MATZOV paper, they have a slightly different approach to using an FFT, which also allows them to generalize this: not just for bits, but for any small prime. And that flexibility allows them to reduce this further still. This then also opens the door for the kind of back and forth, because there's a paper that is joyously titled, does the dual attack even work?
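[A brute-force sketch of that guessing layer, reusing looks_like_lwe from the earlier sketch. The FFT trick in Guo-Johansson and MATZOV batches exactly this scoring over all guesses at once; enumerating guesses one by one, as here, only shows the logic:]

```python
import itertools
import numpy as np

def guess_secret_coords(vs, A, c, q, idx, score=None):
    """Guess the secret at positions idx. vs should be short vectors with
    vs @ A_rest = 0 mod q, where A_rest is A without the columns in idx;
    a correct guess then makes the dual distinguisher score high."""
    if score is None:
        score = looks_like_lwe               # defined in the earlier sketch
    best = (None, -1.0)
    for g in itertools.product(range(-2, 3), repeat=len(idx)):
        g = np.array(g)
        residue = (c - A[:, idx] @ g) % q    # peel off the guessed part
        s = score(vs, residue, q)
        if s > best[1]:
            best = (g, s)
    return best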
Deirdre:Mm hmm.
Martin:Uh, by Léo Ducas and Ludo Pulles, and the problem that you might run into is, at some point you have to distinguish between: is this thing random, or is this thing from an LWE distribution?
Deirdre:Mm
Martin:But now, if we guess a lot of secrets, the correct secret has to compete with a potentially exponential number of false solutions. And you have to take into account that maybe some of them randomly behave even better than your correct solution. And then there's this paper by Léo and Ludo, as I mentioned. And then there's a paper by Yixin and her co-author, where they said: okay, where's the regime where we can prove this? But it's essentially the question of: we're doing some statistics, we're more or less making some independence assumptions, and these things are not necessarily independent, so is that actually sound? So there's a bit of a back and forth of: do you actually get the claimed complexity or not?
Thomas:But here we're talking about not whether the particular MATZOV attack worked, but whether the dual attack approach works at all.
Martin:So the dual attack has been known to work, but what the question really means is: does the attack work when you do this key guessing? Can you do this step, and what is the regime in which you can do this step and still have some guarantees, like: no, no, no, my correct solution will be the one that I identify.
Deirdre:mm
Martin:And what they showed in this paper is that the analysis is kind of heuristic, like: we have some counterexamples. Yeah. And so, whether the attack has this complexity or not is, at this point, a bit up in the air, is my understanding. I haven't followed the latest on this in detail. But we're talking about, yeah, a ballpark of: do you gain five bits or not in the dual attack, right? We're not talking about: is the whole principle doomed? What we're talking about is: do you really get this complexity that pushes you below this kind of magical number of 143?
Thomas:Gotcha.
Deirdre:hmm. Awesome. So, all of these, even including the MATZOV attack and some of these improved sieving attacks, these are all classical. We talked a little bit about how, in the quantum attack model, you have to consider how good or how much QRAM you have. But it seems that for a lot of schemes, you don't even have to consider that, because even if you give the attacker all the QRAM all the time, and it's perfect and never has errors, it doesn't really affect the security margins for those attacks. What about for these best-in-class classical attacks, if the dual attack actually gets the speedup from MATZOV and all these things like that? What do we have to worry about in terms of RAM and storage and all that sort of stuff? Is that a consideration for classical? It seems like it's not so much a consideration for quantum attacks.
Martin:it definitely is a consideration for classical. So the sieving algorithm, the key routine that we talked about here, is an algorithm that runs in exponential time and requires exponential memory.
Deirdre:Oh, okay.
Martin:And so, and that
Thomas:sounded so easy!
Deirdre:heh heh
Martin:the key thing is, you need exponentially many of your strings in order to do this pairwise comparison and actually get a reduction, right? That's the reason. So the numbers here: the memory that you need for a sieve is roughly 2 to the 0.2 times the dimension of your sieve. So if you have block size beta, it's 2 to the 0.2 beta.
Deirdre:Okay. Mm hmm.
Martin:So we had block size roughly 400 for the smallest Kyber parameters. So you're looking at a memory of roughly 2 to the 80 vectors. And I already mentioned that the way we actually do this attack is that we put them in these small buckets, and it's only in these buckets that you do a pairwise comparison.
Deirdre:hmm. Within each bucket.
Martin:Yeah, within each bucket you do this pairwise comparison. And a bucket is roughly the square root of your entire database, that's how you parameterize it. So you're looking at something like 2 to the 40 vectors, and that's where you do the quadratic search. And so some work that we have recently done, which will hopefully be put on ePrint soon, is to say: okay, can you sieve on multiple machines?
Deirdre:heh.
Martin:And it turns out you can. Because the thing where you really need the memory to be fast is when you're looping through your bucket to find something that is close to the vector that you're currently holding onto, right? There you need memory to be fast, and that is something like 2 to the 0.1 times the dimension, right? So it's quite a bit smaller. And so, if you have local RAM that is in that ballpark, then you can essentially put one bucket per machine, and there enjoy all the benefits of fast handling, right? So that was the thing we were focusing on. And there have been some designs where you can even, you know, arrange computers in a ring structure and so on, so that you minimize this. But, more or less, yes, you need exponential memory, but it's not like you're jumping around randomly in this exponential memory all the time. You can hide a lot of load latency by essentially running through this inner loop and just touching the next vector. So in that sense, there is some cost associated with accessing all of these exponentially many vectors, but it's not like each call now means I have to go out into an exponentially large database and find a random element in there. You know, caches and so on make sense.
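[The memory arithmetic Martin quotes, as a sketch; the constants are the rough figures from this part of the conversation:]

```python
beta = 400  # ballpark block size for the smallest Kyber parameters

print(f"sieve database: ~2^{0.2 * beta:.0f} vectors")  # ~2^80
print(f"one bucket:     ~2^{0.1 * beta:.0f} vectors")  # ~2^40, sqrt of database

# One bucket per machine means each machine only needs ~2^40 vectors of
# fast local RAM, which is what makes distributing the sieve plausible.
```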
Deirdre:This smacks of, not quite MapReduce, but, like, you can just break it up into chunks, farm it out, pretty much just bucket it into parallel processes, get some results, compare them, and then kind of work on it. As opposed to every sieving bucket having to see all the other data that the other sieving processes are also working on, or something like that.
Martin:And there's, of course, some synchronization that you need to do. But, I mean, in our, you know, somewhat limited experiments, with a few servers, we saw a pretty tidy linear speedup. So it seems like, yeah, you can scale this across multiple servers. And then, of course, really what you want to do is have networks of FPGAs doing this sort of thing, but, you know, we leave that for future work.
Deirdre:So does this affect any of the current estimates of the hardness of the different security parameters? Because it's supposed to be, like, well, level one is approximately AES-128, and level five is supposed to be AES-256, and so on. Do you think it affects that?
Martin:Well, at the moment, all of these estimates that we're talking about assume that RAM is free.
Deirdre:Uh huh.
Martin:So if anything, they push the cost up, not down.
Deirdre:Cool. All right.
Martin:And so the dispute is over: does it push it up enough to regain the security level that was originally claimed, or not? So, for example, say we find out that the MATZOV attack behaves exactly as they claimed in their paper, despite the problems identified in the analysis; somehow it turns out, yeah, we made some independence assumptions, they're not true, but it seems to work in practice, sort of thing. Let's say that. And if you're happy with that, like, yeah, it's not quite AES-256, it's a little bit lower than that, but I accept that, then taking RAM costs into account will only push that up. The whole dispute is over: if you're uncomfortable with these numbers right now, does taking memory into account allow you to be comfortable again?
Deirdre:see. Okay.
Thomas:And you're gonna have to take memory into account somehow. But, like, you might not take the worst-case assumption about how much complexity or time or effort it's gonna take, like having to have every bit be connected to every other bit, because, like, we can bucket things and blah, blah, blah. Okay.
Martin:Yes, at the moment we're just assuming this is free, right? Every cell, I can just address it immediately, it's a no-op, right? I want to compare a vector, then, you know, I can just do that. And that is, of course, unrealistic. But the question of how well different levels of cache mask the true cost, and how much you should actually care, I think is an open research question.
Deirdre:On the flip side, has anything changed with, like, analysis of how hard AES-128 and 256 are against quantum attackers with, you know, perfect QRAM, or classical attackers where perfect memory access is free? Like, has anything evolved on that? Or is it just purely, like, model Grover's with perfect QRAM, with everything everywhere all at once?
Martin:No, there has been, so, all the lattice schemes, let me try to phrase this the right way around, got more secure:
Deirdre:Yeah,
Martin:there was a revised cost estimate for breaking AES on a quantum computer.
Deirdre:neat.
Martin:So there's a paper, I can put it in the meeting notes, by Fernando Virdia and co-authors. I apologize to the co-authors, but Fernando was my student, so I know his name best. And they revise the costs for AES quantum circuits. So this target of "as hard as AES on a quantum computer" was a moving target. It makes sense why they picked it, but it's not settled how many operations it takes to break AES on a quantum computer. And even what the right cost model for a quantum computer is isn't clear. What are the operations that you should count? Because we don't really know yet which of the architectures will be dominant, and what the relative costs of various quantum gates are.
Deirdre:Thank you, because, like, I know that there's discussion, when you're looking at specific new crypto schemes, about all these attacks and how to measure them and that sort of stuff. But the other side of it is that the thing you're trying to categorize against may also move, because it's also subject to cryptanalysis and modeling and, you know, whatever. And so I was very curious whether these things were moving at all, or whether the complexity of attacks on AES-128 kind of stayed stable, and I'm glad that you pointed out that paper, which I haven't read. So, yeah. Cool! Thomas, do you have anything else?
Thomas:No, my brain is leaking out of both my ears and my
Deirdre:hee hee
Thomas:but that was, that was incredible. That was awesome. So I'm, I know a little bit more than I did before, but it's so difficult to get me to learn new things that you should be very, very impressed with yourself for accomplishing that. That was awesome. Thank you so much.
Deirdre:ha!
Martin:Okay, I feel honored as an educator.
Deirdre:Yeah! Thank you, Martin, for, uh, meeting the challenge of how do I talk about these things with no visual aids, audio only, and I think you did an excellent job, because I understood a lot of it. Thank you, Martin. Thank you very much. Uh, Security Cryptography Whatever is a side project from Deirdre Connolly, Thomas Ptacek, and David Adrian. Our editor is Nettie Smith. You can find the podcast on BlueSky and Twitter at @SCWpod, and the hosts on BlueSky and Twitter at @durumcrustulum, @tqbf, and @davidcadrian. You can buy merchandise at merch.securitycryptographywhatever.com. Thank you for listening!