Security Cryptography Whatever

Why do we think anything is secure, with Steve Weis

June 29, 2023 Security Cryptography Whatever Season 2 Episode 15

What does P vs NP have to do with cryptography? Why do people love and laugh about the random oracle model? What's an oracle? What do you mean factoring and discrete log don't have proofs of hardness? How does any of this cryptography stuff work, anyway? We trapped Steve Weis into answering our many questions.

Transcript: 
https://securitycryptographywhatever.com/2023/06/29/why-do-we-think-anything-is-secure-with-steve-weis/

Links:
- The Random Oracle Methodology, Revisited: https://eprint.iacr.org/1998/011.pdf
- Factoring integers with CADO-NFS: https://www.ens-lyon.fr/LIP/AriC/wp-content/uploads/2015/03/JDetrey-tutorial.pdf
- On One-way Functions from NP-Complete Problems: https://eprint.iacr.org/2021/513.pdf
- Seny Kamara's lecture notes on provable security: https://cs.brown.edu/~seny/2950-v/2-provablesecurity.pdf
- How To Simulate It – A Tutorial on the Simulation Proof Technique: https://eprint.iacr.org/2016/046.pdf
- A Survey of Leakage-Resilient Cryptography: https://eprint.iacr.org/2019/302
- A Decade of Lattice Cryptography: https://eprint.iacr.org/2015/939.pdf


"Security Cryptography Whatever" is hosted by Deirdre Connolly (@durumcrustulum), Thomas Ptacek (@tqbf), and David Adrian (@davidcadrian)

Deirdre:

Hello, welcome to Security Cryptography Whatever. I'm Deirdre.

David:

I'm David.

Deirdre:

And we have a very special guest today, Steve Weis. How are you?

Steve:

Very good. Thank you.

Deirdre:

Awesome. Um, we invited Steve on today because there was a little bit of discussion prompted by an eprint that went up on the internet, which we're not going to describe and we're not going to link to because it's hella broken. But there was an interesting claim in its abstract, or the beginning or whatever, that their cryptosystem proved that P does not equal NP, because their crypto algorithm was correct. It turns out it was not correct. But even if it were correct, it would be interesting to claim that your cryptosystem proved that P does not equal NP. And this led to an interesting conversation in some of our little cryptography nerd circles about computational complexity, hardness of cryptographic problems, what the hell we base our assumptions and understanding of the security and hardness of the cryptography we design on, reductions, all these things. And we realized that we'd never really talked about these things in one place, and that there was enough discussion and questioning in a well-informed cryptographic circle that we wanted to talk to somebody about it. Steve did a PhD in cryptography, and so did David, so I've invited them on to put some of these questions on the record. Can you tell us a little bit about yourself, Steve?

Steve:

Yeah, my name is Steve Weis, and I'm currently a software engineer at a company called Databricks. And yeah, I did do a PhD in cryptography, but I would probably describe myself as a lapsed cryptographer. I have not been practicing in a while, but I can read papers and parse through them and understand what's going on.

Deirdre:

Cool. Thanks. And David worked on TLS, right?

David:

I did remarkably little math during my PhD, so we're gonna lean on Steve a lot here.

Deirdre:

Okay, so Steve, can you give us like an intro to why the kind of broad statement that "I have a cryptosystem, therefore P does not equal NP" just kind of jumps off the page as, like, whoa there, what are you talking about?

Steve:

Well, the irony of this paper is that the scheme is completely, just, broken. You know, you look at it, it's not even semantically secure, the way we would describe it. But in all that mess, they do make a statement that if we could show that a cryptosystem exists, that would prove that P does not equal NP. Because if P were equal to NP, you would not be able to have cryptography as we know it. You would not be able to have cryptography based on, you know, hard problems, and the assumptions that you need to have symmetric cryptography.

Deirdre:

That's interesting because, one, we have cryptosystems, right, that are not considered hella broken. That's why we rely on them. So ignoring this paper, say we just have regular schmegular elliptic curve Diffie-Hellman, and ignore the presence of quantum computers, blah, blah, blah. So, first, for a general audience: what is P, what is NP, and why do they not equal each other?

Steve:

Great question, and, um, you know, this is probably not going to do justice to it for a general audience, but: P, very broadly, is the set of problems that we know are computable with polynomial-time computers. They can use randomness; there are some different variations on exactly what they are, but P is basically what we know how to compute. NP is the set of things that we know how to check in polynomial time. So I might have a problem where I can't find the answer, but I can validate an answer in polynomial time. This leads to a big question of whether these are the same, and the field is called computational complexity. These are just two of many, many different, what are called, complexity classes. So we have P, NP, BPP, BQP, PSPACE; it goes on and on and on. You can dig into this and it can be kind of fascinating. They all have different assumptions that are basically, given a machine that can do this, can you decide this? And often people call these languages, like, a language is in one of these complexity classes. So the big, long-outstanding problem in computer science is whether P is equal to NP. A classical NP-complete problem, and what that means is that you can reduce other problems in the space to it, is called 3SAT. That's satisfiability with clauses that have three terms in them. The idea is basically that you can write a circuit using these 3SAT clauses, a general circuit, and I can plug in an input, and the problem you're trying to solve is whether it has a satisfying input. And if you can do that, then you can actually do a search to find the input that satisfies it. So I can set up something and say, "I don't know the answer, here's the problem, can I find an answer?", and it'll actually generate an answer. So, really roughly, in the cryptography world, that answer could be the secret key. I could set up a SAT problem that says: is this satisfiable, is there a value that would decrypt this thing to this known plaintext? And you can crank that through, basically do a search, and find the solution, which would be the key that satisfies that problem. That's why I say that if P equaled NP, cryptography wouldn't exist. And not just that, but all sorts of other things would be implied. If you can write a checkable statement, you can find a solution for it. So certain parts of math would now be, like, automated: if you can write down a problem, you can solve it, with a lot of caveats. So, I mean, don't quote me on that.
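To make that check-versus-search gap concrete, here is a minimal sketch (not from the episode): checking a 3SAT assignment is fast, while the search below is brute force over every assignment. If P equaled NP, something morally equivalent to that search would run in polynomial time, and the satisfying assignment could encode a secret key. The formula and variable names are made up for illustration.

```python
from itertools import product

# A toy 3SAT formula over variables x0..x3, in conjunctive normal form.
# Each clause has three literals; a positive int i means "xi", negative means "not xi".
clauses = [
    ( 1, -2,  3),
    (-1,  2,  4),
    ( 2, -3, -4),
]

def satisfies(assignment, clauses):
    """Check an assignment in time linear in the formula size (the 'NP' part)."""
    def lit(l):
        val = assignment[abs(l) - 1]
        return val if l > 0 else not val
    return all(any(lit(l) for l in clause) for clause in clauses)

# The 'search' part: brute force over all 2^n assignments.
# If P equaled NP, something like this loop would run in polynomial time,
# and "find the satisfying assignment" could encode "find the decryption key".
n = 4
for bits in product([False, True], repeat=n):
    if satisfies(bits, clauses):
        print("satisfying assignment:", bits)
        break
```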

Deirdre:

Mm-hmm. So why don't we deploy cryptosystems that reduce to the 3SAT problem? This NP-complete problem sounds pretty neat and secure, right?

Steve:

I mean, I think this is the interesting thing: when it comes to public key crypto, we have it reduced to problems that are not NP-complete, or rather are not known to be NP-complete: factoring, um, finding a discrete log. You know, tomorrow somebody might be the next Andrew Wiles and come out of their basement and say, I have a new factoring algorithm and it's poly time, and we'd be like, all right, throw out RSA. But it doesn't collapse the polynomial hierarchy or prove that P equals NP. And we know that these problems we mentioned are solvable in a different computation model. Like we said, we put aside quantum at the beginning, but we know how to solve these if we have a quantum computer, which we don't have one that can actually run on these yet. So it's a great question. The reason these problems were chosen is that it's easy to find hard, or what we think are hard, instances of them in, like, an average case. We can basically sample a bunch of different RSA moduli, and they're mostly hard to factor.

Deirdre:

Got it. So to take one step back: we just said that we base cryptosystems on reductions to these hard mathematical problems. We just mentioned factoring, we just mentioned discrete log, which is elliptic curves and the prior generation before elliptic curves, and, say you based a cryptosystem on 3SAT, you would reduce to 3SAT. There's more to a cryptosystem than just "factor", right? So when we say we reduce to one of these computational problems, what do we mean by that?

Steve:

That's a great question. It's probably a term that not everyone is familiar with. The idea is that, let's say I have RSA, and I have a black box that can basically solve RSA. It doesn't necessarily need to spit out the factors, and we can talk about what this means in a minute, but let's say we have a black box that can distinguish RSA encryptions, or that can give you some advantage in guessing the content of an RSA ciphertext. If we have that, we can use that black box, which we may not even know the inside of, to factor numbers, because we would just feed those numbers in as an RSA modulus, run them through this thing, and then basically iterate on it and get out a factor. It doesn't necessarily work the other way around, that if we had a black box that could factor, we could... well, it does in this case, but it doesn't necessarily always work the other way around with this problem. I might have this backwards, actually, so.

Deirdre:

So that, kind of, putting a little black box around an oracle that says "I can distinguish between...": that's the thing we use in the security proof, the game model. You're trying to model the thing you're trying to build, a cryptosystem that has specific properties, say encryption and decryption, and you're trying to construct a mind game, a theoretical game, of what you think an adversary can do and how it interacts with your cryptosystem, your encryption/decryption scheme or whatever. And when we say we have a little black box that can distinguish between different RSA-encrypted or factoring-based ciphertexts, we're saying that in that game, the adversary, or anyone else, has a way to distinguish between these things and can use it when interacting with encryption and decryption however it likes. And often what we do when we construct these security games is kind of the reverse, I forget the name of it: saying that if you have a black box that distinguishes between these different factoring-based ciphertexts, then the adversary can determine which one was encrypted by you and which one was encrypted by David, or whatever. It breaks encryption. But then you've shown, basically, that breaking the encryption must rely on being able to distinguish, which must rely on being able to break factoring. You're saying that the whole cryptosystem, encryption and decryption, relies on factoring being hard. If the adversary cannot factor, then your encryption/decryption cryptosystem is secure, at least in this very limited game space. So that's the big context of "the security of encryption/decryption reduces to the factoring problem" or whatever. It's because we have done all this work to construct this hypothetical, tried to model all the pieces of how this works, and shown a security proof, literally kind of a math proof like you might do in geometry class, at least the way I did in geometry class many years ago, that says: if this thing is true, then everything else about my proof will hold, and the contrapositive will also hold. Did I get that right?

Steve:

I think, you know, we should probably talk about what we mean when we say these games, too. Because when the public thinks about breaking a cryptosystem, you think, okay, I can just decrypt the ciphertext and get the plaintext out. These proofs give the adversary a lot of power by making the game as easy as possible. So, an example is indistinguishability under a chosen-plaintext attack: I take two plaintexts, encrypt them both, and then I flip a coin and give you one of the two. And if you can get any advantage in telling me which one of the two it is, you've broken the cryptosystem. You're not actually decrypting it. It's like, okay, this one looks like an encryption of zero, you know, I have a 0.001 advantage, I win. So we basically make the game as easy as possible, and we want to give the adversary as much power as possible. And in that case, if there's a black box that can do this with, let's say, RSA, you could use that same black box to factor: I could take an arbitrary number, make it into an instance of an RSA modulus, and plug it into this thing. That's the idea. There's just some algorithm that can solve this, we don't know what's going on inside of it, but it runs in polynomial time, and I can use that thing to solve this problem that a bunch of smart people haven't figured out how to solve. That's really what it comes down to, and that's kind of the best we have, and that's in the public key world. In the symmetric world, we don't even really have that. The symmetric world is just: we get a bunch of Belgians to work on something, they put it up on the internet for a few years, and in five years we declare, all right, no one's broken this, let's pick the fastest scheme from the Belgians, or whoever's winning it that year.
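Here is a toy sketch (not from the episode) of the game shape Steve describes, with made-up names and a deliberately broken one-byte-XOR "cipher" so the chosen-plaintext adversary can actually win. Real IND-CPA definitions are stated over probabilistic encryption; any scheme hoping to survive this game needs randomized ciphertexts.

```python
import secrets

# A deliberately weak "cipher": XOR with a fixed single-byte key, so identical
# plaintexts always produce identical ciphertexts. Toy only, to show the game shape.
KEY = secrets.token_bytes(1)

def encrypt(pt: bytes) -> bytes:
    return bytes(b ^ KEY[0] for b in pt)

def ind_cpa_game(adversary) -> bool:
    """One run of the IND-CPA game. Returns True if the adversary guessed b."""
    m0, m1 = adversary.choose_messages(encrypt)   # adversary may query encrypt() freely
    b = secrets.randbits(1)                       # challenger flips a coin
    challenge = encrypt(m0 if b == 0 else m1)
    return adversary.guess(challenge, encrypt) == b

class DistinguishingAdversary:
    """Wins by exploiting determinism: encrypt m0 yourself and compare."""
    def choose_messages(self, enc):
        self.m0, self.m1 = b"attack at dawn", b"attack at dusk"
        return self.m0, self.m1
    def guess(self, challenge, enc):
        return 0 if challenge == enc(self.m0) else 1

wins = sum(ind_cpa_game(DistinguishingAdversary()) for _ in range(1000))
print(f"adversary won {wins}/1000 games")  # far better than "a little better than a coin flip"
```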

Deirdre:

Yeah. That's not the reason I'm not super into symmetric, but it also feels a bit woozy and wibbly, whereas with asymmetric crypto I kind of like the structure more and I kind of like the provability notions more. But that's just me.

David:

Yeah, and in asymmetric, you're basically saying: if I have a black box that can do anything from distinguishing ciphertexts to fully decrypting things, then I could use it to also quickly solve this problem that we think is hard. Therefore, because we think this problem is hard, this black box must not exist.

Steve:

Yeah, and like I said, we usually try to make these games as easy as possible. Not "recover the entire plaintext and give me exactly what's encrypted"; it's, okay, there are two distributions of output, can you tell the difference between them, and can you do it probabilistically? You don't have to win every time, you just need to win a little better than flipping a coin. And if you can do that, we consider the cryptosystem broken. So the idea is that you make the game easy, you give the attacker as much power as you can, and then that determines your level of security. And there are different flavors. You might give the attacker the ability to decrypt with an oracle, except for the challenge ciphertext; that's adaptive chosen ciphertext, which is a stronger security model than the first one we talked about, chosen plaintext. This comes up a lot in the zero-knowledge proof world, where there are all sorts of different powers an attacker could have. They could be in a model where concurrency is allowed, where they can reset the parties in the protocol. And there you're basically showing that between a simulation and a real transcript, the attacker can't tell them apart. So in that world the attacker might have the ability to reset the real transcript midway through, and if they can do that a bunch of times and then get a real sample, can they distinguish it from something you could simulate yourself?

Deirdre:

Mm-hmm. We talked about security games, and this is a way to do security proofs, to try to model the capabilities of an adversary, the environment it's operating in, and then the scheme you're trying to establish some security property about. There's the standard model. I still don't really know what the standard model is, and this is not the physics standard model; this is just sort of the barest-bones standard model for security proofs. There's also the random oracle model. And it's funny, because I see it in so many places in security proofs: they're just sort of like, ah, if random oracles exist and we can model them like so, then all of these things are secure. And at the same time someone will chime in and be like, it is absurd that we've decided that having the random oracle model is fine. Can you talk a little bit about that?

Steve:

Yeah, so, I mean, first off, the standard model, in my mind, is just these math problem assumptions. So: discrete log, if you're just assuming discrete log and that it's hard to distinguish, or, you know, computational Diffie-Hellman. These are kind of the well-worn assumptions. And then when it comes to the random oracle model, the idea is that when you're doing these protocols, you basically want something where you give it an input and it deterministically returns a specific output. You assume that, and you can build different protocols on top of it, and in practice you would drop in, you know, a hash function, HMAC, whatever, to implement it. So this is great, it makes proofs simpler, because you can say you have this oracle, you call it, and it gives you back a random thing. It's not predictable. Great. So I guess in the late nineties, early two-thousands, there was a result, I want to say by Ran Canetti, but I might be wrong. They basically showed a construction that is secure in the random oracle model but was not secure, because it was constructed in a way so that when you actually implemented the oracle, it let you break the protocol. It was a really artificial construction. The reason people are still using the model today, kind of without a lot of justification, is that it's hard to actually come up with that artificial construction. You had to really think and come up with a really tricky way to do it. So the result of that was: just because you prove something secure in the random oracle model, it doesn't actually prove it secure. But it kind of gets hand-waved away as, okay, well, people are using it naturally, you're not constructing your scheme that way. That paper is kind of over my head. I remember I was early in grad school when it came out and I tried to read it, and I'm still hand-waving how it works. But yeah, that was a really interesting one, because it showed that this thing that seemed like a really natural assumption, like, yeah, we drop in an HMAC or a hash function and it'll just work, well, it doesn't.

Deirdre:

Mm-hmm. So one thing that kind of blew my mind early in my cryptography journey was that so many of the problems we reduce these cryptosystems to, factoring, discrete log, or, you know, some of these newer theoretically post-quantum assumptions... we do all these security proofs, we say we're in the random oracle model or the standard model or whatever, we do all this scaffolding. We give so many powers to the adversary, we're very conservative about the security property we're trying to prove, we make all these long proofs, we make these implementations, and they're very fast, and they live for years, and they seem really, really bulletproof if you do all these things well. But at the end of the day, it reduces to a problem like discrete log or factoring or whatever, and we don't have a proof that it's NP-hard or whatever. It's not one of these NP-complete problems, and we don't have any sort of proof or solid foundation that it is a hard problem. It's just: if it is a hard problem, then our cryptosystems are secure. It's all based on "we think it is". We do not have results that show it is, and it hasn't broken yet, so, let's keep going with it?

Steve:

Yeah, pretty much. Like

David:

we know it's not NP-complete, right? Actually, like we know that it's

Steve:

so, yeah.

David:

Um, 'cause we have algorithms for it. They're just still bad. It's just not officially, like, NP-complete.

Deirdre:

Right. And even that was like, so even if it's not NP-complete, fine, whatever. For RSA, for factoring, for discrete log, we don't have any result that says it is at least this hard to solve these problems without a trapdoor, without a key, without already knowing the answer or whatever. And, like, what the hell, man?

Steve:

Yeah, I mean, like I said, what this all comes down to is: smart people have worked on this for a long time, the best we can come up with is what we know, and that's how hard it is. And, you know, we have this concrete security where you basically plug the parameters into the best known algorithm, and that gives you your level of security. If somebody comes out with a better algorithm tomorrow, you just rerun the numbers with the new algorithm, and that's the new level of security. And that's happened, you know, with some of the pairing-based signatures: oh, there's a new attack, this one's out, get a new one, we've got to bump the key length. That's what we have. Lattices have a little bit stronger basis, because they reduce to a problem that is known to be NP-hard, and it's not known to be solvable by these quantum algorithms. That's why a lot of the post-quantum crypto is based on lattices, or similar problems that are outside that gray area. So yeah, we do have stronger assumptions that we could build things on.

Deirdre:

Yeah, I would counter that lattices are attractive independent of that result, which is that they are very efficient to compute, because of all this linear algebra that you just have to do, and that sort of thing. You can do a lot of things with them: fully homomorphic encryption, blah, blah, blah. And then the icing on the cake, which you don't have for any of these other problems that we reduce to, that we've been building real cryptosystems on, is this worst-case result for the shortest vector problem. Is that the one? And then, okay.

Steve:

So yeah, the big leap was reducing that hard shortest vector problem, which we have a worst-case NP-hardness result for, to an average-case hardness problem. And that average case is one that we can sample lots of instances of and basically use to build cryptosystems out of.

Deirdre:

Is that learning with errors?

Steve:

Uh, learning with errors is one of them. Yep.
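For a sense of what a learning-with-errors instance looks like, here is a toy sketch (not from the episode) with made-up, completely insecure parameters: a uniform matrix A, a secret s, and a small error e, with b = A·s + e mod q. The average-case problem is to recover s from (A, b), or even just to tell b apart from uniform.

```python
import secrets

# Toy LWE parameters, for illustration only; nothing like real, secure sizes.
n, m, q = 8, 16, 97          # secret dimension, number of samples, modulus

def small_error():
    return secrets.randbelow(3) - 1          # error in {-1, 0, 1}

A = [[secrets.randbelow(q) for _ in range(n)] for _ in range(m)]   # public, uniform
s = [secrets.randbelow(q) for _ in range(n)]                        # secret
e = [small_error() for _ in range(m)]                               # small noise

# b = A*s + e (mod q). Given (A, b), find s, or distinguish b from uniform.
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

print("A[0] =", A[0])
print("b    =", b)
# Without the error e this is plain linear algebra (Gaussian elimination recovers s);
# the noise is what turns it into the average-case hard problem lattice schemes build on.
```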

Deirdre:

Cool. Yeah. That's wild. So can you explain a little bit what it means for the worst-case instance of a problem to be NP-hard, versus the average case?

Steve:

Yeah. So, you know, there was this result in '96 by Ajtai, which was kind of the one that kicked this off, and one of the first lattice-based cryptosystems was based on it. That was a reduction of what's called the short integer solution problem to this shortest vector problem. Meaning that if you could solve the average-case-hard short integer solution problem, you could solve the worst-case-hard shortest vector problem, which is known to be NP-hard. Like the earlier example: if I had a black box that could solve this short integer solution problem, I could take a hard instance of the shortest vector problem, craft an input from it, and solve it with that black box. That's my understanding.

Deirdre:

And we basically don't have anything like that for discrete log. I don't know about factoring, but basically it's just a smooth distribution; we don't have any of these reductions.

Steve:

Yeah, I can't take one of these hard shortest vector problems and map it into a factoring problem. I think that's what's missing there: we can't solve, you know, lattice problems with factoring, can't solve discrete log with factoring.

Deirdre:

Yeah. Okay. Cool. We've gone pretty deep into the complexity hole, and these worst-case, average-case results, so let's unwind the stack a little bit. All right, we know about reductions to hard problems. What's a hard problem? Cryptography that we actually deploy isn't actually built on, like, NP-hard stuff or NP-complete stuff, for the most part. So, going back to the top: why is it kind of funny if someone says, ah, my cryptosystem is secure, therefore P does not equal NP?

Steve:

I mean, if you could prove it was secure, by our definition of secure, then you'd be proving that P is not equal to NP. Like, we don't know if cryptography exists. That's where we're at today. We all might just be making this up, and tomorrow we're all out of a job.

Deirdre:

I mean, it kind of sounds that way. And yet, the next step is that we pick our security parameters and we consider things to be secure because people keep trying to break them and they haven't yet, or they try to attack the hard problem underneath and they haven't yet. There are different cryptosystems that reduce, in their security proofs, to the same hard problem, and one might be broken and one might not, for different reasons.

Steve:

In reality, almost everyone suspects, and believes there's evidence, that P is not equal to NP. Nobody is really expecting P to equal NP. It could happen, but almost everyone is assuming, yeah, P doesn't equal NP.

David:

I think some people expect it to be undecidable as well: that despite the fact that it sounds like a very well-formed question, it's not actually a well-formed question.

Steve:

Yeah. And there may be refinements, and these things get very... there are a lot of different little tweaked complexity classes, and tons of different ways you can formulate this, because you're basically just making up a language and saying, okay, this is the class it belongs to. So you can take constant-depth circuits, you can do logarithmic-depth things, and that's its own class. I think one interesting thing to call out is that there may be a world where we have, like, symmetric encryption but not today's public key encryption. This comes up in some of the papers talking about this; people call it Minicrypt. It's the world where we would just have one-way functions. And the cool part is that if we have one-way functions, then we can build a lot of stuff on top of that.

Deirdre:

Mm-hmm.

Steve:

In the world where basically RSA and discrete log become solvable and are shown to be in P, or some approximation of that, there may still be cryptography. We can have things based on one-way functions, and that's where we have good evidence that we're at least in that Minicrypt world. But, you know, this is all best guess.

Deirdre:

Okay. A lot of what we've just talked about, besides the quantum computers, is very proofs and complexity classes and theoretical, you know, average case, worst case, blah, blah, blah. How about this: 10 years from now I have a much more powerful computer. Just presume that Moore's Law continues. It's not going to, but just pretend it continues. I have a way more powerful computer, and I decide to try to factor RSA-2048 or 4096, or attack an elliptic curve group of 256 bits, or whatever. I bet you my same algorithm, implemented in Python, say, super dumb Python or whatever, is gonna run faster on the 10-years-better computer against the same cryptosystem and parameter set than it does today. How does that relate to polynomial-time and non-polynomial-time attacks against my cryptosystem? Because one of them is going faster than the other, and nothing changed except my computer got better.

Steve:

Yeah, I mean, we can look at it this way. Let's take RSA, for example. There's this team based out of France, and they run this tool called CADO-NFS. Um,

Deirdre:

Mm-hmm. Number...

Steve:

We'll look it up. Um, but anyway, they've been setting the records for factoring, and that's the implementation name, not the team name. It's a number field sieve; that's basically the fastest implementation. Every couple of years they'll spin up, like, thousands of servers, crank on them, and factor the next RSA challenge number that's been sitting out there for a while. And if you think about how computers get faster, let's say they double in speed every year. It may not be happening, but let's approximate Moore's Law. That basically burns off about a bit of your key every year, and the algorithm may get better, their implementation may get better as well, so you might be burning off more bits each year. You look at their track record and they're making steady progress, bumping up higher and higher in what they're able to factor. It's been a couple of years now, but they have their records online. So yeah, there's progress going on there. And to get back to your original point: let's say we don't have any big discovery of a poly-time algorithm to factor, or some new complexity result, and we just keep getting better with the best effort we have today. Just because it may be exponential doesn't mean that it's impossible. So I think with RSA especially, it's not inconceivable that RSA-1024 will be,

Deirdre:

Mm-hmm.

Steve:

factored, you know, within my career. I wouldn't be surprised if somebody does that, and I feel like today we're at a point where you can throw a lot of money at it and do it. So, yeah, these things are constantly getting chipped away at by the state of the art of actual hardware and implementations, and also just by people discovering better algorithms. They might find faster ways to do this. It could very well end up being that we never have a poly-time way to do it, but we maybe have something that's, you know, exponential but growing very slowly.
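The "burning off a bit a year" point is just arithmetic in the exponent. A back-of-the-envelope sketch (not from the episode), using the commonly quoted rough figure of about 112-bit strength for RSA-2048:

```python
# Back-of-the-envelope: if attacker compute doubles every `doubling_years`,
# an attack that costs 2^k operations today costs the equivalent of
# 2^(k - years / doubling_years) "today-operations" after `years` years.
def remaining_bits(k_today: float, years: float, doubling_years: float = 1.0) -> float:
    return k_today - years / doubling_years

for years in (0, 5, 10, 20):
    # 112 is the usual rough strength estimate quoted for RSA-2048
    print(years, "years:", remaining_bits(112, years), "bits")
```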

Deirdre:

So, in that world, kind of the real world, with actually implemented attacks on real hardware, the variables are changing. I have access to a much more powerful single computer, or I have access to lots of compute: I farm out my algorithm across lots of compute that I buy from Jeff Bezos or whomever. In that world, do we still, quote, care about polynomial time parameterized by a security parameter? Or is it more just: I can implement an algorithm and execute it against this parameter set, and it's getting faster, but the inputs are not just algorithm and security parameter. It's hardware, scale-out, the latest and greatest TSMC microarchitecture, whatever. In the concrete world, do we care anymore about complexity classes?

Steve:

I mean, I think in the concrete world, we take the best known attacks and plug the parameters in, and that kind of tells us how secure it is. I think that's the best we have for a lot of these cases. Surprisingly, I haven't seen that kind of analysis for symmetric primitives; I don't know what the best attacks are there other than just brute-forcing, and maybe there are some smarter things to do. But at least on the factoring side and the discrete log side, this is, to me, the way that you would back out your security parameters.
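As a sketch (not from the episode) of "plug the parameters into the best known algorithm": for factoring, the best known classical algorithm is the general number field sieve, with heuristic cost L_N[1/3, (64/9)^(1/3)]. The numbers below are ballpark only; serious estimates calibrate the constants against real factoring records.

```python
import math

def gnfs_bits(modulus_bits: int) -> float:
    """Very rough bit-security estimate for factoring an RSA modulus of the given
    size, from the GNFS heuristic cost L_N[1/3, (64/9)^(1/3)]. Uncalibrated."""
    ln_n = modulus_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    cost = math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))
    return math.log2(cost)

for bits in (512, 768, 1024, 2048, 3072):
    print(f"RSA-{bits}: ~{gnfs_bits(bits):.0f} bits")
```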

Deirdre:

Mm-hmm.

Steve:

'Cause there were some improvements in discrete log about 10 years ago or something. I'm kind of refreshing my memory on it, but it had a funny name. I'm forgetting it right now.

Deirdre:

In terms of the concrete world, we talked about proofs and games and models, the standard model, the random oracle model, and there are a bunch of other security game models that we didn't even mention.

David:

And how cryptography might not exist, but we know in one case that it does in fact exist. It's just really stupid. And can you explain what that is?

Steve:

Yeah. So, I mean, one-time pads, you know, they exist. They're information-theoretically secure, and that holds even if P equals NP; you just have to ship a key to somebody that's as long as your plaintext. You can also think of code books kind of like this too. We may pre-agree that, you know, two lights means one thing and one light means another thing, and observing that isn't going to let you break it unless you know what's in the code book. So that's basically the world we'd be in: everybody has code books and they're sending couriers around, and we're basically in, like, Johnny Mnemonic world or something.

Deirdre:

And can you explain information-theoretically secure? Because that's not a thing we have for basically anything we've actually deployed.

Steve:

Yeah, I mean, basically it's that the ciphertext could conceivably be an encryption of any value. I'm probably completely butchering this, but my stale understanding is that any possible key and plaintext combination could produce that result, which is basically, you know, you're XORing things together. So there's no way to know that you've actually broken it unless there's other kind of auxiliary info there. So, yeah, I've probably completely butchered that.
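Steve's description is essentially right. Here is a minimal sketch (not from the episode) of the property for a one-time pad: for any fixed ciphertext, every same-length plaintext is explained by exactly one key, so the ciphertext alone rules nothing out.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))      # truly random, used once, as long as the message
ct  = xor(msg, key)

# Perfect secrecy in action: any candidate plaintext of the same length is
# "explained" by some key, so an eavesdropper learns nothing but the length.
for candidate in (b"attack at dawn", b"retreat at six", b"pancakes today"):
    k = xor(ct, candidate)               # the unique key consistent with this candidate
    assert xor(ct, k) == candidate
    print(candidate, "<- consistent with key", k.hex())
```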

Deirdre:

That's fine. No, no, that's good. So why don't we use this very cool information-theoretically secure cryptosystem in the real world? This sounds really neat.

Steve:

I mean, every three months somebody out there puts out something saying they're doing one-time pads, and, yeah, it comes down to the fact that you have to ship your key material around. So it's chicken-and-egg: you've never met somebody before, they're on the other side of the world, how do you get a disk drive full of bits to them? Literally, you'd be carrying physical material around, because you can't send it to them. I mean, let's say this is the world where P equals NP: I want to communicate with somebody, and I can't do any sort of identity over the internet, because signatures don't exist. There's no way for me to prove who I am over the internet except by physical properties: something I possess, something you possess. And in this world there's not even any symmetric encryption; we would just basically be able to do one-time pads and have code books. So I'd have to send you some binder, and then you go to a page, and I send you a picture of, like, a frog, and you go, oh, that's a frog, what does it mean? And that is obviously subject to analysis too. If I keep sending you pictures of frogs every day, somebody's going to figure out what that means.

Deirdre:

Or you say good morning at the beginning of every message or whatever.

Steve:

I mean, this is basically like all we would have in, in that, that model.

Deirdre:

So basically, it is very hard to do well in practice, not because the fundamental properties of the cryptosystem are unattractive or even inefficient; it's the humans doing the cryptography that have the problem here.

David:

You have to send someone a secret key that is as long as the plaintext. And if you can secretly get someone a secret key as long as the plaintext, you may as well just send them the plaintext, but secretly.

Deirdre:

I was gonna say something about, well, what if you have a seed, and then you have a counter, and then you hash it, and then you use, you know, key stretching or whatever. But we leapt into the world where we don't even have those things, so never mind. So.

David:

You're just describing a symmetric cipher, a...

Deirdre:

I know. And then you have segments of your message, and then you encrypt those, and then you chain them together in some sort of block mode, like a cipher block chaining mode, something like that. Oh, where have I heard about this before? So cool. So yeah, sorry: the one-time pad doesn't work in practice.

Steve:

I mean, it's funny how many times you see shady companies or cryptosystems say, this is a one-time pad, and it's basically generating some stream of things, so it's really just a crappy stream cipher. But over and over again this has come up. It's gone down a little bit, but for years you'd see people saying, oh, this is based on one-time pads. Same thing over and over.

Deirdre:

Hmm, my theory is that it is an attractive, easy-to-understand, basic security thing, and because they understand it and it comes with "information-theoretically secure", it sounds really great. And then, because they understand that part and it sounds really great, they try to build on it. And it's like, we are literally 75 years past that; we're way, way past that. Sorry, you're just getting into the crypto game, that's great, but please don't ship it. Please.

Steve:

I mean, if you think about the properties you need: you have to have some channel with the person ahead of time, and you can only use it once. So basically you're a spy. You give them the book of codes, they go out in the field, and you rip off a page every day. That's the one-time pad. That's literally why it's called a one-time pad. So, um,

Deirdre:

Mm-hmm.

Steve:

you know, that's what spies did. And they reused them, and people found the pads in the garbage. And that's the world we'd be in if P happened to equal NP.

Deirdre:

Mm-hmm.

David:

It is kind of funny that, you know, Claude Shannon came up with all the one-time pad, information-theoretic security stuff, and it was just like, yeah, clearly it's not possible besides this, I've solved it, I'm done, and he just moved on, didn't go into the rest of cryptography. It was just, well, this is as good as it gets.

Deirdre:

My work here is done. Bye-bye. Thanks, Claude. No, no, really. It would've been interesting if he'd kept going.

David:

He was too busy inventing digital logic as a master's thesis.

Deirdre:

Oh my gosh. Yeah. One thing I didn't touch on, that's part of the concrete world: what about things like side channels? Are there ways to include those in our security proof, or security game, or our model in some way? And what about things like fault injection, things that kind of start intruding on the model of, like, a trusted computing base of some sort? Are there ways to model that in the way we reason about our cryptosystems and design them and show security? Or is that just sort of a don't-look-under-the-carpet sort of thing?

Steve:

Yeah. The answer to that is both. Historically it's been don't look under the carpet, and yes, there are ways to model it. The idea here is that when we set up these simulations or these game-based proofs, you're saying exactly what your adversary can do: what its runtime can be, what oracles it has, what it can do with the system, like query it, reset it. And side channels are one of those powers. So the proof might not capture that, hey, you can get random bits of memory sampled, or you can cause an error and make this thing abort midway through, and then what does that do to the protocol? There was a pretty extensive effort to work on this; it's called leakage-resilient cryptography. They're basically trying to capture the idea that there will be some sort of leakage to an attacker. I think the challenge there is that side channels are really hard to model, because they're almost by definition surprising. Okay, so you define some leakage channel that's, like, bits of memory. Well, okay, what about timing? How do you model that for an attacker? So I think it can be done, but it's going to be hard to really list and model all the different types of side channels an attacker could have in the real world. There have been attempts to do this, and surprisingly, you don't see it as kind of a standard security model. I think it just becomes really complicated to prove things in it, like, okay, this is secure even if you leak a fifth of your memory, or some fraction of your memory. You'd probably have to do things to make it secure that would make it impractical in a lot of cases. So in practice people are like, all right, you have an isolated compute environment, that's our assumption.

Deirdre:

Yeah. Like, there are standard sort of buy-in assumptions that we make in the real world, like assuming you're air-gapped, assuming that, you know, whatever, it's not leaking like crazy, or someone doesn't have a fucking microphone up against your computer.

David:

Assuming that all inputs take the same amount of time to process works pretty well, 'cause that's something that you can kind of... I don't wanna say enforce in code easily, but it's something that, if you bang your head against the wall, you can get most computers to do.

Steve:

I mean, another thing that comes up too is composition of these things. You might have two components that are each proven secure in isolation, and then you stick them together and all of a sudden it breaks. So it's, you know, MAC before encrypt, encrypt before MAC: the order matters. The MAC is secure, the encryption is secure, but if you do them in the wrong order, all of a sudden the scheme is not. So that's the other thing that I think is complicated here: your building blocks may all be great, and you just don't know the magic incantation to put them together, and you do it wrong.
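A toy sketch (not from the episode) of the composition-order point, using a throwaway SHA-256 counter-mode keystream purely for illustration, not something to deploy: in encrypt-then-MAC the tag covers the ciphertext, so tampered messages are rejected before the decryptor ever touches them.

```python
import hmac, hashlib, secrets

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Throwaway SHA-256-in-counter-mode keystream, illustration only."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))

enc_key, mac_key = secrets.token_bytes(32), secrets.token_bytes(32)

def encrypt_then_mac(msg: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = nonce + xor(msg, keystream(enc_key, nonce, len(msg)))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag          # the tag covers the ciphertext, including the nonce

def decrypt(blob: bytes) -> bytes:
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        raise ValueError("bad tag")   # tampered ciphertext never reaches the decryptor
    nonce, body = ct[:16], ct[16:]
    return xor(body, keystream(enc_key, nonce, len(body)))

print(decrypt(encrypt_then_mac(b"order matters")))
# MAC-then-encrypt would instead tag the plaintext and encrypt both, leaving the
# decryptor to process attacker-controlled ciphertext before any integrity check;
# that structural gap is behind several real padding-oracle-style attacks.
```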

Deirdre:

The Universal Composability model!

Steve:

That is another one that I like, tried to read and I was like, what is this?

Deirdre:

Yeah. Uh, shout out to Ran Canetti.

Steve:

Yeah, it's two for two for Ran Canetti.

Deirdre:

We're vaguely laughing because universal composability is this other way that would theoretically be really nice: you could literally compose schemes together and have a way to show that their properties compose together nicely, or whatever. But it's a very different way to prove things than, say, game-based security in the random oracle model or the standard model. They're kind of looking at each other but never touching. So if you write proofs in, quote, the regular way, or at least what's now the regular way, you don't really write proofs in the universal composability model. They're kind of on other sides of the dance floor. It would be really nice; universal composability sounds really great, and then you get into the proofs and you're just like, ah.

Steve:

Yeah, this is one of the ones where I remember reading it and not really understanding it, and then kind of putting it on a shelf. I know the name and I can refer to it, but I can't do it; I don't really know how it works under the hood. But I think there's been some work that has revised it and is trying to adapt it. I saw something in the last couple of years that was like, you know, UC Revisited or something.

Deirdre:

Cool. I'm a fan of the newer state-separating proofs kind of approach. It's very much in the game-based world, and you do transitions of games, but instead of everything being in the environment or whatever, if you're doing, you know, proofs of knowledge in the simulation or whatever, you can segment these packages and stuff like that. It's very nice if you're a programmer first, or a software engineer first, versus a person who's making proofs to model all this crap, blah, blah, blah. But it's brand new. All right, so we talked about a lot.

Steve:

I think we've covered a lot of ground here.

Deirdre:

All right. Does P equal NP? No? Everything would break?

David:

You think?

Steve:

A definite no for me. It's one of those ones I'd like to be proven wrong on, but definitely no for me.

Deirdre:

Yeah, I hope not. I don't wanna use the one-time pad, how about that? I hope P does not equal NP, 'cause I don't wanna have to manage code books and worry about the one-time pad.

Steve:

I'm just gonna retire if it is. Like, I'm done, I'm done with computers.

Deirdre:

Yeah, everything will be broken. Okay. Thank you very much for joining us.

Steve:

Thank you all.

Deirdre:

All right. Bye.

David:

Security Cryptography Whatever is a side project from Deirdre Connolly, Thomas Ptacek, and David Adrian. Our editor is Netty Smith. You can find the podcast on Twitter at @scwpod and the hosts on Twitter at @durumcrustulum, @tqbf, and @davidcadrian. You can buy merchandise at merch.securitycryptographywhatever.com. Thank you for listening.