Security Cryptography Whatever

Quantum Willow with John Schanck and Samuel Jacques

Season 4 Episode 6

THE QUANTUM COMPUTERS ARE COMING...right? We got Samuel Jacques and John Schanck at short notice to answer that question, plus a bunch of other questions about error correcting codes, logical qubits, T-gates, and more about Google's new quantum computer Willow.

Transcript: https://securitycryptographywhatever.com/2024/12/18/quantum-willow

Links:

- https://blog.google/technology/research/google-willow-quantum-chip/ 
- https://research.google/blog/making-quantum-error-correction-work/
- https://blog.google/technology/google-deepmind/alphaqubit-quantum-error-correction/  
- https://www.nature.com/articles/s41586-024-08449-y
- Sam’s ‘Landscape of Quantum Computing’ chart: https://sam-jaques.appspot.com/quantum_landscape_2024
- The above, originally published in 2021: https://sam-jaques.appspot.com/quantum_landscape
- https://sam-jaques.appspot.com
- https://jmschanck.info/


"Security Cryptography Whatever" is hosted by Deirdre Connolly (@durumcrustulum), Thomas Ptacek (@tqbf), and David Adrian (@davidcadrian)

Speaker 1:

Hello and welcome to Security Cryptography Whatever. I'm Deirdre, and we have two special guests with us today and no other co-hosts. We have Sam Jaques. How are you, Sam?

Speaker 2:

I'm great. How are you?

Speaker 1:

Good, I'm a little cold, but good. And we also have John Schanck. Hi, John.

Speaker 3:

Hey, and happy Friday the 13th. I know, it's an appropriate topic. I know, the scariest thing.

Speaker 1:

The scary quantum computers are becoming manifest. We have two of the mere handful of quantum cryptanalysts in the world to come talk about Google's new quantum computing chip, which they codenamed Willow. Sam is a researcher and associate professor at the University of Waterloo. And John, I think you did your PhD in quantum cryptanalysis at Waterloo, and I didn't know that until I literally looked up your thesis.

Speaker 3:

I did. Yeah, Sam and I were there as graduate students at the same time.

Speaker 1:

That's great. So the headline is that there's a new quantum computing chip. Google also did their Sycamore chip a couple of years ago, and that was sort of like, ah, we did the random circuit sampling experiment, and this is a thing that your classical computer theoretically can't quite do, and we did it faster. And then a couple of days later, people simulated it on another data center. This seems to be a different kettle of fish. The headline seems to be they got quantum error correction working. Quote, working good, working well, as opposed to, it sort of works but not well enough. Who wants to bite on, what the hell is quantum error correction?

Speaker 3:

I can. Sam, do you want me to go? Yeah, okay. This is a big topic. Let's start with error correction, I think that might help, and then we'll go to quantum. Cool, yeah, okay.

Speaker 3:

So in error correction, if you have a bit that you want to encode, the simplest way you might do that is by writing it down multiple times: the repetition code. So you want to write down a zero, get your paper out, write down 0, 0, 0. If somebody comes along and erases one of those and replaces it with a 1, you can look at it again, take the majority vote, and go back to 0. And the great thing about the repetition code is what happens as you add more of, let's call them physical bits, the things you actually write down.
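
A minimal sketch of the majority-vote repetition code John describes, in Python (just the idea, not anything from the paper):

```python
# Encode one logical bit as n physical copies; decode by majority vote.
def encode(bit, n=3):
    return [bit] * n

def decode(bits):
    return int(sum(bits) > len(bits) / 2)

codeword = encode(0)            # [0, 0, 0]
codeword[1] ^= 1                # somebody flips one physical bit
assert decode(codeword) == 0    # majority vote recovers the logical 0
```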

Speaker 3:

If you're imagining that those bit flips happen independently, with some probability p, then as you start adding more bits, the chance that enough of them flip to change your majority vote result goes down exponentially. It goes like roughly p to the d over 2, right? And what they've shown here, with a different kind of error correcting code (well, they did look at repetition codes, but they also looked at the surface code, for quantum states), is that as you scale the distance, kind of the analog of the number of physical things that you're writing down, you get exponential suppression of the error rate of the logical state.
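
To make that scaling concrete, here is the rough p^(d/2) suppression evaluated numerically (constants and binomial factors are omitted, so treat the outputs as order-of-magnitude only):

```python
# Rough repetition-code scaling: with independent physical flip probability p,
# a distance-d code fails only when about d/2 bits flip, so the logical error
# rate falls off like ~p**(d/2).
p = 1e-3
for d in (3, 5, 7, 9):
    print(f"d={d}: ~{p ** (d / 2):.1e}")
```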

Speaker 1:

And that is significant, because for anyone who's been paying attention to quantum computation, especially people who care about it in the context of cryptography and security, for all the stuff that we already have deployed, this is the major threat vector for all of our asymmetric cryptography. And it's always been, well, we don't have enough physical qubits yet, and we don't have enough qubits, period, and theoretically we'll get enough logical qubits to run these attack algorithms with better error correction. Everybody working on quantum computation has been trying to improve error correction so that if you actually add a bunch of physical qubits into a code, like the surface codes or the repetition codes, it actually adds up to a robust, useful, long-lived-enough logical qubit that you can use to actually do quantum computation. What did they do? Did they just nerd harder and get better at doing qubits?

Speaker 2:

Yeah, that seems to be it. I mean, I don't know about John, but the level of engineering of actually making these physical things better is completely outside of my expertise. It's sort of like, once you get them good enough, then I'm like, oh, there's this nice theory of error correcting codes.

Speaker 1:

Okay.

Speaker 3:

Yeah, I think we can go into a little more detail. They do seem to have, what is it, like a factor-four improvement in the main metrics that count for the physical qubits, the T1 and T2 times, over the last machine? I'm kind of hoping you have the numbers in your head. So they've got a factor-four improvement there, and that helps a lot, because if you're starting with a better physical qubit, it's going to be easier to get a nice logical qubit out of it. Another thing I'm seeing a lot of in this paper is a lot of investment in the actual classical error correcting algorithms. They've got two that they talk about. There's the sparse blossom algorithm, and then another one that, I don't know, Sam, I didn't really read too much about it, but it's using some machine learning tricks.

Speaker 2:

Yeah, and I mean, the surface code is kind of a funny thing, because compared to your majority vote, which is an easy algorithm to implement, you literally take the majority, when you look at the surface code and try to say, well, what errors happened, that's actually a non-trivial computation to solve. And so they've put a lot of work into that side of it, saying, you know, what kind of classical algorithms can we do, and the performance of the code is effectively given by this classical co-processing.

Speaker 2:

And this is something that there's been a lot of work on, and I think they've got a nice parallelizable algorithm that can run very quickly. I think they also had a neural network based thing, and it kind of makes sense to train it. Each of the qubits in this device is going to be unique and have unique errors, and so you'd want to train your error correction to say, well, this qubit is more likely to do this kind of error, so let's assume it did that rather than a different error.
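
A toy illustration of why per-qubit error rates matter to a decoder, using a 3-bit repetition code (the flip probabilities here are made up for the example):

```python
# The two parity checks (b0^b1, b1^b2) narrow down which bit flipped; a
# decoder that knows each qubit's individual flip probability picks the most
# likely error pattern consistent with the observed syndrome.
import math
from itertools import product

p = [0.02, 0.001, 0.005]   # assumed per-qubit flip probabilities

def syndrome(err):          # err = (e0, e1, e2), 1 = this data bit flipped
    return (err[0] ^ err[1], err[1] ^ err[2])

def decode(s):
    def likelihood(err):
        return math.prod(p[i] if e else 1 - p[i] for i, e in enumerate(err))
    candidates = (e for e in product((0, 1), repeat=3) if syndrome(e) == s)
    return max(candidates, key=likelihood)

# Syndrome (1, 0) could be a flip on bit 0, or flips on bits 1 and 2 together;
# with these rates the decoder blames the flakier bit 0.
print(decode((1, 0)))       # -> (1, 0, 0)
```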

Speaker 1:

That's fascinating. That's something that I wouldn't have thought of, because you still have a brain model of classical transistors, even though these physical qubits are not really transistors. There is some band of normative expected behavior for a specific class of thing that stores a bit, and it's not a bit, it's a qubit. But the fact that you're basically saying, no, these physical qubits have enough variance that we're going to train our classical decoder, our error decoder, on all of them individually. They're such special little snowflakes that we have to pay attention to them all that closely. I mean, this is all algorithmic and all of that, but it's basically saying there's still such a variance in these things, and there's only like 101 physical qubits on this chip, so it makes sense and you can do that.

Speaker 1:

But that's something that I guess I wouldn't even have thought of yet. We're so far away from anything close to productionizing this sort of chip-making process, this physical-qubit-making process, that this is a thing you do have to do. So, the threshold for error correction: I guess, what is the threshold? Because in the paper and in all these blog posts, they basically say, we have crossed this threshold of error correction, where scaling up the distance of the surface code actually drives down the error rate of the logical qubit that you are getting out of this mesh of physical qubits.

Speaker 3:

Yeah, the basic idea there is that as you scale out the distance of the code, you're getting an improvement in the logical error rate, and the threshold theorem is saying, if your physical qubits are good enough, then all you have to do is scale out the distance in order to get a state that can persist for an arbitrary amount of time. And concretely, what they've done, I mean, their distance-7 code. I always picture it as a 7 by 7 block of qubits, but it's a little more complicated than that; there's actually like 2 times 49, give or take one, in there. It's kind of like two laminated grids of qubits. But that one has qubit lifetimes that are like 200 microseconds, compared to the 60 to 80 microseconds, somewhere in that range, for the physical qubits. So that's an enormous improvement.
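
For reference, the usual counting for a distance-d surface code patch, which matches John's "two laminated grids" picture (this is the standard layout from the literature, not a number quoted in the episode): d squared data qubits interleaved with d squared minus 1 measure qubits.

```python
# Standard distance-d surface code patch: d*d data qubits plus d*d - 1
# measure qubits, i.e. 2*d*d - 1 total.
def surface_code_qubits(d):
    return d * d + (d * d - 1)

print(surface_code_qubits(7))   # 97 physical qubits for the distance-7 patch
```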

Speaker 1:

And so basically, the threshold that we're talking about is that the physical qubits are good enough on their own, in terms of the rate of errors that they experience, that we can put them together in this distance-5 or distance-7 surface code, this array of approximately seven by seven physical qubits, such that we finally achieve better logical qubit error rates than any of the physical qubits on their own. And that's the threshold that quantum computer scientists have been trying to achieve.

Speaker 3:

That is a milestone, yes. The threshold is a particular error rate for the physical qubits. Yes, yes.
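
A sketch of what "below threshold" buys you, using the common rule of thumb that the logical error rate scales like (p/p_th) to roughly the (d+1)/2 power. The prefactor A and the roughly 1% threshold used here are illustrative assumptions, not numbers from the paper:

```python
# Below threshold (p < p_th), adding distance suppresses logical errors
# exponentially; above threshold, a bigger code actually gets worse.
def logical_error(p, p_th=1e-2, d=7, A=0.1):
    return A * (p / p_th) ** ((d + 1) // 2)

for p in (2e-3, 5e-3, 2e-2):
    print(p, [f"{logical_error(p, d=d):.1e}" for d in (3, 5, 7)])
```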

Speaker 1:

Okay. I hand-waved at this earlier, but basically this is the way we think that large, reliable quantum computers will become manifest, such that we can actually do useful computation with them, right? Now that we have physical qubits with decent error rates, such that we can create logical qubits with these surface codes or other error correcting codes, we can tie them all together with a little bit of classical operational sidecar control flow or something, and then, in a couple of years, not really, we'll have quantum computers that do really useful quantum computation. This is kind of the roadmap we were thinking it would follow, but now we're finally there, right? Yeah.

Speaker 3:

Yeah, this is not a surprise, I don't think. This is definitely something that's been on the roadmap, and we knew it would come about this time. We've known that for a long time. There are some things that are missing from this if we want to get to doing computation.

Speaker 3:

Okay, so this is really building a quantum memory. There is more involved in doing computation than just having a memory. Right now, I believe, they're doing their decoding of the measurements you get out of the error correcting code, the kind of parity bits that you check, out of the fridge, and they're doing it offline. They're trying to do it fast enough, and in the paper they do talk about this, that they do it fast enough that, in principle, they could feed those results back in and take action based on them, which you only need to do if you're going to do computation. You don't need that for memory. So they are succeeding in building a memory. But in order to start doing computation, you are going to have to feed the results back in, and you're going to have to perform physical gates based on those results. And maybe Sam knows a lot more about that than I do.

Speaker 1:

So, Sam, that's one of the questions I have. This is basically just having logical qubits with decent error rates. I think it's 10 to the negative 3 that they got, or about that; you can correct me.

Speaker 1:

The goal they would like to hit is 10 to the negative 6, for something that's considered almost indefinite or whatever. But anyway, that's not even including the errors introduced by applying quantum gates and twiddling with the states of these logical qubits. How does that interfere with actually accomplishing quantum computation, applying quantum circuits, depth and gates and all that stuff, to these nice shiny new reliable logical qubits?

Speaker 2:

Yeah. I mean, the interesting thing about the surface code is that when you do gates on it, more or less you're doing almost the same operations you do to just hold the memory. So it's kind of like you shut off one part of the surface code at some point and then you turn it back on, and you do this in a very careful way, and this ends up kind of shuffling the data around, and that makes the computation work. So in that sense, once you've got this grid that can be a surface code, if you make it big enough, just the pattern of surface code you do on it does the gates.

Speaker 2:

Now, it gets a little bit more complicated, and you do need this feedback eventually, and at some point there are actual extra gates you would have to put in. There's the Hadamard gates, which are probably more straightforward, I think, for the surface code, not demonstrated here. But the real tricky one is the so-called T gate, and this is something where you need to build this whole extra factory, this giant footprint. This is something that you do physically, but again, your physical qubits you're not expecting to be very high quality.

Speaker 2:

So if you want to make this T-gate at a high enough quality that you can actually use it in your computation, and, you know, use the billion of them that you would need to do something like factoring, you need to make it better. And so there's this very complicated distillation process where you again have this redundancy, like in error correction in general: you do a lot of these little T-gates and you mix them together in a way that kind of cleans the error off and makes one better one, and you chain this together. So that would be a very nice milestone to see, and that is something that we are very far from. Because even the smallest version of this factory is already quite a lot of logical qubits, or sort of the same footprint as quite a lot of logical qubits.

Speaker 1:

Like, how many? I think this chip, I forget, they have like 101 physical qubits, and they need seven by seven to make one logical qubit. So that's 49, modulo one or two, so maybe they have two logical qubits or something.

Speaker 3:

No, they've got one logical qubit, and I think they would actually argue that they don't have any.

Speaker 2:

Yeah, I'm happy to defy them and say it's a logical qubit.

Speaker 1:

Why would they argue they don't have it?

Speaker 3:

Well, you don't really have a logical qubit until you can do interesting quantum operations on it, and, in particular, you're going to need two before you can do anything interesting. With one logical qubit you are doing classical computation with one bit.

Speaker 1:

You're just kind of oscillating in place and saying, oh, look how stable I am.

Speaker 3:

Yeah, so I think, and I've seen some quote, maybe in the paper or one of the news articles, that they want to get to a logical error rate of like 10 to the minus 6, and they want two qubits in there with that error rate, before they claim a logical qubit.

Speaker 1:

So if they just took two of these chips, and you clone all the superconducting refrigeration and all the classical error decoding controllers and all that sort of stuff, can I just hook them up? Can I just USB-C between these two quantum chips, and now I have two, right?

Speaker 3:

No, not if they're not directly in contact with each other. There are some operations that need to be performed between neighboring pairs of physical qubits, and if you have these patches of surface code spread out in different refrigerators on other sides of the room, you're going to need to use some quantum communication in order to enact those gates over that distance. So that's quite a lot of additional work. We have seen people doing quantum communication between fridges with superconducting qubits. That's something that people are working on, and it's one way to scale when your chip gets big enough that it won't fit in the fridge anymore. But probably the more direct way to scale, for a long time, is to build bigger fridges.

Speaker 1:

First, like, how big are these fridges? They're bigger than the one in my kitchen, but how big? Is it, like, shipping container size?

Speaker 3:

Container-ship size? Container size? No, no, no. The ones that we had at the Institute for Quantum Computing were not much bigger than your refrigerator. Maybe they were a little taller. These ones that Google's using aren't going to be substantially bigger. I don't know the volume of it, just roughly.

Speaker 3:

I was just going to say, you have to imagine that the chip is a few square centimeters, and you have a lot of machinery there to cool a volume that's a little bit bigger than that, and you need to run wires into it and all that. The part of the fridge that's cold is actually a pretty small volume, but then there's a huge apparatus around it in order to do, what's it called, the evaporative cooling of helium, I forget, in order to make something that cold. And I mean, it's cold. It's like, was it millikelvin?

Speaker 3:

Millikelvin, 10, 20 millikelvin. We're talking, like, if you were in deep space, so far out that your eye could not see stars. Wow. It's colder than that. Wow. Right, that's cool.

Speaker 1:

We can get back to, like, who's going to be using these. I'm not going to have a quantum computer in my house anytime soon, even if I could afford it. At least, all the things that Google's come out with have been superconducting physical qubits. There are other approaches to the technology, but it just seems that a super-cooled, millikelvin-stable environment is required to even get to these results anyway. But maybe I'll have a very cold server rack in my house in 2050 or something like that.

Speaker 3:

Maybe. I mean, some of us are still skeptical that these things are useful even if you can build them. I mean, for surveillance, yes, that's clear. I don't know how many numbers you're trying to factor in your kitchen. I don't particularly want to. Seems like a big liability, honestly.

Speaker 1:

Yeah. So can we talk a little bit more about the classical decoders? Because they have a couple of algorithms. This is one of those things I find interesting: yes, we've got these physical qubits and we're creating these logical qubits from those, but there's a lot of classical computing going on to make all of this work. How does that even work? Why are we, quote unquote, allowed to do a bunch of stuff classically but still get, quote, the benefit of quantum computation, I guess?

Speaker 3:

Should we talk about the repetition code to start? Sure, why not. I think this is something that's easier to understand. We might lose people on quantum error correction, but they do an experiment in this paper using the repetition code, and fundamentally you're doing the same operations, and you're doing computations that are at least similar. So, earlier we were talking about this repetition code.

Speaker 3:

You write down a few physical bits. If you want to encode zero, you write down 0, 0, 0, or 5 zeros, or 7 zeros. If you wanted to correct it, you could go and look at the actual values that are there: if an error is introduced somewhere in your string of zeros and it turns into a 1, you could go and fix that. But you could also do it indirectly; you could do it without looking at that string of zeros. And that's really important for quantum computation, because we can't go and look at the state without eliminating the nice quantum correlations. You want that system to go into a nice state that's kind of energetically separated from the other logical states, meaning you would have to pump a fair amount of energy into the system, and it would have to come from somewhere, in order to transition from one to the other.

Speaker 3:

So, to do this, on top of your string of zeros, kind of in a pyramid shape, just one level up, imagine that you have some bits that are going to check the parity of the ones right below them. So imagine three zeros, and then right on top of those are two more bits, sitting in between the pairs of zeros, and what you're going to do is periodically measure the parity of the two bits below each one of those parity bits and get that value out. So you'll actually look at the parity, but the computation that you do will not; it has to evolve unitarily in time.

Speaker 3:

It can't actually look at the data bits there, but we're going to somehow get the parity of those two bits that are below our measurement qubit, our measurement bit. If you do that, those two parity bits are enough to tell you where a single bit flip error is. You can work out the different combinations and see how you can correct any one of these. And that's exactly what they do in this experiment. You can go and look; maybe I can find the figure, at least to tell people where to find it. In the version from August; this paper did come out in August, right?

Speaker 1:

Yeah. It was funny, I went to Nature and it's like, ah, this says you have to pay $20 to read this article. I'm like, no, it's on arXiv.

Speaker 3:

Right. Okay, so if people are curious, you can go and look in the supplemental materials at figure S7, and you can see this zigzag shape. And this zigzag shape is exactly what I'm talking about, with the parity bits sitting on top of the data bits. So yeah, exactly what you do is these parity measurements, without really looking at the value of the data bits, and you pull those out and feed them to your classical decoder. It's going to look at all those parity measurements and figure out where the errors are, and if you want, you can go and correct them. You actually don't have to; you can just keep track of what the actual state of the system is in software. But you have to periodically go and look at the parity bits and look at how they change over time. So that's exactly what they do in this paper. They've got their grid of qubits, and they pick some of them to be data bits and some of them to be parity bits, and there's this kind of zigzag shape that traces out over the surface. They encode some bit string in the data bits, I guess they encode zero or one, and then they measure the parity over time and see that they can keep it in that state for long periods of time. And this is honestly maybe the headline result for me, because a lot of people, especially in our field, are like, well, how big of a number can I factor today? What's the progress? And if you look at the change in how they've been able to run these repetition codes from 2021 to today, they go from being able to maintain a regular bit, not a qubit, but just a regular bit, for 10 seconds in 2021, to five hours today. Enormous change.
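
A toy version of that "watch the parity bits over time" loop (the error probability and round count here are made up, and a real decoder also has to handle noisy parity measurements):

```python
# You never read the data bits, only the parity checks, round after round;
# an error shows up as a change in the syndrome between rounds, and the
# decoder tracks the inferred state of the system in software.
import random

random.seed(1)
data = [0, 0, 0]
prev = (0, 0)
for r in range(10):
    if random.random() < 0.2:               # toy physical error rate
        data[random.randrange(3)] ^= 1      # a bit flip somewhere
    synd = (data[0] ^ data[1], data[1] ^ data[2])
    if synd != prev:
        print(f"round {r}: syndrome changed to {synd}")
    prev = synd
```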

Speaker 3:

And fundamentally, what you're doing, if you're stabilizing bits or stabilizing qubits, is not that different. To go from the repetition code to the surface code, you're doing not just parity measurements like we were talking about; you do a slightly more complicated circuit. It involves not just pairs of qubits that are near each other, but sets of four qubits. So the number of physical gates that you have to do is a small constant factor larger than what you have to do for the repetition code. But it's the same kinds of operations. There's no real fundamental difference.

Speaker 3:

And if you can make the repetition code work, you can make the surface code work. And with the repetition code you can get higher distances on their grid of qubits: they're able to go to distance 29, where with the surface code they're only able to go to distance 7. So you get these enormous error suppressions, because they're able to go to those high distances. They don't actually see any bit flips in their five-hour experiment. Wow. So if you imagine that you're able to make a distance-27 surface code, it's going to be the exact same hardware, you just need more qubits, and you could get similar error suppression and keep a qubit alive for five hours. So that's the headline stuff for me. When I look at it, I ask, what metric has improved? And I need something fine-grained enough to see the change over time. We can't look at factoring; that's ridiculous. We're not going to see any progress on factoring until it's way too late. It's going to be, like, in 2032.

Speaker 3:

And it's like, oh, we factored a 10-bit number, and then a couple of years later it's a 1,000-bit number. That's just a step function. I want to see continuous change over time. I want to see how these things are improving. And you have to look at some of the other experiments that they're doing, and there it's really clear that this is substantial progress.

Speaker 1:

Hell yeah. This makes me pivot over to Sam's chart. Every time we have a new result in quantum computing, we're like, okay, where does this lie on Sam's chart, with qubits on the x-axis and, oh gosh, what's on the y-axis? Like how much of a threat it is, in bits of security. Well, it's error rate versus the number of qubits. Error rate.

Speaker 2:

I'm sorry, yeah. And I mean, this paper kind of shows the limitations of that, because there are so many metrics you can use to think about a quantum computer. John's saying, basically, look for one where you can have a nice fine-grained view over time, and qubit count versus error rate is a bit crude. And you can kind of see that when I tried to draw this chart and I put Google's result there, which I feel is one of the best quantum computers there is right now, probably the best: it's kind of in the interior of this sort of hull of current quantum computers. It looks like other ones are doing better, because if you just take the maximum error rate, you know, it's not the best.

Speaker 2:

And yeah, I think the point of this is to illustrate, okay, we've got this little cluster over here at the bottom left, and factoring RSA is at the top right, so there's a big distance between them. I don't want it to be, oh, we've moved one millimeter, therefore... You kind of have to be a lot more careful and look at these kinds of consistent experiments and things. And you also run into this issue that if we all started listening to John, which we should, and saying, here's how these repetition codes have been improving over time, then every quantum startup is going to kind of over-engineer repetition codes, right? One of the things in the field is that, since maybe five years ago, I feel like there's been a change in the somewhat public-ish discourse around quantum computing, where people have started saying, oh well, this many physical qubits, but what about logical qubits? Do we have any logical qubits?

Speaker 2:

And now this has worked its way through the quantum computing companies. So Quantinuum had this announcement: oh, we have 50 logical qubits. And you say, well, what? And in a misleading technical sense, yes, they do. They have an error-correcting code that encodes 50 qubits into 52 physical qubits. But they're very, very different from the kind of logical qubit that Google made, and completely different from what would actually allow us to factor anything. So yeah, bit of a rant on that.

Speaker 1:

No, I love it, because if you don't know what you're looking at, you have to dive in deep. Because if you just look at the headline from X, Y, and Z company, they're like, yeah, we have 52 logical qubits. And you're like, yeah, well, they're, quote, shit logical qubits. You can't do anything theoretically useful with them. Whereas this one is like, we have basically one logical qubit, and yeah, you can't actually do any quantum gates with it, but it's probably the best one we've ever seen, ever,

Speaker 1:

because of all this other work on the error correction and all this other stuff.

Speaker 1:

So I guess, yes, trying to accurately reflect it: the words that go along with your updated chart do a lot better to explain the salience of the results in this work. But throwing it in there, you're like, ah well, we've got this really very reliable logical qubit, but we still can only factor 35. So, man, call me when we can factor a 1024-bit RSA modulus or whatever. As John said, we're probably going to be around where we're at until there's a major step function and we are factoring a 1024-bit RSA modulus. It's paying attention to those metrics that are very niche, and you have to know what the hell is going on under the hood to track them. Because tracking, oh, just call me when you can factor a 1024-bit RSA modulus, is like, yeah, it'll be too late by the time we call you. So deploy your post-quantum cryptography now.

Speaker 3:

Yeah, absolutely. I mean, I think that this really is the key point: we have to pick the right metrics to look at these things. I see a lot of people asking, will we see exponential growth in quantum computing? It's like, really, by what metric? Because, I mean, a lot of people don't understand compound interest, right? Exponential growth is not hard. You have an immature technology and you get a bunch of engineers in the room and you tell them, can you make a 10% improvement over last year? And you get exponential growth, as long as there's enough runway there, enough interesting parts of the technology to improve upon. And in quantum computing you have so many different new technologies where improvements in one of them lead to full-system performance improvements. Like improvements in the cooling technologies let you just be more wasteful: put more of your classical error correction on your noisy classical chips that are spewing thermal radiation, or put them closer to the chip, because you can cool the thing better.

Speaker 1:

And that reduces latency, and that makes the feedback even tighter.

Speaker 3:

Exactly, and you can talk about that for so many different aspects of the technology, like the physical manufacturing of the qubits.

Speaker 3:

Getting better physical qubits is going to mean that you need smaller codes to run your algorithms.

Speaker 3:

Designing better codes means you use fewer physical qubits in order to run your algorithms.

Speaker 3:

When we start talking about actually doing computation, we're going to be writing compilers to do layout, because we are talking about these 2D grids of things: you have to lay out the bits on the surface in order to minimize data movement costs and all that.

Speaker 3:

That's another thing. We're going to have this whole period of time where people can optimize compilers and get huge improvements in the cost to run algorithms, and, at the same time, the cost of the algorithms themselves is falling. Like, there's a new paper this year on reducing the number of qubits that are required for factoring, down to n over 2 from 2n, a factor-4 improvement. I haven't seen the full system analysis of what that means for the number of physical qubits that you need. But there are a lot of avenues for improvement, and a lot of people that are interested in it, and a lot of money, and it's really hard to see how you would not get exponential growth, right? Those little 10% improvements here and there that compound into exponential growth.
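
The compound-interest arithmetic behind that point, for the skeptical (a hypothetical steady 10% yearly improvement; the rate is an assumption for illustration, not a measured one):

```python
# Steady 10% yearly improvement is already exponential growth.
for years in (5, 10, 20):
    print(f"{years} years: {1.10 ** years:.1f}x better")
```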

Speaker 1:

Yeah. And kind of going back to the gates, where we discussed a little bit the surface codes and the repetition codes.

Speaker 1:

The progress there implied that this will kind of carry over to the gates themselves. Because we can do as well as they're doing with the error correction and these different codes, we will also get better at actually applying quantum gates. And there's always some error that's introduced when you're applying quantum gates, right? Does this give you any kind of intuition, like, okay, we have these good error correcting codes, and that may affect how much error actually applying gates will introduce? Or is it just, yeah, we can't say at this moment? And I'm thinking specifically about applications of Shor's algorithm or anything like that, which is, like, all factoring. And if this is sort of a big shrug, we have no idea, we can say that too.

Speaker 3:

I don't really have intuition there. Sam? I'm also seeing a blank stare.

Speaker 2:

Yeah. I mean, again, except for a couple of special gates, just having the logical qubit kind of is the gate, right? And the T-gate is also something that we're not even going to see for a while, because even to do the full distillation, there's kind of a minimum size to see anything. And that is another one of those things where it might be kind of a step function, where no one has ever demonstrated a distilled T-gate in a surface code, and then all of a sudden, oh yeah, someone did, and then five years later we've distilled enough to do something interesting.

Speaker 1:

Can you say a little bit more about that? Is it just a blocker to have a decent-fidelity logical qubit, which is kind of blocked on error-correcting codes, or is there something interesting about T-gates that makes them difficult?

Speaker 2:

Yeah. So, one of the interesting things about these codes, and it's kind of different from classical computing, is that if I use an error-correcting code to send data over Wi-Fi or something, the router gets it, decodes it, and then operates on unencoded data. Your quantum computer can't do that, because as soon as it decodes it... if you look at these physical qubits, their coherence times are measured in microseconds; you really can't do any computation. You have to keep it encoded all the time. And you've done all this work to make this code, to make the quantum state kind of stable so nothing can change it, and then you have to go in and say, well, I actually want to change it myself sometimes. And so it becomes that much harder to do that. And I totally lost my train of thought there.

Speaker 3:

Have we talked about what the T gates are doing and why you need them?

Speaker 1:

I think we've mentioned them, and I know a little bit about Hadamard gates, and that's it. And the fact that, in quantum computing, we're all used to Boolean logic and ANDs and ORs and NANDs and things like that, but there's a whole zoo of gates that are specific to quantum computing. So yeah, tell me more about T gates.

Speaker 3:

Yeah. I mean, it's probably not worth actually describing mathematically what's going on here, but, hand waves, the thing is, you can imagine applying some gates by just touching every one of the qubits in your surface: you do some physical transformation of each one of those qubits and you get some transformation of the corresponding logical state. For instance, if you want to do a bit flip, it's really easy: you do single bit flips all the way across, from one side of the surface to the other. That'll do it; you have to connect one edge to another. But if you just do these things where you kind of touch each bit and do something to each one, you essentially only get classical computation. In order to get full quantum computation, you do need some additional resource state, and that state gets consumed in order to enact some transformation of the state that you cannot reach by just fiddling with the individual bits in your code. So you can decompose any circuit into the little individual bit-twiddling ones, plus some of these special ones that consume a resource state.
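
A sketch of that "flip all the way across" picture in the one-dimensional repetition-code toy model (the surface code analog is a line of flips from edge to edge, but the toy shows the point):

```python
# Flipping every data bit changes the logical value, yet every parity check
# between neighbors is unchanged, so the error correction machinery never
# notices the (intentional) logical bit flip.
def checks(bits):
    return [bits[i] ^ bits[i + 1] for i in range(len(bits) - 1)]

data = [0, 0, 0, 0, 0]
before = checks(data)
data = [b ^ 1 for b in data]     # logical flip: touch every physical bit
assert checks(data) == before    # syndrome is identical
print(data)                      # [1, 1, 1, 1, 1] -> logical 1
```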

Speaker 3:

And so, if you're going to do surface code computation, you need these magic states, that's what they call them, the T states. You need the magic state, the T state, and you need to distill those from physical T states that you can prepare just by doing rotations of physical qubits. But you get noisy ones, and you need a high-quality one. So there's a circuit that you apply to a collection of, like, 15 physical T states, and then you're able to put those together and get a logical T state encoded in the surface code, and then you're able to do some non-trivial, genuinely quantum transformation of the encoded state. So you need these distilleries alongside your circuit, and it's about the same size as the circuit you're going to perform.

Speaker 3:

So, at least in my mental model, and maybe somebody can correct me on the exact numbers here, if you're going to do a circuit with n qubits, you're going to need a similar size, in area and time, of running these distilleries and feeding those states into the computation that you're doing. Huh. So it's going to be a little while. I mean, I think the actual distilleries are not that much; they're not bigger than individual logical qubits. You're just going to run them in parallel with your logical qubits. Sam's maybe going to correct me there.
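
For the curious: the standard 15-to-1 distillation round takes 15 noisy T states at error rate e and outputs one at roughly 35 times e cubed, to leading order. That formula comes from the general magic state distillation literature, not from this episode or the Willow paper, and the starting and target error rates below are assumptions:

```python
# Chained 15-to-1 magic state distillation: error e -> ~35 * e**3 per round.
def distill(e):
    return 35 * e ** 3

e, rounds = 1e-2, 0              # assumed starting T-state error rate
while e > 1e-15:                 # assumed target for a large computation
    e, rounds = distill(e), rounds + 1
print(rounds, e)                 # 3 rounds: 1e-2 -> 3.5e-5 -> ~1.5e-12 -> ~1e-34
```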

Speaker 2:

Yeah. So, kind of, the bigger you make it, the better quality you get out of it, and at the scale of Shor's algorithm or something, a footprint of about 120 logical qubits gives you one of these factories.

Speaker 3:

It's like a constant factor. It's not that much.

Speaker 2:

Constant factor. Yeah, yeah.

Speaker 1:

This is wild to me, because if you're looking at the Willow paper, the Willow work, they've got the data qubits, they've got the parity qubits, they've got some other qubits in here; these are different physical qubits being used for different things. Then we're talking about actually instantiating these gates, and you have to have a sidecar of other qubits that does it. This is so funny to me, because if you think about general digital computation, a gate is a gate, or, not even a gate, a transistor is a transistor. Maybe you've got some other funny little things if you've got an extremely heterogeneous bespoke architecture or something like that, but for most people who took an architecture class in school, a transistor is a transistor. And in this world it's not like that at all. And so, we mentioned compilers earlier: you're going to be able to do a little tweak and get some crazy optimization out of your quantum computation, and there's going to be a bespoke compiler for every bespoke quantum chip for decades. I'm going to bet we're not going to have a consistent instruction set architecture for quantum computation for a very long time, I think, because there's just so much stuff going on, so much almost-bespoke hardware. Though maybe you can tell me differently: with these different physical qubits, could you decide, ah, we've got 101 on this chip and we're using them in such and such a way, such that we can do this surface code, but we could just decide to use them differently for whatever?

Speaker 1:

Is that a thing?

Speaker 3:

That's absolutely the case. They're the exact same physical qubits. You just use some of them to produce these resource states, these magic states, and some of them to be logical qubits. And that is all that you need: you need the resource states and you need the logical qubits, and you're off to the races. There's not a bunch of other special types of mysterious qubits that'll show up when we start talking about bigger algorithms or anything. This is it. And as far as architecture and instruction set: you're doing physical qubits in the plane, 2D nearest-neighbor connectivity, and your instruction set is the surface code. It tells you how to do the basic operations. I guess you can probably cook up some operations using different resource states and things like that that are a little bit different; I don't know what people are looking at. But yeah, we basically know what that's going to look like.

Speaker 1:

That's cool.

Speaker 2:

Yeah, I mean, there are little things like sometimes making the distilleries a little bit better or something like that; that kind of improvement exists. And as far as a compiler goes, this is something where, if we're absolutely limited by the size of the device, and let's say you wanted to break cryptography, it would be in your best interest to hire a couple of smart people to stop automatically compiling and just, by hand, cram the circuit, design the circuit to be as small as possible. So I think there will also be a lot of this kind of by-hand, very component-by-component work. When you're saying the qubits are these special snowflakes, probably transistors are too; it's just not worth anyone's time or money to care. Whereas if you've only got 105,

Speaker 2:

you treat them all with a lot of reverence, I guess.

Speaker 1:

Fair enough. Just before we dive off in a different direction, can you give an overview of, okay, say we've got these really great, high-fidelity, 10-to-the-negative-6 error rate logical qubits, and we've talked about T gates and Hadamard gates and distilleries and factories and things like that. When we're actually running a quantum algorithm on a chip full of nice, really reliable logical qubits, what are we doing? Because a general digital computer sort of person is like, yeah, I've got all this Boolean logic, I have adders and multiplexers, I've got all of this stuff built upon itself in terms of Boolean logic, and I give it some instruction set commands, I tell it to add and store and load and things like that. What is happening when I'm running a quantum algorithm on a really nice quantum computer that we will totally have available to us in five years, right?

Speaker 2:

So it'll be a little bit different, because in the classical computer case you're kind of pushing your data through the gates, right? You've got these gates as this sort of static object that does the computation. Whereas in the surface code quantum computer, where you imagine you actually have these logical qubits, the qubits themselves are just kind of sitting there, and when you want to do some operation on them, it's a process that you enact on them. Like, with the superconductors, you've got this microwave pulse. So what you'd imagine is a very carefully synchronized set of instructions: you know what pulse is going to which qubit at which time, and this is kind of entangling them and modifying their state very carefully over time. And just sort of changing that pattern on the same set of qubits is what ultimately makes these gates happen.


Speaker 2:

And then, going one step higher, when you think about an adder or something, you're putting something together that is basically a circuit built out of these gates. And again, because you don't have to physically build the gates, you're doing them dynamically, you don't need a fundamental unit which is an adder that you use every time. You don't need to think, oh, I've got a 32-bit architecture or something. You really can build it on the fly to whatever works, to make the circuit as efficient as you want. And so this is something that we can also think about and work on now, where we can say, well, we've got this layout; how would we actually lay out this pattern of operations efficiently? You can even draw sort of a 3D picture of this thing and try to push it together.

Speaker 1:

Is that sort of like, if you're going from left to right, the gates you're applying over time? Okay, so I started at t0, this is my state, maybe it's just literally empty, and you do some quantum gates, and now you're at t1 and it has a different state, and you apply different quantum gates, and you're at t2, and so on and so on. That's the kind of 3D?

Speaker 2:

Yeah, by convention time goes up instead of left to right, but exactly the same. Awesome.

Speaker 3:

Yeah, I did just want to add one thing to that. I mean, the qubits, the quantum chip itself, that's a memory, and all of the computation is happening in the error correction and the classical signals that you're sending down to the chip. You're changing how that system evolves in time in order to enact a transformation of the quantum state.

Speaker 3:

So whenever you look at a quantum circuit, you should actually be thinking, this is what the classical computer does with changing microwave signals, and changing how it's going to interpret measurement results, and things like that. It's not forcing a bit through an adder.

Speaker 1:

It's so cool, because, I think it was with Sycamore, I was like, I want to really actually grok this adversary to all the cryptography that I love. And it's such a cool mind warp: if you learned how to compute the digital way, this is just so different, but it's also beautiful and fascinating, and everything goes into 3D. Okay, Sam, what's your hot take for the next 10 years of quantum computing and how it applies to all the classical asymmetric cryptography we have deployed?

Speaker 2:

My hot take on the 10 years is that nothing will happen to our asymmetric cryptography, if we keep having the kind of progress we've had now. You know, we have Google's chip, and maybe we have two logical qubits, or what they would actually call a logical qubit, next year, maybe the year after that. In 10 years, we still don't have enough to factor anything, right?

Speaker 2:

And the other kind of worrying thing about this, for the field of quantum computing, is that if you're in the logical layer and you've only got a few dozen logical qubits, that's not doing anything that interesting. That's a level, you know, classical computers can actually simulate. So if you have up to, say, 40 logical qubits, you're not doing anything a classical computer couldn't do anyway, and with all the error correction you're probably doing it even worse than the classical computer. And even if you're above 40, is there an actual algorithm that's going to outperform the classical computer at that size?
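
The back-of-the-envelope behind that roughly-40-qubit ballpark: a brute-force classical simulation stores 2^n complex amplitudes at 16 bytes each in double precision (this is the naive state-vector method; cleverer simulators shift the boundary somewhat):

```python
# Memory for a full state-vector simulation of n qubits.
for n in (30, 40, 50):
    tib = 2 ** n * 16 / 2 ** 40
    print(f"{n} qubits: {tib:,.3f} TiB")   # ~0.016 TiB, 16 TiB, 16,384 TiB
```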

Speaker 2:

You know, maybe you need 100 or more, even for things that aren't factoring, like different kinds of applications for quantum computers. So it's possible that in the next 10 years there will be nothing useful from error corrected quantum computers. What might save things for the field of quantum computing is things that don't actually need all that error correction: if you found something that can maybe do something useful, definitely not factoring, definitely no threat to cryptography. To me, the leading candidate is chemistry applications. But you can do that without this full layer of error correction.

Speaker 1:

Mm-hmm. Is that the stuff that's, I think, on your nice chart? It's like, literally, how do you get ammonia from the ether, or something like that. What is that thing?

Speaker 2:

Oh, so yeah, this has been kind of the sales pitch for the field: oh well, what can we do with chemistry? Well, maybe we can come up with a better nitrogen fixation process.

Speaker 1:

Yeah, yeah, yeah.

Speaker 2:

As though this will somehow resolve our unsustainable farming practices. That was the magic bullet we needed: just better nitrogen.

Speaker 1:

I mean, it was a significant thing when we were able to do that better, I don't know, 100 years ago or something like that. So who knows.

Speaker 2:

Doing it at all was a very big shift for agriculture.

Speaker 2:

But I think there's a lot of, well, maybe I shouldn't be ranting about agricultural practices on a quantum computing podcast. So this is one application, but even that is kind of well above; that's an error-corrected kind of algorithm. As you can see on this chart, it's not that much earlier than RSA. So I'm sort of expecting this rapid transition again, this sort of step function thing, where we're not having any useful logical-layer algorithms and then all of a sudden RSA is broken. And so there's that earlier step of doing some sort of chemistry without the logical error correction. I put that on my chart as a sort of green region up at the top, and the best estimates I could find were that you actually need much higher quality qubits than we have now. So it'll be kind of an interesting question over the next 10 years, whether that improves at all, whether we actually can do something with the devices.

Speaker 1:

Right, okay, john, what's your hot take for the next 10 years of quantum computing?

Speaker 3:

I've been working in this field for a little bit now; I think I started in 2011. And since about 2014, I think I've been consistent in saying that I think we'll have cryptanalytically interesting quantum computers in the mid-2030s. I'm not changing that. Cool. I think that we will be surprised by the progress. Now, granted, I am a quantum computing skeptic. I really don't think that these machines are useful. I don't buy many of the applications at all, and, honestly, my goal is to defeat the quantum computer. I'm not the quantum computer's friend. I'm not promoting it. I want to beat it with the better cryptography that we can deploy today. And if I can say some things that discourage people from working in the field, great.

Speaker 3:

Then we won't get these machines, and we won't have to worry about the surveillance implications. So maybe don't take what I say too seriously. But I really think that we're going to keep seeing progress, like we've seen recently with this Willow chip, and we're going to get to a point where it's really undeniable, except for a few fringe skeptic holdouts, that you can build these machines and that it was necessary to transition to post-quantum cryptography. We saw something similar with the adoption of quantum mechanics and understanding that this thing really works.

Speaker 3:

There's a long history of people doing loophole-free Bell inequality violations, and people were just like, oh yeah, but maybe you missed something, maybe you've got to move them farther apart before you do the Bell violation test, you've got to do all this stuff, I don't really believe it. These skeptics hold out for a long time. It's great, and it's good for the scientific community to have that skepticism, but I think we'll really quickly get to a point where you have to be concerned. Like, yes, you can build these machines; they are worth it.

Speaker 1:

Yeah, it's a paradigm shift. At first you fight it, and then that's the new world and everyone agrees, like, yep, okay, it's real. Awesome. Sam, John, thank you. I was so happy to get two of the handful of the world's quantum cryptanalysts on the horn at such short notice. Thank you so much, and thank you for answering my silly questions. Security Cryptography Whatever is a side project from Deirdre Connolly, Thomas Ptacek, and David Adrian. Our editor is Nettie Smith. You can find the podcast online at @scwpod, and us online at @durumcrustulum, @tqbf, and @davidcadrian. You can buy merch at merch.securitycryptographywhatever.com. If you like the pod, give us a five-star review wherever you rate your favorite podcasts. Thank you for listening.