Security Cryptography Whatever

Signal's Post-Quantum PQXDH, Same-Origin Policy, E2EE in the Browser Revisited

November 07, 2023 Security, Cryptography, Whatever Season 3 Episode 3

We're back! Signal rolled out a protocol change to be post-quantum resilient! Someone was caught intercepting Jabber TLS via certificate transparency! Was the same-origin policy in web browsers just a dirty hack all along? Plus secure message format formalisms, and even more beating of the dead horse that is E2EE in the browser.

Transcript: https://securitycryptographywhatever.com/2023/11/07/PQXDH-etc

Links:

- https://zfnd.org/so-you-want-to-build-an-end-to-end-encrypted-web-app/
- https://github.com/superfly/macaroon
- https://cryspen.com/post/pqxdh/
- https://eprint.iacr.org/2023/1390.pdf


"Security Cryptography Whatever" is hosted by Deirdre Connolly (@durumcrustulum), Thomas Ptacek (@tqbf), and David Adrian (@davidcadrian)

David:

So when I was in St. Louis, um, I encountered somebody who was like, oh, you're David. You're from Security, Cryptography, Whatever. And I was like, yes, that's me. So Deirdre, although you, uh, bailed on today's intro, um, you'll be happy to know that your intro is having an effect on attendees of Strange Loop.

Deirdre:

Aw

Thomas:

What the hell was that? Is this like This American Life? What the hell was that? Say the words!

David:

Hello and welcome to Security Cryptography Whatever. I'm David.

Deirdre:

I'm Deirdre.

Thomas:

I'm disgusted.

Deirdre:

And today it's just us. We don't have a special guest. Uh, we just had some time and some stuff we wanted to talk about. And so today we're gonna start with jabber.ru? I don't even know what this is.

Thomas:

Wait, do you not know this story?

Deirdre:

I'm assuming it has something to do with Russia and has something to do with the Jabber texting protocol. And that's all I know.

Thomas:

All right, so there are a series of, I'm going to get some of this wrong too, right, but there's a series of XMPP dead-ender, not-really-encrypted chat servers running on Linode and Hetzner

Deirdre:

Okay.

Thomas:

in Europe, and the operators of those servers, one of them noticed that when they connected to the server on the encrypted, the TLS encrypted Jabber port, that they were getting certificate errors. And they tracked it down, and if you look at certificate transparency, they had extra certs issued for their domain, and then they were able to, like, with just basic network diagnostics,

Deirdre:

Mm hmm.

Thomas:

show that there was a machine interposed between them and the network that was only picking up the encrypted Jabber ports and forwarding everything else directly over. And so, like, that's the story, and the conclusion of the story is: any Jabber communications on these servers should not be trusted. We think this is German lawful intercept.
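
For anyone who wants to poke at this themselves, here is a minimal sketch of the kind of after-the-fact CT lookup being described, using crt.sh's public JSON endpoint. The field names (not_before, issuer_name, common_name) are the ones crt.sh currently returns; this is a one-shot check, not the live monitoring discussed below.

```typescript
// Sketch: list certificates logged to Certificate Transparency for a domain,
// via crt.sh's public JSON endpoint.
async function ctEntries(domain: string): Promise<void> {
  const res = await fetch(`https://crt.sh/?q=${encodeURIComponent(domain)}&output=json`);
  if (!res.ok) throw new Error(`crt.sh returned ${res.status}`);
  const entries: Array<{ not_before: string; issuer_name: string; common_name: string }> =
    await res.json();
  for (const e of entries) {
    // An issuer or issuance date you don't recognize is the cue to investigate.
    console.log(`${e.not_before}  ${e.issuer_name}  ${e.common_name}`);
  }
}

ctEntries("jabber.ru").catch(console.error);
```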

Deirdre:

German!

Thomas:

I think it's because Hetzner is in Germany.

Deirdre:

Uh huh.

Thomas:

I don't know if there's a logic beyond the fact that, like, the governing law would be in Germany, but I think the idea is that, like, I don't pay attention, you say Jabber and my brain turns off, but I think the subtext here is that the only reason that people use Jabber is to trade hostages. You know, it's like for kidnappings and, you know, yeah, it's all

David:

You'd think they'd at least use OTR on top of Jabber then, but I guess

Deirdre:

Or Telegram shitty DMs.

Thomas:

Well, if you'd used OTR, maybe you don't care that much, as long as you've verified keys on both sides, right? But if you haven't verified OTR keys, which I guess is easy not to do, it's been a long time since I've used OTR. But if you're not verifying keys, then people can just man-in-the-middle that as well.

Deirdre:

Mm hmm.

David:

I agree with you that people don't verify keys, but I feel like people that use OTR probably verify keys. But I don't know. It's been a while. I gave up on OTR for iMessage, so that tells you what my threat model is.

Thomas:

They have, they have other ways of enforcing message security than technology.

Deirdre:

hmm. Mm hmm.

David:

I don't, I was just excited to see that like certificate transparency caught something.

Deirdre:

Yeah?

Thomas:

Wait, so, first of all, it did not,

David:

Well sort of. But the point, okay, so like, it is very difficult to use CT to catch things, like, at the time.

Thomas:

What? What?

David:

catching it after the fact

Thomas:

No,

David:

is actually like, yes it is, like you tell me if you know when all of your certs are being issued and why and by which systems.

Thomas:

We have a security team, and I think maybe one of the literal first projects that security team had was setting up CT monitoring for our domains. Just...

David:

yes, but once you get told that there's a Let's Encrypt cert for your domain, like can you actually confirm that you issued that? Sure,

Thomas:

We wouldn't then immediately know that Germany was, like, you know, doing lawful intercept, but we'd have like a start. We'd, we'd go investigate if we saw an out-of-process issuance.

David:

My claim is that, uh, it actually can be hard to identify what's an out-of-process issuance, especially if you're a large organization. That being said, if you are offering an encrypted chat service, it's a cute kind of look.

Thomas:

In this case, no one was looking

David:

Mm hmm.

Thomas:

until after they noticed they were getting certificate errors. The funniest thing by far of this whole thing is that, whatever government did the lawful intercept here, and we assume it's lawful because, like, there's enough network diagnostics to know that Hetzner and Linode both did weird network shit to make this work, right? So whoever did this, either both of those hosting providers, both Akamai and Hetzner, got owned up, or they're both complying with lawful court orders.

Deirdre:

yeah, yeah,

David:

Yeah. Did we go back and look at, like, BG... Did they go back and look and try and detect a BGP hijack or something like that? Because that should be detectable as well.

Thomas:

yeah, it's all just like, well, I mean, it wouldn't be detectable with BGP if you actually got Hetzner to kind of suborn your connection, right? Like it's all within Hetzner. They're all, like, looking at traceroute hops and things like that. It's all pretty rudimentary, but like, obviously there's strong evidence that something happened inside of Hetzner and something happened inside of Linode, right? But the funniest thing here is, like, the most logical assumption here is government lawful intercept, and they let the certificates expire. And like, the post is like, this is slapdash, their opsec is bad. The reality is, they don't give a fuck, right?

David:

hmm. Uh...

Thomas:

they got what they wanted.

David:

Actually, yeah, what this goes to show is we need more cert errors.

Deirdre:

And louder.

Thomas:

apparently what this goes to show is that what we need... is DNSSEC.

Deirdre:

Oh, God. Uh...

David:

if there's one thing that you can't detect in CT, you definitely will somehow detect it when any DNS server in between you and them changes its answer.

Thomas:

Like, there's like a whole CAA record thing going on right now. It's like, if you had CAA configured, then Let's Encrypt wouldn't have issued the certs. And it's like, if it's a government, I'm pretty sure that Let's Encrypt was not the bottleneck here. Like any number of other things with DNS, but like, whatever. And then there's also, like, DANE would have, like, controlled all the issuance for your domain or whatever. And it's a little interesting, right, because the servers here are, like, xmpp.ru and jabber.ru, and I'm reasonably sure that Germany can't get .ru servers to issue bogus records.
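
As an aside on the CAA point: checking what a domain's CAA policy actually is takes one resolver call. A minimal Node sketch (resolveCaa is Node's built-in API; the domain is the one from the story), with Thomas's caveat baked into the comment, since CAA constrains well-behaved CAs, not governments:

```typescript
// Sketch: look up a domain's CAA policy with Node's built-in resolver.
// CAA only restricts which CAs will agree to issue; it does nothing against
// an attacker who can compel or bypass the listed CA.
import { resolveCaa } from "node:dns/promises";

async function checkCaa(domain: string): Promise<void> {
  try {
    const records = await resolveCaa(domain);
    for (const r of records) {
      console.log(r); // e.g. { critical: 0, issue: "letsencrypt.org" }
    }
  } catch {
    console.log(`${domain}: no CAA records (any CA may issue)`);
  }
}

checkCaa("jabber.ru").catch(console.error);
```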

Deirdre:

Hmm. Yeah.

David:

I don't know.

Deirdre:

Who is issuing .ru's?

David:

I thought Let's Encrypt stopped issuing for .ru domains.

Deirdre:

I wouldn't be surprised, yeah.

David:

I don't remember what their sanctions compliance was. I could be mistaken. Maybe they restarted, maybe they never changed that. They definitely stopped issuing for some domains, I thought, in Russia. Also, fun fact, if you're a CA, you're not supposed to issue to any domain on, like, the sanctions list. Um, and you have to, like, fetch that list every now and then so that you don't, like, accidentally issue a cert to the Taliban. Um, a thing that, like, definitely happened pretty early on by one CA that I'm not going to name, but you can infer,

Deirdre:

Like in 2001 or in 2020?

David:

in 2015,

Deirdre:

Oh, boy. Oh,

David:

just, anyway, like, you don't really get in trouble, you just get, like, hey, don't do that, and then you stop doing that, and then it's fine, for, like.

Deirdre:

What's, what's the thing? It's not export controls. It's some other list you're on.

David:

I was about to say IACR, but it's not IACR, it's, uh... ITAR.

Deirdre:

Yeah, that sounds

David:

International Traffic in Arms Regulations.

Deirdre:

Yeah. Yeah. Okay.

Thomas:

I know you guys wanted to hit this because it came out today and also because it ties into what you guys want to talk about next. But like also, like, I don't know that the high order bit of this whole story isn't just don't use XMPP. Like if your security devolves to certificate issuance, something's gone terribly wrong. Am I wrong about that? Am I missing a complexity of this? Or is this just like, why are you idiots using these services?

David:

I think a lot of security devolves to certificate issuance, but like,

Deirdre:

my

David:

like, it doesn't, uh, your, uh, like, fly.io deployments eventually devolve to certificate issuance.

Thomas:

If I'm wrong about these things, we have to cut this section out when we edit the podcast, right? But, but my, my immediate

David:

answer it too.

Thomas:

thought is that if you're using, say, Matrix or, you know, Signal for all of your secure communications, the certificates wouldn't matter.

Deirdre:

Correct.

Thomas:

That's all I'm saying.

David:

Yes. So, for messaging, there's probably a lot of cases where,

Thomas:

And that's all we're talking about here,

David:

that's a property that you want, is that the security of your end to end encryption isn't actually just devolved to the security of your TLS connection.

Thomas:

So just the idea that like, at the, at the end of this story, it's like, these certificates were compromised, ergo, if you were talking on these servers, your messages were probably compromised. Seems wrong.

David:

Well, I don't know if Jabber markets itself as end-to-end encrypted. I assume it doesn't,

Thomas:

It doesn't.

David:

this would be true of Facebook Messenger. This would be true of like, Slack, right?

Deirdre:

Yeah. Slack.

David:

literally anything that, like, isn't end-to-end. This is just how the internet works. So maybe the better takeaway is: if you're doing illicit things over messaging, you should pick a good end-to-end encrypted messenger.

Thomas:

All right, moving.

Deirdre:

drinking.

Thomas:

Moving right along.

Deirdre:

Yes.

David:

Okay,

Deirdre:

Speaking of end to end.

David:

yeah, so, because we do this every episode, let's do it even more. Um, and let's talk about threat models for end to end cryptography. Um, and then specifically on the web.

Thomas:

What an excellent segue. What's motivating this?

David:

Well, what's motivating this is like, so, historically, the WebCrypto API, which I believe is on window.subtle, not window.crypto.

Deirdre:

Correct. Well, I think it got moved. It was window.subtle and then everything... I can't remember which way around it

David:

got moved to subtle from crypto because some other thing stole the crypto name.

Deirdre:

I don't know. Well, let's... I'll just check on that. Keep going.

David:

Okay, so, anyway, it generally has a bad rap for a variety of reasons, ranging from, like, just the general lol JavaScript, to some JavaScript-specific reasons, like people don't really like that it's an async API, to the fact that, like, in JavaScript land, malicious JavaScript or other JavaScript can just, like, replace objects on window, um, for some reason. And so no matter how good window.crypto or window.subtle is, we'll get back to you on what the correct object is.

Deirdre:

Both. window.crypto.getRandomValues is the nice cryptographically secure random number generator. It also gives you randomUUID, which is nice. And then window.crypto.subtle gives you decrypt, deriveBits, deriveKey, encrypt, export, sign, verify, all that crap. But directly on window.crypto you have getRandomValues, so.
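
For reference, since the exact layout comes up twice here: the synchronous randomness helpers hang directly off crypto (window.crypto in a page, the same global in Node 19+), and the async key and encryption operations live on crypto.subtle. A minimal sketch that runs in a browser module:

```typescript
// getRandomValues / randomUUID are on crypto itself; the async operations
// (digest, generateKey, encrypt, sign, verify, ...) are on crypto.subtle.
const bytes = crypto.getRandomValues(new Uint8Array(32)); // CSPRNG bytes
const id = crypto.randomUUID();                           // random v4 UUID

const digest = await crypto.subtle.digest("SHA-256", bytes); // async, ArrayBuffer
const key = await crypto.subtle.generateKey(
  { name: "AES-GCM", length: 256 },
  false, // not extractable
  ["encrypt", "decrypt"]
);
console.log(id, new Uint8Array(digest).byteLength, key.type); // ..., 32, "secret"
```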

David:

So there's that API. There's just, like, JavaScript being a mess. And then there is, like, a plethora of failed web-based, uh, end-to-end encrypted messengers, for example, CryptoCat, that just, like, didn't pan out. And then you have Signal, which, I don't know if they have this, like, documented in a blog post anywhere, but, like, the conventional wisdom is that Signal will not bring its app to the web because the web does not provide the security primitives that Signal needs to be secure, which is specifically

Thomas:

like, your tone.

David:

No, I agree. It doesn't, but like, let's talk about what they are and then talk about like the cases where you actually need those is kind of what I want to get into.

Deirdre:

Legit. And this is for end to end persistent chat sessions. Like long lived chat sessions.

David:

that, I think that is kind of like the most sort of hardcore end to end encrypted use case, right? Would be like, person to person chat of long lived things. Other use cases might be like, I would like a peer to peer video chat

Deirdre:

Mm hmm.

David:

like, ephemeral, like, short-lived messages. It's just like, I would prefer my stuff to not exist on, um, someone else's servers, and so on.

Deirdre:

Yeah. That is easier.

David:

Thomas is stroking his beard. We're in a new world where we can see Thomas, and so now I can see that he's slowly getting upset.

Thomas:

I feel like Meredith Whittaker should already be listening to us. And that being the case, this is a segment about how when somebody comes up to her and says, what we really need to do with Signal is come up with some affordance for people to use it in a browser page, she should have ready answers for why that's a batshit idea that will never work. So, please continue. I'm just, I'm, I'm gearing up.

David:

Okay, so, like, let's say for an app that is using end-to-end encryption to do something, let's go through the kinds of things that you have to trust, like, somewhat platform-neutral. Right, so, like, base level one, the developer,

Deirdre:

Mm hmm. Mm hmm. Mm

David:

you kind of have an implicit trust that, like, the developer itself is building an end to end encrypted app and not building an app that posts all of your content to Twitter and then says it's end to end encrypted.

Deirdre:

A conscious, intentional, generally-knows-what-they're-doing developer.

David:

This also kind of covers, like, app store phishing a little bit of just, like, you installed Signal and not definitely, totally Signal, I swear I'm Signal, right?

Deirdre:

Yeah,

David:

Okay, moving on, then you have developer's account security, meaning, in the App Store case, right, the developer signs into some account that can log into, like, the Play Store or the iOS App Store, and so on, and, like, you kind of are trusting that they have not lost control of that account.

Deirdre:

also their, their source control, their CI, their, possibly their

David:

I, I, maybe covering that separately, but I'm thinking more about distribution here.

Thomas:

What's a, what's a model where you don't have that concern?

David:

Well, you don't really have that concern. You have like a different concern on the web, right? It's just like, there isn't an authority that you're registered with on the web. So you don't have to worry about the account security with that authority. But you then do kind of have to verify what I would say is the distribution, which is like the next step after that of just like, okay, did I get the correct app from the play store? Did I get the like correct code from the website?

Deirdre:

it's weaker.

Thomas:

But this is also, this, this is the whole supply chain problem for node, right? Is this is the place where you bucket, like you trust the developer, but somebody got ahold of the NPM entry for that name or whatever, and they publish something else.

David:

Yeah. Although I think, like, third-party dependencies are one you can actually treat, I was going to treat, even separate from this. App distribution, I would say, almost doesn't really exist on the web, because what I would kind of call it instead is the privacy of the server-side infrastructure, meaning: if there is something serving you content or code, is that being viewed by someone or not. And then, like, I would say there's access after that, let's say. Kind of like how Matrix allowed the attackers, or an admin or a server operator, to basically surreptitiously add someone in and out of a group. Maybe you're not even modifying the code, you just have some sort of access to the ACLs. That's another step that is kind of stronger than privacy, but weaker than access to the server-side code. Which brings us to, I think, what you're getting at: on the web, the server-side content and the server-side code are basically the same thing, because we just pull JavaScript out of thin air.

Deirdre:

the, the client side content and the server side content are like, they're all generated the same

David:

Yeah, there's no separation on the web.

Deirdre:

You could, in theory, architect it in such a way that you have as much separation as possible from the client app and, like, the backend, but, like, you know,

David:

fundamentally, if somebody was able to modify the HTML that was coming out of your server side, like, it doesn't matter what your nice app split is, they can just add more JavaScript, um,

Deirdre:

like, uh, an app in an app store, especially for the mobile app model, like, the binary that gets built and signed by Apple or Google and then gets shipped down to your phone is just a completely different thing than: I generated some shit that I handed down to you from my web server.

David:

yeah, like, there is a client-side app distribution in the phone case, and kind of in just the, like, general apps-on-the-internet case, if you just go download an executable, but there's not one on the web. Which then brings us to what we were just talking about, which turned out to be a better segue than I anticipated: TLS. Right? Which is just, okay, if TLS is broken, like, what happens? With Signal, nothing. On the web, that's equivalent to server-side infrastructure code access, which is equivalent to client-side code access. Because, like, once you break TLS, you can just inject JavaScript. Even if you're using, like, CSP and shit, right? You just, like, block that header, or you add a resource that's loaded from somewhere else, or you rewrite the page. Like, it doesn't just have to be, like, eval, right?
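
To make the CSP point concrete, here is a sketch of a server setting a strict policy with plain node:http (paths illustrative). The header only constrains what an intact page may load; an attacker who has broken TLS is rewriting the response itself, so they can strip this header or rewrite the HTML, which is the point being made.

```typescript
// Sketch: a strict Content-Security-Policy set server-side.
import { createServer } from "node:http";

createServer((req, res) => {
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; script-src 'self'" // no inline or third-party JS
  );
  res.setHeader("Content-Type", "text/html");
  // A TLS man-in-the-middle sits below all of this and can rewrite both
  // the header and the body on the way through.
  res.end(`<!doctype html><script src="/app.js"></script>`);
}).listen(8080);
```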

Deirdre:

Like, for the other settings, if you break TLS, you still have to do more steps to, like, get anything out of it. Whereas for the web setting, unless you're doing weird, crazy shit like WhatsApp, who's like, also install this browser extension that does this, like, out-of-band web app checking, hashing thing, uh, on top, breaking TLS basically gives you everything.

David:

Which is not necessary to use WhatsApp, like, I assume only a small number of people have installed that extension.

Deirdre:

Yeah, like it's definitely, like, a, we want to accommodate some of our users who use WhatsApp Web, which is, like, kind of this paired sidecar thingy for using WhatsApp properly on your phone. But they were like, just because we're aware of some of these shortcomings with the web model as it currently exists, like, please also, if you want to, like, super duper protect your shit, we built this thing that you can also install in your browser to, you know, protect yourself. But anyway.

David:

Yeah, and then there's the third-party dependency problem, which, if you kind of separate the actual third-party dependencies from, like, the fact that your code is getting loaded at, like, runtime, right, it looks pretty much the same on apps versus web. Although, like, the Node ecosystem is obviously notorious for kind of being the worst ecosystem, but, you know, given the plethora of React Native apps, I think that actually

Deirdre:

Yeah, anyway.

David:

applies to both sides of the world.

Deirdre:

Like why is, why is NPM considered the worst? It's just extremely popular and full of

David:

just very popular and big. As far as I can tell, there's no difference between NPM and Cargo other than, like, the Cargo.toml file is perhaps slightly nicer than, like, the 18 iterations of package.json. And what's the other one? Yarn something?

Deirdre:

Isn't Yarn just on top of NPM anyway? But, Yeah,

David:

I don't know, but...

Deirdre:

I would just,

David:

sometimes having both

Deirdre:

I would just chalk the sort of NPM hate up to it being extremely popular and probably fueling JavaScript. Because you can use it in the web. In a lot of

David:

having, like, the lowest quality packages as a result.

Deirdre:

yeah, that too. Yeah.

David:

the classic is-even, or the, uh, uh,

Deirdre:

is-number or whatever. Yeah. Cool. So, like, it sounds harder to deploy, like, an end-to-end encrypted equivalent of Signal purely served as a web app than for these other settings.

David:

So basically, because, like, server-side code access and/or breaking TLS is the equivalent of breaking client-side app distribution on the web, which otherwise would require somehow getting into a developer account or an app store, or the app store itself being malicious. Like, you end up in a worse-off world if you are trying to match exactly the, um, security properties that, um, you get for, like, Signal in a native app.

Deirdre:

And this would basically, if you did, if you tried to do this with something like Signal and just tried to just make it work, you're trying to protect, say, a long term identity key for Signal. You know?

David:

Thomas as a dinosaur?

Deirdre:

No, that's a hippopotamus.

Thomas:

a hippo.

David:

okay. Well, I can only see part of it. Clearly, I have, uh, I have animal face blindness, so I can't tell the difference.

Deirdre:

You're speciesist, they're all the same to you.

Thomas:

Go on, continue.

Deirdre:

That's a very cute animal. Uh, you're trying to protect some sort of long-term key material, or something related to long-term key material that authorizes you to do something with it. And it's generally just harder to protect that when the software you need to implement that protection lives in this world where a TLS breakage is endgame, and all these other things.

David:

I think I'll pause for a second and say, like, Thomas, do you agree? And then after that, I want to talk about, one, in what situations do you want to leverage some form of end-to-end encrypted cryptography for something that might not need as strong of a threat model as Signal, and that might work better on the web? And then two, like, what could we change about the web to, like, try and make some of these better?

Thomas:

I want to hear what you have to say before I have opinions on this.

David:

Okay.

Thomas:

And not least because while you were rattling off that threat model, I was rapidly learning about the Signal post-quantum key exchange.

Deirdre:

Ha!

David:

That's, that's also a thing. Um, Deirdre, do you want to tell everyone about your new job?

Thomas:

I read the, I read the document. I read it and I knew the list of things you were going to enumerate. So,

David:

Okay. So my posit or hypothesis or corollary or something is that there's a huge class of applications that would benefit from end-to-end encrypted stuff, even with the shortcomings of the web. And the example that I would come up with is basically: enterprise anything. There's basically any situation where you're relying on some third party to kind of mediate authentication already

Deirdre:

Yeah, yeah, yeah, yeah, yeah, yeah,

David:

but then you would prefer that a bunch of things that you do do not exist on their servers, or go through their servers, but you, like, by virtue of the problem space, are trusting them to authenticate people already. So, like, corporate video chat from that one vendor that you see in the West Wing, um, for example. I think that could benefit from having their video chat be ephemerally end-to-end encrypted through a web app, even though it still reduces to the security of the TLS connection, because you would prefer that the plaintext of that does not transit their servers.

Deirdre:

and that's kind of okay, because one, the long-term identity authentication material does not have to live in the client application stack. It is handed to you from someone else. You get a token or something that lives for a, you know, a scoped amount of time. So you don't have to worry about the long-lived security of that. You just need it for this session, and then the client app that's doing the end-to-end encrypted stuff is kind of trust-on-first-use, or trust-on-this-use. You have a token that someone else hands you, you load the end-to-end encrypted video calling app, you trust it for this call alone, you, you know, identify yourself, you do your end-to-end encrypted web call, and then you hang up, and if anyone was recording anything, it was encrypted, and then you're done! And everything gets thrown away afterwards, basically.
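
A sketch of that "trust it for this call alone" shape, using plain WebCrypto ECDH (P-256 here because every implementation ships it; the signaling path and names are illustrative, and a real product would authenticate the exchange with the vendor-issued token):

```typescript
// Sketch: a fresh ECDH key pair per call, public keys exchanged over the
// vendor's (token-authenticated) signaling channel, a media key derived,
// everything discarded at hang-up.
async function perCallKey(theirPublicKey: CryptoKey): Promise<CryptoKey> {
  const mine = await crypto.subtle.generateKey(
    { name: "ECDH", namedCurve: "P-256" },
    false,
    ["deriveKey"]
  );
  // (send mine.publicKey to the peer via the vendor's signaling channel)
  return crypto.subtle.deriveKey(
    { name: "ECDH", public: theirPublicKey },
    mine.privateKey,
    { name: "AES-GCM", length: 256 }, // the per-call media key
    false,
    ["encrypt", "decrypt"]
  );
}
```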

David:

I guess it's any situation that's ephemeral where you're already delegating identity and access to a third party.

Thomas:

the root of trust is still the TLS connection. When we're talking about like, when we're talking about the encrypted video chat vendor, right, we're saying there's a benefit to the vendor or the operator like the, like, you know, I don't know, Allstate, right? When Allstate is running this thing, Allstate would prefer that plain text of this never runs through their servers anyways, just for kind of logistical security reasons, right? Um, the trust relationship is just the TLS connection still. You're not gaining anything beyond TLS trust from this. The enterprise is getting, like, an additional kind of, like, logistical security benefit by running end to end security on the web using JavaScript or web crypto or whatever. Even though they don't have improved trust, they still know that they don't have to worry about plain text being on the server.

Deirdre:

Yes.

David:

I don't know that I would, although the security of it does reduce to the TLS connection, I don't think that the TLS connection is the root of trust necessarily, but like, I don't know that that's a point worth arguing, right, because the root is still like, you are assuming that this third party provider that is doing identity and access is like behaving correctly to do identity and access.

Deirdre:

But then you have

David:

the root is still like some database that they have.

Thomas:

But if I'm an attacker going after that system, if I know that that's the setup, and I want to record... You know, conversations or whatever. What I'm gonna do is make a beeline for the js files or whatever and have it feed key information to me, right?

David:

Correct. So you'd either, you're raising the bar from read access on the server side to write access, or to write access to the TLS connection.

Thomas:

yeah, and like one of the

David:

I think that's a benefit.

Thomas:

it's like a marginal benefit, right? But like one of the problems I've always had about this is, like, the realism of the threat models that we're talking about. Like, at the point where you have read access to the plaintext post-TLS-termination on that server, one thing is just that, in the kind of classical internet TCP model, as soon as you have that read access, you kind of implicitly have write access, because you can hijack TCP or whatever, which is kind of silly, but whatever, right? But like, in reality also, if you're able to read the plaintext post-TLS, you're probably in a place where you can easily, you know, backdoor the JavaScript anyways.

Deirdre:

Depends where, okay. Depends where you're backdooring it. Because in most modern web apps, you bundle your shit, it gets shipped off to a CDN edge, you're gonna serve it to somebody from an edge, and it's gonna be a different place, depending on which TLS termination you want to get at. Is it the one that's the backhaul of the actual WebRTC-encrypted traffic?

David:

yeah, if you, as soon as you get the domain, it doesn't matter where the rest of the JavaScript comes from because you can just edit it.

Thomas:

In the enterprise model, none of this is coming from CDNs, right? This is coming

Deirdre:

Uh,

David:

Well, it's still coming from like CloudFront and shit,

Deirdre:

yeah. Like they still, yeah. They're not all serving it on their own things. They still don't wanna host their own shit.

Thomas:

let me say right off the bat that JavaScript cryptography, I can see it making sense in a bunch of enterprise situations. I don't have a lot of clarity to that thought, but like, yeah, the intuition that if you can make sane decisions about whether or not you trust your own servers or whatever, or whether the operator of an endpoint should trust the server, then sure.

David:

I think JavaScript cryptography is much broader than, like, end-to-end JavaScript cryptography, because it may also just be the case that, like, you have a token from some API that you're calling, and it would be nice to be able to verify said token, you know, in JavaScript, either with a third-party library or without one, and just be like, oh, this Paseto or JWT or protobuf token, or whatever credential I got, was in fact, like, signed by the other party, right? Like, that's just a thing that you might have to do in the course of building an app.

Deirdre:

Yeah. It would be nice if you could call window.crypto.subtle.verify on a, you know, credential that someone handed to you, and it's not just only RSA and ECDSA, but
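
A sketch of that kind of client-side credential check, using crypto.subtle.importKey and crypto.subtle.verify as they exist today (ECDSA P-256; the framing around a real JWT or Paseto token, base64url decoding, algorithm pinning, claim checks and so on, is omitted):

```typescript
// Sketch: verify a signed credential client-side with WebCrypto.
async function verifyCredential(
  issuerSpki: ArrayBuffer, // issuer's public key, SPKI-encoded
  payload: ArrayBuffer,    // the bytes that were signed
  signature: ArrayBuffer   // raw r||s ECDSA signature
): Promise<boolean> {
  const key = await crypto.subtle.importKey(
    "spki",
    issuerSpki,
    { name: "ECDSA", namedCurve: "P-256" },
    false,
    ["verify"]
  );
  return crypto.subtle.verify({ name: "ECDSA", hash: "SHA-256" }, key, signature, payload);
}
```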

David:

And that's just completely separate from like the end to end model.

Thomas:

but it still comes down to the problem of like, you know, you could just, so like, the mental model I have of this is like, you could also just have the API export a method to say, do I trust this token or not? And it's the same security model, right? Because if you didn't have that endpoint, if you rely on cryptography to do that, then the server can just feed you code that will subvert the cryptography.

David:

Yeah, but like someone needs to implement the, like, do I trust this token or not method. Like, I guess you could implement it with a post to another server, like, sure, but like, if you don't want

Deirdre:

And that's another

David:

it. Yeah, I guess, anyway, my point is that I think there exist use cases where, like, this threat model does make sense. Whether it's ephemeral or, like, medium-length, it's just, like, you don't want some sort of content existing in a database that someone can run select star on. Or on a stream that, like, someone could tune into, um, by just doing, like, rtc.connect.

Thomas:

you're right. And then, like, in the standard HTTP, like, HTML, more importantly, HTML application model, where I can push JavaScript and I can push HTML to you, I don't natively get the capability of giving you ciphertext. But if I just give you this, like, JavaScript blob that does cryptography, then I can be reasonably sure that there's no plaintext in my database, right? Like, I buy that completely, right? If people go into this, like, understanding that that's a situation where your trust in the cryptography is the same as the trust in the server. And there's just, like, logistical, kind of operational reasons why it's nice to be able to trust the JavaScript instead of, you know, having the server export new methods or whatever, right? Like, that makes perfect sense to me, right? And it's probably true that most of the opinions that people have about browser-based cryptography are, A, based on a really dumb blog post that I wrote, like, I don't know, 15 years ago,

David:

heh.

Thomas:

but more importantly, That sounded really bad, but

David:

Thomas wrote it before I was born.

Thomas:

importantly,

Deirdre:

Are you 15 years old?

David:

No, the blog post is actually, uh, older than

Deirdre:

Oh,

Thomas:

It's real, it's real, real old, but like, more importantly, there was a time when there was, like, you know, like a monkey knife fight of different secure messengers, right? And there was, like, intense motivation to get more users really quickly for things. And, like, a really easy way to get more users for your messaging application was to not have to install things. This is, like, one of the oldest stories and kind of go-to-markets for applications: that app store click is death, right? If you can just be in the browser, it's so much, so much simpler, right? So, like, at the time there was, like, immense incentive for people to say, well, we can just deliver this cryptography code via the browser. It's the same code, it's the same cipher, you know, it's the same cryptosystem. So, like, who cares,

Deirdre:

Well,

Thomas:

And for end-to-end messaging, that's absolutely not the case. It's the opposite of the case. It's the worst possible thing you could do. But like, for the really narrow, kind of marginal security use cases we're talking about here, it's more about, like: the person who's trusting the cryptography is the person who's running the server. It's not about the end user. It's about the person delivering the application. I would personally, as the vendor of this application, not like to have hazmat plaintext in my database. So if I could just trust my own code to keep hazmat out of the database, that's a huge win for me. Yeah, I totally buy that that's the case. And I also buy the idea that dogma about keeping, you know, cryptography out of browser JavaScript would keep you from really seriously considering those things. That seems right to me.

Deirdre:

Yeah. And now that you have Wasm... Wasm's not, like, an end-all be-all for getting around the fact that, uh, you know, Wasm and JavaScript eventually go through an interpreter and things like that.

David:

But, shit.

Deirdre:

Yeah, a JIT, uh, which matters if you care about shit like side channels, and you care about the JIT not fucking around with your, like, very pretty cryptography implementation that you originally wrote in Rust and then compiled to Wasm and then shipped into the browser. I wouldn't care too much about that, but having Wasm as a nice compilation target for writing your cryptography in a higher-level language than just JavaScript, and then you can compile it and ship it and run it on the web, um, I think that is nice to have.

David:

Yeah, it addresses the fact that writing code in JavaScript to do cryptography is just the fuckin worst.
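
A minimal sketch of what consuming Rust-compiled-to-Wasm crypto from JavaScript looks like. The module path and export name here are made up for illustration; wasm-bindgen or similar tooling would normally generate this glue, including the linear-memory copying of input bytes that is elided:

```typescript
// Sketch: instantiate a Wasm module and grab one of its exports.
const { instance } = await WebAssembly.instantiate(
  await (await fetch("/crypto_impl.wasm")).arrayBuffer(),
  {} // imports the module expects, if any
);
const verify = instance.exports.verify_signature as (ptr: number, len: number) => number;
console.log(typeof verify); // "function"
```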

Thomas:

There was a, there was a two-week period, when we were doing our macaroon token implementation at Fly, where my first-cut implementation was in Rust, and we were seriously considering... so we have this problem: we write this Rust macaroon library, but most of the code that needs to consume and deal with macaroons is in Go. And, like, what isn't in Go is in Ruby. And it's like, well, one thing we could possibly do here is just compile the whole thing to Wasm, and there's a bunch of different Wasm interpreters that we can run from Go. So there was a brief, wonderful period of about a week and a half where all of our token code was going to be Rust code. But it was going to compile down to Wasm, and to evaluate a token from Go code, like, server-side, nowhere near a browser, you would evaluate the Wasm of the Rust code. Um, yeah.

David:

And then you realized you didn't want to run anything that involved cgo, because there's not, like, a good pure-Go Wasm interpreter.

Thomas:

no, no, I mean, I don't know if there's a good pure-Go Wasm interpreter, but there are pure-Go Wasm interpreters, right? Like, if I could do cgo, I could just go directly to the Rust code, right? But there are pure-Go Wasm interpreters. And then it's like, how much do I care about, like, whether the Wasm interpretation of the macaroon token code is going to be fast enough or whatever. And in the end, it took me like a night to port it all to Go, and all the rest of the people were very sad, so.

Deirdre:

oh, oh. So you rewrote the original macaroon implementation in Go, and then you were just like, nah, we're done.

David:

Yeah.

Thomas:

Pretty much, and we just open-sourced it. I think we have the world's most important macaroon implementation right now, just because no one uses macaroons, but still. We have one running in the wild right now.

Deirdre:

That's awesome. I'm putting this in the show notes.

David:

Are we gonna get a blog about this at any point?

Thomas:

Yes, you, you certainly will. And I don't, I don't mean to hijack you with that. It's just, you said Wasm and I just remembered my only real contact with Wasm was not browser stuff, but getting my cryptography code to go from rust to go.

Deirdre:

But still, like, there's all these runtimes that are like, we are a super fast Wasm runtime, and, you know, all these things. And, like, I think Wasmtime had this, like, high-assurance implementation validation of their thing or whatever. Wasmtime is actually pretty cool. Having Wasm as this kind of new target that's cross-platform is kind of sexy. Um, and I don't know if anyone expected that, but the fact that that kind of fell out of "we're trying to do better bytecode shipping on the web instead of just raw JavaScript" is pretty cool.

Thomas:

The moral of the story here, as I understand it, is that people are blinkered about browser cryptography because they're all assuming that the only threat model is the Signal threat model.

Deirdre:

Yeah.

David:

And I think there are both threat models in between, and then orthogonal concerns about how much JavaScript sucks are somewhat addressed by the fact that you can compile things to Wasm.

Thomas:

And for me to concede that this is the truth, I do not in any way have to acknowledge DNSSEC.

Deirdre:

Correct.

David:

Correct. Also, you already conceded that this was true like a couple minutes ago, so.

Thomas:

All right, we're on the same page. We're good.

Deirdre:

Yeah.

David:

Okay, cool. We're on the same page. We didn't even have to argue, and it turned out that we agreed. Just agreed up front.

Deirdre:

Um, same origin policy.

David:

Before we talk about the same-origin policy (which, you know, if you ever hear that at a party, just leave), let's talk about what would fix this. Like, is there a way to make the web look more like a regular app? And basically what that means is, you need some way to separate client-side app distribution from server-side app distribution.

Deirdre:

like when Chrome, or, like, browser extensions, and, like, extensions that were shaped like apps, like installable apps in your browser. That seems to kind of have just fallen away, but when those existed, that,

David:

isolated web apps.

Deirdre:

yeah,

David:

A thing I know about as a thing, because I work for Google.

Deirdre:

sure. I don't know if I've used,

David:

just like mega PWAs,

Deirdre:

all right.

David:

progressive web apps, um, but you can imagine, uh, that there exists some way to bundle a bunch of JavaScript code up front, and then have some sort of guarantee that it doesn't load other JavaScript code, and then run it. But then there's still a bunch of interesting questions after that.

Deirdre:

because that doesn't follow the, like, installable store model of: we track versions of this thing, and then we sign the thing, and then you can chain those signatures up to some sort of authority, or, you know, whatever equivalent. Because, like, I install shit from the Chrome extension store, and it kind of has that, which is nice, but that's not a web thing. That's a Chrome thing. There's a cross-browser web extension sort of, you know, manifest thing now, with common-ish, like, API definitions and crap like that, but, like, it's still the Chrome Web Store, and it's still the Safari equivalent and the Mozilla equivalent and, you know, whatever. Don't we have subresource integrity crap, and wasn't that supposed to help or something?

David:

So, I guess, like, there maybe exists, or might one day exist, like, a mechanism for saying: here's a bundle of JavaScript code, go "install" it, in quotes, once. And then you can't load in new JavaScript code. But then you have this problem, which I think you're saying, of, like: how do you know when there's a new version of the app code, and who is giving it to you? So you've shifted the problem from on-page-load to: there's a new version, and it's probably automatically getting installed, and then, like, what do we do now?

Thomas:

But that's a huge, that's not a small shift. At that point, you're mostly in the same place as native apps.

David:

well, what do you mean? Sorry, what's not a small shift?

Thomas:

I'm saying that, like, just as a, as a veteran of a million of these arguments on message boards, the kind of classical forum argument of "web app crypto is just fine because native apps are not as secure as you think they are" is: your native app auto-updates anyways. When there's a new version of the native app,

Deirdre:

Mm hmm.

Thomas:

the store updates it or whatever,

David:

Well, let's, let's explore, like, why that... I, I don't agree that if you just suddenly had an installable, auto-updating web app, that it would have the same security properties as an auto-updating native app. And so, like, why not?

Deirdre:

I mean, if it doesn't do the, like, refresh-app-pull-from-server thing, if it is literally an installable bundle, when I refresh it, it's refreshing from disk, not refreshing from the server. Where do

David:

Well, assuming that there's, like, an auto-update mechanism for the server to push a new version of the app as well.

Deirdre:

Oh,

David:

Just like there is on, on your iPhone. In the US, you're probably auto-updating all of your apps,

Deirdre:

so, in this sort of model, an update is pushable from a random server like fly.io or, you know, whatever, securitycryptographywhatever.

David:

Yeah.

Deirdre:

every time I open the app, it's pulling from that same server. In contrast to,

David:

pulling some manifest from .well-known, and then that's like, oh, there's a new bundle of JavaScript code available at X, but otherwise there's no new JavaScript loaded, something like that.
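
A sketch of that manifest-plus-pinned-hash idea. Everything in it is hypothetical: the .well-known path, the manifest shape, the policy. And note the bootstrap problem in the last comment, since the manifest fetch itself still reduces to TLS:

```typescript
// Sketch: fetch a manifest that pins the app bundle's hash, fetch the
// bundle, refuse to run it on mismatch.
async function loadPinnedBundle(origin: string): Promise<ArrayBuffer> {
  const manifest: { bundleUrl: string; sha256: string } = await (
    await fetch(`${origin}/.well-known/app-manifest.json`)
  ).json();
  const bundle = await (await fetch(manifest.bundleUrl)).arrayBuffer();
  const digest = await crypto.subtle.digest("SHA-256", bundle);
  const hex = [...new Uint8Array(digest)].map((b) => b.toString(16).padStart(2, "0")).join("");
  if (hex !== manifest.sha256) throw new Error("bundle hash mismatch");
  return bundle; // who signs the manifest, and how it rotates, is the hard part
}
```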

Thomas:

it's worse than every time you open the app, right? It's like every time you interact with the app, it's potentially loaded in the web app security model,

Deirdre:

current model. Yeah,

David:

yeah, in like a progressive web app world.

Deirdre:

but this

Thomas:

potentially every interaction you have.

Deirdre:

Yeah. Like, you might click on something in your web app, and it might actually be loading some dynamic JavaScript from some random server in the background, and that's totally fine

David:

Well, probably the same server as last time, but yeah, in the sense, the same name as last time. And so, we all know that that doesn't have the same security properties as loading a new app from an app store. And let's just say, like, why is that? And the answer is, like, it still kind of reduces to your TLS

Deirdre:

Yeah. Okay. So,

David:

Is there a way to fix that? I don't know. I'd like to throw a bit of magic transparency onto it. Like, does that help? What does that mean? I don't know. What you're actually hearing is me just kind of brainstorming through part of my job.

Deirdre:

right.

David:

Like, trying to get, uh, we've had plenty of episodes where Thomas tries to learn how tokens work so that he can build his macaroons. And now, um, we're moving on to just, like, David thinking about web security because he has to do that for work.

Deirdre:

why couldn't we have, not literally NPM and not literally Cargo, but something equivalent for my web app bundle? Like a store, with some sort of, like, you farm out your developer accounts and give them keys and shit like that. It doesn't necessarily have to

David:

You're describing the Mac App Store, except React Native apps, right? Like,

Thomas:

you're, you're also describing the Chrome app store, right? Like that exists.

Deirdre:

yeah, yeah, but it would be a proper app and it doesn't necessarily have to be Chrome. I mean, like I

David:

There's only an extension store. There's no, there's no such thing as, as Chrome apps,

Deirdre:

Well, there used to

Thomas:

But we're,

David:

eh.

Deirdre:

I mean, like, you could, in theory, make this work without a ton of... I feel like this isn't a terrible thing to support for the web. You just have to, like, make it work, and it's basically a hop, skip, and a jump from "here's my giant Electron app that bundles in a browser" to "my Electron app just runs in the browser, and you pull it from either a community store or an Apple store or a Chrome store or a Google store or a Mozilla store or something like that." I don't know. I figure that could be something, and people get it, and it's not much different than an Electron app, which, Signal builds their shit on Electron. So their

David:

this question of, well, like, who runs it? No one wants to. Running a store sucks. Like, even the companies that run stores don't want a store. And, like, a store is, like, not the web, so, like, I don't know. Um, I don't know, anyway, this is just where you get to: well, this problem kind of sucks, because you're like, I want a centralized entity that you register with, and then you're like, ah, it's the web, I don't want anything at all. And then you just are sad.

Deirdre:

I mean, honest to God, if Electron or whoever produces Electron, which is like, I think it's just an open source project that forked Chromium or whatever, and then made it into an app developer platform. If like Electron ran a

David:

Out of Facebook and Microsoft, I think.

Deirdre:

If, like, Electron started up a store that's literally just Electron apps, but they run in your browser, and it has, like, kind of the update mechanism of Electron apps, I would try those, and probably, depending on how they run their shit, like, I

David:

you and maybe a dozen other

Deirdre:

Hey, more than a dozen people run Electron apps, they just don't necessarily know they're running Electron apps unless they're a

David:

know what they are.

Deirdre:

Yeah. They're just beefy, and they're like, meh, why does Slack installed on my desktop eat as much battery as, like, a million Chromes? Well, let me tell you.

David:

So you could solve it with a store, but specifically for things built with web technologies. That's one way that you could solve it, for some definition of solve. Other than that, like, I don't know. Anyone have any magic ideas? Like, I guess... some amount of transparency would at least make it so that if your app was compromised, then everybody else's apps were compromised too. And then a day later, Thomas could tell us about how a bunch of users got their app man-in-the-middled, um, on jabber.ru, because someone broke a certificate and uploaded a new version of the app. But at least we'd have a ledger of

Deirdre:

Yeah, it would be like some sort of binary transparency.

David:

hand waving over who runs the, the transparency servers.

Deirdre:

but, like, I could definitely see Chrome, Google, Apple, Cloudflare, you know, or maybe Akamai running something like that, and you can opt into it. Well, Chrome would probably turn on whatever it can do by default, but you can also add other logs if you wanted to, or something like that. Yeah, yeah,

David:

don't know.

Deirdre:

yeah. I, I want the thing that I

David:

there probably

Deirdre:

want.

David:

some transparency scheme that you could put together, but it would probably involve a bunch of people running logs out of the goodness of their heart.

Deirdre:

Yeah.

Thomas:

That's what Sigstore is about, right?

Deirdre:

Oh yeah? Sort

David:

God. I don't want to go into that

Deirdre:

Okay. Alright.

David:

because, one, I don't know enough about, like, the details of how it works, and two, just because I feel like everyone gets really mad all the time. Ha ha ha. No, no,

Thomas:

call in your questions to our podcast reviews in One Star Reviews. If you give us a One Star Review with feedback,

David:

give a, uh, no, a 5 star review.

Deirdre:

Yeah. Give us your feedback in a 5 star review. You can, you can cuss the shit out of us if you give us a 5 star review and ask us your questions. But also you can find us on,

Thomas:

if it's a One Star Review, I will personally respond to your

Deirdre:

Oh god.

Thomas:

show.

Deirdre:

Geez.

David:

Alternatively, go to This American Life, leave a 1 star review, and we'll know it's for us, because you're the only person giving This American Life a 1 star review.

Deirdre:

Yeah.

Thomas:

Alright, same origin policy.

David:

Okay, yeah, I guess this kind of brings us to... all these things are kind of related, and this is just me getting to the fact that the web just fucking sucks, even though it's really cool. So I'll posit, like, a hypothetical question, which is just, like: one, can you say that trying to defend against cross-origin leaks is important while simultaneously saying supply chain security is important? I don't think that you can believe that, like, one of these things matters if you think the other one matters. The argument there being, like, why are you including cross-origin things if you care about supply chain security? And two, just that the same-origin policy, like, didn't ever actually work, and is kind of a dumb solution to a maybe-actual problem. These are kind of my charged, hot-take statements to start with.

Deirdre:

Okay. Yeah, heh.

David:

Okay, so we had this problem that was like, you got the web, and then you're like, ah, website A, like, includes an image from website B. And then you're like, ah, well, let's, so let's send all the cookies that you have for website B when website A uses it. And then they're like, shit,

Deirdre:

heh

David:

what if you, like, do actual things with that cookie on website B from website A? Then it's like, okay, well, you can still send the request, but you can't read the response in JavaScript. I guess maybe that'll work. And then you're like, oh shit, what about... I guess, well, maybe we'll hide the cookies from JavaScript some of the time. And basically my posit is that what we should have just done was isolate storage and move on with our
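
What "you can still send the request but can't read the response" looks like from site A's JavaScript, as a sketch (the target URL is made up; modern SameSite cookie defaults narrow the cookie-attaching part, but the opaque response is the point):

```typescript
// Sketch: a cross-origin request the browser will happily send on your
// behalf, while hiding the answer from the sending page.
const res = await fetch("https://site-b.example/api/do-something", {
  method: "POST",
  mode: "no-cors",        // browser sends it anyway...
  credentials: "include"  // ...with whatever cookies it will still attach
});
console.log(res.type, res.status); // "opaque" 0: status and body are hidden
```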

Deirdre:

Hmm. Could you re

David:

The answer is OAuth didn't exist in the 90s, but, like,

Thomas:

This could be a, this could be a real short segment.

Deirdre:

Yeah.

David:

yeah,

Thomas:

Cause, cause it comes down to, I took a job at Google and now I'm questioning all the first principles of the web.

Deirdre:

Well, it took a job at Chrome to be sure. Heh heh.

David:

well, you know, we're just trying. And then, so, the other thing the same-origin policy does is it kind of prevents network pivoting, in which, if you are on your enterprise network and then you load blah.com, blah.com can't start sending requests to your local network and reading the responses. But, I don't know, anyway, I'm just kind of like, all of this seems really silly. Because, like, sending a request still has a lot of impact. And I guess my question is, like, what else am I missing here? Or is this just... this might just be me having an existential crisis that the web security model doesn't make any sense.

Deirdre:

I mean, I feel like I've had that several times.

Thomas:

That's what's happening here.

David:

Mm hmm.

Thomas:

my

David:

I apologize for bringing everyone else through

Deirdre:

That's

Thomas:

You haven't. You haven't. I bounced very... very quickly, I identified what's happening here, right? Like, none of us thought the same-origin model was good,

Deirdre:

I mean, like, it felt like, uh, it felt

David:

mostly didn't think about

Deirdre:

yeah, it felt like just patching a hole that you didn't think about when the web as a document, hypertext, linking, whosit-whatsit was first developed. And you just sort of keep patching holes.

Thomas:

if you had this, the whole security model to do over again, would you end up with the same origin model? Is that the question?

David:

Yeah, I think that's the question.

Thomas:

No!

David:

Well, I don't know. Cause I just think that you end up at this problem of, like: shipping client code dynamically actually is both really powerful and kind of sucks.

Deirdre:

Hmm.

Thomas:

Well, I have a lot of thoughts about this, and so, like, to start with, I'd like to say: the Signal post-quantum key exchange, what do we want to say about that?

David:

yeah, we'll finish this conversation another time, dear listener. And Thomas, please go on. What, the Signal post-quantum key exchange?

Deirdre:

Oh, hey.

Thomas:

Deirdre's the one who wanted to

Deirdre:

Oh, right, right, right, right, right. Cool, cool, cool. So, Signal a couple of weeks ago announced and rolled out an update to one of the pieces of Signal-the-protocol, the triple Diffie-Hellman key exchange, and they updated it by adding Kyber prekeys alongside their existing elliptic curve Diffie-Hellman prekeys, specifically to try and mitigate store-now-decrypt-later attacks by people who have a sufficiently large quantum computer that could do something with the Signal protocol encrypted messages as they currently exist. This was done a couple of weeks ago, and I was one of the reviewers that looked at, like, the preliminary draft document just before it got deployed. I got looped in late, and we gave some feedback and they tweaked some stuff. But a couple weeks later, there was some work done by one of our collaborator reviewers to formally model the updated key exchange using ProVerif and CryptoVerif. ProVerif is an automated tool in the symbolic model, and CryptoVerif is a prover tool in the computational model. So they give you kind of different views of modeling a cryptographic protocol. The computational model with CryptoVerif gives you ways to model and prove things like unforgeability of your signature scheme or something like that. The symbolic model kind of gives you, like: if the attacker can see all these messages, and all these crypto primitives work perfectly, what can this give you, with lots and lots and lots of runs and things like that. And they found a couple of bugs and a couple of nits in the specification, uh, and those are getting updated in the, uh, Signal spec, and I think some changes are gonna eventually get rolled out in updated versions of the Signal PQ Diffie-Hellman triple thingy, whatever. Um, I don't even know how to pronounce it anymore because they changed the name. This came out today. I was very excited about it, because it only took about a month of work to model it. And, as always seems to happen, if you try to formally model an even decently specified crypto scheme, bugs just seem to fall out of it in the process of trying to model it. And that did seem to happen here as well. Although, I do have to say that there was one of them...
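
To make the hybrid part concrete, here is a minimal sketch of the shape (labels and encoding are illustrative, not Signal's exact spec): the classical X25519 DH outputs and the Kyber shared secret all become input keying material to one KDF, so an attacker has to break both families at once. Node has no built-in ML-KEM, so the KEM secret's source here is hypothetical; a real client would get it from a Kyber library encapsulating against Bob's signed Kyber prekey.

```typescript
// Sketch of the hybrid secret combination in PQXDH.
import { hkdfSync } from "node:crypto";

function hybridSecret(
  dhOutputs: Buffer[],         // the 3 or 4 classical X25519 DH results
  kemSharedSecret: Uint8Array  // from Kyber encapsulation (hypothetical source)
): Buffer {
  // Concatenate everything into one IKM, so breaking only the DHs or only
  // the KEM is not enough to recover the derived secret.
  const ikm = Buffer.concat([...dhOutputs, Buffer.from(kemSharedSecret)]);
  return Buffer.from(hkdfSync("sha256", ikm, Buffer.alloc(32), "PQXDH-sketch", 32));
}
```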

David:

up? What did Signal screw up?

Thomas:

Okay, so, alright, alright,

Deirdre:

Yeah, yeah, yeah, yeah,

Thomas:

know, like, like, there's the X3DH protocol, the triple Diffie Hellman Authenticated Key Exchange, which is like the backbone of Signal, right,

Deirdre:

The roots.

Thomas:

yeah, so X3DH is, I think, a Curve25519-based DH scheme. What's clever about it is that it doesn't explicitly use signatures. It's a, it's a series... You guys both know it, but I'm just

Deirdre:

Sure, hmm.

Thomas:

a series of Diffie-Hellmans that results in an authenticated key exchange. Right? So you have this thing, and I'm always a little bit fuzzy on this, but it's, like, sort of pseudo-interactive, where it's a messaging system that people use from their phones, right? And they're talking to each other. And, like, at any given time, when you send a message, like when Alice sends a message to Bob, Bob may or may not be online at that point. Right?

Deirdre:

the first time.

Thomas:

Yeah, and Alice wants to send a message to Bob for the first time. So Alice needs to get enough key information from Bob to do not just an authenticated key exchange, but an authenticated key exchange that hides their identities, that has enough ephemeral key information. So that, like, identity-hiding key exchange is easy to do when both parties are online at the same time, but trickier to do when one of the parties is not necessarily online. So there's this whole scheme where, like, Bob uploads prekeys to the Signal server, which is like a signed bundle of ephemeral Curve25519

Deirdre:

Signed with the, the signing equivalent of their long term ID key pair, which is, yeah.
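
(For the curious, a rough sketch of what that kind of upload could look like, using Python's `cryptography` package. The structure and names here are illustrative, not Signal's actual wire format, and in real X3DH only the medium-term signed prekey carries a signature; the sketch just shows the mechanics.)

```python
# Hedged sketch of an X3DH-style prekey upload; names and structure are
# illustrative, not Signal's actual wire format.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def pub_bytes(priv):
    # Raw 32-byte public key encoding.
    return priv.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

identity_signing_key = Ed25519PrivateKey.generate()  # Bob's identity key (signing form)
signed_prekey = X25519PrivateKey.generate()          # medium-term signed prekey
one_time_prekeys = [X25519PrivateKey.generate() for _ in range(10)]

upload = {
    "signed_prekey": pub_bytes(signed_prekey),
    # The signature lets Alice verify the prekey really came from Bob.
    "signed_prekey_sig": identity_signing_key.sign(pub_bytes(signed_prekey)),
    # One-time prekeys get spent, one per new conversation.
    "one_time_prekeys": [pub_bytes(k) for k in one_time_prekeys],
}
```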

Thomas:

So you've got the situation where, like, there's a finite, expendable resource that the Signal server holds of prekeys that Bob has uploaded. Every time you do a new exchange with a new person, right, you're spending some of those, uh, prekeys or whatever. Right. And the protocol accounts for that, right? Like, I think if you run out of prekeys, you get like a less identity-hiding thing, or I forget what the whole deal is there, right? But there's a run of the protocol that exists when there's plenty of prekeys, and there's a run of the protocol that exists when the server is out of prekeys. Which means that, like, it seems like a through line for the formal verification work for Signal here is that there's a case where you have three... like, so remember, the whole idea behind X3DH is instead of doing a signed Diffie-Hellman key exchange, you do three Diffie-Hellman key exchanges, which accomplish the same thing.

Deirdre:

Mm hmm.

Thomas:

Diffie-Hellman key exchanges are much faster than signatures. That's a win, right? So you have like DH3, which is like the normal 3DH thing. And then you have DH4, which is: if there are enough keys to do the fourth ephemeral prekey Diffie-Hellman, then you do that, and you have a fourth Diffie-Hellman thing. And you take all those Diffie-Hellman values together and run them through a KDF, I assume it's like an HKDF or something like that, but you run it through a KDF, and you get a shared secret, and you're done, right? So, the root of all evil here is that difference between the run of the protocol that gives you three Diffie-Hellman values, and the run of the protocol that gives you four Diffie-Hellman values,

Deirdre:

Or at least one of the issues. Yes.
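
(A minimal sketch of the three-versus-four Diffie-Hellman derivation being described, using Python's `cryptography` package. Variable names are ours, both sides are generated locally just so the example runs, and real X3DH pins down key ordering, encodings, and the KDF inputs much more carefully.)

```python
# Rough sketch of the 3DH/4DH derivation; both parties' keys are
# generated locally here purely so the example runs end to end.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

ik_a, ek_a = X25519PrivateKey.generate(), X25519PrivateKey.generate()   # Alice: identity, ephemeral
ik_b, spk_b = X25519PrivateKey.generate(), X25519PrivateKey.generate()  # Bob: identity, signed prekey
opk_b = X25519PrivateKey.generate()  # Bob's one-time prekey; None if the server ran out

dh1 = ik_a.exchange(spk_b.public_key())  # IK_A x SPK_B
dh2 = ek_a.exchange(ik_b.public_key())   # EK_A x IK_B
dh3 = ek_a.exchange(spk_b.public_key())  # EK_A x SPK_B
secrets = dh1 + dh2 + dh3
if opk_b is not None:
    secrets += ek_a.exchange(opk_b.public_key())  # the "fourth DH" run

shared_key = HKDF(algorithm=hashes.SHA256(), length=32,
                  salt=None, info=b"X3DH sketch").derive(secrets)
```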

Thomas:

Right. I think I see it in more than one situation here. Right. But either way, like the first thing, there's like a public key confusion thing, where you have this problem where the new post-quantum thing adds an additional key exchange to the whole thing, right? Which is like the post-quantum security thing here. And there's a confusion attack where it's like, do I have the full prekey run where I have the fourth Diffie-Hellman value, or is that just the post-quantum value?

Deirdre:

Mm hmm.

Thomas:

Just the result of Kyber? And it's like, it's not encoded such that you could tell them apart just by the encoding. But of course, the Kyber key, or whatever the message is for Kyber, is not the same size as the 25519 thing. That's immediately apparent in actual Signal; in reality, that can't happen. But in a formal verification sense... um, if in the future you swapped out that Kyber key exchange for something that was 32 bytes, or you went to some different curve

Deirdre:

Or there was a bug where, like, it did everything right, but then it spat it out on the wire as 32 bytes; you know, you did everything right for your Kyber, but you had a bug that accidentally sliced it and sent it over as 32. Like, the formal verification is basically saying the thing you wrote down doesn't prevent this public key, or whatever, this, uh, prekey confusion thing. And basically they are arguing that if you don't have these specifically, like, separated encoding things, and you ended up giving a weak, truncated Kyber key to the other party, it could lead to, like, a weak Diffie-Hellman and make it easily attackable or whatever, something like that.
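
(A toy illustration of the confusion being described: raw 32-byte blobs carry no indication of what they are, while a hypothetical type-and-length-tagged encoding makes a DH output and a KEM secret impossible to mix up. The tag values are invented for illustration.)

```python
# Toy illustration: a bare 32-byte blob could be a fourth DH output or a
# (truncated) Kyber secret; nothing in the bytes says which.
mystery_blob = b"\x00" * 32  # DH output, or KEM secret? Can't tell.

# Tagging every field with a type byte and a length removes the ambiguity.
def tagged(tag: int, data: bytes) -> bytes:
    return bytes([tag]) + len(data).to_bytes(2, "big") + data

TAG_DH, TAG_KEM = 0x01, 0x02
kdf_input = tagged(TAG_DH, b"\x00" * 32) + tagged(TAG_KEM, b"\x11" * 32)
```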

David:

So, in the store-then-decrypt threat model, how does 3DH fit in? Like, is it interactive enough that... does it effectively break the same for 3DH, or would you need to be, like, a man in the middle?

Deirdre:

uh, so this is for, uh, non active

David:

Yeah, well, the prekeys case would devolve to, like, effectively store-and-decrypt, but

Deirdre:

Yes. Uh, for

David:

non prekeys, I don't

Deirdre:

I think at this basically, Oh shit. No, I don't remember. Um,

David:

you need to be an active attacker, or could you decrypt a transcript? That's an interesting question. I don't have the answer

Deirdre:

So what they did when they upgraded to PQ is: they have the elliptic curve prekeys. The long-existing triple Diffie-Hellman was, you've got your identity keys. You have your public identity key, and this is the thing that you kind of smush together with your partner's public identity key and do the fingerprint compare. That's up there. You've got your elliptic curve prekeys that are signed by the Ed25519 equivalent of your identity key and uploaded to the server, and then you do an ephemeral one when you're doing your first setup. So you do a Diffie-Hellman between your ephemeral, this is if you have an ephemeral, and if you don't have an ephemeral, you have like the last-resort prekey or something like that that gets shared with other people. And this is how you do it: you do Diffie-Hellmans between your identity key and the ephemeral, and the other identity and ephemeral, and the ephemeral and the ephemeral. This is the triple Diffie-Hellman. The PQ one additionally uploads Kyber prekeys along with the elliptic curve prekeys. Those are also signed by the elliptic curve identity keys. They're not signed by a post-quantum equivalent. And then you decapsulate your Kyber shared secret and include it in your KDF. This is all trying to protect from someone just listening, capturing all the public traffic, and storing it and decrypting later. I think the falldowns for, uh,
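
(A hedged sketch of that hybrid combination: the classical DH outputs plus the Kyber shared secret all go into one KDF, so a future discrete-log break alone doesn't recover the session key. The function name, label, and inputs here are placeholders, not Signal's actual KDF schedule.)

```python
# Hedged sketch of the hybrid derivation; labels and names are ours,
# not Signal's actual KDF inputs.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_session_key(dh_outputs, kyber_shared_secret):
    # Classical DH outputs (3 or 4 of them) plus the KEM secret are fed
    # into one KDF, so breaking discrete logs alone is not enough to
    # recover the session key later.
    ikm = b"".join(dh_outputs) + kyber_shared_secret
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"PQXDH sketch").derive(ikm)
```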

David:

said it backwards. 3DH, like, if you have a transcript, you just take... ten years in the future, you take your DLOG and, like, you would have the plaintext. But for prekeys, you would need a quantum computer now to, like, upload fake ones, which is why they can be signed by.

Deirdre:

If you store everything, and at some time in the future DLOGs just evaporate, with the current old-school Signal you'd just be able to derive all the traffic all the way down, if you captured everything, for the triple Diffie-Hellman, I think. There's a little bit more work, but whatever, because of the double ratchet. Now with KEMs, as long as you are doing your PQ triple Diffie-Hellman with your KEM, your Kyber, that also gets fed into your KDF to start your double ratchet. So you would also need to be able to fetch that out. But they had some other cute little bugs too, although I'm reading the... Not

Thomas:

forward secrecy thing

Deirdre:

forward secrecy thing, but, oh, sorry, yeah, yeah, the weak post quantum forward secrecy. Yeah.

Thomas:

It was earlier. I said the prekey thing was identity hiding, and I think it might just be forward secrecy, either way... Wait, hold on. I thought I understood the weak post-quantum forward secrecy thing. Everyone, this was published today; we're reading it on the fly as we go, right? But like, the one-time post-quantum prekey is, like, in the good case, in the run where you've got enough prekeys to run the whole protocol. And the signed public key thing, the signed post-quantum public key thing, is the, like, the last-resort

Deirdre:

They're all signed, but basically they keep the last resort key and they may use it for multiple parties, I think, or something like that, because they need something up there or else they can't start a conversation at all.

Thomas:

and they're signed by

Deirdre:

They're signed by the, yeah, the Ed25519 equivalent of their identity key. This is another funny thing in the footnotes, where they're like, so, there's no, like, security notion of literally having your signing key and your Diffie-Hellman identity key be the same key, because they are, and, like, Signal's done this for a very long time, but it's also like, it always felt a

David:

They're, they're doing a separate trick, not just 3DH, they're doing the, like, the XEdDSA transformation.

Deirdre:

And, you know, that seems to work okay, except there isn't actually, like, a formalized security notion of doing that. Usually what you do is you have a root secret and some sort of key derivation tree, and you would, like, get a signing key from that root secret and a Diffie-Hellman key from that root secret. Not literally have them be... there is no root; they're just different versions of each other, because that's a thing you can do with Montgomery and Edwards. Yeah. And it's just kind of funny, because they're like, because of this, we have to just do a hack in our model to make everything go through and just pretend that they are separate, otherwise the proofs just wouldn't go through. So that's funny to me, that that just kind of falls out. Like the prover, the formal modeler, is like, what do you mean you don't have two different keys? The proof tool wants to kind of tell you that they should be different. And, like, I don't know if there's just been no pressure to, like, synthesize a security notion of what if they are the same key, and, like, what does that mean? But whatever,
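
(A sketch of the root-secret alternative being described: derive independent signing and DH keys from one seed with domain-separated HKDF labels, rather than reusing a single key in both roles. The labels are invented for illustration.)

```python
# Sketch: one root secret, two domain-separated derivations, so the
# signing key and the DH key are independent. Labels are invented.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

root_secret = os.urandom(32)

def derive(label: bytes) -> bytes:
    # Fresh HKDF per call; the info label is the domain separator.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=label).derive(root_secret)

signing_key = Ed25519PrivateKey.from_private_bytes(derive(b"identity/sign"))
dh_key = X25519PrivateKey.from_private_bytes(derive(b"identity/dh"))
```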

David:

Historically, this has been, like, bad.

Deirdre:

right. But

David:

I guess I'm not coming up with any specific examples off the top of my head. Sort of the old-style RSA TLS key exchange, but,

Deirdre:

Right. Yeah, we kind of talked about the, um, kind of, when you're putting all these things into your KDF, you might have a triple Diffie-Hellman, you might have a four-Diffie-Hellman, depending on what prekeys and stuff are available to you, whether someone's online to do some more ephemeral crap. Or in the PQ setting, you're doing three or four classical Diffie-Hellmans plus a KEM thing that lets you get a shared secret from the KEM. And the formal model basically spat out that, like, there's a little bit of protocol-specific information that you feed into the KDF for all of these cases, but the KDF input does not change depending on whether you do triple Diffie-Hellman, four-Diffie-Hellman, triple Diffie-Hellman plus a Kyber, four-Diffie-Hellman plus a Kyber. And the formal model was like, you should not do that, because basically you took something that was pretty secure in the classical setting, the original triple Diffie-Hellman protocol, and then you changed it to add a new thing, and you may have introduced a security issue that the formal model picked up on that you didn't have in the first place. Even though you're taking Kyber, which is good and post-quantum and secure, and you're taking triple Diffie-Hellman, which is good and secure in its own way, and, like, HKDF is generally considered fine and good and all this, you can put pieces together that are all independently considered good and secure, and you can combine them in an insecure way, and the formal model basically was yelling about that. So, that's an interesting one.

Thomas:

This is kind of what I was thinking earlier when I said, like, the root of all evil in the formal model here, right. None of these are real practical Signal vulnerabilities, right? But in the formal model here, the big complexifier is that the classical cryptography in Signal might be working with three Diffie-Hellman shared secrets, and it might be working with four, depending on the prekey situation, and then you may or may not have the post-quantum situation, right? So there's this weird variability, right? And so when you think about plugging, like, the purpose of a KDF is just to glue that shit together, right? Like, I have three keys, I have 50 keys, I have 100 keys, what the fuck ever, I feed them to HKDF, I get, like, a fixed output for that, and I move on, and life is happy, right? That's the beautiful thing about HKDF, is it patches over all those details,

Deirdre:

Mm hmm.

Thomas:

But, like, you have an ambiguity there of, you know, do I have three classical and a, you know, a post-quantum, or do I have four classical and a post-quantum, or classical and two post-quantums because I have a post-quantum prekey, or not, right? And in the formal model, it's like the input to the KDF appears to just be the keys, right? So nothing disambiguates those cases. So like the formal model is fucked at that point, right? Everything is cross-protocol, because ultimately the whole point of all these protocols is to get fed into HKDF to, you know, spool out your shared secrets, right? But in reality, there's this blob of metadata, the infoblob or whatever, that encodes, like, what the keys are, and also a bunch of other random metadata in the protocol, and the infoblob disambiguates all that stuff, so it's not really, in any practical sense, a problem here. But then the infoblob itself is not formally specified, so, like, now Signal knows, oh, we should super-specify this infoblob, because it's actually a load-bearing part of... So it's interesting that, like, that blob of info stuff there, like, now we recognize that's load-bearing.

Deirdre:

Well, yeah, yeah. The info blob basically says, like, I am Signal, I am using Curve25519, I am using Kyber, and, like, that's it, and it did not vary. Like, it basically says, I am either supporting post-quantum or I'm not. It did not encode, I am doing three Diffie-Hellmans and a Kyber, or I'm doing four Diffie-Hellmans and a Kyber, or I'm doing three, you know, whatever. It did not encode the fact that this thing will vary based on the different kinds of keys and the different numbers of keys that are being fed into the KDF. And that would be the cross-protocol attack part. So, it's interesting.
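
(A toy version of the fix being suggested: make the KDF info string spell out which run of the protocol produced the inputs, so a three-DH transcript can never collide with a four-DH or four-DH-plus-Kyber one. The label format is invented for illustration.)

```python
# Toy fix: the KDF info string encodes exactly which protocol variant
# produced the inputs. The label format is invented for illustration.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def session_key(dh_secrets, kyber_ss=None):
    info = b"SCW sketch|ndh=%d|pq=%d" % (len(dh_secrets), kyber_ss is not None)
    ikm = b"".join(dh_secrets) + (kyber_ss or b"")
    # A 3DH run and a 4DH(+Kyber) run now derive under different info
    # strings, so their outputs can never collide.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=info).derive(ikm)
```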

Thomas:

And then there's like a final finding here, which is the KEM re-encapsulation attack, which I will summarize as, I will summarize as saying: I don't understand this at all.

Deirdre:

This is kind of cool, because this is where KEMs and Diffie-Hellman key exchange kind of start to, like, skew, and you have to think about them slightly differently. So if you're doing a Diffie-Hellman, you give someone your public key, someone else gives you their public key, you combine the two public keys and you spit out a shared secret after doing some math with your secret keys. So in theory, there is no way to agree on a shared secret unless both of your public keys have contributed something. With a KEM, you're using the public/secret key pair thing to do encryption of a shared secret, and then the other side is going to decrypt the ciphertext of the shared secret using their secret key. It's sort of more like public key encryption than really anything close to a Diffie-Hellman. And what they said is, basically, there may be an issue where you are using the shared secret that you get from your KEM, but you aren't committing to the specific Kyber public key that was used in the KEM that encrypted the shared secret. The shared secret doesn't have anything to do with the secret/public key pair; it's just randomly generated and then handed over with this kind of public key encryption stuff of the KEM. They're basically saying that you cannot just rely on the indistinguishability-under-chosen-ciphertext-attack security of the KEM; you have to commit to the public key that you used to do that encryption as well. And this is not a thing that you have to think about when you're just doing kind of classical Diffie-Hellman, but you do have to think about it when you're shoving a KEM into a protocol that was completely designed around Diffie-Hellmans in the first place. And I forget what the actual attack is, but this is not the same as committing to your public key in something like a Schnorr protocol, or, you know, turning your identification protocol into a signature scheme. But it smacks of the same sort of thing: you need to commit to the public parameters of this scheme, this, like, transcript that you're doing, and in this case, in this protocol, the public key of the KEM is the thing that you have to commit to before you start feeding the shared secret you encrypted with the KEM into your KDF or whatever, because things can vary and stuff. The main issue here is that the compromise of a single PQ public key, so a Kyber public key, enables an attacker to compromise all future KEM shared secrets of the responder. And this is even after the responder deleted the compromised Kyber public key. And it can be carried out without violating the indistinguishability-under-chosen-ciphertext-attack assumption. So, yeah, you need to commit to the public key to avoid it.
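
(A sketch of what committing to the KEM public key might look like: bind the Kyber public key and ciphertext into the hash alongside the shared secret, so a swapped or replayed public key changes the derived value. The function and argument names are placeholders, not the paper's construction or Signal's actual fix.)

```python
# Sketch of committing to the KEM public key; this is our illustration,
# not the construction from the paper.
import hashlib

def kem_bound_secret(kyber_public_key: bytes, kyber_ciphertext: bytes,
                     kyber_shared_secret: bytes) -> bytes:
    # Length-prefix each part so the concatenation is unambiguous, then
    # hash; swapping in a different public key changes the output even
    # if the raw shared secret were somehow reused.
    h = hashlib.sha256()
    for part in (kyber_public_key, kyber_ciphertext, kyber_shared_secret):
        h.update(len(part).to_bytes(4, "big") + part)
    return h.digest()
```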

David:

oops, hehehe,

Thomas:

It's pretty neat, right? Like, none of this matters in any practical sense, even in, like, the rodents-of-unusual-size sense where classical cryptography stops working; like, Signal still survives that because of the way it's actually implemented. But as a case study of, like, I've got a really good working authenticated key exchange and transport protocol, and it's like, can I just plug Kyber into that and have it go? Like, it's tricky. It's tricky in ways that you kind of see coming, so, like, encoding ambiguities and stuff like that. But it's still, it's a neat case study. So

Deirdre:

Yeah. I'm very happy that this analysis happened so quickly, because a bunch of crypto people, cryptographers, looked at this scheme as written down. They basically took the original triple Diffie-Hellman specification, which was fine, it was pretty good. I had looked at it years in the past, but then I came back to it years later and I was like, hey, there's a bunch of stuff here that I would improve generally, if you intend for anyone to ever formally model this or, you know, use this to implement Signal from scratch and not look at your code, because things are underspecified, like the encoding stuff and the KDF separation stuff. And basically the formal modeling of it confirmed those things, because some of these things just fell out of the original specification, not the PQ updates, but then you put the PQ updates on top of it, combined with, like, the KEM thing where you have to commit to the public key, and the fact that you are extending that KDF even more and it's not varying according to the number of things you're doing, like blah, blah, blah. It's very pleasing that, one, those things were discovered so quickly by just doing some formal modeling with CryptoVerif and ProVerif, which I am not an expert in, but, uh, I'm glad that that happened. And then this got shoved out to Signal users pretty soon. Um, one thing they noted is that these things are mostly falling out of the Signal triple Diffie-Hellman and PQ Diffie-Hellman specs. And then they went into the code implementation that actually gets shipped out in Signal, and they're like, hey, we have these encoding ambiguity issues, and they're like, oh, we don't actually have any encoding ambiguity issues in the code, but they are in the spec. And that's, that's one of those things that's kind of annoying about, like, you know,

David:

Classic Signal. Like, like, oh, we have this spec, but we don't actually quite follow this spec, cuz we fixed all these other things about

Deirdre:

Yeah, and this is like one of the

David:

Stop asking me questions.

Deirdre:

This is one of the inherent issues of like, if you don't update the spec, is it any good? And then it turns out if people are looking at your spec and not at your code, and like, especially if you only have a spec and you don't have open source code, that might happen more often than not.

David:

only have GPL code.

Deirdre:

like people can go look at the GPL code that is, you know, libsignal or whatever, all they want, but then, like, you have to, you know, pray that you aren't

David:

Then you've been tainted.

Deirdre:

you have to pray that you're not accidentally, like, violating the GPL by having once looked at the GPLed version of libsignal, because just looking at the Signal specification is not exactly nailing down every single detail the way that the code seems to do better. This is extremely my shit. This is catnip. I love this shit.

Thomas:

Also, it turns out that all the INRIA, I don't know how you pronounce INRIA or whatever, but like all the INRIA people that were doing formal verification have their own company now called Cryspen. So there's a company called Cryspen, which is like Karthikeyan Bhargavan and a bunch of other INRIA people that were doing formal verification. So if you want your protocols formally verified, go call Cryspen, and they'll formally verify your stuff. Say Security Cryptography Whatever sent you for a 15 percent discount.

Deirdre:

I don't know if they'll actually give you a discount, but they may thank us.

David:

The real question is, for extra money, will they tell you what's fine?

Thomas:

They absolutely will, by the way.

Deirdre:

Yeah, Cryspen is pretty cool, because they're doing a lot of Rust, they're doing hacspec, which I've talked about before. And they're helping with modeling OpenMLS, the Rust implementation of, um, the Messaging Layer Security protocol, which is a big mother of a, you know, thing. So they're doing a lot of cool stuff. Give them business. They'll do a good job.

Thomas:

Tell them Thomas sent ya.

Deirdre:

Tell them Thomas sent you. Oh, and the encoding shit. I wanted to talk about this the other day, but there was a cool paper called Comparse from a related person that worked with Karthik, which is basically formally modeling how to do secure message encodings and message formats, which is a thing that we've talked a little bit about, but is a thing that bites you in the ass all the time when you're implementing cryptography protocols. Which is like, how do you send these, like, important cryptographic blobs of material over the wire in a way that doesn't, you know, have parsing issues or ambiguity issues or, you know, weirdness over the wire that a kind of symbolic-level attacker can leverage. And, from my perspective, it just seemed to be a bit of folk knowledge that's kind of handed down, like, okay, you do fixed-length encodings, and if they're not fixed length, you have, like, a specific byte that tells you how long the next field is, and, like, what type it is, and that you cannot vary, and, like, shit like that. Because it's all kind of folk knowledge of attacks that you've seen before. And so this is just sort of lessons learned of how to do secure message encodings and formats, passed down, but not with any sort of formal notions underpinning it, and the paper Comparse basically does that for the first time, where they have a formalized framework of notions of, like, how you create secure message formats, and what those notions are, and I think there's four main ones of them. I loved it, and I think it should be, you know, must-read reading for any secure protocol implementer, or anyone that has to, like, take the secure protocol bytes of whatever crypto protocol they're doing and send them to somebody else. I think it's important reading, and maybe we'll talk to them more about it some other time.
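
(A minimal sketch of that folk advice: every field gets an explicit type and length prefix, and the parser rejects truncated input, so no two messages share an encoding. This is generic type-length-value framing, not Comparse's formal framework.)

```python
# Generic type-length-value framing as a sketch of the folk rules:
# explicit types, explicit lengths, strict parsing.
import struct

def encode_fields(fields):
    # fields: list of (type_byte, payload_bytes) pairs.
    out = b""
    for ftype, data in fields:
        out += struct.pack(">BH", ftype, len(data)) + data
    return out

def decode_fields(buf):
    fields, off = [], 0
    while off < len(buf):
        ftype, flen = struct.unpack_from(">BH", buf, off)
        off += 3
        if off + flen > len(buf):
            raise ValueError("truncated field")  # reject ambiguous input
        fields.append((ftype, buf[off:off + flen]))
        off += flen
    return fields
```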

David:

So you're saying I shouldn't use Python pickle for

Deirdre:

No, no, no, don't do

Thomas:

That was pretty weak.

David:

was a weak one. Yeah, that was a weak one.

Thomas:

I thought it was a good read. I had no problem reading it while you guys were talking about the browser security model. I recommend everyone go read it.

Deirdre:

The blog post, the Cryspen blog post. That one? Yeah, cool.

Thomas:

There's a paper. I assume the paper is equally easy to read.

Deirdre:

I think they, they have their blog post. They have their actual models in ProVerif and CryptoVerif. I don't know if they're doing much more beyond that. Maybe if they're doing a paper, I haven't seen it yet, that might be it. There's a nice README in the, in the github repo. Yeah,

David:

All right, Deirdre, take us

Thomas:

We have... I was just gonna say, we have an excellent next episode coming up. I'm very psyched about it. I'm not gonna say what it's about, but it's gonna be great. But, yes, this was fun. Good talking to you. Take us away, Deirdre. I will not interrupt

Deirdre:

Nice little teaser. Yes, that'll be happening in a little bit. Cool. Security Cryptography Whatever is a side project from Deirdre Connolly, Thomas Ptacek, and David Adrian. Our editor is Nettie Smith. You can find the podcast on Twitter at @scwpod and the hosts on Twitter, but we're not really on Twitter that much. I'm on Bluesky, @durumcrustulum.com on Bluesky. You can buy merchandise at merch.securitycryptographywhatever.com. Thank you for listening. Bye, hippopotamus!

Issues With Encrypted Jabber Communications
App and Web Security Challenges
Benefits and Limitations of Web Encryption
Benefits and Challenges of Browser-Based Cryptography
Web App Security and Distribution Models
Web Security and Signal Key Exchange
X3DH Protocol and Signal's Key Exchange
KEM Re-Encapsulation Attack and Secure Encryption