7 day SXG expiration? Why? #597
Why do you need signed exchanges? A bundle of unsigned exchanges would not expire.
This is a critical component for the security of signed exchanges. In the TLS case, you have an online TLS negotiation (using nonces exchanged by client and server), delivered over a fresh TCP (or UDP, if using QUIC+HTTP/3, but I digress) connection, negotiated using a relatively fresh DNS response, and itself routed through fresh and stable BGP information. The notion of an “origin” rests on this very concept of “approximate freshness”.

The choice of 7 days was not arbitrary; it was the effective upper bound of detection in the event of a revoked certificate. CRLs and OCSP, which populate browser revocation data directly or in aggregate, have been defined at 7 days by browsers. It’s easy to see that the entire security premise of signed exchanges, namely the signed part, disappears if that window is extended: a malicious entity could compromise the security of the origin beyond the current capabilities of the Web Platform if those 7 days were extended. Conversely, if browsers were to take steps to further reduce OCSP and CRL validity periods, it would also be reasonable to expect signed-exchange lifetimes to be similarly reduced.

Alternatively, it’s possible to imagine creating fully unique origins, with no bearing whatsoever on the domain name system, if things needed to be extended longer. It is precisely because signed exchanges try to assert a DNS-based origin, which today is negotiated via TLS, that it is necessary to preserve the properties of the platform today. Solving Zooko’s Triangle for such exchanges would be a sizable challenge, but it would also have the benefit of no longer needing to rely on CAs to issue certificates, as CAs are used only for authority over DNS names, and are not needed if not using that naming. However, that’s a rather significant task, and almost an entire exercise in its own right. Without foreclosing on that possibility, because it is a use case we would like to support, it was not the primary use case, and the current tradeoff, of requiring fresh authorization within a window that is no less secure than TLS today, was and is acceptable, for now.
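To make the 7-day rule concrete, here is a minimal TypeScript sketch of the freshness check the signed-exchanges draft describes: a signature carries `date` and `expires` timestamps, `expires` may be at most 7 days (604,800 seconds) after `date`, and the exchange is only usable while the current time falls inside that window. The types and function names below are illustrative, not from any real library.

```ts
// Minimal sketch of the freshness check a verifier applies to an SXG
// signature: the stated lifetime (expires - date) is capped at 7 days,
// and the exchange is only usable while `now` falls inside the window.
const MAX_SIGNATURE_LIFETIME_SECONDS = 7 * 24 * 60 * 60; // 604,800

interface SignatureParams {
  date: number;    // Unix timestamp when the signature was created
  expires: number; // Unix timestamp when the signature stops being valid
}

function isSignatureFresh(sig: SignatureParams, now: number): boolean {
  // Reject signatures whose stated lifetime exceeds the 7-day cap,
  // regardless of the current time.
  if (sig.expires - sig.date > MAX_SIGNATURE_LIFETIME_SECONDS) return false;
  // Reject signatures outside their own validity window.
  return now >= sig.date && now < sig.expires;
}

// Example: a signature made now is unusable 8 days later, even if the
// content is unchanged and the certificate is still valid.
const signed = Math.floor(Date.now() / 1000);
const sig = { date: signed, expires: signed + MAX_SIGNATURE_LIFETIME_SECONDS };
console.log(isSignatureFresh(sig, signed + 6 * 24 * 60 * 60)); // true
console.log(isSignatureFresh(sig, signed + 8 * 24 * 60 * 60)); // false
```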
My apologies, I have restructured & expanded my concerns. I believe I have completed all major revising. Thank you for your quick comments, which I now attend to.

@dauwhe Many of my works require a SecureContext to function. For example, the first ⌛💣 scenario hypothesizes a course-charting page, which would undoubtedly rely upon geolocation services to function. My understanding is that for this page to work, it would need signed resources. [Edit+4: I'm still trying to understand this better, but it seems like bundled unsigned resources belong to a distinct suborigin: see #583. I don't know whether this suborigin retains a secure context or not, but it certainly imposes a big barrier & complexity if one doesn't have SXG. Still trying to understand better!!]

@sleevi As ever, thank you for your enormously hard work securing the web. And double thanks for your fast & speedy reply. And a whole round of thanks for your comprehensiveness & thoroughness in reply. [Edit+4: I still find the answer you give to be somewhat impossible. It rings true & sound, & yet this immovable wall you cite seems like no defense against the obvious, clearly necessary & vital exceptions that must be made for the unstoppable force of users & their lives, which we must somehow bend to. How do we allow users the freedom to step outside the normal operating territory of the connected, well-behaved internet, when they must? When no other options are available, when they are far away from the warm glowing noosphere we all rely on so heavily: are those users to be abandoned? I answer no, they are not. The internet must serve users, even when they find themselves adrift from it.]
It’s a bit harder to track what changed and see whether the response above addressed your primary concern (or at least, what was originally presented). A few small notes and corrections, rather than a full response:
@sleevi your reply was good. I mostly expanded & edited. Thanks, apologies.
I’m not sure how this follows from signature validity windows being 7 days? You can still batch update. However, I do think this is veering away from “usable offline” (e.g. a train or a plane) and more into “persistently offline” (e.g. a peer-to-peer/sneakernet web). Ultimately, I suspect progress here may require more definition around the “offline” scenarios. It would also stand to reason that this is a constituency the Web doesn’t address today; that is, users on modems will have a bad time on the Web writ large, period, when a single article on a news site can be 5-50 MB of traffic.
The signatures can be delivered from anywhere, and URLs can be extended for sourcing signatures, IIRC; @jyasskin can confirm. This is relevant in that if your threat model is “website knows my IP address as I visit it from different locations”, which today is a given by virtue of TLS, then SXG is a strict improvement over that, by allowing an anonymizing intermediary to deliver the SXG and signature to you directly. This is a form of “privacy-preserving preload”.
This seems like it might be worth splitting out as a different issue. The internet is, by current design, fairly end-to-end, and so when you access a server, it learns how to route a response back to you. SXG doesn’t change that, and the validity period of a signature seems orthogonal to the discussion of “If I browse a website, it learns how to route back to me, and if I use SXG, it could... also learn how to route back to me”. If this is your threat model, it seems also relevant to disable HTTP caches and service workers, at a minimum, since those also involve degrees of revalidation. Not trying to be glib or dismissive, just that I suspect this is something very different than the validity period discussion, and might be either fundamentally about things like distributed web or particular use cases.
Unlike CDNs, which require you to canonically configure the CDN as authoritative via DNS, and thus serve your users directly through the CDN, SXGs can also be packaged externally by a provider, by giving them an SXG certificate. This isn’t necessarily “more secure”, since they still have signing keys, but it is remarkably “less complex”, because your server and configuration are still directly controlled by you. The SXG-DN (for lack of a better word) just packages your content, if you do delegate to them.

The validity window, while reasonable to think of in absolute terms of “downtime”, is somewhat misleading, precisely because SXG requires good automation. If automation is constantly a source of anxiety or failure, it’s not good automation, and SXG can’t be blamed for that. Further, the failure mode, of going back to the authoritative server, limits the harm caused in general. It’s only the “truly, persistently offline” case where this becomes problematic. Like I mentioned, for that case, it’s likely that we’d need a separate certificate policy entirely, to go with a separate (non-DNS-based) origin, just like unsigned bundles. Granting the ability to serve on behalf of a DNS origin for a period greater than 7 days, however, would be a serious security regression.
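As a concrete illustration of the kind of automation involved, here is a minimal sketch of a scheduled re-signing job, written as a Node/TypeScript script that shells out to the reference `gen-signedexchange` tool from this repository. The flag names are from memory and should be checked against `gen-signedexchange -help`; all paths, URLs, and filenames are placeholders.

```ts
// Minimal sketch of a scheduled re-signing job (run from cron or a timer).
// Assumes the reference `gen-signedexchange` CLI is installed; flag names
// are from memory -- verify against `gen-signedexchange -help`.
// All paths and URLs below are placeholders for illustration.
import { execFileSync } from "node:child_process";

const pages = ["index.html", "charts/atlantic.html"]; // hypothetical content

for (const page of pages) {
  execFileSync("gen-signedexchange", [
    "-uri", `https://example.com/${page}`,
    "-content", `./site/${page}`,
    "-certificate", "./certs/sxg.pem",
    "-privateKey", "./certs/sxg.key",
    "-certUrl", "https://example.com/certs/sxg.cbor",
    "-validityUrl", `https://example.com/${page}.validity`,
    "-expire", "168h", // the 7-day maximum; longer values are rejected
    "-o", `./dist/${page}.sxg`,
  ]);
}
console.log(`Re-signed ${pages.length} exchanges; schedule this daily or so`);
```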
Also note that once a particular user has loaded a signed web package, it can install a service worker that can outlive the expiration of the original signature, in the same way that a service worker or HTTP cache entry can outlive the expiration of the TLS certificate that protected the connection that delivered it. The 7-day limit "only" hurts the ability to share the content with a new user.
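For concreteness, a minimal cache-first service-worker sketch shows why this is so: once installed, the worker serves from the Cache API without consulting signatures at all, so previously loaded content stays usable for that user. This is generic Service Worker API usage, nothing SXG-specific, and the cached asset paths are invented.

```ts
// Minimal cache-first service worker sketch: once a signed package has
// installed this worker and populated the cache, responses are served
// locally, with no further signature (or TLS certificate) checks involved.
const CACHE_NAME = "offline-book-v1";

self.addEventListener("install", (event: any) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll(["/", "/reader.js", "/book.html"]) // hypothetical assets
    )
  );
});

self.addEventListener("fetch", (event: any) => {
  event.respondWith(
    caches.match(event.request).then(
      (cached) => cached ?? fetch(event.request)
    )
  );
});
```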
I have a fuller reply I have been working through, but it's taking some time. There is one point that I do wish to put out here asap:
I used the modem as one example of a person who is not persistently online, someone who is "semi-connected". Even at their trickle-slow speeds, modem users actually have a radically better experience than the other semi-connected folks whom the SXG 7-day limit will grievously degrade: those who have to bus, bike, walk, ride-share, or rickshaw their way to an internet uplink. We're seeing the impact of this right now very strongly with the coronavirus: families that can't afford internet connections are suffering radical hardship as their children go to extraordinary lengths to "attend" school, sometimes by commuting to the school itself to attend from the parking lot! These users are torn between content that, if shared, might not be shareable or in many cases even usable by the recipient for very long (it depends on whether there's a full-blown JS framework powering an offline-capable web experience for the content), and the time-consuming, slow, manual process of establishing a connection.

The semi-connected folk of the world deserve to keep the web content they treasure & want to share for longer, without us (SXG) imposing on them the chore of renewing it every couple of days. I was incorrect that 2. ⟳∞ is a redownload trap, thanks to updateable signatures, but 2. is a ⟳∞ update-signatures trap, and one that, with content actively degrading in usefulness over the span of a week, is quite the burden to the many, many users that WebBundles very explicitly seems intent to serve & help. This does not seem like a persistently-offline issue to me. A hard-and-fast 7-day SXG limit seems like something the most direct beneficiaries of this technology will feel the impact of every day of their lives, with nervous, frightened energy.
I agree that the 7-day signature-expiration upper bound is likely to be a usability problem in practice. It's a compromise between usability, as you've noted, and security, as Ryan noted. Several folks want to shift the balance even more toward security, by requiring an online connection in order to verify the source of the content. Other folks want to let content authors pick any expiration time, and trust that they'll pick something that adequately protects their users. So far, I haven't seen a compelling argument to change in either direction (including this thread), but I'm also not firmly convinced that 7 days is The Right Choice. To change it, I think we'll need someone other than me to write an essay weighing the tradeoffs on both sides, and then a discussion on at least the [email protected] mailing list and probably at a live IETF meeting. (The IETF isn't the only plausible body for this decision; browsers might wind up imposing stricter limits for uses on the web. But we have a WG there, and a lot of the right people are involved there, so that's where I suggest we start.)
This doesn't seem the right place to start, either? The IETF doesn't set policy, as a rule, and this is very much tied to certificate policies, right? |
Good point. So it will wind up on the browser side, where we don't have automatic meetings or a mailing list. I'll be registering for a TPAC slot this year, so that's a good place to present any essays that arrive by then. |
I think there's another interesting possibility for intermittent/absent links, which is that the validity URLs with new signatures can be distributed as SXGs themselves, allowing a mostly/totally offline client to get its SXGs & bundles revalidated by push rather than pull, reducing the burden on the publisher. In that situation, I think there'll be a demand for "big" bundles, both for efficiency (making the revalidation SXG smaller) and privacy (hiding what, specifically, you're interested in within a bigger set of resources).
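To sketch what pull-based revalidation looks like today: each signature carries a validity-url, and a client holding a stale exchange can fetch just that (small) validity data, which can carry an updated signature, without re-downloading the content body. A hypothetical TypeScript outline; the `StoredExchange` shape and the parser stub are invented for illustration.

```ts
// Sketch of signature revalidation via the signature's validity-url:
// the client re-fetches only the small validity data, not the content.
interface SignatureInfo {
  expires: number;     // Unix timestamp
  validityUrl: string; // the signature's validity-url parameter
}

interface StoredExchange {
  url: string;
  body: ArrayBuffer;
  signature: SignatureInfo;
}

// Stand-in for decoding the CBOR validity data the draft describes;
// actual decoding is omitted in this sketch.
function parseUpdatedSignature(raw: ArrayBuffer): SignatureInfo | null {
  void raw; // a real client would CBOR-decode `raw` here
  return null;
}

// Refresh a stored exchange by fetching only its validity data, reusing
// the cached body as-is.
async function revalidate(ex: StoredExchange): Promise<StoredExchange> {
  const res = await fetch(ex.signature.validityUrl);
  if (!res.ok) return ex; // keep the stale copy; it may still be in-window
  const fresh = parseUpdatedSignature(await res.arrayBuffer());
  return fresh ? { ...ex, signature: fresh } : ex;
}
```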
How can I help? I can try to spend time cutting my existing work down to something more manageable & direct. I'm very concerned about giving users the power to take things with them, & having them find themselves surprisingly, unexpectedly locked out. I don't think any level of warning or notification to the user is sufficient. I think the user has a rightful desire, which we ought to respect, to access the signed offline content that they opt to trust.
Crud, I missed it! I hope you all had a good session. Thank you so very much for the mention. I thought requirements for participation were much much higher. |
My issue #621 may be related. Android apps use signatures, and your phone doesn't just decide that your app has expired (yet..). Allowing an overall signature for an "unsigned" bundle, which has no expiration requirements, would mean that you could safely publish something lasting. The certificate used could either be domain-based, giving the bundle a proper origin, or self-signed, giving the bundle a random hex-string origin. These bundles would not need to use the same certificate used by the web server.

The risk of ever needing to revoke them, or even expire them, is much lower when the key can be kept in a safe on a smart card. If you can't trust someone to keep safe a key that is only ever used under manual supervision, you might not want to trust them whatsoever. With domain-based certs, one can always revoke the traditional way and create a new bundle (although this could still cause lost access, if one got the revocation notice but did not have bandwidth for the new bundle). These overall-signed bundles also resolve the issue of picking and choosing legitimate resources to form a vulnerable combination.
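As a rough illustration of the self-signed variant proposed here, such an origin could be derived from a hash of the signing public key, with no DNS involvement. This derivation is speculative and invented for illustration; no spec defines it, and the `package://` scheme is made up.

```ts
// Speculative sketch of a key-derived origin for a self-signed bundle:
// hash the signing public key and use the hex digest as the origin label.
// Invented for illustration; no spec defines this derivation.
import { createHash, generateKeyPairSync } from "node:crypto";

const { publicKey } = generateKeyPairSync("ed25519");
const spki = publicKey.export({ type: "spki", format: "der" });

const digest = createHash("sha256").update(spki).digest("hex");
const bundleOrigin = `package://${digest}`; // e.g. package://3f9a...

// Anyone holding the bundle can recompute this from the embedded public
// key, so the origin is verifiable without DNS or a CA.
console.log(bundleOrigin);
```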
@sleevi, regarding your comment:
Does Let's Encrypt support SXG? I was under the impression that SXG certificates are available exclusively from DigiCert.
@bitdivine That wasn’t my comment. That said, DigiCert supports ACME, so your existing, well-behaved ACME clients should support certificates from DigiCert, if configured properly.
Hello. One of my core interests in webpackage is allowing offline use cases. For example, I think of the 100 rabbits interactive creative design team. They sail around the world, & sometimes make multi-week crossings. I had hoped WebBundles would make using web systems viable for these sorts of trips, & it felt confirmed when I saw that the use-cases document contained offline browsing as a primary objective.
To my chagrin, I recently read that there appears to be an arbitrary 7-day limit after which Signed HTTP Exchanges (SXG) will be treated as "invalid". And that is from the time of signing, not the time of downloading. This short expiration window very much calls into question whether or not webpackage/WebBundles signed exchanges support the offline browsing use case.
1. ⌛💣
Let's return to the sailing model. One day you are sailing the sea, looking at your offline charting webpage, and the next day you wake up and all of a sudden the browser & page you are relying on to get you to land have decided that the signed HTTP exchanges they rely upon are no longer valid, and your page can no longer access the offline resources you both expected it to read. This is an extreme hazard, introduced by the dual problems of a lack of awareness of this ticking self-destruct clock, & an extremely fickle & short fuse before offline content reaches this self-destruction. Hikers, travelers, boaters, spacemen, disaster survivors, & other connection-starved folk will all be enormously encumbered by this short self-destruction fuse that burns within their browser.
This problem is particularly pernicious for those with inconsistent connectivity. If you live somewhere that experiences multi-day internet or power outages a couple of times a year, the chance of SXG being able to help you is highly questionable, & depends all too much on how recently you happened to visit a site. For example, if you read novels in, say, a Project Gutenberg offline book reader, but hadn't opened the site in 4 days, and you lose power, you will very quickly lose the ability to access or share the books you thought you were carrying with you. You couldn't predict or guess when you might need to re-download your content (unchanged for hundreds of years but requiring a re-download nonetheless, to freshen the SXG signature date) with new signatures.
In the ideal case, if I refresh my content (say it's a lengthy download), go to sleep (while the download runs), wake up, & hop on a 12-hour flight, and the sysop did everything in their power to give me as much time as possible, I've already potentially used up 15%+ of the offline time granted by SXG's remarkably short self-destruct fuse before I land. This is the ideal scenario, fully planned ahead of time, the best case.
2. ⟳∞
[+4 hours edit: I was not aware signatures could be updated, which resolves the download cost. I am relieved to hear this. I do still have reservations about the user cost of maintaining fresh data, that we will be driving people to a behavior that makes them compulsively connect & update. And I feel like there are highly concerning anti-privacy aspects lurking here.]
The case of losing internet access unexpectedly highlights a problem even bigger & worse than content unavailability: we are trapping the user in an endless cycle of redownloading content, creating a sense of fear that the content they choose to bring with them is forever only days away from expiring.
Using service workers, a user might safely read a book at their leisure. But with SXG, a user now has a new expectation for their web experience: that they can share their content too. Alas, to do so, they must continually re-download content. Not just 52 times a year, once a week, when the content expires, but ahead of time, so that if they do want to share the book they are reading or the band website they are enjoying, they can give a gift that will last as long as possible before it self-destructs. Rather than freeing users from connectivity, we have now created a new cycle that insists we regularly reconnect & refresh, generating enormous fear that the content we cherish most is ever expiring, that we must re-download or face negative consequences, that we will find ourselves stripped of this new ability we have grown to enjoy.
This is so striking, so remarkable an end scenario for the current version of SXG. The very users most in need, with fickle or periodic connectivity & likely also costly connections, whom we claimed to be helping, are instead bound, in order to enjoy the ability to share content, into a vicious cycle of continually re-downloading the same content that, pre-WebBundles, they could rely on to be cached.
3. 💢💻
[+4 hours edit: I had missed/forgotten that SXG is not required for a SecureContext-approved WebBundle, which makes SXG "voluntary", albeit still a thing I believe most users will want. Nonetheless, it is not in the critical stream for operators: I continue to wish small/indie operators had a less demanding time-scale to operate SXG on, but my alarm here is significantly less than the already fairly mild concern I originally felt.]
My third concern is for operators. Outside of all offline concerns, WebBundles are also a tool for, well, bundling. This fills a desperately felt hole in the JS ecosystem in particular, where EcmaScript 6 / EcmaScript 2015 modules have been the official language specification, but there has been no effective way to get many modules to a browser (short of rewriting EcmaScript modules into something different that is not modules).
To this end, we have created tools like Browserify & Webpack, which perform long, complex operations to convert our sea of EcmaScript modules into concatenated non-module JS, such that it can be shipped effectively & with cross-file compression.
The WebBundles use cases talk about these bundling concerns, describing these needs & desires. And the hope of many has been that WebBundles would free us from these complex & obfuscating machines, & give us a path to enjoy transporting bundled JS modules without weighty transpilation: transpilation that turns the modules we coders & webmasters work with every day into not-modules, which when run behave very much like modules but are something completely different, supported by their own purpose-built execution systems outside the browser's own module loading & running systems. Which sounds wonderful, like exactly the answer we've been desperately looking for: finally, it sounds like we can actually bring EcmaScript modules directly onto the web.
Except for the operational disadvantages. Historically these bundlers could produce static bundles that could be served easily & readily. Now, with this new 7-day expiration, instead of building a site & publishing it, operators have to either rebuild their bundles regularly with freshened signatures, or hand over their private keys to web hosts that will rebuild the signatures for them.
At a minimum, this means re-signing your bundles every 7 days. That alone imposes a great burden on those who enjoy "static" websites, something web operators used to be able to create & publish & forget about. But a weekly cron job to re-sign is not the end of the story for operators. If an operator wants to allow users to share the website, then it's up to the operator to regenerate signatures more frequently, such that the expiration date remains close to 7 days in the future even as time passes.
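The arithmetic behind that is worth spelling out: if the operator re-signs every `d` days with the maximum 7-day lifetime, a copy fetched at the worst moment (just before the next re-sign run) has only `7 - d` days of shareable life left. A small sketch; the intervals are illustrative numbers, not anything from the spec.

```ts
// Worked arithmetic behind the re-signing cadence: with the 7-day maximum
// signature lifetime, a copy fetched just before the next re-sign run has
// only (7 - interval) days of shareable life left.
const MAX_LIFETIME_DAYS = 7;

function worstCaseRemainingDays(resignIntervalDays: number): number {
  // A visitor may receive a copy signed up to `resignIntervalDays` ago.
  return MAX_LIFETIME_DAYS - resignIntervalDays;
}

for (const interval of [7, 3, 1]) {
  console.log(`re-sign every ${interval}d -> worst case ${worstCaseRemainingDays(interval)}d left`);
}
// re-sign every 7d -> worst case 0d left
// re-sign every 3d -> worst case 4d left
// re-sign every 1d -> worst case 6d left
```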
I see no reason why operators should face such sharply increased demands to continue serving content as they switch from Webpack to WebBundles. In many cases, I fear small websites will opt to turn their most treasured, most private keys over to external operators to offload the complexity, at great cost to the privacy & security of their own site, for which they no longer retain cryptographic responsibility. This seems like a deep regression for the web as a whole, and one that a much longer expiration would radically ease.
How and where did this 7-day expiration arrive in the spec? What justifies it?
On three accounts, this fantastically short expiration seems like a dangerous & arbitrary limitation. In the first scenario, offline users are suddenly cut off, frozen, after a short window, from the offline content that kept them from being dashed into rocks or left adrift at sea. In the second scenario, far more prevalent, many users, in fact the users most vulnerable & most in need of robust offline access, become bound to a tragic & enormously costly & energy-intensive never-ending re-downloading cycle that they can never escape: a (and I'm serious here) new technocratic limitation introduced to the web that feeds a vicious cycle of addiction & need & want & overall fear of the computing world. In the third case, we see enormous operational complexity introduced to being & remaining on the web, particularly cumbersome to smaller websites that have enjoyed static hosting for decades & now have to check their cron jobs at a minimum 52 times a year. I've known there was an expiration in WebBundles, but you know: I always assumed it was reasonable. All three of these scenarios are ones that users & sites will face with great pain on a regular, day-to-day time-frame imposed by this 7-day expiration.
I do not feel safe at all describing WebBundles or SXG as an offline browsing technology under these conditions. If operators wish to make content available longer, let them. If users wish not to keep content as long as an operator allows, let them. Yes, it would be good for sites to be able to freshen themselves, to not rely on aged content, but a short expiration does not seem like the appropriate way to enable that. I am terrified to know what answers might come, because I am not prepared to accept any counter-reply to scenario 2, the redownload trap, in particular, but I must ask: why? Why so short? This seems outright cruel to all involved; why would we not want a user to be able to browse a site offline a month later, or even a year later, if no other options are available & on hand for the user to turn to, & the certificate chain is good, & the content sits there on their drives or on a flash drive nearby? Surely we can act in the interest of the user here, and help them continue to benefit from the use of their assets for as long as the signing certificate chain is good, versus leaving them with no alternatives & without?
As a small indie systems operator I had hoped for WebBundles to help reduce some of my operating complexity by letting me package my JS modules effectively, but the price has become radically higher with these very short-lived expirations. But far more so, discovering this critically short expiration leaves me in a state of critical mourning for the user, the user I thought we were about to help. The offline use cases seem not merely weakened, but so unreliable & short-lived as to be an anti-pattern, as to be actively dangerous to folks wishing to use offline web systems. A system that forces ongoing, multiple-times-a-week re-downloading to enjoy the assurance that you can keep or share the data you want, the data that is meaningful to you, actively enslaves & degrades the human spirit in unconscionable ways, and will radically penalize the less connected folks of this earth who could otherwise best & most truly benefit from this technology.
Technology should have our back. It should permit, allow, enable. This expiration sets a time horizon forever near at hand, forever a trap we must watch, ready to spring shut on us.
I have been radically hopeful that an offline web, where we don't need regular connectivity, could come. I have hoped we could simplify the experience of running sites by giving operators good systems to ship modules without mangling. For these reasons, I have been a longstanding avid fan & supporter of WebBundles, deep in my heart, & in countless places online (recently). WebBundles have been an endless source of inspiration that the future is about to get radically better, and I have believed fully in the use cases & in WebBundles' ability to serve them.
This one constraint, though, has me falling apart completely. I am shattered. I have tried to maintain some distance from my opinions above, but in purely my own humble opinion, forgive me, I intend no personal injury nor attack, but my assessment is that with this present 7-day expiration, 🌐📦 is grossly antithetical to the web, to sites, &, most of all & our shared greatest concern (so say we all), to the user. I beg, even a month is scant time when the clock is ticking. I would like us to better 🏄 the offline 🌐 together, for longer, much longer.