I strongly disagree: inlining your entire CSS and JS is absurdly good for performance, up to a surprisingly large size. If you have less than 100KB of JS and CSS combined (a budget almost every content site can hit trivially, and one almost all should aim for), there's no question about it: I would recommend deploying with only inline styles and scripts. The threshold where it becomes more subjective is, for most target audiences, possibly over half a megabyte by now.
Seriously, it’s ridiculous just how good inlining everything is for performance, whether for first or subsequent page load; especially when you have hundreds of milliseconds of latency to the server, but even when you’re nearby. Local caches can be bafflingly slow, and letting the browser just execute it all in one go without even needing to look for a file has huge benefits.
It’s also a lot more robust. Fetching external resources is much more fragile than people tend to imagine.
1. Inlining everything burns bandwidth, even at 100KB per page load. (I hope your cloud hosting bills are small.) External resources can be cached across multiple page loads.
2. Best practice is to load CSS files as early as possible in the head, and to load (and defer) all scripts at the end of the page. The browser can request the CSS before it finishes loading the page. If you're inlining scripts, you can't defer them.
3. If you're using HTTP/2+ (it's 2025, why aren't you?[0]), the connection stays open long enough for the browser to parse the DOM to request external resources, cutting down on RTT. If you have only one script and CSS, and they're both loaded from the same server as the HTML, the hit is small.
4. As allan_s mentioned, you can use nonce values, but those feel like a workaround to me, and the values should change on each page load.
> Local caches can be bafflingly slow, and letting the browser just execute it all in one go without even needing to look for a file has huge benefits.
Source? I'd really like to know how and when slow caches can happen, and possibly how to prevent them.
[0] Use something like nginx, HAProxy, or Cloudflare in front of your server if needed.
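To illustrate point 2, a minimal sketch of the usual split (file names are illustrative):

```
<!doctype html>
<html>
<head>
  <!-- stylesheet requested as early as possible, so it downloads while the HTML streams in -->
  <link rel="stylesheet" href="/main.css">
  <!-- defer only works on external scripts; the attribute is ignored on inline ones -->
  <script defer src="/app.js"></script>
</head>
<body>
  ...
</body>
</html>
```

Deferred scripts execute after the document has been parsed, in document order, so they get out of the way of rendering without blocking the parser.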
Do you have data to back this up? What are you basing this statement on?
My intuition agrees with you for the reasons you state but when I tested this in production, my workplace found the breakeven point to be at around 1KB surprisingly. Unfortunately we never shared the experiment and data publicly.
In principle, you could imagine the server packing all the external resources that the browser will definitely ask for together, and just sending them together with the original website. But I'm not sure how much re-engineering that would be.
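HTTP/2 server push was one attempt at exactly this, but browser support has been withdrawn (Chrome removed it). The surviving mechanism is 103 Early Hints (RFC 8297), where the server names resources before the main response is ready so the browser can start fetching them; paths here are illustrative:

```
HTTP/1.1 103 Early Hints
Link: </main.css>; rel=preload; as=style
Link: </app.js>; rel=preload; as=script

HTTP/1.1 200 OK
Content-Type: text/html
...
```

This recovers most of the round-trip savings of "packing everything together" without the cache-invalidation problems push had.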
https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CSP
```
<h3> hello $user </h3>
```
with $user being equal to `<script>/* sending your session cookie out, or the value of the tag #credit-card etc. */</script>`
You would be surprised how many template libraries that supposedly escape things for you are actually vulnerable to this, so "React escapes for me" is not something you should rely on 100%. At a company I worked for, the common vulnerability found was
`<h3> {{ 'hello dear <strong>$user</strong>' | translate | unsafe }}`, with `unsafe` disabling the auto-escaping: people wanted the feature released, and working out how to translate a string intermixed with HTML was deemed too time-consuming.
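For reference, this is roughly all that stands between the page and that payload. A minimal escaping helper (a sketch; real template engines also handle attribute and URL contexts):

```javascript
// Minimal HTML-escaping helper. Order matters: '&' must be replaced first,
// otherwise the entities produced by the later replacements get double-escaped.
function escapeHtml(untrusted) {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// The injected <script> is rendered as inert text instead of executing.
const user = "<script>/* exfiltrate document.cookie */</script>";
console.log(`<h3> hello ${escapeHtml(user)} </h3>`);
```

The point of the anecdote above is that a single `unsafe`-style opt-out anywhere in the template pipeline bypasses this entirely, which is exactly the gap CSP covers.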
As for inline styles: a malicious style can hide elements so that you type a sensitive value into the wrong field, or load a background image (which 'pings' an attacker-controlled host).
With CSP activated, the vulnerability may still exist, but the JavaScript/style will not be executed/applied, so it's a safety net covering the 0.01% case of "somebody has found an exploit".
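A minimal policy along those lines might look like this (directive values illustrative):

```
Content-Security-Policy: default-src 'self'; script-src 'self'; style-src 'self'
```

With this policy, an injected inline `<script>` or `style` attribute is blocked even when the injection itself succeeds; to run your own inline code you would add nonces or hashes to `script-src` rather than falling back to 'unsafe-inline'.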
At our place we do abide by those rules, but we also use 3rd party components like Telerik/Kendo which require unsafe-inline for both scripts and styles. Sometimes you have no choice but to relax your security policy.
I absolutely agree with you. I've been very very keen on CSP for a long time, it feels SO good to know that that vector for exploiting vulnerabilities is plugged.
One thing that's very noticeable: It seems to block/break -a lot- of web extensions. Basically every error I see in Sentry is of the form of "X.js blocked" or "random script eval blocked", stuff that's all extension-related.
Do you mean people should be banned from inlining Google Analytics or Meta Pixel or Index Now or whatever, which makes a bunch of XHRs to who knows where? Absolutely!
But nerfing your own page performance just to make everything CSP-compliant is a fool's errand.
Here is the 9 year old bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1267027
And their extension store does not permit workarounds, even though they themselves have confirmed it's a bug.
For example I helped uBlock Origin out in 2022 when they ran into this: https://github.com/uBlockOrigin/uBlock-issues/issues/235#iss...
https://extensionworkshop.com/documentation/publish/add-on-p...
They said this was not allowed and removed it from the extension store.
I appreciate they had to move for other reasons, but I also really don't like the idea that the DevTools and the browser chrome itself now have all of the same security issues/considerations as anything else "web" does. It was bad with Electron (an XSS suddenly becoming an RCE) and makes me pretty nervous here too :(
XUL was in many ways always a ticking time bomb.
That would keep "static form" helpers functional, but disable (malicious) runtime templating.
I recently implemented a couple of tools to generate[1] and validate[2] a CSP. I'd be glad if anybody tried them.
[1] https://www.csphero.com/csp-builder [2] https://www.csphero.com/csp-validator
Obviously hard to say what those tradeoffs are worth, but I'd be a bit nervous about it. The work covered by this post is a good thing, of course!
Why? Some sites implement then break this, sadly.
I have extremely locked down instances for banks and so on. On Linux I have an icon which lets me easily launch those extra profiles.
I also use user.js, which means I can just drop in changes, and write comments for each config line, and keep it version controlled too. Great for cloning to other devices too.
It is also designed to be optional (the "never break the web" mentality), so what happens in practice is the same as with CORS: allow everything, because web devs don't understand what to do and don't have time to read the RFC.
For example: try getting a web page to run that uses a web assembly binary _and_ an external JS library. Come back after 2 weeks of debugging and let me know what your experience was like, and why you eventually gave up on it.
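If memory serves, the sticking point is usually that compiling WebAssembly is gated behind a CSP source keyword of its own. A policy that typically unblocks this combination, assuming the JS library is served from your own origin, is:

```
Content-Security-Policy: script-src 'self' 'wasm-unsafe-eval'
```

'wasm-unsafe-eval' permits Wasm compilation without opening up JS eval(); older browsers that predate the keyword required the much broader 'unsafe-eval' instead, which is part of why the debugging experience is so miserable.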