Jampack is a post-processing tool that takes the output of your Static Site Generator (SSG) and optimizes it for the best user experience and the best Core Web Vitals scores.
As of today it can:
- Optimize local images, CDN images or external images
- Optimize above-the-fold vs below-the-fold
- Limit images max width
- Inline critical CSS
- Prefetch links on scroll
- Improve browser compatibility
- Auto-fix HTML issues
- Warn about HTML accessibility issues
- Compress all assets at the end
It processes the static output directly, so it's compatible with any SSG or framework. We use it intensively as a post-processing step for our Astro websites, for example.
With Jampack, we end up focusing more on how simple, readable and maintainable our code is, throwing in images of any size and letting it optimize for maximum performance.
We hope this can be helpful to a lot of people! Cheers, Georges and the ‹div›RIOTS team!
I ran Jampack after building my Quarto static site and got a 32% smaller output folder with no noticeable drawbacks so far. Here are my PageSpeed Insights metrics before and after Jampack:
Before Jampack:
- mobile: 52 Performance, 73 Accessibility, 100 Best Practices, 85 SEO
- desktop: 90 Performance, 75 Accessibility, 100 Best Practices, 82 SEO
After Jampack:
- mobile: 49 Performance, 80 Accessibility, 100 Best Practices, 92 SEO
- desktop: 85 Performance, 82 Accessibility, 100 Best Practices, 91 SEO
The GitHub repository is archived: https://github.com/apache/incubator-pagespeed-ngx.
Or has it perhaps moved to some other location?
It only has one extra commit, which deletes the RETIRED.txt file that was added in the repo you linked.
So at the moment it seems someone intends to continue development, and might even be working on it, but hasn't pushed any work to the main branch.
Any unimpressed commenters willing to point out any defects? To me this looks like the equivalent of compiling C to super-optimized assembly, and it's definitely doing things that I wouldn't want to do myself.
If we really have to ship super-optimized assembly, then I would skip HTML and CSS altogether and just ship highly optimized WebAssembly and let developers use whatever language they want.
You know, I don't disagree. In fact I am 100% with you; it's just that we have to work with the realities in front of us. I'd think that nowadays we (as in, the broader tech community) know how we would do HTML+CSS much better from scratch, but that is never happening -- I think we all know it.
So the next best thing in my view is (when it comes to generating static pages, that is -- I'm not sure how viable this tool would be in a pipeline that improves all your dynamically generated HTML; it would have to be hyper-optimized in order not to get in the way, though an nginx/Caddy plugin could also work):
1. Write Markdown or something else that's easier on the eyes and fingers;
2. Write your own CSS or use a theme from your SSG software;
3. Generate the HTML with your SSG software;
4. (NEW STEP) Run it through JamPack;
5. Deploy.
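For an npm-based setup, step 4 can be a one-liner chained onto the build script. A sketch in package.json, assuming an Astro site that builds to ./dist (the output directory is an assumption; adjust for your SSG):

```json
{
  "scripts": {
    "build": "astro build && npx @divriots/jampack ./dist"
  }
}
```
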
Personally, as a techie, I want my pages to score 100/100 on being lightweight, speedy and small. That includes things like not loading images until they enter the viewport, using all sorts of OS- and browser-specific hackery for faster loading, the fastest time to first contentful paint... everything that can be used in terms of tricks, in fact.
So again, I get your point and I really wish we lived in that reality but at one point I stopped believing that we ever will so I am trying to settle for the second best thing.
I would like to do the subset font optimization. I'm just not sure how much of an improvement it's going to be. Have you done it manually before?
I was talking in the context of variable fonts, which come with lots of features mapped to OpenType tags and axes. The font-feature-settings property selects (or activates) those features, usually at a few CSS selector levels. Variable fonts can be trimmed by freezing those features. I did something like this with Fira Code a few years ago. [1]
> I would like to do the subset font optimization. I'm just not sure how much of an improvement it's going to be. Have you done it manually before?
It can be quite an improvement for fonts like Inter, which ships with a massive number of glyphs to support different languages. [2] Doing this manually is a huge pain. Zach Leatherman created a tool called Glyphhanger to automate some of the use cases. [3]
[1]: https://github.com/naiyerasif/FiraSourceMono
[2]: https://paulcalvano.com/2024-02-16-identifying-font-subsetti...
I was hoping there was some principled way of identifying critical and non-critical CSS (e.g. user-interaction effects like :hover would always be considered non-critical), but it looks like the library it uses just tries to render your page and does a best-effort detection of which rules count as critical, which IMO is a little unsatisfying: https://github.com/GoogleChromeLabs/critters
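For contrast, the "principled" first cut is easy to sketch without rendering anything: a selector gated on user interaction can never affect the first paint, so it is non-critical by definition. A toy classifier along those lines (the pseudo-class list is my own guess at a reasonable set; this is not what Critters does):

```javascript
// Heuristic split of CSS selectors into critical / non-critical,
// without rendering: interaction-only selectors cannot affect first paint.
const INTERACTION = [":hover", ":focus", ":active"];

function isInteractionOnly(selector) {
  return INTERACTION.some((pc) => selector.includes(pc));
}

function splitSelectors(selectors) {
  const critical = [];
  const deferred = [];
  for (const s of selectors) {
    (isInteractionOnly(s) ? deferred : critical).push(s);
  }
  return { critical, deferred };
}
```

A full solution would still need rendering for the remaining rules (you can't know what's above the fold statically), which is presumably why Critters takes the best-effort route.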
Seriously, inlining is absurdly good for performance, even compared with a warm cache, and the threshold where external stylesheets or scripts perform better is surprisingly high, into the hundreds of kilobytes for some common markets.
The notion of critical CSS… it’s a defeatist attitude, trying to grasp back some squandered performance, rather than fixing the underlying problem.
I regret to say this is just based on casual experience and observation, not any methodical technique. I would really like someone to run with this concept and measure it more fully. I just doubt it’s going to be me.
Is there a reason for preferring them as a separate post-build step? I guess the tradeoff is faster rebuilds when you're developing vs the possibility that you miss subtle bugs resulting from introducing stuff like width declarations for images?
However, this does inline aspect ratios, so the layout won't shift after the images finish loading -- layout shift being the cardinal sin of lazy loading.
10+ years ago I wrote a GreaseMonkey userscript for a dating site that collected all your results into one scrollable table. It exercised very explicit control over the image-loading sequence. If I recall (and my memory might be spotty), it initially grabbed one thumbnail image per match, in the order they were displayed, to populate the table. Once all the thumbnails were pulled (which didn't take very long), it would start downloading full-sized images in the background. Hovering over a match to view more details would immediately prioritize that match's photos, and if you hovered over one of said match's photo thumbnails (as if to click), that specific image would be placed at the top of the queue.
It was all done using JavaScript (no frameworks) and XMLHttpRequest, and worked pretty well. This was back when servers only allowed you a couple (or a handful) of connections at a time. I wrote a "TaskQueue" class in JavaScript implementing a very simple form of cooperative multitasking (jobs designed to do their work in chunks). Tasks could "preempt" others, and you could define simple relationships so a group of tasks could block on one they depended on.
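The core of such a queue is straightforward to reconstruct. A minimal sketch of the idea, assuming each task is an async function and "preemption" just means moving a pending task to the front of the queue (names and structure are mine, not the original script's):

```javascript
// Minimal cooperative task queue: tasks run one at a time in queue order,
// and bump() reprioritizes a not-yet-started task to the front --
// e.g. when the user hovers over a thumbnail.
class TaskQueue {
  constructor() {
    this.pending = []; // [{ id, run }]
    this.running = false;
  }

  add(id, run) {
    this.pending.push({ id, run });
    this.drain();
  }

  // Move a pending task to the front of the queue.
  bump(id) {
    const i = this.pending.findIndex((t) => t.id === id);
    if (i > 0) this.pending.unshift(...this.pending.splice(i, 1));
  }

  async drain() {
    if (this.running) return;
    this.running = true;
    while (this.pending.length) {
      const task = this.pending.shift();
      await task.run(); // each chunk of work yields control back here
    }
    this.running = false;
  }
}
```

Usage would be to add one download task per thumbnail in display order, then call `bump("some-match-id")` from a hover handler to pull that match's images ahead of the rest.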
Funny story, I actually sent a link to the tool to a girl on the site who I eventually wound up in a long-term relationship with ("here, let me help you make it more efficient to browse for other guys...").
Anyway, at the time I felt like it was table stakes that a page like this should be able to exert some sensible control over the sequencing and prioritization of its image assets, in a fashion that has the user's best interests at heart. I'm glad browsers have finally evolved out-of-the-box attributes like loading="lazy", but I kind of wish there were an option between "eager" and "lazy" that simply deferred the lazy content until all the other content is done. That way I can still walk away to get a coffee (or switch to another tab) and come back to a fully loaded, instantly responsive page.
There is currently no way to change this behavior, but I could add an option to preload the below-the-fold images in the background once the whole page is loaded. It's actually a pretty nice idea.
I'm just afraid it would load unnecessary images at the bottom of the page, but if it's an option, anybody can choose whether to use it!
(Case in point: yesterday I spent all day trying to follow examples to turn a Divjoy React website into simple HTML to serve from an S3 bucket. I can't believe how hard that was, and I am still struggling. Ideally I'd want something that can just auto-deploy to an S3 bucket and point domains to it. It hurts that I paid money for it and the developer is gone; the Discord is abandoned. This is why I always prefer FOSS.)
It looks like a great idea and would probably help if I had any images in my project.
My site consists of a project grid (large thumbnails) and project pages with hi-res (3200x2000) images in a JS slideshow I wrote.
Not my site, but mine (in progress) is almost exactly the same UX/UI but with zero dependencies or libraries/frameworks.
So not sure about that one.
Also some svg images went missing.
[Edit] A little bit more tuning, 93/92/100/100
There were many errors; I'll try again later and send you the vitals + errors.
[Edit] Generally I would love this.
> Lazy-load assets below-the-fold ⬇.
Ok so that seals it - this chases scores over actual user experience.
Can I expect it to run out of the box for Next.js static-generated files? If so, any recommendations on how to set it up in a Next.js project?
If the site is 100% static then it should play nice. If it's hydrated with JS: results may vary. Let me know!
docker run -v ${PWD}:/dist node npx @divriots/jampack /dist