My experience with Next.js has been that it’s like working with Struts or Enterprise JavaBeans or something. It’s a giant, batteries-included framework whose primary purpose is to lock you into the Vercel ecosystem. There are some bright spots - next-auth is decent - but all this SSR spaghetti with leaky abstractions dripping out (try rendering emails in the backend of a Next.js app) makes it really not worth it.
Also, compile times when working on it locally can get astronomical with relatively modest applications, for no obvious reason. Like Scala, but less predictable.
I suspect the percentage of apps built with Next.js that genuinely get a lot out of SSR is tiny; it’s mostly the darling of B2B SaaS companies.
> Compiled /api/myroute/ in 797ms (3674 modules)
> Compiled (3680 modules)
It compiles when nothing has changed.
I recently added some benchmarks to the TechEmpower Web Framework Benchmarks suite, and Next.js ranked near dead last, even for a simple JSON API endpoint (i.e. no React SSR involved): https://www.techempower.com/benchmarks/#section=data-r23&hw=...
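For context, the TechEmpower JSON test is essentially this shape: a handler that serializes a tiny object, with no React anywhere. A minimal Next.js-style sketch (the route name and payload here are illustrative, not the actual benchmark submission):

```javascript
// pages/api/json.js — minimal Next.js-style API route (illustrative).
// No SSR, no React: just serializing a small object to JSON.
function handler(req, res) {
  res.status(200).json({ message: 'Hello, World!' });
}

module.exports = handler; // in a real Next.js app: `export default handler`
```

That a handler this small still routes through thousands of modules is the surprising part.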
I discussed it with a couple of Next.js maintainers (https://github.com/vercel/next.js/discussions/75930), and they indicated that it's only a problem for "standalone" deployments (i.e. not on Vercel). However, I'm not entirely convinced that is true. I wonder if there are major optimizations that could be made to, for example, the routing system.
The fix is clearly to take out your wallet /not.
Next.js is slow. It doesn't have to be. With SWC, Turborepo, and everything else they have going on, they could just as well have made actual production usage fast.
> Next has been a nightmare to use every step of the way.
Next.js tech is interesting. Unfortunately, their business model relies on self-hosted integration being a nightmare, and thus on your complete reliance on their platform for deployment.
I'd be really concerned to have to work around a performance limitation like that in a more complex app.
I could not disagree more. Why does a "modern" website that just has simple static text and images need to be tens of times larger/slower to load than a simple static website with plain old HTML and CSS?
What kind of "developer experience" do you need for a static website? Just write HTML or markup and run it on a local server with hot reload -- what more do you want/need? Specifically what use cases is NextJS satisfying here?
So at most it is the actual dynamic content being served, i.e. API calls.
Now, whether Next.js's bad deployment experience on your own hardware is to blame for the complexity of doing that easily is up for debate.
As much as I dislike the accidental complexity of React and its "frameworks" (react-router/Next), I can deploy a good-looking site with good accessibility very quickly using it, and every JS bootcamp dev won't be completely lost if they happen to need to work on the project.
Sometimes the technically best decision is decided by non-technical factors.
In an earlier section, there was a statement about how there's something like 60 individual requests in a webpage load, so the "90% less" (1/10th speed) could actually be faster overall.
Also worth investigating is how many concurrent requests a server can handle. If it's a little slower, but a single server can handle 5x the number of concurrent requests because most of the interaction is busy-waiting on something else, that could be worthwhile.
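That trade-off can be sketched with back-of-envelope numbers (all hypothetical, taken from the scenario above — not measurements):

```javascript
// Hypothetical scenario: each request is a little slower, but one server
// sustains 5x the concurrency because handlers mostly wait on I/O.
const perRequestSlowdown = 1.2; // "a little slower" — assumed 20% here
const concurrencyGain = 5;      // 5x concurrent requests per server

// If the workload is concurrency-bound, relative throughput is roughly:
const relativeThroughput = concurrencyGain / perRequestSlowdown;
console.log(relativeThroughput.toFixed(2)); // prints "4.17"
```

So under those (made-up) numbers, the slower-per-request server still pushes ~4x the requests — which is why per-request latency alone doesn't settle the argument.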
Given how many job openings seem to be interested in Next.js and/or anything 100% Javascript, it seems like some parts of the industry are pushing all JS all the time; but it seems like maybe that's not the right route to go, and that _also_ is interesting.
Just interesting things all around :)
They call the pregenerated static HTML "SSG" or "Static Site Generation"
That should be fast as hell: basically a CDN job.
Every image has always been a separate request, and video, which is chunked, is multiple requests.
Unless you are deploying an unstyled HTML page with no media, the server isn't serving just one request, even for old-school web pages.
The big convenience that SSR brings for most of my use cases is querying the CMS for changes and getting instant updates on the frontend, without having to statically rebuild the HTML every time content is updated. That approach allowed me to run SSR without all the performance issues it would otherwise bring.
I don’t have exact numbers but I can provide them if anyone’s interested.
That is especially true when comparing a site built with Rails + Hotwire vs. others using React / client-side rendering.
OK, but remember that browsers reuse HTTP connections, HTTP/2 introduces multiplexing, and browsers have tight mode. So you can't just take the figures for one request and divide by 60 to get real-world performance.
The article doesn't dig too deep into this sadly, rather just accepts that it's slow and uses a different tool.
But seriously, this is responding to a request with the contents of a file. How can it be 100,000x slower than the naive solution? What can it possibly be doing, and why is it maxing out the CPU?
If no-one else looks into this, I might get around to it.
I ran a script once that would show an archived copy for links that stopped working. Then I ended up hosting/stealing most of someone's website, which had moved to a different domain. The concept was nice, though.
This way only the first request hits your actual server, the rest is handled for you.
For shits and giggles, you should try compiling to completely static files, then host them with nginx.
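Concretely, recent Next.js versions can do that with a static export — a sketch, assuming Next.js 13.3+ (older versions used `next build && next export` instead):

```javascript
// next.config.js — ask Next.js for a fully static export. After
// `next build`, plain HTML/CSS/JS lands in ./out, which nginx (or any
// CDN/static host) can serve directly with no Node process at all.
module.exports = {
  output: 'export',
};
```

Then point nginx's `root` at the exported `out/` directory and let `try_files` resolve each path to its `.html` file — at which point the comparison really is "framework runtime vs. sendfile".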