For starters, there's no official way to chain multiple middlewares. If you want to do multiple things, you either stuff it all into a single function or you have to implement the chaining logic yourself. Worse, the main functions (next, redirect, rewrite, ...) are static members on an imported object. This means that if you use third party middlewares, they will just automatically do the wrong thing and break your chaining functionality.
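For illustration, here's roughly what that hand-rolled chaining logic ends up looking like (a generic sketch, not an official Next.js API; all the types and names here are invented):

```typescript
// A middleware either short-circuits by returning a response,
// or calls next() to pass control down the chain.
type Req = { path: string };
type Res = { status: number };
type Middleware = (req: Req, next: () => Res) => Res;

// Compose middlewares into a single handler.
function chain(middlewares: Middleware[], final: () => Res): (req: Req) => Res {
  return (req) => {
    const run = (i: number): Res =>
      i < middlewares.length ? middlewares[i](req, () => run(i + 1)) : final();
    return run(0);
  };
}

const logger: Middleware = (req, next) => {
  console.log(`-> ${req.path}`);
  return next();
};

const authGate: Middleware = (req, next) =>
  req.path.startsWith("/admin") ? { status: 401 } : next();

const handler = chain([logger, authGate], () => ({ status: 200 }));
console.log(handler({ path: "/admin/users" }).status); // 401
console.log(handler({ path: "/home" }).status);        // 200
```

This is the boilerplate every project ends up re-implementing (or pulling from a third-party package) because the framework doesn't ship it.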
Then, there's no good way to communicate between the middleware and the route handlers. Funnily enough, the only approach that worked was to stuff data into headers and then retrieve it through headers(). If someone knows your internal header names, this could be very unsafe.
One additional issue with that is that headers() turns your route handler into a dynamic one. This opts you out of automatic caching. I think they recently gave up on this entirely, but this was the second biggest feature of Next 14 and you lost it because you needed data from the middleware ...
And lastly it still hides information from you. For whatever reason request.hostname is always localhost. Along with some other properties that you might need being obfuscated. If you really wanted to get the actual hostname you needed to grab it out of the "Host" header.
I'm not really surprised that the header/middleware system is insecure.
No way to communicate information from middleware to requests means people encode JSON objects into text and add it as a header to be accessed from requests using headers(). They put session/auth info in there.
I would never recommend the framework to anyone on this basis alone.
What a joke.
With that said, I do agree that nextjs middleware is trash. My main issue with it is that I never use nextjs on vercel, always on node, but I'm still limited in what I can use in middleware because they're supposed to be edge-safe. Eye roll. They are apparently remedying this, but this sort of thing is typical for next.
I also don't think every other framework has the exact same issues. Take a look at SvelteKit for example.
You can add data from the middleware/hook into a locals object (https://svelte.dev/docs/kit/hooks#Server-hooks-locals). This is request scoped and accessible from the route handlers when needed. It also supports type definitions (https://svelte.dev/docs/kit/types#Locals). I wouldn't call this brittle. It's just dependency injection.
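The pattern is easy to sketch outside of any framework. This is a generic, self-contained approximation of the locals idea (the types and names are invented for illustration; it's not SvelteKit's actual implementation):

```typescript
// A request-scoped "locals" object: the hook populates it,
// route handlers read it. No headers involved.
type Locals = { user?: { id: string } };
type Event = { url: string; locals: Locals };
type Handler = (event: Event) => string;

// The hook runs first and injects data into event.locals,
// e.g. a user looked up from a session cookie.
function handle(event: Event, resolve: Handler): string {
  event.locals.user = { id: "alice" };
  return resolve(event);
}

// A route handler reads the injected data with full typing.
const route: Handler = (event) =>
  event.locals.user ? `hello ${event.locals.user.id}` : "anonymous";

const response = handle({ url: "/profile", locals: {} }, route);
console.log(response); // "hello alice"
```

Because locals lives in memory and is scoped to one request, there's no header to spoof and nothing for a client to inject.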
Note that it doesn't explicitly support multiple middlewares either (well, sort of; there's https://svelte.dev/docs/kit/faq#How-do-I-use-middleware but I think you're meant to be using hooks for your code https://svelte.dev/docs/kit/hooks#Server-hooks-handle), but at least it's easy to use and doesn't intentionally try to obfuscate information from you.
Edit: It seems that at some point sequence (https://svelte.dev/docs/kit/@sveltejs-kit-hooks#sequence) got added, so disregard the paragraph above.
I haven't kept up with Next.js idioms, but in general that's what middleware is for. It's implied in the name. Middleware chaining is a common idiom.
It's little details like Next.js middleware intercommunicating over HTTP headers (?!) that make it a different pattern.
Dotnet has no problem with that when using Minimal APIs.
https://zeropath.com/blog/nextjs-middleware-cve-2025-29927-a...
This looks trivially easy to bypass.
More generally, the entire concept of using middleware which communicates using the same mechanism that is also used for untrusted user input seems pretty wild to me. It divorces the place you need to write code for user request validation (as soon as the user request arrives) from the middleware itself.
Allowing ANY headers from the user except a whitelisted subset also seems like an accident waiting to happen. I think the mindset of ignoring unknown/invalid parts of a request as long as some of it is valid also plays a role.
The framework providing crutches for bad server design is also a consequence of this mindset - are there any concrete use cases where the flow for processing a request should not be a DAG? Allowing recursive requests across authentication boundaries seems like a problem waiting to happen as well.
That's basically the same way phone phreaking worked back in the day. Time is a flat circle.
Relevant parallel to this is the x-forwarded-for header and (mis)trusting it for authz.
This seems like a consequence of Vercel pushing that weird "middleware runs on edge functions" thing on NextJS, and b/c they are sandboxed they have no access to in-memory request state so the only way they can communicate w/ the rest of the framework is via in-band mechanisms like headers.
Is that a fair characterization?
(the fix was to add a random string as another header then checking to make sure it's still there afterwards, effectively an auth token: https://github.com/vercel/next.js/pull/77201/files )
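The idea behind that fix can be sketched like this (an illustrative approximation, not the actual Next.js patch; the header name and helper here are invented): the server generates a random per-process secret, internal subrequests carry it in a header, and any incoming value that doesn't match is rejected.

```typescript
import { randomBytes, timingSafeEqual } from "node:crypto";

// Random per-process secret; an external client can't know it.
const SUBREQUEST_SECRET = randomBytes(16).toString("hex");

function isInternalSubrequest(headers: Map<string, string>): boolean {
  const value = headers.get("x-middleware-subrequest-id") ?? "";
  // Length check first so timingSafeEqual gets equal-length buffers.
  if (value.length !== SUBREQUEST_SECRET.length) return false;
  return timingSafeEqual(Buffer.from(value), Buffer.from(SUBREQUEST_SECRET));
}

// An attacker-supplied constant no longer works:
const spoofed = new Map([["x-middleware-subrequest-id", "true"]]);
console.log(isInternalSubrequest(spoofed)); // false

// Internal code attaches the real secret:
const internal = new Map([["x-middleware-subrequest-id", SUBREQUEST_SECRET]]);
console.log(isInternalSubrequest(internal)); // true
```

In other words: the "is this internal?" signal went from a guessable constant to an unguessable token, which is exactly what made the original design exploitable.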
Or if there is, and I've somehow missed it, please *please* share it with me.
(I think the answer is because of the "requirement" that middleware be run out-of-process as Vercel edge functions.)
(An abandoned spec is at https://w3c.github.io/web-performance/specs/HAR/Overview.htm...)
I'm going to disagree on this. Browsers and ISPs have a long history of adding random headers, a website can't possibly function while throwing an error for any unknown header. That's just the way HTTP works.
This is clearly a case of the Next devs being silly. At a minimum they should have gone with something like `-vercel-` as the prefix instead of the standard `x-` so that firewalls could easily filter out the requests with a wildcard.
1) Plain HTTP, go wild with headers. No system should have any authenticated services on this.
2) HTTP with integrity provided by a transport layer (so HTTPS, but also HTTP over Wireguard etc for example). All headers are untrusted input, accept only a whitelisted subset.
With this framing, I don’t think it’s unreasonable for a given service to make the determination of which behaviour to allow.
I guess browser headers are still a problem. But you can get most of the way by dropping them at the request boundary before forwarding the request.
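Dropping such headers at the boundary is only a few lines in practice. A minimal sketch (the prefixes here are illustrative, not an official list):

```typescript
// Strip internal-only headers from client requests before forwarding,
// so in-band signals can never arrive from the outside.
const INTERNAL_PREFIXES = ["x-middleware-", "x-internal-"];

function stripInternalHeaders(
  headers: Record<string, string>
): Record<string, string> {
  return Object.fromEntries(
    Object.entries(headers).filter(
      ([name]) =>
        !INTERNAL_PREFIXES.some((p) => name.toLowerCase().startsWith(p))
    )
  );
}

const incoming = {
  host: "example.com",
  "x-middleware-subrequest": "true",
  accept: "text/html",
};
console.log(stripInternalHeaders(incoming));
// { host: "example.com", accept: "text/html" }
```

This is the sort of thing a reverse proxy or the framework itself should do unconditionally at the request boundary.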
I was made aware recently of a vulnerability that was fixed by this patch: https://github.com/vercel/next.js/pull/73482/files
In this vulnerability, adding an 'x-middleware-rewrite: https://www.example.com' header would cause the server to respond with the contents of example.com, i.e. the world's dumbest SSRF.
Note that there is no CVE for this vulnerability, nor is there any clear information about which versions are affected.
Also note that according to the published support policy for nextjs only "stable" (15.2.x) and "canary" (15.3.x) receive patches. But for the vulnerability reported here they are releasing patches for 14.x and 13.x apparently?
https://github.com/vercel/next.js/blob/canary/contributing/r...
IMO you are playing with fire using Next.js for anything where you care about security and maintenance. Which seems insane for a project with 130k+ GitHub stars, backed by a major company like Vercel.
https://clerk.com/changelog/2024-02-02#:~:text=Our%20solutio...
At first read that sounds very reasonable! But then you realize that not all vulnerabilities got a security advisory...
2025-02-27T06:03Z: Disclosure to Next.js team via GitHub private vulnerability reporting
2025-03-14T17:13Z: Next.js team started triaging the report
They didn't spend 2 weeks making a fix, that took a few hours. It took them two weeks to look at the report.
This is probably the most important comment. You don't have to use Next.js, and if you do have to, you don't have to use everything they have in it.
This has always been an issue with Vercel. I highly recommend people stay way from their stuff.
If you start looking at big corps, you will very quickly find instances of fairly severe vulns that sit for months before a fix is issued.
(I'm assuming "started triaging" actually means they worked on a fix. If they didn't even respond to the reporter for 2 weeks, that is kind of bad.)
That's how zero day exploits work. People keep it quiet so they can keep exploiting it.
React added a lot of complexity to the front end, but, for an app with a lot of front end state, brought a ton of value.
Next brings us file-based routing, which seems cool until you get into any sort of mildly complex use case, and (if you're careful and don't fuck it up) server-side rendering, which I guess is cool if you're building an e-commerce product, and is maybe cool for a few other verticals?
I keep hearing this but I disagree completely. Does no one remember Angular.js? Backbone? Ember.js? Even my favorite framework, Knockout, had lots of complexity.
SSR has been misused widely for years and we’re now starting to see the effects of that. But there ARE great use cases for SSR.
And frontend dev is the easiest it’s ever been. Run Vite's create command and you have a fully working React SPA that can be deployed in minutes on Render.com. No more messing with Webpack, or Bower, or Broccoli, or Gulp or Grunt or whatever madness came before. Frontend dev is in the best place it’s been in years.
You're using a different frame of reference. Those people you're referring to, including gp, probably mean that frameworks add complexity to the frontend. That would include all the ones you listed.
If you're looking for something simpler that's closer to Next's original premise, Remix.js is awesome and much lighter.
They got their start way back with React-Router. At the time, their business was React Training. They’d train people how to use React. React Router had this curious tendency to change its API drastically with each release. Stuff you depended on would suddenly go away, and you’d be told “That’s not the right way to build apps anymore. This is the True Way.” It really sucked, but it seemed like a good way to drive demand for training.
Then they came up with Remix. Remix has been pretty stable, but when looking at React Router, I kept noticing there was stuff that felt more like an app framework than a router. It felt like it’s pulling me into Remix. Then last year they announced that they’re merging Remix and React Router. So if I was already dependent on React Router, I’d be fully committed to Remix, whether I wanted to be or not.
What new shiny thing or new business model will they be chasing next year? I’m not willing to risk finding out.
The exploit involves crafting HTTP requests containing the malicious header:
    GET /protected-route HTTP/1.1
    Host: vulnerable-app.com
    x-middleware-subrequest: true
So... just adding a "x-middleware-subrequest: true" header bypasses auth? Am I understanding this correctly?
correct.
That is how serious this bypass is, and why it is severity 9.1 (I think it should be a 9.8, since it's trivially exploitable by adding a single header).

So to confirm: where does this middleware run?
Is this not access control?
BTW people are debating whether middleware should be used for auth, and while I don't like this pattern, it is the adopted pattern for the app router in Next.js, and services like Clerk and Supabase use it heavily.
Are they saying they had a special flag that allowed requests to bypass auth, intended to be used by calls generated internally?
And someone figured out you could just send that on the first request and skip auth entirely?
If I have that right, this is a security review failure, since people perennially try that optimization and it ends poorly for reasons like this. It’s safer, and almost always less work, to treat all calls equally and optimize if needed, rather than supporting an “internal” call type over the same interface.
The more comments I read about it in HN, the less comfortable I feel about this decision.
Next.js is more than fine for 99% of web apps, and the fit only gets better the bigger your web app/platform gets. In general it's probably the framework that will give you the most bang for your buck.
Next.js is a bad choice for a lot of apps, javascript is slow at a lot of things.
Next.js would be a terrible choice for any app that has any non-trivial compute, for example.
Never buy the hype.
Buy boring and tested.
I recommend finding something else. In our case we moved that code to what is now react router 7 but eventually all the react code we have will get replaced by Vue in some manner. We mostly moved away from react as a whole over time
The security posture for the code running in the browser is very different from the code running on a trusted backend.
A separation of concerns allows one to have two codebases, one frontend (untrustworthy but limited access) and one backend (trustworthy but a lot of access).
In this case, it’d also be interesting to figure out what a fix would look like in that model. You could have some way for a type-checker to tell the requests apart, such as a union type for Client|Edge|Server requests, but you’d need a way to assert that the header couldn’t be present on all of them, which suggests the real problem is using in-band signaling. It seems like a solid argument for type-checking, since making the relationship clear enough to validate also makes it harder for humans to screw up.
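A minimal sketch of that union-type idea (all names invented for illustration): "internal" becomes a property of how the value was constructed, not of any header a client can set.

```typescript
// Model request origin in the type system.
type ClientRequest = { kind: "client"; headers: Record<string, string> };
type EdgeRequest   = { kind: "edge";   headers: Record<string, string> };
type ServerRequest = { kind: "server"; headers: Record<string, string> };
type AppRequest = ClientRequest | EdgeRequest | ServerRequest;

// The network boundary may only ever construct client requests,
// no matter what headers the request carries.
function fromNetwork(headers: Record<string, string>): ClientRequest {
  return { kind: "client", headers };
}

// Trust is decided by the discriminant, not by header contents.
function isInternal(req: AppRequest): boolean {
  return req.kind !== "client";
}

const attacker = fromNetwork({ "x-middleware-subrequest": "true" });
console.log(isInternal(attacker)); // false — the header is irrelevant
```

With this shape, the only way to produce an Edge or Server request is through trusted internal constructors, so the in-band spoofing class of bug can't be expressed.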
I’d rather just factor common logic into a function and call it in the handler for every route that needs it. Boring, repetitive - but easy to understand and debug.
It probably is a good idea to have some kind of thin middleware layer that adds an extra layer of auth protection, so that it’s more difficult to accidentally do something like allowing access to /api routes for users that aren’t logged in. But for reasons that are obvious in this context, you should never rely entirely on URL-based logic to protect access to resources.
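A sketch of that boring-but-explicit pattern (names are illustrative): a shared helper invoked inside every protected handler, so the auth check lives next to the resource it guards rather than in URL-matching middleware.

```typescript
type Session = { userId: string } | null;

// Shared auth check, called explicitly by each handler that needs it.
function requireUser(session: Session): { userId: string } {
  if (!session) throw new Error("401 Unauthorized");
  return session;
}

function getProfile(session: Session): string {
  const user = requireUser(session); // the check is visible in the handler
  return `profile of ${user.userId}`;
}

console.log(getProfile({ userId: "alice" })); // "profile of alice"
try {
  getProfile(null);
} catch (e) {
  console.log((e as Error).message); // "401 Unauthorized"
}
```

Repetitive, yes, but a missing call is a local, greppable bug, not a routing-table mismatch discovered in production.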
I hope Next's downfall sends a signal to the quality lib maintainers and changes their direction (e.g. Remix and its f'd up router, TanStack w/ Start).
SSR frameworks make me vomit.
The stuff you mention was “born” at the backend and was then used to render frontend html at the very beginning then css, then JS etc…, but going from a frontend framework like React to the backend is an entirely different beast.
"Drink this soda pop and see, you have pretty friends and look how happy you are!"
"Drive this car and be a successful business person and have a great house and family!"
"Use NextJS and be as successful and popular as those tech bros at X and YT".
Those frameworks have some small use cases (e-commerce, semi dynamic content delivered to low end devices with lots of JS later on for analytics), but most of the time old school SSR (RoR, Django, ASP.NET MVC, ...) or SPA (Vue with Vue router) are the more appropriate solutions.
Hype driven development is a very real thing.
You just add a plugin to Vite and gain SSR, streaming, server functions, and API routes with minimal configuration. You basically just add an ssr.tsx and a client.tsx file to your existing TanStack Router application and it becomes full stack with full type safety.
If you want to go back to a React SPA just remove the plugin and config it back to SPA.
I built an app with it recently and it has an amazing DX.
Best part you can literally run it anywhere. It builds for any platform with a single configuration.
Hypes up AI coding, hypes up AI for security in particular, then immediately faceplants onto a critical auth bypass.
Add a single header 'x-middleware-subrequest' and it allows you to completely bypass any self-hosted Next.js middleware, including authorization.
This is beyond damning.
It's also exactly the reason why the whole Javascript ecosystem is really showing how immature it is and the hype and euphoria of Vercel is contributing to its clumsiness.
They are now also pushing "Vibe Coding", which is a hot air hype parade, about to be brutally hit with reality when others are deploying production code that is riddled with hundreds of security vulnerabilities.
A delightful golden age for professional security researchers.
Absolutely agree.
> It's also exactly the reason why the whole Javascript ecosystem is really showing how immature it is and the hype and euphoria of Vercel is contributing to its clumsiness.
I would hardly say the whole JS ecosystem is immature. There's tons of mature projects that take security very seriously and are written by highly skilled programmers.
> They are now also pushing "Vibe Coding", which is a hot air hype parade, about to be brutally hit with reality when others are deploying production code that is riddled with hundreds of security vulnerabilities
There are certainly many fresh programmers entering the ecosystem and "vibe coding" among other hyped trends are able to ride that wave. It's pretty clear that those hyping it are either new themselves (don't know better), or cater to an audience of new programmers. Those in the latter group are doing it to farm engagement, and/or are really out of touch from what real software systems look like/require.
The silent majority of moderate to highly experienced JS programmers know that these LLMs produce shit code outside of boilerplate and small demos. It's very easy to tell if you try to use them on anything else.
It is concerning on many levels though that new programmers are being guided off a cliff like this. Programming influencers and companies advocating for "vibe coding" and the like should be called out for sabotaging the next generation of programmers.
I'd just use Koa and keep it simple.
However, there is one argument you could make regarding the massive amount of complexity which Next takes on trying to blur client and server execution. That’s prone to creating confusion around validation and control flow, which is a notorious source of security vulnerabilities, and it looks like this might be another one, as it appears to be related to how they try to transition from edge execution to server-side execution.
So it's less a Next-specific point than a recognition that poor architecture is an ongoing risk. This kind of approach has been tried repeatedly over the decades and has generally failed to deliver on its promises, because it only saves time when building out a quick demo. Once you have a real app, with multiple people working on it, you really want a clear definition of what runs where, because it’s much easier to reason about security, performance, and reliability if you don’t have layers of abstraction trying to pretend unlike things are alike.
My memory fails me - I can’t recall a vulnerability in the JVM ecosystem that allows an attacker to circumvent auth entirely with such trivial ease. Can you name an example?
Interviewer:
How do you handle vibe changes in vibe coding?
Candidate:
I can handle any type of vibe change.
Interviewer:
This is exactly what we are looking for.
I think this discussion is bringing a lot of unrelated angst out of the woodwork, beyond the level of rationality warranted.
I think it's rightful to be skeptical of Vercel’s incentives toward vendor lock-in, and of how long it took to deal with this vulnerability. That's all independent of most of what I’m reading here.
Is it the rest of the deploy infra? The vanilla app you can push to Heroku or any of its clones.
The deploy infrastructure is quite nice. Nextjs is surprisingly low config, even if you forego the Vercel deployment route it’s not difficult to generate a static site or docker container
It uses plain HTML, CSS, and JS for components (no React, Vue, Svelte, etc.—just simple components any skill-level can grok) in an easy-to-learn API and pairs that with a batteries-included Node.js back-end built on top of Express. The server automatically does old fashioned server-side rendering in routes (literally a callback function mapped to a URL pattern w/ req and res objects).
This is not "just another JS framework." I intentionally designed it to not behave like Next.js and other JS frameworks (I take "never trust the client" very, very seriously).
* Reported to the maintainers privately
* Patch published and CVE issued before wider disclosure
* Automated fix PRs created within minutes of public disclosure (and for folks doing proactive updates, before)
The above is _really_ excellent. Compare that to Log4j, which had no CVE and no patch at the time it became public knowledge, and it's clear we've come a long way.
Supply chain security isn't a solved problem - there's lots we can still improve, and not everything here was perfect. But hats off to @leerob and everyone else involved in handling a tough situation really well.