Kudos to the Vite maintainers!
It all slowly adds up: you think a simple $10 VPS with 2 GB of RAM is enough, but it's not, especially if you want a team of 10-30ish people working sporadically on the same box.
There are major wins to be had by rewriting these programs in more efficient languages like Go or Rust. It would make self-hosting more maintainable and break away from the consulting class that often ships worse solutions at far higher prices (for example, one consulting group sells software similar to Postiz but for $2k/month).
For a server built in the cloud, those cycles could actually be used by other things, freeing the system and bringing costs down.
For a client computer running Electron, as long as the user doesn't have so many Electron apps open that their computer slows down noticeably, that inefficiency might not matter much.
Another aspect is that devices keep getting cheaper and faster, so today's slow Electron app might run fine on hardware a few years from now, and that capacity was never going to be used by anything else on the end user's device.
at least 100x slower than it needs to be?
I feel the same way about tools like ESLint and Prettier as well after discovering Oxc https://oxc.rs/
Compare this to essentially any modern business app: the product being sold has very little relationship with CPU cycles, or the CPU cycles are so cheap relative to what you're being paid that no one cares to optimize.
With Kotlin/Spring Boot, compilation is annoyingly slow. That's what you get with modern languages and rich syntax. Apparently the Rust compiler isn't a speed demon either. But tests are something that's under your control. Unit tests should finish in seconds or milliseconds. Integration tests are where you can make huge gains if you are a bit smart.
Most integration tests are not thread safe and assume they run against an empty database. Which, if you think about it, is exactly the state no user except your very first one will ever see.
The fix for this is to 1) do no cleanup between tests, 2) randomize data so there are no collisions between tests, and 3) use multiple threads/processes to run your tests against one database that is provisioned before the suite and deleted after all tests.
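Point 2 can be as simple as this sketch (the naming is made up for illustration): give every test its own random identifiers so concurrent tests never fight over the same rows.

```typescript
// Each test creates rows under a unique random value, so tests can run in
// parallel against one shared database without colliding or needing cleanup.
import { randomUUID } from "node:crypto";

function uniqueEmail(prefix: string): string {
  return `${prefix}-${randomUUID()}@test.example`;
}

// Two tests creating "the" user each get different, non-conflicting rows:
const a = uniqueEmail("signup-test");
const b = uniqueEmail("signup-test");
```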
I have a fast MacBook Pro that runs our hundreds of Spring integration tests (proper end-to-end API tests with Redis, a DB, Elasticsearch, and no fakes/stubs) in under 40 seconds. It kind of doubles as a robustness and performance test. It's fast enough that I have Codex trigger it on principle after every change it makes.
There's a bit more to it of course (e.g. polling rather than sleeping for assertions, using timeouts on things that are eventually happening, etc.). But once you have set this up once, you'll never want to deal with sequentially running integration tests again. Having to run those over and over again just sucks the joy out of life.
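The "polling rather than sleeping" idea is roughly this (a generic sketch, not anyone's actual helper):

```typescript
// Poll a condition until it holds or a deadline passes, instead of sleeping
// for a fixed, hopefully-long-enough amount of time.
async function eventually(
  check: () => boolean | Promise<boolean>,
  timeoutMs = 5000,
  intervalMs = 50,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (true) {
    if (await check()) return;
    if (Date.now() > deadline) throw new Error("condition not met within timeout");
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

The assertion then succeeds as soon as the async side effect lands, rather than after a worst-case sleep.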
And with agentic coding tools having fast feedback loops is more critical than ever.
For anyone who doesn't know: with SQLite you can serialize the DB to a buffer and create a "new" DB from that buffer with just `new Database()`. Just run the migrations once on test initialization, serialize the migrated DB, and reuse it instantly for each test for amazing test isolation.
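A sketch of that pattern, assuming the better-sqlite3 package (which is what exposes `serialize()` and Buffer-based construction):

```javascript
// Migrate once, snapshot the migrated DB into a Buffer, then hand every test
// its own instant copy restored from that Buffer.
const Database = require("better-sqlite3");

// One-time suite setup: run migrations against an in-memory database.
const template = new Database(":memory:");
template.exec("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"); // stand-in for real migrations
const snapshot = template.serialize(); // Buffer containing the whole DB

// Per test: a fully isolated database, with no need to re-run migrations.
function freshDb() {
  return new Database(snapshot);
}
```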
Yea, cypress has this in their anti-patterns:
https://docs.cypress.io/app/core-concepts/best-practices#Usi...
Dangling state is useful for debugging when a test fails; you don't want to clean that up.
This has been a super useful practice in my experience. I really like being able to run tests regardless of my application state. It's faster, and over time it helps you hit and fix various issues that you only encounter once the database has enough data in it.
This is because the Kotlin compiler is not written the way people write fast compilers. It has almost no backend to speak of (if you are targeting the JVM), and yet it can be slower at compilation than gcc and clang with optimizations enabled.
Modern fast compilers follow an emerging pattern where AST nodes are identified by integers and stored in a standard traversal order in a flat array. This makes extremely efficient use of cache when performing repeated operations on the AST. The Carbon, Zig, and Jai compiler frontends are all written this way. The Kotlin compiler is written in a more object-oriented and functional style that involves a lot more pointer chasing and far less data-dense structures.
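A toy illustration of the flat-array idea (TypeScript just to show the shape; real compilers use packed structs, not objects):

```typescript
// Nodes live contiguously in one array and reference children by integer
// index instead of by pointer, so traversals stay cache-friendly.
type NodeKind = "lit" | "add";

interface FlatNode {
  kind: NodeKind;
  value: number; // literal value for "lit"; unused for "add"
  lhs: number;   // index of left child for "add"; -1 otherwise
  rhs: number;   // index of right child for "add"; -1 otherwise
}

// (1 + 2) + 3, stored in traversal order; index 0 is the root.
const nodes: FlatNode[] = [
  { kind: "add", value: 0, lhs: 1, rhs: 4 },
  { kind: "add", value: 0, lhs: 2, rhs: 3 },
  { kind: "lit", value: 1, lhs: -1, rhs: -1 },
  { kind: "lit", value: 2, lhs: -1, rhs: -1 },
  { kind: "lit", value: 3, lhs: -1, rhs: -1 },
];

function evalNode(i: number): number {
  const n = nodes[i];
  return n.kind === "lit" ? n.value : evalNode(n.lhs) + evalNode(n.rhs);
}
```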
Then, if run on a non-Graal environment, you also have to pay for interpreter and JIT warmup, which for short-lived tasks represents nontrivial overhead.
But unlike header inclusion or templates, which are language-level features with major effects on compilation time, I don't think Kotlin the language is inherently slow to compile.
Luckily, we have invented a completely new nightmare in the form of trying to graft machine-usable interfaces on top of AI models that were specifically designed to be used by humans.
Was it? Have you forgotten namespaces and enums?
That's no less a build step than concatenating, bundling, minifying, etc. When people say "I'm against processing code before deploying it to a web site" but then also say "TypeScript is okay though" or "JSX is okay though," all they're really saying is "I like some build steps but not others." Which is fine! Just say that!
You're not actually suggesting that technology can't evolve, are you? Especially one whose original design goal was to process basic forms and which is now used to build full-blown apps?
It's absolutely wild to me that with everything that has happened in the last 2 decades with regard to the web there are still people who refuse to accept that it's changed. We are building far bigger and more complex applications with JavaScript. What would you propose instead?
Not meant as a gotcha but I'm surprised because people always tout it as being so much faster than Next. (4m with Turbo would have to be a crazy huge app IME)
> Wasm SSR support: .wasm?init imports now work in SSR environments, expanding Vite's WebAssembly feature to server-side rendering.
While the process was relatively slow, I really appreciate the extra effort that the team has put into even this minor feature addition. They not only guided me toward a more compatible and idiomatic approach, but also added docs and helped keep the code up to date before merging.
I like Vite as a tool, but knowing that the Vite folks actually care about helping others learn and contribute is awesome.
Next started with Turbopack alpha as a Webpack alternative in Next 13 (October 2022) and finally marked Turbopack as stable and default in Next 16 (October 2025). They also ran sketchy benchmarks against Vite back in 2022 [0].
Next's caching has a terrible history [1], it is demonstrably slow [2] (HN discussion [3]), RSCs had glaring security holes [4], the app router continues to confuse and relied on preview tech for years, and hosting Next outside of Vercel requires a special adapter [5].
Choosing Next.js is a liability.
0 - https://github.com/yyx990803/vite-vs-next-turbo-hmr/discussi...
1 - https://nextjs.org/blog/our-journey-with-caching
2 - https://martijnhols.nl/blog/how-much-traffic-can-a-pre-rende...
3 - https://news.ycombinator.com/item?id=43277148
There are several much better options right now. My favourite is TanStack Start. No magic, great DX.
5 or so years ago, Next was a pretty solid option for quickly building up a non-SPA when combined with the static export function. It wasn't ideal, but it worked and came batteries-included. Over time it's become more bloated, more complicated, and focused on features that benefit from Vercel's hosting – and static builds can't take advantage of them.
These newer features seem of limited benefit to me, even for SPAs. Why is there still no first-class way of referencing API routes in client code that provides typing? Once you reach even medium scale, it becomes a mess of interpolated string paths and manually added shared response types.
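Something like the following sketch is what's missing (all names here are hypothetical): a shared route map that gives the client typed paths and response types.

```typescript
// Shared between server and client: paths and their response shapes.
interface UserDto {
  id: string;
  name: string;
}

type ApiRoutes = {
  "/api/users/:id": UserDto;
  "/api/users": UserDto[];
};

// Substitute :params into a path template.
function fillParams(path: string, params: Record<string, string>): string {
  return path.replace(/:(\w+)/g, (_, key) => params[key]);
}

// The client gets a typed response with no hand-written string interpolation.
async function typedFetch<P extends keyof ApiRoutes>(
  path: P,
  params: Record<string, string> = {},
): Promise<ApiRoutes[P]> {
  const res = await fetch(fillParams(path, params));
  return res.json() as Promise<ApiRoutes[P]>;
}
```

tRPC and similar libraries bolt this on, but it isn't built into Next's API routes.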
JSX is a nice server-side templating language. There are a lot of people who aren't dependency-conscious, and a lot of people who love React, and there is quite a bit of overlap between those two groups. I've used bun + preact_render_to_string for server-side JSX templates before and it was nice. When I did, it seemed that Bun somewhat embraced React, and I could imagine React being the path of least resistance to server-side JSX there for some of the folks in the aforementioned groups.
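The setup is roughly this shape, assuming the preact and preact-render-to-string packages:

```javascript
// Render a component tree to a plain HTML string on the server; no hydration,
// no client bundle, just templating (JSX compiles down to these h() calls).
import { h } from "preact";
import render from "preact-render-to-string";

function UserCard({ name }) {
  return h("div", { class: "card" }, `Hello, ${name}`);
}

// `html` is a string like '<div class="card">Hello, Ada</div>' — send it as
// the HTTP response body.
const html = render(h(UserCard, { name: "Ada" }));
```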
Fetch index.html -> Fetch JS bundle -> Evaluate -> Fetch /users/me
You do:
Fetch index.html (your page is rendered at this point) -> rehydrate with client side JS for interactivity in the background
It's a pretty smart solution I think, and many people are still sleeping on the whole SSR topic.
Imagine a page that loads html during the first load, and then performs client-side routing during subsequent navigations. Is it an SPA? Is it not an SPA?
And to sell Vercel on top.
See Sitecore Cloud, Sanity, Contentful,....
I checked: Sitecore Cloud has special integrations for Next.js and React, but it also supports vanilla JS.
Is anyone really exclusive to Next.js?
(haven't tried it myself)
I still don't understand how people used to think scripts like this were the proper way to bundle an app.
https://github.com/facebook/create-react-app/blob/main/packa...
vite is great, is all I am saying
I'm a big believer in fully reviewing all LLM generated code, but if I had to generate and review a webpack config like this, my eyes would gloss over...
The LLM generated vite config is 20 lines
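For comparison, a typical minimal Vite config is on the order of this (a sketch, assuming a React app using @vitejs/plugin-react):

```typescript
// vite.config.ts — this plus an index.html replaces the whole webpack script.
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  build: {
    outDir: "dist",
    sourcemap: true,
  },
});
```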
I used to maintain a build workflow library [1] a lifetime ago; while our frontend build needs have evolved way beyond it, I can't shake the feeling that we overengineered a little too much.
The RPC (aka server functions) demo [0] is particularly interesting — end-to-end type safety (from DB to UI) is a major milestone for JavaScript. We've been doing a lot of RPC design work in that space with Telefunc (tRPC alternative) [1] — it's a really hard topic, and we're looking forward to collaborating with the Void team. (Also looking forward to contributing as the creators of Vike [2].)
[0]: https://www.youtube.com/watch?v=BX0Xv73kXNk (around the end of the first talk) [1]: https://telefunc.com (see the last PR) [2]: https://vike.dev
The big news regarding Void Cloud is that it all seems to be built on Cloudflare workers. The landing page is very light on info atm too. [0]
I am super excited that they are MIT open sourcing Vite+ however. In that realm, they are obviously targeting Bun as their main competition. Unfortunately for Bun, if they are forced to help Anthropic more than they can focus on OSS, they might lose their current (perceived?) advantage.
> Unfortunately for Bun, if they are forced to help Anthropic more than they can focus on OSS
Curious: is that speculation, or based on observation?
Does Oxc also support TS runtime features like constructor parameter properties and enums? I seem to recall in the beta that they had enabled --erasableSyntaxOnly, presumably because Rolldown / Oxc didn't support doing a full transform.
For that matter, TypeScript's version of decorators ("experimental decorators") also works: https://playground.oxc.rs/?options=%7B%22run%22%3A%7B%22lint...
What's not supported is the current draft proposal for standardized ECMAScript decorators; if you uncheck experimentalDecorators, the decorator syntax is simply passed through as-is, even when lowering to ES2015.
Do you know what the status is on using Rolldown as a crate for Rust usage? At the moment most Rust projects use SWC, but afaik its bundler is deprecated. I usually just call into Deno for builds, but it would be nice to have it all purely in Rust.
Just upgraded to 8 with some version bumping. Dev server time reduced to 1.5s from 8s and build reduced to 35s from 2m30. Really really impressed.
I think it is the only tool in the JS ecosystem that has not broken after a few years.
A great QoL change. One less place to duplicate (and potentially mistake) a config.
{
// ...
"imports": {
"#assets/*": "./src/assets/*",
"#/*": "./src/*"
},
// ...
}
The second one (#/*) is similar enough to what I usually use (@/*), and it's supported in Node since v25.4.0! Yet when I try to import the file at projectRoot/src/router/index.ts using: import router from "#/router/index"
VS Code shows an error: "Cannot find module '#/router/index' or its corresponding type declarations."
Now, imports from e.g. "#assets/main.css" work, so I could work around this issue - but this is what I keep experiencing: the native variant usually kinda works except for the most common use case, which is made unnecessarily awkward. For a long time this is what ESM used to feel like, and IMO it still does in places (e.g. directory imports not working is a shame).
// See https://github.com/vitejs/vite/discussions/14652
esbuild: {
  loader: "jsx",
  include: /.*\.jsx?$/,
  exclude: [],
},
optimizeDeps: {
  esbuildOptions: {
    loader: {
      ".js": "jsx",
    },
  },
},
Note the comment at the top. I had no idea how to come up with this config by checking the documentation pages of vite and its various related tools. Luckily I found the GitHub issue, and someone else had come up with the right incantation.
Now this new vite uses new tools, and their documentation is still lacking. I spent half an hour trying to figure out how vite (and the related tools I had to navigate and piece together into a coherent view: esbuild, oxc, rolldown, etc.) might be convinced, but gave up and stayed with vite 7.
Someone could respond with a working solution and it would help, sure, but these tools sure as hell have documentation issues.
Though sometimes oxc complains about JSX in JS when running vite, but it still works fine.
Another instance is rollupOptions.output.manualChunks, which now has to be rewritten; maybe that one would be less frustrating to figure out.
I wonder how much of the Rollup bundling magic has been ported to Rolldown.
One thing that has always made this kind of switch to Rust hard is that Rollup has become so sophisticated that it's difficult to replace with something new.
https://github.com/vitejs/vite/issues/21939
Build filesizes have doubled... No build speed gain is worth that...
I would take a slightly longer build time for half the filesize any day
What about finally stopping using Node.js for server-side development?
In fact it still does; I only use Node when forced to by project delivery, where "backend" implies something like Next.js full stack, or React apps running in iframes from SaaS products.
Well that's where they went wrong.
Node.js has been extraordinarily useful for building build tools. We're outgrowing its capacity and rightfully moving to a compiled language. Also, faster tooling is essential for establishing a high-quality feedback loop for AI agents.
Fast all the way down, especially when coupled with REPL tooling.
It just shows that people don’t value the actual performance of what they’re running.