- How do you measure bundle size and make sure it remains small? (e.g. make sure that a PR doesn't accidentally import a massive dependency)
- How do you measure bundling speed/performance: does it add significant time to the first request? To subsequent requests? Is it linear, exponential, etc. with the number of LOC? And, as in the previous point, how do we make sure there are no regressions here?
- How well does this work with absolutely anything else? If I want my front-end in React? Vue? etc.
Here is a link to the source where esbuild is used.
https://github.com/denoland/fresh/blob/main/src/server/bundl...
I personally think it would be better to bundle at deployment time so that the bundles don't need to be regenerated each time a new process starts up or on demand when a request comes in for one of the bundle files.
ES modules have great support but import maps don't. Your website won't work on iPhones if you launch with them today. They're close though. Give it a month and it should work.
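For context, an import map is just an inline JSON block that tells the browser how to resolve bare specifiers; a minimal sketch (the package name and CDN URL are illustrative):

```html
<script type="importmap">
{
  "imports": {
    "preact": "https://esm.sh/preact@10.13.0"
  }
}
</script>
<script type="module">
  // The bare specifier "preact" resolves via the import map above,
  // with no bundler involved.
  import { h } from "preact";
</script>
```

On a browser without import-map support, the bare specifier in the module script fails to resolve, which is why Safari support is the blocker.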
Which is another way to say it's a form of compilation..... [0]
[0]: https://en.wikipedia.org/wiki/Source-to-source_compiler
It’s not a new idea.
It doesn't. Fresh uses Preact and that's it.
So the bundle size is zero.
It's not just HTML but also JS that gets sent.
That is, your browser loads the entry point file, then parses it for imports and loads the files referenced from there, then parses those for imports and so on. This process is not free. In particular, even when modules are cached, the browser still will make some request with If-Modified-Since header for each file, and even to localhost that has time overhead cost. This impact is greater if you are developing against some cloud dev server because each check costs a network round-trip.
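One partial mitigation, sketched here with made-up file names, is to declare the deep modules up front with modulepreload hints so the browser fetches them immediately instead of discovering them one parse at a time:

```html
<!-- Entry point plus preload hints for modules that would otherwise
     only be discovered after parsing their importers. -->
<link rel="modulepreload" href="/js/app.js">
<link rel="modulepreload" href="/js/state.js">
<link rel="modulepreload" href="/js/utils.js">
<script type="module" src="/js/app.js"></script>
```

This flattens the request waterfall but doesn't remove the per-file revalidation requests described above.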
However this may only come up when you have apps with many files, which Google apps tended to do for protobuf reasons.
I've only seen this from afar when using the Maps JS API.
If you want a de-Googled approach for "only code you need", check out Qwik, by Misko Hevery (who has worked on a bunch of JS-related things) and a few others. The concept is "resumability".
(not sure if that's what you entirely meant since your example was the maps api)
In the JavaScript world, a significant chunk of energy is directed inwards, solving problems created by using JavaScript!
You also have completely different performance requirements compared to most other languages. If I ship a Python app I don't have to worry about reducing the length of variable names to shave off a few bytes, or bundling multiple files together to reduce the number of HTTP requests. Other languages don't need to dynamically load code via HTTP requests; they generally run under the assumption that all of the code is available before execution.
The closest comparison outside of the browser would be to the container ecosystem, which also runs code in an environment agnostic way, and there's plenty of complexity and volatility there (podman, buildah, docker, nerdctl, k8s, microk8s, k3s, k0s, nomad, docker swarm, docker compose, podman compose, et cetera).
And as someone who has worked on both, I can tell you that the container ecosystem is way better and way more deterministic. A `Dockerfile` from 10 years back would work today as well. Any non-trivial package.json written even a few years ago would have half its packages deprecated in backward-incompatible ways!
There is another similar ecosystem of mobile apps. That's also way superior in terms of the developer experience.
> Other languages don't need to dynamically load code via http requests, they generally run under the assumption that all of the code is available before execution.
And that's not what I am objecting to. My concern is that the core JS specification is so barebones that it fragments right from the start.
1. There isn't a standard project format.
2. There isn't a single framework that's backward compatible for 5+ years.
3. There isn't even an agreement on the right build tools (npm vs yarn vs pnpm...).
4. There isn't an agreement on how to do multi-threaded async work.
You make different choices and soon every single JS project looks drastically different from every other project.
Compare this to Java (older than JS!) or Go (newer than JS but highly opinionated). People writing code in Java or Go don't expect their builds to fail ~1-5% of the time. Nor are their frameworks changed in backward-incompatible ways every few years.
Isn't that the same as shipping native binaries? You don't know what OS version or libraries it will run on. That's why you do stuff like link with the oldest glibc you want to support.
One package (not ours) suddenly fails to build about 40% of the time. Looks like a parallel-access problem: node-gyp poops with "Unable to access foobar.tlog" because some other step is using the same file.
Fixed "elegantly" by adding a `while (failed) { npm install }` loop, because trying to debug the build for a package you didn't create just isn't worth it.
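That workaround can be made slightly less crude as a generic retry wrapper (a sketch; the attempt limit is arbitrary):

```shell
# Retry a flaky command a few times before giving up.
retry() {
  local n=0 max=5
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $max attempts" >&2
      return 1
    fi
    echo "attempt $n failed; retrying..." >&2
  done
}

# Usage (the actual workaround): retry npm install
```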
The languages where the build tools remain static are often also the languages where innovation lags behind or where no real alternative exists. C and C++ projects often use standards that go back literal decades for compatibility reasons, and rely on apt/dnf/pacman to install their dependencies. Java is stuck on nine-year-old tech in most production systems because what if upgrading to Java 9 will break AncientProprietaryHackedTogetherLibrary. Python seems to be moving away from the pip vs conda wars, though the ML space seems to be reintroducing conda into newer projects; to run popular software, I've had to install at least two conda packages and pip (and then disable the auto load in my bash shell because all of them made the shell prompt take literal seconds to come up).
Go/Rust/.NET and other more recent languages have a single package manager + compiler + build tool + publishing system combination that's changing so rapidly no alternative could be written. I guess you can manually script calls to the compiler, linker, and download scripts, but I doubt this will be maintainable. I wonder how long it'll be before GCC Rust and official Rust run into trouble in this space.
The Javascript ecosystem certainly seems to be the wildest when it comes to reinventing the wheel (and inventing new steps) to make new build systems, but every language either has too much of that or too little.
Have you ever tried building v8 from source? Go ahead, give it a whirl and come back in a few days and let me know how it went.
Or .net? Which one you ask? .net framework? .net core? Mono? Which version? Which framework? Which OS? Enjoy tweaking assembly xml files?
Python? You mean one of the main driving forces behind the invention of containers because dependency installation is such hell?
Go? Well, actually it is better. So, that's one.
My point is, js isn't unique in having fragmentation. It is a bit unique in its pace of innovation, but that's a good thing since it's also probably the most backwards compatible ecosystem in existence.
Is that all this is?
They are trying to innovate, coming up with differentiators and reasons to use the platform. If you had asked me when I met Ryan 5 years ago at JSConf EU, before he introduced Deno, I would have assumed they'd have 30% market share (of server-side JS) by now. But Node has been able to "catch up" to complaints quickly enough (I think), Deno's selling points like edge computing and fast startup aren't super important for most devs in most use cases in practice, and there are other runtimes for different clouds (like Cloudflare Workers).
That said, while I find the marketing speak shitty and somewhat in bad faith, I still think it's really good that they're innovating. I'm very much in favor of that and hope they find something important enough to solve to get big.
For example: I’m making a web app with Svelte in TypeScript and I’m trying to test a part of its code. To do that, I have to build the app first because TypeScript needs transpiling which in turn needs bundling etc…
personally I've been using vanilla ES6 for years and not bundling, because I don't care about mobile Safari, and I love it.
Am I missing something? This might not be terrible if it becomes the standard to host your own mirror internally.
`npm install` is equivalent to `deno vendor` (https://deno.land/manual@v1.31.1/tools/vendor) in that they both fetch your dependencies and store them locally, so your app can run without downloading the deps.
The just-in-time builds section of the linked article describes an approach where you dynamically bundle, at request time. If your server already has all the deps vendored then it won't need to fetch them at runtime and your app will stay up even if the URLs go down.
If you are building something that demands high availability you probably want to host the dependencies yourself though. Which is easy, you just copy them and serve them as static files (assuming their license allows that use).
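As a concrete sketch of that (the entry-point name is assumed), Deno's vendoring flow looks roughly like:

```shell
# Download remote imports into ./vendor and generate an import map
deno vendor main.ts

# Run against the local copies instead of hitting the network
deno run --import-map=vendor/import_map.json main.ts
```

The files under ./vendor can then be committed or served as static assets like any other code.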
It's not always simple in every module system though. Currently, I want to figure out how to create a mirror for our Electron codebase, but it's tough because some of the modules fetch gyp native headers that live in other locations (including the Electron core packages themselves) and NPM doesn't always know what to do. The Electron core header URLs flake every 2-3 weeks or so and inevitably we lose a lot of engineering time.
Hoping Deno continues to gain steam and makes this simpler since everything is URLs all the way down.
Like you mentioned, mirrors could become more common, and relying on HTTP makes it incredibly easy to host your own mirror. And import maps mean you can mirror anything and everything in your dependency tree.
Let's not forget, back in the day every major site relied on a client-side request to a jQuery CDN :)
Why? For any sufficiently complex software system, a build system serves as a reducer whose input is something that is more convenient for developers (i.e. a huge codebase with tons of utilities and annotations) and whose output is something more optimized to run on the end users' devices.
It's good to do such optimization because, at least for a successful project, there will be many orders of magnitude more end users than devs. And an automated solution can do much more optimization than any team of devs could ever hope to do manually.
And that's before you get into obfuscation, although I can't tell whether that's more necessary for user security or just for protecting IP.
(Not a web dev, I write in a compiled language in my day-to-day.)
One more thing to know, to update, to break, to configure, to consider when debugging. The existence of source maps proves just one aspect of the pain this indirection and complexity introduces. I’m not necessarily arguing the trade offs don’t make it worth it, merely that there is a cost and there are good reasons we’d want to avoid it if, all else being equal, we can.
As software engineers we ought to question if we're going in the right direction, and "more complexity" is not something I agree is better
Having to manually kick off a build process is another thing that I have to go do that takes me out of the flow and is another point where things can go wrong.
When you run a Deno script, it has a build step that it does internally, so I don't have to think about it, and I don't have to configure anything. It just works because it was designed that way. I don't know why more language runtimes aren't designed that way.
Even when I want to compile a JavaScript bundle to run in the browser and there is an explicit build step, `deno bundle` is far simpler and more pleasant to use than the mess of npm packages I would have to worry about in the Node world.
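For reference, the single-command flow being described (entry and output names assumed):

```shell
# Resolve the whole module graph and emit one self-contained ES module
deno bundle src/app.ts dist/app.bundle.js
```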
But, since the devs don't care, there's only so much finger wagging I can do
If it's not significant in terms of money, it makes no sense for the company to devote costly labour time to speeding it up.
I feel the same about C and C++. And Java. And Fortran, Pascal, Lisp, Kotlin, Swift, Rust, Forth...
This is a wonderful book, "The School of Niklaus Wirth: The Art of Simplicity"
https://tkurtbond.github.io/posts/2022/07/04/the-school-of-n...
Esp this chapter on the Wirthian way of designing compilers https://www.researchgate.net/publication/221350529_Compiler_...
See also the widely read (on hn), https://prog21.dadgum.com/47.html
I run one Node/Js server and several Nginx/Php servers.
When Node was released, it had better handling of multiple long-lived connections. Nowadays, support for SSE on Node trails all other servers, and the dream of "Isomorphic" code that doesn't need to be rewritten has not panned out (in JS, at least).
The main reason I could imagine someone choosing Deno now is that it is the tool they know best (such as someone fresh out of college). Which may not be a bad reason, but it is hardly the best tool for the job.
When you run the vite dev server it uses ESM, but when you build it uses rollup, because serving ESM is slow and with larger apps the client browser is going to make a bazillion requests. Wouldn't you rather traverse the dependency graph one time and bundle your code into modules so that everyone who visits your site doesn't force their browser to do it over and over again? Sure those dependencies will be cached between views or refreshes, but the first load will be slow as shit. Then you still need to "code-split", just now you're calling it "islands".
I'd rather have an equally slow experience on first load, and then much better performance forever, compared to having something that constantly invalidates the entire cache.
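For what it's worth, the "one edit invalidates everything" failure mode is tunable in that setup; a hedged sketch of a Vite config that splits rarely-changing dependencies into their own long-cached chunk (the chunk name and package list are assumptions):

```javascript
// vite.config.js
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        // App edits change only the app chunk; the vendor chunk's
        // hash (and the browser's cache entry for it) stays stable.
        manualChunks: {
          vendor: ["preact"],
        },
      },
    },
  },
});
```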
Even with newer versions of http just transferring lots of small files is noticeably slower (few percent if I remember correctly).
Take a look at a few optimizations it's able to do that the Deno guys will never even be able to dream of (otherwise they would reinvent Node.js, lol) [1]. The worst part is that the guy who created Deno is the same person who made Node.js; if you don't like Node.js, I'm not sure why you would bet it all on another of his projects, especially considering second-system syndrome is real and painful [2]. Deno is already suffering from feature creep, just recently starting to support package.json, which I find hilarious. Soon they will reinvent CPAN [3] and believe they've hit on something extremely innovative.
Does reading about CPAN remind you of something? Something that could be the same for JavaScript? Like a package manager for NodeJS?
[0]: https://vitejs.dev/guide/why.html#why-bundle-for-production
[1]: https://vitejs.dev/guide/features.html#build-optimizations
You can also use Deno to run your bundling tools, but again, what happens in Deno stays in Deno and does not reach the browser.
I can’t remember where I read it, I think it’s in the official docs.
[0]: https://vitejs.dev/guide/why.html#why-not-bundle-with-esbuil...
In addition to that, often there are other concerns addressed at build time such as linting.
I do think, based on the requirejs code, that commonjs/browserify didn't really need to be compiled anyway.
Also, fwiw, the technique mentioned here is how a colleague and I introduced Babel at a large company as well; we just transformed and reverse-proxy cached in dev. And webpack basically does this anyway these days.
That sounds like an oxymoron to me. I have honestly no idea what they mean by that. To me, a browser is client-side software, so saying you want to run server-side JS on it doesn't make any sense. They mention it several times in the article but I simply can't follow.
Could someone with a deeper understanding ELI5 this to me?
With that said, I've seen people argue against node APIs and this desire to only use web APIs on the server. I don't get that. Node's API is generally pretty good and using JS as a replacement for Python/Ruby/etc. locally is rather excellent today. You don't need neutered APIs to also write code that works in both client and server. Unless you're selling cloud native bullshit (ahem, Nextjs)
Basically the point being that the browser "version" of JS has a lot of limitations w.r.t dependency resolution and standard library usage, and that by either using a bundling tool or whatever is being proposed here you can avoid those. In that way you end up writing "server-side JS," basically NodeJS style JS, for the browser.
Major added benefit is that it allows you to use the same libraries/packages/whatever on both client and server. That's highly convenient.
Modern (post-Quake) games make the server authoritative but allow server logic to run locally.
What modern JS apps do is a hybrid of client/server rendering. It's akin to moving/transmitting code from the server for faster rendering.
I think they use it for offline web apps and to fix problems with server-side rendering (resource usage, time to first render).
It's definitely an improvement but the title is misleading.
I do think that new libraries should probably go the other direction with Deno first and Node/npm as a separate build target. I've started also reaching for Deno first for a few shell scripting chores where I need more than bash...
#!/usr/bin/env -S deno run ...
Which has been pretty handy.

For example, Fresh has a "build process" whose cost is paid for by the user [1]. You want to do these things before the user hits your page, and that's the nice thing about CI/CD: you can ensure correctness and you can optimize code.
In the interest of losing the build step, a tradeoff is made: worse UX in exchange for better developer experience (DX). Rather, I would recommend shifting the compute that makes sense to the build step, and then giving developers the optionality to do other work lazily at runtime [2].
[1]: https://github.com/denoland/fresh/blob/08d28438e10ef36ea5965...
[2]: https://vercel.com/docs/concepts/incremental-static-regenera...
I think it would be better to do bundling in your CI/CD. esbuild supports incremental builds, so using that + code splitting would be one way of speeding up builds.
With their current bundling design, if they believe bundling is fast enough for users to not be negatively impacted, wouldn't it also be fast enough to not slow down development/deployment by having it in a build step?
Reducing build times (or eliminating the build step) by moving things to runtime is a great idea for a debug build/mode. But why is it a good idea not to have separate release build to optimise for runtime performance?
Would be a much better use of their time than writing this nonsensical bs
My understanding is that the client-side JS is the result of backend compilation. How does this work if the backend is dynamically generating those JS files? A page can render different JSX depending on what `getPosts()` returns. No?
Or do people just want to YOLO it and let it crash in prod?
I really think with some love we could just go back to writing html/js/css directly. Maybe it is just that I fail to see the point of nodejs.
As someone who is a bit of an outsider to webdev, it looks like enough power to make most webapps I'd want to make. I think the only question in my mind is what the benefits/drawbacks of Deno+Fresh vs something like SvelteKit.
So please, if you own a framework like this, make sure a script tag with a CDN link is easily copyable.
A lot of very useful features require a build step because it generates classes on the fly based on what you typed in the HTML.
I doubt it.
People moved on from jQuery and/or vanilla because they needed to produce more sophisticated apps. And even in those days of yore, for any non-trivial project you still needed to concatenate and minify your code.
BTW Preact can be used without a build step.
One can also argue that minification isn't really that important anymore, given widespread newer compression algorithms like Brotli.
EDIT: Also, see this very good argument in favor of multiple files: https://news.ycombinator.com/item?id=34997759
If you serve 10 million users every byte counts, but 10? Use the cdn.
No one is building and ending up with bundles that are reducing the bloat of the web, you can’t tree-shake your way out of bad practices. Articles and real lived experiences show us that the web is still bloated.
And why are we transpiling anything? If people want to flirt with building, I wish JavaScript engineers would just build an implementation that compiles to a machine-code intermediate representation.
Which is it? Do you want to be a scripting language or a programming language that compiles to something? It’s so gross to me.
This is ridiculous. Just aesthetics. JS compiles to machine code when you run it "just in time". It's even relatively efficient considering it doesn't need static typing.
The "build step" is just for reducing the size of the payload. It is possible a binary representation would make it even smaller but not by much. Not worth the added complexity
https://www.amazon.com/High-Performance-Web-Sites-Essential/...
https://www.amazon.com/Even-Faster-Web-Sites-Performance/dp/...
This article is strictly about JavaScript. What about all the server-side rendering frameworks?
Huge assumption that everything is built one way.
I mean, if we wanna get really pedantic about it, then yes, there will always be a build step no matter what you do; one could argue saving the file and alt-tabbing to the browser is a build step. But that's not the point, is it? The idea is to lower that friction as much as possible, and JIT is perfect for that.
I hope builders start adding it (at least to dev instances) to decrease the magic.
I was trying to use import maps, but they're actually not trivial to create.
There are always problems with Node lagging behind browsers, though, which makes developing hard (no WebSocket support by default, for example; the crypto module is also not included).
I'm using Vite with SvelteKit, which is great because it compiles files separately, but it still doesn't generate import maps; it uses imports with relative and absolute filenames.
FE development has some unique challenges, but in my experience a lot of people who work in this domain try to find their own solutions to problems that have already been solved decades prior. There's a reason the build chains are fragile and a nightmare to configure, that package lists are out of date the moment they're published, and that it takes a sustained effort to maintain a project viable even if you're not adding features or fixing bugs. It's absurd, and it's the status quo.
To take this into other areas of development (like BE for example) simply because that's what you're familiar with... it really is a special kind of masochism.
They are not fetched every time you run the app.
copying the code to the server becomes the build step.
except now you have no chance to lint the code before shipping it
"but I can lint it on my machine"
good, then you have a build system, and you may as well just get the optimized stuff on the server, since server startup time depends on your code size at some point or another, and you pay for that.
> Interest in Node.js grew since its inception.
They should mention the source of this; I guess it is Google Trends.
Creating a whole fork simply to avoid building TypeScript, reinvent a worse package management system, and add a useless security harness.
Hard coding URLs is significantly worse than having a package.json file:
- you don't need to write the full URL to import a module
- you have a quick overview of which modules are installed and for which reason (dev dependencies)
- you can easily create an immutable list of dependencies
> And what's useless about the security harness
Because most apps will have to enable all flags (file system and network) anyway and because huge security holes like symlinks breaking out of the harness were present not too long ago.
But the security features are stupid on their face.
If you can’t trust your own code, why should users?
It’s too naive anyway. Why would I grant carte blanche to any entire feature instead of per dependency?
So Deno started with a bad idea, and then implemented it half-baked.
Which is it? Do you not trust your own code or do you? You don’t? Why not? Or why do you only trust a subset of it? If you do only trust a subset of it, why have you denied or granted the entire feature?
It’s useless. It’s one of the dumbest software features I’ve seen in my life.
Trust a dependency and pin its signature.