JavaScript did deliver on its promise of an unbreakable sandbox: nowadays the browser runs JavaScript downloaded from any domain without asking the user whether they trust it or not.
WASM builds on the JavaScript engine, delivering similar security guarantees.
So there's no fundamental difference between WASM and JVM bytecode, only a practical one: WASM proved to be secure and the JVM did not.
So now Google Chrome is secure enough for billions of people to safely run evil WASM without compromising their phones, and you can copy this engine from Chrome to a server and use this strong sandbox to run scripts from various users who share resources.
An alternative is virtualization: you can either compile your code to a WASM blob and run it in the big WASM server, or compile it to an amd64 binary, put it alongside a stripped-down Linux kernel, and run the whole thing in a VM. There's no clear winner here for now, I think; there are pros and cons to every approach.
There's more to it than just the sandbox security model. JVM bytecode doesn't have pointers, which has significant performance ramifications for any language with native pointers. This limitation was one of the reasons the JVM was never a serious compilation target for low-level languages like C/C++.
E.g. Adobe compiled their Photoshop C++ code to WASM but never to the JVM to run in a JRE or a Java web applet. Sure, one can twist a Java byte array into acting as a flat address space and then "emulate" C/C++ pointers, but that extra layer of indirection reduces performance and wasn't something software companies with C/C++ codebases were interested in. Even though the JVM was advertised as WORA ("Write Once, Run Anywhere"), commercial software companies never deployed their C/C++ apps to it.
In contrast, the motivation for asm.js (predecessor to WASM) was to act as a reasonable and realistic compilation target for C/C++. (https://blog.mozilla.org/luke/2013/03/21/asm-js-in-firefox-n....)
So the WASM-vs-JVM story can't be simplified to "just security" or "just politics". There were actual different technical choices made in the WASM bytecode architecture to enable lower-level languages like C/C++. That's not to say the Sun Java team's technical choices for the JVM bytecode were "wrong"; they just used different assumptions for a different world.
It is interesting to ask why that is the case. From my point of view, the reason is that the JVM standard library is just too damn large, while WASM takes the lower-level approach of simply not having one.
To give WASM the capabilities it needs, the host (the agent running the WASM code) has to provide them. For a lot of languages that means using WASI, which moves most of the security concerns into whichever WASI implementation is used.
But if you really want to create a secure environment you can just... not implement all of WASI. A lambda-function host environment can, for example, simply not implement any of the filesystem WASI calls, because a lambda has no business doing filesystem stuff.
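A minimal sketch of that idea (browser or Node host; `guest.wasm` and the allow/deny split are hypothetical, though the import names are real WASI preview1 functions):

```js
// Host that grants no filesystem capability: every filesystem entry point
// of wasi_snapshot_preview1 is stubbed to fail instead of being implemented.
const ERRNO_NOTCAPABLE = 76; // WASI preview1 errno for "capabilities insufficient"
const deny = () => ERRNO_NOTCAPABLE;

const wasiImports = {
  path_open: deny,
  fd_read: deny,
  fd_write: deny, // even stdout, if you want to be strict
  fd_close: deny,
  // ...stub the rest of the wasi_snapshot_preview1 surface the same way,
  // implementing only the calls this host actually wants to allow.
};

const bytes = await fetch("guest.wasm").then((r) => r.arrayBuffer()); // hypothetical module
const { instance } = await WebAssembly.instantiate(bytes, {
  wasi_snapshot_preview1: wasiImports,
});
instance.exports._start(); // the guest runs, but cannot touch any filesystem
```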
> An alternative is virtualization: you can either compile your code to a WASM blob and run it in the big WASM server, or compile it to an amd64 binary, put it alongside a stripped-down Linux kernel, and run the whole thing in a VM.
I think the first approach gives the host a lot more room for optimization, to the point that we could see hardware with custom instructions to make WASM faster, or custom WASM runtimes heavily tied to the hardware they run on to produce better JIT code.
I imagine a future where WASM is treated like LLVM IR.
Aren't its VM implementations routinely exploited? Ranging from "mere" security-feature exploits, such as popunders, all the way to full-on proper VM escapes?
Like even today, JS is run interpreted on a number of platforms, because JIT compiling is not trustworthy enough. And I'm pretty sure the interpreters are not immune either.
They also took much longer to develop than whatever you could cook up in plain HTML and JavaScript.
The JVM is not fundamentally insecure, any more than any Turing-complete abstraction like an x86 emulator is. It's always the attached APIs that open up new attack surfaces. Since the JVM at the time was used to bring absolutely unimaginable features to the otherwise anemic web, it had to be unsafe to be useful.
Since then, the web has improved a huge amount; a complete online FPS game could literally be programmed in just JS almost a decade ago. If a new VM can simply interact with this newfound JS ecosystem and rely on it to provide the boundaries, it can of course be made much safer. But that's not inherent to this other VM.
This is an oversimplification: there's nothing about the JVM bytecode architecture that makes it insecure. In fact, it is quite a bit simpler as an architecture than WASM.
Applets were just too early (you have to remember what the state of tech looked like back then), and the implementation was of poor quality to boot (owing in part to some technical limitations, but not only).
But worst of all, it just felt janky. An applet wasn't really part of the page, just a little box in it that had no connection to HTML, the address bar and page history, or really anything else.
The JavaScript model rightfully proved superior, but there was no way Sun could have achieved it short of building their own browser with native JVM integration.
Today that looks easy: just fork Chromium. But back then the landscape was Internet Explorer 6 versus the very marginal Mozilla (and later Mozilla Firefox) and the proprietary Opera, which occasionally proved incompatible with major websites.
The practical reasons have more to do with how the JVM was embedded in browsers than with the technology itself (though Flash was worse in this regard). Plugins were linked in at the binary level and had the same privileges as the containing process. With the JS VM, the browser has a lot more control over I/O, since the integration evolved that way from the start.
I'm sure there's a big long list of WebKit exploits somewhere that will contradict that sentence...
Unlike the JVM, WASM offers linear memory and no GC by default, which makes it a much better compilation target for a broader range of languages (most commonly C and C++ through Emscripten, and Rust).
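To illustrate what "linear memory" means in practice, here is a rough host-side sketch, assuming a module that exports its memory (the Emscripten and Rust toolchains do by default) and a hypothetical export that returns a C string pointer:

```js
// Linear memory is one flat, resizable buffer; a C/C++/Rust pointer is
// just an integer offset into it.
const mem = new Uint8Array(instance.exports.memory.buffer);
const ptr = instance.exports.make_greeting(); // hypothetical export returning a char*

// Read the NUL-terminated string the module placed in its linear memory.
let s = "";
for (let i = ptr; mem[i] !== 0; i++) s += String.fromCharCode(mem[i]);
console.log(s);
```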
> Maybe I’m just old, but I thought we’d learnt our lesson on running untrusted third party compiled code in a web browser.
WASM is bytecode, and I think most implementations share a lot of their runtime with the host JavaScript engine.
> In all of these cases it’s pitched as improving the customer experience but also conveniently pushes the computational cost from server to client.
The whole industry has swung from fat clients to thin clients and back since time immemorial. The pendulum will keep swinging after this too.
Indeed, graphics pioneer and all-around-genius Ivan Sutherland observed (and named) this back in 1968:
"wheel of reincarnation "[coined in a paper by T.H. Myer and I.E. Sutherland On the Design of Display Processors, Comm. ACM, Vol. 11, no. 6, June 1968)] Term used to refer to a well-known effect whereby function in a computing system family is migrated out to special-purpose peripheral hardware for speed, then the peripheral evolves toward more computing power as it does its job, then somebody notices that it is inefficient to support two asymmetrical processors in the architecture and folds the function back into the main CPU, at which point the cycle begins again.
"Several iterations of this cycle have been observed in graphics-processor design, and at least one or two in communications and floating-point processors. Also known as the Wheel of Life, the Wheel of Samsara, and other variations of the basic Hindu/Buddhist theological idea. See also blitter."
https://www.catb.org/jargon/html/W/wheel-of-reincarnation.ht...
- Wasm has a verification specification that Wasm bytecode must comply with. This verified subset makes the security exploits seen in those older technologies outright impossible. Attacks based on misbehaving hardware, like Rowhammer or Spectre, might still be possible, but you can't, e.g., reference memory outside of your Wasm instance's memory by tricking the VM into interpreting a number you hold as a pointer to memory that doesn't belong to you (see the sketch after this list).
- Wasm bytecode is trivial (as it gets) to turn into machine code. So implementations can be smaller and faster than using a VM.
- Wasm isn't owned by a specific company, and has an open and well written specification anyone can use.
- It has been adopted as a web standard, so no browser extensions are required.
As for computation on clients versus servers, that's already true for JavaScript. More true, in fact, since Wasm code can be efficient in ways that are impossible for JavaScript.
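To make the first point concrete, a small sketch of that verification step using the standard `WebAssembly.validate` API (the eight-byte header is the real Wasm magic number plus version; the rest is illustrative):

```js
// Engines validate bytecode before running a single instruction.
const ok = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]); // "\0asm" + version 1
console.log(WebAssembly.validate(ok)); // true: a well-formed (empty) module

const bad = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0xff, 0xff, 0xff, 0xff]); // corrupted version
console.log(WebAssembly.validate(bad)); // false: rejected without ever executing
```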
As far as I understand, in WASM memory is one linear blob. So if I compile C++ to WASM, isn't it possible to reference a random segment of that memory (say, via an unchecked array-index exploit) and then do whatever you want with it (exploit other bugs in the original C++ app)? The only benefit is that access to the OS is isolated, but all the other exploits are still possible (and impossible in the JVM/.NET).
Am I missing something?
Both Java and .NET verify their bytecode.
>Wasm bytecode is trivial (as it gets) to turn into machine code
JVM and .NET bytecodes aren't super complicated either.
Probably the only real differences are: 1) WASM was designed to be modular and slim from the start, while Java and .NET were designed to be fat (there are modularization efforts now, but it's too late); 2) WASM has been an open standard from the start, so browser vendors implement it without plugins.
Other than that, it feels like WASM is a reinvention of what already existed before.
WASM makes that safe, and that's the whole point. It doesn't increase the attack surface by much compared to running JavaScript code in the browser, while the alternative solutions were poking directly into the operating system and bypassing any security infrastructure of the browser for running untrusted code.
Java was an outsider trying to get in.
The difference is not in the nature of things, but rather who championed it.
And otherwise, WASM is different in two ways.
For one, browsers have gotten pretty good at running untrusted third-party code safely, something Flash or the JVM or IE or .NET were never even slightly adequate for.
The other difference is that WASM is designed to let you take a program in any language and run it in the user's browser. The techs you mention were each tied to a single language, so if you already had a program in, say, Python, you'd have to rewrite it in Java or C# (or maybe Scala or F#) to run it as an applet or Silverlight program.
From 2001,
"More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET."
https://news.microsoft.com/2001/10/22/massive-industry-and-d...
There never was a wasm vs applet debate.
- The security model (touched on by other comments in this thread)
- The Component Model. This is probably the hardest part to wrap your head around, but it's pretty huge. It's based on a generalization of "libraries" (which export things to be consumed) to "worlds" (which can both export and import things from a "host"). Component modules are like a rich wrapper around the simpler core modules. Having this two-layer architecture allows far more compilers to target WebAssembly (because core modules are more general than JVM classes), while also allowing modules compiled from different ecosystems to interoperate in sophisticated ways. It's deceptively powerful yet sounds deceptively unimpressive at the same time (a hand-wired core-module sketch follows this list).
- It's a W3C standard with a lot of browser buy-in.
- Some people really like the text format, because they think it makes Wasm modules "readable". I'm not sold on that part.
- Performance and the ISA design are much more advanced than the JVM's.
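As promised above, a rough hand-wired sketch of the export/import wiring that the Component Model generalizes and types, using plain core modules (`libBytes` and `appBytes` are hypothetical compiled modules):

```js
// One module's exports satisfy another module's imports; the host does the wiring.
// The Component Model expresses the same relationship declaratively, with rich types.
const lib = await WebAssembly.instantiate(libBytes); // the "library": only exports
const app = await WebAssembly.instantiate(appBytes, {
  lib: lib.instance.exports, // satisfies the app's imports from module "lib"
});
app.instance.exports.main();
```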
It's just an IDL; IDLs have been around a long time and have been used for COM, Java, .NET, etc.
As well as the security-model differences others are debating, and WASM being an open standard that is easy to implement and under no control of a commercial entity, there is a significant difference in scope.
WebAssembly is just the runtime that executes bytecode-compiled code efficiently. That's it. No large standard runtime (compile in everything you need), no UI manipulation (message passing to JS is how you affect the DOM, and how you read DOM state back), etc. It does one thing (crunch numbers, essentially) and does it well.
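As a sketch of that message-passing boundary: the module can only affect the page through functions the host chooses to import (the `set_title` import is made up for illustration, not a standard API):

```js
let instance;
const imports = {
  env: {
    // The one DOM capability this host grants: the module passes a pointer
    // and length into its own linear memory; the host reads the string out.
    set_title: (ptr, len) => {
      const bytes = new Uint8Array(instance.exports.memory.buffer, ptr, len);
      document.title = new TextDecoder().decode(bytes);
    },
  },
};
({ instance } = await WebAssembly.instantiate(wasmBytes, imports)); // wasmBytes assumed loaded
```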
The issue with those older technologies was that the runtime itself was a third-party external plugin you had to trust, and they often had various security issues. WASM, however, is an open standard, so browser manufacturers can implement it directly in browser engines without trusting other third parties. It is also much more restricted in scope (fewer abstractions mean less work to optimize them!), which helps reduce the attack surface.
That is nonsense. WASM and JS have the exact same performance boundaries in a browser because the same VM runs them. However, WASM allows you to use languages where it's easier to stay on a "fast-path".
WASM on its own isn't anything special security-wise. You could modify Java to be as secure, or actually more secure, just by stripping out features, as the JVM blocks some kinds of "internal" security attacks that WASM only has mitigations for. There have been many sandbox escapes for WASM and there will be more; for example, this very trivial sandbox escape in Chrome:
https://microsoftedge.github.io/edgevr/posts/Escaping-the-sa...
... is somewhat reminiscent of sandbox escapes that were seen in Java and Flash.
But! There are some differences:
1. WASM/JS are minimalist and features get added slowly, only after the browser makers have put a lot of effort into sandboxing. The old assumption that operating-system code is secure mostly no longer holds, whereas in the Flash/applets/pre-Chrome era it did. Stuff like the Speech XML exploit is fairly rare now, whereas the earlier attempts added a lot of features very fast, so there was more surface area for attacks.
2. There is an outer kernel sandbox if the inner sandbox fails. Java/Flash didn't have this option because Windows 9x didn't support kernel sandboxing; even Win2K/XP barely supported it.
3. WASM / JS doesn't assume any kind of code signing, it's pure sandbox all the way.
Also no corporate overlord control.
Obfuscation and transpilation are not new in JS land.
Google App Engine (2008) predates Lambda (2014) by 6 years!
I was never quite sure why we got the name “serverless”, or where it came from, since there were many such products a few years before, and they already had a name
App Engine had both batch workers and web workers, and Heroku did too.
They were both pre-Docker, and maybe that makes people think they were different? But I think Lambda didn't launch with Docker either.
Serverless refers to the software not being a server (usually implied to be an HTTP server), which was the common way to expose a network application throughout the 2010s; instead, the application interfaces with an outside server implementation through some other process-based means. Hence server-less.
It's not a new idea, of course. Good old CGI is serverless, but CGI defines a specific protocol, whereas serverless refers to a broad category of various implementations.
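For anyone who never saw it: a CGI program is not a server, just a process the web server spawns per request. A minimal sketch in Node (assuming a CGI-enabled web server such as Apache pointing at this file):

```js
#!/usr/bin/env node
// CGI protocol: request data arrives in environment variables (and on stdin
// for POST bodies); the response is headers + blank line + body on stdout.
const qs = process.env.QUERY_STRING || "";
process.stdout.write("Content-Type: text/plain\r\n\r\n");
process.stdout.write(`You asked for: ${qs}\n`);
```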
Heroku is a few seconds:
> It only takes a few seconds to start a one-off dyno process or to scale up a web or worker process.
Lambda created Firecracker to be snappier:
> The duration of a cold start varies from under 100 ms to over 1 second.
I think App Engine is in the same ballpark as Lambda (and predated it). Fly.io uses Firecracker too:
> While Fly Machine cold starts are extremely fast, it still takes a few hundred milliseconds, so it’s still worth weighing the impact it has on performance.
but WASM is yet another order of magnitude faster and cheaper:
> Cloudflare Workers has eliminated cold starts entirely, meaning they need zero spin up time. This is the case in every location in Cloudflare's global network.
WASM is currently limited in what it can do, but if all you're doing is manipulating and serving HTML, it's fantastic at that.
It was the heyday of SPAs, light backends, and thick frontends.
“Serverless” is a great way to say “you don’t need to be a backend dev or even know anything about backend to deploy with us”
And it worked really really well.
Then people realized that they should know a thing or two about backend.
I always really hated that term.
App Engine is PaaS: You provide your app to the service in a runnable form (maybe a container image, maybe not) and they spin up a dedicated server (or slice of a server) to run it continuously.
Lambda is Serverless: You provide them a bit of code and a condition under which that code should run. They charge you only when that thing happens and the code runs. How they make that happen (deploy it to a bajillion servers? Only deploy it when it’s called?) are implementation details that are abstracted from the user/developer as long as Lambda makes sure that the code runs whenever the condition happens.
So with PaaS you have to pay even if you have 0 users, and when you scale up you have to do so by spinning up more “servers” (which may result in servers not being fully utilized). With Serverless you pay for the exact amount of compute you need, and 0 if your app is idle.
Backend returns 4xx/5xx? "The server is down." Particular data is not available in this instance and the app handles that error path poorly? "The server is down." There is no API to call for this; how do I implement "the server"?
Some people still hold the worldview that application deployment is like mod_php, where source files are YOLOed onto a live filesystem. In this worldview, ignorant of the complexities of operations, "serverless" is a perfectly fitting marketing term, much like "Autopilot", first chosen by Musk. Chef's kiss.
This sounds... not right. Honestly, this is an essential feature for allowing workloads like hot-reloading code cleanly.
I'm quite convinced the alleged security argument is bull. You can hot-reload JS (or do even wilder things like codegen) at runtime without compromising security. Additionally, you can emulate codegen or hot reload by dynamically reloading the entire Wasm runtime and preserving the memory, but the user experience will be clunky.
I don't see any technical reason why this couldn't be possible. If this were a security measure, it could be trivially bypassed.
Also, WASM bytecode is very similar conceptually to .NET IL, Java bytecode etc., things designed for JIT compilation.
I kind of dislike WASM. It's a project lacking strong direction and the will to succeed in a timely manner. First, the whole idea is conceptually unclear: its name suggests it's supposed to be "assembly for the web", a machine language for a virtual CPU, but it's actually an intermediate representation meant for compiler backends, with high-level features planned such as GC support. It's still missing basic features, like the aforementioned hot reload, non-hacky threading, native interfacing with the DOM (ideally without JavaScript), low-overhead graphics/compute API support, low-level audio access, etc. You can't run a big multimedia app in it without major compromises.
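For what it's worth, the "reload the whole runtime, preserve the memory" workaround mentioned above looks roughly like this, assuming both builds were compiled to import their memory from the host (e.g. Emscripten's IMPORTED_MEMORY); `oldBytes`/`newBytes` are hypothetical:

```js
// The heap lives in a host-owned Memory object, so it survives the swap.
const memory = new WebAssembly.Memory({ initial: 16, maximum: 256 }); // 16 pages = 1 MiB
const imports = { env: { memory } };

let { instance } = await WebAssembly.instantiate(oldBytes, imports); // v1 of the code
instance.exports.tick();

// ...the code changed on disk; re-instantiate against the same memory:
({ instance } = await WebAssembly.instantiate(newBytes, imports)); // v2 of the code
instance.exports.tick(); // sees the same heap contents v1 left behind
```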
> I'm quite convinced the alleged security argument is bull. You can hot-reload JS (or do even wilder things like codegen) at runtime without compromising security.
JIT here is referring to compiling native code at runtime and executing it. This would be a huge security compromise in the browser or in a wasm sandbox.
> I don't see any technical reason why this couldn't be possible. If this were a security measure, it could be trivially bypassed.
It's not because it's baked into the design and instruction set. You can read some more about how it works here: https://webassembly.org/docs/security/
> Also, WASM bytecode is very similar conceptually to .NET IL, Java bytecode etc., things designed for JIT compilation.
Yes, and like with Wasm, the engine is responsible for JITting. But giving the user the power to escape the runtime and emit native code and jump to it is dangerous.
Browsers definitely use a form of JITing for WASM (which is a bit unfortunate, because just as with JS JITs you might see slight "warmup stutter" when running WASM code for the first time, although this has gotten a lot better over the years).
...also, I'm pretty sure you can dynamically create a WASM blob in the browser and then dynamically instantiate and run it. Not sure if that's possible in other WASM runtimes, though, and even in the browser you'll have to reach out to JavaScript, but that's needed for accessing any sort of "web API" anyway.
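You can indeed; a tiny sketch (the eight bytes are the smallest valid module, just the magic number and version, but they could be any bytecode generated at runtime):

```js
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
const { instance } = await WebAssembly.instantiate(bytes);
// A live (empty) instance built from runtime-generated bytes. Note that the
// engine still validates and compiles them itself; you never emit native code.
console.log(instance);
```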
I (and the article) wasn't referring to that kind of JIT, but to the ability to dynamically create or modify methods or load libraries while the app is running (like `DynamicMethod` in .NET).
Afaik WASM even in the browser does not allow modifying the blob after instantiation.
The thing you are referring to puzzles me as well. I initially thought that WASM would be analogous to x86 or ARM asm and would be just another architecture emitted by the compiler. Running it in the browser would just involve a quick translation pass to the native architecture (with a usually 1-to-1 mapping to machine instructions) and some quick check that it doesn't do anything naughty. Instead it's an LLVM IR analog that needs to be fed into a full-fledged compiler backend.
I'm sure there are good technical reasons why it was designed like this, but as you mentioned, it comes with tangible costs like startup time and runtime complexity.
Since it is generally implemented as part of the JavaScript engine, it inherits a lot of what comes with that, like sandboxing and access to the same APIs. Standardizing that access is a bit of an ongoing process, but the end state is that anything that currently can only be done in JavaScript will also be possible in WASM, plus a lot that is currently hard or impossible in JavaScript. And it all might run a little faster/smoother.
That makes WASM many things. But the main thing it does is remove a lot of restrictions we've had on environments where Javascript is currently popular. Javascript is a bit of a divisive language. Some people love it, some people hate it. It goes from being the only game in town to being one of many things you can pick to do a thing.
It's been styled as a Javascript replacement, as a docker replacement, as a Java replacement, a CGI replacement (this article), etc. The short version of it is that it is all of these things. And more.
Nowadays, the few times I need to build something for the web I use Leptos, which has a much nicer DX; even though it hasn't reached 1.x yet, it feels more stable than chaining five or so tools to transpile, uglify, minify, pack, ... your JS bundle.
I'm unsure of the source for this Law, but it certainly proves correct more often than not.
"Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."
The general pattern is called the Inner-Platform Effect.
PHP was literally copy/paste code snippets into a file and then upload it to a hosting provider.
I don't build for WASM, but I'll bet the money in my pocket, to a charity of your choice, that it's harder for a beginner.
If somewhat complex apps like Figma can run almost entirely within the user's browser, then I think the vast majority of apps out there can. The server side is mostly there to sync data between different instances of the app when the user uses it from different locations.
The tooling for this is in the works but not yet mature (e.g. ElectricSQL). Once these libraries mature, I think this space will take off.
Serverless is mostly there to make money for Amazon and Azures of the world and will eventually go the way of the CGI.
WASM could succeed as well, but mostly in the user's browser. Microsoft uses it today for C#/Blazor, but that isn't the right approach, as .NET in the browser will likely never be as fast as JavaScript in the browser.
CGI empowers users and small sites. No one talks about it because you can't scale to a trillion ad impressions a second on it. Serverless functions add 10 feet to Bezos's yacht every time someone writes one.
Incidentally, I think that's why local-first didn't take off yet: it's difficult to monetize and it's almost impossible to monetize to the extent of server-based or server-less. If your application code is completely local, software producers are back to copy-protection schemes. If your data is completely local, you can migrate it to another app easily, which is good for the user but bad for the companies. It would be great to have more smaller companies embracing local-first instead of tech behemoths monopolizing resources, but I don't see an easy transition to that state of things.
Works well local-first, and syncs with the cloud as needed. Flutter space lends itself very well to making local-first apps that also play well in the cloud.
In a way - yes - it's almost like it was before the internet, but mostly because other ways to distribute and run applications have become such a hassle, partly for security reasons, but mostly for gatekeeping reasons by the "platform owners".
It is actually kind of funny to read cries about "enshittification" and praise for more web-based bullshittery on the same site, although both are clearly connected and support each other. Good material for studying false consciousness among the dev proletariat.
Might be true, but both will be more than fast enough. We develop Blazor WASM; when it comes to performance, .NET is not the issue.
Still, you'll be coding your front-end with Wasm/Rust, so get in on the Rust train :)
However, it's likely that generations who weren't making websites in the days of Matt's Script Archive don't even know about CGI, and end up with massive, complex frameworks which go out of style and usability for doing simple tasks.
I've got CGI scripts that are over 20 years old which run on modern servers and browsers just as they did during the dot-com boom.
Figma and others work because they're mostly client-side applications. But I couldn't, for example, do that with a supply-chain application. Or a business-monitoring application. Or a ticketing system.
It depends on what you're actually building.
For the business applications I build, SSR (without any JS in the stack, just Go or Rust or Zig) is the future.
It saves resources, which in turn saves money; it is way more reliable (again: money) and less complex (again: money) than syncing state all the time and having frontend state diverge from the actual (backend) state.
Business applications don't care about client side resource utilisation. That resource has already been allocated and spent, and it's not like their users can decide to walk away because their app takes an extra 250ms to render.
Client-side compute is the real money saver. This means CSR/SPA/PWA/client-side state and things like WASM DuckDB and perspective over anything long-lived or computationally expensive on the backend.
Recently I wrote an .e57 file uploader for quato.xyz: choose a local file, parse its binary headers and embedded XML, decide if it has embedded JPG panoramas in it, pull some out to give a preview, and later convert them and upload to "the cloud".
Why do that? If you just want a panorama web tour, you only need 1 GB of a typically 50 GB file... point clouds are large, JPGs less so!
I was kind of surprised that was doable in browser, tbh.
We save annotations and 3D linework as JSON to a backend DB, but I am looking for an append-only JSON archive format on cloud storage, which I think would be a simpler solution, especially as we have some people self-hosting. Then the data will all be on their intranet or our big-name cloud provider... they will just download and run the "app" in the browser :]
This doesn't follow. If Figma has the best of the best developers then most businesses might not be able to write just as complex apps.
C++ is a good example of a language that requires high programming skills to be usable at all. This is one of the reasons PHP became popular.
Unfortunately I got a bit of burnout after working on it for some years, but I confess I have a more optimized and more to-the-point version of this. Also, having to work with Chrome for this, with all its complexity, is a bit too much.
So even though it is a lot of work, nowadays I think it is better to start from scratch and implement the features slowly.
The problem is: Figma and Linear are not local-first in the way local-first proponents explain local-first. Both of them require a centralized server, run by those companies, for synchronization. This is not what people mean when they talk about "local-first" being the future; they mean what Martin Kleppmann defined it as, which is no specialized synchronization software required.
I would guess WASM is a big building block of the future of apps you imagine. Figma is a good example.
Sure, it allowed a large ecosystem, but holy crap is the whole JVM interface to the external world a clunky mess. For 20+ years I have groaned when encountering anything JVM related.
Comparing the packaging and ecosystem of Rust to that of Python, or shudder C++, shows that reinvention, with lessons learned in prior decades, can be a very very good thing.
You can't easily publish a library in WASM and link it into another application later. But you can publish it as C++ source (say) and compile it into a C++ application, and build the whole thing as WASM.
What are the scenarios where you really really want libraries in WASM format?
Not only the JVM, but also the CLR, BEAM, P-Code, M-Code, and every other bytecode format since UNCOL came to be in 1958. But let's not forget about the coolness of selling WASM instead.
The other point is that WASM is way more open than any of the mentioned predecessors were. They were mostly proprietary crap by vendors who didn't give a shit (flash: security, Microsoft: other platforms) so inevitably someone else would throw their weight around (Apple) to kill them, and with good reason. WASM is part of the browser, so as a vendor you're actually in control regarding security and other things, and are not at the mercy of some lazy entity who doesn't give a damn because they think their product is irreplaceable.
Because of the sandboxed nature of WASM, it could technically even run outside an operating system, or in ring 0, bypassing a lot of OS overhead.
Compiling to WASM makes a whole range of deployment problems a lot simpler for the user and gives a lot of room for the hosting environment to do optimizations (maybe even custom hardware to make WASM run faster).
Massive scaling with minimal resources is certainly one important enabler. If you were, e.g., to re-architect Wikipedia with the knowledge and hardware of today, how would you do it with Wasm (on both desktop and mobile)? How about a massive multiplayer game, etc.?
On the other hand, you have the constraints and costs of current commercial/business-model realities and legacy patterns that create a high bar for any innovation to flourish. But high does not mean infinitely high.
I hate to be the person mentioning AI on every HN thread, but it's a good example of the long stagnation and then torrential change that is the hallmark of how online connected computing adoption evolves: e.g., we could have had online numerically very intensive apps and APIs a long time ago already (LLMs are not the only useful algorithm invented by humankind). But we didn't. It takes engineering a stampede to move the lazy (cash) cows to new grassland.
So it does feel that at some point starting with a fresh canvas might make sense (as in, substantially expand what is possible). When the cruft accumulates sometimes it collapses under its own weight.
I hate WASM-heavy websites, as they often have a bloat of JavaScript and the site is very slow, especially during scrolling and zooming, due to abuse of event listeners and piss-poor coding discipline.
I sometimes kinda miss the server-rendered index.php.
If you're generating bindings for some legacy disaster and shipping it to clients as a big WASM blob you're going to hell.
Currently it is a huge PITA to have to update and redeploy your AWS Lambda apps whenever a Node.js or Python version is deprecated. Of course, usually the old code "just works" in the new runtime version, but I don't want to have to worry about it every few years. I think applications should work forever if you want them to, and WASM combined with serverless like Lambda will provide the right kind of platform for that.
>Just in Time (JIT) compilation is not possible as dynamic Wasm code generation is not allowed for security reasons.
I don't follow -- is the Wasm runtime VM forbidden from JITing? (How could such a prohibition even be specified?) Assuming this is the case, I'm surprised that this is considered a security threat, given that TTBOMK JVMs have done this for decades, I think mostly without security issues? (Happy to be corrected, but I haven't heard of any.)
As far as the model goes, serverless is not a different model; it is still a flavor of the CGI concept. The underlying tech is different, but not by much. It is only serverless for you as a customer. Technically speaking, it runs on servers, in micro-VMs.
Those are orthogonal matters, and even if tech such as the middleware mentioned gets some wind, the execution model is still the same and is not new.
> If we go back to thinking about our Application Server models; this allows us to have a fresh process but without paying the startup costs of a new process. Essentially giving us CGI without the downsides of CGI. Or in more recent terms, serverless without cold starts. This is how Wasm is the new CGI.
^ It's not a frivolous claim.
> Wasm improves performance, makes process level security much easier, and lowers the cost of building and executing serverless functions. It can run almost any language and with module linking and interface types it lowers the latency between functions incredibly.
^ Not unreasonable.
I don't agree that it's necessarily totally "game changing", but if you read this article and you get to the end and you don't agree with:
> When you change the constraints in a system you enable things that were impossible before.
Then I'm left scratching my head what it was you actually read, or what the heck you're talking about.
> Serverless is mostly there to make money for Amazon and Azures of the world and will eventually go the way of the CGI.
There's... just no possible future in which AWS and Azure just go away and stop selling something that is making them money when a new technology comes along and makes it easier, safer, and cheaper to do.
> I kind of like this variety of headline for it's ability to stimulate discussion but it's also nonsense. CGI can be any type of code responding to an individual web request, represented as a set of parameters. It has basically nothing to do with wasm
*shakes head sadly...*
...well, time will tell, but for alllll the naysayers, WASM is here to stay and more and more people are using it for more and more things.
Good? Bad? Dunno. ...but it certainly isn't some pointless niche tech that no one cares about is about to disappear.
CGI enabled a lot of things. WASM does too. The comparison isn't totally outrageous. It'll be fun to see where it ends up. :)
It's amazing how just one sentence can be so utterly wrong.
WSGI actually predates Rack by several years: the first WSGI spec was published in 2003 [0]; Rack was split from Rails in 2007 [1].
Flask is not an "application server"; it is one of the web frameworks that implement the WSGI interface. Another popular framework that implements it is Django. Flask was not the first WSGI implementation, so I'm not sure why the author decided to mention Flask specifically. It's probably one of the most popular WSGI frameworks, but there is nothing special about it; it hasn't introduced any new concepts or a new paradigm or anything like that.
I'm not sure if the rest of the article is even worth reading if the author can't even get the basic facts right but for some reason feels the need to make up total nonsense in their place.
That doesn't mean there weren't good technical reasons, but that's not necessarily the driver.
For example, SSL is obviously good, but requiring SSL also raises the cost of making a new site above zero, greatly reducing search spam (a problem that otherwise costs billions).
WASM is basically the new Microsoft Common Language Runtime, or the new JVM etc.
But OPEN!
I let ChatGPT do the tedious work; have a look at a minimal example:
https://chatgpt.com/share/6707c2f3-5840-8008-96eb-e5002e2241...
Before WASM you could already compile code from other languages into JavaScript. And have the same benefits as you have with WASM.
The only benefit WASM brings is somewhat faster execution, like twice the speed. Most applications don't need that, and plain JavaScript reaches it about two years later because computers keep getting faster.
And you pay dearly for being those two years ahead in execution time: WASM is much more cumbersome to handle than plain JS when it comes to deployment, execution, and debugging.
In IT we see it over and over again that saving developer time is more important than saving CPU cycles. So I think choosing WASM over plain JS is a net negative.
Sure, native JS is easier still. But there is a huge wealth of code already written in languages other than JS. If you want a web app that needs this code, you'll develop it many times faster by compiling the pre-existing code to WASM than by manually rewriting it in JS, and the experience will be significantly better than compiling that code to JS.
If you are referring to asm.js you must be joking. asm.js was basically a proof of concept and is worse in every way compared to WASM.
Like parsing time overhead alone makes it a non-option for most large applications.
You seem to imply you should just do it in plain JS instead for "deployment, execution and debugging" benefits. Imagine if you were free to use those Python ML libs from any language of your choice; that alone is enough of an argument. No one is going to reimplement them in JS (or any other environment) unless there is a huge ecosystem movement around it.
Look into the history of WASM. They did try compiling everything into JS with asm.js, but then sensibly decided to do things properly. I don't know why anyone would object to proper engineering.
asm.js (the spiritual precursor to WASM) worked pretty much the same, and an awful lot of languages were compiled to it.
WASM does provide a more predictable compilation target to be sure, but I don't think it actually opens any new possibilities re what languages can be compiled.
For some of us it's much easier than dealing with Javascript though (for instance debugging C/C++ in Visual Studio is much nicer than debugging JS in Chrome - and that's possible by simply building for a native target, and then just cross-compile to WASM - but even the WASM debugging situation has improved dramatically with https://marketplace.visualstudio.com/items?itemName=ms-vscod...)
What WASM actually brings is predictable performance.
If you're a JS wizard, you can shuffle code around, using obscure tricks to make the current browser run it really fast. The problem is: JS wizards are rare, and tomorrow's browser might actually run the same code much slower if some particular optimization changed.
WASM performance is pretty obvious and won't change significantly across versions. And you don't need to be a wizard; you just need to know C and write good-enough code, and plenty of people can do that. Clang will do the rest.
I agree that using WASM instead of JS without reason is probably not very wise. But people will abuse everything, and sometimes it works out, so who knows... The whole modern web was born as an abuse of a simple language made to blink text.