That could mean we lose hackability and the ability to write extensions or even scrape the web without a BigCo webcrawler's level of infra investment. Is everything going to turn into an opaque single page app? Technically, webassembly is really cool, but I worry about where the browser is headed.
HTML and CSS standards were always at least 10 years too late. If you wanted something that looked modern, you always had to use browser-specific extensions, plugins or, at best, "beta" features.
Seriously, I know that nowadays many devs have started straight with web technologies and don't have a lot of experience beyond that, but if that's your case, do yourself a favor and take a quick look at a proper UI toolkit like Qt. Would it blow your mind if I told you that you can create a complex treeview without having to use third-party libraries or reinventing half the wheel yourself? Crazy, I know.
If anything, this "historic separation" and the people who still hold onto it might be one of the reasons the modern web is such a messy patchwork of technologies. If web engineers had accepted that an open-source Flash was actually the endgame and not a problem, we might have saved ourselves some time and some trouble.
To be clear, I'm not saying that it's a bad idea per se; I personally believe that the web is way too complicated as it is. I just feel like it's similar to arguing that "literally" shouldn't be used to mean "extremely" in modern English. Sure, I get your point, but I don't think it's a battle you can win.
For example I’m a huge fan of the declarative UI trend going on in the web community. Now, granted, Microsoft came out with MVVM and XAML way before React or Angular existed, but native UI libraries are so historically imperative that declarative UI wasn’t making much of a dent in the native sphere.
But even though these older native frameworks are imperative, they are much more powerful than anything like React out of the box. They’re much more comparable to component frameworks like Bootstrap, SemanticUI, AntD, etc. But even still, those libraries don’t come anywhere close to the power of Qt or UIKit in iOS. If you’ve used both, I can’t imagine you saying that it’s even close.
But it’s precisely because of the tension between the web as a document platform vs. the web as an application platform. The web has never doubled down on being an application platform; it always keeps its document roots in the background in some way. Personally, I’m torn. I love the transparency and inspectability of the web. Literally yesterday I opened up the dev tools on Twitter and learned about its timeline architecture just from inspecting the requests. This is the biggest thing I miss when I’m working on a native app.
But WebAssembly may be the web finally doubling down on being an application platform. Whether that’s good or bad is irrelevant at this point. It’s the direction it’s headed in.
No, not any more than C#'s giant standard library blows my mind.
Keeping the web small is a strategic decision -- it may look like chaos, but it's really just that we realized that embracing 3rd-party libraries is a better architectural decision than polluting the core spec with features that will be outdated in a few years.
Qt doesn't have these problems because Qt doesn't have to be infinitely backwards-compatible. If Qt makes a mistake, it can fix it in the next version. The web can't do that, so we have to be more careful. If anything, my biggest criticism of the web is that we move too quickly and stick too many "modern" features in. I would have been happy to drop classes and arrow functions from the JS spec, and I would have been happy to drop `sticky` from the CSS spec. As an engineer, I absolutely don't want a hamburger menu as a core HTML component.
I've had people argue to me that HTML needs more 2-way data binding and element types -- they want the ability to tie a list to a JSON object or something, instead of needing to render out separate `<li>` elements. These people are missing the point.
HTML is your user-presented state. We have a few elements that break this convention, but for the most part we want your final HTML to be static and human-readable. It's not for you, it's for your users. And 2-way data bindings would get in the way of that.
For laying things out and creating complicated lists, we have third-party libraries/frameworks -- and honestly, they work fine. If you think the web is overcomplicated or has too many frameworks right now, just wait until we start stuffing a new first-party component into it every time any design trend becomes popular.
And even if we did add all of that nonsense, people would just ignore those components anyway. No manager I've ever worked with has ever said to me, "you know, pure HTML select inputs look great and I don't mind them rendering differently in different browsers." I would be reimplementing them in JS anyway just to get the styling consistent between IE and Firefox. The good HTML components I can actually use are the low-level ones like simple input fields -- because they're small and simple enough that they can be styled and incorporated into larger custom-built solutions.
Heck, I've had managers complain to me about scroll-bar styling before. But sure, they'd totally be happy with a giant, pre-built, monolithic tree-view component that's using mid-2000s styling.
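To be fair to the point above about `<li>` elements: rendering a JSON array out to static, human-readable markup really is a few lines of plain JavaScript, which is part of why first-party two-way binding feels unnecessary. A minimal sketch (the function names are illustrative, not from any particular library):

```javascript
// Escape user data so it can't inject markup into the document.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// Render an array of items to a <ul> of <li> elements --
// the output is exactly the static HTML the user's browser sees.
function renderList(items) {
  const lis = items.map((item) => `  <li>${escapeHtml(item)}</li>`);
  return `<ul>\n${lis.join("\n")}\n</ul>`;
}

console.log(renderList(["apples", "3 < 4", "oranges"]));
```

Frameworks add change-tracking and re-rendering on top, but the final artifact is still this kind of plain markup.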
Qt is a third-party library that wraps the underlying OS primitives.
A really fancy tree view in JS is 10KB, maybe 15KB, given desktop+mobile support, proper accessibility, and good support for theming.
And that is if you are getting really fancy. A simple one is just some DIVs and a CSS animation for opening and closing.
That is assuming you even bother with JS, since you can do a treeview in the browser with CSS alone[1].
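To illustrate how few wheels need reinventing: the native `<details>`/`<summary>` elements already give you collapsible tree nodes with zero script at runtime. A sketch that just generates that markup from nested data (the data shape here is invented for the example):

```javascript
// Generate a collapsible tree using native <details>/<summary>.
// No runtime JS is needed to open and close nodes -- the browser
// handles that itself; this helper only produces the markup.
function renderTree(node) {
  if (!node.children || node.children.length === 0) {
    return `<div class="leaf">${node.label}</div>`;
  }
  const children = node.children.map(renderTree).join("");
  return `<details><summary>${node.label}</summary>${children}</details>`;
}

const tree = {
  label: "src",
  children: [
    { label: "index.html" },
    { label: "css", children: [{ label: "main.css" }] },
  ],
};

console.log(renderTree(tree));
```

Styling the indentation and the disclosure arrows is then a few lines of CSS.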
UI toolkits exist for the web as well, and they tend to be rather small, browsers ship with a lot of UI primitives after all!
The UI portion of code for my B2B web-app with all custom UI elements and with mobile+desktop support is around 100KB unminified.
Now, the libraries I pull in to do the real-time database streaming and authentication to my back end? Those dwarf the UI code (they are hundreds of KB each), but the native versions of those libraries are far larger (10x)[2] than the JavaScript equivalents.
> Would it blow your mind if I told you that you can create a complex treeview without having to use third-party libraries or reinventing half the wheel yourself?
The browser comes with a lot of wheels included; very few are getting re-invented unless someone wants to indulge themselves.
> To be clear, I'm not saying that it's a bad idea per se; I personally believe that the web is way too complicated as it is
Get rid of the weird things that happen to make advertising possible and most websites are pretty simple. Even complex web apps are pretty easy to understand nowadays. You'll have to learn whatever data-binding technique the dev team decided to use, but that's equally the case for modern desktop applications.
[1]https://codepen.io/kobusvanwykk/pen/NqXVNQ
[2]https://stackoverflow.com/questions/41750553/what-is-the-est...
I thought HTML was developed with HATEOAS in mind, so I don't see the separation as a technical limitation but as a powerful abstraction. Flash was what people wanted, but HTML was what a good web needed (imagine the internet with only Flash/WebAsm from the beginning).
We have globally networked local Networks of locally networkable Networks and most of us naively think that this Global Network of Local Networks networking Local Networks actually functions as a universally internetworking Internetwork of universally internetworkable Internetworks internetworking internetworkable Internetworks internetworking internetworkable Networks[1], because we trust the people who've labeled both IPv4 and IPv6 as the "Internet Protocol" despite it not in any way, shape or form having the ability to perform what the people who came up with the concept of internetworking (no, NOT the ARPA people) actually meant by the term.[2]
As the simplest proof of this fact: we've almost reached 2020 and all of our 'solutions' for multihoming still don't ACTUALLY work in a not-pants-on-head-dysfunctional way, despite the fact that we've faced that issue since, to quote Wikipedia, "In 1972, Tinker Air Force Base wanted connections to two different IMPs for redundancy. ARPANET designers realized that they couldn't support this feature because host addresses were the addresses of the IMP port number the host was connected to (borrowing from telephony)."
Our 'Inter'net at the moment still resembles just a global network, aka just plain old phone calls with extra steps. Which explains both the mess we call the WWW and the global dominance of large telcos. Can't give an AS number and BGP access to just anyone[4], as we've known ever since L0pht Heavy Industries testified before the US Senate, over 2 decades ago, back in 1998.[3] NOTHING has fundamentally changed about this since. We've just written bigger and bigger macros, ignorant of why we keep having to fight windmills & reinvent the wheel over and over again.
[1] Yes, I did intentionally phrase that in & as a reference to a certain Cisco exam question.
[2] See https://news.ycombinator.com/item?id=21108796, but also http://ict-arcfire.eu/index.php/rina/ & http://ict-arcfire.eu/wp-content/uploads/2018/06/OCARINA-ind...
[3] https://youtu.be/VVJldn_MmMY
[4] Of course, I don't advocate that we should. No. I do however argue that each of these situations shouldn't exist in the first place.
The kind of jerks who liked adding JavaScript to block right-clicking, block "Paste" into password fields, etc., are going to absolutely love using a browser-in-a-browser product to deliver their "website".
For the inevitable replies: Yes-- you can already do this with minified Javascript. WASM, being targeted for performance, is just going to make this kind of asshattery faster.
I can easily imagine a "platform" WASM module which acts as a runtime for other WASM modules built by "app" developers. This Platform module can be easily cached by FAANG or other big commercial interests, similar to AMP by Google (maybe even pre-bundled into the browser?). The only way to discover, download, and run these other apps is through this curated Platform module. All of this could be rendered through something like the Canvas API instead of the DOM, which again is managed at a low level by the Platform, which in turn exposes higher-level APIs for the Apps. The Platform also has built-in support for ad networks, tracking, etc., which cannot be disabled without disabling the whole ecosystem of apps. And of course, like any good play/app store, it is completely incompatible with anything else, leading to new levels of Balkanization of the web.
I hope that this isn't the case, and I'm completely wrong about this. But I just can't shake the feeling that as a community, we are championing WebAssembly as purely a performance win, without considering how big commercial interests might seek to exploit this new technology.
Edit: typo with AMP
Every webpage pays for its traffic in some way, and everybody tries to save web traffic as much as possible (optimizing images and videos, minifying JS, CSS, ...). It makes no sense to expect that websites would turn the opposite way just for fun.
I would be very glad if you could emulate a computer in a browser, so you could e.g. use VirtualBox or VMware comfortably in your browser. Still, it would be sandboxed (it could not turn off your computer, clear your hard drive, etc.).
I think the web is the most open, independent, secure and versatile platform today. And I hope it will become even better and more powerful in the future.
Browsers are large and complicated software packages for a reason. Best of luck to whoever tries to compete with them by building a browser in a browser.
To be honest, it's a unique property of the web that you're able to easily read the source; users of other platforms such as Qt aren't expected to read the source. The web is simply moving in the direction of all other UI platforms, so I don't quite understand the outrage. Yes, it sucks, but it'll make apps faster than the Electron mess we have now.
What won't be great are sites that do this to enforce ad viewing. But if it becomes common enough, you can expect ad-blockers to act on the frame-buffer of that virtual browser.
Anyway, the most annoying will be the stupid people who do it just because "Even Google does it!", "It's webscale!", or whatever people will be saying 5 years from now.
Sounds like a slippery slope to me. What makes you think users will be satisfied with load times for something like that, which also probably will have a frustrating-to-use interface?
This is what people should be worried about, not replacing JS. It became a kind of popular hot take for a while to say that separation of concerns was a mistake, that that's not how apps get built in the real world, and that what we really need is a way to encapsulate all of our DOM and CSS in JS. We need to start pushing back against that idea and keep emphasizing that separation of concerns is really important for end users.
HTML is the interface you write to. It's not a document layout language for authoring, it is a render target that is understandable and manipulable by the end-user. It's a fantastic idea that enables a lot of user-land innovation, and it's one of the biggest reasons why the web is still a relatively good platform to interact with as an end-user.
A lot of architecture decisions on the web haven't aged well, but separating content, styling, and logic was a fantastic architecture decision that is still as relevant today as it ever was. And the rise of the web as an application platform has only made it more important, not less.
React.js has been the most significant JavaScript library in the past decade. Over 50% of JS developers on the web are writing HTML inside JavaScript. In recent years, CSS-in-JS libraries like Styled-components are becoming standard.
We’re killing HTML templates and writing JSX. We’re killing CSS classes and building styled-components. It’s all compiling to HTML, CSS and JS in the end. We just get to do our jobs faster. Why people have a problem with that is beyond me.
Separation of concerns happens because the render target is an overcomplicated mess. So we separate the good parts out of the mess as much as possible, but the mess is still there.
I'm with you on separation of concerns, but HTML is not for rendering; it's for structuring and adding semantics to your documents. (CSS is for presentation control and JavaScript is for behavior.)
It is only a matter of time until we see desktop applications hosted in a wasm-enabled browser.
The core dynamic of the web, more than document sharing, has been decentralization and removing lock-in and gatekeepers around content distributed through the Internet. As long as we continue to feel that is true (which becomes less believable as browser diversity declines) I feel we should be excited every time the web expands the scope of what can be done with it. It seems likely that 100 years from now, WASM will still exist, and it will allow people to create and distribute software without gatekeepers. If it never existed, or nothing like it did, it seems like the future would have been markedly worse for creators.
Each of these enabling browser technologies that gets into the standards should be heralded as having potentially massive positive counterfactual effects on our descendants and future selves (WebXR being another notable example, landing soon).
I'm not trying to be overly cynical here; all of these topics remind me of high school, when I'd download mildly sketchy executables, decompile them, and try to figure out whether or not they were safe to run. I would love to poke around with reverse engineering wasm, but I'd rather not have to do that every time I visit a new website.
(Trackers using these kinds of tactics was only a matter of time in the arms race between trackers and blockers...)
There's some incentive to use HTML since it's indexable, but that doesn't seem to be affecting mobile -- although maybe mobile is in large part successful because of the web? Like the fact that you can post a tweet or a Facebook post or a YouTube video everywhere. I'm not sure HTML is required for that, and if it's not, then it seems at least possible HTML will die.
Fortunately advertisers are renowned for their self-restraint, and can be trusted not to abuse this capability to simply shove in as many ads as they possibly can, wherever they can, with masses of unblockable trackers underneath it all.
What, exactly, is "content"? Are binary formats content? (Audio, Video, etc?) Are textual comments and notes on other text content, or is only the work being commented on content? Are languages that rely on unicode encodings content? Must content be static or can it change frequently, even second by second? (Stock quotes, for example). Must things be easily scrapeable to be content?
Furthermore, must we lock down the structure of content from now until the end of time as HTML or ASCII, or should future generations be allowed to define what they mean by content?
There may be merit in a separation of content from browser, but we should not artificially limit ourselves to content that only fits into our legacy notion of what a browser is. If the browser must become a general purpose OS that can host whatever content people can dream up, I'm all for it.
I think JavaScript and WebAssembly are not such bad programming languages, but I do think that scripts in web pages are overused (regardless of what programming language is used, which isn't the issue).
Opaque single-page apps are also bad for URLs. Also, I often use curl when I want to download a file (or in one case, to stream audio), so I do not want to have to deal with a web browser to do that.
Also, executing JavaScript when scraping isn't that difficult, depending on what you're using to scrape. Node.js has Puppeteer: https://github.com/puppeteer/puppeteer
I’d love to see a XAML implementation for wasm, because it’s cleaner than html/css.
Looking forward to .NET Core running in wasm.
I could see hackability actually improving since data and API access could be well-defined.
I could see Publishers adopting wasm so their websites are more under their control.
But most of all, software development getting out of the scripting business, which has such a wide range of misuse that it becomes expensive and unwieldy to maintain.
If your "web"site can execute arbitrary code, and/or violates HTML guidelines (required for web crawlers or accessibility software to work, or for browsers to tweak the layout) and/or requires JavaScript / Flash / Java to run properly, then it might be part of the Internet, but it's certainly not part of the World Wide Web!
The WWW is a network built on a set of protocols. HTML is a markup language that allows for (indeed, that is designed to facilitate) embedding and linking binary and executable content, including JavaScript, Flash, Java, audio, and video, as well as marking up text. HTML is one, but not the only, content type which can be distributed across the WWW. This is fundamental to the design of HTTP and the intent of HTML and the web itself.
By your rationale, no site created after the addition of the <SCRIPT>, <APPLET> or <OBJECT> tags in HTML could be considered a part of the WWW, which would exclude the entirety of the web after HTML 3.2. This is "not even wrong" levels of ridiculousness.
Fast, safe, and portable semantics:
* Fast: executes with near native code performance, taking advantage of capabilities common to all contemporary hardware.
* Safe: code is validated and executes in a memory-safe [2], sandboxed environment preventing data corruption or security breaches.
* Well-defined: fully and precisely defines valid programs and their behavior in a way that is easy to reason about informally and formally.
* Hardware-independent: can be compiled on all modern architectures, desktop or mobile devices and embedded systems alike.
* Language-independent: does not privilege any particular language, programming model, or object model.
* Platform-independent: can be embedded in browsers, run as a stand-alone VM, or integrated in other environments.
* Open: programs can interoperate with their environment in a simple and universal manner.
Efficient and portable representation:
* Compact: has a binary format that is fast to transmit by being smaller than typical text or native code formats.
* Modular: programs can be split up in smaller parts that can be transmitted, cached, and consumed separately.
* Efficient: can be decoded, validated, and compiled in a fast single pass, equally with either just-in-time (JIT) or ahead-of-time (AOT) compilation.
* Streamable: allows decoding, validation, and compilation to begin as soon as possible, before all data has been seen.
* Parallelizable: allows decoding, validation, and compilation to be split into many independent parallel tasks.
* Portable: makes no architectural assumptions that are not broadly supported across modern hardware.
If WebAssembly is truly able to meet these goals, together with good debugging and tools, it will become the universal way to represent computation across devices and platforms.
Really cool.
About 5 years ago I tried to use an optimized JavaScript file to encode some audio in the browser before uploading; it was painfully slow and very hardware-intensive. But vmsg [1], using wasm, distributes the LAME encoder, allowing for in-browser MP3 encoding.
And secondly, Cloudflare allows their workers to be written in WASM [2], allowing more processor-intensive tasks (like resizing an image) to run completely on the edge.
I see it as eventually fulfilling the portability goals that Java applets in the browser used to have, but instead of one company, you have many companies agreeing on the implementation.
[1] https://github.com/Kagami/vmsg
[2] https://blog.cloudflare.com/webassembly-on-cloudflare-worker...
For image uploading, resizing is best done in the browser, so you are not sending large files over a limited mobile connection. The code is ugly (it also needs to fix image rotation by reading EXIF information because browsers are broken, and if supporting IE11 then there is some other voodoo).
I've read articles here and there and seen some in-person demos, and I honestly struggle to understand what it is / how it works relative to the current state of JavaScript frameworks.
Often I'm approaching it from a JavaScript framework (React/Vue/Angular) angle, as I'm a bit of a noob to the industry and that's generally my day job working on web applications... And while I read about WebAssembly, I wonder about state management; someone tells me "oh, you still need something to do that" and I'm a bit lost on how that would work / why I would or wouldn't use one of those frameworks anyway, etc. So many examples I've seen are one-off simplified widgets (and for good reason, I like those for demos), but I'm not sure I've seen them as an application / understand how that would work.
Obviously I'm missing a lot here, and I feel like on HN I'm often talking to folks who aren't so much front-end web devs -- folks who are excited about the efficiencies and such, but not about how this plays out in a practical sense relative to the state of web applications as they are now.
https://hacks.mozilla.org/2017/02/a-cartoon-intro-to-webasse...
>> Wait, so what is WebAssembly?
>> WebAssembly is a way of taking code written in programming languages other than JavaScript and running that code in the browser. So when people say that WebAssembly is fast, what they are comparing it to is JavaScript.
A better definition:
> WASM is a binary instruction format for a stack-based virtual machine[1]
This goes into some details that could answer the question raised in this thread:
> WebAssembly modules will be able to call into and out of the JavaScript context and access browser functionality through the same Web APIs accessible from JavaScript.
More useful details:
>> Engineers from the four major browser vendors have risen to the challenge and collaboratively designed a portable low-level bytecode called WebAssembly. It offers compact representation, efficient validation and compilation, and safe low to no-overhead execution. Rather than committing to a specific programming model, WebAssembly is an abstraction over modern hardware, making it language-, hardware-, and platform-independent, with use cases beyond just the Web. WebAssembly has been designed with a formal semantics from the start. [2]
More details from Wikipedia:
>> Wasm does not replace JavaScript; in order to use Wasm in browsers, users may use Emscripten SDK to compile C++ (or any other LLVM-supported language such as D or Rust) source code into a binary file which runs in the same sandbox as regular JavaScript code. ... There is no direct Document Object Model (DOM) access; however, it is possible to create proxy functions for this. [3]
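As a concrete sketch of that JS boundary: the bytes below are a hand-assembled wasm module exporting an `add(a, b)` function (two i32 params, i32 result), instantiated directly from JavaScript. In a real page you'd fetch a `.wasm` file and use `WebAssembly.instantiateStreaming`; the inline bytes just keep the example self-contained.

```javascript
// A minimal wasm module, written out byte by byte.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// Compile and instantiate; exports become ordinary JS functions.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3)); // → 5
```

Calls the other way (wasm importing JS functions) go through an import object passed to the `Instance` constructor; that's how proxying DOM access works in practice.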
I hope this helps.
It's the closing keynote talk at this year's PyCon India. From the description:
> In this talk, I live-code a simple stack machine and turn it into an interpreter capable of running WebAssembly. I then use that to play a game written in Rust.
https://codelabs.developers.google.com/codelabs/web-assembly...
So far, most of the good demos of it seem to be for gaming. But I think once b2b type companies figure out what it's capable of, a lot will change.
wasm-bindgen docs have some examples to get you started.
WASM requires a lot more knowhow to reverse engineer; I've had to do it a few times for CTFs and some blockchain-related tools that use them, and it's a lot trickier compared to JS.
You have to be willing to edit it, start renaming variables to describe what they hold, rename them again if they get something unexpected assigned, rename functions once you have a good idea of what they're trying to do, and repeat until everything has a name.
But in practice, WASM files will be exported from one tool automatically and imported into another tool automatically. It is similar to SVG or PDF: you know what to do with it, but you don't care what is inside (what each character inside means).
From my (biased) perspective, one of the greatest potentials is for new languages with improved features, syntax, and semantics. These languages can be just as effective as JavaScript performance-wise at accomplishing the same goals, without having to transpile to JavaScript, which comes with its own real costs. The interest is there, given the 100+ languages that have come out as a 'replacement' for JavaScript since its inception. We'll see if language theory can usher in a better language for the web built on WebAssembly.
The other great boon is simply having more performant services or utilities that can be written in any language. If you like C syntax, then for everything non-UI you can go ahead and write C, no problem, and run it on the client's machine. Maybe we'll even get to run fully concurrent client-side apps, at which point Java would be a wonderful language for jumping into those capabilities, given it has a robust handle on concurrency from a language standpoint.
WebAssembly In Action (2017) https://www.youtube.com/watch?v=DKHuEkmsx3M
> not sure how this plays out in a practical sense / relative to the state of web applications as they are now.
How I see it, either there will be one big framework (or language) that has all the things and compiles to WebAssembly, or there will be JavaScript libraries that ship compiled WebAssembly which you can interface with.
We are still a long way from having a React like framework which compiles to WebAssembly let alone a thriving community for it.
Basically you're supposed to use it for some performance-critical parts in your application. You're not supposed to use it instead of React.
Wasm proper isn't really a technology for noobs. This is like asking "Does anyone have an introduction to x86_64 machine code, the ELF linker spec and the SysV ABI for noobs?". It's sort of the wrong part of the problem. What you want in that case is "C programming on linux for noobs".
So try googling for "emscripten tutorial" or (if you swing closer to rust) "wasm-bindgen tutorial". There are lots of other languages with wasm targets too, but quality tends to vary a lot.
But really IMHO the reason to use wasm isn't performance. Javascript interpreters are REALLY good these days for routine code. You use wasm when you need to target a big codebase in some other language to a browser, either because it's already written or because the problem area for the code isn't well suited to JS.
It's less an entire overhaul of web applications than a powerful tool to be deployed for things we don't do in the browser now, because there is no way to / it would be a bear to do them in JavaScript due to... JavaScript being JavaScript ;)
Of course the amount of WASM to traditional JS or anything else could vary from application to application.
Granted we're predicting the future here so obviously it could be off.
Going to need some clarification on how that's the case.
No, we can't; it's a valid criticism, and it's not going to go away. Minified JS is bad; WebAssembly is worse.
This to me makes the browser vastly preferable to native apps. I didn't realize that the desktop app I use to easily translate languages[0] sends every keystroke to Google Analytics until I bothered installing a proxy. Meanwhile this analysis is just an Opt-Cmd-I away in the browser.
[0]: https://apps.apple.com/us/app/translate-tab/id458887729
(Disclaimer: I've never looked at Google's analytics .js files and that may not be possible for some technical reason unknown to me)
That's not true. There's nothing "free and open" about the tracking code embedded in every modern site, or the javascript blobs you get when you visit Google or Facebook. Minified/obfuscated Javascript is no different from a binary blob, except that it's much less efficient. Your chances of reverse-engineering one of those is about the same as reverse-engineering a wasm blob. Just because one is technically "human-readable" plaintext and the other binary doesn't make a difference, since you can't actually read either of them.
Just wait 5 more years until 80% of the web switches to React / Vue / TheNewHypeSPAFramework, and, with or without WASM, you will be unable to browse "js off".
The blame here is not on WASM but on the abuse of client-side rendering and "everything as an app" when most pages are just barely interactive documents.
The Web succeeded where Flash / ActiveX / JavaApplet / Sliverlight failed because:
- it was open
- it was document oriented.
And we tend to forget that a bit too easily.
Minified Facebook or Google trackers were never libre or meant to be easily reversed. Web apps like Google Drive aren't free either just because you can run them in a browser on Linux. You aren't supposed to (legally?) be able to modify them, nor would you be able to in many cases even if you tried. It's just as proprietary as Microsoft Office. There are proprietary tools that do even more advanced obfuscation on top of minification (adding red-herring code paths that do nothing), which some JavaScript malware vendors use to protect their implementations.
What we really want is libre JavaScript/WASM, where vendors include permissive licenses and source maps or links to download the high-level source. That's free software. The "free and open" web never really existed de jure; publishers' laziness to obfuscate created a de facto free and open web. Libreness depends on access to high-level source, not reversibility, or else Photoshop is free too because you can attach a debugger to it.
WASM just exposes the truth that the web was an app store all along.
You can convert the binary files to/from the text (Lisp-like) format with readily available tools.
Also, the binary format is easily parsed -- I made a parser with Kaitai Struct in about an afternoon.
If all websites made their source available as well as distributing the binary, there wouldn't be a problem.
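To make the "easily parsed" point concrete, here is a minimal sketch (not production code) of the top-level wasm binary layout: a fixed 8-byte preamble ("\0asm" plus a little-endian version), followed by sections that are each a one-byte id plus a LEB128-encoded size.

```javascript
// Minimal sketch of a wasm binary walker, assuming only the documented
// preamble and section framing; real parsers handle much more.

function parseWasmPreamble(bytes) {
  const magic = [0x00, 0x61, 0x73, 0x6d]; // "\0asm"
  if (!magic.every((b, i) => bytes[i] === b)) {
    throw new Error("not a WebAssembly module");
  }
  // version is a little-endian u32 at offset 4
  return bytes[4] | (bytes[5] << 8) | (bytes[6] << 16) | (bytes[7] << 24);
}

function* iterSections(bytes) {
  let i = 8; // skip the preamble
  while (i < bytes.length) {
    const id = bytes[i++];
    // section size is a LEB128-encoded unsigned integer
    let size = 0, shift = 0, b;
    do { b = bytes[i++]; size |= (b & 0x7f) << shift; shift += 7; } while (b & 0x80);
    yield { id, payload: bytes.slice(i, i + size) };
    i += size;
  }
}

// The smallest valid module is just the preamble itself:
const empty = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
console.log(parseWasmPreamble(empty)); // 1
```

This is roughly the level of structure a Kaitai Struct definition would describe declaratively.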
It stopped working as enforcement ages ago, though.
More importantly, however, anything GPL must make source available and reasonably accessible. There is no such guarantee or even expectation for random programs on the web.
Just because you _can_ use the compilation step to (go some way to) hide your source doesn't mean you _have_ to. And relying on your secret sauce being private while you publish it in obfuscated form for all the world to decipher feels like a losing strategy.
I don't think there's any particular reason that WASM has to be more obfuscated than JS. You can already throw a WASM file into a bytecode-to-text translator which is about as useful as deobfuscating a minified JS file, and I assume decompiling/debugging tools will only get better in the future.
For a long time now, I've been thinking of a future where your OS properly isolates all the programs that run on it and even gives us the ability to have direct control over how programs interact with the rest of the system. OS's seem too mired in backwards-compatibility requirements to make big changes like that any time soon, but that's basically the way our browsers already work. Download some code, and execute it (relatively) safely because it's sandboxed from the rest of the system. Our browsers are basically the new OS, and this time around we can do it right using what we learned from OS's (and hopefully backport these browser features into the next generation of OS's).
For example, an app asks for a filesystem handle. You can hand it one that refers to a real location on your OS fs, or you can hand it a completely virtual fs that won't affect anything else on the system.
Whenever an app asks for a resource, being able to hand it a virtual or sandboxed one instead is a huge gain for user-control.
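The pattern above can be sketched very simply: the app codes against a handle interface, and the runtime (or user) decides whether the handle is backed by the real filesystem or by an in-memory fake. All names here are made up for illustration; this is not any browser's actual API.

```javascript
// A purely virtual filesystem handle: writes go into a Map, and
// nothing on the real disk is ever touched.
function makeVirtualFs() {
  const files = new Map();
  return {
    readFile: (path) => files.get(path) ?? null,
    writeFile: (path, data) => { files.set(path, data); },
  };
}

// The "app" only ever sees the handle it is given, so it cannot tell
// (and does not need to know) whether the filesystem is real.
function runApp(fs) {
  fs.writeFile("/notes.txt", "hello");
  return fs.readFile("/notes.txt");
}

const sandboxFs = makeVirtualFs();
console.log(runApp(sandboxFs)); // "hello"
```

A real-filesystem-backed object with the same two methods could be swapped in without the app changing at all, which is exactly the user-control win being described.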
If you really think this is true, look into Google's recaptcha blob.
"By the end of 1990, the first web page was served on the open internet, and in 1991, people outside of CERN were invited to join this new web community.
As the web began to grow, Tim realised that its true potential would only be unleashed if anyone, anywhere could use it without paying a fee or having to ask for permission.
He explains: “Had the technology been proprietary, and in my total control, it would probably not have taken off. You can’t propose that something be a universal space and at the same time keep control of it.”
So, Tim and others advocated to ensure that CERN would agree to make the underlying code available on a royalty-free basis, forever."
Can you see how rude it is not to do the same?
React, for example, shifting towards functional programming makes front-end apps simpler and more predictable.
If you dislike the HTML/CSS UI layer then WASM is indeed an alternative. However, you need to reimplement everything yourself, like text selection, right click, focus, accessibility, dropdowns, etc., because all you have is a <canvas> to draw on.
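Concretely, "all you have is a <canvas>" means even a button needs hand-rolled drawing and hit-testing, with focus, keyboard navigation, and accessibility still entirely missing. A minimal sketch of what a DOM `<button>` otherwise gives you for free:

```javascript
// One hand-rolled "button" on a canvas; coordinates are arbitrary.
const button = { x: 10, y: 10, w: 120, h: 32, label: "OK" };

// Drawing is manual (this would run against a canvas 2D context).
function draw(ctx) {
  ctx.fillStyle = "#ddd";
  ctx.fillRect(button.x, button.y, button.w, button.h);
  ctx.fillStyle = "#000";
  ctx.fillText(button.label, button.x + 12, button.y + 20);
}

// Hit-testing is manual too: on every click you must work out
// yourself whether the pointer landed inside the "button".
function hitTest(px, py) {
  return px >= button.x && px <= button.x + button.w &&
         py >= button.y && py <= button.y + button.h;
}

console.log(hitTest(20, 20)); // true
console.log(hitTest(5, 5));   // false
```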
But! WASM will eventually have DOM access, and that will definitely open up the landscape to create new frontend frameworks.
The three largest are Yew (Rust), Vugu (Vue-esque, but with Go instead of JS), and Blazor (C#).
https://github.com/yewstack/yew
https://github.com/aspnet/Blazor
All are perfectly viable for production apps as of today and not much more difficult than writing React, given you have some familiarity with their implementation language.
Technical question: 1.) What is the speed of WebAssembly on iOS WebKit and Android WebView? 2.) Is it feasible to write an entire app UI in something like Qt and target WebAssembly? 3.) Could the Android, iOS, and Windows versions of the app be native Qt apps, or possibly run through the device's WebKit?
Is this possible today? Is there a better UI library than Qt for this?
I can get lit-element with no build process going to prototype something in a single .html file in probably 30 seconds. I don't think Xcode/iOS developers can compete with that simplicity. Define initial state, alter it through events, pull data through fetch(), write HTML. I know for a fact iOS development isn't that simple.
EDIT: Here's an article from yesterday about this: https://developers.google.com/web/updates/2019/12/webassembl...
The "arrival" of WebAsm to w3c only means that w3c had finally woken up from eternal slumber and realized that everyone has already implemented The listed features.
I guess in a sense you already have the JS engine installed with, say, Chrome, which is essentially a runtime.
Why can't other runtimes come prepackaged? What am I missing here?
As if that's not already happening today with obfuscated and minified javascript
Also, the use cases for JavaScript and WebAssembly don't overlap enough for one to replace the other in most cases. JavaScript is a text-based scripting language you can write in any editor (the sprawling morass that is the current JS development ecosystem notwithstanding), but WASM requires knowing another language and having a compile step, which adds friction and complexity. You can't really replace a language with a bytecode.
It may mean that other languages can be deployed in the browser as easily as javascript, which may or may not be good depending on your point of view.
I personally look forward to the day when Hacker News embeds the Arc runtime as WASM and replaces all of their javascript with Arc code.
That's exactly what I hope happens. There are so many good languages out there that having to be stuck with javascript for the modern web is a crime.
> WASM requires knowing another language and having a compile step, which adds friction and complexity. You can't really replace a language with a bytecode.
I'm not sure that requiring a build step is a real problem, for web 'applications' at least. Plenty of JavaScript frameworks use a build step anyway.
For example, if Adobe delivers its new mobile Photoshop directly via iPad Safari will Apple have a way of 'nerfing' WebAssembly to stop them?
Companies will ask their developers to deliver WASM resources in the name of performance. As a notable side-effect, it will become harder to review how websites work.
Yes, I'm sure there will be reverse-compilation tools for WASM, but still.
It is possible to disable half of the paywalls on the web just by looking at the JS code. I can see why certain parties would push really hard to introduce Webassembly. The performance argument is just a pretext because today's jit compilers for Javascript are really good.
In practice, there's nothing that WebAssembly offers that could hinder analysis even further. If websites want to be transparent they could provide the sources (akin to providing unminified/unobfuscated JavaScript).
And minified JS isn't particularly more reviewable, I think.
WebAssembly is encouraging websites to dump megabytes of binary code in browsers.
Obfuscated, analysis-resistant code to ensure that people cannot disable ads and tracking.
It's appalling that we are accepting this.
For example, it doesn't support arbitrary computed gotos. Instead it supports block-based control flow, where code can enter nested blocks or branch out of them to enclosing ones. This makes it possible to construct a CFG statically, which wouldn't be possible if it supported arbitrary computed gotos.
Please
Pretty much all relevant browsers support wasm (and I suspect data for the few non-compliant Chinese browsers may be outdated, they’re definitely not last released in 2016 or 2017). Do you have some other idea of wasm-capable?
There are more features coming to WebAssembly in the browser, including (hopefully) a way to do feature detection inside the WebAssembly binary itself, instead of the current approach: compile as many binaries as you need for the combinations of features, feature-test in JS, and then pick which wasm module to load.
https://github.com/WebAssembly/binaryen/blob/master/src/tool...
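The current "feature-test in JS, then pick a module" flow can be sketched with `WebAssembly.validate()`, which returns true only if the engine accepts the given bytes under its supported feature set. The file names below are made up; a real probe would be a tiny module exercising an optional feature (SIMD, threads, ...).

```javascript
// The 8-byte preamble alone is the smallest valid module, so it doubles
// as a basic "is wasm available at all" check.
const baseline = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

// probes: [{ bytes, url }] ordered from most to least capable builds.
// If the engine validates a probe, the matching binary can be loaded.
function pickModule(probes) {
  if (!WebAssembly.validate(baseline)) return null; // no wasm at all
  for (const { bytes, url } of probes) {
    if (WebAssembly.validate(bytes)) return url;
  }
  return "app.baseline.wasm"; // hypothetical fallback build
}

console.log(pickModule([])); // "app.baseline.wasm" in any wasm-capable engine
```

Libraries like wasm-feature-detect package exactly this kind of probing, so you don't have to hand-write the probe modules.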
* Sandbox. WASM is a sandbox. Think of WASM as a language-agnostic replacement for Flash. It is an island in a webpage isolated from that page. This is great for security, so that the opaque binary running in that WASM island cannot modify the page in such ways as to violate the same-origin policy, such as turning a button into a hyperlink with a user's personal details attached as query parameters on a malicious third-party URL. It also means code executing in WASM cannot interact with the surrounding page, which means it isn't a JavaScript replacement. The developers of the WASM standard have been very clear that WASM will not ever be a JavaScript replacement.
To be fair there is work being done, several years in the making, to provide web-like technologies to WASM instances, such as a DOM. This enhancement eases some concerns of overhead, see the next bullet point, but they won't break security or escape the sandbox. This enhancement might go so far as allowing the containing page interaction with the WASM instance, but I suspect this would be limited, if at all, and only cross the sandbox barrier in one direction. There is huge interest in WASM and page interaction, but this is all coming from developers who want alternatives to JavaScript. There is no prevailing business interest in advanced WASM interaction.
* Overhead. Since WASM is a self-contained sandbox the incoming binary needs everything an application binary would otherwise need to execute as an application. For example if a WASM instance wants to pretend to be web page instance then it has to include its own DOM library, interaction code, presentation, and absolutely everything else. If accessibility is a concern you would likely have to reinvent how that works in your WASM instance. While this is of great interest for developers who hate JavaScript there isn't a lot of business value in that.
* Performance. So far WASM has not been able to significantly outperform JavaScript. In many cases WASM code performs slower than JavaScript. Without a large performance differentiation there is little business justification to invest in WASM. Flash's claim to fame is that for most of Flash's life it did significantly outperform JavaScript by an order of magnitude. JavaScript never got faster than Flash, but it did almost completely close the performance gap. Once JavaScript got fast Flash started dying.
---
There is potential business value in WASM, but you have to be willing to abandon any consideration that you are executing in a web page. Selling the idea to business owners that you are executing in a web page, but you need to imagine that you aren't is a tough sell. Here are some ideas that might work better in WASM than the typical web environment:
* Document interaction with digital signatures from physical tokens.
* Streaming media players with embedded DRM.
* Certificate negotiation for an end-to-end encrypted messaging tunnel transmitted via web page.
It can do so via JavaScript glue for now, but there are spec proposals to allow direct access to the DOM.
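The "JavaScript glue" pattern looks like this: the wasm module cannot touch the DOM itself, so it imports a JS function from the page and calls it. The module below is hand-assembled bytes for illustration; it imports `env.log` and exports a `main()` whose body is a single call to that import. In a real page the import could set `document.title`, attach listeners, and so on.

```javascript
// Hand-assembled wasm module: imports env.log, exports main() = call $log.
const moduleBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // "\0asm", version 1
  0x01, 0x04, 0x01, 0x60, 0x00, 0x00,             // type section: () -> ()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,       // import section: "env"
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,             //   "log" (func, type 0)
  0x03, 0x02, 0x01, 0x00,                         // func section: 1 func, type 0
  0x07, 0x08, 0x01, 0x04, 0x6d, 0x61, 0x69, 0x6e, // export section: "main"
  0x00, 0x01,                                     //   (func index 1)
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x10, 0x00, 0x0b, // code section: call 0; end
]);

// The glue: JS supplies the DOM-capable function the wasm code calls.
// Here it just records that it was invoked.
let called = false;
const imports = { env: { log: () => { called = true; } } };

const instance = new WebAssembly.Instance(
  new WebAssembly.Module(moduleBytes), imports);
instance.exports.main(); // wasm -> imported JS function
console.log(called); // true
```

Toolchains like Emscripten or wasm-bindgen generate this glue for you, which is why DOM-driven wasm apps work today despite the sandbox.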
I bet in a few years we will run wasm natively on the processor.
The processor would have to translate the wasm into some form of register-based operations anyway, and a JIT compiler might be able to do this more efficiently since it can see a bigger picture than the processor.
Wasm is in this interesting spot where it's relatively low-level, but high-level enough to allow proper sandboxing. It strikes me that the arrangements to accommodate that, like call indirection via tables, could be optimized on CPU level. Not necessarily in a sense of a CPU that directly runs wasm, but rather a CPU architecture which is optimized to be a target for JIT or AOT compilers from wasm.
It looks like Gary Bernhardt was pretty spot on in his talk "The Birth and Death of JavaScript": (https://www.destroyallsoftware.com/talks/the-birth-and-death...)
Instruction caches (and JITs to a degree) solve the same problem in much more general ways. That's why Azul went out of their way to create an appliance to run Java code with custom CPUs, and ended up with a pretty standard RISC for the most part.
All of that applies to WASM machines too.