On the other hand, I quite like the single-threadedness of JavaScript. Promise-based systems (or async/await) give us basically cooperative multitasking anyway, letting us break up long-running (unresponsive) work without worrying about mutexes and semaphores. I understand exactly when and where my JavaScript code will be interrupted, and I don't need to wrap blocks in atomic-operation markers just in case.
I've written plenty of multithreaded code, starting with old pthreads stuff and eventually moving on to Java (my own experience with threading is limited mainly to C and Java), and it can be a real pain. I guess limiting shared memory to explicitly named blocks means you don't have as much to worry about vis-a-vis non-reentrant code messing up your memory space.
That said, it is a pretty useful construct, and I see where this can benefit browser-based games dev in particular (graphics can be sped up a lot with multicore rendering, I bet).
I'm enthusiastic about SharedArrayBuffer because, unlike threads in traditional languages like C++ or Java, we have two separate sets of tools for two very separate jobs: workers and shared memory for _parallelism_, and async functions and promises for _concurrency_.
Not to put too fine a point on it, shared memory primitives are critical building blocks for unlocking some of the highest performance use cases of the Web platform, particularly for making full use of multicore and hyperthreaded hardware. There's real power the Web has so far left on the table, and it's got the capacity to unleash all sorts of new classes of applications.
At the same time, I _don't_ believe shared memory should, or in practice will, change JavaScript's model of concurrency, that is, handling simultaneous events caused by e.g. user interface actions, timers, or I/O. In fact, I'm extremely excited about where JavaScript is headed with async functions. Async functions are a sweet spot between, on the one hand, the excessively verbose and error-prone world of callbacks (or often even hand-written promise-based control flow) and, on the other, the fully implicit and hard-to-manage world of shared-memory threading.
The async culture of JS is strong and I don't see it being threatened by a low-level API for shared binary data. But I do see it being a primitive that the JS ecosystem can use to experiment with parallel programming models.
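To make that "sweet spot" concrete, here's a minimal sketch of sequential async logic as an async function. `fetchJson`, `loadProfile`, and the URLs are illustrative stand-ins, not anything from the article:

```javascript
// fetchJson is a hypothetical stand-in for any promise-returning API.
const fetchJson = url => Promise.resolve({ url });

// Reads top-to-bottom like blocking code, but control is only ever
// yielded at the explicit `await` points; the thread is never blocked.
async function loadProfile(userId) {
  const user = await fetchJson(`/users/${userId}`);
  const posts = await fetchJson(`/users/${userId}/posts`);
  return { user, posts };
}
```

The same logic as nested callbacks would bury the happy path in error-handling boilerplate; as threads, it would need locks around the shared result.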
But on closer inspection of the post, this implementation seems to be highly targeted at certain kinds of compute-bound tasks, with just the shared byte-array memory. It's well partitioned from the traditional UI / network event-processing system in a way that makes me optimistic about the language.
1. How is the accidental modification of random JS objects from multiple threads prevented - that is, how is the communication restricted to explicitly shared memory? Is it done by using OS process underneath?
2. Exposing atomics greatly diminishes the effectiveness of automated race-detection tools. Is there a specific rationale for not exposing an interface along the lines of Cilk instead - say, a parallel for loop and a parallel function call that can be waited for? The Mandelbrot example looks like it could be handled just fine (meaning, just as efficiently and with a bit less code) by a parallel for loop with what OpenMP calls a dynamic scheduling policy (i.e., an atomic counter hidden in its guts).
There do exist tasks which can be handled more efficiently using raw atomics than using a Cilk-like interface, but in my experience they are the exception rather than the rule; on the other hand parallelism bugs are the rule rather than the exception, and so effective automated debugging tools are a godsend.
Cilk comes with great race-detection tools, and these can be developed for any system with a similar interface. What enables this is that a Cilk program's task-dependency graph is a fork-join graph, whereas with atomics it's a generic DAG; the number of task orderings an automated debugging tool has to try is potentially very large for a DAG, whereas for a fork-join graph it's always just two. I wrote about it here http://yosefk.com/blog/checkedthreads-bug-free-shared-memory... - my point though isn't to plug my own Cilk knock-off that I present in that post but to elaborate on the benefits of a Cilk-like interface relative to raw atomics.
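For reference, the "atomic counter in its guts" scheduling pattern can be sketched in a few lines. The names (`claimNextRow`, `HEIGHT`) are illustrative; in real code each worker would run `claimNextRow` against the same SharedArrayBuffer:

```javascript
// A shared counter that workers increment to claim the next row of work.
const HEIGHT = 512; // illustrative image height
const sab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT);
const counter = new Int32Array(sab);

function claimNextRow() {
  // Atomics.add returns the value *before* the increment, so every
  // caller - even across workers - receives a distinct row index.
  const row = Atomics.add(counter, 0, 1);
  return row < HEIGHT ? row : -1; // -1: no rows left
}
```

A Cilk-style parallel-for would hide exactly this counter behind the loop construct, which is what makes the resulting dependency graph fork-join rather than a general DAG.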
Douglas Crockford's strategy of taking a language, identifying a subset of it, calling it "The Good Parts", and sticking to it is a great model: welcome new features and let them evolve, but keep your distance until they're fleshed out. This has pretty much been the M.O. of JavaScript and IMO has worked great.
That's why callbacks, promises, async/await and all that are neither multitasking nor multithreading. They are all about control flow, while multithreading is all about parallelism and is essentially a very low-level, specialized thing that nobody should use unless absolutely necessary.
This just isn't true. Why do you think people wrote multi-threaded applications back when almost all machines had just one processor and just one core? Threads give you concurrency as well, even if you don't want or need parallelism.
On the other hand, in the uncommon event I do have some weird javascript thing that's going to take a long time (say, parsing some ridiculously-big JSON blob to build a dashboard or something), I know I can break up my parse into promises for each phase and that I won't be locking up other UI processing as badly during that process. So: not exactly multitasking / threading as you say, but still a handy thing to think about.
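That phase-splitting can be sketched like so (`processInChunks` and the chunk size are illustrative, not a standard API):

```javascript
// Process a big array in slices, yielding to the event loop between
// slices so UI events and other callbacks can run in the gaps.
async function processInChunks(items, chunkSize, handle) {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(handle);
    // A zero-delay timeout defers the next slice to a later macrotask,
    // giving rendering and input handlers a chance to run.
    await new Promise(resolve => setTimeout(resolve, 0));
  }
}
```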
Most people don't really know what typed arrays are, but they're in ES6 nevertheless.
https://github.com/nodejs/node/issues/5798
Node.js uses OpenSSL instead of the operating system's CSPRNG. The biggest argument for "WONTFIX" is "Node.js is single-threaded so OpenSSL fork-unsafety isn't a concern for us".
If JavaScript becomes multi-threaded, it's not unreasonable to expect Node.js to follow. If it does follow, expect random numbers to repeat because of OpenSSL's broken RNG.
I don't see this quote within your linked issue and, as far as I can tell, there's no discussion of multi-threading.
I think CSP's channel-based message control is a far better fit here, especially since CSP can quite naturally be modeled inside generators and thus blocks only locally.
That means the silliness of "the main thread of a web page is not allowed to call Atomics.wait" becomes moot, because the main thread can do `yield CSP.take(..)` and not block the main UI thread, but still simply locally wait for an atomic operation to hand it data at completion.
I already have a project that implements a bridge for CSP semantics from main UI thread to other threads, including adapters for web workers, remote web socket servers, node processes, etc: https://github.com/getify/remote-csp-channel
What's exciting, for the web workers part in particular, is the ability to wire in SharedArrayBuffer so the data interchange across those boundaries is extremely cheap, while still maintaining the CSP take/put semantics for atomic-operation control.
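The put/take semantics can be illustrated with a minimal promise-backed channel. This is a toy sketch, not the API of the linked project:

```javascript
// take() hands back a promise, so "waiting" on a channel suspends only
// the local async function (or generator), never the UI thread itself.
function makeChannel() {
  const values = []; // puts that have no taker yet
  const takers = []; // takes that have no value yet
  return {
    put(value) {
      if (takers.length) takers.shift()(value);
      else values.push(value);
    },
    take() {
      if (values.length) return Promise.resolve(values.shift());
      return new Promise(resolve => takers.push(resolve));
    },
  };
}
```

In the SharedArrayBuffer case, `put` would carry only a small notification while the bulk data sits in shared memory, which is where the zero-copy win comes from.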
This is where I disagree with the direction Mozilla has been going for years. I don't want the web to be a desktop app replacement with HTTP as the delivery mechanism. I'm fine with rich single page web apps, but I don't understand the reason why web apps need complete feature parity with desktop apps.
Why not let the web be good at some things and native apps be good at others?
But also I think there's a false dichotomy between "the Web should just be for documents" and "the Web should just be for apps." The Web is simultaneously an application platform that blows all other platforms out of the water for delivering content. First, there's a reason why so many native apps embed WebViews -- despite its warts, CSS is the result of hundreds of person-years of tuning for deploying portable textual content.
But more importantly, you just can't beat the URL. How many more times will we convince the entirety of humanity to know how to visually parse "www.zombo.com" on a billboard or in a text message? It's easy to take the Web for granted, it's fun to snark about its warts, and there's a cottage industry of premature declarations of its death. But I personally believe that the humble little hyperlink is at the heart of the Web's power, competitive strength, and longevity. It was a century-old dream passed on from Vannevar Bush to Doug Engelbart to Xerox PARC and ultimately to TBL who made it real.
URLs are great, but they don't have to be limited to the web. Or, rather to say, the thing on the other end of the URL doesn't necessarily need to be something the browser handles directly.
I'd like to see something developed that lets you do something like:
x11://myapp.example.com
where clicking on that link in a browser launches the remote app and then renders the UI locally using X11 remoting - as opposed to trying to render the application UI in the browser. OK, I know, go ahead and say it: X11 sucks, X11 remoting doesn't work on WAN links, etc. To which I say:
a. Fine, let's invent something better that still avoids the need to pack every ounce of functionality in the universe into a web browser.
and
b. That doesn't jibe with my experience anyway. Just earlier this week I was playing around and decided to launch a remote X app using X forwarding over ssh, over a public Internet link. Worked like a champ. In fact, it reminded me of how fucking awesome X11 remoting really is, and makes me long for either a resurgence of interest in it, OR (see a above) the invention of a newer, better version that everybody can be happy with.
There's also a lot to be said for delivering applications using Java Web Start as well. JWS is wicked cool technology that is tragically under-utilized. IMO, anyway. :-)
It may not be a %-age of revenue, but you definitely can't host a non-trivial webapp for free either.
You could even argue that in many webapps scaling costs are proportional to revenue, which makes it awfully similar to an app store.
> But also I think there's a false dichotomy between "the Web should just be for documents" and "the Web should just be for apps."
Yeah, I don't have a clear idea on where the web should "end", but wow... web pages able to eat all my cores and have data races seems like a line to be crossed with great caution and care.
Whether it's a website or a web app, "installing" it is as easy as going to a URL. You can go to that same URL on your PC, phone, tablet, your friend's computer, etc. and it will run the same. It's easy to share, easy to remember, and if it takes more than 5 seconds from the time you hit enter to the time you are using it, we consider that a mistake on the creator's part. Plus the ability to discover new applications is extremely easy, and interoperability with other web applications is easy for both the developer and the user.
Compare that to desktop applications where installation is still an "event", and you are lucky if it can be done faster than a few minutes. Plus there are portability issues (oh that doesn't have a windows version?), it's difficult to share (try explaining how to install software to a non-technical person...), there is DRM all over the place, and they are significantly less secure.
Even most mobile applications take 30+ seconds to install on my android phone, and have all the same issues with discoverability, cross-platform issues, vendor lockin, and permissions/security issues.
Don't get me wrong: I appreciate the portability of the web. I just worry the focus on making it more native-like will introduce more of what makes the native ecosystem so frustrating.
That ship has sailed a long time ago.
>I'm fine with rich single page web apps, but I don't understand the reason why web apps need complete feature parity with desktop apps.
Because ...?
Not the poster you're replying to, but I'll share my thoughts:
1. Trying to make the browser ideal for both browsing content, and rendering rich application UIs bloats the browser.
2. Time spent trying to make the browser a poor imitation of an X server is time that could go into making the browser better at, ya know, browsing. FSM only knows, Firefox could use a LOT more developer time spent on improving performance and reducing the memory footprint. (Yeah, I know, sometimes those goals overlap. But not always, which is the point.)
3. For all the talk about how X11 remoting doesn't work over the Internet, I've done it and it worked just fine. YMMV, but it certainly can work just fine in at least some situations.
4. Trying to create a rich experience in the browser inevitably leads to conflicts that don't exist in a desktop app. For example, typically the F1 key is the "Help" key. So if I'm sitting in a web application and I hit F1, what happens? Do I get help for my application, or for the web browser? Likewise, can my app easily use the F11 key? No. And look at the UI consistency between web "apps": there's none. With desktop apps, most apps adhered (mostly) to one of a relatively small set of standards - CUA, or whatever. With web apps, the experience is all over the damn place.
I'm sure there are other good reasons, but those jump out to me.
And why do we simply trust the existing browsers? Google has very specific goals to monetize you and Chrome can be leveraged to help that goal. Microsoft's historic fight with the web and now their current changing business goals are a reminder that their web browser goals can always change, and their current model seems more like Google. Apple is always Apple. And why should we blindly trust Mozilla? They depend on external funding to keep the foundation going to pay for a lot of the complex engineering that goes into Firefox. I'm not accusing them of anything wrong, but you can look up prior controversies about their funding sources and decisions and see people don't agree it is all rosy.
I'm suggesting the increasing technical complexity is not necessarily working towards the goal of an open web because it is entrenching the gatekeepers that can make the web browsers.
It's much easier to childproof a curated app store than The WWW for example.
I'm not saying closed is better -- they're just different and that's ok. In fact I like how different they are. It means each has its own unique strengths and doesn't have to worry about trying to do it all.
It feels like something that was more interested in competing with native than offering a constrained and portable approach.
[0] http://kripken.github.io/emscripten-site/docs/porting/guidel...
My guess is that, despite the sugar coating that JavaScript's async internals have received of late, writing stable multi-threaded code with JavaScript is going to be hard.
JavaScript now has the safety of multi-threaded code with the ease of asynchronicity!
It's just a zero-copy transfer to a worker (or from a worker) but it makes sure the "sender" doesn't have access to the memory any more.
It's incredibly easy to use, avoids all the common issues and pitfalls with shared memory, and being zero-copy it's stupidly fast.
Obviously it's not a replacement for true shared memory, but I've used it in the past to do some image processing in the browser (broke the image into chunks, transferred each chunk to a worker to process, then returned and stitched them all back together).
[1]https://developer.mozilla.org/en-US/docs/Web/API/Transferabl...
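The move semantics are easy to see with `structuredClone`, which accepts the same transfer list as `postMessage` (the buffer size here is arbitrary):

```javascript
// Transfer, don't copy: after the transfer the sender's buffer is
// detached, which is how the API guarantees exclusive access.
const chunk = new ArrayBuffer(1024 * 1024);
const moved = structuredClone(chunk, { transfer: [chunk] });
// chunk.byteLength is now 0; only `moved` still refers to the memory.
```

With a worker, `worker.postMessage(chunk, [chunk])` has the same effect: zero-copy handoff, and the sender can no longer touch the bytes.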
Blocking has simple semantics, but it's hard to scale. Everything-is-async is a little more complicated, but scales nicely. The problem is when you combine the two, you only get the worst of both approaches.
If most things are async, and you've only got one thread, and you block that thread, then your whole app is blocked. Now you've got the scaling problems that come with blocking, and the complexity that comes with async.
RangeError: out-of-range index for atomic access
That said, 20 workers is about 11x faster than the single-threaded version.[1] https://axis-of-eval.org/blog/mandel3.html?numWorkers=20
JS is an approachable language but Node has problems with scaling and error handling of non-blocking IO. Erlang solves those problems but the language is not approachable and has a smaller ecosystem than JS. I'm imagining something like Node with "micro-workers" so developers could reuse their existing JS code, but not have to worry about scaling or non-blocking APIs.
If only Mozilla had some technology that could deal with ownership of memory...
Seriously, if Rust doesn't have an asm.js-optimized target yet, it really should.
We want both compile to JS and compile to wasm to work well, the work just isn't done yet.
Having not yet played with this myself: is anyone familiar with what kind of latency overhead is involved with signaling in the Atomics API? I'm not very familiar with the API yet, so I've no idea how signaling is implemented under the hood.
The MessageChannel API by contrast (i.e. postMessage) can be quite slow, depending. While you can use it within a render loop, it usually pays to be very sparing with it. Typical latency for a virtually empty postMessage call on an already-established channel is usually .05ms to .1ms. Most serialization operations will usually balloon that to well over 1ms (hence the need for shared memory). Plus transferables suck.
>Finally, there is clutter that stems from shared memory being a flat array of integer values; more complicated data structures in shared memory must be managed manually.
This is probably the biggest drawback to the API, at least for plain Javascript. It really favors asm.js or WebAssembly compile targets for seamless operation, whereas plain Javascript can't even share native types without serialization/deserialization operations to and from byte arrays.
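A sketch of what that manual management looks like in plain JavaScript; the record layout here is made up for illustration:

```javascript
// Pack {x, y, hits} records into a flat Int32Array over shared memory;
// there is no way to share the objects themselves across workers.
const FIELDS = 3; // x, y, hits
const COUNT = 64;
const sab = new SharedArrayBuffer(COUNT * FIELDS * Int32Array.BYTES_PER_ELEMENT);
const records = new Int32Array(sab);

function writeRecord(i, x, y, hits) {
  const base = i * FIELDS;
  records[base] = x;
  records[base + 1] = y;
  records[base + 2] = hits;
}

function readRecord(i) {
  const base = i * FIELDS;
  return { x: records[base], y: records[base + 1], hits: records[base + 2] };
}
```

Every field access is hand-computed index arithmetic, which is exactly the bookkeeping a compiler does for you in asm.js/WebAssembly output.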
One place where I would like to use this is for collision detection, like in this example: http://codepen.io/kgr/pen/GoeeQw
But I'm relying on objects with polymorphic intersects() methods to determine if they intersect with each other, and once I encode everything up as arrays, I lose the convenience and power of objects.
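For instance, circles with an intersects() method might flatten to (x, y, r) triples in a typed array - workable, but every access turns into index arithmetic. This layout is hypothetical, not from the linked pen:

```javascript
// Three circles as (x, y, r) triples; the third is far away on purpose.
const circles = new Float64Array([
  0, 0, 5,    // circle 0
  3, 0, 5,    // circle 1: 3 units from circle 0, radii sum 10 -> overlap
  100, 0, 1,  // circle 2: far from both
]);

function circlesIntersect(i, j) {
  const dx = circles[i * 3] - circles[j * 3];
  const dy = circles[i * 3 + 1] - circles[j * 3 + 1];
  const rs = circles[i * 3 + 2] + circles[j * 3 + 2];
  // Compare squared distances to avoid a sqrt.
  return dx * dx + dy * dy <= rs * rs;
}
```

Polymorphism (circle vs. rectangle, say) would then need a type tag per record and a dispatch switch, which is precisely the convenience that's lost.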
Concurrency isn't hard - try Clojure's core.async and you will find out. Shared mutable state is mind-bogglingly hard.
As for how heavy, they are definitely a bit heavier than I'd like. But rather than me trying to describe it, [1] is a really good benchmark with results that you can run yourself if you want.