"Rest assured that Deno will remain MIT licensed. For Deno to grow and be maximally useful, it must remain permissively free. We don’t believe the “open core” business model is right for a programming platform like Deno."
There are some hints though: "If you watch our conference talks, you will find we've been hinting at commercial applications of this infrastructure for years. We are bullish about the technology stack we've built and intend to pursue those commercial applications ourselves. Our business will build on the open source project, not attempt to monetize it directly."
Does anyone have some insight into those? I haven't watched any Deno talk (maybe one, actually?), so it feels a bit strange to make people watch technical talks to find hints of the monetization strategy.

PS: if I were a rich investor I'd throw money at this project even as a donation, so no complaints at all, but I'm very curious about the monetization plan.
Deno is competing against Node.js, which is MIT-licensed. Deno is arguably better, but it would have to be _so much better_ to get people to even give it a second look if it was commercial.
One of the funniest/best business models out there is SQLite's (https://sqlite.org/copyright.html). They dedicate it to the public domain, but some lawyers wrongly claim that that is not enough, so they will "sell" you a warranty asserting it is all public domain.
Other than that, there is the following vague statement at the end of the post:
> The Deno company hopes to enable the millions of web programmers out there to maximally leverage their craft in other domains.
or like Joyent
VCs will issue a course correction very soon.
Looks like a repeat of Docker.
This might be surprising, but not everyone is looking to be a unicorn.
I've been building my new multiplayer games website [1] with Deno over the last 4 months and apart from some minor growing pains, it's been a joy to use.
The lack of unnecessary package management, and the TypeScript-by-default approach, makes Web dev much nicer. We're also using TypeScript on the client side, relying on VSCode for error reporting. We use sucrase to strip the types just as we're serving the script files, so there is no extra build time. It feels like TypeScript is Web-native, and we can share typed code with the server.
[1] Not yet launched but we ran a preview past weekend with hundreds of players over WebSockets: https://twitter.com/MasterOfTheGrid/status/13757583007179735... - https://sparks.land
I notice your script files are all pretty small, have you run into any upper limits on performance or scalability so far with this approach?
> I notice your script files are all pretty small, have you run into any upper limits on performance or scalability so far with this approach?
Not that I can tell. But if we need to, we can always do a minified bundle in production later on. So far it's just nice to not have to even think about it!
If you're on Windows like me, sadly there's still a nasty bug with path mismatches between the LSP server and the VSCode extension (https://github.com/denoland/deno/issues/9744) which requires reloading the window to fix spurious errors, but I'm sure it'll be fixed soon enough.
The VS Code extension is maintained by the official team and will provide the best experience. There are unofficial plugins for Sublime Text and Vim; they use the LSP too and provide a comparable experience.
Each game server (a stand-alone Deno program that might or might not run on its own machine) connects to the switchboard over websocket and authenticates itself with an API key (since people will be able to make their own game servers).
When a player wants to join a server, they POST a request to the switchboard, which gives them back a token that they can send to the game server after establishing a WebSocket connection to it. The game server checks the token with the switchboard and gets back public user account info if it's valid.
Each game server's logic is currently single-threaded. I guess we might end up offloading some work to WebWorkers later on.
A server can publish some state info through the switchboard that will be broadcasted to other servers from the same user. This is used to show player counts in game rooms from a lobby, things like that.
I run the whole thing on a couple cheap Scaleway servers, with Cloudflare in front (no AWS nor containers or anything of the sort). My previous platform, built with Node.js (https://jklm.fun) is able to sustain at least 2000 concurrent players like that, though admittedly those are board-like games which are not very demanding, unlike the games for Sparks.land which will be more fully-fledged... so we'll see how that holds up!
> Our infrastructure makes it possible to... create custom runtimes for different applications [like] Cloudflare Worker-style Serverless Functions
Fascinated to see what happens here. The serverless / edge compute paradigm fits Javascript hand-in-glove philosophically, but until now it's always felt quite clunky to me. When I've tried it out, I've always been left thinking "but this would just be so much easier with a server".
Reading this has made it click for me why that is. A new paradigm needs a new set of tools native to that paradigm.
The entire server-side JS ecosystem is currently structured around Node, a fundamentally stateful-server paradigm. You can try to abstract over it, but only so far. It's not the serverless paradigm that's clunky, per se, it's that the tools right now were built for another way of doing things.
As a concrete example - Deno has an import system based on URLs, rather than on-disk node_modules. I thought that was a cool feature for convenience, less overhead, more open and sharable packages, etc. But now I realise the full intent of it. It's much more than all that, it's a fundamental shift in the paradigm: no implied dependency on stateful disk in the runtime itself.
I think that lets you deploy to things like edge devices that don't have a hard drive, and even more ephemeral environments.
Wow, lots of bold statements there. And another entry for the usual "JavaScript is fragmented, let's create another tool to fix 'em all".
As someone who uses Node but doesn't closely follow the steering/proposals side of things, I can't say I had this impression of that process.
Is Node really that bad compared to how JS/ES is innovated on in the browser?
As an outsider who likes to lurk, I have the impression both ECMA and Node committees are stuck in the "we're nice and therefore right" field.
They made technological choices that broke the platform (e.g. require vs import). It took ages of pain to innovate on things that matter (e.g. promises, async).
I wish Deno the best, but I'll just try to stay away from JS from now on (same as I've been avoiding Python after the 2to3 migration).
Furthermore Node has its own maintenance/risk issues in production systems (think permissions), and Deno reduces those with custom built runtimes.
I cannot see it replacing Node though. Node has created a vast ecosystem that includes modules (npmjs), client-side bundlers (e.g. webpack), server-side frameworks (e.g. Express), etc. But because Deno solves some of the issues for those who run sensitive code in production (e.g. Lambda functions), it'll most likely become another VM on the public cloud providers' list.
All in all, the JavaScript interpreter is becoming something like the JVM: everyone wants to use it, but without writing vanilla JavaScript.
Most of the packages on NPM are complete garbage (most, not all), and I say this as a full-time JS dev who actually likes JS. Liking the technology does not have to mean liking what people do with it, and JS is the worst, both in terms of the crap on NPM and the web.
Although I would look for similar examples elsewhere to understand where it is all headed, and one of those could be the rise and adoption of the JVM. There are a bunch of old dependencies, multiple repositories (compared to just one popular npmjs), a couple of build tools (Ant, Maven, Gradle) compared to npm and yarn, then multiple languages (invented for somewhat similar reasons as TypeScript). One thing is sure: the dependencies stay somewhere almost forever, and everything else kind of evolves around them.
It seems that the main problem at the moment is the developer (me, you, everyone) who, instead of writing 20 lines of code, uses a dependency that has 10 other transitive ones. You cannot fix it outright, but you can challenge those situations at work and reduce the tech debt for good. Maybe there should be a build tool that throws errors depending on the number of dependencies. Maybe there should be a frontend framework which only needs 10 dependencies for the hello world instead of 1500; otherwise we kind of think that if a boilerplate template has 1500 deps, then surely adding another 100 will not do much harm.
Its value is less about the existing dependency tree of libraries and more about how many apps directly depend on libraries that are npm-hosted.
There are many 200+ kloc apps out there making millions/yr each that deeply depend on libraries and frameworks hosted in npm.
I generally think the NPM ecosystem is pretty cool, but I have noticed that even simple stuff like creating a RESTful service doesn't seem to have stabilised in the ways I would have expected given Node's target audience, and you end up writing a lot of the boilerplate yourself. Hopefully this will result in hundreds of replies telling me how I should be doing it, but even the fact that it's not trivial to find that out is a sign of a fragmented ecosystem.
And don't do anything. I'm amazed by how often I read the source for a package I'm interested in, only to find it's about 10 or 20 lines of code.
The convenience of adding a package means that if you're not sure how to do something, you can easily just add a package to do it rather than figuring out how to do it yourself. A lot of the time, you can just read the source and adapt it rather than installing the package.
Also: if you're porting from Node, the only things that really have to change are the system/IO API calls. The import style is a bit different but that could pretty much be automated. It's still just TypeScript at the end of the day; your core logic will be the same.
Except not, because lots of people prefer to write JavaScript, and those other JVM languages are usually less verbose rather than more verbose.
I think this shows the lack of willingness to write a vanilla, all-compatible javascript. Also it seems people use new features without understanding how those get compiled to a lower version.
Sorry when I meant vanilla I did not mean ES9.
Plus, aren't Deno libs compatible with Node libs?
Maybe, but who cares when your code does not execute instantly on Lambda and requires you to use GraalVM to convert bytecode to a binary. Besides, you have a lot of people who would say Kotlin is better.
JavaScript, like Python, wins here. Well, JavaScript only kind of wins, because it is interpreted but devs do not use it directly, rather through a bundler/compiler.
> Plus, aren't Deno libs compatible with Node libs?
Not really. If your dependency relies on, say, the `https` module, then it'll not work on Deno, as Deno does not have it. And Deno modules are not on `npm` to begin with; even if you import them, they are written in TypeScript.
- Typescript as a first class citizen
- An actual `window` global with familiar browser APIs
- Sandboxing w/ permissions
- URL-based imports (no need for NPM)
- Bundling into self-contained binaries
- Things like top-level-await which Node.js still treats as experimental.
- Better-designed APIs than the Node standard lib (esp. when it comes to promises instead of callbacks)
To me, those aren't just minor details. This has the potential to create a new epoch in server-side JavaScript.
The structural and semantic differences between imports are a much more important discussion. Import syntax is the front end to the language's metaprogramming semantics. It's metaprogramming in the sense that your code and other code is the data, but what you're really programming is a task runner with specific instructions about how to find and satisfy the requirements to build and/or run the software.
Integrating package resolution into the language itself is a really important distinction for contemporary designs - passing that buck to external tools but simultaneously coupling them to the runtime is a mistake that I think we should learn from. Deno is a good step in that direction.
What happens when there is the next Codehaus-like shutdown, and so much source code just won't work? Or when a bad actor takes control of a domain that commonly hosted packages (perhaps through completely legitimate means, such as registration expiration), can get a completely legitimate SSL certificate for it, and responds to requests for packages with malicious code? I think the abstraction, and to some degree the centralization, of package management is generally a good thing
The fact that URL-based imports make you uncomfortable is good. Let that discomfort guide you to adopt some extra security measures to protect yourself from third-party dependency exploits
Also: Nobody prevents you from using a package manager anyway. Just because you can use urls in imports doesn't mean you have to. But it is very convenient that deno downloads exactly the code that is imported. A package manager will always just throw everything at you. Some packages in node.js try to fix this by splitting the project into submodules like @babel/core, @babel/env, .... But that made installing dependencies worse. Just let deno figure out what is required is way more elegant IMO.
Deno caches everything offline and only reloads if you use --reload flag.
I would recommend going through the manual to learn more about deno workflow.
URL-based imports are already something you can do in package.json if you really thought that was great. Top-level await really is trivial. The permissions system does very little for you since they are process-level permissions when I'm scared of transitive dep attacks. Typescript in Node is good and Deno has to catch up in tooling.
If Deno was where it is today but like 8 years ago, it would have made a splash. Now it just looks like someone made a few creature comforts to Node.
- Typescript as a first class citizen: Agreed, it's not hard to get TS set up in Node, and it's a "one and done" cost.
- An actual `window` global with familiar browser APIs: I've never needed or wanted this, though I could see how the server-side rendering crowd could benefit. But I still have to believe there would, by necessity, be enough differences from the browser's `window` that implementing SSR is still non-trivial.
- Sandboxing w/ permissions: I honestly think this is going to be useless. As you stated, if you're really running untrusted code better to rely on process permissions. This actually reminds me of many of the overly complicated security/permissions architecture in the Java SecurityManager stuff that I rarely if ever actually saw being used.
- URL-based imports (no need for NPM): NPM certainly has its warts, but I think a lot of Deno supporters woefully underestimate how most server-side JS developers love having a single primary global package repo.
- Bundling into self-contained binaries: Again, this is nice, but also gets a "Meh, I don't really care" from me.
- Things like top-level-await which Node.js still treats as experimental: Once you learn how to do an IIFE who really cares?
- Better-designed APIs than the Node standard lib (esp. when it comes to promises instead of callbacks): Again, a minor nice-to-have, especially with Util.promisify.
This doesn't seem good?
- this doesn't make sense; it's bad API design, and shoehorning in a completely different paradigm is not a good idea. It should have moved somewhere less familiar and more oriented to the environment (I know that sounds weird, but giving a quick answer: there are better solutions, and this ain't a browser)
- not bad, but seems odd to me as a language feature in some ways, there are plenty of ways to achieve this. (macOS will do this to your node scripts even itself, why do we need more of it)
- this is subject to the same problems NPM could have, but I guess it is easier? Now you have to lexically parse all files to determine an import and you also have to remember where you keep your 3rd party packages without a centralized way to know it (edit: this seems to harm the previous point)
- bundling isn't that interesting in this context as it just bundles the whole runtime with the package, which is terrible for size of packages and doesn't net a lot of benefit since they are also now tied together - if there is a security flaw you now must fix the entire binary and hopefully it still compiles. (edit: technically swift did this a long time too... but they also were reasonably sure they could include their runtime with the OS itself once ABI was stable, I am not sure if Deno has a path for this or if we just get fat binaries forever)
- top level await is a weird one to me, there are valid reasons its not in node currently. but yeah- no one likes having to write janky init code to work around this
final edit: I have a lot of opinions on this and would love to learn more about why Deno is great. From what I can tell, it's just a faster runtime for JS, which, imo, is great. But the points drawn from GP are just bizarre to me.
- it makes a ton of sense if you don't want to maintain two different versions of the same library. With WASM there is zero need for Node native modules anymore, so there is no need to have platform-specific idiosyncrasies that go beyond Web APIs. Is the fact that it's called "window" a favourite of mine? Certainly not, but when you try to get a library running that you really need, that was originally written for the browser or vice versa, you don't care what the thing is called, as long as it's backwards compatible.
- defense in depth, multiple sandboxes are better than one
- this has to do with the window point, it's a lot easier to make code cross platform compatible if you only have to reference publicly accessible http endpoints
- maybe that's not interesting for you, but I've had to deliver command line tools as binary, and I'm super duper happy that I could do so in JS, the security I gain from controlling the runtime version is also much better than what I'd get from that being the users responsibility, besides that fact that not knowing exactly which runtime your code is gonna end up on is also a big source of security vulnerabilities
-
But what I’d like to question is why the idea of parsing everything is considered bad. Semver itself, while miles ahead of what came before, is still just lies we tell the compiler.
You can never actually rely on the fact that a minor fix will not be a breaking change, and the like.
Rich Hickey had a great talk on the subject of imports and breakage, and the conclusion that I agree with was to actually load and check every function in all the libraries you use, so that if there is an incompatible change you will know about it.
I’m glad people are trying to experiment here, as package management is far from a solved problem and issues with it cause sweat, tears and blood to most devs during their careers.
> - Typescript as a first class citizen
This doesn't seem to add much besides saving you from installing `tsc` or `ts-node`, and you also lose the choice of which TypeScript compiler version you use.
- An actual `window` global with familiar browser APIs
Node.js has a `global` global object and the only API I would understand having in common with the `window` object is the `fetch()` API.
- Sandboxing w/ permissions
Sandboxing is so basic that any large project will have to enable all permissions.
- URL-based imports (no need for NPM)
I would consider this a disadvantage.
- Bundling into self-contained binaries
Again, I would say that this is rarely useful in a world where a lot of operations use container technology.
- Things like top-level-await which Node.js still treats as experimental.
This is trivially solved by anonymous self-executing functions
- Better-designed APIs than the Node standard lib (esp. when it comes to promises instead of callbacks)
I think that this is the strongest advantage, however I would argue that this is not a reason to start a completely new backend platform. Also, I think that it might be a disadvantage in some high performance scenarios because Promises are much, much slower than callbacks currently.
Depends on your point of view. With TypeScript being built in, you don't have to think about using tsc or whatever version of TypeScript you have. It's just what version of Deno you use. If someone doesn't like that, then they still can have the option of using TypeScript by itself.
> Node.js has a `global` global object and the only API I would understand having in common with the `window` object is the `fetch()` API.
It also supports `addEventListener`, which is commonly used by browser code on the window object.
Just the existence of something defined as `window` makes more sense than a `global` which never existed in browsers in the first place.
> Sandboxing is so basic that any large project will have to enable all permissions.
That's pretty dismissive. Why should an app that doesn't interact with the file system be allowed to write or even read from it? I don't know how this feature can be considered a drawback. Don't like it? Don't use it. I don't see how it detracts objectively from Deno.
> I would consider this a disadvantage.
Then you can still use NPM. Others of us get the option to just import packages from a URL instead of publishing it to a central repository.
> Again, I would say that this is rarely useful in a world where a lot of operations use container technology.
Why? Building Docker images requires extra software, Linux images, time spent running apt-get or apk, time spent downloading and installing your runtime of choice, and so forth. Having Deno build a binary can give you a bit of a shortcut in that you have one tool for running and bundling code, and you don't need to deal with as many OS-level nuances to do so. Docker and k8s are there for anyone who needs something beyond that.
> This is trivially solved by anonymous self-executing functions
That's your opinion. Just promise me you don't go on to say that JS is a bad language, because people keep saying that yet are opposed to reducing complexity they consider "trivial". If using IIFE for the mere purpose of accessing syntax makes more sense to you than making `await` syntax available, then I really don't know what to tell you. What exactly is the argument for not implementing this feature besides "all you have to do is type some extra characters", to loosely paraphrase you.
> I think that this is the strongest advantage, however I would argue that this is not a reason to start a completely new backend platform. Also, I think that it might be a disadvantage in some high performance scenarios because Promises are much, much slower than callbacks currently.
I honestly have to wonder if you are joking. This is exactly why people invent new backends, new libraries, and new languages.
My only response to your point about Promises is that perhaps one shouldn't be using JavaScript if Promises are that much of a bottleneck. What you're saying is totally valid, though.
And is that really worth dumping loads of money into developing further? I just find it hard to believe people are going to bother with Deno any time soon - we’ve gone too far down the NodeJS road.
It depends on migration effort. Take TypeScript, for example: it's so similar to JS that migrating a codebase is not that painful. If the standard library and package manager prove to be highly useful, we'll see two possible scenarios that aren't mutually exclusive:
1. People migrating to Deno
2. Newer nodejs version follow what Deno has
In the end, it's good for us.
Sandboxing doesn't sound so unique or innovative when WASM is coming along and doing the same thing, and with a much wider audience and thus more likely to have massive traction.
As a glaring strawman: if you expose eval to WASM, it will not help you.
Deno's sandbox will allow you to create a dedicated worker with no network/disk access to handle sensitive information, or to force your application to use a specific worker as a proxy (by making it the only thing with network access).
The other feature is TypeScript as a first class citizen which is pretty great for devs.
> Not every use-case of server-side JavaScript needs to access the file system; our infrastructure makes it possible to compile out unnecessary bindings. This allows us to create custom runtimes for different applications: Electron-style GUIs, Cloudflare Worker-style Serverless Functions, embedded scripting for databases, etc.
So it's basically more of a Redhat approach to making money from open source? They intend to build tailored services on top of Deno for companies that request them?
My read was that anyone could more easily configure a runtime to expose (or not expose) the underlying bindings that it requires, vs. just having them all in there by default.
I think "us" in that statement is the Deno community, not Deno the company.
But maybe I'm wrong.
I still find it strange that there is no mention of it in the post...
I think the point being made here is that it will be easier to create purpose-built runtimes that only provide needed bindings, making them more secure and possibly also more performant?
JS/TS as languages are here to stay, so let's give them the best possible ecosystem.
Then what are all these modules, if not a standard library? https://nodejs.org/api
What's a "standard" for you? Whatever definition, I think it implies there are > 1 implementations of it.
Node.js is based on CommonJS APIs, an initiative led and implemented by earlier server-side JavaScript (SSJS) runtimes at the time, such as v8cgi/TeaJS, Helma, etc., until Node.js dominated SSJS around 2011. Remember, SSJS is as old as the hills, starting 1996/7 with Netscape's LiveWire, and arguably has seen even longer use than Java as a mainstream server-side language.
Also not a "standard" in the above sense is TypeScript.
As to "standard library", the core packages on npmjs plus package.json are the standard library of Node.js on top of core Node.js APIs, and were developed in that spirit.
Keep up the great work Ryan, Bert & team. Exciting times!
We were consultants scaling node to production for a major international bank, circa 2016.
Love the security improvements in deno, will have to give it a look.
The leading app framework for that is Next.js, and I hope the Rauch Capital investment signals Vercel will be supporting Deno.
Anyone know?
$ deno run --unstable --allow-write=output.png https://raw.githubusercontent.com/crowlKats/webgpu-examples/f3b979f57fd471b11a28c5b0c91d0447221ba77b/hello-triangle/mod.ts
Download https://crux.land/2arQ9t
Download https://crux.land/api/get/2arQ9t
error: Import 'https://crux.land/api/get/2arQ9t' failed: 404 Not Found
at https://raw.githubusercontent.com/crowlKats/webgpu-examples/f3b979f57fd471b11a28c5b0c91d0447221ba77b/deps.ts:4:0
(A dependency got removed?)

Another one:
$ deno run https://deno.land/v1.8/permission_api.ts
error: An unsupported media type was attempted to be imported as a module.
Specifier: https://deno.land/v1.8/permission_api.ts
MediaType: Unknown
(The site is a 404 returning status code 200... just... why?)

> But JavaScript and TypeScript scripts calling into WebAssembly code will be increasingly common.
Why is WebAssembly a key concept here? How does Deno use it?
This is analogous perhaps to C# and F# within .NET currently. C# and F# have the same BCL (base common layer) so you can use C# code in F# and vice versa. WebAssembly is like that and much more.
.NET BCL is the Base Class Library. Outside of a few classes in it that are fundamental to the runtime (e.g. Object, String, Array), it's not actually required for cross-language interop on .NET platforms. E.g. if you have two different C compilers both outputting CIL, they can interop just fine without any objects in the picture. WebAssembly interop is really more like that, and doesn't have the high-level object model layer that .NET also has (and which is used in practice).
Clicking on 'Learn more about Deno Deploy' leads to https://github.com/apps/deno-deploy , which does not tell me more.
What does 'act on your behalf' mean for Deno Deploy?
Anyone else running into this?
A few days ago, a new version was released (87). In certain situations, when a silent upgrade happens while you're using the browser, it displays an "Oops, something went wrong" notification with a button to refresh. If you close and reopen Firefox, the problem vanishes. It's less of a crash and more likely a problem with freeing/monkeypatching resources.
I have run into the same problem on Linux, but I have a quite complicated Firefox configuration with at least a few extra profiles (about:profiles).
Genuine question (I assume it is, but presumably it was before with c++) - it just strikes me that once something becomes as successful as node, and given that nothing is ever perfect it might be useful to clarify why the technical insight might be better this time around - at least regarding the idioms of the underlying tech; the added experience at the architectural and implementation side a given.
I haven't fully investigated in a few years, but isn't it still true that LuaJIT is faster than V8 JavaScript? The last I saw, it was outperforming V8 by a lot. The practical use of LuaJIT is still very niche though. The lack of a comprehensive standard library, and being forever stuck on Lua 5.1, makes it even less generally appealing. I still love it for programming Nginx though.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Why is the Mozilla Corporation an investor in a Chrome-based technology startup?
And all Servo developers according to: https://paulrouget.com/bye_mozilla.html
My only gripe with the Deno company is that taking investor funding is a double-edged sword. Yes, they'll get to hire very skilled developers. However naturally the investors want a tidy exit, and I wonder if that would be to be bought out by Amazon, Microsoft or Google.
Edit: Just realized that there's a key difference in that Deno does not have something like NPM to be bought and sold because dependencies are URLs and thus decentralized. Also, Deno itself is open-source.
I can see that Deno.listen returns an object which implements reader and writer interfaces, but it isn't clear to me how to look for events, such as disconnect or that new data is available.
I wish there were examples showing how to correctly parse frames or implement protocols.
I'm sure these things will be expanded over time, partly by programmers in the community, but from the outside, things are still a bit rough.
Well, I wish they would stop using it period, but at least in the browser it makes some sense.
Edit: to be clear, I have no beef with TypeScript, Dart, ClojureScript, and the many other languages that compile into JS. It's JS itself I have issue with. I feel like it gives too much flexibility to young programmers to screw things up. There don't seem to be enough safeguards or training wheels. On large projects it's my nightmare.
One of Deno's main selling points is that it runs TypeScript out of the box?
I know this might be hard to see, but Rust is actually in the same domain. It is also, among other things, enabling product/frontend/web engineers to build backend/native/browser-less applications.
I'd bet Rust will be more successful here, especially given its amazing ability to change itself and innovate.
Care to explain? I can imagine web programmers being productive in deno in a few minutes vs however long it takes to learn an entire new language, not to mention a language that requires memory management.
> "A language empowering everyone, but especially folks who thought that systems programming wasn’t for them.”
Many developers, I think, don’t look past web-first abstraction layers.
I can’t tell you how many times I’ve seen CLI tools which are huge chunks of node wrapping a thin bash command. They are multiple files, orders of magnitude larger than they need to be, and require external dependencies because these developers are fixated on their proverbial hammer.
It uses comet-stream instead of WebSockets.
But it's fully "joint" parallel on all cores.
In my opinion, the best part of node is (or was) that it didn't adhere to the browser APIs. That brought in some fresh air, and gave us buffers, native bindings, streams etc.
Ryan and the team should be capturing some of the value produced by their creation. And because Deno is a developer tool, it actually captures only a small part of the whole value while enabling much bigger value creation!
Trying to sell something based on FUD is always a bad sign.
However, I personally would prefer Go or .NET Core for my backend any day. We need to wait and see where it's going ...
Good luck and success anyway!
With respect, and without impugning your right to make this criticism, I find your criticism shallow.
I don't know what your particular circumstances are, but I see your view expounded a lot by developers who are getting their salaries from companies that can afford to pay them because they took VC money in the first place.
We are certainly not entitled to Deno for free (although the MIT license is in their best interests for now). I am glad they found a way to sustain its development for the near future.
No, they figured out a way to delete that problem.
VC money isn't a gift. It's a loan.
Chrome vs chromium etc
My sense is that GPL3 gets a ton of criticism on HN, but isn't it the perfect defense against freeloaders?
* license the code for proprietary use in your stack
* use GPL3 if you have a non-commercial use, and are willing to accept the requirement to open source your own code.
I don't understand why this option isn't used more by open source projects that want to be able to fund themselves.
Can anyone explain? (Even better if there are examples / case studies)
One possible reason is that such dual licensing requires copyright assignment from external contributors.
It is said that they can get away with having it as Public Domain because a key part of their business is SQLite's reliability which is asserted by their large and closed source test codebase.
Also, it's nice that they're using Tokio for the HTTP server instead of a JS implementation (from what I understand). I want to see Deno near the top of the TechEmpower benchmarks.
Haha, this made me laugh hard, stopped reading
Every time I read something like this I realize how much in the minority I am. I am not a web developer. I have never written JavaScript before in my life. I hate working with “web-first abstractions”. I feel like it is just massive bloat on top of true native application. But given the popularity of things like electron, react-native, Node, and Deno I don’t speak for the majority.
And the thing is, I don’t know if I just learned web dev if I would love this new approach to software that is eating the world and I would “get it”. Or if it just exists because JavaScript developers don’t want to learn something new.
I'm a fairly seasoned developer with experience shipping things written in Java, Scala, Ruby, Python, Perl, and JavaScript/TypeScript to large and high traffic systems and services. The tooling and developer experience of working with TypeScript is still the most pleasant I've interacted with. On the UI side, it isn't even close.
Deno is in the overlap:

[ASCII Venn diagram: "coders passionate about democratization of technology" overlapping "best coders in the world"]

There is just enough overlap to bring any/all of the best ideas from the non-web world to web tech.

It's probably a big reason. But if you think about it from the other angle... you don't want to learn Javascript, which would be new to you :-)
I think you'd be surprised how many modern applications are written in JavaScript or Python.
One of the more prominent ones is VSCode.
There is a case to be made for programmers being at least familiar with Unix utilities and shell scripting so that they can unlock the superpowers described in that anecdote.
However, I largely agree with your sentiment - I pretty much do all of my scripting in JavaScript and Python and would not be particularly happy to have to deal with a large bash script.
https://www.destroyallsoftware.com/talks/the-birth-and-death...
That was what propelled Atom and VSCode.
What you said is valid of course when you look at Slack and such, but you omitted the most important cross-platform target.
Your app can run on Windows/Linux/Mac .. and the Web.
But in a serious answer to your last sentence ” And the thing is, I don’t know if I just learned web dev if I would love this new approach to software that is eating the world and I would “get it”. Or if it just exists because JavaScript developers don’t want to learn something new.”
My opinion is the answer is a resounding “No, you will not just get it”.
This is a complicated question of course, but to distill my thoughts down:
1) You do not need to view web development as a threat to your skill set. You didn’t say specifically what stack you work in, but I have to believe that you will be able to continue making a living in it.
2) Tools like Deno are specifically designed for and intended for web developers to be able to easily leverage their skill set in other areas like systems.
So if you’re not a web developer, and you don’t have a particular interest in learning web languages/APIs, then don’t worry about it. Just because it’s trendy right now doesn’t make it a better approach technically speaking. It’s quite possibly worse than the approaches you know.
So what I’m saying is this tool isn’t meant for you. And that’s ok. Just because it makes HN front page and web development is huge right now, doesn’t diminish the tech you know to be good.
Last sentence of the blog post: ”The Deno company hopes to enable the millions of web programmers out there to maximally leverage their craft in other domains.”
Let's go with WebSockets as an example. You decide you need them in the browser. This will likely mean some other component will have to support them too. By now you need a pretty good reason to reimplement it over something else. If you need a handshake you may exploit the fact that WebSockets open with a standard HTTP request. Your handshake over another socket may look pretty different. DDoS services, proxies etc may support WebSockets but not arbitrary TCP or they may support it differently. A WebSocket "message" doesn't exist in plain TCP, will you make your TCP protocol work the same or will you handle frames differently?
Point is - your choice may be between just WebSockets or WebSockets AND something else.
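To make the handshake point concrete: because a WebSocket opens with a plain HTTP Upgrade request, the server-side handshake reduces to echoing back a transformed key. Per RFC 6455, the response header is base64(sha1(clientKey + a fixed GUID)). A minimal sketch (using Node's crypto purely for illustration; `acceptKey` is my own name):

```typescript
import { createHash } from "node:crypto";

// Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
const WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

// Given the client's Sec-WebSocket-Key header value, compute the
// Sec-WebSocket-Accept value the server must send back to prove it
// actually speaks the WebSocket protocol.
function acceptKey(clientKey: string): string {
  return createHash("sha1").update(clientKey + WS_GUID).digest("base64");
}
```

A raw TCP protocol would need to invent its own equivalent of this negotiation, which is part of the trade-off the comment above describes.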
Looks to me like they don't want to write stuff twice, and they would have had to write for the web anyway.
The web stack is not good by any measure. The native GUI toolkits are suffering from abandonment, so the web-based toolkits are among the best available (but not at the top). But if you don't have any reason to expect to write web code, your life will be better if you ignore the stack.
It takes time to learn an ecosystem, and when people know one ecosystem and not another, most people would rather do high skill, high value work in the ecosystem they know than start over as a noob in a new one.
The only thing unique about the web / JavaScript land is its ubiquity and size due to its place as the language of the browser. So anyone who can make an abstraction that lets JavaScript developers do stuff outside the browser without learning much new has a virtually guaranteed large audience of developers who are likely to think “I would love to be able to build X, I just don’t have time to learn a new ecosystem.” Those folks are then thrilled to dive in and use the new abstraction. And there are a lot of those people. And that’s why we are all stuck using Electron apps for so much stuff. :)
But that doesn’t have to be a bad thing. Electron can evolve to be less of a resource hog, and better alternatives are being tested all the time. The same is true for other non-browser applications of JavaScript.
I don’t know if this vision is reality, but I think that it may be that we’re in the early days of a transition toward the browser stack being the new “GUI.” Which is to say, back in the 80s there was a lot of debate around GUIs and whether they were a good idea etc., and while most people liked them to some degree, they also lamented the idea of losing the command line. But in the end, GUIs didn’t shrink the number of CLI tools in the world, rather they increased the size of the domain that computers are used for my making it more accessible to more people. I think that so far the web vs native debate seems to be following a similar trajectory.
The old way gets _your_ job done _effortlessly_. idk, I guess that’s not ideal in the meat world.
Most popular, I can agree. Fastest, & only one with industrial standardization process? Have they met Erlang?
edit: you have to be kidding me, downvoted to oblivion for an honest observation. Sorry I hurt javascript's feelings.