But it still has a GC :(. Rust has completely spoiled me with making it easy to minimize dynamic memory allocation and copies, and to know (almost always) deterministically when something will go away.
EDIT: I should also say that if you want to bash on Rust's lack of these things, 3 out of the 4 items I cited have solutions being actively worked on (either at planning, RFC, or implementation phase). I don't think Rust's sigils are going away any time soon, but I have no idea how you'd do that and preserve semantics anyway.
But most likely - you simply do not need a language without a GC. If you look at the sheer amount of applications written in interpreted languages, anything compiled straight to machine code is a win, even with a GC. The interpreter and runtime overhead is so much bigger that a GC does not really matter in them, unless you're talking about highly tuned precompiled bytecode that is JIT'ed like Java and .NET, or natively compiled languages like Crystal and Go. So yes, when compiling to native code, the GC can become the "next" bottleneck - but only after you have removed/avoided the biggest one. And that 'next' bottleneck is something most applications will never encounter. I initially thought of mentioning database engines in the above list of "huge projects with heavy performance constraints", but then I realized a good number of specialized databases actually use runtimes with a GC. Hadoop stack with especially Cassandra, Elasticsearch? Java. Prometheus and InfluxDB? Go.
Just face it: there is a need for something intermediate to fill the gap of a script-like, natively compiled, low-overhead, modern language, and a GC is part of this. The popularity and "I want to be cool so I hate it" trend of Go proves this, and the devops space is getting new useful cool toys at breakneck speed, pretty much exclusively written in Go.
So I really don't get the whole GC hate. If you don't want a GC, there are already many options out there, with Rust being the latest cool kid in town. But in reality there are huge opportunities and fields of application for languages like Crystal and Go. And most likely - you could use such a language, only you don't think so because you have an "oh no, a GC!" knee-jerk reaction.
Absolutely. That doesn't mean I can't want predictable performance or deterministic destruction. I also think it's a shame that we waste so much electricity and rare earth minerals on keeping ourselves from screwing up (i.e. on the overhead of managed runtimes and GCs). Before, I'd have argued that it was just necessary. Having spent a bunch of time with Rust, I don't think so any more, and I'm really excited to see non-GC languages build on Rust's ideas in the future.
> Hadoop stack with especially Cassandra, Elasticsearch? Java. Prometheus and InfluxDB? Go.
Cassandra has a drop-in-ish C++ replacement (Scylla, IIRC?) which supposedly blows the Java implementation away in performance. A magic JIT (and HotSpot is really magic) doesn't make everything better all of a sudden.
In a somewhat recent panel (https://www.infoq.com/presentations/c-rust-go), the CEO of InfluxDB basically admitted that if Rust had been more stable when they started they would have been able to use it instead of Go and would have had to do far fewer shenanigans to avoid the GC penalty.
> Just face it: there is a need for something intermediate to fill the gap of a script-like, natively compiled, low-overhead, modern language, and a GC is part of this.
Indeed. I'm not in denial of this. I made an offhand remark about my personal preferences and what I'd like to see from future languages. I still write a ton of Python for things where speed really doesn't matter.
> "oh no, a GC!" knee-jerk reaction
I don't think having a refreshing experience without a GC counts as a "knee-jerk reaction." I've thoroughly enjoyed not having to tune that aspect of performance, and I remarked on it. I think Crystal shows great promise, and certainly has the potential to offer easier ergonomics than Rust.
Beyond that, however, there are many uses for ownership beyond controlling memory resources. Closing a TCP connection, releasing an OpenGL texture... there are lots of applications of having life cycles built into the code rather than the runtime.
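For contrast, the GC'd-language workaround is to scope the release in code yourself. A minimal Ruby sketch (the file is just illustrative):

```ruby
# In a GC'd language, deterministic release is opt-in: the object's
# memory is still collected whenever, but the underlying OS resource
# is closed exactly here, not when the collector gets around to it.
require "tempfile"

file = Tempfile.new("demo")
begin
  file.write("hello")
ensure
  file.close # handle released deterministically, like Drop in Rust
end

file.closed? # => true
```

Ownership systems make this pattern the default rather than an idiom you must remember to apply.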
EDIT: fixed typo
Actually, two of your three examples are no longer correct: game engines often use a core GC'd heap because that's how Unreal Engine has worked since v3, and Chrome has switched to using garbage collection in the core Blink renderer as well. The GC project is called Oilpan.
The benefits of GC are so huge that it's used even for very latency- and resource-sensitive apps like browsers and AAA games.
Could the answer be lbstanza when it gets there? Lbstanza.org
Regardless, for a language which is meant to operate in the same domain as ruby and be as easy and declarative, not having a GC would be a puzzling decision.
As a side note, I'm curious what areas you are programming in where the presence of a GC is such a downside. Having written almost exclusively in garbage-collected languages over the last few years, it's something I almost never think about (and am happy not to). Of course I don't deny that stricter memory control is sometimes necessary.
A tracing GC means that you either have to deal with potentially long GC pauses or you need a lot of extra free memory at all times to give the GC time to catch up before running out of memory [1].
Go says it can achieve 10ms max pause time using 20% of your CPU cores provided you give it 100% extra memory. In other words, memory utilisation must be kept below 50%.
Cloud/VPS prices scale roughly linearly with memory usage. So using a tracing GC doubles your hardware costs. Whether or not that is cheap depends entirely on what share of your costs is hardware cost and how much productivity gain you expect from using a tracing GC.
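That utilisation ceiling is simple arithmetic; a Ruby sketch using the figures quoted above (the 8 GB live set is a made-up example, the 100% headroom is the number cited for Go):

```ruby
# Back-of-the-envelope version of the tracing-GC memory overhead claim.
live_set_gb    = 8.0
gc_headroom    = 1.0 # 100% extra memory for the collector to work in
provisioned_gb = live_set_gb * (1 + gc_headroom)
utilisation    = live_set_gb / provisioned_gb

puts provisioned_gb # => 16.0 (you pay for twice the live set)
puts utilisation    # => 0.5  (memory utilisation capped at 50%)
```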
I would be very interested in learning how much CPU and memory overhead Swift's reference counting has, because in terms of productivity Swift is certainly competitive compared to languages using a tracing GC.
[1] Azul can do pauseless, but I don't know exactly what tradeoffs their approach makes. Their price is too high for me to even care.
So basically the C/C++ replacement niche is the only one left to fill. It would be even better if a new language could replace even GC languages, so I could write fast low-level libraries or websites in a single language, without sacrificing productivity. That would be the Holy Grail, I guess.
Some languages, like Erlang, do slightly better by garbage-collecting Erlang processes individually, so other Erlang processes can continue running during a GC.
The real-time capabilities are not always done purely in software (there are some FPGAs), but when you do rely on software, you often cannot afford to spend even a few milliseconds in GC. In some cases, that would mean killing or maiming someone.
And you are often tied to the hardware vendor's toolchain for a specific DSP, MCU, etc. that only supports C or C++. This is a domain that moves very slowly; currently my most optimistic timetable would be vendor support for a Rust toolchain in 10 or 15 years, and I don't foresee any GC language coming to replace the critical parts written today in C or C++.
Anything where memory or interactivity needs to be tightly controlled is problematic with a GC. Not only that, but a GC doesn't scale as well with lots of threads. Ultimately you need thread-local allocation, since you will eventually be bottlenecked by the fact that typical allocation (with malloc, VirtualAlloc, mmap, etc.) is protected by a mutex, and deallocation suffers the same fate.
Re: application domains, I've recently been doing some work in CPU/memory constrained applications (not embedded, running big >500GB jobs on HPC clusters), and a GC is unfortunately a non-starter for this kind of data processing.
I have also been watching with great anticipation the work being done on "big data" processing with Rust (https://github.com/frankmcsherry/timely-dataflow) and how that might obviate the need for a GC with the various JVM RAM-hogs which dominate that field.
There are also many areas where people work (many of whom provide the tools that programmers of GC'd languages use for their jobs) which can't admit a garbage collector.
For example, I currently deploy Django code (running on an interpreter that needs to implement, not run on top of, a GC) to a machine with a Linux kernel, running nginx, backed by another machine running PostgreSQL, with caching in Redis. None of those very important tools could reasonably offer the needed performance if written in a garbage-collected language.
For another example, I'm typing this (quite lengthy) response in a low-latency application (a browser) which would also be difficult to implement in a garbage-collected language.
You mean like web servers, where GC has been the #1 cause of operational problems for essentially forever?
LLVM has been enabling fantastic new programming languages, and while it has support for a GC, I have not found a GC library that would be easy to embed in a new compiler/runtime environment.
Now there are dozens of LLVM-based languages (or language prototypes) that have different, incompatible implementations of GC with varying degrees of quality. If there was a relatively simple but efficient GC available, it would be much easier to implement a new language on LLVM.
At one point there was a project called HLVM, but it was targeted at implementing JVM- and .NET-style virtual machines. This is not what I'm looking for, and I think the project is dead now.
If anyone knows about a GC implementation for LLVM, I'd really like to take a look. If it's a part of a programming language project but would be relatively easy to rip out of the rest of the compiler/runtime, it's not a problem.
That said, I prefer languages without GC.
For me, the only viable alternative to GC is a substructural type system, as in Rust's case.
Not quite LLVM, but take a look at the Eclipse OMR project.
OMR intends to provide a set of reusable components - a GC, a port library and, given more effort, a JIT - that can be reused in existing language runtimes, or out of which a whole new language can be built.
Bear in mind that .NET can do AOT compilation and the JVM is getting it (and some other non-OpenJDK JVMs already have it).
https://groups.google.com/forum/?fromgroups#!topic/crystal-l...
https://crystal-lang.org/2016/07/15/fibonacci-benchmark.html
I'm not familiar with very many scenarios where one has a garbage collector but doesn't need to free some piece of memory when it's no longer used. Could you clarify what you mean here?
It also seems to allow tweaking for soft realtime systems, e.g. games.
Now, the greater density of concepts that shorthand notation brings can be abused, and too much of that often shifts the cost-benefit ratio further to the cost side for all but the most expert in the language. But that's a problem of excess, not one inherent to its use at all.
GC isn't terrible, though. Azul has struck an amazing balance between latency and eagerness; even if you can't afford it, the technology does exist. If you don't have latency, memory, or embedding restrictions, Rust may be overkill.
You just annotate the functions in which you don't want to use the GC, and the compiler will assert that they don't use it.
The simplest solution is to add the moral equivalent of 'null' - objects that transition to an idempotently destructible state - which solves a lot of complexity in the data flow and analysis (yay!) at the cost of some safety (boo), and nulls (louder boo).
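A Ruby sketch of such an idempotently destructible object (the `Conn` class and its methods are hypothetical, purely for illustration):

```ruby
# An object that can be "destroyed" any number of times: the second and
# later closes are harmless no-ops. The cost is the null-like hole: every
# other method must now check for the dead state.
class Conn
  def initialize
    @open = true
  end

  def close
    return unless @open # idempotent: repeated close is a no-op
    @open = false
  end

  def send_data(bytes)
    raise IOError, "connection closed" unless @open
    bytes.size # stand-in for actually writing to a socket
  end
end

conn = Conn.new
conn.send_data("hi") # => 2
conn.close
conn.close           # safe, does nothing
```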
The Crystal website itself makes a more modest claim than "fast as C" under its language goals: "Compile to efficient native code", which it clearly does.
I have been using it for years already, and it is really performant, somewhere between C and Rust. I am still wondering why so few people use it.
Benchmark: https://github.com/kostya/benchmarks
Nim vs Rust: http://arthurtw.github.io/2015/01/12/quick-comparison-nim-vs...
Performance discussion: http://forum.nim-lang.org/t/2261
Embedded Nim: https://hookrace.net/blog/nim-binary-size/
Nim on LLVM: https://github.com/arnetheduck/nlvm
If you don't have to write cutting-edge games or embedded software for tiny systems, why do you have to care about allocations at all? Today's systems and RAM are so fast that garbage collections don't really matter in most cases. Consider SBCL (compiled Common Lisp), which is almost as performant as Java and C++.
http://benchmarksgame.alioth.debian.org/u64q/lisp.html
I used to develop software in C and C++ for many years, and a garbage collector was the thing I wanted the most. GC-free programming is unnecessarily tough in most cases, unless you desperately need it, as for games and embedded systems.
Which one is more minimalistic, 'new Foo' or a collection of various custom-tuned allocation methods? Which one is more terse, 'myList.Where(foo).Select(bar).Aggregate(baz)' or an explicit for loop?
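To make the comparison concrete, here are the two styles side by side in Ruby (Crystal reads almost identically; the values are arbitrary):

```ruby
nums = [1, 2, 3, 4, 5, 6]

# Terse, declarative pipeline (the Where/Select/Aggregate equivalent):
terse = nums.select(&:even?).map { |n| n * n }.reduce(0, :+)

# The equivalent explicit loop:
explicit = 0
nums.each do |n|
  explicit += n * n if n.even?
end

terse == explicit # => true (both are 56)
```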
Exactly! I cannot agree more.
I have a small test program I port to different languages to test the length of the code and the speed of the program. Of course it only represents a single use case.
* C is first, of course.
* twice as slow, come Pascal, D and... Crystal!
* x3 to x5, come Nim, Go, C++ (and Unicon).
* x6 to x9, come Tcl, Perl, BASIC (and Awk).
* x15 to x30, come Little, Falcon, Ruby and Python.
* x60 to x90, come Pike, C#, Bash.
* x600 to x1000, come Perl6 and Julia.
This list looks byzantine, I know :-) The trends I can get out of it:
* the last 2 are languages with JIT compilation, and that's horrid for short programs.
* the "old" interpreted (or whatever you name it nowadays) languages (Tcl, Perl) are not so bad compared to compiled languages, and much faster than "modern" ones (Ruby, Python). (Again, this is only valid for my specific use.)
* compiled languages should all end up in the same ballpark, shouldn't they? Well, they don't. The more they offer nice data structures, the more you use them. The more they have some kind of functional style (I mean the tendency to create new variables all the time instead of modifying existing ones), the more you allocate, create and copy loads of data. In the end, being readable and idiomatic in those languages means being lazy and inefficient - but what's the point of using those languages if you don't use what they offer? C forces you to use proper data structures and not re-use existing ones; it comes naturally. What is unnatural in C is to copy the data again and again: it is simpler to modify it in place and work on the right parts of it, not to pass whole chunks around every time you need a single bit. In more evolved languages, compilation won't save you with hypothetical magic tricks; it cannot remove the heavy, continuous data copying and moving you instructed your program to do. And that is what made the difference in speed between C on one side, and D, C++ and Go on the other.
EDIT: There's also this: https://github.com/nsf/pnoise
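The copy-heavy versus in-place contrast described above, sketched in Ruby:

```ruby
data = (0...1_000).to_a

# Idiomatic but allocation-heavy: each step builds a fresh array,
# so two throwaway intermediate arrays are created.
copied = data.map { |x| x + 1 }.map { |x| x * 2 }

# In-place: one working copy, mutated in a single pass.
in_place = data.dup
in_place.map! { |x| (x + 1) * 2 }

copied == in_place # => true; same result, very different allocation profile
```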
Two data points (one-off timings of a few lines of code doing the same work load) just don't make for a comparison we should spend time bothering about.
Whatever you think of the benchmarks game, I don't see why we need to waste time with comparisons that don't meet that low standard:
- a few different tasks
- more than a code snippet
- a few repeat measurements
- a few different workloads
>Remember: The cake is a lie, and so are benchmarks. You won’t have 35x increase in performance all the time, but you can expect 5x or more in complex applications, more if it’s CPU intensive.
Could it be startup time? That's less of an issue when the application has started up.
A Fibonacci program is not a very good benchmark anyway.
For now, if you want a fast language with the beauty and productivity of Ruby, check out Elixir [0] and its web framework, Phoenix [1]. I've been using Phoenix for a year, and it's the first framework that I've actually liked more over time. And I've been a web developer for a decade. With its recent 1.0 release, Phoenix is gaining a lot of momentum.
If you want some idea of the performance differences between Phoenix and Rails, see [2] and [3].
[1] http://www.phoenixframework.org/
[2] https://github.com/mroth/phoenix-showdown/blob/master/RESULT...
[3] http://www.phoenixframework.org/blog/the-road-to-2-million-w...
That said, it's a great language worth recommending.
Depends what we mean by fast. I have seen the Erlang VM handle 100k requests per second on a distributed cluster. That's plenty fast. Moreover, because of fault tolerance, it means the ability to have better uptime with fewer people on-call. "Fast" can also be measured to include that: if a system handles 200k requests per second but crashes at midnight and stays down for a few hours, the average "speed" can be quite low. In a laptop demo that's not visible, but in practice that's money and customers lost.
But if fast means "let's multiply some matrices", then yeah, you can probably use Rust or C for that. It all depends on the problem domain.
Not bad really for a language that's meant to be slow at computational stuff :^)
@compile [:native, {:hipe, [:verbose, :o3]}]
I've never used Elixir, but I assume it has a similar performance profile to Erlang, as it shares the VM.
I guess it's a good thing that people like it so much, but it's really starting to feel marketing-y by now.
That's a good sign!
You know why? Because it has a great community and is very friendly to newcomers. Jose, Eric and the rest of the team made that a priority and it shows. It doesn't just mean being nice on IRC; it also means putting usability first, putting more effort into how examples look, how documentation looks, and so on.
If Google invents a language and then proceeds to push and sponsor it, paying authors to work on it, organizing marketing, hackathons, etc., then it is hard to say whether it is popular because of Google's backing or because it has its own merits.
Case in point: LFE (Lisp Flavored Erlang) was created by one of the original designers of Erlang, Robert Virding, has great support for a small FOSS project, and has true macros - but the popularity of Ruby has rocketed Elixir way ahead in terms of repositories and users. Erlang Solutions has it on the site, but it is not as touted as Elixir. People go with what they know, and let's admit it: Lisp is a great language, but not as popular in the web-dev crowd, sans Clojure (which I don't see as so Lispy).
From the early looks of it, Pony (coming from both industry and academia) looks poised to muscle in on Erlang/BEAM/OTP, Elixir and LFE anyway. I personally don't like the syntax, but syntax is not semantics, and you get over it.
Popularity doesn't always win the day if you do something a bit more off the main road, and there is potential to earn more researching what you love: look at kdb+/q devs and jobs, and Haskell has started increasing in uptake by fintech. Go with what you like, or as Joseph Campbell said, 'Follow your bliss', and the rest will fall into place.
But don't listen to me. I spend many waking moments fiddling with J (jsoftware.com). Not actually the most loved or known PL out there. I think the array languages J/APL/K/Q will have their day due to where software and hardware are heading: Multicores, array processing (GPU/FPGA hybrids, custom computers).
As someone who has made the transition from Ruby to Elixir, I'm really, really intrigued by Crystal.
Though, upon a cursory look into the Crystal docs and community, a couple things are clear...
Elixir killed it on all the things surrounding the language.
- docs
- testing
- Slack chan, IRC, mailing list
- package manager (Hex)
- build tool (Mix)
- web framework (plug & Phoenix)
- books from major publishers (Manning, PragProg, etc.)
- ElixirConf
- ancillary teaching (ElixirSips, LearnElixir, Elixir Fountain, etc.)
While the language may not be as computationally performant as some of the others mentioned, all the things above lower the barrier to entry for adoption and make Elixir a more attractive language than some of its counterparts. And it's amazing that a language this young has nailed it on these fronts.
Crystal, on the other hand, is as if I'm writing Ruby.
[1] http://elixir-lang.org/getting-started/pattern-matching.html
[2] http://elixir-lang.org/getting-started/case-cond-and-if.html...
JVM+FP FTW.
I had never worked with compiled languages before I tried Crystal, but I had always had a huge interest in getting into that. When I wanted to learn the compiled ecosystem I looked at languages like Go and Rust, but the learning curve for those was a bit overwhelming for a newbie. A while later I found Crystal, and thanks largely to the simple syntax of the language I learned a ton of new things about compiled languages very quickly. The absolute best part of the language is that it is written in plain Crystal, and I've been looking at their own implementations of various things a lot - something I'd never done before, having worked mostly with Node, Lua and PHP.
Nowadays I can delve into Go documentation, packages are clear to me, and I just understand how things should and should not be implemented to achieve a good level of efficiency and performance. The Little Go Book makes sense, the I/O package is simple, and this is probably all thanks to the syntax of Crystal, the amazing language & standard library documentation, and most importantly the source of Crystal being written in Crystal.
I'm currently working on building a business using Go, because I absolutely need Windows target support - something which Crystal does not yet have. But the second it gets that, I'm moving back. Don't get me wrong, Go is really great and nice to work with - but Crystal is my mentor. Please note that I have not worked with Ruby before, so the whole language was new to me.
To summarize: even if you only wish to learn, Crystal is, in my personal opinion, the best choice to go with.
Of course the Crystal people probably don't have the same number of developers working on it as Go did even early on.
Edit: Go took a little while to support windows, not until around July 2010. See this question from November 2009: http://stackoverflow.com/questions/1717652/can-go-compiler-b...
It does exist unlike Crystal, so you should explain what you mean by poor. I never had a problem compiling a project with CGo on Windows.
The way they interact with different types, fibers and allocation on the stack vs heap, etc. Makes sense?
Edit: To give you an example of how friendly the Crystal lang & API documentation is to developers unfamilliar with the language, let's look at the Iterator: https://crystal-lang.org/api/0.18.7/Iterator.html
It comes with a great "introduction" to what it is, what it does and gives an example of the advantages it has over the Enumerable. It also explains how you can implement your own Iterator.
We can also look at the IO module for Crystal and the io package for Go: https://crystal-lang.org/api/0.18.7/IO.html https://golang.org/pkg/io/
From a beginner's standpoint, you have to see that the Crystal documentation is far friendlier.
"However, Crystal might give incorrect results, while Ruby makes sure to always give the correct result."
https://crystal-lang.org/2016/07/15/fibonacci-benchmark.html
I believe Rust is also implemented in Rust, and Go, after a few years of being implemented in C, now has a compiler written in Go.
Could you expound upon that point a bit? What's the difference, in your mind?
This is a large problem space that you can gloss over by using C as a layer of interaction between your language and the underlying machine, but it (a) makes your language not truly a "system language", and (b) ties you to C philosophy, API/ABI, calling conventions and so on.
Since 1.5 came out, a year ago.
It certainly is great to be able to jump right into Crystal coming from Ruby. It isn't very hard to convert most Ruby code to Crystal -- you just have to go through and "typify" everything. A few methods have different names and of course some don't exist but most of it is there.
My one gripe with Crystal, however, and why I haven't adopted it more generally, is that much of the "Lisp-like" flexibility of Ruby is all but lost. Crystal makes up for some of this with macros, but it doesn't quite cut it. For example, you can't splat an array into a lambda in Crystal. Arguments have to be tuples, which are compile-time bound. Little things like this feel very limiting to an experienced Ruby developer.
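For readers unfamiliar with the idiom, this is the kind of runtime splat Ruby allows (and which, per the comment above, Crystal's compile-time tuple arguments rule out):

```ruby
# Splatting a runtime array into a lambda works in Ruby because arity
# is checked at runtime; in Crystal the argument list must be known at
# compile time.
add = ->(a, b, c) { a + b + c }

args = [1, 2, 3]
add.call(*args) # => 6

# The splatted array can come from anywhere at runtime:
args2 = "4 5 6".split.map(&:to_i)
add.call(*args2) # => 15
```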
Blog: https://crystal-lang.org/2014/12/06/another-language.html
Github Issue: https://github.com/crystal-lang/crystal/issues/681
Scala has type inference, Crystal has optional typing. In Scala, there are certain situations when the type is discernible by the compiler, and can be omitted. For example
val x = 1 + 2 + 3
the compiler infers that x is an Integer. However, omitting type information in Scala is the exception, not the rule. Methods and functions, for example, must have type annotations. In practice, Crystal also infers types. But in Crystal you can omit almost any type annotation, including in method and function definitions. This probably poses a different challenge for the compiler authors. The Type Restrictions section provides some more examples: https://crystal-lang.org/docs/syntax_and_semantics/type_rest...
I was interested in Crystal, but the lack of apps using it in production and of proofs of concept in the field makes me doubt its usefulness.
We've been using Crystal in production (at Protel) for more than 6 months for some heavy-load APIs (100-200 req/s). We've replaced our Rails API running 64 Unicorn workers with just 1 Kemal process, and it's not even breaking a sweat while consuming nearly 100x fewer resources and 30x less CPU.
You can ask me about our experience.
Sure, thanks. Would love to hear that.
A blog post would probably be more appropriate since it will have a wider audience and will be good for Crystal and its community.
ex: non-scoped (everything in foo is added to the global scope)

import foo
{
bar.do()
}

ex: scoped (everything in foo is added to the local scope, and assigned a namespace)

{
bar = import foo
bar.do()
}
I find it much easier to manage programs where there are no "hidden" global variables. It's especially hard when the included files can themselves include files, which all adds to the global scope.

Can you make it faster than C though, please? (Seriously.) I think it might even happen by accident in some cases already. The places where C can be beaten for performance come, in my experience, from design choices in the C standards, from users not understanding or leveraging those things for performance, and from the architecture of the compilation-unit/link process.
Things like the struct layout rules: instead of the compiler organising things to be optimal, it follows those rules for memory layout. Or the calling conventions: you often have to use funky extensions to get efficient function calls.
Other things are the lack of ability to hint to the compiler that, e.g., mathematical structures underlie types and can be leveraged for optimisation, or that const or functional purity can be trusted... etc.
One typical example of this was a few years ago (if I'm not mistaken) in the monitoring world, when Shinken released a Nagios-compatible engine in Python; basically, the reaction in the Nagios community was that the modifications involved in Nagios (C) were just too big to be worth it.
(0) Does Crystal have a lot of undefined behavior like C?
(1) Does understanding Crystal programs require a lot of trial and error just like in Ruby?
(2) How good a job does Crystal do at preventing me from shooting myself in the foot?
A language isn't to be judged just by the amazing programs you can write in it. (Turing-completeness and I/O facilities have that covered.) Far more important are the totally stupid programs that you can't write in it.
To be clear, I have no strong opinions about Crystal and will probably never use it. But comments like yours are simply grandstanding and it's annoying that they are confused for contribution.
(1) Not at all, look at the source implementation of their language implementation. For example, the lexer: https://github.com/crystal-lang/crystal/blob/master/src/comp... Seems pretty clear to me.
(2) Not entirely sure what you mean since that is such a broad case, but as stated, Crystal stdlib is mostly safe.
Nice.
> (1) Not at all, look at the source implementation of their language implementation. For example, the lexer: https://github.com/crystal-lang/crystal/blob/master/src/comp.... Seems pretty clear to me.
How am I supposed to learn the language's semantics from a lexer?
> (2) Not entirely sure what you mean since that is such a broad case, but as stated, Crystal stdlib is mostly safe.
Consider this use case: I spawn five fibers. Can I send the same mutable object to all five? If so, can they attempt to mutate the object without properly taking turns? (e.g., using a mutex)
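A Ruby-threads analog of that question, with five workers mutating one shared object and a mutex making the result deterministic (Crystal fibers would be similar in spirit; this is only a sketch of the scenario, not Crystal's actual behavior):

```ruby
# Five threads mutate one shared object. Without the mutex, the
# read-modify-write on the counter could interleave; synchronizing
# makes the final count deterministic.
counter = { count: 0 }
lock = Mutex.new

threads = 5.times.map do
  Thread.new do
    1_000.times { lock.synchronize { counter[:count] += 1 } }
  end
end
threads.each(&:join)

counter[:count] # => 5000
```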
Pluses of Nim:
* a powerful compiler that can produce C, C++, JS and Objective-C code
* the GC can be completely removed to adapt to the program
* supports parallelism via threading
Pluses of Crystal:
* union types let you mix types in almost every data structure, leaving you pondering whether the language is really strongly typed
* so similar to Ruby that porting a 100-line library (with no fancy metaprogramming) to Crystal is often a matter of a few hours
* green threads suit the HTTP request/response cycle very nicely (as in Go and Erlang), where using threads/processes would consume more memory/CPU
What Crystal still lacks is parallelism, but the core team is working on that.
That said, both are modern, fast, elegant languages, with a good standard library and a vibrant community.
Python syntax with macros => Nim
Ruby syntax => Crystal
https://crystal-lang.org/2014/06/19/crystal-0.1.0-released.h...
The question was asked in the comments, as was the reply. I'm sorry that I can't link to it directly, but it's the seventh comment from the top.
Yeah, I discovered Common Lisp back in 2006 and have been using it ever since …
I discovered Go back in 2009 and have been using it ever since, too.
What does Crystal get me that these two don't?
Edit: I'm getting hella downvoted but I'm leaving this here. Ruby fanboys can't silence me!!! ;)
Choosing a programming language based on the syntax is like choosing your significant other based on looks alone. You're going to be spending a lot of time together, what's inside is what counts.
A language with a nice syntax is easier to learn, easier to read and understand, and delightful to write.
Crystal's syntax is a great differentiator between it and its statically typed, garbage-collecting competition.
It's a personal preference of course.
There is something f~*#ing wrong with the article.
If you want to talk about efficient GC'd languages, you have many choices, most of which have more tooling and mindshare advantages than your language has tooling advantages.
Really, GCed languages are commodities these days. A lot of people have put a lot of work into the fundamental building blocks, and now people are just combining them in various ways.