slightly /s
kubectl apply is 1 command!
ansible-playbook is 1 command!
sh ./do-the-thing.sh
you get it
Sure, you just need to reimplement light-weight threading with preemptive scheduling prioritizing latency over throughput, extremely robust fault tolerance with a supervision hierarchy, and runtime introspection with code hotloading capabilities. Maybe you could add frictionless distributed system support as well.
No big deal, why not right?
> light-weight threading with preemptive scheduling prioritizing latency over throughput
Rust has Tokio for light-weight threading which might well be sufficient for the majority of use-cases.
> extremely robust fault tolerance with a supervision hierarchy
One could argue that Rust's compile-time guarantees, together with something like the Result type, make such a supervision hierarchy not quite necessary, and that a few "manually implemented" error boundaries are sufficient. This also covers errors like network hiccups.
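To make that concrete, here is a minimal sketch of such a "manually implemented" error boundary in Rust, using only Result from the standard library. The names (flaky_fetch, NetworkError) and the retry policy are hypothetical, invented for illustration:

```rust
use std::fmt;

// Hypothetical error type standing in for a network hiccup.
#[derive(Debug)]
struct NetworkError(&'static str);

impl fmt::Display for NetworkError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "network error: {}", self.0)
    }
}

// A flaky operation: fails the first `fail_times` attempts, then succeeds.
fn flaky_fetch(attempt: u32, fail_times: u32) -> Result<String, NetworkError> {
    if attempt < fail_times {
        Err(NetworkError("connection reset"))
    } else {
        Ok(String::from("payload"))
    }
}

// A manually implemented "error boundary": retry a bounded number of times,
// then surface the last error to the caller instead of crashing.
fn fetch_with_retries(max_attempts: u32, fail_times: u32) -> Result<String, NetworkError> {
    let mut last_err = NetworkError("never attempted");
    for attempt in 0..max_attempts {
        match flaky_fetch(attempt, fail_times) {
            Ok(v) => return Ok(v),
            Err(e) => last_err = e,
        }
    }
    Err(last_err)
}
```

The point is that absence of a supervisor doesn't mean absence of recovery: the Result type forces the failure path to be written down somewhere, whereas a supervision tree gives you restarts without writing that loop yourself.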
> runtime introspection with code hotloading capabilities. Maybe you could add frictionless distributed system support as well
Fair enough points.
I don't think your attitude against the OP is justified though.
Tasks for backend systems are usually pretty homogeneous. I'm not sure how in such cases the overhead of preemption is in any way better than cooperative multitasking.
I haven't seen hot loading for Rust (but a quick search shows there's some out there), and I'm not sure how amenable Rust is to dlopen and friends to force the issue.
Erlang (and Elixir) have a constrained language that allows BEAM to be effectively preemptive in a way that a Rust concurrent runtime can't be. At every function call, BEAM checks if the process should be preempted, and because the only way to loop is recursion, a process must call a function in a finite amount of time. A Rust runtime cannot preempt: if you need preemption, you've got to use OS threads, which limits capacity, or you need to accept cooperative task switching.
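A small Rust sketch of what cooperative scheduling demands of the programmer, in contrast to BEAM's automatic reduction counting. The function name and the checkpoint interval are invented for illustration; a real async runtime would suspend at the checkpoint rather than just counting:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// In Rust, a tight loop cannot be preempted by a userspace runtime: the task
// itself must check, at points *it* chooses, whether it should yield. BEAM
// inserts this check at every function call (reduction counting), invisibly.
fn sum_with_checkpoints(n: u64, should_yield: &AtomicBool) -> (u64, u64) {
    let mut total = 0u64;
    let mut yields = 0u64;
    for i in 0..n {
        total += i;
        // A manual "reduction check" every 1000 iterations. If the loop's
        // author forgets this, no scheduler can take the core back.
        if i % 1000 == 0 && should_yield.load(Ordering::Relaxed) {
            yields += 1; // a real runtime would suspend the task here
        }
    }
    (total, yields)
}
```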
Also, some of us are as anti-typing as you are pro-typing. :)
Assuming ample experience with both, how does one reach this conclusion?
I have yet to see a project of any size that needs to be worked on by multiple teams and is written in an untyped language not descend into a dumpster fire.
In my case I prefer to work with Elixir because of the community: I find it easier to work professionally with Elixir than with some other mainstream languages, as most projects follow the same good practices, use the same tools, and have good documentation.
[1] - https://gleam.run/
That said - if you don't benefit from what BEAM has to offer - I agree Rust is a really attractive alternative.
nil is an especially big problem. Any value could be nil, and this will absolutely bite you over and over. nil even allows you to use square brackets for some reason (some_nil_value[:some_key]) which is a great way to disguise the actual issue.
There is optional type checking with Dialyzer, which is good but has some problems. The warning output can be really hard to read, and unless you're diligent in using it across most of your project, it's not very useful, because you'll end up with 'any' values all over.
Square-bracket dictionary accesses are a code smell; you should be using %{^key => val} = map, or Map.fetch(map, key), or (rarely) Map.fetch!(map, key).
If you do that, managing typing in Elixir just boils down to defining structs to differentiate cases where dictionary A and dictionary B contain similar keys but strictly are not interchangeable.
That the Access syntax works on nil is unfortunate, but it's necessary for things like get_in.
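For contrast, here is roughly what the same lookup discipline looks like in Rust, where absence is a first-class Option rather than a nil that keeps accepting bracket access. lookup_port and the fallback value are hypothetical:

```rust
use std::collections::HashMap;

// The Option-based lookup the comments describe: a missing key is an
// explicit `None` the caller must handle, rather than a nil that silently
// supports further bracket access and hides the real problem.
fn lookup_port(config: &HashMap<&str, u16>) -> u16 {
    // Analogous to Map.fetch/2 plus a default: absence is handled here,
    // visibly, at the call site.
    match config.get("port") {
        Some(&p) => p,
        None => 8080, // hypothetical fallback, stated explicitly
    }
}
```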
Runtime introspection: the ability to log into a running application to inspect its state, and to start and stop processes.
Compile times: Elixir compiles very quickly and has tight feedback loops, whereas Rust compiles slowly and has long feedback loops.
Elixir also doesn't get in your way the way the borrow checker does, letting a programmer just get on with the work rather than getting saddled with borrow-related debugging.
Ecosystem: data pipeline processing via GenStage, Broadway, Flow -- wow. Rust developers should take note of what can be achieved in Elixir. However, Rayon and crossbeam are fantastic. Elixir cannot compete with Rust on performance in the category of pipeline processing, but it has very high marks in other very important categories that need to be considered for professional development.
I don't think that fault isolation is as compelling an advantage over Rust as it is over other languages. Rust makes defensive programming a regular part of development, with unwrap used sparingly. Faults hardly happen because the program isn't designed to crash. In the rare event that a well-designed Rust program does crash, it's probably managed by an orchestrator that will restart it. Both well-designed Elixir and well-designed Rust applications can enjoy very long uptimes if that is a goal.
This is just a starting point of discussion.
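As a rough illustration of the gap being described, here is a bare-bones staged pipeline using only std's channels and threads. This is hypothetical example code, and notably it lacks the supervision and the demand-driven backpressure that GenStage and Broadway provide out of the box (mpsc's unbounded channel applies no backpressure at all):

```rust
use std::sync::mpsc;
use std::thread;

// A minimal three-stage pipeline in plain std Rust: producer -> transform
// -> consumer, each stage on its own thread, connected by channels.
fn run_pipeline(input: Vec<u64>) -> u64 {
    let (tx1, rx1) = mpsc::channel();
    let (tx2, rx2) = mpsc::channel();

    // Stage 1: producer pushes the raw items downstream.
    let producer = thread::spawn(move || {
        for x in input {
            tx1.send(x).unwrap();
        }
    });

    // Stage 2: transform stage squares each item.
    let transformer = thread::spawn(move || {
        for x in rx1 {
            tx2.send(x * x).unwrap();
        }
    });

    // Stage 3: consumer folds the results on the current thread.
    let total: u64 = rx2.iter().sum();
    producer.join().unwrap();
    transformer.join().unwrap();
    total
}
```

Everything above is by-hand plumbing; the Elixir libraries additionally handle restarts, batching, and demand negotiation between stages.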
Because many times you value fault-tolerance and distribution more than performance.
I found others like Lunatic before, but cannot remember right now.
You might be interested in Gleam[1].
Ecto is probably one of the best ways to interact with a DB. Genuinely curious, what other ORMs have you used?
That's very common when you don't know DBs, but DB-savvy developers usually claim the opposite, because the syntax is more familiar.
Because Rust brings none of the benefits of the BEAM ecosystem to the table.
I was an early Elixir adopter, not working currently as an Elixir developer, but I have deployed one of the largest Elixir applications for a private company in my country.
I know it has limits, but the language itself is only a small part of the whole.
Take ML: Jose Valim and Sean Moriarity studied the problem, made a plan to tackle it, and started solving it piece by piece [1] in a tightly integrated manner. It feels natural, as if Elixir had always had those capabilities, in a way that no other language does. To put the icing on the cake, the community released Livebook [2] to interactively explore code and use the new tools in the simplest way possible, something that Python notebooks only dream of being capable of, even after a decade of progress.
But they do not stop there: the documentation is always of very high quality, even for packages not coming from the core developers, and they also regularly release educational material that is worth a hundred times more than a gain in raw speed.
They've set a very high quality standard, and I only noticed how important it is when I stopped programming daily in Elixir and went back to other more hyped or established ecosystems.
That's not to say that Elixir is superior as a language, but the ecosystem is flourishing, and the community is able to extract 100% of the benefits from its tools and to create marvellously crafted new ones that push the limits forward every time, in such a simple manner that it looks like magic.
Going back to Rust: you can write Rust if you need speed, or for whatever reason you feel it's the right tool for the job, and it's totally integrated [3][4], again in a way that many other languages can only dream of. It's in fact the reason I learned Rust in the first place.
I must also say that the work done by the Rust community looks refreshing as well. If you look at the way Rustler works, it is very well thought out, and it made writing NIFs, something that seemed arcane and distant, only for the proverbial mad professor to try, a breeze. Kudos to them.
But the opposite, IMO, is not true: if you write Rust, you write Rust, and that's it. You can't take advantage of the many features the BEAM offers: OTP, hot code reloading, full inspection of running systems, distribution, scalability, fault tolerance, soft real time, and so on.
But of course if you don't see any advantage in them, it means you probably don't need them (one other option is that you still don't know you want them :] ). In that case Rust is as good as any other language, but for a backend, even though I gently despise it, Java (or Kotlin) might be a better option.
[1] https://github.com/elixir-nx/nx https://github.com/elixir-nx/axon
[0]https://github.com/ityonemo/zigler [1]https://podcast.thinkingelixir.com/83
This is where Rust falls short of C#: scaling to the issue at hand. C# can build you a beautiful app at a high level but also lets you dick with pointers and assembly at a low level. Rust insists on pass-by-move defaults and an arcane trait system, which hold it back from being usable in large projects.
Github link for others: https://github.com/maciejgryka/regex_help
IIRC one threat was Rust sharing memory with BEAM, which could exhaust it and cause an OOM crash?
The only proper solution is to audit and understand the code you're running, but hoping it's fine often works too; maybe formal methods, but to a first approximation, nobody uses those. Did you audit all of ERTS and/or OTP? I'm guessing probably not, but it's there to review if you run into a problem.
IMHO, it's not worth worrying about whether BEAM will crash; worry about it not crashing instead. If your Rust NIF ties up a scheduler with an infinite loop, that has the potential to lock up the whole BEAM once another scheduler needs to do something that requires full cross-scheduler coordination.
BEAM can certainly crash on OOM; although I recommend setting a ulimit to ensure it will, because when it crashes, you can recover. I've also run into situations where instead of crashing or being killed by the OS OOM killer (which is close enough to crashing), the OS gets into some tricky to debug state where your application is neither functioning nor killed. Sometimes, you even get into a state where BEAM is making progress, but very slowly; that's a fate worse than death.
If you follow the Erlang philosophy, you'll have a recovery strategy from crashes or other deaths. Heart can be used to turn completely blocked into death, although I never used it professionally. But you've still got to worry about working but not well.
I go into this in the article a little bit, but the Rustler team has made DirtyCpu and DirtyIo macros to help reduce the risks.
The Rust / BEAM memory sharing problem does exist, but it's not nearly as bad as in more traditional C NIFs, because almost all C programs leak memory due to bad manual memory management. Hence all the buzz about Elixir+Rust.
I showed them a piece of software that was mighty fast, Internet enabled, and GUI intensive. They liked the software but asked where I had gotten this particular screen control from. You should have seen their faces when I told them that the whole thing was written by a single person in Delphi.
EDIT: thanks for pointing out where in the article this is talked about.
> Change `#[rustler::nif]` to `#[rustler::nif(schedule = "DirtyCpu")]`
> This tells the Rustler and BEAM to automagically schedule this in a way that won't block the entire world while it works. Again amazing, this is called a DirtyNif and is way more difficult to work with when you are manually using this via C.
Essentially, regular NIFs have to be extremely fast (< 1ms) because the VM can't preempt them - they run on the same scheduler threads the BEAM itself uses. Dirty NIFs solve this by running jobs in a completely separate thread pool ("dirty schedulers"). Rustler's docs explain it succinctly (https://docs.rs/rustler/latest/rustler/attr.nif.html):
> For functions that may take some time to return - let’s say more than 1 millisecond - it is recommended to use the `schedule` flag. This tells the BEAM to allocate that NIF call to a special scheduler. These special schedulers are called “dirty” schedulers.
> We can have two types of “lengthy work” functions: those that are CPU intensive and those that are IO intensive. They should be flagged with “DirtyCpu” and “DirtyIo”, respectively.
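For reference, a sketch of what the two flavors look like side by side in a Rustler module. This will not compile standalone (it needs the rustler crate and a host BEAM application), the module and function names are made up, and the exact init! invocation varies between Rustler versions:

```rust
// Sketch only: requires the `rustler` crate and a host Elixir application.

// A regular NIF: must return in well under a millisecond, because it runs
// directly on a normal BEAM scheduler thread.
#[rustler::nif]
fn add(a: i64, b: i64) -> i64 {
    a + b
}

// A long-running, CPU-bound NIF: the schedule flag moves it onto the dirty
// CPU scheduler pool so it can't stall the normal schedulers.
#[rustler::nif(schedule = "DirtyCpu")]
fn crunch(data: Vec<i64>) -> i64 {
    data.iter().map(|x| x * x).sum()
}

// Register the NIFs under a (hypothetical) Elixir module name; newer
// Rustler versions discover the annotated functions automatically.
rustler::init!("Elixir.MyApp.Native", [add, crunch]);
```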
(Somewhat OT, but since I'm here: excellent article @ peregrine! I really enjoyed the read. Elixir and Rust are such a perfect fit. Plus, some of the specifics will be helpful for certain image-related things I'm actively working on, which is always nice. :) )
Further, Fly builds its Dashboard internally with Phoenix LiveView. We want the Phoenix, Ruby, Laravel, and other communities to grow, because we believe that if they grow, we will too.
I remember when Fly.io used to tout Firecracker but that is just a KVM engine, along with QEMU used on a zillion hosts.
What I'd like to see are customer success stories.
Edit: looks like you have to set up your own cross-region links on DigitalOcean and Vultr. So that interests me somewhat. :)
I see this which mostly seems to be content sites. https://www.wappalyzer.com/technologies/paas/fly-io/ Same on the first forum result: https://community.fly.io/t/customer-success-stories/4882
Erlang is generally considered to be compute-slow (which is generally the case without dropping to nifs).
Either way, I wouldn't expect doing I/O in NIFs or ports to substantially improve throughput, unless you're also moving significant processing into that layer as well, or you're going to end up doing substantially the same level of marshaling work as ERTS does, just in a different language. Setting up a different set of kqueue/epoll descriptors sounds like a lot of work for not much gain too, IMHO; again, maybe io_uring would be useful, but I think you'd be better served to bite the bullet and integrate it with ERTS.
Then add efficient pattern matching on binary data, and networked servers on the BEAM are more ergonomic than in any other language.
All this stuff that the BEAM offers you out of the box can be replicated in any native language with a lot of boilerplate and ceremony.
Your "Hello, <name>" webapp in Rust will probably need two allocations and a string concatenation, while on the BEAM, if constructed as an iolist, it's a single writev syscall, using a static "Hello, " string and a shared "<name>" reference from the parsed HTTP data.
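Rust can approximate the iolist trick with vectored I/O from the standard library. greet_vectored is a hypothetical helper, and note that on a real socket write_vectored may perform a partial write, unlike this Vec-backed illustration:

```rust
use std::io::{IoSlice, Write};

// The BEAM iolist trick, approximated in Rust: instead of allocating a new
// String via format!("Hello, {}", name), hand the kernel a vector of slices
// (a single writev under the hood). The static "Hello, " bytes and the
// borrowed name are never copied into one combined buffer first.
fn greet_vectored<W: Write>(out: &mut W, name: &str) -> std::io::Result<usize> {
    let parts = [IoSlice::new(b"Hello, "), IoSlice::new(name.as_bytes())];
    out.write_vectored(&parts)
}
```

The difference from the BEAM version is that here the zero-copy path is opt-in and per-call-site, whereas iolists make it the default shape of string-producing code.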