What's your opinion? Ditch Docker and put the Erlang VM on the host OS? Ditch hot code loading and swap containers the usual way? Some middle ground?
> if you can avoid the whole procedure (which will be called relup from now on) and do simple rolling upgrades by restarting VMs and booting new applications, I would recommend you do so.
Erlang grew out of challenges the telecoms industry faced, such as: what do you do when blue-green isn't an option? Think of an in-use packet switch that is the only point of contact between two networks. There is no way to take the switch down for maintenance without some interruption in service, which gets messy when dealing with timeouts. In his thesis, Armstrong gives another example [2]:
> Usually in a sequential system, if we wish to change the code, we stop the system, change the code and re-start the program. In certain real-time control systems, we might never be able to turn off the system in order to change the code and so these systems have to be designed so that the code can be changed without stopping the system. An example of such a system is the X2000 satellite control system developed by NASA.
This power comes at a cost, though. LYSE again:
> It is said that divisions of Ericsson that do use relups spend as much time testing them as they do testing their applications themselves. They are a tool to be used when working with products that can imperatively never be shut down.
The point being: hot code reloading is an additional feature that can come in handy, but for most of HN's audience it probably won't be relevant; the cost outweighs the benefit when you can just blue-green deploy instead.
[1] http://learnyousomeerlang.com/relups#the-hiccups-of-appups-a... [2] http://www.erlang.org/download/armstrong_thesis_2003.pdf
In other words, you can compare the Erlang virtual machine to a container itself, and everything old is new again!
Typical use cases include several gigabytes of in-memory state that takes a long time to read in and warm up on redeploy, or a large number of long-running TCP connections.
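That first use case hinges on one property, sketched below in Ruby to match the example later in this thread: a hot reload swaps code, not state, so live objects keep their warmed-up data. The Cache class and its keys are invented for illustration.

```ruby
# Hypothetical Cache class: redefining methods in place leaves
# existing objects -- and the state they hold -- untouched.
class Cache
  def initialize
    @store = {}            # stand-in for gigabytes of warmed-up state
  end

  def put(key, value)
    @store[key] = value
  end

  def get(key)
    @store[key]
  end
end

cache = Cache.new
cache.put(:users, [:alice, :bob])

# "Hot reload": in a real app this new definition would arrive via
# load("cache.rb"); here we just reopen the class inline.
class Cache
  def get(key)
    @store.fetch(key, :miss)   # new behavior: explicit cache-miss marker
  end
end

cache.get(:users)    # => [:alice, :bob]  (state survived the reload)
cache.get(:unknown)  # => :miss           (new code is live)
```

The warmed-up hash never had to be rebuilt, which is the whole point when rebuilding it takes hours.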
For most other uses, we just do rolling upgrades in Erlang as everyone else is doing. It is somewhat simpler to get to work, and immutable architecture is to a certain extent easier to manipulate.
We're using Erlang as the primary language environment for our IoT product for a lot of reasons, but one big one is hot code loading and a very robust release-upgrade environment with a lot of control over the process (including restarting everything inside the VM if that's what we wish to do).
For our product, a digital light switch / dimmer, high uptime guarantees are a very important requirement, and Erlang has it all plus many other wonderful features.
You can do a hot code load in Ruby using the Kernel#load() call. It won't alter functionality currently on the call stack, but it will change the functionality of everything not on the call stack. With some sympathetic design, you can achieve hot code loading for high availability in Ruby.
$ cat hi.rb
def method
  puts "hi"
end
method
load("hello.rb")
method
$ cat hello.rb
def method
  puts "hello"
end
$ ruby hi.rb
hi
hello
You must engineer your application to execute load at an appropriate moment, and that's it.
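One way that engineering could look, as a hypothetical sketch: trap a signal and have the main loop perform the load at a safe point. The file name handlers.rb and the choice of SIGHUP are assumptions, not from the comment above.

```ruby
# Sketch: wire Kernel#load into the app so an operator can request a
# hot reload. SIGHUP and "handlers.rb" are illustrative choices.
$reload_requested = false
Signal.trap("HUP") { $reload_requested = true }  # keep trap handlers tiny

# Call this from the main loop, between requests, when nothing from
# the reloaded file is on the call stack.
def maybe_reload(path)
  return false unless $reload_requested
  $reload_requested = false
  load(path)   # everything not currently executing picks up new defs
  true
rescue ScriptError, StandardError => e
  warn "reload failed, keeping old code: #{e.message}"
  false
end
```

A serve loop would call maybe_reload("handlers.rb") on each iteration, and `kill -HUP <pid>` requests the swap. The load is deliberately done outside the trap handler itself, where running arbitrary code is safer, and a failed load leaves the old definitions in place.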
However, I wonder if this is really equivalent to what Erlang does. I remember http://rvirding.blogspot.it/2008/01/virdings-first-rule-of-p... Except that you can clone the car into a controlled environment and test the whole procedure before doing the actual replacement.
Someone wrote a module for Elixir that uses inotify (and similar) to -I think- watch .beam files for modification and perform the required hot reloads automatically.
I would be reluctant to run this in production, and I can see situations (even in development) where this could trigger unwanted code purging and would be disastrous, but it's a pretty neat thing to have and -it seems- a must for Web Dev people.
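For flavor, here's roughly what such a watcher could look like in Ruby (the module described above is Elixir and uses inotify; this sketch polls mtimes instead to stay dependency-free, and the class name and interval are made up):

```ruby
# Sketch of a dev-only hot-reload watcher: poll file mtimes and
# load() whatever changed. Hypothetical CodeWatcher class.
class CodeWatcher
  def initialize(paths)
    @paths = paths
    @mtimes = snapshot
  end

  # Returns the list of files that changed (and were reloaded).
  def poll
    current = snapshot
    changed = current.select { |path, mtime| @mtimes[path] != mtime }.keys
    @mtimes = current
    changed.each { |path| load(path) }   # hot-reload each changed file
    changed
  end

  private

  def snapshot
    @paths.select { |p| File.exist?(p) }
          .to_h { |p| [p, File.mtime(p)] }
  end
end
```

Usage would be something like `watcher = CodeWatcher.new(Dir["app/**/*.rb"])` followed by `loop { watcher.poll; sleep 1 }` in a background thread. As with the Elixir module, this belongs in development only: a half-written file saved by an editor gets loaded just the same as a finished one.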
I'm asking because I don't have enough context to know why you want to do what you're asking to do.
Working for 4 years in an Erlang environment where hot loading is the norm makes me wish for it everywhere! Why do I have to reboot to fix kernel bugs in TCP? :(
[1] the load balancers I have access to where we host had more downtime than our hosts, so not actually helpful