If you just want reverse-proxying, you can choose between the simplicity of Caddy or the power of Nginx (or Apache). Why would I want to run a JS app (that I know will be less performant) to do that?
[1] https://expressjs.com/en/advanced/best-practice-performance.html

The same reason you'd use Caddy over Nginx or Apache: why use the former over the latter when the latter already exist and are faster? Because someone who knows Go but not C can customize its codebase, just like someone who knows JS but not C or Go can modify Redbird's.
If Node.js is good enough to build and run servers then it's good enough to build reverse proxies.
If you use this logic, you would not be able to run Linux because it is written in C, correct?
The advantage of developing on a platform as similar as possible to the projected production platform is that when you do deploy to a real environment, there are fewer nasty surprises.
I don't have the numbers to back this up, but I would be inclined to believe that a Node.js implemented reverse-proxy would outperform Apache.
- And as another comment mentions, developer productivity may matter more than a minor performance difference - you mentioned Caddy, and the same rationale applies here.
- nginx is also a bit crippled, as useful features (like dynamic reconfig) are only in the proprietary NGINX Plus.
- This has good defaults - having to set up a separate webserver for ACME is a pain; this is way easier.
But actually, you could have set up a pure Python server instead of nginx for the last 10 years with Twisted.
Not that you shouldn't consider using nginx anyway: it deals with caching and load balancing, has great tutorials, is battle tested, and has tons of plugins.
But performance-wise, WSGI is not Python.
Except, it is ruby/python/php slow: https://www.techempower.com/benchmarks/#section=data-r17&hw=...
All of PHP, Ruby, and Python beat Node.js in the above-linked benchmark.
I've used Redbird to create a local dev proxy that can be run via npm scripts (which were already in the repo), so our engineering team didn't have to run the entire constellation of Docker containers if they were only working on frontend code. The only setup required was `npm install`, which they already needed to do.
I do wish you could just `npm install nginx` though
Finally I got rid of it and replaced it with the Caddy webserver.
I’m not ready for full k8s, but traefik gives me ACME and detects my docker containers and routes them for me.
If I only need https redir I’ll use that or nginx.
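For that plain HTTPS-redirect case, the nginx side is only a few lines; the server name below is a placeholder:

```nginx
server {
    listen 80;
    server_name example.com;
    # Send every plain-HTTP request to the HTTPS equivalent
    return 301 https://$host$request_uri;
}
```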
I can see Redbird being useful where you only have Node as an option.
> We have now support for automatic generation of SSL certificates using LetsEncrypt. Zero config setup for your TLS protected services that just works.
I've scripted this kind of Let's Encrypt cert automation before with certbot and nginx, which is fine, but a 'just works' declarative plugin for nginx would be much nicer.
Does anything like this exist? Anyone have experience with using it? Asking here as googling brings up a bunch of outdated forum threads etc to wade through.
Automatic routing via Redis
Could anyone explain to me how this works and what automatic routing means in this context?
To all future users: don't call `pm2 start ...` too often one by one (like in a .sh script) - it won't start all of them, just some (at least on a Raspberry Pi; maybe a faster CPU helps here). Use the "ecosystem" feature for starting multiple services.
Would love to add some middleware to turn this into an OAuth2 proxy.
I don’t know, a last release that long ago doesn’t inspire faith in the health of the project.
But to be frank, we have used node-http-proxy and other stuff before. It doesn't end with development. People use this to reverse-proxy API calls in production, and then the other stuff that this calls into also uses proxying to call more stuff. In general, if you find yourself using this as a "convenient, declarative way to route" in production, you _might potentially_ be digging yourself a hole. The line between "harmless reverse proxying" and "service routing and discovery" can become thin.
If you have to serve a large static file - guess what... no other request gets processed at the same time. It's a neat idea but until we get true multi-threading in Node.js, I'll stick with my nginx, thank you very much.
Node is very happy to asynchronously stream chunks of data over HTTP. You could write a blocking server by collecting the whole response in memory and sending it out all at once, but that's not required by Node.
This is literally the reason Node even exists, nonblocking I/O with a simple threading model. You can have a million open connections all asynchronously nibbling data as the buffers empty.
Without a proxy you're forced to work on only one area of the app at a time, so hopefully something like this will let me run all webpack instances under the same endpoint during dev.
Would this work? Or what’s the issue you are referring to?
Last I checked, webpack does support building multiple scripts (by turning the `entry` config setting into an array or object), but they'll all have the same settings (so that intermediate results used by multiple entry points only need to be built once). If webpack since added support for multiple entire configurations behind a single webpack dev server then that would be splendid of course, but last I checked they didn't.
We have a similar problem at my company, where we build our frontend React code for the browser and for Node, and the latter has subtly different build settings. Our solution is to run the webpack devserver for the browser, a webpack watch mode for the Node.js version, and then we run the Node.js version using nodemon so that it reloads whenever the webpack watch mode rebuilds. The Node.js version contains an expressjs reverse proxy to forward requests for things in the `assets` directory to the webpack devserver. It's messy, but it works. My biggest beef is that it's rather dissimilar from our production setup.
Of course if we'd done node+browser from the start we'd have written our code such that the node version could be run in node without a compilation step, but we were first frontend-only and then added server rendering, and by then the code already made use of a number of webpack-isms (such as weird require("raw!css!stylus!myfile.styl") type of stuff).
The counterpart is a forward proxy, which sits on the client side; a reverse proxy sits on the server side.
Load balancing, not having to run the webserver as root to bind to 80/443, caching responses, buffering, rate limiting, injecting headers, serving static assets, etc.
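Several of those jobs fit in a short nginx config; this is an illustrative sketch (the upstream name, ports, and paths are placeholders), not a production-ready file:

```nginx
upstream app {
    server 127.0.0.1:3000;          # the unprivileged app server
}

server {
    listen 80;                      # nginx binds the privileged port, not the app

    location /static/ {
        root /var/www;              # serve static assets directly
    }

    location / {
        proxy_pass http://app;      # everything else goes to the app
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # inject headers
    }
}
```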