In a web server, here is what I am looking for:
* Fast. That is just a matter of streams and pipes. More on this later. That said, the language a web server is written in is largely irrelevant to its real-world performance so long as it can execute low-level streams and pipes.
* HTTP and WebSocket support. Ideally a web server will support both on the same port. It’s not challenging: a WebSocket handshake is just an HTTP request with an Upgrade header, so you only have to examine the first incoming payload on the connection.
* Security. This does not have to be complicated. Let the server administrator define their own security rules and just execute those rules on incoming connections. For everything that fails just destroy the connection. Don’t send any response.
* Proxy/reverse proxy support. This is simpler than it sounds. It’s just a pipe to another existing local stream, or piping to a new stream opened to a specified location. If authentication is required it can be the same authentication that sits behind the regular 403 HTTP response. The direction of the proxy is just a matter of who pipes to whom.
* TLS with and without certificate trust. I HATE certificates with extreme anger, especially for localhost connections. A good web server will account for that anger.
* File system support. Reading from the file system for a specific resource by name should be a low-level stream via file descriptor piped back to the response. If this specific file system resource is something internally required by the application, like a default homepage, it should be read only once and then forever fetched from memory by variable name. Displaying file system resources, like a directory listing, doesn’t have to be slow or primitive or brittle.
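On the HTTP-and-WebSocket-on-one-port point: a minimal sketch of what "examine the first incoming payload" could look like. This is illustrative only; the function name and the loose header matching are assumptions, not how any particular server does it. Per RFC 6455, a WebSocket handshake is an ordinary HTTP/1.1 request carrying `Upgrade: websocket` and `Connection: Upgrade` headers.

```rust
// Hypothetical sketch: classify the first payload on a connection as either
// a WebSocket upgrade request or a plain HTTP request.
fn is_websocket_upgrade(first_payload: &str) -> bool {
    // A WebSocket handshake is a normal HTTP request with an Upgrade header
    // (RFC 6455), so the same listener can serve both protocols on one port.
    let lower = first_payload.to_ascii_lowercase();
    lower.contains("upgrade: websocket") && lower.contains("\nconnection:")
}

fn main() {
    let ws = "GET /chat HTTP/1.1\r\nHost: x\r\nConnection: Upgrade\r\nUpgrade: websocket\r\n\r\n";
    let plain = "GET / HTTP/1.1\r\nHost: x\r\n\r\n";
    println!("{} {}", is_websocket_upgrade(ws), is_websocket_upgrade(plain)); // true false
}
```

A production server would parse headers properly rather than substring-match, but the routing decision really does come down to inspecting that first request.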
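On the proxy-as-a-pipe point: a minimal sketch, assuming the simplest possible model. With real sockets you would spawn two of these, one per direction, on cloned `TcpStream`s; the generic function here works with any Read/Write pair so it can be shown without a network.

```rust
use std::io::{copy, Read, Write};

// Sketch of "a proxy is just a pipe": move bytes from one stream into
// another until EOF. Which side is upstream decides the proxy direction.
fn pipe<R: Read, W: Write>(upstream: &mut R, downstream: &mut W) -> std::io::Result<u64> {
    copy(upstream, downstream) // returns the number of bytes moved
}

fn main() -> std::io::Result<()> {
    // In-memory stand-ins for two connections.
    let mut src = std::io::Cursor::new(b"HTTP/1.1 200 OK\r\n\r\nhello".to_vec());
    let mut dst: Vec<u8> = Vec::new();
    let n = pipe(&mut src, &mut dst)?;
    println!("{n} bytes piped");
    Ok(())
}
```

This is of course the easy half; the hard parts (buffering, backpressure, in-flight requests) are exactly what a later comment in this thread raises.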
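On the read-once-then-serve-from-memory point: a sketch of one way to do it, assuming a `OnceLock` cache. The path and fallback content are illustrative.

```rust
use std::sync::OnceLock;

// Sketch of "read only once, then forever fetched from memory" for an
// internally required resource such as a default homepage.
static HOMEPAGE: OnceLock<Vec<u8>> = OnceLock::new();

fn homepage() -> &'static [u8] {
    HOMEPAGE.get_or_init(|| {
        // First and only disk read; path is hypothetical.
        std::fs::read("/tmp/test/index.html")
            .unwrap_or_else(|_| b"<h1>default</h1>".to_vec())
    })
}

fn main() {
    // Two calls, one read: both return the same cached allocation.
    assert_eq!(homepage().as_ptr(), homepage().as_ptr());
}
```

Every request after the first is a memory lookup, which is the behavior the bullet above asks for.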
I'm no expert, but that doesn't sound right to me. Efficiently serving vast quantities of static data isn't trivial, Netflix famously use kernel-level optimisations. [0] If you're serious about handling a great many concurrent web-API requests, you'll need to watch your step with concerns like asynchrony. Some languages make that much easier than others. Plenty of work has gone into nginx's efficiency, for example, which is highly asynchronous but is written in C, a language that lacks features to aid with asynchronous programming.
If you aren't doing that kind of serious performance work, your solution presumably isn't performance-competitive with the ones that do. As you say, anyone can call their solution fast.
[0] [PDF] https://freebsdfoundation.org/wp-content/uploads/2020/10/net...
nginx is a big state machine built around epoll, and there's not much you can do beyond the raw kernel ABI anyway (of course using safer and more powerful tools helps with the general quality of the end result, but not really with speed). it took many years for the io_uring ABI to emerge (and even using it efficiently is not trivial).
I'm finding it hard to make sense of your comment: I can't reconcile some of the stuff you're saying. My gut feeling is you're either ridiculously smart, so smart that defining and implementing a security rules engine for a web server is something genuinely trivial for you, and the world has a lot to learn from you. Or, you're really, really not aware of how much you don't know, so much so that you're going to end up doing something stupid/dangerous without realising.
Either way, an example of a supposedly fast web server written by you should clear it up pretty quickly.
Sorry, because this feels a little rude, and I don't mean it to be, but you're contradicting quite a lot of widely held common sense and best practice in a very blasé way, and I think that makes the burden of proof a little higher than normal.
that being said, it's common for new servers to have old vulns; not many coders will go over CVE reports of Apache and nginx and test their own code against old vulns in those.
i do find a lot of claims about performance or security are often unsupported, as with this server. it just says so in the readme but provides no additional context or proof to back up those claims.
my thought is that the original commenter got triggered by that, perhaps rightfully, and is pointing out this fact more than anything. if you want to claim high performance or security, back it up with proof.
the simple fact that it's in rust doesn't make it more secure, and using async in rust doesn't imply good performance. it could in both cases. where's the proof?
Just take a look at https://www.rfc-editor.org/rfc/rfc9110#section-5.5 to get an idea of how any choice made by a web server can blow up in your face.
I never had to write a proxy and am grateful for it. You have to really understand the whole network stack: window sizes, the effects of buffering, what to do about in-flight requests, and so on. Just sending stuff from the file system is comparatively easier, where you have things such as sendfile, provided you get the security implications of file paths right.
Here’s the rocket.rs source I used:
#[rocket::main]
async fn main() -> Result<(), rocket::Error> {
    rocket::build()
        .mount("/", rocket::fs::FileServer::from("/tmp/test"))
        .ignite()
        .await?
        .launch()
        .await?;
    Ok(())
}
And do `mkdir /tmp/test && dd if=/dev/urandom of=/tmp/test/bigfile bs=1G count=1` to create a 1G file, and run `time curl -o /dev/null localhost:8000/bigfile`. My nginx config:
worker_processes auto;
master_process off;
pid /dev/null;

events {}

http {
    sendfile on;
    access_log /dev/stdout;
    error_log /dev/stderr;

    server {
        listen 8089;
        server_name localhost;
        root /tmp/test;

        location / {
            try_files $uri $uri/ =404;
        }
    }
}
launched with `nginx -c "$(pwd)"/nginx.conf -g "daemon off;"`. The results for a 1GB file for me on an NVMe SSD, averaged over 100 runs:
nginx: 150ms
rocket: 4 seconds
Or roughly 25x slower. Release mode makes no difference. You can definitely write slow code in Rust if you’re naive about reading/writing between channels a few kilobytes at a time, which is what Rocket does, vs using sendfile(2), like nginx does.
Edit: These numbers were from a few months ago... I tried it again by just pasting the above into a new project with `cargo init` and adding rocket and tokio to my deps, and it's now 2.3s in debug and 1.2s in release mode. It may have improved since then, but it's still 10x slower.
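To make the "a few kilobytes at a time" point concrete, here is a small illustrative sketch (names and sizes are mine, not Rocket's internals): the smaller the buffer, the more read/write round trips a userspace copy loop makes, and over real file descriptors each trip is a pair of syscalls. sendfile(2) sidesteps all of this by keeping the bytes in the kernel.

```rust
use std::io::{Read, Write};

// Copy src to dst through a userspace buffer of the given size,
// returning how many read/write round trips it took.
fn copy_chunked<R: Read, W: Write>(src: &mut R, dst: &mut W, buf_size: usize) -> std::io::Result<usize> {
    let mut buf = vec![0u8; buf_size];
    let mut trips = 0;
    loop {
        let n = src.read(&mut buf)?;
        if n == 0 {
            return Ok(trips);
        }
        dst.write_all(&buf[..n])?;
        trips += 1;
    }
}

fn main() -> std::io::Result<()> {
    let data = vec![0u8; 1 << 20]; // 1 MiB stand-in for a big file
    let small = copy_chunked(&mut &data[..], &mut std::io::sink(), 4 * 1024)?;
    let big = copy_chunked(&mut &data[..], &mut std::io::sink(), 256 * 1024)?;
    println!("4 KiB buffer: {small} trips, 256 KiB buffer: {big} trips"); // 256 vs 4
    Ok(())
}
```

Scale that up to a 1 GB file and the per-trip overhead is exactly where the naive server loses its time.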
https://github.com/static-web-server/static-web-server wins the SEO (and GitHub star) battle, though apparently it is old enough to have a couple unmaintained dependencies.
I use https://github.com/sigoden/dufs as my personal file transfer Swiss Army knife since it natively supports file uploads; I will check out Ferron as a lighter reverse proxy with automatic SSL certs vs. Caddy.
This piece of fiber cable is the fastest static web server.
It is purposely barebones, but I bet you, it does almost nothing to reduce the delivery of a static website. The trick is: The website is already in its final state when it gets piped through the fiber cable, so no processing is required. The templating and caching mechanism is left open for most flexibility.
I call it an OSI layer 1 web server.
The trick is to use fiber instead of copper.
Many webservers don't care about this.
Ferron is different.
Is that a choice or just something you didn’t work on yet?
Also, your FAQ really makes you come off as incredibly patronizing.
> The web servers serve a default page that comes with NGINX web server.
so yeah, if you even refer to nginx when talking about benchmarks but leave it out, I'm going to favor adverse inference and assume that it's because nginx is faster.
I think this is really cool. More competition in this space is better, not worse; I am merely curious to know how it stacks up.
I'm not using any of the other servers in the benchmark so it's meaningless to me.
I run a few websites on fly.io VMs with 256mb using Rust servers that never actually exceed 64mb of usage.
For example, with HAProxy you can configure separate timeouts for just about everything: the time a request is queued (if you exceed the max connections), the time for the connection to establish, the time for the request to be received, inactivity timeouts for the client or server, inactivity timeouts for WebSocket connections... The list goes on: https://docs.haproxy.org/3.1/configuration.html#4-timeout%20...
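For reference, a sketch of what that looks like in an HAProxy config; the directives are real, but the values are illustrative, not recommendations:

```
defaults
    mode http
    timeout queue        10s   # time a request may wait when maxconn is hit
    timeout connect      5s    # TCP connect to the backend
    timeout http-request 10s   # the full request headers must arrive in this window
    timeout client       30s   # client-side inactivity
    timeout server       30s   # server-side inactivity
    timeout tunnel       1h    # inactivity for upgraded (e.g. WebSocket) connections
```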
Slowloris is more than just the header timeout. What if the headers are received and the request body is sent, or response consumed very slowly? And even if this is handled with a "safe" default, it must be configurable to cater to a wide range of applications.
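A sketch of the body half of that problem, with hedged assumptions: a per-read timeout alone doesn't stop a client that trickles one byte just often enough to reset it, so the server also needs an overall deadline (and a size cap) on the whole body. The function, limits, and error strings here are illustrative.

```rust
use std::io::Read;
use std::time::{Duration, Instant};

// Read a request body while enforcing both a total size cap and an overall
// wall-clock deadline, so a slow-trickling client cannot hold the
// connection open indefinitely.
fn read_body<R: Read>(src: &mut R, max: usize, deadline: Duration) -> Result<Vec<u8>, &'static str> {
    let start = Instant::now();
    let mut body = Vec::new();
    let mut buf = [0u8; 4096];
    loop {
        if start.elapsed() > deadline {
            return Err("body deadline exceeded"); // slow-trickle client: drop it
        }
        match src.read(&mut buf) {
            Ok(0) => return Ok(body),
            Ok(n) => {
                body.extend_from_slice(&buf[..n]);
                if body.len() > max {
                    return Err("body too large");
                }
            }
            Err(_) => return Err("read error"),
        }
    }
}
```

In a real server both limits would be configurable, which is exactly the point above: a hardcoded "safe" default cannot fit every application.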