Supports: CGI, Reverse Proxy.
Single-threaded using I/O multiplexing (select).
You need to implement a variety of ciphers, manage certificates with expiration dates, and so on. And if you wanted to implement all that yourself, people would yell at you for doing your own crypto. So yeah, you need a library. But not just that. You also need a way to update your certificates, so you can't have a package (or even a single executable) that you can just run to get a server that serves static pages. You could make a self-signed certificate that lasts a thousand years, but good luck getting it accepted.
In classic HTTP, TLS was layered beneath it, providing important degrees of freedom, including the freedom not to use TLS, which can be especially important for experimentation and development.
Prediction: If HTTP/3 manages to substantially replace classic HTTP+TLS, QUIC is destined to become a kernel-provided service like TCP, shunting all that complexity behind an OS abstraction and freeing user space. The fact that QUIC uses UDP is an important aspect here, because a performant userspace QUIC stack conflicts with classic, high-value abstractions like file descriptors and processes: abstractions that make it viable (i.e. cheap) to have a rich, diverse ecosystem of languages and execution environments in userspace. More importantly, HTTP will have come full circle.
When one uses TLS today, chances are very good that it's using something from djb, someone who "made his own crypto". Maybe the assumptions, stated as "rules", do not apply equally to everyone. For example,
"And if you wanted to implement all that yourself, people will yell at you for doing your own crypto."
In fact, before HTTP/2 existed, djb did exactly that, as a demonstration that it could be done.^1 It succeeded, IMHO, because it worked. People can "yell" all they want, but as above, these same people, if they use TLS, are probably using cryptography developed by the very person at whom they are "yelling". Someone who broke the "rules". Perhaps there is evidence that HTTP/2 would exist even were it not for the prior CurveCP experiment. But I have yet to find it.
The word used in the parent comment was "implement" and the suggestion is that attempts to "implement" would not succeed. Perhaps the reason they might "fail" is not a technical one. Perhaps "success" in this instance really refers to acceptance by certain companies that are making "rules" (standards) for the internet to benefit their own commercial interests. It may be possible to implement a system that works even if these companies do not "accept" it. If so, then the problem here is the companies, their fanboys/fangirls (watch for them in the comment replies), and the undue influence they can exert, not the difficulty of implementing something that works.
IMHO, getting something "accepted" by some third party or group of third parties is a different type of "success" than getting something to work (i.e., "implementing"). It's the latter I find more interesting.
Google isn't really concerned about creating elegant abstractions so much as about improving the performance of their browser talking to their server, though sometimes these do coincide.
We built pianojacq.com without all that, just plain vanilla JS and a minimum of dependencies, see: https://gitlab.com/jmattheij/pianojacq
Certainly some applications need not have access to the entire HTTP suite. If the goal is not to offer a "full-featured" client, then an HTTP client may not be terribly difficult; it can just fail gracefully if the other party tries to do something it doesn't support.
But losing one of the main benefits of scripting languages for some convenience? You could argue that you use webpack anyway. The same question applies here...
If you have to use JS anyway, be glad that you can have an extremely shallow toolchain for deployment for smaller projects. I am not a JS dev, I just use it for stabbing at bits occasionally.
But yes, humans are humans and everything that is "simple" will get built upon and then become the backbone of something complex that will engulf and smother it as it evolves.
Or it can be neither. But it can't be both.
If you want fast web at all cost (and you need encryption), you get HTTP3/QUIC.
https://github.com/robdelacruz/lkwebserver/blob/main/lkstrin...
It's fun to write this (and read others' versions) the first 3 or 4 times, but it gets old quickly.
Personally, I wish HTTP messages were closer to something like ASN.1 DER; there's little in the way of string manipulation necessary for those, and all the lengths are prefixes instead of "try to find the terminator" (and don't forget to not run past the end of the buffer...)
This has also had serious security bugs because it's so hard to understand.
and never mind that Go has had generics for over a year now right? sometimes having a small, stripped down language is better than having a huge bloated monster. I would point to examples, but you know what they are.
https://github.com/robdelacruz/lkwebserver/blob/main/lkstrin...
This is distinct from checking the parameters; if lks is null then the user of the API has made an error. Some libraries may sanitise user parameters, others don’t. At any rate, an assert would be the wrong choice to check user parameters since this would result in a (recoverable) user error leading to an abort unless the assert is disabled at compile time (-DNDEBUG), returning an error would be a better choice.
I mean, you can always just use an existing string library or reuse your own. There's no reason to rewrite string operations for every project.
Curious to know how it compares to micro_httpd [1] which is about 200 lines of C. Or others like thttpd and tiny_httpd.
WARNING: select() can monitor only file descriptor numbers that
are less than FD_SETSIZE (1024)—an unreasonably low limit for
many modern applications—and this limitation will not change.
All modern applications should instead use poll(2) or epoll(7),
which do not suffer this limitation.

22 years ago, I worked for Zeus Web Server, which was built entirely around one-process-per-core web serving off select(), and it was so much faster than Apache at serving static content that the developers had built a business out of it. At the time it could saturate a gigabit Ethernet link off the largest HP-UX server we could find.
Sure, you should use the modern interfaces, but 1024 connections per process can get you surprisingly far.
http://canonical.org/~kragen/sw/dev3/httpdito-readme
http://canonical.org/~kragen/sw/dev3/server.s
it's a 2-kilobyte executable written in i386 assembly that can handle 20,000 requests per second on my laptop, but it only serves files from the filesystem; no CGI or reverse proxy
instead of being single-threaded or preforking it just forks a child per request
Love it!
what are the best options out there for hosting websites built as HTTP-serving executables (either Windows or Linux)? is it possible to do this relatively cheaply?
I ask because I've been working on a framework[0] for building websites in a compiled language recently, and while it's been a ton of fun to build and test locally as a hobby project, I have absolutely no idea if it's even remotely financially viable to host a (small- to medium-sized) website made this way, compared to all of the managed hosting solutions out there for PHP/Node/etc.
I don't want/need to pay for a whole dedicated server—I just want to serve HTTP (eventually HTTPS) from a single executable, using one or more SQLite database files. ideally, it would cost as close to your typical shared PHP host as possible.
I have almost zero experience with "cloud" hosting—I made a small game with Node on Azure years ago, and accidentally racked up charges just playing around with it in development—so I don't know if this, or AWS, or whatever else is a viable solution for this. I've seen that it is indeed possible to host a single executable on Azure, but I haven't actually tried it myself, or determined what the pricing for this would end up being.
The web app is fronted by Cloudflare (free tier). On the box itself I have Caddy set up as a reverse proxy with the Cloudflare cert, then uvicorn serving my Python ASGI (Starlette) app. The app uses a local SQLite db.
I’m still working out some of the operational stuff like backups and monitoring but so far I am very pleased with the setup. I’m learning a lot and for the first time I do not feel like there is some monstrous pile of complexity behind a curtain.
Setup takes some time but I have detailed notes and it gets easier every time I run through. Feel free to get in touch if you’d like to hear more details.
Otherwise, why not just get a cheap VPS and host the binary there? I’ve used Vultr and it’s $3.50/month all-in, at the low end. There are even cheaper providers, although I don’t know about their quality. I bet this option would be the cheapest.
This is the beauty of a single binary—it’s trivial to deploy!
this is the question driving the framework I'm building—it even has support for simple HTML templates, but they're interpreted, type-checked against the structs that get passed into them, and baked into the executable, all at compile time. this is all coming off of building a website for a client using PHP for the first time in over a decade—on one hand, I appreciate the relative simplicity and ease of deployment compared to modern backend stacks, but on the other hand, it's still an interpreted language, with all the baggage associated with that. I believe it is possible to take the ease of use and speed of iteration of interpreted languages, and the benefits of strongly-typed compiled languages, and get the best of both worlds—at least, for the scale and complexity of website that I want to build and maintain.
Multiple websites served by golang written server, some static sites, also gitea and Jenkins.
The cheapest AWS solution would be plain EC2 instances (i.e., basic virtual machines). t3a.nano instances cost just $3.50/mo and do not require additional load balancers.
The modern cloudy approach would be to look into stuff like CloudFlare Workers, iirc they can run WASM, so if you manage to compile your code to that then it might work.
This is exactly the type of web server I'm looking for my project.
We have tokio to handle all the I/O stuff, we have hyper to handle HTTP parsing, and we even have tungstenite to handle WebSockets out of the box. While I appreciate your work, it's no longer practical to write C in the modern age. Well, unless you need to target something LLVM doesn't support yet and you need some weird GCC toolchain (cough cough, AVR).
Both approaches are valid and serve different purposes; you seem to have misunderstood the purpose here.
been hearing that for 20 years
Religious wars will persist