Alternatively, if the awake request only takes effect when a blocking syscall is made, doesn't it then suffer from the problem that a random buggy library function could request an awake without then making a blocking syscall (due to whatever logic bug)? Then, when the process makes a blocking syscall that it expects to block indefinitely, it instead gets a syscall with a timeout.
Wouldn't it be better for the awake syscall to take another syscall as a parameter (pretty simple to do in assembly and should be provided as a C library wrapper), in order to guarantee atomicity?
Plus in this case the awake call could be named something more intuitive (like syscall_with_timeout or whatever).
This is an interesting objection.
I find awake/awakened/forgivewkp intuitive names, but I'm not a native English speaker.
I'm not going to add the syscall parameter (I considered and discarded that option during the analysis), but I welcome suggestions for a better naming.
> doesn't this suffer from a race condition?
This is a good question I should probably clarify in the article as it has been asked before but I can't answer in that forum (see https://lobste.rs/s/fqilcv/simplicity_awakes#c_8pvo0s).
To prevent race conditions the wakeup can occur only during a blocking system call (and not even during all of them: some cannot be interrupted, to avoid unintuitive side effects).
> it then suffer from the problem that a random buggy library function could request an awake without then doing a blocking syscall (due to whatever logic bug), so then when the process does a blocking syscall that it expects to block indefinitely, it instead gets a syscall with a timeout?
This is by design.
The awake idiom described in the article is pretty simple: if you book a time slice you must release it if it didn't expire.
The operating system cannot prevent userspace bugs.
> Wouldn't it be better for the awake syscall to take another syscall as a parameter
This is an option I discarded during the analysis.
It's a matter of trade-offs: an additional argument would increase the complexity a lot. In particular, you would need to maintain a syscall->wakeup map in userspace if you want to be able to `forgivewkp` the right one. And, on successful completion of a sequence of syscalls, you would have to `forgivewkp` all unexpired wakeups in that map.
Thus a single additional parameter would largely increase the complexity of both the kernel implementation and the user-space code, making several bugs harder to reproduce.
I've been looking for examples of good comments about simplicity to help focus my own view of simple. I love the quote above.
The author of the text values correctness and ease of use over simplicity of implementation and interface.
Another way of looking at it is that it is an attempt to get something into existence, rather than waiting years for perfection and ending up with nothing. To paraphrase performance work, something is infinitely better than nothing.
I am somewhat interested in the comment, "(the mindful reader will notice that alarm is still waiting to be moved to user space… the fact is that it’s too boring of a task!)" I originally interpreted it as meaning userspace could interrupt the blocking call, but I see it's actually a filesystem that interacts with kernel space.
`fd = create("/dev/alarms/new", ~0, pair_ints(getpid(), ms))`
Been a long time since I have looked at Plan 9; create has some interesting arguments. Much as I liked Multics, this is what happened to it.
> How many programs are you running right now? :-D
I don't even know, there are probably thousands of processes running right now, totaling hundreds of millions of lines of code.
And it all works perfectly fine, especially as long as I don't update anything. I just don't do the things that don't work. The things that don't work, they generally don't work 100% of the time. The things that do work, they generally work 100% of the time. Some software might fail randomly and frequently, in which case I might not use it either, unless failure is easily recovered from (which is often the case).
I don't need a system that is really simple and (as a consequence) super-reliable. I need a system that runs my software and that is fault-tolerant. After all, even entirely correct software cannot prevent hardware faults (which do occur).
If you exclude the guns that kill, guns are safe.
If you exclude all security vulnerabilities of the last decade, all mainstream software is secure.
> I don't need a system that is really simple and (as a consequence) super-reliable.
I think you are overlooking how pervasive computing is in your life.
But I can see how a user who has no programming experience could refuse to accept the sad state of today's computing.
> After all, even entirely correct software cannot prevent hardware faults (which do occur).
You are misreading the intent here: as artifacts built from fallible humans, no software can be perfect.
But if you don't even try to keep complexity low, it will soon become unmanageable and expensive.
Still, as Gabriel said in his essays, you are right that users can be manipulated to accept and even pay for crap.
It's called marketing.
But I don't like it.
There are indeed guns which are ridiculously unsafe to use and if you just count all guns in the world and average their failure rates, then "on average" guns are less safe. The kind of gun you can legally buy, properly handled, is quite safe - as far as guns go anyway.
The point I am making is that if you just average stuff out (like with the graph), it does not reflect reality. The computer systems that work in reality have very high reliability. Those that don't work > 99% of the time are simply not deployed.
> If you exclude all security vulnerabilities of the last decade, all mainstream software is secure.
All mainstream software is "secure enough", just like all mainstream software is "reliable enough". Otherwise, we obviously couldn't use mainstream software, we would all be forced to use provably correct software that is far more expensive to develop. In practice, the biggest security problem sits at the other end of the screen and no piece of software can fix it.
> I think you are overlooking how pervasive is computing in your life.
> But I can see how a user that have no programming experience could refuse to accept the sad state of today computing.
Believe it or not, I'm an experienced programmer and that has taught me pragmatism, above all things. I could complain about the state of computing all day, but the reality is that it works. It really does. You just have to admit that. Could it be better in practice? Maybe, maybe not. There's only so much effort in the world that can be spent on improving software and actually deploying it (which is the difficult part when comes to new software).
> You are misreading the intent here: as artifacts built from fallible humans, no software can be perfect.
> But if you don't even try to keep complexity low, it will soon become unmanageable and expensive.
I'm not arguing against that; I'm arguing against what that particular graph insinuates: the idea that nothing works anymore because the sum of all unreliable parts creates a completely unreliable result. That doesn't happen in practice with the actual operating systems (and other systems) that we use.
Keeping things simple is of course desirable, but it's also not easy at all and it requires a great level of skill and care. We don't have that kind of skill to work with, at least not for the vast majority of software out there.
> Still, as Gabriel said in his essays, you are right that users can be manipulated to accept and even pay for crap.
> It's called marketing.
That's just naive. It's not like users always have a choice between expertly crafted high quality software and crap software, but then they choose crap because of marketing. They have a choice between Microsoft Office and LibreOffice, both of which are crap. They pay for Microsoft Office because it works better with what everyone already uses (Microsoft Office) or they choose LibreOffice to save money. That's just one example, but there are countless others.