I thought that was exactly what we encouraged here on HN?
That said, I also think that we live in a world where a little bit of clickbait or hype-riding is admissible. The economics of the thing make it almost mandatory. I would just acknowledge the hype, nod in disagreement, and move on. At the end of the day this is about someone tinkering with a language they enjoy. Let them have that.
Which is exactly what he did, in addition to also acknowledging the pattern of the Rust community using hype to attract attention. He didn't force anything onto anyone, he just expressed his opinion.
This is highly misleading. That is not true. At all.
Just take a sample: Do you write OO in C? No, because C is a terrible OO language. Do you write short optimized array-oriented code in C? No, because C is not APL.
People write C how people write C, the way C makes them write it.
Moving into other paradigms, you see things differently.
Rust makes a lot of idioms that are nonexistent in C viable: algebraic types alone will reshape the way you write algorithms by a lot. Then it pushes you toward better error handling, you know exactly where things allocate, etc.
Coding in Rust is distinctly different from coding in C. It is like the best way you can refactor a codebase: you come for the safety and tooling, you stay because better algorithms practically write themselves.
An algebraic data type is just a C union with an auto-generated flag, moving some type validation to compile time. Unions used in this manner are quite common in C. I do think the increased ergonomics and safety exist, but only when paired with other features like pattern matching. Selling algebraic data types alone as the major novel feature is a bit misleading and dismissive of existing C features.
I do agree that additional compile safety in rust makes it far easier to confidently refactor without introducing bugs. Accomplishing the same in C requires a lot of unit tests which add maintenance overhead. Python is a more extreme example of that playing out. That all said, I don't think it's necessarily relevant to writing a performant scheduler.
Rust has many strengths and I endorse using it over C any day. That said, the way it's marketed feels misleading and gives experienced C developers bad vibes.
Yes, have a look at GTK, GObject and parts of GNOME.
> Do you write short optimized array-oriented code in C?
If you want to stay close to C you'd use something like SAC [1] but no, pure C is not an array programming language.
> People write C how people write C, the way C makes them write it.
C is sometimes called 'structured assembly' for a reason: it is a toolbox which can be used to construct things the way you see fit. This does mean you need to involve yourself more with certain implementation details since C itself does not force you to use any specific paradigm and as such does not provide you with the basic tenets of those paradigms. If you want to do OO in C you'll have to provide a pointer to the object you're working on in any function call related to that object since C does not assume there to be a 'current object'.
Does this mean C is the optimal language to do OO programming or array programming? No, clearly not; this is why languages like C++/Java and APL were created. On the other hand it does mean that it is possible to do these things in C and - given the success of GNOME and GTK - doing so can be a viable proposition. The advantage of using C is that it is nearly universally portable, more so than many other languages.
So yes, given the freedom C provides (while being unsafe), you really do have the ability to implement whatever complex algorithm you can envision; the claim is true in the sense that it is possible. You may not want to use C for these purposes, but that is irrelevant when considering whether it is possible to do so.
All this news tells us is that a Rust implementation can compare to a C implementation in this field. As you say, schedulers are all about tradeoffs in the end anyway. This news unlocks us having more options, both C and Rust schedulers, meaning a better experience for the Linux community across a variety of workloads. Thus, I don't see any reason to be defensive about Rust performance being found to be comparable to C here.
For something as complex and sensitive as a kernel scheduler, I think "who" wrote the scheduler (as in, how much experience they have writing schedulers) and what software dev practices were used (especially how the thing was tested) are far better predictors than just the language used. I would actually go as far as saying that the language used might not be a predictor at all.
No amount of Rust safety would prevent things like deadlocks, quadratic algorithms in weird cases, unreleased resources, etc. etc.
> All this news tells us is that a Rust implementation can compare to a C implementation in this field.
That's what the news "implied", but that's not actually what the news says. And I think that's what people are trying to call out.
The dev implemented a prototype scheduler in Rust, and in a very contrived case it does better than one particular C scheduler. The implementations are probably using different algorithms and probably making different tradeoffs; we have no idea how safe and bug-free his implementation is, nor even how "safe" it is in Rust terms (how many unsafe blocks are in that thing).
As an exercise to show that Rust is a viable language for kernel development, in terms of a mature toolchain and good integration with kernel APIs, sure. But as a tool to compare C vs Rust for kernel dev... this is pretty much worthless.
The whole project is a toy and they’re not trying to hide it. The mention of the language is just a descriptor for the project, not an implication that Rust is faster.
However I could see an argument that Rust or another higher-level language makes it easier for someone to experiment with a new algorithm and iterate faster on those ideas.
There are some user space BPF vms like https://github.com/iovisor/ubpf and Solana.
But I think you read the article and the post wrong. It's not "ha ha, C suxx", it's just... interesting.
[1] I say "tended", because presumably nowadays it's optimized for GPUs, and I've not been keeping up.
This is correct, but I don't think the article is trying to make any claim about the language being relevant for performance.
What I believe the author is showcasing is two things:
- sched_ext makes it possible to write schedulers that outperform the default general-purpose scheduler on certain workloads (performance)
- Since a sched_ext scheduler is a userland process, it can be implemented in any language. The author likes Rust, so they used Rust (ergonomics)
The headline compresses both things into one sentence, and this can create some confusion about what they intend to convey.
But the real driver is the rewrite, not the tools. In that case, what's interesting is the algorithm, not the language it was implemented in.
Reminds me of a famous YouTuber making videos about new tech. Every video starts with "a company based in <name of a country> announced..." or "researchers from <name of a country> found..." - This is annoying. Does the country matter? Do people ignore or mock inventions from countries they don't like, writing on HN that they should be reinvented in another country because other countries suck? Fortunately not. But when it comes to programming languages, they do. And this is equally ridiculous.
IMO, another important aspect of a rewrite is that it's usually pretty easy to get 70% of the functionality for 40% of the work. But as one approaches 100% feature parity, plus handling all the corner cases, the effort involved in prototype vs production-ready equalizes pretty fast.
Not to mention the unknown unknowns that the new language might also bring.
I'm sorry, but I'm tired of this. It's like being at the skate park watching someone scream at someone else's kid. "oh my god, no you can't do that, that's not the right way to do that! you're going to hurt yourself! oh wait, you pulled off the trick and everyone is cheering! YOURE NOT SUPPOSED TO DO THAT!!!111".
Every single Rust thread is like this. There's at least three in this whole thread already. It's exhausting and weird. And this whole implication of a global conspiracy to push Rust everywhere rather than gee god, maybe people just like it and are effective with it.
Clearly, "George Soros funds Rust advocacy" /s
I think that pretty clearly summarizes the entire reason for doing this and the excitement that it works and works well.
dang it, I want to try it now. and make an article stating I did it in Zig for the clicks!
I can write a toy program that saves files to disk much faster than notepad.exe, but this is a consequence of making fewer decisions and handling fewer edge cases.
It's trivial to make fast software, especially toy software, but it tends not to survive practical applications without becoming as slow as or slower than the systems it originally mimics.
That said: that it works is really cool.
...with certain workloads
i.e. it might be a bit better in a few specific cases, but a bit worse in a large number of more common workloads.
Not that I'm trying to take anything away from the work - getting on par with the well-tuned-over-many-years scheduler for any workloads is an impressive feat. But saying it's "better than the current one" without the caveats made by the original author is oversimplifying to the point of being misleading, I think.
Shouldn't this be standard on all 'desktop' operating systems, e.g. anything associated with the user gets higher priority than anything else happening on the system? Even the Amiga had such a priority-boosting system back in 1985 (otherwise multitasking wouldn't be of much use on such a slow computer, because pretty much any background task would make the UI unusable).
Simple heuristics can "reasonably" guesstimate which processes are interactive and which aren't. For example, every X client and its descendants, or everything waiting on keyboard/controller or any other input.
I think the Linux kernel historically did not want to prioritize interactive processes the way Windows and macOS do.
The other thing is of course to reduce the CPU quantum, trading throughput for latency. I think most modern distributions do ship with better quanta for smooth GUI/desktop behavior.
Unfortunately the same problem also exists on other systems. Visual Studio even had to add a feature "Run build at low process priority" for the system to remain usable during builds:
https://devblogs.microsoft.com/cppblog/msbuild-low-priority-...
exec.library is a RTOS kernel with a round-robin scheduler and (strict) priorities.
Somewhat disappointed that it is using eBPF instead, but still interesting to learn that even such fundamental and performance sensitive parts such as the scheduler can be changed.
Aside from possibly increased scheduler latency, there’s the rather larger potential cost of outright deadlocks: what guarantees that the user scheduler task runs at all when it’s needed?
On 60 seconds of skimming the repo, I didn’t spot a specific solution to this problem. I wonder how it was addressed. Or maybe it wasn’t, since this is just an experiment.
https://www.uwsg.indiana.edu/hypermail/linux/kernel/2307.1/0...
Whether or not that's the case here, I don't know. I wouldn't expect it to be the case here though.
But even apart from that, the implementation being different doesn't matter much I think? I think it's more about there being a compelling component existing in rust, and less about whether it could be a different language or not.
I share your sentiment, but that's not really relevant here. The kernel portion of the scheduler is written in C -- as are all of the other example (and production) schedulers we wrote. The BPF verifier ensures that the program is safe and can't crash the kernel, and we have a watchdog that will boot out a buggy scheduler that fails to schedule tasks in a timely manner.
>But even apart from that, the implementation being different doesn't matter much I think? I think it's more about there being a compelling component existing in rust, and less about whether it could be a different language or not.
An understandable point of confusion, but this is not the case. The fact that the user space portion of scx_rustland is written in Rust is incidental. We have other schedulers which are entirely contained in the kernel, and others which have rich user space logic written in C. Many of them outperform EEVDF in a lot of different scenarios. By way of example, we're running a C scheduler now for Meta web workloads because it outperforms EEVDF by several percent for both latency and throughput.
Isn't the interesting part that there's a scheduler that's way better particularly in this use case? Linux is taking in Rust here and there already, I don't think that alone is interesting and the parent point is that the same thing could have been written in C.
There are compelling reasons to choose Rust, but it's not magically fixing performance by virtue of not being C. In which case, the implementation is the story.
Of course, if the scheduler ran much slower so your CPU spends more time on the scheduler than time saved through better resource allocation, then it wouldn't help performance. But that's hopefully not a major factor here.
Can the same algorithm be written in C? Sure, but it could also have been written in assembly.