I actually have quite an opposite view: I think the Rust core team is 100% correct to make it very hard to add new "features" to the PL, in order to prevent the "language surface" from being bloated, inconsistent and unpredictable.
I've seen this happen before: I started out as a Swift fan, even though I had been working with Objective-C++ for years, considered it an awesome powerhouse, and did not really need a new PL for anything in particular in the world of iOS development. Over time, Swift's insistence on introducing tons of new language "features" (multiple, redundant function names such as "isMultiple(of:)"; multiple rules for parsing curly braces et al. to make the SwiftUI declarative paradigm possible; multiple rules for reference and value types and their mutability; multiple shorthand notations such as argument names inside closures; and so on) made me just dump Swift altogether. I would have had to focus on Swift development exclusively just to keep up, which I was not willing to do.
Good ideas are "a dime a dozen". Please keep Rust as lean as possible.
For example, you can write functions which return an impl Trait. And structs can contain arbitrary fields. But you can't write a struct which contains a value returned via impl Trait - because you can't name the type.
Or, I can write if a && b. And I can write if let Some(x) = x. But I can't combine those features together to write if let Some(x) = x && b.
I want things like this to be fixed. Do I want rust to be "bigger"? I mean, measured by the number of lines in the compiler, probably yeah? But measured from the point of view of "how complex is rust to learn and use", feature holes make the language more complex. Fixing these problems would make the language simpler to learn and simpler to use, because developers don't have to remember as much stuff. You can just program the obvious way.
Pin didn't take much work to implement in the standard library. But it's not a "lean" feature. It imposes a massive cognitive burden to use - to say nothing of how complex code that uses it becomes. I'd rather have clean, simple, easy-to-read Rust code and a complex borrow checker than a simple compiler and a hard-to-use language.
> Pin didn't take much work to implement in the standard library. But its not a "lean" feature. It takes a massive cognitive burden to use - to say nothing of how complex code that uses it becomes. I'd rather clean, simple, easy to read rust code and a complex borrow checker than a simple compiler and a horrible language.
Your commentary on Pin in this post is even more sophomoric than the rest of it and mostly either wrong or off the point. I find this quite frustrating, especially since I wrote detailed posts explaining Pin and its development just a few months ago.
https://without.boats/blog/pin/ https://without.boats/blog/pinned-places/
You should have a look at Scala 3. Not saying that I'm perfectly happy with the direction of the language - but Scala really got those foundations well and made it so that it has few features but they are very powerful and can be combined very well.
Rust took a lot of inspiration from Scala for a reason - but then Rust wants to achieve zero-cost abstraction and do high-performance, so it has to make compromises accordingly for good reasons. Some of those compromises affect the ergonomics of the language unfortunately.
I'll give an example - async traits. On the surface it seems fairly simple to add? I can say async fn, but for the longest time I couldn't say async fn inside a trait? It took years of work to solve all the thorny issues blocking this in a stable, backwards compatible way and finally ship it [1]. There is still more work to be done but the good news is that they're making good progress here!
You pointed out one feature that Rust in Linux needs (no panics), but there are several more [2]. That list looks vast, because it is. It represents years of work completed and several more years of work in the Rust and Rust for Linux projects. It might seem reasonable to ask why we can't have it right now, but like Linus said recently "getting kernel Rust up to production levels will happen, but it will take years". [3] He also pointed out that the project to build Linux with clang took 10 years, so slow progress shouldn't discourage folks. The important thing is that the Rust project maintainers have publicly committed to working on it right now - "For 2024H2 we will work to close the largest gaps that block support (for adopting Rust in the kernel)". [4]
You dream of a language that could make bold breaking changes and mention Python 2.7 in passing. The Python 2/3 split was immensely painful and widely considered to be a mistake, even among the people who had advocated for it. The Rust project has a better mechanism for small, opt-in, breaking changes - the Edition system. That has worked well for the last 9 years and has led to tremendous adoption - more than doubling every year [5]. IMO there's no reason to fix what isn't broken.
I guess what I'm saying is, patience is the key here. Each release might not bring much because it only represents 6 weeks of work, but the cumulative effect of a year's worth of changes is pretty fantastic. Keep the faith.
[1] - https://blog.rust-lang.org/2023/12/21/async-fn-rpit-in-trait...
[2] - https://github.com/Rust-for-Linux/linux/issues/2
[3] - https://lwn.net/SubscriberLink/991062/b0df468b40b21f5d/
[4] - https://blog.rust-lang.org/2024/08/12/Project-goals.html
[5] - https://lib.rs/stats
If you want an edge over the people who are writing RFCs, don't write an RFC. Write a complete, production-ready implementation of your idea, with documentation and test cases, which can be cleanly merged into the tree.
Please by all means provide an implementation, but do write the RFC first. (Or in some cases smaller processes, such as the ACP process for a small standard-library addition.) Otherwise you may end up wasting a lot of effort, or having to rewrite the implementation. We are unlikely to accept a large feature, or even a medium feature, directly from a PR without an RFC.
I'd argue that this makes them pretty useless: if you just want a value that you can use like any other, then you can define a function that returns it and be done with it. Now we have another way to do it, and in theory it could do more, but that RFC has been stale for several years, nobody seems to be working on it, and I believe it's not even in nightly.
If the support would actually be good, we could just get rid of all the support crates we have in cryptography libraries (like the generic_array and typenum crates).
That said, I agree that the Rust team should be careful about adding features.
What is this way? I have been fighting with this problem for quite some time recently.
Alternatively: Rust is already the Wagyu of somewhat-mainstream PLs, don't keep adding fat until it's inedible.
Good ideas are rare and precious by definition.
At its core it's a pretty simple app: it watches for file changes and re-runs the compiler. The implementation is less than 1000 lines of code. But what happens if I vendor the dependencies? It turns out the deps add up to almost 4 million lines of Rust code, spread across 8000+ files. For a simple file-watcher.
C/C++ are the only widely used languages without a popular npm-style package manager, and as a result most libraries are self-contained or have minimal, often optional, dependencies. efsw [1] is a 7000-line (wc -l on the src directory) C++ FS watcher without dependencies.
The single-header libraries that are popular in the game programming space (stb_* [2], cgltf [3], etc) as well as of course Dear ImGui [4] have been some of the most pleasant ones I've ever worked with.
At this point I'm convinced that new package managers forbidding transitive dependencies would be an overall net gain. The biggest issue is large libraries that other ones justifiably depend on - OpenSSL, zlib, HTTP servers/clients, maybe even async runtimes. It's by no means an unsolvable problem; e.g. instead of having zlib as a transitive dependency:
1. a library can still hard-depend on zlib, and just force the user to install it manually.
2. a library can provide generic compress/decompress callbacks, that the user can implement with whatever.
3. the compress/decompress functionality can be made part of the standard library.
[1] https://github.com/SpartanJ/efsw
[2] https://github.com/nothings/stb
Mainstream game programming doesn't use C at all. (Source: I was a gamedev for almost a decade, and I mostly dealt with C# and sometimes C++ for low-level stuff.) Even C++ has been out of fashion for at least a decade; anyone claiming that C++ is necessary for game programming is likely either an engine developer---a required, but very small portion of all gamedevs---or someone who hasn't done significant game programming recently.
Also, the reason single-header libraries are rather popular in C is that otherwise they would be so, SO painful to use by the modern standard. As a result, those libraries have to be much more carefully designed than normal libraries, in either C or other languages, which contributes to their seemingly higher quality. (Source: again, I have written sizable single-header libraries in C and am aware of many issues from doing so.) I don't think this approach is scalable in general.
If you ignore the OS, then sure. Most C/C++ codebases aren't really portable however. They're tied to UNIX, Windows or macOS, and often some specific version range of those, because they use so many APIs from the base OS. Include those and you're up to millions of lines too.
1. This doesn't mean that C++'s fragmented hellscape of package management is a good thing.
2. "inevitably"? No. This confuses the causation.
3. This comment conflates culture with tooling. Sure, they are related, but not perfectly so.
This only works for extremely simple cases. Beyond toy examples, you have to glue together two full-blown APIs with a bunch of stuff not aligning at all.
[Edit] And for completeness, Microsoft's Windows crate is 630 thousand lines, though that goes way beyond simple bindings, and actually provides wrappers to make its use more idiomatic.
Composition is an essential part of software development, and it crosses package boundaries.
How would banishing inter-package composition be a net gain?
Also I am no expert, but I think file-watchers are definitely not simple at all, especially if they are multi-platform.
Language       files    blank    comment    code
-------------------------------------------------
C                  4      154        163     880
Bourne Shell       2       74         28     536
C/C++ Header       4       21         66      70
Markdown           1       21          0      37
YAML               1        0          0      14
-------------------------------------------------
SUM:              12      270        257    1537
-------------------------------------------------
That's all of entr, including a well-designed CLI. entr supports BSD, Mac OS, and Linux (even WSL). So that's several platforms in <2k lines of code. By using MATHEMATICS and EXTRAPOLATION we find that non-WSL Windows file-watching must take four million minus two thousand equals calculate calculate 3998000 lines of code. Ahem.
Though to be fair, cargo watch probably does more than just file-watching. (Should it? Is it worth the complexity? I guess that depends on where you land on the worse-is-better discussion.)
Forgive me if I'm making a very bold claim, but I think cross-platform file watching should not require this much code. It's 32x larger than the Linux memory management subsystem.
Since everyone depends on the standard library, this will just mean everyone depends on even more lines of code. You are decreasing the number of nominal dependencies but increasing how much code they amount to.
Moreover the moment the stdlib's bundled dependency is not enough there are two problems:
- it can't be changed because that would be a breaking change, so you're stuck with the old bad implementation;
- you will have to use an alternative implementation in another crate, so now you're back at the starting situation except with another dependency bundled in the stdlib.
Just look at the dependency situation with the python stdlib, e.g. how many versions of urllib there are.
I don't really know much about Rust, but I got curious and had a look at the file watching apis for windows/linux/macos and it really didn't seem that complicated. Maybe a bit fiddly, but I have a hard time imagining how it could take more than 500 lines of code.
I would love to know where the hard part is if anyone knows of a good blog post or video about it.
And since xz we know resourceful and patient attackers are reality and not just "it might happen".
Sorry but sprawling transitive micro-dependencies are not sustainable. It's convenient and many modern projects right now utilize it but they require a high-trust environment and we don't have that anymore, unfortunately.
All code is built on mountains of dependencies that by their nature do more than what you are using them for. For example, part of cargo watch is to bring in a win32 API wrapper library (which is just autogenerated bindings for win32 calls). Of course that thing is going to be massive, while cargo watch uses only a sliver of it, and only when built for Windows.
The standard library for pretty much any language will have millions of lines of code, that's not scary even though your apps likely only use a fraction of what's offered.
And have you ever glanced at C++'s boost library? That thing is monstrously big yet most devs using it are going to really only grab a few of the extensions.
The alternative is the npm hellscape where you have a package for "isOdd" and a package for "is even" that can break the entire ecosystem if the owner is disgruntled because everything depends on them.
Having fewer larger dependencies maintained and relied on by multiple people is much more ideal and where rust mostly finds itself.
The is-odd and is-even packages are in no way situated to break the ecosystem. They're helper functions that their author (Jon Schlinkert) used as dependencies in one of his other packages (micromatch) 10 years ago, and consequently show up as transitive dependencies in antiquated versions of micromatch. No one actually depends on this package indirectly in 2024 (not even the author himself), and very few packages ever depended on it directly. Micromatch is largely obsolete given the fact that Node has built in globbing support now [1][2]. We have to let some of these NPM memes go.
[1] https://nodejs.org/docs/latest-v22.x/api/path.html#pathmatch...
[2] https://nodejs.org/docs/latest-v22.x/api/fs.html#fspromisesg...
This used to be true 5-10 years ago. The js ecosystem moves fast and much has been done to fix the dependency sprawl.
It seems that most dependencies of cargo-watch are pulled in by three direct requirements: clap, cargo_metadata and watchexec. Clap pulls in lots of CLI things that are naturally platform-dependent, while cargo_metadata will surely pull in most of serde. Watchexec does have room for improvement though, because it depends on command-group (maintained in the same org), which unconditionally requires Tokio! Who would have expected that? Once watchexec is improved on that front, however, I think these requirements are indeed necessary for the project's goal, and any further dependency removal will probably come with some downsides.
A bigger problem here is that you can't easily fix other crates' excessive dependencies. Watchexec can surely be improved, but what if other crates are stuck on an older version of watchexec? There are some cases where you can just tweak Cargo.lock to get things aligned, but generally you can't do that. You have to live with excessive and/or duplicate dependencies (not a huge problem by itself, so that's the default for most people) or work around it with `[patch]` sections. (Cargo is actually in a better shape given that the second option is even possible at all!) In my opinion there should be some easy way to define a "stand-in" for a given version of a crate, so that such dependency issues can be worked around more systematically. But any such solution would be a huge research problem for any existing package manager.
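To illustrate the `[patch]` workaround mentioned above, a hypothetical Cargo.toml snippet (the repository URL and branch name are made up; `[patch]` only substitutes versions that are semver-compatible with what the dependents asked for):

```toml
# Force every crate in the graph that asks for watchexec to use a
# patched copy, without waiting for intermediate crates to update.
[patch.crates-io]
watchexec = { git = "https://github.com/example/watchexec", branch = "fewer-deps" }
```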
That, and the maven repository is moderated. Unlike crates.io.
Crates.io is a real problem. No namespaces, basically unmoderated, tons of abandoned stuff. Version hell like you're talking about.
I have a hard time taking it at all seriously as a professional tool. And it's only going to get worse.
If I were starting a Rust project from scratch inside a commercial company at this point, I'd use Bazel or Buck or GN/Ninja and vendored dependencies. No Cargo, no crates.io.
I wish crates that used Windows stuff wouldn't enable it by default.
The fact that nothing has changed in the NPM and Python worlds indicates that market forces pressure the decision makers to prefer the more risky approach, which prioritizes growth and fast iteration.
whether those factors impact how you view the result of linecount is subjective
also as one of the other commenters mentioned, cargo watch does more than just file watching
Other than people who care about relatively obscure concerns like distro packaging, nobody is impeded in their work in any practical way by crates having a lot of transitive dependencies.
That sounds like a massive security problem to me. All it would take is one popular crate to get hacked / bribed / taken over and we're all done for. Giving thousands of strangers the ability to run arbitrary code on my computer is a profoundly stupid risk.
Especially given it's unnecessary. 99% of crates don't need the ability to execute arbitrary syscalls. Why allow that by default?
This, more than any other issue, is I think what prevents Rust adoption outside of big tech and the web parts of the economy, where companies are more liberal w.r.t. dependencies.
This is actually one positive in my view behind the rather unwieldy process of using dependencies and building C/C++ projects. There's a much bigger culture of care and minimalism w.r.t. choosing to take on a dependency in open source projects.
Fwiw, the capabilities feature described in the post would go a very long way towards alleviating this issue.
And people are still calling it "obscure concerns"...
I write in Clojure and I take great pains to avoid introducing dependencies. Contrary to the popular mantra, I will sometimes implement functionality instead of using a library, when the functionality is simple, or when the intersection area with the application is large (e.g. the library doesn't bring as many benefits as just using a "black box"). I will work to reduce my dependencies, and I will also carefully check if a library isn't just simple "glue code" (for example, for underlying Java functionality).
This approach can be used with any language, it just needs to be pervasive in the culture.
Node has improved greatly in the last two years. It always had native JSON support. Now it has a native test runner, watch mode, and fetch; it has added WebSockets and is working on a permission system à la Deno and a native SQLite driver. All of this makes it a really attractive platform for prototyping, which scales from hello world without any dependencies to production.
Good luck experimenting with Rust without pulling half the internet with it.
E: and they’re working on native TS support
If I am making a small greenhouse, I can buy steel profiles and not care what steel they are made from. If I am building a house, I actually want a specific standardized profile, because my structure's calculations rely on it. My house will collapse if they don't match. If I am building a jet engine part, I want a specific alloy and all the component metals and foundry details, and will reject the part if the provenance is not known or suitable [1].
If I am writing my own small script for personal purposes, I don't care much about packaging and libraries, just that it accomplishes my immediate task in my environment. If I have a small tetris application, I also don't care much about libraries or their reliability. If I have a business selling my application and I am liable for its performance and security, I damn sure want to know all about my potential liabilities and mitigate them.
[1] https://www.usatoday.com/story/travel/airline-news/2024/06/1...
Some of us have licensing restrictions we have to adhere to.
Some of us are very concerned about security and the potential problems of unaudited or unmoderated code that comes in through a long dependency chain.
Hard learned lessons through years of dealing with this kind of thing: good software projects try to minimize the size of their impact crater.
It's entirely possible to use Rust with other build systems, with vendored dependencies.
Crates.io is a blight. But the language is fine.
One thing is to decide to vendor everything - that's your prerogative - but it's very likely that pulling everything in also pulls in tons of stuff that you aren't using, because recursively vendoring dependencies means you are also pulling in dev-dependencies, optional dependencies (including default-off features), and so on.
For the things you do use, is it the number of crates that is the problem, or the amount of code? Because if the alternative is to develop it in-house, then...
The alternative here is to include a lot of things in the standard library that don't belong there, because people seem to exclude standard libraries from their auditing, which is reasonable. Why is it not just as reasonable to exclude certain widespread ecosystem crates from auditing?
(Putting aside the question of whether or not that pulls in dev dependencies; that watching files can easily have OS-specific aspects, so you might have different dependencies on different OSes; that neither lines nor, even less, files are a good measurement of complexity; and that these dependencies involve a lot of code from unused features which, due to the reasonable way Rust is compiled, is reliably excluded from the final binary in most cases. Also ignoring that cargo-watch isn't implementing file watching itself; in many respects it's a wrapper around watchexec, which makes it much "thinner" than it would be otherwise.)
What if that is needed for a reliable robust ecosystem?
I mean, I know, it sounds absurd, but give it some thought.
I wouldn't want every library to reinvent the wheel again and again for all kinds of things, so I would want them to use dependencies, and I also would want them to use robust, tested, mature and maintained dependencies. Naturally this applies transitively. But which libraries become "robust, tested, mature and maintained": ones that provide just the small subset of functionality that happens to be good enough for you, or ones that support the full functionality, making them usable for a wider range of use cases?
And with that in mind let's look at cargo-watch.
First, it's a CLI tool, so with the points above in mind you need a good CLI parser, so you use e.g. clap. But at this point you are already pulling in a _huge_ number of lines of code, the majority of which will be dead-code eliminated. You don't have much choice though: you don't want to reinvent the wheel, and for a CLI library to be widely successful (which it often needs to be, to be long-term tested, maintained, and e.g. forked if the maintainers disappear), it needs to cover all widely needed CLI library features, not just the subset you use.
Then you need to handle configs, so you include dotenvy. You have a desktop-notification feature; again, no reason to reinvent that, so you pull in rust-notify. Handling paths in a cross-platform manner has tricky edge cases, so camino and shell-escape get pulled in. You log warnings, so log + stderrlog get pulled in, which for message coloring and similar pull in atty and termcolor, even though they probably need just a small subset of atty. But again, no reason to reinvent the wheel, especially for things as iffy and bug-prone as reliable tty handling across many different ttys. Lastly, watching files is harder than it seems, and the notify library already implements it, so we use that; wait, it's quite low-level, and there is watchexec, which provides exactly the interface we need, so we use that (and if we didn't, we would still use most or all of watchexec's dependencies).
And ignoring watchexec (around which the discussion would become more complex): given the standards above, you wouldn't want to reimplement the functionality of any of these libraries yourself. It's not even about implementation effort, but about things like overlooked edge cases, maintainability, etc.
And while you can definitely make the point that in some respects you can, and maybe should, reduce some dependencies, IMHO this doesn't change the general conclusion: you need most of these dependencies if you want to conform to the standards pointed out above.
And tbh, I have seen way, way too many cases of projects shaving off dependencies, adding "more compact wheel reinventions" for their subset, and then running into all kinds of bugs half a year later. Sometimes this leads to the partial reimplementations growing bigger and bigger until they aren't much smaller than the original project.
Don't get me wrong, there definitely are cases where (the things you use from) dependencies are too small to be worth it (e.g. left-pad), or, more commonly, where it takes more time in the short term to find and review a good library than to reimplement the functionality yourself (though long term that's quite often a bad idea).
So I don't think the issue is transitive dependencies, or too many dependencies at all.
BUT I do think there are issues w.r.t. handling software supply-chain aspects. That is a different kind of problem with different solutions. And sure, not having dependencies somewhat avoids that problem, but IMHO it just replaces it with a different, equally bad one.
I'm curious as I don't know Go but it often gets mentioned here on HN as very lightweight.
(A quick googling finds https://pkg.go.dev/search?q=watch which makes me think that it's not any different?)
I'm not excited about Rust because of cool features, I'm excited because it's a whole new CLASS of language (memory safe, no GC, production ready). Actually getting it into the places that matter is way more interesting to me than making it a better language. That's easier to achieve if people are comfortable that the project is being steered with a degree of caution.
All that, despite JS being much older than Rust and much more widely used. JavaScript also has several production implementations, which presumably all need to agree to implement any new features.
Javascript had a period of stagnation around ES5. The difference seems to be that the ecmascript standards committee got their act together.
History repeated itself, and now Typescript has even more popularity than CoffeeScript ever did, so if the ecma committee is still on their act, they're probably working on figuring out how to adopt types into Javascript as well.
More relevant to this argument is the question of whether a similar endeavor would work for Rust. Are the features you're describing so life-changing that people would work in a transpiled language that had them? For CoffeeScript, from my perspective at least, it was just the arrow functions. All the sugar on top just sealed the deal.
The assumption that "[Rust] stagnation" is due to some kind of "Rust committee inefficiencies" might be incorrect.
Rust and Ada have similar goals and target use cases, but different advantages and strengths.
In my opinion, Rust's biggest innovations are 1) borrow checking and "mutation XOR sharing" built into the language, effectively removing the need for manual memory management or garbage collection, 2) Async/Await in a low-level systems language, and 3) Superb tooling via cargo, clippy, built-in unit tests, and the crates ecosystem (in a systems programming language!) Rust may not have been the first with these features, but it did make them popular together in a way that works amazingly well. It is a new class of language due to the use of the borrow checker to avoid memory safety problems.
Ada's strengths are its 1) powerful type system (custom integer types, use of any enumerated type as an index, etc.), 2) perfect fit for embedded programming with representation clauses, the real-time systems annex, and the high integrity systems annex, 3) built-in Design-by-Contract preconditions, postconditions, and invariants, and 4) Tasking built into the language / run-time. Compared to Rust, Ada feels a bit clunky and the tooling varies greatly from one Ada implementation to another. However, for some work, Ada is the only choice because Rust does not have sufficiently qualified toolchains yet. (Hopefully soon . . .)
Both languages have great foreign function interfaces and are relatively easy to use with C compared to some other programming languages. Having done a fair bit of C programming in the past, today I would always choose Rust over C or C++ when given the choice.
Originally the Rust compiler was written in OCaml, but eventually it was rewritten in Rust.
E.g. coroutines are stuck because they have some corner cases that are quite hard to resolve correctly; i.e. the compiler doesn't contain a full implementation you could "just turn on", but an incomplete one which works okay for many cases but which you really can't enable on stable. (At least this was the case last time I checked.) Similarly, the fn traits were explicitly left unstabilized in their current form for various technical reasons, but also because they would change with future features (like async coroutines). Sure, the part about return values not being associated types is mostly for backward compatibility, but in nearly all situations it's just a small ergonomics drawback.
And sure, there are some backward-compatibility-related designs which people would have loved to do differently if they'd had more time and resources when the decision was made. But most of those date back to the very early Rust days, when the team was much smaller and there were fewer resources for evaluating important decisions.
And sure, having a break which changes a bunch of older decisions, now that different choices can be made and people are more experienced, would be nice. BUT after how catastrophically badly python2 -> python3 went, and similar experiences in other languages, many people agree that keeping some rough corners is probably better than making a Rust 2.0. (And many of these things can't be done through Rust editions!)
In general, if you follow the Rust weekly newsletter, you can see that decisions on RFC acceptance, including stabilization, are handled every week.
And sure, sometimes (quite too often) things take too long, but people/coordination/limited-time problems are often harder to solve than technical problems.
And sure, some old features are stuck (coroutines), but many "feature gates" aren't "implemented-but-stuck features" at all: they are e.g. things that were never meant to be stabilized, abandoned features, or cases where a single feature has multiple different feature gates, etc.
Edit: nevermind, comment is here too: https://news.ycombinator.com/item?id=41655268
Also, Zig might be a nice modern language, but it is not an option if you're aiming for memory safety.
One could also argue that bugs in Rust's unsafe blocks will be harder to reason about than bugs in Zig code. And if you don't need any unsafe blocks, your application might not be one best suited to Zig or Rust in the first place.
> I’m not saying the author is wrong here, just pointing out how a complex language somehow needs to be even more complicated. Spoiler: it doesn’t.
True. But I think a lot of rust's complexity budget is spent in the wrong places. For example, the way Pin & futures interact adds a crazy amount of complexity to the language. And I think at least some of that complexity is unnecessary. As an example, I'd like a rust-like language which doesn't have Pin at all.
I suspect there's also ways the borrow checker could be simplified, in both syntax and implementation. But I haven't thought enough about it to have anything concrete.
I don't think there's much we can do about any of that now short of forking the language. But I can certainly dream.
Rust won't be the last language invented which uses a borrow checker. I look forward to the next generation of these ideas. I think there's probably a lot of ways to improve things without making a bigger language.
Unfortunately that attracts the worst types. And their crapness and damage potential is sometimes not realised until it’s way too late.
I see some drama associated with Rust, but it's usually around people resisting its usage or adoption (the recent kerfuffle about Rust for Linux, for example), and not really that common within the community. But I could be missing something?
Zig is great, but it just isn't production ready.
https://news.ycombinator.com/item?id=36122270 https://news.ycombinator.com/item?id=29343573 https://news.ycombinator.com/item?id=29351837
The Ashley "Kill All Men" Williams drama was pretty bad. She had a relationship with a core Rust board member at the time so they added her on just because. Any discussion about her addition to the board was censored immediately, reddit mods removed and banned any topics and users mentioning her, etc.
Also, Zig is set to release 1.0 beta in November.
Specializations allow unsound behavior in safe Rust, which is exactly what nightly was supposed to catch.
There are only two kinds of languages: the ones people complain about and the ones nobody uses.
Much of Rust's drama (like that of almost every other large programming language) is a problem of scale, not implementation. The additional funding you wish for will indubitably create more drama.
(This is not a diss on Zig at all, I love its approach!)
Ah, like Scala you mean?
The author of the linked comment did extensive analysis on the synchronization primitives in various languages, then rewrote Rust's synchronization primitives like Mutex and RwLock on every major OS to use the underlying operating system primitives directly (like futex on Linux), making them faster and smaller and all-around better, and in the process, literally wrote a book on parallel programming in Rust (which is useful for non-Rust parallel programming as well): https://www.oreilly.com/library/view/rust-atomics-and/978109...
> Features like Coroutines. This RFC is 7 years old now.
We haven't been idling around for 7 years (either on that feature or in general). We've added asynchronous functions (which whole ecosystems and frameworks have arisen around), traits that can include asynchronous functions (which required extensive work), and many other features that are both useful in their own right and needed to get to more complex things like generators. Some of these features are also critical for being able to standardize things like `AsyncWrite` and `AsyncRead`. And we now have an implementation of generators available in nightly.
(There's some debate about whether we want the complexity of fully general coroutines, or if we want to stop at generators.)
Some features have progressed slower than others; for instance, we still have a lot of discussion ongoing for how to design the AsyncIterator trait (sometimes also referred to as Stream). There have absolutely been features that stalled out. But there's a lot of active work going on.
I always find it amusing to see, simultaneously, people complaining that the language isn't moving fast enough and other people complaining that the language is moving too fast.
> Function traits (effects)
We had a huge design exploration of these quite recently, right before RustConf this year. There's a challenging balance here between usability (fully general effect systems are complicated) and power (not having to write multiple different versions of functions for combinations of async/try/etc). We're enthusiastic about shipping a solution in this area, though. I don't know if we'll end up shipping an extensible effect system, but I think we're very likely to ship a system that allows you to write e.g. one function accepting a closure that works for every combination of async, try, and possibly const.
> Compile-time Capabilities
Sandboxing against malicious crates is an out-of-scope problem. You can't do this at the language level; you need some combination of a verifier and runtime sandbox. WebAssembly components are a much more likely solution here. But there's lots of interest in having capabilities for other reasons, for things like "what allocator should I use" or "what async runtime should I use" or "can I assume the platform is 64-bit" or similar. And we do want sandboxing of things like proc macros, not because of malice but to allow accurate caching that knows everything the proc macro depends on - with a sandbox, you know (for instance) exactly what files the proc macro read, so you can avoid re-running it if those files haven't changed.
> Rust doesn't have syntax to mark a struct field as being in a borrowed state. And we can't express the lifetime of y.
> Lets just extend the borrow checker and fix that!
> I don't know what the ideal syntax would be, but I'm sure we can come up with something.
This has never been a problem of syntax. It's a remarkably hard problem to make the borrow checker able to handle self-referential structures. We've had a couple of iterations of the borrow checker, each of which made it capable of understanding more and more things. At this point, I think the experts in this area have ideas of how to make the borrow checker understand self-referential structures, but it's still going to take a substantial amount of effort.
> This syntax could also be adapted to support partial borrows
We've known how to do partial borrows for quite a while, and we already support partial borrows in closure captures. The main blocker for supporting partial borrows in public APIs has been how to expose that to the type system in a forwards-compatible way that supports maintaining stable semantic versioning:
If you have a struct with private fields, how can you say "this method and that method can borrow from the struct at the same time" without exposing details that might break if you add a new private field?
Right now, leading candidates include some idea of named "borrow groups", so that you can define your own subsets of your struct without exposing what private fields those correspond to, and so that you can change the fields as long as you don't change which combinations of methods can hold borrows at the same time.
> Comptime
We're actively working on this in many different ways. It's not trivial, but there are many things we can and will do better here.
I recently wrote two RFCs in this area, to make macro_rules more powerful so you don't need proc macros as often.
And we're already talking about how to go even further and do more programmatic parsing using something closer to Rust constant evaluation. That's a very hard problem, though, particularly if you want the same flexibility of macro_rules that lets you write a macro and use it in the same crate. (Proc macros, by contrast, require you to write a separate crate, for a variety of reasons.)
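As a reminder of what macro_rules can already do, here's a toy example of mine (hypothetical macro and struct names) showing a declarative macro defined and used in the same crate, which is the flexibility being discussed:

```rust
// A small declarative macro that defines a struct and a getter for
// each field - the kind of boilerplate people often reach for a proc
// macro to generate.
macro_rules! make_getters {
    ($name:ident { $($field:ident : $ty:ty),* $(,)? }) => {
        struct $name { $($field: $ty),* }
        impl $name {
            $( fn $field(&self) -> &$ty { &self.$field } )*
        }
    };
}

// Defined and used in the same crate - no separate proc-macro crate.
make_getters!(Point { x: i64, y: i64 });

fn main() {
    let p = Point { x: 1, y: 2 };
    assert_eq!(*p.x(), 1);
    assert_eq!(*p.y(), 2);
}
```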
> impl<T: Copy> for Range<T>.
This is already in progress. This is tied to a backwards-incompatible change to the range types, so it can only occur over an edition. (It would be possible to do it without that, but having Range implement both Iterator and Copy leads to some easy programming mistakes.)
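The footgun in question can be sketched with a custom type of my own (since Range itself isn't Copy today):

```rust
// Imagine Range were both Copy and Iterator, like this toy type:
#[derive(Clone, Copy)]
struct Counter(u32);

impl Iterator for Counter {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        self.0 += 1;
        Some(self.0)
    }
}

fn main() {
    let mut c = Counter(0);
    for x in c {
        // `for` consumed a *copy* of `c`, because Counter is Copy.
        if x >= 3 {
            break;
        }
    }
    // The original is silently unchanged - the easy mistake that has
    // kept Range from implementing Copy outside of an edition change.
    assert_eq!(c.next(), Some(1));
}
```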
> Make if-let expressions support logical AND
We have an unstable feature for this already, and we're close to stabilizing it. We need to settle which one or both of two related features we want to ship, but otherwise, this is ready to go.
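For readers unfamiliar with the feature: a "let chain" would let you collapse the nested version below into a single `if let Some(x) = val && flag { ... }`. A stable-Rust sketch of the intended semantics (function and names are mine), including the short-circuiting on the condition:

```rust
// What `if let Some(x) = val && flag { ... }` would express, written
// with today's universally-stable nesting:
fn lookup(val: Option<i32>, flag: bool) -> Option<i32> {
    if let Some(x) = val {
        if flag {
            return Some(x * 2);
        }
    }
    None
}

fn main() {
    assert_eq!(lookup(Some(21), true), Some(42));
    assert_eq!(lookup(Some(21), false), None);
    assert_eq!(lookup(None, true), None);
}
```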
> But if I have a pointer, rust insists that I write (*myptr).x or, worse: (*(*myptr).p).y.
We've had multiple syntax proposals to improve this, including a postfix dereference operator and an operator to navigate from "pointer to struct" to "pointer to field of that struct". We don't currently have someone championing one of those proposals, but many of us are fairly enthusiastic about seeing one of them happen. That said, there's also a danger of spending too much language weirdness budget here to buy more ergonomics, versus having people continue using the less ergonomic but more straightforward raw-pointer syntaxes we currently have. It's an open question whether adding more language surface area here would on balance be a win or a loss.
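For context, the syntax being discussed looks like this in practice (a minimal sketch with made-up types):

```rust
struct Inner { y: i32 }
struct Outer { p: *const Inner, x: i32 }

fn main() {
    let inner = Inner { y: 7 };
    let outer = Outer { p: &inner, x: 1 };
    let ptr: *const Outer = &outer;

    unsafe {
        // Today's syntax: an explicit dereference at every hop.
        assert_eq!((*ptr).x, 1);
        assert_eq!((*(*ptr).p).y, 7);
        // A postfix operator would flatten these chains, which is what
        // the proposals mentioned above explore.
    }
}
```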
> Unfortunately, most of these changes would be incompatible with existing rust.
One of the wonderful things about Rust editions is that there's very little we can't change, if we have a sufficiently compelling design that people will want to adopt over an edition.
> The rust "unstable book" lists 700 different unstable features - which presumably are all implemented, but which have yet to be enabled in stable rust.
This is absolutely an issue; one of the big open projects we need to work on is going through all the existing unstable features and removing many that aren't likely to ever reach stabilization (typically either because nobody is working on them anymore or because they've been superseded).
We've had a lot of talk about sandboxing of proc-macros and build scripts. Of course, more declarative macros, delegating `-sys` crate logic to a shared library, and `cfg(version)` / `cfg(accessible)` will remove a lot of the need for user versions of these. However, that all ignores runtime. The more I think about it, the more cackle's "ACLs" [0] seem like the way to go as a way for extensible tracking of operations and auditing their use in your dependency tree, whether through a proc-macro, a build script, or runtime code.
I heard that `cargo-redpen` is developing into a tool to audit calls though I'm imagining something higher level like cackle.
> I always find it amusing to see, simultaneously, people complaining that the language isn't moving fast enough and other people complaining that the language is moving too fast.
I think people complain that rust is a big language, and they don't want it to be bigger. But keeping the current half-baked async implementation doesn't make the language smaller or simpler. It just makes the language worse.
> The main blocker for supporting partial borrows in public APIs has been how to expose that to the type system in a forwards-compatible way that supports maintaining stable semantic versioning
I'd love it if this feature shipped, even if it only works (for now) within a single crate. I've never had this be a problem in my crate's public API. But it comes up constantly while programming.
> Sandboxing against malicious crates is an out-of-scope problem. You can't do this at the language level; you need some combination of a verifier and runtime sandbox.
Why not?
If I call a function that contains no unsafe 3rd party code in its call tree, and which doesn't issue any syscalls, that function can already only access & interact with passed parameters, local variables and locally in-scope globals. Am I missing something? Because that already looks like a sandbox, of sorts, to me.
Is there any reason we couldn't harden the walls of that sandbox and make it usable as a security boundary? Most crates in my dependency tree are small, and made entirely of safe code. And the functions in those libraries I call don't issue any syscalls already anyway. Seems to me like adding some compile-time checks to enforce that going forward would be easy. And it would dramatically reduce the supply chain security risk.
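For instance (my own toy example), a function like this - safe code, no syscalls anywhere in its call tree - already can't touch anything beyond what it's handed:

```rust
// Safe, allocation-free, syscall-free: the only thing this function
// can read is the slice it was given, and it can't write anything
// outside its own return value.
fn checksum(data: &[u8]) -> u32 {
    data.iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
}

fn main() {
    assert_eq!(checksum(&[]), 0);
    assert_eq!(checksum(&[1]), 1);
}
```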
Mind explaining your disagreement a little more? It seems like a clear win to me.
I can't disagree more.
In fact, I think that the current state of async Rust is the best implementation of async in any language.
To get the Pin stuff out of the way: it is indeed more complicated than it could be (because of backward compatibility, etc.), but when was the last time you needed to write a poll implementation manually? Between runtimes (tokio/embassy) and utility crates, there is very little need to write raw futures. Combinators, tasks, and channels are more than enough for the overwhelming majority of problems, and even in their current state they give us more power than the Python or JS ecosystems.
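To illustrate how thoroughly Pin can be hidden, here's a toy executor of my own (real runtimes do far more; this only drives immediately-ready futures). The pinning lives entirely inside `block_on`; the async code itself never mentions it:

```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// A no-op waker: good enough for futures that are immediately ready.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

// Toy executor: the only place pinning shows up is this Box::pin.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

// Ordinary user code: async/await, no Pin, no manual poll.
async fn add(a: u32, b: u32) -> u32 {
    a + b
}

fn main() {
    assert_eq!(block_on(add(2, 3)), 5);
}
```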
But then there's everything else.
Async Rust is correct and well-defined. The way cancellation, concurrent awaiting, and exceptions work in languages like JS and Python is incredibly messy (eg [1]) and there are very few people who even think about that. Rust in its typical fashion frontloads this complexity, which leads to more people thinking and talking about it, but that's a good thing.
Async Rust is clearly separated from sync Rust (probably an extension of the previous point). This is good because it lets us reason about IO and write code that won't be preempted in an observable way, unlike with Go or Erlang. For example, having a sync function we can stuff things into thread locals and be sure that they won't leak into another future.
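A minimal sketch of that thread-local point, using only std (the names are mine): because the function below is sync, no await point can interleave another future between the set and the reset, so the temporary value can never leak.

```rust
use std::cell::Cell;

thread_local! {
    // Example per-thread state that we stuff in temporarily.
    static DEPTH: Cell<u32> = Cell::new(0);
}

// Sync code can't be preempted at an await point, so this thread-local
// is guaranteed to be restored before any other future runs here.
fn with_depth<R>(f: impl FnOnce() -> R) -> R {
    DEPTH.with(|d| d.set(d.get() + 1));
    let out = f();
    DEPTH.with(|d| d.set(d.get() - 1));
    out
}

fn main() {
    // Inside the closure the value is visible...
    assert_eq!(with_depth(|| DEPTH.with(|d| d.get())), 1);
    // ...and afterwards it has been restored.
    assert_eq!(DEPTH.with(|d| d.get()), 0);
}
```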
Async Rust has already enabled incredibly performant systems. Cloudflare's Pingora runs on Tokio, processing a large fraction of internet traffic while being much safer and better defined than nginx-style async. Same abstractions work in Datadog's glommio, a completely different runtime architecture.
Async Rust made Embassy possible, a genuine breakthrough in embedded programming. Zero overhead, safe, predictable async on microcontrollers is something that was almost impossible before and was solved with much heavier and more complex RTOSes.
"Async Rust bad" feels like a meme at this point, a meme with not much behind it. Async Rust is already incredibly powerful and well-designed.
[1]: https://neopythonic.blogspot.com/2022/10/reasoning-about-asy...
I believe you are proposing language-based security (langsec), which seemed very promising at first, but the current consensus is that it still has to be accompanied by other measures. One big reason is that virtually no practical language implementation is fully specified.
As an example, let's say that we only have fixed-size integer variables and simple functions with no other control constructs. Integers wrap around and division by zero yields zero, so no integer operation can trap. So it should be easy to check for infinite recursion and declare that the program would never trap otherwise, right? No! A large enough number of nested but otherwise distinct function calls would eventually overflow the stack and cause a trap (or worse). But this notion of "stack" is highly specific to the implementation, so provable safety essentially implies that you have formalized all such implementation-specific notions in advance. Possible, but extremely difficult in practice.
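A tiny Rust illustration of that point (my example): nothing in this function can trap at the language level, yet calling it with a big enough argument would still abort the process.

```rust
// No language-level operation here can trap: just a comparison,
// a subtraction, and an addition on fixed-size integers.
fn nest(depth: u64) -> u64 {
    if depth == 0 {
        0
    } else {
        1 + nest(depth - 1) // not a tail call: each level keeps a frame
    }
}

fn main() {
    assert_eq!(nest(1_000), 1_000); // fine
    // nest(100_000_000) would very likely overflow the stack and abort,
    // but *how deep* you can go is implementation-specific, not part of
    // the language - exactly the gap langsec has to formalize.
}
```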
The "verifier and runtime sandbox" mentioned here is one solution to get around this difficulty. Instead of being able to understand the full language, the verifier is only able to understand a very reduced subset and the compiler is expected (but not guaranteed) to return something that would pass the verifier. A complex enough verifier would be able to guarantee that it is safe to execute even without a sandbox, but a verifier combined with a runtime sandbox is much simpler and more practical.
Make it 70% of Rust in 10% of the code, similarly to what QBE[0] is doing with LLVM.
You'd probably be able to achieve that if you remove macros and some of the rarely-used features.
I've had a lot of talks with my management about that. For context, I'm on the Cargo team and have authored 11 RFCs (10 approved, 1 pending).
I feel like a lot of the pacing feels slow because:
- As the project matures, polishing what's there takes up a lot of effort
- Conversely, hitting local maxima where things are "just good enough" that individuals and companies don't feel the need to put in the effort for the last leg of work.
- Lack of coordinated teams (formerly Mozilla) doubling down on an idea to hash it out. Hopefully [Project Goals](https://rust-lang.github.io/rfcs/3614-project-goals.html) will help a little in this direction.
- As the project has grown, we've specialized a lot more, making it harder to develop a cross-team feature. It takes finesse to recruit someone from another team to help you finish out a cross-team feature. It also doesn't help we've not done a good job developing the cross-team communication channels to make up for this specialization. Again, Project Goals are trying to improve this. In-person conferences starting back up has also been a big help.
As for RFCs, we've been moving in the direction of choosing the level of process that's appropriate for a decision. Unsure how something will look? You just need approval from 2 members of the relevant team to start a nightly-only experiment to flesh out the idea in preparation for an RFC. In Cargo, many decisions don't need wide input and are just team votes on an Issue. RFCs drag out when there isn't someone from the team shepherding them through the process, when the RFC covers too much and needs to be shrunk to better focus the conversation, when too much is unknown and an experiment is needed instead, or when it's cross-team and you need to know how to navigate the process to get the vote done (we want to improve this). As for things being approved but not completed, that's usually a "we need more help" problem.
You know, I would LOVE working on Rust (not just with Rust) and be a part of some of the core team(s).
But my impression is that nobody truly has any powerful agency over things, and even if you formulate a near-perfect proposal and a PR to go with it, things would still end with several people smarter than me saying "Oh, this looks really neat, we should ponder it more and test it further and merge it!" and then it never happens.
That, plus I am not sure what the job-stability situation there is like.
* Supports Unions (TypeScript, Flow, Scala3, Hare)
* Supports GADTs
* Capable of targeting both preemptive userland concurrency (go, erlang, concurrent Haskell, concurrent OCaml) and cooperative (tinygo, nodejs, async-python, async-rust) without code changes
* Easily build without libc (CGO_ENABLED=0)
* No Backwards compatibility promise - This eliminates geriatrics
* Cleaner syntax, closer to Go, F#, or Python
* Graph-based Borrow Checker
* Add `try-finally` or `defer` support, `Drop` is too limiting, Async drop could help.
* Fix Remaining MIR Move Optimizations and Stack Efficiency
* Culture for explicit allocator passing like Zig
* `.unwrap()` is removed
But then I thought about it more. Whatever you call it - Pin or Move - the point is to say "this struct contains a borrowed field". But we never needed Pin for local variables in functions - even when they're borrowed - because the borrow checker understands what's going on. The "Pin" is implicit. Pin also doesn't describe all the other semantics of a borrowed value correctly - like how borrowed values are immutable.
I suspect if the borrow checker understood the semantics of borrowed struct fields (just like it does with local variables), then we might not need Pin or Move at all.
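A concrete way to see the asymmetry being described (my example, not the commenter's): the borrow checker happily tracks a local that borrows from another local, but the same relationship cannot be named inside a struct.

```rust
fn main() {
    let s = String::from("hello world");
    let first = &s[..5]; // a local borrowing from another local: fine,
                         // no Pin, no wrapper - the checker tracks it.
    assert_eq!(first, "hello");

    // But the same shape as a struct has no expressible lifetime today:
    //
    //     struct Parsed {
    //         source: String,
    //         first: &'??? str, // borrows from `source` - unnameable
    //     }
}
```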
If we could go back in time and have the rust project decide to never implement async, I wonder what rust would look like today. There's a good chance the language & compiler would be much nicer as a result.
If withoutboats is right [1], then Rust would never have received the industry backing to be as successful as it is now.
[1]: https://without.boats/blog/why-async-rust/ especially the section "Organizational considerations"
The decision for `async` handed a lot of power to Amazon et al.
This applies to both suggestions ("fork" and "don't use it").
Capabilities to IO can be done by letting IO functions interrupt and call an effect handler, and the caller can specify the effect handler and do access control in there.
The whole Pin situation only exists because async/await was an afterthought and didn't work well with the existing language. async/await is an instance of effects.
I'm excited to start playing with a language that has a good effect system. I am hoping on Ante, but would also like to try Roc at some point.
-- Maybe you are living a million lifetimes in parallel right now and this one is the one devoted to working on compilers? Get to it! :-)
I think this is probably where all proposed whitelist/capability proposal discussions end. It's going to be too many crates that are in that category for it to be useful.
A good first step (not sure if it's already taken tbh) would be to at least sandbox build execution. So that an attacker can't execute arbitrary code when your app is compiled.
The one point that stuck out for me is the comptime section. It approaches the topic from a security and supply-chain attacks angle, which is a way I never thought about it.
I think Rust might quickly run into the “negative trait” problem trying to get that working, while embracing an effect system like Purescripts might get you the goods in a “principled” way. Though I haven’t thought about this deeply.
Don't get me wrong: I'd like coroutines and a lot of other unstable/hidden features done as well. Function traits sound great, and I'd also like the whole Pin stuff to be easier (or gone?).
But please, "Lets just extend the borrow checker and fix that" sounds very demeaning. Like no one even tried? I am by far no expert, but I am very sure that it's not something you "just" go do.
I like most of the proposed features and improvements, and I mostly share the critique of the language, but I do not think the "why not just fix it?" attitude is helpful or warranted. There's tons of work, and only so many people and so much time.
There was a good blog post recently on Pin ergonomics, which I hope will lead somewhere good. It's not like they don't know that these things are difficult, and it's not like they're not trying to fix them, but generalised coroutines (for example) in the presence of lifetimes are absolutely monumentally difficult to get right, and they just can't afford to get it wrong. It's not like you can just nick the model from C#'s, because C# has a garbage collector.
As someone who has dabbled in compiler writing (i.e. I may be totally wrong), I believe that from a technical standpoint, modifying the borrow checker as proposed in the article (w.r.t. self-referential structs) is actually something you can "just do". The issues that come up are due to backwards compatibility and such, meaning it cannot be done in Rust without a new Rust edition (or by forking the compiler like in the article).
It's a bit restricted on how much you can do because they do promise compatibility with older crates, but it seems to be working out pretty well and that compatibility promise is part of why it does work.
Even if we put aside safety issues, each crate brings ~10 more dependencies by default (i.e. without any features turned on), which bloats compile times. Maybe it's better to be able to shard 3rd party crates, and not update them automatically at all?
The closest to a solution we have is dependency scanning against known CVEs.
Having per-crate permissions is, I think, the only way languages can evolve past this hell hole we call supply chain attacks. It’s not a silver bullet, there will be edge cases that can be bypassed and new problems it creates. But if it reduces the scope of where supply chains can attack and what they can do, then that’s still a massive win.
I also think you probably only need to restrict your dependencies. If you have a dep tree like this:
a
|-b
|-c
Then if crate a decides b isn't trusted, c would inherit the same trust rules. This would allow crates to be refactored, but keep the list of rules needed in big projects to a minimum. You just have to add explicit rules for sub-crates which need more permissions. That's probably not a big deal in most cases.
(You might still, sometimes, want to be able to configure your project to allow privileged operations in c but not b. But that's an edge case. We'd just need to think through and add various options to Cargo.toml.)
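To make that concrete, here's a hypothetical Cargo.toml sketch. None of these keys exist today; the syntax is entirely made up to illustrate inherited, per-dependency capability grants:

```toml
# Hypothetical syntax: capability grants per direct dependency,
# inherited by everything below it unless explicitly overridden.
[dependencies]
b = { version = "1", capabilities = [] }   # b and its dep c: pure compute only

# An explicit override for a transitive crate that genuinely needs more:
[capabilities.overrides]
some-fs-helper = ["fs:read"]               # made-up crate and grant names
```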
well... :-(
Actually, it's obvious that some authors might "turn evil" dumbly, by abusing some kind of privileged permissions. By chance, these kinds of supply-chain risks are "easily" identified because
1) the permissions are an "easy" risk indicator, so you can prioritize either pinning the library version (after validating it) or validating the new version
2) not so many libraries will use these permissions, so you "have time" to focus on them
3) in these libraries, the permissions will tell you which system calls/bad effects are possible, which will let you narrow the scope of investigation even further
So, IMHO, permissions are not really the end-all but only a tiny step.
The real problem is "how can human-size be used to subvert the program?" For example: what happens if the returned size "forgets" or "adds" 100 bytes for files bigger than 1 KB? As a reminder, STUXNET was about a speed a tiny bit faster than planned and shown...
> The real problem is "how can human-size be used to subvert the program?" For example: what happens if the returned size "forgets" or "adds" 100 bytes for files bigger than 1 KB? As a reminder, STUXNET was about a speed a tiny bit faster than planned and shown...
I read this argument in a similar vein to the argument against rust's unsafe blocks. "Look, C code will always need some amount of unsafe. So why bother sandboxing it?"
But in practice, having explicit unsafe blocks has been a massive win for safety in the language. You can opt out of it at any time - but most people never need to!
A 90% solution doesn't solve the problem entirely. But it does solve 90% of the problem. And thats pretty bloody good if you ask me! Sure - my safe rust decompression library could still maliciously inject code in files that it decompresses. But having checks like this would still reduce the security surface area by a huge amount.
Less implicit trust in random crate authors is a good thing. I don't want thousands of crate authors to be allowed to execute totally arbitrary code on my machine! The current situation is ridiculous.
As a frequent contributor to a number of crates, this isn't really true. Also, most popular crates actively deny use of unsafe.
It'd be good to track capabilities needed by libraries, so similarly to unsafe code, risky portions needing careful review are constrained and highlighted in some way.
> ast_nodes: Vec<&'Self::source str>,
Oh, that would be neat to replace the https://github.com/tommie/incrstruct I wrote for two-phase initialization. Unlike Ouroboros and self_cell, it uses traits so the self-references can be recreated after a move. Whether it's a good idea, I don't know, but the magic Ouroboros applies to my struct feels wrong. But I say that as someone coming from C++.
> if let Some(x) = some_var && some_expr { }
Coming from Go, I was surprised that something like
if let Some(x) = some_var; expr(x) { }
isn't a thing.
As I said in the post, you can also write this:
if let (Some(x), true) = (my_option, expr) {
But then it doesn't short-circuit (expr is evaluated in all cases, even when the optional is None).
Both approaches are also weird. It'd be much better to just fix the language to make the obvious thing work.
The same thing in C++17:
if (auto x = something();
expr1(x)) {}
else if (expr2(x)) {}
It's really neat to have the variable scoped to the if/else-clause.
I don't think any language helps with verifying that, and even in the ones that require it by spec, it's unclear if it's happening. Maybe you didn't really write a tail-recursive function because of a helper that you expected to be inlined. I guess it's easy to notice if you try to blow the stack in a unit test, though.
Yeah, it seems like a pretty easy feature to add. The compiler can pretty easily calculate the maximum stack size for every (bounded) call stack. It seems super useful to compute & expose that information - especially for embedded devices.
This sounds bad, but I wonder how many features have taken this long to include in other languages. Is this really as out of step as it sounds?
There is the move-fast-break-things mentality, but is that how you want to design a language?
Seems like we are missing some middle ground step, where there are good features, maybe even done, and stable, but they aren't getting worked into the main language.
Maybe a decision making problem.
I still wish the Python core team had abandoned the Python 3 experiment and gone with Python 2.x for life, warts and all. I learned to work with the warts, including the Unicode ones. I think a lot of us did.
Is frustration with Rust on the rise? I just started using Rust a few months ago and absolutely love it. I can't tell what's going on with the Rust Foundation, so I can only judge by reading sentiments. Nothing would kill my vibe harder than knowing smart people think the language isn't doing great :(
A very popular language that is actually very nicely designed and has a very good ecosystem (compilers, tools like the package manager, std lib)
I know I used to crush hard on Python and also got worried when there were dissonances within the Python Foundation. But as you progress, I assume the goings-on in certain language communities will take a back-seat to thinking deeply about how to solve the problems you are professionally tasked with. At least that's my experience.
As for Rust: It's gonna be around for a while. For the past months, I've been hearing a lot of chatter about how companies are using Rust for the first time in production settings and how their developers love it.
A lot of the complaints I see are not super well thought through. For example, a lot of people complain about async being too explicit (having a different "color" than non-async functions), but don't consider what the ramifications of having implicit await points actually are.
Even in this otherwise fine article, some of those desired Fn traits are not decidable (halting problem). There's a bit of a need to manage expectations.
There are definitely legitimate things to be desired from the language. I would love a `Move` trait, for example, which would ostensibly be much easier to deal with than the `Pin` API. I would love specialization to land in some form or another. I would love Macros 2.0 to land, although I don't think the proc-macro situation is as bad as the author presents it.
The current big thing that is happening in the compiler is the new trait solver[0], which should solve multiple problems with the current solver, both cases where it is too conservative, and cases where it contains soundness bugs (though very difficult to accidentally trigger). This has been multiple years in the making, and as I understand it, has taken up a lot of the team's bandwidth.
I personally like to follow the progress and status of the compiler on https://releases.rs/. There's a lot of good stuff that happens each release, still.
[0]: https://rustc-dev-guide.rust-lang.org/solve/trait-solving.ht...
To which many sensible people respond “I don’t want to think about monads either, but is the pain point really that bad?”
arg: impl Iterator<Item: Debug>
It would probably just be TS.
Some are things that will never be stable, because they're not a feature; as an example, https://github.com/rust-lang/rust/issues/90418
Yeah. This is someone who's frustrated that he doesn't wake up to headlines that read "Hey babe, new Rust feature just dropped".
If that's what he's looking for, he should probably switch to the Javascript ecosystem.
Smart people will always do that, I've found it's better to ignore the chatter and focus on your own experience.
https://github.com/rust-lang/cargo/issues/2644
It's a clusterfuck of people misdirecting the discussion, the maintainers completely missing the point, and in the end it's still not even been allowed to start.
Cargo can download-only, it cant build only dependencies. If you, for whatever reason (ignoring the misleading docker examples) want to build your dependencies separately from your main project build, you are sol unless you want to use a third party dependency to do so.
I'm not sure what the OP is using, but with LSP I do get the error message in my editor (nvim) before any compiling (though I'm pretty sure some checking is happening in the background).
> Compile-time Capabilities
Not sure how this makes any sense when Rust compiles to multiple targets. Should all libraries become aware of all the "capabilities" out there? Also, this can already be implemented using features, which keeps things minimal.
> Comptime
I can't make sense of what the OP's issue is here.
> Make if-let expressions support logical AND. It's so simple, so obvious, and so useful. This should work: if let Some(x) = some_var && some_expr { }
The example makes no sense.
Great article apart from that.
The traditional idea of a "compiled language" usually means a language designed for mostly batch compilation: the compiler is not part of the (potential) execution runtime. "Compile time" and "run time" are not the same. In Lisp they are allowed to be the same.
javascript:(function(){var newSS, styles='* { background: white !important; color: black !important } :link, :link * { color: #0000EE !important } :visited, :visited * { color: #551A8B !important }'; if(document.createStyleSheet) { document.createStyleSheet("javascript:'"+styles+"'"); } else { newSS=document.createElement('link'); newSS.rel='stylesheet'; newSS.href='data:text/css,'+escape(styles); document.getElementsByTagName("head")[0].appendChild(newSS); } })();

Someone just has to do it.
My wishlist:
* allow const fns in traits
* allow the usage of traits in const exprs. This would allow things like using iterators and From impls in const exprs, which right now is a huge limitation.
* allow defining associated type defaults in traits. This can already be worked around using macros somewhat effectively (see my supertrait crate) but real support would be preferable.
* allow eager expanding of proc macro and attribute macro input, perhaps by opting in with something like `#[proc_macro::expand(tokens)]` on the macro definition. Several core "macros" already take advantage of eager expansion, we peasants simply aren't allowed to write that sort of thing. As a side note, eager expansion is already possible for proc and attribute macros designed to work _within proc macro crates_, for example this which I believe is the first time this behavior was seen in the wild: https://github.com/paritytech/substrate/blob/0cbea5805e0f4ed...
* give build.rs full access to the arguments that were passed to cargo for the current build. Right now we can't even tell if it is a `cargo build` or a `cargo doc` or a `cargo test` and this ruins all sorts of opportunities to do useful things with build scripts
* we really need a `[doc-dependencies]` section in `Cargo.toml`
* give proc macros reliable access to the span / module / path of the macro invocation. Right now there are all sorts of projects that hack around this anyway by attempting to locate the invocation in the file system which is a terrible pattern.
* allow creating custom inner attributes. Right now core has plenty of inner attributes like `#![cfg(..)]` etc, and we have the syntax to define these, we simply aren't allowed to use custom attribute macros in that position
* revamp how proc macro crates are defined: remove the `lib.proc-macro = true` restriction, allowing any crate to export proc macros. Facilitate this by adding a `[proc-macro-dependencies]` section to `Cargo.toml` that separately handles proc-macro-specific dependencies. Proc macros themselves would have access to regular `[dependencies]` as well as `[proc-macro-dependencies]`, allowing proc macro crates to optionally export their parsing logic in case other proc macro crates wish to use this logic. This would also unblock allowing the use of the `$crate` keyword within proc macro expansions, solving the age old problem of "how do I make my proc macro reliably refer to a path from my crate when it is used downstream?"
* change macro_rules such that `#[macro_export]` exports the macro as an item at the current path so we can escape from this ridiculousness. Still allow the old "it exports from the root of the current crate" behavior, just deprecate it.
And there are a lot of things that are weird or clunky.
I honestly don't "get" the "no classes, just struct methods" thing. Sure, C++ is kinda like that, but the ergonomics are weird. I'd much rather have the class/methods declaration most languages use.
Lifetimes are good, but the implementation is meh. Most cases could do with a default lifetime.
Copy/borrow strictness is good to think about, but in most cases we don't care? Copy should probably be the default, and then you borrow in special cases.
That phone couldn't even send MMS.... You had to jailbreak it to be able to do normal stuff that the phones could do for ages back then.
Languages like C++ and Python are wildly successful, and I don't think anyone would call them perfect.
The dependency point is valid, but I'm not sure it's easily solvable in general. It doesn't seem like a Rust-specific issue. See npm and Python's pip: blind trust is par for the course except in very rigorous environments.