Now with the smarter borrow checker and ergonomic improvements most of it is unnecessary, and the error messages know how to suggest the rest.
Just a note, I agree with both of you :)
- an enum (or struct)
- its Display implementation
- its Error implementation, now just one function
And a few `impl From<OtherError> for MyError` to make the `?` operator work.
For a library it's really not that much work. And even that can be simplified further to a few derive macros with another library: thiserror.
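A minimal hand-rolled version of the pieces listed above (the names `MyError` and `parse` are just illustrative) looks like this:

```rust
use std::fmt;

// An enum covering the error cases (one variant here is unused, for illustration).
#[derive(Debug)]
enum MyError {
    Io(std::io::Error),
    Parse(std::num::ParseIntError),
}

// Its Display implementation.
impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            MyError::Io(e) => write!(f, "I/O error: {}", e),
            MyError::Parse(e) => write!(f, "parse error: {}", e),
        }
    }
}

// Its Error implementation -- now just an empty impl block.
impl std::error::Error for MyError {}

// The From impls are what make `?` convert errors automatically.
impl From<std::io::Error> for MyError {
    fn from(e: std::io::Error) -> Self { MyError::Io(e) }
}
impl From<std::num::ParseIntError> for MyError {
    fn from(e: std::num::ParseIntError) -> Self { MyError::Parse(e) }
}

fn parse(s: &str) -> Result<i32, MyError> {
    // `?` turns a ParseIntError into MyError via the From impl above.
    Ok(s.trim().parse::<i32>()?)
}

fn main() {
    assert_eq!(parse(" 42 ").unwrap(), 42);
    assert!(parse("not a number").is_err());
}
```

With thiserror, the Display and From impls above collapse into derive attributes.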
What anyhow makes easier is collecting many types of errors together and reporting them to the user.
https://github.com/rust-lang/rust/issues?q=label%3Arust-2-br...
• Struct literal syntax should have used C99 syntax. The language uses `name:type` everywhere except struct literals which use `name:value`, and this gets in the way of adding new syntax (type ascriptions).
• `Box` is semi-magical. Maybe it could have been a regular struct. Or maybe magical all the way to allow placement new and destructuring.
• Types are in borrowed, owned-fixed-size and owned-growable variants, but naming of them is a bit ad-hoc. There's str/String, but Path/PathBuf (instead of e.g. String/StringBuf or path/Path).
• Split between libcore and libstd is awkward to manage and not a good fit for WASM. It could have been one libstd with feature toggles (this might still happen).
• Some people think split between Eq and PartialEq is an overkill, and just makes floats annoying.
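On the last point, a quick sketch of why floats are annoying under the Eq/PartialEq split:

```rust
fn main() {
    // NaN breaks reflexivity, so f64 implements PartialEq/PartialOrd
    // but not Eq/Ord.
    assert!(f64::NAN != f64::NAN);

    // Consequence: no plain .sort() on a Vec<f64>; you have to spell
    // out a total order yourself.
    let mut v = vec![3.0_f64, 1.0, 2.0];
    v.sort_by(|a, b| a.partial_cmp(b).unwrap());
    assert_eq!(v, [1.0, 2.0, 3.0]);
}
```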
There are things that are still unsolved in Rust, like unmovable types and self-referential structs. But it's hard to say they're a mistake — as far as we know, they're a necessary limitation to make other useful features work.
(I disagree personally on struct literal syntax but you're not alone, it's true.)
They then said that they'd probably do the same thing in 2021.
Now they're debating whether a 2021 edition is needed since there aren't any breaking changes with broad support except for the removal of deprecated syntax and APIs.
This is strong evidence that the answer to your question is "no".
There are also some thoughts about Rust-like languages with some differences, see https://boats.gitlab.io/blog/post/notes-on-a-smaller-rust/ as a prominent example.
That is, there's a path dependence, where the existence of try!() made the ? operator viable.
The futures design was built with epoll in mind, and now that people are trying to wrap it around io_uring, they are feeling some pain. Would a different design have worked better, without massive drawbacks?
Could you provide an example? Because I don't really see it, except by requiring that non-const generics also be explicitly annotated?
Now, we can't just do
drop<T>(T)
because of DSTs, so we'll need a new type of consuming reference. And the dual to that, an initializing reference, would also solve the problem of creating DSTs in preallocated memory. Now, both could use MaybeUninit, but it would be better to just have types that vary with the CFG, so one can ensure that no matter how one gets to point B, the memory is now initialized.
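For reference, the `MaybeUninit` route available today looks roughly like this (a sketch, not the proposed initializing-reference design):

```rust
use std::mem::MaybeUninit;

fn main() {
    // Preallocated, uninitialized memory.
    let mut slot: MaybeUninit<u64> = MaybeUninit::uninit();

    // Initialize it in place.
    slot.write(42);

    // The caller -- not the type system -- must ensure initialization
    // actually happened on every path before this call.
    let value = unsafe { slot.assume_init() };
    assert_eq!(value, 42);
}
```

The point above is that `assume_init` is the programmer's promise; a type that changes with control flow could make the compiler check it instead.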
but Rust's drop isn't for that purpose. It's not for the actual cleanup of the struct and its children; it's for additional cleanup before the children are deleted. So it has to be mutable. The compiler synthesizes the "delete children" code.
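A small sketch of what that means in practice (`Guard` is a made-up type):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static DROPPED: AtomicBool = AtomicBool::new(false);

struct Guard;

impl Drop for Guard {
    // `drop` takes `&mut self`: it runs *before* the compiler-generated
    // code that drops the struct's fields. You never free the fields
    // yourself; the compiler synthesizes that part.
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

fn main() {
    {
        let _g = Guard;
        assert!(!DROPPED.load(Ordering::SeqCst)); // still alive here
    } // `_g` leaves scope: our drop runs, then field drops are synthesized
    assert!(DROPPED.load(Ordering::SeqCst));
}
```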
(I don't understand your point about drop<T>)
A Rust enum is an actual enumeration type, which C does not have. This is far more powerful.
AKA Rust's enums are type-safe, not aliases for integers with some named constants.
Rather than Rust's, I'd say the mistake is C's enums. If you don't want enums, don't have them.
I guess that's one place where Go did something good: they didn't want to improve on C's enums with proper ADTs so they just stripped out the entire thing, an "enum" is an integer and a bunch of constants. Which you can also use to represent these non-enums in Rust though it doesn't have the iota / step convenience. A simple recursive macro might be able to handle it though.
For C compatibility you have lots of options like #[repr(C)] or #[repr(i32)] to be C compatible. So not sure what you are referring to?
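A quick sketch of both sides (illustrative names):

```rust
// A data-carrying enum (an ADT), which C enums can't express:
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // The compiler forces this match to handle every variant.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

// A fieldless enum with a fixed integer representation for C FFI:
#[repr(i32)]
#[derive(Clone, Copy, PartialEq, Debug)]
enum Status {
    Ok = 0,
    Error = 1,
}

fn main() {
    assert_eq!(area(&Shape::Rect { w: 2.0, h: 3.0 }), 6.0);
    assert_eq!(Status::Error as i32, 1); // same layout as a C enum
}
```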
I primarily live in the .NET world, but Rust seems extremely close to F# in terms of compiler safety, and I'm trying to decide what programming language to learn next.
There's some highly specialist aspects you _could_ learn (like anything I suppose), but I've got enough to get by and be as productive as I'd like without needing to get to that level.
So there's no harm in learning enough to get by for fun.
Compare that to e.g. Haskell, which I learned at university; my memory is that you'd have to invest quite heavily to make it do anything useful.
I would say Rust is the best day-to-day experience out of these 3, except for glitchy editor tooling and the long compile times. We have 1000-line services that take close to 10 minutes for a release build, which is not ideal.
I would not necessarily suggest learning Rust to get a job though. We mostly stopped mentioning it in our job ads because we don’t want people applying for the “hip stack”. Usually, competent programmers with some basic understanding of how manual memory management works have no trouble picking it up as they go.
Can you elaborate on that? I'm maybe thinking of changing jobs soon and my #1 want is that I can commonly program in Rust as opposed to C. Why would you explicitly avoid mentioning Rust?
I started learning just as the async/await and futures features started causing churn and confusion in the ecosystem - it was slightly irritating at the time, but in hindsight, I'm still glad I started learning then.
[1] https://dropbox.tech/infrastructure/rewriting-the-heart-of-o...
Our core tech is in pure Rust and I have to admit that coming from Javaland it's refreshing to deeply trust your code. It's hard to convey this idea of 'if it compiles it works' but I no longer worry about showing off prototypes to people. If they compile they work.
There is a learning curve as you are introduced to new ideas and design patterns but it's worth it! Rust has made me a better programmer.
I hope the next 5 years bring compiler speed ups.
This work seems to be gaining momentum, if anything. The next version (1.44) will significantly improve the performance of programs that use async [4]. The next version of LLVM will be faster [5] (hopefully reversing the perf regressions in LLVM 10).
[1] - https://blog.mozilla.org/nnethercote/2020/04/24/how-to-speed...
[2] - https://blog.mozilla.org/nnethercote/2019/12/11/how-to-speed...
[3] - https://blog.mozilla.org/nnethercote/2019/10/11/how-to-speed...
[4] - https://ferrous-systems.com/blog/stable-async-on-embedded/
[5] - https://nikic.github.io/2020/05/10/Make-LLVM-fast-again.html
https://github.com/rust-lang/wg-traits/tree/master/minutes
https://github.com/rust-lang/lang-team/tree/master/minutes
https://rust-lang.github.io/compiler-team/
There's a youtube channel too. (Also start here: https://blog.rust-lang.org/inside-rust/2020/03/28/traits-spr... )
The important thing is the `chalk` work. And basically the rust-analyzer approach drives that.
Hah, this branch PR got merged 6 days ago into rust master: https://github.com/rust-lang/rust/pull/69406
So I would anticipate they will eventually stabilize like many other features have and become part of stable.
That is not to say it will certainly become industrially important. It depends entirely on the numbers. Today, the total number of working Rust coders may be less than the number who start coding C++ in any given week. (This is a simple consequence of the difference between a base of thousands vs. millions.) But if it can sustain exponential growth long enough, and nothing else comes up in the meantime to take the wind from its sails, it should get there.
Personally I used iced [1] a bit and found it very pleasant to use. iced is cross platform, sponsored and very active.
> Iced moves fast and the master branch can contain breaking changes!
In general, this crate isn't even 1.0 or stable for production use. I'd rather wait until it is mature before touching it. Until then, Qt is the way to go.
Just depends what you want. A common pattern is essentially to build your app as a Rust library or CLI binary that your GUI wraps in whatever is most convenient for the GUI.
I've eagerly awaited many features to make this work reasonably well, and I've been very pleased how much rust has helped my use case over the last few years.
Although lots of languages have a C FFI, that's really not enough to extend a complex codebase. Postgres has its own setjmp/longjmp-based error handling, its own system of allocators, it needs a way to find the right functions in an extension and call them the right way, its own way of dealing with signals, etc. There are zillions of internal structs, and the extension needs to be able to read/modify them without copying/translation. Oh, and also, postgres doesn't like threads at all, so the extension better not make any (at least not ones that call back into postgres APIs).
The only language even close to getting all of this right is rust:
* it has no runtime that causes problems with threading, scheduling, signal handling, or garbage collection
* typically GC'd languages can't operate very well on unmodified C structs; rust doesn't have a GC so it can
* rust goes out of its way to support C-compatible structs without any copying/translation, and can even treat some plain C representations as more interesting types in a binary-compatible way (like a nullable pointer in C could be treated as an Option<&MyStruct> in rust)
* the tokio library allows nice concurrency without creating threads if you use the CurrentThread runtime
* it supports changing the global allocator to be the postgres allocator
* rust has good procedural macro support, which is important because a lot of postgres APIs heavily use macros, so making the rust version ergonomic requires similar macro magic
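As a small illustration of the nullable-pointer point above (`MyStruct` and `from_c` are made-up names, not from the actual project):

```rust
use std::mem::size_of;

#[repr(C)]
struct MyStruct {
    x: i32,
}

// A nullable C pointer can be viewed as Option<&MyStruct> with no
// translation: Rust guarantees the None case is represented by null.
unsafe fn from_c<'a>(ptr: *const MyStruct) -> Option<&'a MyStruct> {
    ptr.as_ref() // None when the pointer is null
}

fn main() {
    // Same size and ABI as a plain pointer -- no wrapper overhead.
    assert_eq!(size_of::<Option<&MyStruct>>(), size_of::<*const MyStruct>());

    let s = MyStruct { x: 7 };
    let r = unsafe { from_c(&s as *const MyStruct) };
    assert_eq!(r.map(|m| m.x), Some(7));
    assert!(unsafe { from_c(std::ptr::null()) }.is_none());
}
```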
Areas rust could be more helpful, but which I'm trying to address on my own to the extent that I can:
* Support for setjmp/longjmp. I know this is not easy, but important for interacting with C code that already uses it. I realize it would be unsafe, and that it can't be wrapped up in a safe way directly, and it would be delicate to create safe APIs that use it at all. But I still want it. I made a crate[2] that adds support, but it has a few issues that can't be resolved without compiler support.
* Better support for cdylib shared libraries that might call back into the host program that loads the library. Right now, you have to pass platform-specific flags to get it to link without complaining about undefined symbols (because they won't be resolved until the library is loaded into the host program). Also, it's difficult to test the shared library, because you have to guess at the location of the built library to be able to tell the host program where to find it before you can begin your test. I made a crate[3] to help with these things also, but it would be nice to have better support.
Oh, and one more thing on my wishlist not directly related to this project is that it would be nice to have support for data-structure-specific allocators (like a mini-allocator just for a hash table). Then, you'd be able to monitor and control memory usage by inspecting that allocator; and when you destroy or reset the hash table, you can know that the memory is freed/cleared as well (without worrying about fragmentation). Maybe these mini-allocators could be used in other contexts too, but it would probably be easiest to get the lifetimes to work out if they were tied to a data structure.
[1] https://github.com/jeff-davis/postgres-extension.rs [2] https://github.com/jeff-davis/setjmp.rs [3] https://github.com/jeff-davis/cdylib-plugin.rs
* Splitting the compiler into libraries so it's easier to iterate on them, while also making it easier to develop static analysis tools - http://smallcultfollowing.com/babysteps/blog/2020/04/09/libr...
* Improving the async experience - http://smallcultfollowing.com/babysteps/blog/2020/04/30/asyn...
Successful eco systems are pragmatic and idiomatically straightforward.
Everything & the kitchen sink in a language is not a recipe for success.
Every language that has a long lifespan spent a long time in feature minimal stasis too. The world won't learn a moving target.
I learned Rust a few years ago and without keeping up with the latest changes too much I still feel confident I can work on current code.
So, I think the argument that "very special features" shouldn't be counted toward language complexity/growth is wrong, IMO. I would even say that there needs to be even more focus on those features, since they tend to be not widely known, not familiar, and often there is less documentation about them, so the likelihood that they make code hard to understand is even higher.
This is not to say that those features are unnecessary. I just don't think the justification "they are not what a pragmatic programmer will see" is good.
Building C++ (or C) projects is such a clusterfuck that it's given rise to header-only libraries.
In Rust, everything is "cargo build", and adding a dependency is one line. This has only failed for me when there's a dependency on a C system library that I can't satisfy.
Is it bad to have too many dependencies? Sure, maybe. Is that an excuse to have artificial friction? No. I eagerly await the day when meson or conan or whatever becomes The C++ Dependency And Package Manager.
My other favorite part of Rust is the elitist language features like iterators, immutable borrows, functional programming, etc.
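e.g. a typical iterator chain combining those features:

```rust
fn main() {
    let data = vec![1, 2, 3, 4, 5, 6];

    // Iterator adapters compose lazily; `.iter()` is an immutable
    // borrow, so `data` stays usable afterward.
    let sum_of_even_squares: i32 = data
        .iter()
        .filter(|&&x| x % 2 == 0) // keep 2, 4, 6
        .map(|&x| x * x)          // square them
        .sum();

    assert_eq!(sum_of_even_squares, 56); // 4 + 16 + 36
    assert_eq!(data.len(), 6); // still borrowed, not consumed
}
```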
So many of these features Rust adds just fill in holes in an already extensive ecosystem.
I agree some feature development could be toned down to mitigate the moving target problem.. hell I think last years poll expressed roughly that. But, I think there's still a ton of features yet to come that merely complete what we already have.
“As some keen HN commenters have pointed out, it looks like the rust program is not actually equivalent to the go program. The go program parses the string once, while the rust program parses it repeatedly inside every loop.”
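i.e. the fix is hoisting the parse out of the loop — schematically (this is not the benchmark's actual code):

```rust
fn main() {
    let input = "a,b,c";

    // Parse once, outside the loop (what the go program does)...
    let parsed: Vec<&str> = input.split(',').collect();

    // ...then reuse the parsed result on every iteration, instead of
    // re-running `split` inside the loop body each time.
    let mut count = 0;
    for _ in 0..3 {
        count += parsed.len();
    }
    assert_eq!(count, 9);
}
```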
In general for most long lived workloads, I would expect Go to be approximately 15-25% slower than an equivalent C/C++/Rust program because of the CPU overhead of the Go GC. The Go team have done a lot of great work to optimize pause times and memory consumption though.
See the discussion around it and the patches made by the community and the final results.
See the individual benchmarks: https://github.com/christianscott/levenshtein-distance-bench...
$ hyperfine go/out 'node javascript/main.js' rust/target/release/rust

Benchmark #1: go/out
  Time (mean ± σ):      1.888 s ±  0.013 s    [User: 2.040 s, System: 0.045 s]
  Range (min … max):    1.875 s …  1.918 s    10 runs

Benchmark #2: node javascript/main.js
  Time (mean ± σ):      4.257 s ±  0.033 s    [User: 4.295 s, System: 0.042 s]
  Range (min … max):    4.221 s …  4.338 s    10 runs

Benchmark #3: rust/target/release/rust
  Time (mean ± σ):     874.1 ms ±  50.8 ms    [User: 5.688 s, System: 0.830 s]
  Range (min … max):   813.5 ms … 1001.9 ms    10 runs

Summary
  'rust/target/release/rust' ran
    2.16 ± 0.13 times faster than 'go/out'
    4.87 ± 0.29 times faster than 'node javascript/main.js'

Never felt about it this way.