Rust 1.45 will be the Rocket Release. It unblocks Rocket running on stable as tracked here https://github.com/SergioBenitez/Rocket/issues/19
This is so excellent, and I love seeing long-term, multiyear goals get completed. It isn't just this release, but all the releases in between. The Rust team and community are amazing.
Or maybe it's an explosive weapon crafted with metal pipe and gunpowder. https://rust.fandom.com/wiki/Rocket
I spent a few years as a scientific programmer, and this is exactly the sort of thing that bites you on the behind in C/C++/Fortran: the undefined behaviour can actually manifest as noise in your output, or as really hard-to-track-down, intermittent problems. A big win to get rid of it.
The overflow-is-a-bug saturating `as` would be the default, and there would be a separate `sat_as` for "I know this saturates, it isn't a bug." ::sigh:: Rust went through this same debate for integers, initially rejecting the argument I'm giving here but switching back to it after silently-defined integer overflow concealed severe memory-unsafety bugs in the standard library.
Well-defining something like saturation actually reduces the power of static and dynamic program analysis because it can no longer tell if the overflow was a programmer-intended use of saturation or a bug.
Having it undefined was better from a tools perspective, even if worse at runtime, because if a tool could prove that overflow would happen (statically) or that it does happen (dynamically) that would always be a bug, and always be worth bringing to the user's attention.
So now you still get "noise" in your output, but it's the harder to detect noise of consistent saturation, and you've lost the ability to have instrumentation that tells you that you have a flaw in your code.
So I think this is again an example of rust making a decision that hurts program correctness.
> So I think this is again an example of rust making a decision that hurts program correctness.
Could you expand on other examples?
This looks very dangerous, because it essentially does the "nearest to right" thing. Say, you cast 256 to a u8, it's then saturated to 255. That's almost right, and a result might be wrong only by 0.5%. Much harder to detect than if it is set to 0.
It’s not supposed to. Type casting with ‘as’ is supposed to be lightweight and always succeed; there is no room in the type system to return an error. In case lossless casting is not possible, some value still has to be returned. Until now, this was outright UB — meaning the compiler is not even obligated to keep it consistent from one build to another. Saturating, while still not optimal, is at least deterministic.
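Concretely, the saturating behaviour that this release defines can be sketched like this (out-of-range values clamp to the nearest representable integer, and NaN maps to zero):

```rust
fn main() {
    // Out-of-range values clamp to the type's bounds:
    assert_eq!(256.0_f32 as u8, 255);
    assert_eq!(-1.0_f32 as u8, 0);
    assert_eq!(3e9_f32 as i32, i32::MAX);
    // NaN has no "nearest" integer, so it becomes zero:
    assert_eq!(f32::NAN as u8, 0);
}
```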
> This looks very dangerous, because it essentially does the "nearest to right" thing.
That’s why the intention is to introduce more robust approximate-conversion functions and eventually, probably, deprecate ‘as’ casts altogether. There have been a number of discussions about this; the current disagreements seem to be about how to handle the various possible rounding modes.
For better or worse, Rust 1.0 released with the philosophy that the `as` operator is for "fast and loose" conversions where accuracy is not prioritized; e.g. casting a u32 to a u8 would always risk silently truncating in the event the value was too large to represent. Over the years the language has added a lot of standard library support for bypassing the `as` operator entirely, and I think the prevailing opinion at this point might be that if they had it to do all over again they might not have had the `as` operator at all, instead making do with a combination of ordinary error-checked conversion methods and the YOLO unsafe unchecked methods as seen here.
Which is to say: it's not that they technically couldn't have gone with the panic approach, but (performance implications aside) I think they'd rather just start moving away from `as` in general wherever possible.
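The error-checked conversions mentioned above are the `TryFrom` family; a quick sketch of the contrast with `as` on integers:

```rust
use std::convert::TryFrom;

fn main() {
    // Error-checked: the caller decides what an out-of-range value means.
    assert_eq!(u8::try_from(200_u32).unwrap(), 200);
    assert!(u8::try_from(300_u32).is_err());

    // Contrast with `as`, which silently truncates integers:
    assert_eq!(300_u32 as u8, 44); // 300 mod 256
}
```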
NaN to 0 is a bit more concerning, but inconvenience and compatibility need to be weighed against catching every error (no type system will catch every bug).
Not sure what you mean here, and I don't have the standard at hand ATM, but I'm quite sure this is undefined behaviour in Fortran.
But yes, I agree defined behaviour is good. Undefined behaviour is occasionally good for optimization, at the cost of gray hairs for users.
Some things are well specified. Some things are mostly specified. Some things are still very much up in the air.
Though I try to always scrutinize floating-point/integer conversions during code reviews. The default cast of a floating-point value to an integer (truncation toward zero) is frequently not what you want, however. In the code we write, for example, you will usually round to the nearest integer instead; we don't normally need the fancier rounding schemes.
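The truncation-vs-rounding difference in a nutshell (a small illustrative sketch):

```rust
fn main() {
    let x = 2.7_f64;
    // `as` truncates toward zero:
    assert_eq!(x as i64, 2);
    assert_eq!(-2.7_f64 as i64, -2);
    // Rounding to nearest is usually what measurement code wants:
    assert_eq!(x.round() as i64, 3);
    assert_eq!((-2.7_f64).round() as i64, -3);
}
```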
To be honest... struggling to see why you'd do the former, outside of situations where you're happy with saturation, though I haven't thought about it a lot. Agreed that a consistent behaviour is a big help - I can work with "Rust does x in this scenario".
99% of my development work these days is C with the target being Linux/ARM with a small-ish memory model. Think 64 or 128MB of DDR. Does this fit within Rust's world?
I've noticed that stripped binary sizes for a simple "Hello, World!" example are significantly larger with Rust. Is this just the way things are and the "cost of protection"? For reference, using rustc version 1.41.0, the stripped binary was 199KiB and the same thing in C (gcc 9.3) was 15KiB.
That is a bit extreme but it demonstrates the lower bound.
There are a lot of things you can do to drop sizes, depending on the specifics of what you're doing and the tradeoffs you want to make.
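For instance, a sketch of the commonly cited knobs (exact savings vary by program) is to tune the release profile in Cargo.toml and strip the binary afterward:

```toml
# Cargo.toml -- trade compile time and unwinding for smaller binaries
[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # cross-crate link-time optimization
codegen-units = 1   # better optimization, less parallel codegen
panic = "abort"     # drop the unwinding machinery
```

Then run `strip` on the resulting binary as usual.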
Architecture support is where stuff gets tougher than size, to be honest. ARM stuff is well supported though, and is only going to get better in the future. The sort of default "get started" board is the STM32F4 discovery, which has 1 meg of flash and 192k of RAM. Seems like you're well above that.
FYI Rust (and Go) currently don’t work on the new Apple ARM macs.
Especially in your case you'll find Rust to be a joy to use: you'll have way more confidence in your code being able to run for months without segfaults or memory leaks. And if you have a good understanding of the C memory model using Rust will be a breeze.
Worth noting that a lot of the code size is a constant addition that won't really scale with your program code.
I really hope that more enlightened vendors (hi Nordic Semiconductor) will start supporting Rust on their platforms.
At least i+1 elements, right? Or am I getting caught up by one of the three hardest problems again?
https://github.com/rust-lang/blog.rust-lang.org/commit/fe241... (should roll out in a few minutes)
1. Naming things.
2. Cache invalidation.
3. Off by one errors.
Number 3 is apparently the hardest one.
———-
Original question: Out of curiosity, why the +1?
But I understand it could be more fun for the devs :-)
Features like CORS are available via third party fairings, but I could see them being incorporated in the future in the `rocket_contrib` portion. I think the goal is to keep the overall framework pretty light and put more things into the `rocket_contrib` portion.
In general, stable proc macros is an awesome step for Rust.
At the moment, I'd rather use something like actix-web instead.
I guess with younger and younger kids learning programming these days, maybe they can handle more? I am not sure if my son would understand all of the intricacies in his first semester.
[1] https://www.cs.umd.edu/class/spring2020/cmsc330/
[2] https://www.cs.umd.edu/class/spring2020/cmsc330/lectures/25-...
Maybe the professor for this class could assign a non-trivial project in C at the start of the semester then the same project again at the end of the semester except in Rust.
Of course they won't be able to grasp everything that Rust has to offer, but that is true of any language. I think Rust will expose them to many theoretical and practical CS concepts that they will be glad to have at least heard of during their studies.
In our degree, the first-year students learn to program with Python, Racket (or OCaml, depending on which teacher they get), C, Prolog, Bash, … Each of these languages has way more to offer than what they can grasp. But each of them offers a different approach to programming and helps the students to actually learn to program (rather than learning to write Java code, for example).
The course in question is actually called "Advanced programming". I want to experiment with a Rust course in second year as a kind of follow-up to both the functional programming course (the Racket/OCaml one) and the imperative programming course (the C one) that they have during the first year. If it really doesn't work, we'll change to something else or simply swap back to it being "Advanced C programming", for instance. But first, let's try to make the Rust experiment work. I really think it can benefit our students!
This is an eternal debate about Rust. I don't think it's required though. Can you appreciate functions without understanding assembly and calling conventions? I believe the answer is yes :)
Maybe take a look at this I2C lib: https://github.com/rust-embedded/rust-i2cdev
I use C++ at work, which admittedly isn't the language I use most, and macros are used quite a bit in the code base. I find they just make the code harder to read, reason about, debug, and sometimes even write. I don't see them really living up to their claimed value.
Is there something different about Rust's macros that make them better?
Ouch. Modern C++ has alternatives to C macros. Is it an old code base, or is it just written in the C++98 style?
Today people will typically use constexpr instead of #define. While macros can possibly do some funky things constexpr can't, you'd be hard pressed to find those things. (C++ supports constexpr if, constexpr functions, constexpr lambdas, math, and so on.)
If you have the time and effort, it may be worthwhile to slowly start modernizing the code base, bit by bit.
The advantage of modern day C++ is it catches errors at compile time that older versions did not and would crash while the software was running. You might improve the stability of the code base if you help out.
C macros are lacking because they are very primitive; e.g. they have no type system. They are also hardly Turing-complete: it's extremely hard to write a meaningful algorithm in them. IMHO the real macros of the C++ language are templates and constexpr, although they are limited in other ways; e.g. it's hard to extend the syntax using them or to do certain things like making the calling function return. They grow ever more powerful, with their own type system (concepts) and things like std::embed and static reflection, so they finally feel like a real language, albeit a clumsy, purely functional language that feels alien to a C++ programmer without exposure to Haskell.
Rust macros are actually meant to feel like Rust, not some ad-hoc bolted on language.
I think this may be why I'm having a hard time appreciating them. Probably half the macros I see could just be a function call. Most of the rest are hiding a conditional return or goto, which I find to be a net negative.
I'll probably have to use a language with good macros before I can appreciate them.
When I say hygienic, I mean that when a macro uses a variable that isn't in the parameters or isn't static, it will be a compile error, as it cannot know the scope. Any variables defined inside are scoped. Only parameters passed in can be referenced from outside the scope of the macro.
They're also notated differently from regular functions, and they can't appear just anywhere, so it's obvious when you see a macro that it will expand into some code that won't break any of the other code in your function.
Of course people can write really terrible macros but typically macros serve a single very simple task and should be well documented and in my experience they often are.
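A minimal sketch of that hygiene in action (`double!` is a made-up example macro):

```rust
// `tmp` inside the macro expansion is a different binding from the
// caller's `tmp`; hygiene keeps them from colliding.
macro_rules! double {
    ($x:expr) => {{
        let tmp = $x;
        tmp * 2
    }};
}

fn main() {
    let tmp = 10;
    assert_eq!(double!(tmp + 1), 22);
    assert_eq!(tmp, 10); // the caller's binding is untouched
}
```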
As for procedural macros: they are just AST-in, AST-out functions, but written as a library in regular Rust rather than in some language thrown on top. That makes it easy to reason about the code and to make it safe. If you write them well, they can be very good at reporting usage errors to users.
The std lib also provides some very straightforward yet useful macros to act as inspiration. String formatting (`format!`) is an inline macro. `vec!` is a macro to quickly build Vectors. Deriving Debug, partial equality, and default values are all procedural macros, and they automate very straightforward yet tedious tasks you'd otherwise do by hand all the time.
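Those standard macros, side by side in a small sketch:

```rust
// derive(...) generates the Debug, PartialEq, and Default impls for us.
#[derive(Debug, PartialEq, Default)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let v = vec![1, 2, 3]; // vec! builds a Vec in place
    assert_eq!(v.len(), 3);

    // format! uses the derived Debug impl:
    let s = format!("{:?}", Point::default());
    assert_eq!(s, "Point { x: 0, y: 0 }");

    // derived PartialEq and Default:
    assert_eq!(Point { x: 0, y: 0 }, Point::default());
}
```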
Given the macros people publish, I would say they've done a good job at securing them as a useful feature. I've not seen many instances in actual code bases of messy macros. For examples of great macros, see serde[0], clap[1], inline-python[2]
[0]: https://github.com/serde-rs/serde [1]: https://github.com/clap-rs/clap [2]: https://docs.rs/inline-python/0.5.3/inline_python/
easy: far easier than gradle and maven, imho even easier than go. Definitely easier than cmake.
- Having to learn weird syntax and constantly look up the reference manual (CMake)
- Having to manually add source files (qmake)
- Sometimes it just needs a clean and nobody knows why (Visual Studio)
- Having to remember to set up debug and release builds, decide your directory layout for everything, and figure out what 'a shadow build' is, and who gives a crap since it all takes too much HDD space either way (qmake, CMake, make)
Also having tests built-in is really nice. Rust is the only language where I bother writing tests. Everything else makes it too hard, as if entry points into your binary are supposed to be rare and expensive.
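The built-in story really is just an attribute away; a sketch with a made-up `parse_port` helper:

```rust
// `cargo test` compiles the #[cfg(test)] module and runs every #[test]
// function; no external harness or separate entry point needed.
fn parse_port(s: &str) -> Option<u16> {
    s.parse().ok()
}

#[cfg(test)]
mod tests {
    use super::parse_port;

    #[test]
    fn accepts_valid_port() {
        assert_eq!(parse_port("8080"), Some(8080));
    }

    #[test]
    fn rejects_out_of_range() {
        assert_eq!(parse_port("99999"), None); // doesn't fit in u16
    }
}

fn main() {
    // The plain binary still builds; tests only exist under `cargo test`.
    assert_eq!(parse_port("8080"), Some(8080));
}
```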
The new API to cast in an unsafe manner is:

    let x: f32 = 1.0;
    let y: u8 = unsafe { x.to_int_unchecked() };
But as always, you should only use this method as a last resort. Just like with array access, the compiler can often optimize the checks away, making the safe and unsafe versions equivalent when the compiler can prove it.
I believe for array access you can elide the bounds checking with an assert like `assert!(arr.len() >= 255)`:

    let mut sum = 0;
    for i in 0..255 {
        sum += arr[i]; // this access doesn't emit bounds checks in the compiled code
    }

I'm guessing it would work like this with casts? `assert!(x <= 255. && x >= 0.);`

    let y: u8 = x as u8; // no check

Here's an example of how, when it can detect it, it does the right thing: https://godbolt.org/z/hPqf69
I am not an expert in these hints, maybe someone else knows!
Something so lossy and ill-conceived should not be a two-letter operator.
I would say that my personal take of the temperature is "vaguely pro but not a slam dunk", at least from the opinions I've seen. Only one way to find out.
The numeric casts are the easy part of this.
Rust has great libraries to make life easy, eg https://docs.rs/fixed/1.0.0/fixed/ (Note: I haven't benchmarked the 4 or 5+ fixed precision libraries Rust offers to see which is best.)
That's not really true. It's more pulling from a database or csvs as backtesting is the most important part, which is also why the person you were replying to was asking about backtesting specifically.
Most firms roll out their own programming language, because before Rust existed there wasn't really a language that was a good choice for algo trading. Algo trading needs a few things:
1) It needs a financial number data type. That is, base 10 precision. Floats and doubles will not cut it when dealing with money.
2) The language needs to not implicitly do type conversions, so your types do not accidentally get converted to doubles.
3) You want provability. That is, you want guarantees that your program will run exactly the way you intended or you could lose a lot of money.
4) You hopefully want it to go fast, or backtesting could take ages. Historically super computers have been preferred, but that is probably not the case today. (This isn't even for HFT, just scalping and swing trading.)
Most in-house languages in the industry follow the functional programming paradigm, because it allows guarantees. What you see is what you get. It allows one to write in a more mathematical way.
Today Rust takes the cake, as it is the only mainstream language that meets all the criteria, despite not being a functional programming paradigm language.
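Point (1) above is easy to demonstrate: binary floats cannot represent most decimal fractions exactly, which is why money code conventionally keeps amounts in integer minor units instead. A sketch:

```rust
fn main() {
    // Neither 0.1 nor 0.2 is exactly representable in binary, so the sum drifts:
    assert_ne!(0.1_f64 + 0.2, 0.3);

    // A common workaround: track money in integer cents.
    let price_cents: i64 = 10; // $0.10
    let fee_cents: i64 = 20;   // $0.20
    assert_eq!(price_cents + fee_cents, 30); // exactly $0.30
}
```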
well done rust team
Can it "often" solve the halting problem as well?
The hope that this kind of optimization will happen sounds a bit fanciful for any non-trivial part of a program.
I ported a small C function to Rust recently that involved some looping, and all of the bounds checking was completely eliminated, even once I took the line-by-line port and turned it into a slightly higher level one with slices and iterators instead of pointer + length.
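As an illustration of that pattern (a sketch, not the actual ported function), an indexed loop over a slice and its iterator equivalent:

```rust
// Indexed loop: each arr[i] is bounds-checked unless the optimizer can
// prove i is in range. Here i < arr.len() is provable, so the check is
// typically elided in release builds.
fn sum_indexed(arr: &[u32]) -> u32 {
    let mut sum = 0;
    for i in 0..arr.len() {
        sum += arr[i];
    }
    sum
}

// Iterator style: no indices at all, so there are no checks to elide.
fn sum_iter(arr: &[u32]) -> u32 {
    arr.iter().sum()
}

fn main() {
    let data = [1, 2, 3, 4];
    assert_eq!(sum_indexed(&data), 10);
    assert_eq!(sum_iter(&data), 10);
}
```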