I've been writing a metaverse client in Rust for almost five years now, which is too long.[1] Someone else set out to do something similar in C#/Unity and had something going in less than two years. This is discouraging.
Ecosystem problems:
The Rust 3D game dev user base is tiny.
Nobody ever wrote an AAA title in Rust. Nobody has really pushed the performance issues. I find myself having to break too much new ground, trying to get things to work that others doing first-person shooters should have solved years ago.
The lower levels are buggy and have a lot of churn
The stack I use is Rend3/Egui/Winit/Wgpu/Vulkan. Except for Vulkan, they've all had hard-to-find bugs. There just aren't enough users to wring out the bugs.
Also, too many different crates want to own the event loop.
These crates also get "refactored" every few months, with breaking API changes, which breaks the stack for months at a time until everyone gets back in sync.
Language problems:
Back-references are difficult
"A owns B, and B can find A" is a frequently needed pattern, and one that's hard to do in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.
There are three common workarounds:
- Architect the data structures so that you don't need back-references. This is a clean solution but is hard. Sometimes it won't work at all.
- Put everything in a Vec and use indices as references. This has most of the problems of raw pointers, except that you can't get memory corruption outside the Vec. You lose most of Rust's safety. When I've had to chase down difficult bugs in crates written by others, three times it's been due to errors in this workaround.
- Use "unsafe". Usually bad. On the two occasions I've had to use a debugger on Rust code, it's been because someone used "unsafe" and botched it.
Rust needs a coherent way to do single ownership with back references. I've made some proposals on this, but they require much more checking machinery at compile time and better design. Basic concept: it works like "Rc::Weak" and "upgrade", with compile-time checking for overlapping upgrade scopes to ensure no "upgrade" ever fails.
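A minimal sketch of the run-time version of that pattern, using `Rc`/`Weak` (the `Node` type and helper names here are illustrative): the parent owns its children, each child holds a `Weak` back-reference, and `upgrade` is the call that can fail at run time and that the proposal would check at compile time.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Parent owns children via Rc; children hold a Weak back-reference,
// keeping the ownership graph acyclic so nothing leaks.
struct Node {
    name: String,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn make_child(parent: &Rc<Node>, name: &str) -> Rc<Node> {
    let child = Rc::new(Node {
        name: name.to_string(),
        parent: RefCell::new(Rc::downgrade(parent)),
        children: RefCell::new(Vec::new()),
    });
    parent.children.borrow_mut().push(Rc::clone(&child));
    child
}

fn parent_name(node: &Node) -> Option<String> {
    // `upgrade` returns None if the parent has been dropped; this is the
    // run-time check the compile-time proposal wants to eliminate.
    node.parent.borrow().upgrade().map(|p| p.name.clone())
}

fn main() {
    let root = Rc::new(Node {
        name: "root".to_string(),
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    });
    let child = make_child(&root, "child");
    assert_eq!(parent_name(&child).as_deref(), Some("root"));
}
```

Note the boilerplate: two `RefCell`s, a `downgrade`, and an `upgrade` that can fail, all for a pattern a doubly linked structure in C expresses with one pointer field.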
"Is-a" relationships are difficult
Rust traits are not objects. Traits cannot have associated data. Nor are they a good mechanism for constructing object hierarchies. People keep trying to do that, though, and the results are ugly.
I was quite intrigued by the borrow checker, and set about learning about it. While D cannot be retrofitted with a mandatory borrow checker, it can be enhanced with an opt-in one. A borrow checker has nothing tying it to Rust's syntax, so it should work.
So I implemented a borrow checker for D, and it is enabled by adding the `@live` annotation for a function, which turns on the borrow checker for that function. There are no syntax or semantic changes to the language, other than laying on a borrow checker.
Yes, it does data flow analysis, has semantic scopes, yup. It issues errors in the right places, although the error messages are rather basic.
In my personal coding style, I have gravitated towards following the borrow checker rules. I like it. But it doesn't work for everything.
It reminds me of OOP. OOP was sold as the answer to every programming problem. Many OOP languages appeared. But, eventually, things died down and OOP became just another tool in the toolbox. D and C++ support OOP, too.
I predict that over time the borrow checker will become just another tool in the toolbox, and it'll be used for algorithms and data structures where it makes sense, and other methods will be used where it doesn't.
I've been around to see a lot of fashions in programming, which is most likely why D is a bit of a polyglot language :-/
I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is to stop doing pointer arithmetic (use arrays and refs instead).
The language can nail that down for you (D does). What's left are memory allocation errors. Garbage collection fixes that.
At least until we get AI driven systems good enough to generate straight binaries.
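For concreteness, here is how those three mitigations surface in Rust; D enforces the same properties with its own syntax. A small sketch:

```rust
fn main() {
    let data = vec![10, 20, 30];

    // #1: array bounds checking -- an out-of-range index panics instead of
    // silently reading adjacent memory; `get` makes the check explicit.
    assert_eq!(data.get(2), Some(&30));
    assert_eq!(data.get(3), None); // `data[3]` would panic here

    // #2: guaranteed initialization -- reading a variable before assignment
    // is a compile error, so every read sees a defined value.
    let x: i32;
    x = 41;
    assert_eq!(x + 1, 42);

    // #3: no pointer arithmetic -- iterate with slices/iterators instead
    // of walking a raw pointer through the buffer.
    let sum: i32 = data.iter().sum();
    assert_eq!(sum, 60);
}
```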
Rust is to be celebrated for bringing affine types into the mainstream, but it doesn't need to be the only way; productivity and performance can coexist in the same language.
The way Ada, D, Swift, Chapel, Linear Haskell, and OCaml (with effects and modes) are being improved already shows the way forward.
Then there are the formal verification and dependent type languages, but those go even beyond Rust in what most mainstream developers are willing to learn, and the development experience is still quite rough.
The issue is the boundary between the two styles/idioms -- e.g. at the boundary between typed code and untyped code, you have either expensive runtime checks or unsoundness.
---
So I wonder if these styles of D are more like separate languages for different programs? Or are they integrated somehow?
Compared with GC, borrow checking affects every function signature
Compared with manual memory management, GC also affects every function signature.
IIRC the boundary between the standard library and programs was an issue -- i.e. does your stdlib use GC, and does your program use GC? There are 4 different combinations there
The problem is that GC is a global algorithm, i.e. heap integrity is a global property of a program, not a local one.
Likewise, type safety is a global property of a program
---
(good discussion of what programs are good for the borrow checking style -- stateless straight-line code seems to benefit most -- https://news.ycombinator.com/item?id=34410187)
For me Rust was amazing for writing things like concurrency code. But it slowed me down significantly in tasks I would do in, say, C# or even C++. It feels like the perfect language for game engines, compilers, low-level libraries... but I wasn't too happy writing more complex game code in it using Bevy.
And you make a good point, it's the same for OOP, which is amazing for e.g. writing plugins but when shoehorned into things it's not good at, it also kills my joy.
#4: safer unions/enums. I do hope D gets tagged unions/pattern matching sometime in the future. I know about std.sumtype, but that's nowhere close to what Rust offers.
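For readers who haven't used it: what Rust offers here is a tagged union (`enum` with payloads) plus exhaustive `match`, so the compiler rejects both reading a payload through the wrong tag and forgetting a variant. A small illustration:

```rust
use std::f64::consts::PI;

// A tagged union: each variant carries its own payload, and the tag is
// managed by the compiler rather than by convention.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => PI * radius * radius,
        Shape::Rect { w, h } => w * h,
        // Omitting a variant here is a compile error, not a runtime surprise.
    }
}

fn main() {
    assert_eq!(area(&Shape::Rect { w: 2.0, h: 3.0 }), 6.0);
    assert!((area(&Shape::Circle { radius: 1.0 }) - PI).abs() < 1e-9);
}
```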
D's implementation of a borrow checker is very intriguing in terms of possibilities, and in putting it back into the context of a tool rather than the "be-all, end-all".
> I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is to stop doing pointer arithmetic (use arrays and refs instead).
This speaks volumes from such an experienced and accomplished programmer.
One question that came to mind as a single-track-Rust-mind kind of person: in D generally or in your experience specifically, when you find that the borrow checker doesn't work for a data structure, what is the alternative memory management strategy that you choose usually? Is it garbage collection, or manual memory management without a borrow checker?
Cheers!
I think these are generally considered table stakes in a modern programming language? That's why people are/were excited by the borrow checker, as data races are the next prominent source of memory corruption, and one that is especially annoying to debug.
I've gone back and forth on this, myself.
I wrote a custom b-tree implementation in Rust for a project I've been working on. I use my own implementation because I need it to be an order-statistic tree, and I need internal run-length encoding. The original version of my b-tree works just like how you'd implement it in C. Each internal node / leaf is a raw allocation on the heap.
Because leaves need to point back up the tree, there's unsafe everywhere, and a lot of raw pointers. I ended up with separate Cursor and CursorMut structs which held different kinds of references to the tree itself. Trying to avoid duplicating code for those two cursor types added a lot of complex types and trait magic. The implementation works, and it's fast. But it's horrible to work with, and it never passed Miri's strict checks. Also, Rust has really bad syntax for interacting with raw pointers.
Recently I rewrote the b-tree to simply use a vec of internal nodes and a vec of leaves. References became array indexes (integers). The resulting code is completely safe Rust. It's significantly simpler to read and work with -- there's way less abstraction going on. I think it's about 40% less code. Benchmarks show it's about 25% faster than the raw pointer version. (I don't know why, but I suspect the reason is better cache locality.)
I think this is indeed peak rust.
It doesn't feel like it, but using an array-index style still preserves many of rust's memory safety guarantees because all array lookups are bounds checked. What it doesn't protect you from is use-after-free bugs.
Interestingly, I think this style would also be significantly more performant in GC languages like javascript and C#, because a single array-of-objects is much simpler for the garbage collector to keep track of than a graph of nodes & leaves which all reference one another. Food for thought!
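A minimal sketch of the index-based style described above (not the commenter's actual b-tree; a plain tree is used here for brevity). Nodes live in one `Vec`, and both child links and the back-reference to the parent are plain `usize` indices, so no `unsafe` or `Rc` is needed:

```rust
// All nodes live in one Vec; "references" are indices into it.
struct Node {
    value: i32,
    parent: Option<usize>, // back-reference is just an index -- no unsafe
    children: Vec<usize>,
}

struct Tree {
    nodes: Vec<Node>,
}

impl Tree {
    fn new(root_value: i32) -> Self {
        Tree {
            nodes: vec![Node { value: root_value, parent: None, children: Vec::new() }],
        }
    }

    fn add_child(&mut self, parent: usize, value: i32) -> usize {
        let id = self.nodes.len();
        self.nodes.push(Node { value, parent: Some(parent), children: Vec::new() });
        self.nodes[parent].children.push(id);
        id
    }

    fn parent_value(&self, id: usize) -> Option<i32> {
        // Every lookup is bounds checked, so a bad index panics rather
        // than corrupting memory -- but a *stale* index is not detected.
        self.nodes[id].parent.map(|p| self.nodes[p].value)
    }
}

fn main() {
    let mut tree = Tree::new(1);
    let child = tree.add_child(0, 2);
    assert_eq!(tree.parent_value(child), Some(1));
    assert_eq!(tree.parent_value(0), None);
}
```

The single contiguous `Vec` is also what gives the cache-locality benefit the benchmarks above hint at.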
The same is true if you try to make GUI applications in Rust. All the toolkits have lots of quirky bugs and broken features.
The barrier to contributing to toolkits is usually also pretty high too: most of them focus on supporting a variety of open source and proprietary platforms. If you want to improve on something which requires some API change, you need to understand the details of all the other platforms — you can't just make a change for a single one.
Ultimately, cross-platform toolkits always offer a lowest common denominator (or "the worst of all worlds"), so I think that this common focus in the Rust ecosystem of "make everything run everywhere" ends up being a burden for the ecosystem.
> Back-references are difficult
>
> A owns B, and B can find A, is a frequently needed pattern, and one that's hard to do in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.
When I code Rust, I'm always hesitant to use an Arc because it adds an overhead. But if I then go and code in Python, Java or C#, pretty much all objects have the overhead of an Arc. It's just implicit so we forget about it.
We really need to be more liberal in our usage of Arc and stop seeing it as "it has overhead". Any higher level language has the same overhead, it's just not declared explicitly.
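For concreteness, here is what explicitly reference-counted sharing looks like in Rust: every `Arc::clone` is a visible atomic increment, and the count drops back as clones go out of scope. A minimal sketch:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Shared, immutable configuration handed to several threads.
    let config = Arc::new(vec!["vsync", "fullscreen"]);

    let handles: Vec<_> = (0..4)
        .map(|_| {
            // The refcount bump is explicit at the clone site.
            let cfg = Arc::clone(&config);
            thread::spawn(move || cfg.len())
        })
        .collect();

    for h in handles {
        assert_eq!(h.join().unwrap(), 2);
    }

    // All per-thread clones have been dropped again.
    assert_eq!(Arc::strong_count(&config), 1);
}
```

Whether the cost comparison to managed languages holds is debated in the replies, but the ergonomic point stands: the bookkeeping is spelled out at every use site instead of being implicit.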
Python also has incomparably worse performance than Java or C#, both of which can do many object-based optimizations and optimize away their allocation.
Java, C# and Go don't use atomic reference counting and don't have such overhead.
> Migration - Bevy is young and changes quickly.
We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice. And the issues we had to deal with were runtime failures, not build time failures. It broke the large libraries we were using, like space_editor, until point releases and bug fixes could land. We ultimately decided to migrate to Three.js.
> The team decided to invest in an experiment. I would pick three core features and see how difficult they would be to implement in Unity.
This is exactly what we did! We feared a total migration, but we decided to see if we could implement the features in Javascript within three weeks. Turns out Three.js got us significantly farther than Bevy, much more rapidly.
I definitely sympathize with the frustration around the churn--I feel it too and regularly complain upstream--but I should mention that Bevy didn't really have anything production-quality for animation until I landed the animation graph in Bevy 0.15. So sticking with a compatible API wasn't really an option: if you don't have arbitrary blending between animations and opt-in additive blending then you can't really ship most 3D games.
I am dealing with similar issues in npm now, as someone who is touching Node dev again. The number of deprecations drives me nuts. Seems like I’m on a treadmill of updating APIs just to have the same functionality as before.
It’s not always possible to be so minimal, but I view every dependency as lugging around a huge lurking liability, so the benefit it brings had better far outweigh that big liability.
So far, I’ve only had one painful dependency upgrade in 5 years, and that was Tailwind 3-4. It wasn’t too painful, but it was painful enough to make me glad it’s not a regular occurrence.
I think when it comes to game dev, people fixate on the engine having an ECS and maybe don't pay enough attention to the other aspects of it being good for gamedev, like... being a very high level language that lets you express all the game logic (C# with coroutines is great at this, and remains a core strength of Unity; Lua is great at this; Rust is ... a low level systems language, lol).
People need to realise that having ECS architecture isn't the only thing you need to build games effectively. It's a nice way to work with your data but it's not the be-all and end-all.
But the real issue is that game devs do not know that the GNU toolchain (and the LLVM-based ones) defaults to building open-source-style software for ELF/Linux targets, and that there is more ABI-related work to do for game binaries on those platforms.
Individually managing the lifetime of every single item you allocate on the heap and fine-grained tracking of ownership of everything on both the heap and the stack makes a lot of sense to me for more typical "line of business" tools that have kind of random and unpredictable workloads that may or may not involve generating arbitrarily complex reference graphs.
But everything I've seen & read of best practices for game development, going all the way back to when I kept a heavily dogeared copy of Michael Abrash's Black Book close at hand while I made games for fun back in the days when you basically had to write your own 3D engine, tells me that's not what a game engine wants. What a game engine wants, if anything, is something more like an arena allocator. Because fine-grained per-item lifetime management is not where you want to be spending your innovation tokens when the reality is that you're juggling 500 megabyte lumps of data that all have functionally the same lifetime.
I also hear you on the winit/wgpu/egui breaking changes. I appreciate that the ecosystem is evolving, but keeping up is a pain. Especially when making them work together across versions.
* Simply check all array accesses and pointer dereferences at run time, and panic/throw an exception/etc. if we are out of bounds or doing something wrong.
* Guarantee at compile-time that we are always accessing valid memory, to prevent even those panics.
Rust makes a lot of effort to reach the second goal, but, since it gives you integers and arrays, it makes the problem fundamentally insoluble.
The memory it wants so hard to regulate access to is just an array, and a pointer is just an index.
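A short illustration of that point: a stored index passes the bounds check even after the slot it referred to has been reused, which is a use-after-free in all but name (no memory corruption, but silently wrong data):

```rust
fn main() {
    // Entities stored in a Vec; an entity "reference" is just an index.
    let mut entities = vec!["player", "enemy"];

    let enemy_id = 1;
    assert_eq!(entities[enemy_id], "enemy");

    // "Free" the enemy and reuse its slot for a new entity,
    // as a pool or slab allocator would.
    entities[enemy_id] = "powerup";

    // The stale id is still in bounds, so the bounds check passes --
    // the lookup silently yields the wrong entity.
    assert_eq!(entities[enemy_id], "powerup");
}
```

Generational indices (pairing each slot with a version counter) are the usual mitigation, at the cost of extra bookkeeping.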
Yes.
Three months ago, when the Rust graphics stack achieved sync, I wrote a congratulatory note.[1]
> Everybody is in sync! wgpu 24, egui 0.31, and winit 0.30 all play well together using the crates.io versions. No patch overrides! Thanks, everybody.
Wgpu 25 is now out, but the others are not in sync yet. Maybe this summer.

[1] https://www.reddit.com/r/rust_gamedev/comments/1iiu3mr/every...
This was a problem with early versions of Scala as well, exacerbated by the language and core libs shifting all the time. It got so difficult to keep things up to date with all the cross compatibility issues that the services written in it ended up stuck on archaic versions of old libraries. It was a hard lesson in if you're doing a non-hobby project, avoid languages and communities that behave like this until they've finally stabilized.
A fear I have with larger side projects is the notion that it could all be for nought, though I suppose that's easily mitigated by simply keeping side projects small, and iterative if necessary. Start with an appropriately sized MVP, et al.
This is clearly false. The Bevy performance improvements that I and the rest of the team landed in 0.16 speak for themselves [1]: 3x faster rendering on our test scenes and excellent performance compared to other popular engines. It may be true that little work is being done on rend3, but please don't claim that there isn't work being done in other parts of the ecosystem.
...although the fact that a 3x speed improvement was available kind of proves their point, even if it may be slightly out of date.
I think you should think less like Java/C# and more like a database.
If you have a Comment object that has parent object, you need to store the parent as a 'reference', because you can't put the entire parent.
So I'll probably use Box here to refer to the parent
But in that case doesn't the garbage collector ruin the experience for the user? Because that's the argument I always hear in favor of Rust.
Even without the incremental GC it's manageable, and it's just part of optimising the game. It depends on the game, but you can often get down to 0 allocations per frame by making use of pooling and no-alloc APIs in the engine.
You also have the tools to pause GC so if you're down to a low amount of allocation you can just disable the GC during latency sensitive gameplay and re-enable and collect on loading/pause or other blocking screens.
Obviously it's more work than not having to deal with these issues, but for game developers it's probably a more familiar topic than working with the borrow checker, and critically it allows for quicker iteration and prototyping.
Finding the fun and time to market are the top priorities in games development.
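The pooling technique mentioned above, sketched here in Rust rather than C# (the `Particle` type and pool API are illustrative): preallocate once, then recycle slots so the per-frame hot path never touches the allocator:

```rust
// A fixed-capacity object pool: all particles are allocated up front,
// and spawn/kill only flip an `alive` flag and overwrite fields.
#[derive(Clone)]
struct Particle {
    alive: bool,
    x: f32,
    y: f32,
}

struct Pool {
    particles: Vec<Particle>,
}

impl Pool {
    fn with_capacity(n: usize) -> Self {
        Pool { particles: vec![Particle { alive: false, x: 0.0, y: 0.0 }; n] }
    }

    // Reuse a dead slot instead of allocating; None means the pool is
    // full, so the caller drops the effect rather than growing mid-frame.
    fn spawn(&mut self, x: f32, y: f32) -> Option<usize> {
        let idx = self.particles.iter().position(|p| !p.alive)?;
        self.particles[idx] = Particle { alive: true, x, y };
        Some(idx)
    }

    fn kill(&mut self, idx: usize) {
        self.particles[idx].alive = false;
    }

    fn live_count(&self) -> usize {
        self.particles.iter().filter(|p| p.alive).count()
    }
}

fn main() {
    let mut pool = Pool::with_capacity(2);
    let a = pool.spawn(1.0, 2.0).unwrap();
    let _b = pool.spawn(3.0, 4.0).unwrap();
    assert_eq!(pool.spawn(5.0, 6.0), None); // full: no allocation, no growth
    pool.kill(a);
    assert!(pool.spawn(5.0, 6.0).is_some()); // slot reused
    assert_eq!(pool.live_count(), 2);
}
```

In a GC language the payoff is fewer collections; in Rust the same pattern buys predictable frame times by keeping the allocator out of the loop.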
That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")
> That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")
But using Bevy isn't writing your own game engine. Bevy is 400k lines of code that does quite a lot. Using Bevy right now is more like taking a game engine and filling in some missing bits. While this is significantly more effort than using Unity, it's an order of magnitude less work than writing your own game engine from scratch.
For the 4 people on HN not aware of it, this is a riff on Greenspun's tenth rule:
> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
I'm suspicious, though, that you could probably get away with literally just using an in-memory DuckDB to store your game state and get most of the performance/modeling value while also getting a more powerful/robust query engine, especially for turn-based games. I'm also not sure that Bevy's encoding of queries into the type system is all that sane, as opposed to something like query building with LINQ; but I think it's how they get to resolve the system dependency graph for parallelization.
Generally, I've seen the exact opposite. People who code their own engines tend to get sucked into the engine and forget that they're supposed to be shipping a game. (I say this as someone who has coded their own engine, multiple times, and ended up not shipping a game--though I had a lot of fun working on the engine.)
The problem is that the fun, cool parts about building your own game engine are vastly outnumbered by the boring parts: supporting level and save data loading/storage, content pipelines, supporting multiple input devices and things like someone plugging in an XBox controller while the game is running and switching all the input symbols to the new input device in real time, supporting various display resolutions and supporting people plugging in new displays while the game is running, and writing something that works on PC/mobile/Switch(2)/XBox/Playstation... all solved problems, none of which are particularly intellectually stimulating to solve correctly.
If someone's finances depend on shipping a game that makes money, there's really no question that you should use Unity or Unreal. Maybe Godot but even that's a stretch. There's a small handful of indie custom game engine success stories, including some of my favorites like The Witness and Axiom Verge, but those are exceptions rather than the rule. And Axiom Verge notably had to be deeply reworked to get a Switch release, because it's built on MonoGame.
What really drags you down in games is iteration speed. It can be fun making your own game engine at first but after awhile you just want the damn thing to work so you can try out new ideas.
But for the vast majority of projects, I believe that C++ is not the right language, meaning that Rust isn't, either.
I feel like many people choose Rust because is sounds like it's more efficient, a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not) or for C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project).
It's a bit like choosing Gentoo "because it's faster" (or worse, because it "sounds cool"). If that's the only reason, it's probably a bad choice (disclaimer: I use and love Gentoo).
The result was not statistically different in performance from my Java implementation. Each took the same amount of time to complete. This surprised me, so I made triply sure that I was using the right optimization settings.
Lesson learned: Java is easy to get started with out of the box, memory safe, battle tested, and the powerful JIT means that if warmup times are a negligible factor in your usage patterns your Java code can later be optimized to be equivalent in performance to a Rust implementation.
I was surprised that the heaviest one (a lot of float math) run about the same speed in JS vs C++ -> x64. The code was several nested for loops manipulating a buffer and using only local-scoped variables and built-in Math library functions (like sqrt) with no JS objects/arrays besides the buffer. So the code of both implementations was actually very similar.
The C++ -> WASM version of that one benchmark was actually significantly slower than both the JS and C++ -> x64 version (again, a few years ago, I imagine it got better now).
Most compilers are really good at optimizing code if you don't use the weird "productivity features" of your higher level languages. The main difference of using lower level languages is that not being allowed to use those productivity features prevents you from accidentally tanking performance without noticing.
I still hope to see the day where a language could have multiple "running modes" where you can make an individual module/function compile with a different feature-set for guaranteeing higher performance. The closest thing we have to this today is Zig using custom allocators (where opting out of receiving an allocator means no heap allocations are guaranteed for the rest of the stack call) and @setRuntimeSafety(false) which disables runtime safety checks (when using ReleseSafe compilation target) for a single scope.
You're no longer writing idiomatic Java at this point -- probably with zero object-oriented programming. So you might as well write it in Rust from the get-go.
For all my personal projects, I use a mix of Haskell and Rust, which I find covers 99% of the product domains I work in.
Ultra-low level (FPGA gateware): Haskell. The Clash compiler backend lets you compile (non-recursive) Haskell code directly to FPGA. I use this for audio codecs, IO expanders, and other gateware stuff.
Very low-level (MMUless microcontroller hard-realtime) to medium-level (graphics code, audio code): Rust dominates here
High-level (have an MMU, OS, and desktop levels of RAM, not sensitive to ~0.1ms GC pauses): Haskell becomes a lot easier to productively crank out "business logic" without worrying about memory management. If you need to specify high-level logic, implement a web server, etc. it's more productive than Rust for that type of thing.
Both languages have a lot of conceptual overlap (ADTs, constrained parametric types, etc.), so being familiar with one provides some degree of cross-training for the other.
Another question is about Clash. Your description sounds like the HLS (high level synthesis) approach. But I thought that Clash used a Haskell -based DSL, making it a true HDL. Could you clarify this? Thanks!
I don't understand this argument, which I've also seen used against C#, quite frequently. When a language offers new features, you're not forced to use them. You generally don't even need to learn them if you don't want to. I do think some restrictions in languages can be highly beneficial, like strong typing, but the difference is that in a weakly typed language that 'feature' is forced upon you, whereas a random new feature in C++ or C# is nearly always backwards compatible and opt-in only.
For instance, to take a dated example - consider move semantics in C++. If you never used it anywhere at all, you'd have 0 problems. But once you do, you get lots of neat things for free. And for these sort of features, I see no reason to ever oppose their endless introduction unless such starts to imperil the integrity/performance of the compiler, but that clearly is not happening.
The same applies to having to deal with old features that have been replaced by modern ways: old codebases don't get magically rewritten, and someone has to understand both the modern and the old ways.
Likewise, I am not a big fan of C and Go, as is visible in my comment history, yet I know them well enough: in theory I am not forced to use them, but in practice there are business contexts where I do have to use them.
Sure, you don't have to use them, but you have to understand them when used in libraries you depend on. And in my experience in an environment of C++ developers, many times you end up having some colleagues who are very vocal about how you should love the language and use all the new features. Not that this wouldn't happen in Java or Kotlin, but the fact is that new features in those languages actually improve the experience with the language.
The OP is doing game development. It’s possible to write a performant game in Java but you end up fighting the garbage collector the whole way and can’t use much library code because it’s just not written for predictable performance.
This said, they moved to Unity, which is C#, which is garbage collected, right?
But if you are after performance, how do you do the following in Java?
- Build an AoS (array of structs) so that memory access is linear with respect to the cache.
- Prefetch.
- Use things like _mm_stream_ps() to tell the CPU the cache line you're writing to doesn't need to be fetched.
- Share a buffer of memory between processes by atomically incrementing a head pointer.
I'm pretty sure you could build an indie game without low-level C++, but there is a reason that commercial gamedev is typically C++.
Had Notch thought too much about which language to use, maybe he would still be trying to launch a game today.
Sure, and that's kind of my point. There are a few use-cases where C++ is actually needed, and for those cases, Rust (the language) is a good alternative if it's possible to use it.
But even for gamedev, the article here says that they moved to Unity. The core of Unity is apparently C++, but users of Unity code in C#. Which kind of proves my point: outside of that core that actually needs C++, it doesn't matter much. And the vast majority of software development is done outside of those core use-cases, meaning that the vast majority of developers do not need Rust.
But if you want to do a difficult and complicated thing, then Rust is going to raise the guard rails. Your program won't even compile if it's unsafe. It won't let you make a buggy app. So now you need to back up and decide if you want it to be easy, or you want it to be correct.
Yes, Rust is hard. But it doesn't have to be if you don't want.
This just isn’t a problem in other languages I’ve used, which granted aren’t as safe.
I love Rust. But saying it’s only hard if you are doing hard things is an oversimplification.
The whole point of Rust is to bring memory safety with zero cost abstraction. It's essentially bringing memory safety to the use-cases that require C/C++. If you don't require that, then a whole world of modern languages becomes available :-).
If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.
What is your basis for this claim? C and C++ are both built on essentially the same memory and execution model. There is a significant set of programs that are valid C and C++ both -- surely you're not suggesting that merely compiling them as C++ will make them faster?
There's basically no performance technique available in C++ that is not also available in C. I don't think it's meaningful to call one faster than the other.
In certain cases, sure - inlining potential is far greater in C++ than in C.
For idiomatic C++ code that doesn't do any special inlining, probably not.
IOW, you can rework fairly readable C++ code to be much faster by making an unreadable mess of it. You can do that for any language (C included).
But what we are usually talking about when comparing runtime performance in production code is the idiomatic code, because that's how we wrote it. We didn't write our code to resemble the programs from the language benchmark game.
Citation needed.
While I do not doubt some C++ code uses intrusive data structures, I doubt very much of it does. Meanwhile, C code using <sys/queue.h> uses intrusive lists as if they were second nature. C code using <sys/tree.h> from libbsd uses intrusive trees as if they were second nature. There is also the intrusive AVL trees from libuutil on systems that use ZFS and there are plenty of other options for such trees, as they are the default way of doing things in C. In any case, you see these intrusive data structures used all over C code and every time one is used, it is a performance win over the idiomatic C++ way of doing things, since it skips an allocation that C++ would otherwise do.
The use of intrusive data structures also can speed up operations on data structures in ways that are simply not possible with idiomatic C++. If you place the node and key in the same cache line, you can get two memory fetches for the price of one when sorting and searching. You might even see decent performance even if they are not in the same cache line, since the hardware prefetcher can predict the second memory access when the key and node are in the same object, while the extra memory access to access a key in a C++ STL data structure is unpredictable because it goes to an entirely different place in memory.
You could say if you have the C++ STL allocate the objects, you can avoid this, but you can only do that for 1 data structure. If you want the object to be in multiple data structures (which is extremely common in C code that I have seen), you are back to inefficient search/traversal. Your object lifetime also becomes tied to that data structure, so you must be certain in advance that you will never want to use it outside of that data structure or else you must do at a minimum, another memory allocation and some copies, that are completely unnecessary in C.
Exception handling in C++ also can silently kill performance if you have many exceptions thrown and the code handles it without saying a thing. By not having exception handling, C code avoids this pitfall.
That's interesting, did ChatGPT tell you this?
In my experience, most people who don't want a JVM language "because it is slow" tend to take this as a principle, and when you ask why, their first answer is "because it's interpreted". I would say they are stuck in the 90s, but probably they just don't know and repeat something they have heard.
Similar to someone who would say "I use Gentoo because Ubuntu sucks: it is super slow". I have many reasons to like Gentoo better than Ubuntu as my main distro, but speed isn't one in almost all cases.
Writing web service backends is one domain where Rust absolutely kicks ass. I would choose Rust/(Actix or Axum) over Go or Flask any day. The database story is a little rough around the edges, but it's getting better and SQLx is good enough for me.
edit: The downvoters are missing out.
I am absolutely convinced I can find success story of web backends built with all those languages.
Conversely and ironically, this is why I love Go. The language itself is so boring and often ugly, but it just gets out of the way and has the best in class tooling. The worst part is having seen the promised land of eg Rust enums, and not having them in other langs.
Feeling passionate about a programming language is generally bad for the products made with that language.
That is the one of the first things my colleagues told me after trying Rust for a few weeks: a laaaarge number of crates under 1.0, and so many abandoned crates, still published in crates.io. Some of those have even reported CVEs due to heavy `unsafe` usage for... nothing.
I love Rust, but I have the feeling that the language (and its community) lost the point since the release of the 2018 edition.
I also don’t want to use a language with questionable hireability.
They've effectively dropped it a couple times in the past, and while they're currently putting effort in, the company as a whole does not seem to care about stuff like this beyond brief bursts of attention to try to win back developer mindshare, before going back to abandonment. It's what Microsoft is rather well known for.
Cross compilation, package manager and associated infrastructure, async io (epoll, io_uring etc), platform support, runtime requirements, FFI support, language server, etc.
Are a majority of these things available with first party (or best in class) integrated tooling that are trivial to set up on all big three desktop platforms?
For instance, can I compile an F# lib to an iOS framework, ideally with automatically generated bindings for C, C++ or Objective C? Can I use private repo (ie github) urls with automatic overrides while pulling deps?
Generally, the answer to these questions for what we might call ”niche” languages is ”there is a GitHub project with 15 stars, last updated 3 years ago, that maybe solves that problem”.
There are tons of amazing languages (or at the very least, underappreciated language features) that didn’t ”make it” because of these boring reasons.
My entire point is that the older and grumpier I get, the less the language itself matters. Sure, I hate it when my favorite elegant feature is missing, but at the end of the day it’s easy to work around. IMO the navel gazing and bikeshedding around languages is vastly overhyped in software engineering.
So you mean, Rust is more of an intellectual playground, than an actual workbench? I'm curious how high the churn rate of packages in other languages is, like python or ruby (let's not talk about javascript). Could this be the result of rust being still rather young and moving fast?
> Conversely and ironically, this is why I love Go.
Is Go still forcing hard wired paths in $HOME for compiling, or what was it again?
Both. IMO rust shines as a C++ replacement, ie low level high performance. But officially its general purpose and nobody will admit that it’s a bad tool for high level jobs, it’s awful for prototyping. I say this as someone who loves many aspects of rust.
> Could this be the result of rust being still rather young and moving fast?
My hunch says it’s something different. Look at the people and their motivations. I’ve never seen such a distinct fan club in programming before. It comes with a lot of passion and extreme talent, but I don’t think it’s a coincidence that governance has been a shitshow and that there are 4 different crates solving the same problem where maintainers couldn’t agree on something minor. It makes sense, if aesthetics is a big factor.
> Is Go still forcing hard wired paths in $HOME for compiling, or what was it again?
Nothing I’ve noticed. Are you talking about GOPATH hell, from back in the day?
The only tooling I use personally outside of the main CLI is building iOS/Android static libraries (gomobile). It’s still first party, but not in the go command.
Rust gamedev is the Wild West, and frontier development incurs the frontier tax. You have to put a lot of work into making an abstraction, even before you know if it’s the right fit.
Other “platforms” have the benefit of decades more work sunk into finding and maintaining the right abstractions. Add to that the fact that Rust is an ML in sheep's clothing, and that games and UI in FP have never been a solved problem (or had much investment), and it's no wonder Rust isn't ready. We haven't even agreed on the best solutions to many of these problems in FP generally, let alone in Rust specifically!
Anyway, long story short, it takes a very special person to work on that frontier, and shipping isn’t their main concern.
There are so many QoL things which would make Rust better for gamedev without revamping the language. Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev. But that's a really hard sell (and might be harder to implement than I imagine.)
GHC has an -fdefer-type-errors option that lets you compile and run this code:
a :: Int
a = 'a'
main = print "b"
Which obviously doesn't typecheck, since 'a' is not an Int, but will run just fine because the value of `a` is never observed by this program. (If it were observed, -fdefer-type-errors guarantees that you get a runtime panic when it happens.) This basically gives you the no-types Python experience when iterating; then you clean it all up when you're done.

This would be even better in cases where it can be automatically fixed. Just like how `cargo clippy --fix` will automatically fix lint errors whenever it can, there's no reason it couldn't also add explicit coercions of numeric types for you.
I’d go even further and say I wish my whole development stack had a switch I can use to say “I’m not done iterating on this idea yet, cool it with the warnings.”
Unused imports, I’m looking at you… stop bitching that I’m not using this import line simply because I commented out the line that uses it in order to test something.
Stop complaining about dead code just because I haven’t finished wiring it up yet, I just want to unit test it before I go that far.
Stop complaining about unreachable code because I put a quick early return line in this function so that I could mock it to chase down this other bug. I’ll get around to fixing it later, I’m trying to think!
In rust I can go to lib.rs somewhere and #![allow(unused_imports,dead_code,etc)] and then remember to drop it by the time I get the branch ready for review, but that’s more cumbersome than it ought to be. My whole IDE/build/other tooling should have a universal understanding of “this is a work in progress please let me express my thoughts with minimal obstructions” mode.
In my book, Rust is good at moving runtime-risk to compile-time pain and effort. For the space of C-Code running nuclear reactors, robots and missiles, that's a good tradeoff.
For the space of making an enemy move the other direction of the player in 80% of the cases, except for that story choice, and also inverted and spawning impossible enemies a dozen times if you killed that cute enemy over yonder, and.... and the worst case is a crash of a game and a revert to a save at level start.... less so.
And these are very regular requirements in a game, tbh.
And a lot of _very_silly_physics_exploits_ are safely typed float interactions going entirely nuts, btw. Type safety doesn't help there.
I don't think your experience with Amethyst merits your conclusion about the state of gamedev in Rust, especially given Amethyst's own take on Bevy [1, 2].
1: https://web.archive.org/web/20220719130541mp_/https://commun...
2: https://web.archive.org/web/20240202140023/https://amethyst....
C# is stricter about float vs. double for literals than Rust is, and the default in C# (double) is the opposite of the one you want for gamedev. That hasn't stopped Unity from gaining enormous market share. I don't think this is remotely near the top issue.
I'm usually working with positive values, and almost always with values within the range of integers f32 can safely represent (+- 16777216.0).
I want to be able to write `draw(x, y)` instead of `draw(x as u32, y as u32)`. I want to write "3" instead of "3.0". I want to stop writing "as".
It sounds silly, but it's enough to kill that gamedev flow loop. I'd love if the Rust compiler could (optionally) do that work for me.
Thing is, he didn't make the game in C. He built his game engine in C, and the game itself in Lua. The game engine is specific to this game, but there's a very clear separation where the engine ends and the game starts. This has also enabled amazing modding capabilities, since mods can do everything the game itself can do. Yes they need to use an embedded scripting language, but the whole game is built with that embedded scripting language so it has APIs to do anything you need.
For those who are curious - the game is 'Sapiens' on Steam: https://store.steampowered.com/app/1060230/Sapiens/
They're distributing their game on Steam too so Linux support is next to free via Proton.
Still, given the nature of what my project is (APIs and basic financial stuff), I think it was the right choice. I still plan to write about 5% of the project in Rust and call it from Go, if required, as there is a piece of code that simply cannot be fast enough, but I estimate for 95% of the project Go will be more than fast enough.
Obligatory ”remember to `go run -race`”; that thing is a lifesaver. I never run into difficult data races or deadlocks, and I'm regularly doing things like starting multiple threads that race with cancellation signals, extending timeouts, etc. It's by far my favorite concurrency model.
And chances are that it won't be required.
I too have a hobby-level interest in Rust, but doing things in Rust is, in my experience, almost always just harder. I mean no slight to the language, but this has universally been my experience.
Perhaps someday there will be a comparable game engine written in Rust, but it would probably take a major commercial sponsor to make it happen.
Many of the negatives in the post are positives to me.
> Each update brought with it incredible features, but also a substantial amount of API thrash.
This is highly annoying, no doubt, but the API now is just so much better than it used to be. Keeping backwards compatibility is valuable once a product is mature, but like how you need to be able to iterate on your game, game engine developers need to be able to iterate on their engine. I admit that this is a debuff to the experience of using Bevy, but it also means that the API can actually get better (unlike Unity which is filled with historical baggage, like the Text component).
I think all posts I have seen regarding migrating away from writing a game in Rust were using Bevy, which is interesting. I do think Bevy is awesome and great, but it's a complex project.
I have worked as a professional dev at game studios many would recognize. Those studios which used Unity didn't even upgrade Unity versions often unless a specific breaking bug got fixed. Same for those studios which used DirectX. Often a game shipped with a version of the underlying tech that was hard locked to something several years old.
The other points in the article are all valid, but the two factors above held the greatest weight as to why the project needed to switch (and the article says so -- it was an API change in Bevy that was "the straw that broke the camel's back").
This is the biggest reason I push for C#/.NET in "serious business" where concerns like auditing and compliance are non-negotiable aspects of the software engineering process. Virtually all of the batteries are included already.
For example, which 3rd party vendors we use to build products is something that customers in sectors like banking care deeply about. No one is going to install your SaaS product inside their sacred walled garden if it depends on parties they don't already trust or can't easily vet themselves. Microsoft is a party that virtually everyone can get on board with in these contexts. No one has to jump through a bunch of hoops to explain why the bank should trust System or Microsoft namespaces. Having ~everything you need already included makes it an obvious choice if you are serious about approaching highly sensitive customers.
> The maturity and vast amount of stable historical data for C# and the Unity API mean that tools like Gemini consistently provide highly relevant guidance.
This is also a highly underrated aspect of C#: its surface area has remained largely stable since v1, with few breaking changes (though there are some valid complaints about the keyword bloat that results). So the historical volume of extremely well-written documentation is a boon for LLMs. While you may get outdated patterns (e.g. not using the latest language features for terseness), you will not likely get non-working code, because of the large and stable set of first-party dependencies (whereas outdated 3rd-party dependencies in Node often lead to breaking incompatibilities with the latest packages on NPM).

> It was also a huge boost to his confidence and contributed to a new feeling of momentum. I should point out that Blake had never written C# before.
Often overlooked with C# is its killer feature: productivity. When you get a "batteries included" framework and those "batteries" are quite good, you can be productive. Having a centralized repository for first-party documentation is also a huge boon. When you have an extremely broad, well-written, well-organized standard library and first-party libraries, it's very easy to ramp up productivity versus finding different 3rd-party packages to fill gaps. Entity Framework, for example, feels miles better to me than Prisma, TypeORM, Drizzle, or any option on Node.js. Having first-party rate-limiting libraries OOB for web APIs is great for productivity. Same for having first-party OpenAPI schema generators. Less time wasted sifting through half-baked solutions.
> Code size shrank substantially, massively improving maintainability. As far as I can tell, most of this savings was just in the elimination of ECS boilerplate.
C# has three "super powers" to reduce code bloat: really rich runtime reflection, first-class expression trees, and Roslyn source generators that generate code on the fly. Used correctly, these can remove a lot of boilerplate and "templatey" code.

---
I make the case that many teams that outgrow JS/TS on Node.js should look to C# because of its congruence to TS[0] before Go, Java, Kotlin, and certainly not Rust.
Hot reloading! Iteration!
A friend of mine wrote an article 25+ years ago about using C++ based scripting (compiles to C++). My friend is super smart engineer, but I don't think he was thinking of those poor scripters that would have to wait on iteration times. Granted 25 years ago the teams were small, but nowadays the amount of scripters you would have on AAA game is probably dozen if not two or three dozen and even more!
Imagine all of them waiting on compile... Or trying to deal with correctness, etc.
From a dev perspective, I think, Rust and Bevy are the right direction, but after reading this account, Bevy probably isn't there yet.
For a long time, Unity games felt sluggish and bloated, but somehow they got that fixed. I played some games lately that run pretty smoothly on decade old hardware.
I had two groups of students (complete Rust beginners) ship a basic FPS and a tower defense as learning projects using Bevy, and their feedback was that they didn't fight the language at all.
The problem that remains is that as soon a you go from a toy game to an actual one, you'd realize that Bevy still has tons of work to do before it can be considered productive.
The problem is you make a deal with the devil. You end up shipping a binary full of phone-home spyware, and if you don't use Unity in the exact way the general license intends, they can and will try to force you into the more expensive industrial license.
However, the ease of actually shipping a game can't be matched.
Godot has a bunch of issues all over the place, a community more intent on self praise than actually building games. It's free and cool though.
I don't really enjoy Godot like I enjoy Unity , but I've been using Unity for over a decade. I might just need to get over it.
Similarly, anyone who has shipped a game in unreal will know that memory issues are absolutely rampant during development.
But, the cure rust presents to solve these for games is worse than the disease it seems. I don’t have a magic bullet either..
The more projects I do, the more time I find that I dedicate to just planning things up front. Sometimes it's fun to just open a game engine and start playing with it (I too have an unfair bias in this area, but towards Godot [https://godotengine.org/]), but if I ever want to build something to release, I start with a spreadsheet.
On the topic of rapid prototyping: most successful game engines I'm aware of hit this issue eventually. They solve it by dividing into infrastructure (implemented in your low-level language) and game logic / application logic / scripting (implemented in something far more flexible and usually interpreted; I've seen Lua used for this, Python, JavaScript, and I think Unity's C# also fits this category?).
For any engine that would have used C++ instead, I can't think of a good reason to not use Rust, but most games with an engine aren't written in 100% C++.
Bevy is in its early stages. I'm sure more Rust Game Engines will come up and make it easier. That said, Godot was great experience for me but doesn't run on mobile well for what I was making. I enjoy using Flutter Flame now (honestly different game engines for different genres or preference), but as Godot continues to get better, I personally would use Godot. Try Unity or Unreal as well if I just want to focus on making a game and less on engine quirks and bugs.
Gave up after 3 days for 3 reasons:
1. Refactoring and IDE tooling in general are still lightyears away from JetBrains tooling and a few astronomical units away from Visual Studio. Extract function barely works.
2. Crates with non-Rust dependencies are nearly impossible to debug as debuggers don't evaluate expressions. So, if you have a Rust wrapper for Ogg reader, you can't look at ogg_file.duration() in the debugger because that requires function evaluation.
3. In contrast to .NET and NuGet ecosystem, non-Rust dependencies typically don't ship with precompiled binaries, meaning you basically have to have fun getting the right C++ compilers, CMake, sometimes even external SDKs and manually setting up your environment variables to get them to build.
With these roadblocks I would never have gotten the "mature" project to the point where dealing with hard-to-debug concurrency issues and funky unforeseen errors became necessary.
Depending on your scenario, you may want either one or another. Shipping pre-compiled binaries carries its own risks and you are at the mercy of the library author making sure to include the one for your platform. I found wiring up MSBuild to be more painful than the way it is done in Rust with cc crate, often I would prefer for the package to also build its other-language components for my specific platform, with extra optimization flags I passed in.
But yes, in .NET it creates sort of an impedance mismatch since all the managed code assemblies you get from your dependencies are portable and debuggable, and if you want to publish an application for a specific new target, with those it just works, be it FreeBSD or WASM. At the same time, when it works - it's nicer than having to build everything from scratch.
How long ago was this and did you try JetBrains RustRover? While not quite as mature as some other JetBrains tools, I've found the latest version really quite good.
Java was my first hope. It was a bit safer than C++ but ultimately too verbose and the GC meant too much memory is wasted. Most games were very sensitive to memory use because consoles always had limited memory to keep costs down.
Next I spent years of side projects on Common Lisp based on Andy Gavin’s success there with Crash Bandicoot and more, showing it was possible to do. However, reports from the company were that it was hard to scale to more people and eventually a rewrite of the engine in C++ came.
I have explored Rust and Bevy. Bevy is bleeding edge and that’s okay, but Rust is not the right language. The focus on safety makes coding slow when you want it to be fast. The borrow checker frowns when you want to mutate things for speed.
In my opinion Zig is the most promising language for triple A game dev. If you are mid level stick to Godot and Unity, but if you want to build a fast, safe game engine then look at Zig first.
That said regarding both rapid gameplay mechanic iteration and modding - would that not generally be solved via a scripting language on top of the core engine? Or is Rust + Bevy not supposed to be engine-level development, and actually supposed to solve the gameplay development use-case too? This is very much not my area of expertise, I'm just genuinely curious.
I don't think Bevy has a built-in way to integrate with other languages like Godot does, it's probably too early in the project's life for that to be on the roadmap.
I feel most of the things mentioned (rapid prototyping, ease of use for new programmers, modability) would be more easily accomplished by embedding a Lua interpreter in the rust project.
Glad C# is working out for them though, but if anyone else finds themselves in this situaton in Rust, or C, C++, Zig, whatever - embedding lua might be something else to consider, that requires less re-writing.
I think the worst issue was the lack of a ready-made solution. Those 67k lines of Rust contain a good chunk of a game engine.
The second worst issue was that you targeted an unstable framework - I would have focused on a single version and shipped the entire game with it, no matter how good the goodies in the new version.
I know it's likely the last thing you want to do, but you might be in a great position to improve Bevy. I understand open sourcing it comes with IP challenges, but it would be good to find a champion with read access within Bevy to parse your code and come up with OSS packages (cleaned up with any specific game logic) based on the countless problems you must have solved in those extra 50k lines.
That's the approach I've been taking with a side project game for the very reason alone that the other contributors are not system programmers. I.e. a similar situation as the author had with his brother.
Rust was simply not an option -- or I would be the only one writing code. :]
And yeah, as others mentioned: Fyrox over Bevy if you have few (or one) Rust dev(s). It just seems Fyrox is not on the radar of many Rust people even. Maybe because Bevy just gets a lot more (press) coverage/enthusiasm/has more contributors?
I feel like this harkens to the general principle of being a software developer and not an "<insert-language-here>" developer.
Choose tools that expose you to more patterns and help to further develop your taste. Don't fixate on a particular syntax.
* They didn't select Rust as the best tool available to create a game, they decided to create a Rust project which happens to be a game
* When the objective and mental model of a solution is clear, the execution is trivial. I bet I could recreate a software which took me 3 months to develop in 3 days, if I just have to retype the solution instead of finding a solution. No matter which language
* They seem to struggle with the most trivial of tasks. Having to call out being able to utilize an A* library (an algorithm worth like 10 lines of code) or struggling with scripting (trivial with proven systems like lua) suggests a quite novice team
That being said, I'm glad for their journey:)
Rust will not:
* automatically make your program fast;
* eliminate memory leaks;
* eliminate deadlocks; or
* enforce your logical invariants for you.
Sometimes people mention that independent of performance and safety, Rust's pattern-matching and its traits system allow them to express logic in a clean way at least partially checked at compile time. And that's true! But other languages also have powerful type systems and expressive syntax, and these other languages don't pay the complexity penalty inherent in combining safety and manual memory management because they use automatic memory management instead --- and for the better, since the vast majority of programs out there don't need manual memory management.
I mean, sure, you can Arc<Box<Whatever>> many of your problems away, but at that point your global reference counting just becomes a crude form of manual garbage collection. You'd be better off with a finely tuned garbage collector instead --- one like Unity has (via the CLR and Mono).
And you're not really giving anything up this way either. If you have some compute kernel that's a bottleneck, thanks to easy FFIs these high-level languages have, you can just write that one bit of code in a lower-level language without bringing systems consideration to your whole program.
Languages like Go, JavaScript, C#, or Java are much better choices for this purpose. Rust is still best suited for scenarios where traditional systems languages excel, such as embedded systems or infrastructure software that needs to run for extended periods.
> Bevy is still in the early stages of development. Important features are missing. Documentation is sparse. A new version of Bevy containing breaking changes to the API is released approximately once every 3 months.
I would choose Bevy if and only if I would like to be heavily involved in the development of Bevy itself.
And never for anything that requires a steady foundation.
Programming language does not matter. Choose the right tool for job and be pragmatic.
https://news.ycombinator.com/item?id=43787012
In my personal opinion, a paradox of truly open-source projects (meaning community projects, not pseudo-open-source from commercial companies) is that development shows a tendency toward diversity. While this leads to more and more cool things appearing, it always needs to be balanced against sustainable development.
Commercial projects, at least, always have a clear goal: to sell. For that goal, they can hold off on doing really cool things, or they think about differentiated competition. Perhaps if the purpose were commercial, an editor would be the primary goal (let me know if this is already on the roadmap).
---
I don't think the language itself is the problem. The situation where you have to use mature solutions for efficiency is more common in games and apps.
For example, I've seen many people who have had to give up Bevy, Dioxus, and Tauri.
But I believe for servers, audio, CLI tools, and even agent systems, Rust is absolutely my first choice.
I've recently been rewriting Glicol (https://glicol.org) after 2 years. I start from embedded devices, switching to crates like Chumsky, and I feel the ecosystem has improved a lot compared to before.
So I still have 100% confidence in Rust.
Is there a Rust equivalent of openai-agents-sdk?
I've ported code between engines, and that makes my productivity feel very... leisurely.
Also, it's endearing that he builds things with his brother including that TF2 map that he linked from years ago.
here's a web demo
Is it normal for Rust ecosystem to suggest software with this level of maturity?
I love Rust. It’s not for shipping video games. No Tiny Glade doesn’t count.
Edit: don’t know why you’re downvoting. I love Rust. I use it at my job and look for ways to use it more. I’ve also shipped a lot of games. And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.
Also you’re strictly forbidden from shipping Rust code on PlayStation. So if you have a breakout indie hit on Steam in Rust (which has never happened) you can’t ship it on PS5. And maybe not Switch although I’m less certain.
> And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.
Well, sure, if you arbitrarily exclude the popular game written in Rust, then of course there are no popular games written in Rust :)
> And maybe not Switch although I’m less certain.
I have talked to Nintendo SDK engineers about this and been told Rust is fine. It's not an official part of their toolchain, but if you can make Rust work they don't care.
What allocations can you not do in Rust?
Tiny Glade is also the buggiest Steam game I've ever encountered (bugs from disappearing cursor to not launching at all). Incredibly poor performance as well for a low poly game, even if it has fancy lighting...
What evidence do you have for this statement? It kind of doesn't make any sense on its face. Binaries are binaries, no matter what tools are used to compile them. Sure, you might need to use whatever platform-specific SDK stuff to sign the binary or whatever, but why would Rust in particular be singled out as being forbidden?
Despite not being yet released publicly, Jai can compile code for PlayStation, Xbox, and Switch platforms (with platform-specific modules not included in the beta release, available upon request provided proof of platform SDK access).
Do you mean cyclic types?
Rust being low-level, nobody prevents one from implementing garbage-collected types, and I've been looking into this myself: https://github.com/Manishearth/rust-gc
It's "Simple tracing (mark and sweep) garbage collector for Rust", which allows cyclic allocations with simple `Gc<Foo>` syntax. Can't vouch for that implementation, but something like this would be good for many cases.
Such a crappy thing for a company to do.
Imagine you all cost 100k/year for the larger company to employ (since you all apparently don't make money).
Then imagine you all now cost 105k a year to the parent company.
It makes no difference.
There is no chance for any language, no matter how good it is, to match even the most horrendous (the web!) but full-featured UI toolkit.
I bet, 1000%, that it is easier to build an OS, a database engine, etc. than to match Qt, Delphi, Unity, etc.
---
I made a decision that has become the most productive and problem-free approach to making UIs in my 30 years of doing this:
1- Use the de-facto UI toolkit as-is (HTML, SwiftUI, Jetpack Compose). Ignore any tool that promises cross-platform UI (yes, that leaves HTML, but I mean: I don't try to do HTML in Swift, ok?).
2- Use the same idea as HTML: send plain data with the full fidelity of what you want to render: Label(text=.., size=..).
3- Render it directly with the native UI toolkit.
Yes, this is more or less htmx/tailwindcss (I get the inspiration from them).
This means my logic is all in Rust: I pass serializable structs to the UI front-end, which renders directly from them. Critically, the UI toolkit is nearly devoid of any logic more complex than what you see in a mustache-style template language. It does not do localization, formatting, etc. Only UI composition.
I don't care that I need to code in different ways, different apis, different flows, and visually divergent UIs.
IS GREAT.
After the pain of the boilerplate, doing the next screen/component/whatever is so ridiculously simple that it's like cheating.
So, the problem is not Rust. It's not F#, or Lisp. It's that UI is a kind of beast that is impervious to being improved by language alone.
I 100% agree. A modern mature UI toolkit is at least equivalent to a modern game engine in difficulty. GitHub is strewn with the corpses of abandoned FOSS UI toolkits that got 80% of the way there only to discover that the other 20% of the problem is actually 20000% of the work.
The only way you have a chance developing a UI toolkit is to start in full self awareness of just how hard this is going to be. Saying "I am going to develop a modern UI toolkit" is like saying "I am going to develop a complete operating system."
Even worse: a lot of the work that goes into a good UI toolkit is the kind of work programmers hate: endless fixing of nit-picky edge case bugs, implementation of standards, and catering to user needs that do not overlap with one's own preferences.
For example: if you have a progress bar that needs to be updated continuously, what do you do? Upon every `tick` of your Rust engine, do you send a new struct with `ProgressBar(percentage=x)`? Or do the structs have unique identifiers, so that the UI code can update just that one element and its properties instead of re-rendering the entire screen?
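One common answer to the question above is the second option: give each widget a stable id and send small patch messages, so the front-end updates one element in place rather than re-rendering. A minimal sketch, with hypothetical names:

```rust
use std::collections::HashMap;

// Hypothetical incremental-update message: each widget carries a stable id,
// so the front-end can patch one element instead of redrawing the screen.
struct ProgressUpdate {
    widget_id: u64,
    percentage: f32,
}

// Stand-in for the native front-end's widget state.
struct FrontEnd {
    progress: HashMap<u64, f32>,
}

impl FrontEnd {
    fn apply(&mut self, msg: ProgressUpdate) {
        // Only the addressed widget changes; everything else is untouched.
        self.progress.insert(msg.widget_id, msg.percentage);
    }
}
```

This is essentially the same trade-off virtual-DOM and htmx-style systems make: either diff full snapshots or address elements by id and send deltas.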
But yeah, my first thought here was Lua too, like others said.
Going hard with Rust ECS was not the appropriate choice here. Even a 1000x speed hit would be preferable if it gained speed of development. C# and Unity is a much smarter path for this particular game.
But, that’s not a knock on Rust. It’s just “Right tool for the job.”
Rust is a niche language; there is no evidence it is going to do well in the game space.
Unity and C# sound like a much better business choice for this. Choosing a system/language....
> My love of Rust and Bevy meant that I would be willing to bear some pain
....that is not a good business case.
Maybe one day there will be a Rust game engine that can compete with Unity; there probably already are, in niches.
Learning - Over the past year my workflow has changed immensely, and I regularly use AI to learn new technologies, discuss methods and techniques, review code, etc. The maturity and vast amount of stable historical data for C# and the Unity API mean that tools like Gemini consistently provide highly relevant guidance. While Bevy and Rust evolve rapidly - which is exciting and motivating - the pace means AI knowledge lags behind, reducing the efficiency gains I have come to expect from AI assisted development. This could change with the introduction of more modern tool-enabled models, but I found it to be a distraction and an unexpected additional cost.
In 2023 I wondered if LLM code generation would throttle progress in programming language design. I was particularly thinking about Idris and other dependently-typed languages, which can do deterministically correct code generation. But it applies to any form of language innovation: why spend time learning a new programming language that 100% reliably abstracts boilerplate away, when an LLM can 95% reliably slop out the boilerplate? Some people (me) will say that this is unacceptably lazy and programmers should spend time reading things; others will point to the expected value of dev costs or whatever. Very depressing.

Scripting being flexible is a sound idea, but that's not an argument against Rust either. Rather, it's an argument for more separation between the scripting machinery and the core engine.
For example Godot allows using Rust for game logic if you don't want to use GDScript, and it's not really messing up the design of their core engine. It's just more work to allow such flexibility of course.
The rest of the arguments are more in the familiarity / learning curve group, so nothing new in that sense (Rust is not the easiest language).
The rest of your comment boils down to "skills issue". I mean, OK. But you can say that about any programming environment, including writing in raw assembly.
Why exclude AI dev tools from this decision making? If you don’t find such tools useful, then great, don’t use them. But not everybody feels the same way.
I rarely touch game dev but that made me think Godot wasn't very suitable
Would you really expect Godot to win out over Unity given those priorities? Godot is pretty awesome these days, but it's still going to be behind for those priorities vs. Unity or Unreal.
I've been toying with the idea of making a 2D game that I've had on my mind for a while, but I have no game development experience, and am having trouble deciding where to start (obviously wanting to avoid the author's predicament of choosing something and having to switch down the line).
But they also could have combined Rust parts and C# parts if they needed to keep some of what they had.
PS: I love the art style of the game.
C# actually has fairly good null-checking now. Older projects would have to migrate some code to take advantage of it, but new projects are pretty much using it by default.
I'm not sure what the situation is with Unity though - aren't they usually a few versions behind the latest?
The same can be said of books as of programming languages:
"Not every ___ deserves to be read/used"
If the documentation or learning curve is so high and/or convoluted that it's disparaging to newcomers then perhaps it's just not a language that's fit for widespread adoption. That's actually fine.
"Thanks for your work on the language, but this one just isn't for me" "Thanks for writing that awfully long book, but this one just isn't for me"
There's no harm in saying either of those statements. You shouldn't be disparaged for saying that rust just didn't work out for your case. More power to the author.
However, I had a different takeaway when playing with Rust+AI. Having a language that has strict compile-time checks gave me more confidence in the code the AI was producing.
I did see Cursor get in an infinite loop where it couldn't solve a borrow checker problem and it eventually asked me for help. I prefer that to burying a bug.
Unfortunately, with a lot of libraries and services, I don't think ChatGPT understands the differences, or it would be hard for it to. At least I have found that with writing scriptlets for RT, PHP tooling, etc. The web world seems to move fast enough (and RT moves hella slow) that it's confusing libraries and interfaces across versions.
It'd really need a wider project context where it can go look at how those includes, or functions, or whatever work instead of relying on 'built in' knowledge.
"Assume you know nothing, go look at this tool, api endpoint or, whatever, read the code, and tell me how to use it"
In any case, there has always been a strong bias towards established technologies that have a lot of available help online. LLMs will remain better at using them, but as long as they are not completely useless on new technologies, they will also help enthusiasts and early adopters work with them and fill in the gaps.
LLMs will make people productive. But at the same time, they will elevate those with real skill and passion to create good software. In the meantime there will be some market confusion, and some engineers who are mediocre might find themselves in demand like top-end engineers. But over time, companies and markets will figure this out, and top dollar will go to those select engineers who know how to do things with and without LLMs.
Lots of people are afraid of LLMs and think it is the end of the software engineer. It is and it is not. It's the end of the "CLI engineer" or the "front-end engineer" and all those specializations that were an attempt to require less skill and pay less. But the systems engineers who know how computers work, who can spend all week describing what happens when you press Enter on a keyboard at google.com, will only be pushed into higher demand. This is because the single-skill "engineer" won't really be a thing.
tl;dr: LLMs won't kill software engineering; it's a reset. It will cull those who chose this path only because it paid well.
I'm not saying you're definitely wrong, but if you think that LLMs are going to bring qualitative change rather than just another thing to consider, then I'm interested in why.
Another potentially interesting avenue of research would be to explore allowing LLMs to use "self-play" to explore new things.
I wouldn't have read the article if it'd been labeled that, so kudos to the blog writer, I guess.
While the language itself is great and stable, the ecosystem is not, and reverting to more conservative options is often the most reasonable choice, especially for long-term projects.
Here's a thought experiment: Would Minecraft have been as popular if it had been written in Rust instead of Java?
Although the points mentioned in the post are quite valid.
Quake 1-3 used a single array of structs, with sometimes-unused properties. Is your game more complex than Quake 3?
The "ECS" upgrade to that is having an array for each component type but just letting there be gaps:
transform[eid].position += …
physics[eid].velocity = …
But yeah, you probably don't need an ECS for 90% of games.
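The "array per component, with gaps" layout described above can be sketched in Rust like this (types and names are illustrative):

```rust
// One array per component type; the entity id indexes directly into each
// array, and None marks a gap (the entity lacks that component).
#[derive(Clone, Copy)]
struct Transform {
    x: f32,
    y: f32,
}

#[derive(Clone, Copy)]
struct Physics {
    vx: f32,
    vy: f32,
}

struct World {
    transforms: Vec<Option<Transform>>,
    physics: Vec<Option<Physics>>,
}

impl World {
    fn step(&mut self, dt: f32) {
        // Iterate entities that have both components, skipping gaps.
        let n = self.transforms.len().min(self.physics.len());
        for eid in 0..n {
            if let (Some(t), Some(p)) = (&mut self.transforms[eid], &self.physics[eid]) {
                t.x += p.vx * dt;
                t.y += p.vy * dt;
            }
        }
    }
}
```

This gets you cache-friendly per-component iteration without any ECS framework; the cost is wasted slots for sparse components, which is exactly the trade-off the comment is pointing at.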
First: I have experience with Bevy and other game engine frameworks; including Unreal. And I consider myself a seasoned Rust, C etc developer.
I could sympathize with what was stated by the author.
I think the issue here is (mainly) Bevy. It is just not even close to the standard yet (if ever). It is hard for any generic game engine to compete with Unity/Godot. Never mind Unreal, the de facto standard.
But if you are a C# developer already using Unity, rather than C++ in Unreal, moving to Bevy, a bloated framework that is missing features, makes little sense. [And there is also the minor issue that if you are a C# developer, honestly, you don't care about low-level code or not having a garbage collector.]
Now, if you are a C++ developer and use Unreal, the only reason to move to Rust (which I would argue for the usual reasons) is if Unreal supports Rust. Otherwise, there is nothing that even compares to Unreal (that is not a custom-made game engine).
https://old.reddit.com/r/rust_gamedev/comments/13wteyb/is_be...
I wonder how something simpler in the rust world like macroquad[0] would have worked out for them (superpowers from Unity's maturity aside).
You can go low level in C#**, just like Rust can avoid the borrow checker. It's just not a good tradeoff for most code in most games.
** value types/unsafe/pointers/stackalloc etc.
Bevy: unstable, constantly regressing, with weird APIs here and there, in flux, so LLMs can't handle it well.
Unity: rock-solid, stable, well-known, featureful, LLMs know it well. You ought to choose it if you want to build the game, not hack on the engine, be its internal language C#, Haskell, or PHP. The language is downstream from the need to ship.
pub fn randomize_paperdoll<C: Component>(
    mut commands: Commands,
    views: Query<(Entity, &Id<Spine>, &Id<Paperdoll>, &View<C>), Added<SkeletonController>>,
    models: Query<&Model<C>, Without<Paperdoll>>,
    attachment_assets: Res<AttachmentAssets>,
    spine_manifest: Res<SpineManifest>,
    slot_manifest: Res<SlotManifest>,
) {
    // ...
}