I took the opposite approach, and it has caused great pain. I've been writing a metaverse client in Rust. Right now, it's running on another screen, showing an avatar riding a tram through a large steampunk city. I let that run for 12 hours before shipping a new pre-release.
This uses Vulkan, but it has WGPU and Rend3 on top. Rend3 offers a very clean API: you create meshes, 2D textures, etc., and "objects", which reference the meshes and textures. Creating an object puts it on screen. Rust reference counting interlocks everything. It's very straightforward to use.
All those layers create problems. WGPU tries to support web browsers, Vulkan, Metal, DX11 (recently dropped), DX12, Android, and OpenGL. So it needs a big dev team and changes are hard. WGPU's own API is mostly like Vulkan - you still have to do your own GPU memory allocation and synchronization.
WGPU has lowest-common-denominator problems. Some of those platforms can't support some functions. WGPU doesn't support multiple threads updating GPU memory without interference, which Vulkan supports. That's how you get content into the GPU without killing the frame rate. Big-world games and clients need that. Also, having to deal with platforms with different concurrency restrictions results in lock conflicts that can kill performance.
Rend3 is supposed to be a modest level of glue code to handle synchronization and allocation. Those are hard to do in a general way. Especially synchronization. Rend3 also does frustum culling (which is a big performance win; you're not rendering what's behind you) and tried to do occlusion culling (which was a performance loss because the compute to do that slowed things down). It also does translucency, which means a depth sort. (Translucent objects are a huge pain. I really need them; I work on worlds with lots of windows, which you can see out of and see in.)
The Rust 3D stack people are annoyed with me because I've been pounding on them to fix their stack for three years now. That's all volunteer. Vulkan has money behind it and enough users to keep it maintained. Rend3 was recently abandoned by its creator, so now I have to go inside that and fix it. Few people do anything elaborate on WGPU - mostly it's 2D games you could have done in Flash, or simple static 3D scenes. Commercial projects continue to use Unity or UE5.
If I went directly to Vulkan, I'd still have to write synchronization, allocation, frustum culling, and translucency. So that would be a big switch.
Incidentally, Vulkano, the wrapper over Vulkan and Metal, has lowest-common-denominator problems too. It doesn't allow concurrent updating of assets in the GPU. Both Vulkan and Metal support that. But, of course, Apple does it differently.
WGPU uses WebGPU and AFAIK no browser so far supports "threads". https://gpuweb.github.io/gpuweb/explainer/#multithreading https://github.com/gpuweb/gpuweb/issues/354
And OpenGL never supported "threads", so anything using OpenGL can't either.
But even more common is mapping memory in the "OpenGL thread" and then letting another thread fill the memory. Quite common is mapping buffers with persistent/coherent flags at init, and then leaving them mapped.
WGPU goes beyond WebGPU in many ways already, and could also support threads.
The first part is true, but the second part is not. Allocation and synchronization is automatic.
Metal is older than Vulkan. So really, Vulkan does it differently.
This is really helpful for me to learn about; it's a key thing I want to get right for a good experience. I really hope WGPU can find a way to add something for this as an extension.
OpenGL was never "easy", but it was at least something a regular person could learn the basics of in a fairly short amount of time. You could go to any big book store, buy some intro to graphics programming book, and get some basic stuff rendering in an afternoon or two. I'm sure Vulkan is better in some regards, but it is simply not feasible to expect someone to learn it quickly.
Like, imagine the newest Intel/ARM/AMD chips came along and instead of being able to write C or C++, you're being told "We are dropping support for higher level languages so you can only write assembly on this now and it'll be faster because you have more control!" It would be correctly labeled as ridiculous.
OpenGL is only deprecated on macOS, AFAIK; it will exist for many years to come.
> I'm sure Vulkan is better in some regards but is simply not feasible to expect someone to learn it quickly.
Vulkan is often said to be more of a “GPU API” than a high level graphics API. With that in mind, the complexity of Vulkan is not surprising. It’s just a difficult domain.
> Like, imagine the newest Intel/ARM/AMD chips came along and instead of being able to write C or C++, you're being told "We are dropping support for higher level languages so you can only write assembly on this now and it'll be faster because you have more control!" It would be correctly labeled as ridiculous.
IMO, it’s more like single threaded C/C++ vs multithreaded C/C++ programming. There is a massive increase in complexity and if you don’t know what you are doing, it’ll blow up in your face and/or give you worse performance. However, it’s the only practical path forward.
Anyway, OpenGL can basically be implemented on top of Vulkan. Perhaps it is regrettable that the OpenGL standard is no longer being actively developed, but nothing lasts forever.
It's abandoned by Khronos and the GPU vendors, which is pretty much the same thing as deprecated, unfortunately.
What I would have really preferred is Apple releasing Metal as a standard that anyone could use. Metal is a pretty nice API that's fairly fun and easy to use. I feel like if we need a "next gen" version of a graphics API, I would have had Vulkan for super low-level stuff, and Metal or DirectX for higher-level stuff.
This really doesn't mean much. The second cube (or another shape) isn't going to double the line count.
> OpenGL was never "easy" but it was at least something a regular person could learn the basics of in a fairly short amount of time.
The problem is that OpenGL no longer matches current hardware so the naive way to use it is very much suboptimal for performance. Once you move to zero driver overhead techniques for OpenGL then Vulkan is not that much harder.
> Like, imagine the newest Intel/ARM/AMD chips came along and instead of being able to write C or C++, you're being told "We are dropping support for higher level languages so you can only write assembly on this now and it'll be faster because you have more control!" It would be correctly labeled as ridiculous.
Except current Intel/ARM/AMD chips don't support C or C++ and you already have to write assembly ... or use a third-party tool to do the translation from C or C++ for you. That's also the goal for Vulkan - to provide a low level standard interface for GPUs on top of which more user friendly abstractions can be layered.
It's like the difference between "Hello World" in Python and "Hello World" in Java. Doesn't matter in the context of a serious software engineering project, but it's a significant barrier for beginners.
That's an absurd standard. "Writing an OS is easy, because once you've written enough code to get Doom to run, firing up a second copy of Doom is relatively straight-forward!"
It's those first 600 lines that are the problem. There's now a steep learning curve and lots of boilerplate required for even trivial use cases, where there wasn't before. That's a loss.
> The problem is that OpenGL no longer matches current hardware so the naive way to use it is very much suboptimal for performance. Once you move to zero driver overhead techniques for OpenGL then Vulkan is not that much harder.
Again, mad standard. "Writing Vulkan shouldn't be a problem for people who write complex zero driver overhead code for OpenGL." Great. What about the other 99% of people?
I'm not against Vulkan for those applications that truly do need it, but - frankly - most graphics programming does not need it. In giving up OpenGL we're giving up a sane, standard way to do graphics programming, and exchanging it for a choice between (i) making everyone write mad Vulkan boilerplate for control they don't need, or (ii) choosing between a bewildering array of engines and frameworks and libraries, none of which work at all similarly, many of which are still very immature.
And it's obvious which one will win - whichever proprietary, SaaS monolith emerges to clean up the mess. Losing OpenGL is a huge setback for open standards, honestly. The principal beneficiaries of deprecating OpenGL will be the shareholders of Unity, in the end.
The terrible, terrible Vulkan API just kind of feels gatekeep-y. They got rid of the only open, easy-to-use graphics standard, made NO REPLACEMENT, and then said "oh you should just use this impossible API or let the outside world come up with a fix around it".
I don't know how this lie caught on so well, but it doesn't pass the smell test.
https://github.com/google/angle
Several phones now ship ANGLE as their only OpenGL support, on top of their Vulkan drivers.
If you want a modern-ish API that's relatively easy and portable use WebGPU via wgpu (rust) or dawn (c++).
Generally, I feel OpenGL is the recommended route if you don't really aim for advanced rendering techniques.
There are plenty of 2D/low-poly/PS1-graphics games right now, and those don't need to use Vulkan.
Vulkan is an example of how the AAA gaming industry is skewed towards rendering quality and appearance. AAA game studios justify their budget with those very advanced engines and content, but there is a growing market of 2D/low-poly games, because players are tired of it and have realized they want gameplay, not graphics.
Also if you are a game developer, you don't want to focus on rendering quality, you want to focus on gameplay and features.
- No global state
- You can select which GPU you want to use at runtime
- OpenGL error handling is terrible
- Validation layers!!
- Cool official artwork
- Fantastic documentation
- You can upload data to the GPU asynchronously from a second thread in a sane way
- Fancy GPU features - Mesh shaders, RTX
I think the driver here is more likely the financial reality of game development. High-fidelity graphics are incredibly expensive, and small game studios simply cannot produce them on a realistic timeline and budget. Would consumers reject indie games with AAA-quality graphics? I think not. It's just that few such games exist because it's not financially viable, and there is a large enough market that is fine with more stylized, lower-fidelity graphics.
With current hardware and tools, it becomes much cheaper to reach graphics quality from 10 or 15 years ago, so such games can be good enough and still profitable.
I think that high-quality rendering is reaching a tipping point where it's mostly diminishing returns. AAA studios could once differentiate with good graphics, but this becomes less and less true.
Gameplay matters, and the innovation is not on the side of AAA studios.
Firefox and Safari don't support it yet. And how would you even deploy it to Steam (edit: or rather, make a desktop or mobile game with it)? Can you wrap it in a webview?
Doesn't seem mature...
There's no end of frameworks and engines out there, most unfinished, and all extremely opinionated about what your code must look like. ("You will build scene trees that hold callbacks. Actually no, you will write entity and component classes and objects. Wait, forget that, everything is immediate mode and functional and stateless now!")
Add all the platform nonsense on top (push notifications for a mobile game? In-app purchases? Mandatory signing via Xcode?) and it's all just a hot mess. There's a reason Unity has the market share it does, and it's not because it's a fantastic piece of software. It's because cross-platform for anything more complex than a glorified web view (which is, to be fair, most non-game apps) is still a massive pain.
Vulkan is a great idea for a general game engine, or for backing something like OpenGL. It's a low-level abstraction that should allow you to do something like use OpenGL 8.0 on a GPU that has long since lost support from its manufacturer.
Performance isn't an issue either. With the advent of GPU-resident drawing - meaning a compute shader fills buffers with draw parameters, and multiple objects with complex materials are drawn in a single draw call - there isn't much of a CPU bottleneck. Look up OpenGL AZDO if you are interested in this.
OpenGL is kind of a middle ground. That's if you want to get technical, but not micromanage every detail and write pages of boilerplate. It is also a great introduction to graphics programming. More than an introduction actually, there is a lot you can do with OpenGL.
Unfortunately, Apple is holding every OpenGL app and dev hostage. It's officially deprecated by Apple. So OpenGL is not a viable cross-platform solution moving forward. Eventually, it will be completely removed from macOS, at which point it will just be an OpenGL to Metal converter that everyone uses, just like Vulkan with MoltenVK. So I feel it's better to use something like WebGPU moving forward that directly takes the stance that each OS will implement its own (perhaps multiple) backends. Unfortunately, it doesn't have the tutorials and learning material that OpenGL does. (This is from an inexperienced 3D person that's been looking into this a lot.)
One that stuck out to me: Don’t implement something unless you need it right now
This is a constant battle I fight with more junior programmers, who maybe have a few years of experience, but who are still getting there.
They are often obsessed with "best-practices" and whatever fancy new tool is trending, but they have trouble starting with the problem they need to solve and focusing on the minimum needed to just solve that problem.
> Remember that you can always rewrite any part of your game/engine later.
This isn't the case in medium to large organizations. Usually you will just move on and rarely have the time to revisit something. This is unfortunate, of course, but it means you need to build things properly the first time around and make sure it won't have a chance to create bugs or side effects. I have worked in too many codebases where people feel the need to rush new features, and then it creates a minefield of a codebase where you have to manually check every feature and hold the entire application in context when changing something small.
yes! and this means you need to know everything about what you're building upfront! so now you have to do waterfall, but hide all the actual effort and pretend you're just figuring it out in the moment because agile.
There are two poles: On the one side is making everything an unmaintainable pile of quick hacks, on the other side is over engineering everything. You want to be somewhere in the middle.
Don't back yourself into a corner. If a solution will block you from doing the right thing later, maybe some additional engineering may be needed after all to ensure you can continue to iterate. In large companies, inertia is enormous. Once a solution is in place and gets replicated, backtracking may become a herculean task.
Tell them YAGNI is also a best practice :D
To be fair, if held strictly to these principles, their work would revolve mostly around gluing together various APIs and services, and sometimes adjusting already-written software to the company's needs. So I'm not surprised they use every possible opportunity to write something here and there; this menial digital plumber's work takes its toll, and one needs to squeeze in something a bit more enjoyable from time to time just to keep sanity in place a bit longer.
When I was a junior, the bane of my existence was senior and staff engineers hoarding all the fun, new work for themselves and forcing me into "bitch work" where I'm just cleaning up old, broken code and handling bugs. What finally got me promoted was explicitly ignoring my manager to deploy changes my company needed. Of course, by that point I had been looking for a new job, and I just left instead of taking the shitty, bottom-of-band promo increase.
It's awful for promotion chances, and it forced me to quit.
Useful advice to start with. It’s a rule that experts are allowed to break though.
I’ve observed/experienced the same exact thing [0]. I think it’s due to a combo of (1) not knowing what “the right way” to do things are and (2) thinking it’ll make your peers perceive you as more knowledgeable or advanced if they see you writing “best practices” code. Not to mention that sometimes the simpler solutions are so simple, they make you feel like you’re not a real software engineer. I usually just do my best to help them understand that simple solutions are okay, especially since (1) I’ve been there myself when I was in their shoes and (2) I know they have good intentions.
But there’s a fine line here: the difficulty is understanding the long-term vision and making sure short-term improvements don’t actively work against the ‘ideal’. It's kind of hard for newer programmers to get a good sense of system design.
This can go both ways, as senior devs can just want to use what they know.
the Vulkan driver is missing the ~20k lines of code that the OpenGL driver runs for you to set up the rendering pipelines, render targets, etc.
This is all code that already exists in the OpenGL driver and has been optimized for 20+ years by the best people in the industry.
So when you start putting together the equivalent functionality that you get out of the box with OpenGL on top of Vulkan, doing it the naive way doesn't magically give you good perf. You have to put in more work, and then the real problems start stacking up, such as making sure you have all the right fences and other synchronization primitives in place.
So only when you actually know what you're doing and you're capable of executing your rendering with good parallelism and correct synchronization can you start dreaming about the performance benefits of using Vulkan.
So for a hobbyist like myself.. I'm using OpenGL ES3 for the simplicity of it, and because it's already good enough for me and I have more pressing matters than spending time writing those pesky Vulkan vertex descriptor descriptor descriptors ;-)
Btw this is my engine:
i've heard that vulkan allows bindless textures now, so the descriptor nonsense is a bit less awful than it used to be
vulkan is appealing, but there's a high initial cost that i don't want to pay
Then gradually we can replace that library with routines optimised for the use case?
They all introduce another layer of abstraction on top of Vulkan before even giving you the simple case without it. It's always "use vk-bootstrap, volk, vma, or some other library".
Is there a single resource anywhere that gives an example of doing the memory management manually? Because I haven't found one. It seems like "use vma" or "go figure out the spec" are the only choices you are given. Is it too much to ask for the most basic example without having to add any libraries other than the Vulkan SDK itself?
in most games, there are about 3 "lifetimes":

- permanent/startup
- per-level
- per-frame
and they're nested. so, you can use a single stack allocator for all of them. at the end of the frame/level, pop back to where it started
there are more complicated patterns, but this one will get you pretty far. you can use it on the CPU and the GPU
When you're corroborating some random person's third-party instructions on initializing Vulkan and comparing those notes to what's done in Khronos Group repositories and reading the Vulkan 1.3 spec and realizing you have to read the specification out-of-order to get anything done, it's clear that they failed.
They failed. It's bad work by any other standard. But you'll do the work once and forget about it for the most part, so professionals don't complain too much.
Read my other comment in this thread for a portion of source code annotated with the specification chapters and sections.
It's a generic implementation that can be used with SDL and others.
Edit: As of the time of writing, the standard approach is to use VMA and Volk, both of which are included with the official Vulkan SDK. That should tell you enough about the state of the art.
One thing I found difficult is understanding how to use things in a real engine rather than a sample. A lot of samples will allocate exactly what they need or allocate hundreds of something so that they're unlikely to run out. When I was trying to learn DirectX, I found Microsoft's MiniEngine helpful because it wasn't overly complex but had things like a DescriptorAllocator that would manage allocating descriptors. Is there something similar for Vulkan?
Another thing I struggle with is knowing how to create good abstractions like materials, meshes, and how to decide in what order to render things. Are there any good engines or frameworks I should study in order to move beyond tutorials?
For descriptor set allocation, there is only one pattern that makes sense to me: expect the pools to be rather short-lived and expect to have many of them. Allocate a new one once allocation from the current one fails - don't keep your own counters for allocated descriptors. The standard allows for all kinds of pool behaviors that deviate from strict counting. Discard old pools after the last command buffer referencing that pool is finished.
Pipeline barriers and image layouts are a big pain in the butt. It makes sense to abstract them away in a layer that tracks the last usage and last layout of everything and adds barriers as required. It can get complex, but it's worth it once you have optional passes, passes that can get reordered, or other more complex things going on.
About meshes, materials, and rendering order: this goes beyond what I can summarize in a single HN post. It depends a lot on the choice of rendering algorithms, and I do not consider a very generalized solution to be worth the (enormous) effort to get right.
Vulkan initialization and basic swapchain management is very verbose, but things get much better after you do it for the first time and make some handy abstractions around pipeline creation/management later.
I'd like to see them move nearly all 900-ish lines of setup code back down to the roughly 90 you'd need to initialize OpenGL.
There's so much overlap in basically everyone's graphic usage of Vulkan that you realize after doing it yourself they should have done some simple optimization for the 99% use case, and allowed other people to write the full 900+ lines for GPU compute or other use cases.
For example, his quick sidebar to explain fundamental shader types was great even for me, as someone who is not that familiar with the topic (link goes to 11:20):
I can also recommend Arseny Kapoulkine's YouTube channel[1]. It can get a bit too advanced at times, but his channel was one of my motivators for getting into Vulkan programming.
If you want to support those other devices you have to have a non-dynamic rendering path, and then at that point dynamic rendering is just more code. VK_EXT_shader_object is even better, but availability is that much worse.
Edit: If you are able to find a tutorial using dynamic rendering, learn with that. Render passes obfuscate what's going on, and with dynamic rendering you can see exactly what's happening.
I started learning Vulkan as an experiment and it seemed to work out well, so that’s why I wrote this article. :)
In my experience, being daunted and not knowing where to start is a large part of the difficulty in doing difficult things.
Great write up. Inspiring.
I try to make my website to feel like “the old Internet” that we seem to be losing and it's great that it’s noticeable. :)
* Noita
* Teardown
They both do their physics on GPU which results in some impressive effects and the level of destruction/world interaction which wasn't seen anywhere before.
Here's an interesting Teardown engine overview by its devs: https://www.youtube.com/watch?v=tZP7vQKqrl8
A simple blog post doesn't need super fancy design when its content can speak for itself.
Raw HTML definitely looks much uglier, sadly (“Reader mode” in most browsers makes websites without CSS easily readable, though!).