I find reading C++ "standards" papers onerous and feel like they're written in a way that's deliberately inaccessible. I don't much like the idea of going to CppCon -- even if my company funded it, which maybe they would, I feel like I'd be marginalized for not using template metaprogramming, not knowing the new hotness by heart, and generally being a proponent of C-with-classes. I just feel like so much of the C++ "standards" work is led by academics who think the concerns of working programmers like me are beneath them.
Is there a way I can "get involved" and does my voice have any value?
Videos of all CppCon talks from the last several years are freely available on YouTube and if you took the time to watch them you'd see that many of them are by working programmers and not academics, quite a few of them in the games industry. You'd also learn that the committee is quite focused on simplifying the use of the language and on finding more usable ways to get the benefits of template metaprogramming. You would also find explanations of many features that are more accessible than standards papers and occasional explanations of why standardese is the way it is - nobody, even the most academic speakers, claims to find the standard the most accessible way to learn about a new feature.
In a way it isn't fair. Most users of most programming languages don't know the language very well, but they are the majority, so the views of these programmers are important.
Here are the C++ subjects of interest to me:
* geometric primitives. Not having basic geometry by now is insane.
* concurrency. I think the API as it stands is good, but some of the APIs around atomics (mostly the difference between compare_exchange_weak and compare_exchange_strong) are not clear; I have had to explain those to coworkers before. Why can't we have something called test_and_set, like on atomic_flag, that's an OK default? Also, while I perfectly understand why it's the case, atomics have deleted copy constructors, and that tends to generate compiler errors. I don't get why compilers can't just generate the kind of copy constructor all my coworkers keep having to write by hand (and potentially introduce errors into).
* filesystem API. Very excited for that, but sort of worried it will not work on every platform, and especially that it'll work terribly on Windows. I have to write Windows/Linux/iOS/Android C++, so a bunch of my gripes are "the standard is inconsistently supported", and I suppose it's not the standards committee's job to enforce that... but maybe they could stop inventing new stuff for Microsoft to screw up...
* a few other misc things. The fact that X const& and const X& are both valid leads to every project having its own "standard" for const placement, and it's not great to read, etc. Not major stuff, but things that are of concern to working programmers.
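To make the atomics gripe concrete, here's roughly the kind of code I mean. This is just a sketch; the names fetch_max and Counter are mine, not anything from the standard:

```cpp
#include <atomic>

// compare_exchange_weak may fail spuriously, so it belongs inside a
// retry loop (where it can be cheaper than the strong variant on some
// targets). On failure, `old` is refreshed with the current value.
inline int fetch_max(std::atomic<int>& a, int value) {
    int old = a.load(std::memory_order_relaxed);
    while (old < value &&
           !a.compare_exchange_weak(old, value)) {
        // retry with the refreshed `old`
    }
    return old;  // last observed value
}

// The hand-written "copy the value, not the atomic" boilerplate that
// coworkers keep rewriting, because std::atomic deletes its copy ops:
struct Counter {
    std::atomic<int> n{0};
    Counter() = default;
    Counter(const Counter& other) : n(other.n.load()) {}
    Counter& operator=(const Counter& other) {
        n.store(other.n.load());
        return *this;
    }
};
```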
I'm not really complaining, I love C++ and being a C++ programmer and you couldn't pay me enough to go integrate idiosyncratic web frameworks. However I think you must admit that the C++ community, beyond the language itself, is a little exclusive and not super friendly to people who are not as knowledgeable as others. I have a little time to learn some of this stuff, and I would be happy to, but I will never be an expert like most of the standard committee is. Based on your message I think that makes me and my opinion not welcome? If so that's OK, but, um, there's a lot more of me than there are of them in this industry.
(edited for formatting)
> Is there a way I can "get involved" and does my voice have any value?
The truth is that C++ standardization is not full of "academics", and people involved voice these same concerns.
One of my fellow committee members, Guy Davidson, is in the games industry, and the subject of the 2D graphics proposal has been a regular topic at our meetings.
* http://cppcast.com/2018/07/guy-davidson/
Anyway, in my OP: I read the blog post, and at the end he tells anyone who has industry-specific concerns with the way they use C++, and the way C++ is changing, to get involved. I was just saying that, as a C++ end-user, even one who cares enough to write this, there is no readily accessible way to do that, because I don't work with anyone who's already on the committee, nor at Microsoft. So it feels a bit like an empty rebuttal on his side. Writing this here is basically the longest discussion I've ever had in my whole career with people who really know deep things about C++ and the process it's made by. I'm glad for it, but I'm clearly not the demographic the original blogger was addressing, even though I definitely feel the same issues he outlines.
Most often I learn by reading code. OpenSceneGraph is an example of an open source project that has modern-ish C++ but no deliberately abstract misdirection. Open source usually doesn't have super clean code, but it's a good way to sample the variety of structures you can find in a project. Of course the classics (SQLite and the Linux kernel) are much too C-like for what I'm getting at, but they are still full of lessons on how to organize modules and APIs, how many arguments to pass and where, where to park I/O code, that sort of thing.
I posted it here for the benefit of HN readers who may have the same question, but I'll write you an email too so you can share what you want to improve and such. There aren't many C++ programmers left in the HN-reading, not-just-punching-a-clock demographic, and I kind of miss talking to people who get that. Like the prototypical game dev in the article, I go to one conference a year, and it's not about C++.
[1]: https://yosefk.com/blog/
[2]: https://blogs.msdn.microsoft.com/oldnewthing/
> I find reading C++ "standards" papers onerous and feel like they're written in a way that's deliberately inaccessible.
Sadly, it's somewhat true; standard wording is typically not readable for laypeople, because its primary purpose is unambiguous specification, not education. Generally, I find it's a good idea to avoid "wording for ~~" proposals if I haven't followed that specific line of proposals from its beginning.
But many proposals are still fairly readable technical papers; for instance, Herb Sutter's proposals are generally easy to follow. (ex. https://wg21.link/P0709)
> I feel like I'd be marginalized for not using template metaprogramming, not knowing the new hotness by heart, and generally being a proponent of C-with-classes.
https://www.youtube.com/watch?v=rX0ItVEVjHc
Don't worry. Mike Acton is not known as a strong proponent of "Modern C++", but his session[1] is one of the most popular CppCon videos on YouTube. Even if you don't like templates, people will generally respect you.
> I just feel like so much of the C++ "standards" work feels like it's led by academics who think the concerns of working programmers like me are beneath them.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/n479...
You can find the list of participants for the last meeting there. Most of them are just engineers; even Bjarne now works for Morgan Stanley (I think most of the "designed in an Ivory Tower" concerns can generally be credited to Bjarne having been a professor before). They're writing C++ code as their daily job, just like you (and very likely suffering from C++ as well). That's why they're writing proposals to improve the language.
A tangential story: with a few exceptions, PL academics are generally not working on languages like C++, because it typically doesn't align well with their interests. They tend to use more elegant, academia-friendly languages like Haskell or ML, or even fancier languages like Coq, Agda, or Idris, depending on the topic, or design their own languages. For formal verification research, maybe C or Java. But C++ is typically considered a complex, inelegant beast for research.
But his argument fits well with C++'s history of finding exceedingly complex solutions to simple problems. Want efficient matrix calculation? Well, who needs native support for matrices when you can do the same with expression templates and static polymorphism/CRTP (see: the Eigen library)?
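For readers who haven't seen the trick: here's a toy sketch of expression templates for lazy vector addition. It's nowhere near what Eigen actually does, but it shows the idea (a + b builds a lightweight node instead of a temporary vector, so a chained sum evaluates in a single loop):

```cpp
#include <cstddef>
#include <vector>

// A "sum node" holding references to its operands; elements are
// computed on demand, so no intermediate vector is ever allocated.
template <class L, class R>
struct Sum {
    const L& l;
    const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

struct Vec {
    std::vector<double> data;
    explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
    double  operator[](std::size_t i) const { return data[i]; }
    double& operator[](std::size_t i)       { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Assigning from any expression evaluates the whole tree in one loop.
    template <class E>
    Vec& operator=(const E& e) {
        for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
        return *this;
    }
};

// Deliberately unconstrained for brevity; a real library would
// restrict this to its own expression types.
template <class L, class R>
Sum<L, R> operator+(const L& l, const R& r) { return {l, r}; }
```

So `out = a + b + c` does one pass and zero temporaries, at the cost of exactly the kind of template machinery being complained about.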
The last section of the article says you either do nothing or you get involved. I'm afraid it is missing the obvious third option: switch to another language which actually supports your use case.
I have to defend C++ here - "native matrices" is under-specified. In practice, "Matrix" is one of the leakiest abstractions in programming and you have to care about representation and choice of algorithm pretty much from the get-go, and IMO C++ is actually the best available option for managing that complexity, especially when you're solving large systems in parallel (and it's worth pointing out that one of the front-running open-source libs in this space is written in C++[1]).
I would expand more on the first bullet point of why game devs don't test. Tests are anti-agile and game development is extremely agile. Usually you don't know what kind of game you're making until you're done.
Not when you're constantly prototyping, which is what game dev essentially is for the most part of the process...
First, they made it so that if someone playing the game saw a bug, they could press the "file a bug" key, type in a description, and the game would save out enough info to bring someone back to that point in the game, same camera, and possibly other state. From the bug database they could click a link that would launch the game back into that state, let someone verify the fix, and mark it as fixed.
They also had a waypoint system for bots to play through the puzzles (this was The Talos Principle they were talking about). If the bots ever got stuck, as in didn't make it to the next waypoint within some time limit, the bots would file a bug using the system above.
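The waypoint timeout could be as simple as something like this; purely my illustration, I have no idea what Croteam's actual code looks like:

```cpp
#include <chrono>

// Watchdog for a bot: if it doesn't reach the next waypoint within the
// time budget, the caller files a bug through the state-capture system
// described above. Names here are hypothetical.
struct BotWatchdog {
    std::chrono::seconds budget{60};
    std::chrono::steady_clock::time_point last_waypoint =
        std::chrono::steady_clock::now();

    void on_waypoint_reached() {
        last_waypoint = std::chrono::steady_clock::now();
    }
    bool stuck() const {
        return std::chrono::steady_clock::now() - last_waypoint > budget;
    }
};
```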
https://www.gdcvault.com/play/1022784/Fast-Iteration-Tools-i...
As another interesting idea, apparently the creator of Thumper built a URL system: people press a button, which generates a URL into the clipboard; they can then paste that URL into Slack (or email/chat/etc.) to launch the game in a particular state and pass it to other users on the team.
Tests are NOT anti-agile, that is just dumb. I feel like a bunch of Hacker News hipsters read an article about TDD 3 years ago and then said "Yep, that's my opinion! Tests are bad," despite every major software company requiring unit tests for their production codebases. (Hint: it's because they did the research and found tests beneficial.) Tests enable agility.
Let's just get this out of the way now: tests are not about catching bugs. Tests are about allowing you to safely refactor your code without breaking previously declared behavior.
Testing enables you to iterate and refactor code without constantly releasing new regressions. Testing IS code quality. If you lack tests you lack a core piece of code quality.
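A trivial illustration of the point: the test pins the declared behavior, so a refactor can't silently change it. The to_upper_* functions are made-up examples, not anything from the thread:

```cpp
#include <cctype>
#include <cstddef>
#include <string>

// "Before" the refactor: index-based loop.
std::string to_upper_naive(std::string s) {
    for (std::size_t i = 0; i < s.size(); ++i)
        s[i] = static_cast<char>(
            std::toupper(static_cast<unsigned char>(s[i])));
    return s;
}

// "After" the refactor: range-for. The same test covers both, so the
// rewrite is safe; without the test, a regression could slip in.
std::string to_upper_refactored(std::string s) {
    for (char& c : s)
        c = static_cast<char>(
            std::toupper(static_cast<unsigned char>(c)));
    return s;
}
```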
Game project managers are infamous for not being great planners, so it wouldn't surprise me if they dismissed automated tests as "a waste of time" or "something we can't do now because we don't have time now" (so we end up wasting more time in the end, having to do death marches, etc.)
* None of the problems that have been commented on are unique to the games industry at all. Slow debug builds suck for all C++ developers and weird template meta-programming is confusing for practically everyone.
* He makes these broad hand-wavey statements like "individuals don't feel pain from slow compile times", or "big companies can just can throw processor power at it" to which I would say, BS. Fast iteration in C++ is really hard because of the delay and it's a big problem for everyone.
* "Participate more" -- isn't that exactly what people are doing on twitter? Not everyone can go to CppCon.
This problem was worst in my experience in the Xbox 360 / PS3 generation because the in order processors handled debug builds very poorly and were different enough from a PC that it was common to have to debug on target rather than on a PC build on a much more powerful development machine. It's less of an issue with current generation consoles that are basically PCs as they don't suffer as badly with debug performance and many issues can be debugged on a PC build on a more powerful system. It may be more of an issue for mobile still.
Fortunately many of the newer features of C++17 and C++20 help both with improving debug performance and with simplifying or reducing the need for "weird template meta-programming". Several also help with compile times, and modules in particular are quite focused on tackling the biggest root cause of slow compiles in C++.
Rarely do I see workstation-grade hardware in the wild, and when I have, they're build slaves that are incredibly anti-agile.
Part of this is just people complaining on a platform that over values short pithy complaints.
"Before about the early 90s, we didn’t trust C compilers, so we wrote in assembly."
There were a lot of games released during the '80s; were they really all written in assembly?
https://www.myabandonware.com/browse/year/
I don't have experience in the game industry at all, I must add.
Some things, like texture mapping, you could only write in assembler, because you needed to use the x86 lower/upper halves of a word (the AL and AH registers) due to register pressure. Spilling to the stack could have caused a 50%+ slowdown.
In the 486 era you needed assembler to work around quirks like AGI stalls.
On Pentium the reason for assembler was to use FPU efficiently in parallel with normal code (FPU per pixel divide for perspective correction). Of course you also needed to carefully hand optimize for Pentium U and V pipes. If you did it correctly, you could execute up to 2 instructions per clock. If not, you lose up to half of the performance (or even more if you messed up register dependency chains, which were a bit weird sometimes).
One also needs to remember compilers in the nineties were not very amazing at optimization. You could run circles around them by using assembler.
Mind you, I still need to write some things in assembler even on modern x86. But it's pretty little nowadays. SIMD stuff (SSE/AVX) you can mostly do in "almost assembler" with instruction intrinsics, but without needing to worry about instruction scheduling and so on.
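For example, with SSE intrinsics (x86 only): this is the "almost assembler" style, where you pick the instructions but the compiler handles register allocation and scheduling.

```cpp
#include <emmintrin.h>  // SSE2, baseline on x86-64

// Add four floats at once. _mm_loadu_ps / _mm_storeu_ps handle
// unaligned pointers; _mm_add_ps is a single ADDPS instruction.
void add4(const float* a, const float* b, float* out) {
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));
}
```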
Plus, nobody had a 486 in the 80s (it was released in 1989). People would be lucky to have a 286, but usually just some home computer (Apple II, Spectrum, Commodore 64, Atari ST, Amiga 500, Amstrad CPC, etc).
I tried the same algorithm in Godbolt with some Clang versions and it was slightly better, using two or three registers, but not by much. So I had to drop down to inline assembly.
I wonder if GCC has improved since then.
This only changed slowly with 16-bit machines like the Amiga or Atari ST, they had more memory, the Motorola 68000 instruction set was more suited for compiled languages, and the custom chips (like copper and blitter) freed the CPU from many graphics tasks. Yet even on those machines the critical parts were usually written in assembly.
Most of them, yes. Here's a well known game:
Slower-moving adventure and RPG games might be in a higher-level language. (IIRC the original Wizardry was written in a VM-based Pascal for Apple II?)
Companies that specialized in adventure games would have their own interpreter and VM — Infocom's ZIL, Sierra's AGI, Lucasfilm's SCUMM. Game developers would write code in a scripting language against that company-standard VM.
Amateur games might be written in BASIC because every computer under the sun shipped with a BASIC interpreter back then.
C wasn't a practical option because decent compilers didn't exist for most non-Unix systems until the end of the 1980s — or if they did, they'd cost an arm and a leg. (I think the retail price for Microsoft's C compiler for DOS was several thousand dollars.)
Also, Microsoft Visual C eventually became good, but not before version 4. I remember watching a team at Activision literally take more than an hour to compile their game. Perhaps they were doing something wrong, but similar teams had much less of a problem once 4.0 came out. You cannot imagine what a drag that is on a team's productivity and creativity.
EDIT: Found it. Of course Fabien wrote about it: http://fabiensanglard.net/prince_of_persia/
Every time I use it I get really frustrated by the difficulty of entering complex instructions. The GUI is more discoverable but I find myself missing gdb ‘s functions and parser.
However, one thing in gdb that’s become steadily worse is the ability to evaluate STL’s operator[] and the like in optimized code, with the debugger frequently whining about inlining. It’s pretty horrible having to decipher the _m_data or whatever of various implementations.
I’m actually not sure if gcc is not compiling the inlines into the object code (I thought it was required by the standard) or if gdb just can’t find them.
I'm also of the opinion that GDB is getting worse in terms of what it shows (or more often won't) these days, especially with regards to >= C++11 - maybe it's just out-of-date python pretty-printers, but on multiple recent linux distro machines, it won't even show the contents of std::string these days without diving inside the structure or using expressions.
Like in the article he mentions not being able to see custom data types, but my .gdbinit has a few pretty printers in it for exactly that purpose.
And when you do get something customized in MSVC like a specific PGO build or something, it tends to be tightly coupled to that project. It’s less easy to cut and paste into another project since the primary interface is really a dozen little text fields modifying XML somewhere.
Basically it was such a pain to debug on the actual game console platform that we made a PC build of the game just so we could have the pleasure of debugging in Visual Studio and avoid having to debug on the console, even though we were not planning to ship on PC. Maintaining a separate PC rendering engine and other platform-specific libraries came at a fair expense, but it was always worth it in terms of being able to solve problems faster. We had other motivations as well; on some game platforms the tools for linking and building an executable "ROM" could be quite slow, so having a PC build would save quite a bit of turnaround time.
I have always had a very high opinion of Visual Studio's debugger because it has a few features that are invaluable that other debuggers lack:
- "Immediate Mode", a.k.a., the ability to run C++ functions while stopped at a breakpoint. We use this a lot for writing functions which log out a bunch of useful information which would otherwise be tricky or time consuming to find via the debugger's usual interface (watch window, or whatever). In particular, we encode all our strings in the game as 32-byte hashes but in debug modes we keep a u32-->string lookup table around, and calling functions in the immediate window is invaluable for getting identifying information from out of those string-hashes.
- Data Breakpoints, i.e. stop when a particular memory address is written to. Critical for debugging "memory stomp" bugs such as array-overflow issues or writes to dangling pointers.
- Custom "visualization". Visual Studio's debugger has an XML language called .natvis which allows you to define a custom way to pretty-print specific types. Basically the equivalent of defining a custom "ToString" in a language like C#. The .natvis language is kind of annoying to use, but without it certain data types would be a real pain to look at in the debugger. (e.g. rather than using pointers everywhere, we have a safer Handle type which is essentially a lookup into a table that has a pointer. These indirections would be a pain to follow manually in the debugger, but .natvis allows us to present the target object in the debugger automatically, so that it is as easy as following a normal pointer)
- Automation. There's a fairly rich automation API ("ENVDTE") which allows you to drive the debugger from outside tools. I'm currently using this with Python to provide a convenient way for non-technical people (e.g. QA testers) to bundle up and send all of the relevant details from a crash to other members of the team. (e.g. callstack with all the locals, and contents of log)
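The hash-to-string lookup described in the first bullet might look roughly like this. The names and the FNV-1a hash are my own illustration, not the poster's actual code:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// FNV-1a: a simple, deterministic 32-bit string hash (illustrative
// choice; any stable hash would do).
inline std::uint32_t hash_str(const std::string& s) {
    std::uint32_t h = 2166136261u;
    for (unsigned char c : s) { h ^= c; h *= 16777619u; }
    return h;
}

struct StringTable {
    std::unordered_map<std::uint32_t, std::string> names;

    // In debug builds, remember the original string for each hash.
    std::uint32_t intern(const std::string& s) {
        std::uint32_t h = hash_str(s);
        names[h] = s;
        return h;
    }
    // The kind of function you'd call from the Immediate window while
    // stopped at a breakpoint to identify a hashed string.
    const char* lookup(std::uint32_t h) const {
        auto it = names.find(h);
        return it != names.end() ? it->second.c_str() : "<unknown>";
    }
};
```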
> 1. Do nothing (...) You can deal with that by imposing rules on what is and isn’t allowed in your codebase, (...)
This is what everybody is already doing in gamedev
> 2. Get involved (...) C++ committee participation is open to everyone. (...)
Most game dev studios are small or medium-sized companies, and don't really have the time to waste in committee meetings...
Irrelevant. What counts is where the C++ game devs are, and it's in the big companies. And participating in the design of a language is not a waste of time...
If you are under pressure to ship something now/soon, and may not exist as a company in the next standards cycle, then it is probably a waste of time for that company.
In fact the most popular CppCon video on YouTube is Mike Acton's “Data-Oriented Design and C++”.
But I'm curious if there are any console development companies that are successfully using Rust or other languages which perhaps can link with C++ libraries?
We use C# in our studio for tools, and are able to link it with our game C++ so that we can run some of the game's subsystems within the tools (e.g. animation engine) but shipping the game with C# code is not an option for several reasons, performance being the most important, but also we need to build our game for the console using CLang/LLVM, and I suspect it's not possible to write C# which interfaces with C++ using LLVM, only with Microsoft's compiler.
There are other game companies doing stuff, but we have less details including platform: EA’s SEED division, Ready At Dawn, and Embark (some ex-SEED devs making a new studio where Rust is the primary language.)
It's now dominated by C++ for a good reason, since it requires tight performance control. So Rust is a valid candidate for fixing C++ issues. C# - not really.
Rust? I don't see it either.
If you are allocating memory in your render loop, you're likely doing it wrong. For allocation outside that critical path, well, choose your poison.
GC will consume more memory, but is likely to be faster when it comes to allocating and deallocating large amounts of small blocks.
Reference counting, if you do it properly (to avoid thread starvation), is costly. Cheap atomic reference counting involves a calculated risk that you may have threads hanging.
Manual memory management, well, we all know the cost of that :)
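A sketch of what "doing it properly" means for an atomic refcount. This mirrors the memory-ordering discipline used by shared_ptr-style control blocks, though the names here are made up:

```cpp
#include <atomic>

// Increments can be relaxed: you already hold a reference, so no other
// thread can drop the count to zero underneath you. The decrement needs
// acquire/release so the thread that destroys the object observes all
// writes made by threads that released their references earlier.
struct RefCounted {
    std::atomic<int> refs{1};

    void retain() { refs.fetch_add(1, std::memory_order_relaxed); }

    // Returns true when the last reference was dropped; the caller is
    // then responsible for destroying the object.
    bool release() {
        return refs.fetch_sub(1, std::memory_order_acq_rel) == 1;
    }
};
```

Even with relaxed increments, every retain/release is a contended read-modify-write on a shared cache line, which is where the cost the comment above mentions comes from.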
That being said, many many games today are developed with Unity, which uses C# as its primary programming language.
CPU performance is barely going anywhere. Developers should instead try to figure out how to do more with less growth.
GPUs are also overpriced, and playing older games and comparing them to new ones doesn't show great payoff. As far as I'm concerned, we've plateaued. Maybe going from a GTX 760 to a 1060 would give me a few more frames, but frankly, more often than not, the games are programmed like utter shit.