Meanwhile, most of the C++ code from Google seems to be written in some mishmash of different ideas, always at some halfway point along a migration between something ancient and something passable... but never anything I would ever dare to call "modern", and thereby tends to be riddled with state machines and manual weak pointers that lead to memory corruption.
So... I really am not sure I buy the entire premise of this article? Honestly, I am extremely glad that Google is finally leaving the ecosystem, as I generally do not enjoy it when Google engineers try to force their ridiculous use cases down peoples' throats, as they seem to believe they simply know better than everyone else how to develop software.
Like... I honestly feel bad for the Rust people, as I do not think the increasing attention they are going to get from Google is going to be at all positive for that ecosystem, any more than I think the massive pressure Google has exerted on the web has been positive or any more than the pressure Google even exerted on Python was positive (not that Python caved to much of it, but the pressure was on and the fact that Python refused to play ball with Google was in no small part what caused Go to exist at all).
(FWIW, I do miss Microsoft's being in the space, but they honestly left years ago -- Herb's existence until recently being kind of a token consideration -- as they have been trying to figure out a tactical exit from C++ ever since Visual J++ and, arguably, Visual Basic, having largely managed to pivot to C# and TypeScript for SDKs long ago. That said... Sun kicking Microsoft out of Java might have been really smart, despite the ramifications?)
I spilled my coffee; I was just talking the other day with some coworkers about how I don't trust Google open source. Sure, they open their code, but they don't give a damn about contributions or making it easy for you to use the projects. I feel a lot of this sentiment extends to GCP as well.
So many Google projects are better than your average community one, but they never gain traction outside of Google because it is just too damn hard to use them outside of Google infra.
The only Google project I know of that evades this rule is Go.
Here is a concrete reason why Google open source sucks when it comes to contributions and I don't think it can be improved unless Google changes things drastically: (1) an external contributor makes a nice change and a PR on GitHub; (2) the change breaks internal use cases and their tests; (3) the team is unwilling to fix the PR or port the internal test (which may be a test several layers down the dependency tree) to open source.
> making it easy for you to use the projects
Google internally uses Blaze, the internal counterpart of Bazel. It's so ridiculously easy for one team to use another team's project that even just thinking about what the rest of us need to do to use another project is unloved, dreadful work. So people don't make that effort.
I do not see either of these two points changing. Sure, there are individuals at Google who really care about the open source community, but most don't, and so their project is forever a cathedral, not a bazaar.
Hence the failure of Longhorn, or of any attempt coming out of Microsoft Research.
Ironically, given your Sun remark, Microsoft is back in the Java game with its own distribution of OpenJDK, and Java is usually the only ecosystem that has day-one parity with the .NET SDK for anything Azure puts out.
The kind of mass refactorings / cleanups / static analysis talked about in this article are done on a much larger and more serious scale on C++ inside the Google3 monorepo than they are in Chromium. Different build systems, different code review tools, different development culture.
I don’t know if they’ve really abandoned C++ entirely—the compiler team certainly hasn’t, that’s true. But the above doesn’t feel like first-class support.
[1] https://github.com/microsoft/cppwinrt/issues/1289#issuecomme...
[2] https://learn.microsoft.com/en-us/dotnet/core/deploying/sing...
https://developercommunity.visualstudio.com/t/Implement-C23-...
I suppose usually one would like to have everything from a language standard.
The C++20 winning run seems to have been one of a kind, and whatever made it possible is now gone.
Speaking of gone, Herb Sutter has left Microsoft, which most certainly has something to do with whatever is going on with MSVC, C# improvements for low-level coding, and Rust adoption.
Well, you may be celebrating a bit prematurely then. Google still has a ton of C++ and they haven't stopped writing it. It's going to take ~forever until Google has left the C++ ecosystem. What did happen was that Google majorly scaled down their efforts in the committee.
When it comes to the current schism on how to improve the safety of C++ there are largely two factions:
* The Bjarne/Herb [1] side that focuses on minimal changes to the code. The idea here is to add different profiles to the language and then [draw the rest of the fucking owl]. The big issue here is that it's entirely unclear how they will achieve temporal and spatial memory safety.
* The other side is represented by Sean Baxter and his work on Safe C++ [2]. This is basically a wholesale adoption of Rust's semantics. The big issue here is that it's effectively introducing a new language that isn't C++.
Google decided to pursue Carbon and isn't a major player in either of the above efforts. Last time I checked, that language is not not meant to be memory safe.
[1] https://github.com/BjarneStroustrup/profiles [2] https://safecpp.org/draft.html
Carbon is intended to be memory safe! (Not sure whether you intended to write a double negative there.) There are a few reasons that might not be clear:
* Carbon has relatively few people working on it. We are prioritizing work on the compiler at the moment, and don't yet have the bandwidth to also work on the safety design.
* As part of our migration-from-C++ story, where we expect code to transition C++ -> unsafe Carbon -> safe Carbon, we plan on supporting unsafe Carbon code with reasonable ergonomics.
* Carbon's original focus was on evolvability, and didn't focus on safety specifically. Since then it has become clear that memory safety is a requirement for Carbon's success, and will be our first test of those evolvability goals. Talks like https://www.youtube.com/watch?v=1ZTJ9omXOQ0 better reflect more recent plans around this topic.
Carbon is an experiment; they aren't sure how it is going to work out in the first place.
> "If you can use Rust, ignore Carbon"
https://github.com/carbon-language/carbon-lang/blob/e09bf82d...
> "We want to better understand whether we can build a language that meets our successor language criteria, and whether the resulting language can gather a critical mass of interest within the larger C++ industry and communit"
https://github.com/carbon-language/carbon-lang/blob/e09bf82d...
He at least claims that Carbon will have memory safety features such as borrow checking down the line. I guess we'll see.
Herb is developing a whole second syntax; I wouldn't call that minimal changes. And it's probably the only way to evolve the language at this point, because, like you said, Sean is introducing a different language entirely, so it's not C++ at that point.
I really like some of Herb's ideas, but it seems less and less likely they'll ever be added to C++.
Google can embrace modern processes, but the language itself had better be compilable on whatever ancient version of gcc works on the one mission-critical architecture they can't upgrade yet...
I'm impressed that you even get as far as finding out whether that much C++ from disparate sources works on a newer version of C++. The myriad, often highly customized and correspondingly poorly documented build systems invented for each project, the maze of dependencies, the weird and conflicting source tree layouts and preprocessor tricks that many projects use... it's usually a pain in the neck to get a new library to even attempt to build, let alone integrate it successfully.
Don't get me wrong, we use C++ and ship a product using it, and I occasionally have to integrate new libraries, but it's very much not something I look forward to.
> riddled with state machines
Why is this bad? Normally, state machines are easy to reason about.

Imagine you have an informally-specified, undocumented, at-least-somewhat-incomplete state machine. Imagine that it interacts with several other similar state machines. Still easy to reason about?
Now add multithreading. Still easy?
Now add locking. Still easy?
Cleanly-done state machines can be the cleanest way to describe a problem, and the simplest way to implement it. But badly-done state machines can be a total mess.
Alas, I think that the last time I waded in such waters, what I left behind was pretty much on the "mess" side of the scale. It worked, it worked mostly solidly, and it did so for more than a decade. But it was still rather messy.
What's wrong with state machines? Beats the tangled mess of nested ifs and fors.
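FWIW, here's a minimal sketch of the "cleanly-done" case being described: one enum, one transition function, every (state, event) pair visible in one place. The names are invented for the illustration.

    enum class ConnState { Disconnected, Connecting, Connected };
    enum class Event { Dial, Established, Dropped };

    // The whole machine lives in one function, which is what makes the
    // clean version easy to reason about.
    ConnState step(ConnState s, Event e) {
        switch (s) {
            case ConnState::Disconnected:
                return e == Event::Dial ? ConnState::Connecting : s;
            case ConnState::Connecting:
                if (e == Event::Established) return ConnState::Connected;
                if (e == Event::Dropped)     return ConnState::Disconnected;
                return s;
            case ConnState::Connected:
                return e == Event::Dropped ? ConnState::Disconnected : s;
        }
        return s; // unreachable with a well-formed enum value
    }

The mess described upthread starts when that table is implied rather than written down: transitions smeared across callbacks, then multithreading and locking layered on top.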
We are just now feeling this. Some original contributors left the field, and lately the language has gone in directions I don't agree with.
Interestingly, Google has fired the Python team this year. The revolution eats its own?
Anyway, Rust should take note and be extremely careful.
Some years ago Google decided that Go projects were similar engineering effort, better performance, lower maintenance, and so on; on that basis there was no reason to authorise new Python software, and their existing projects would migrate as-and-when.
You're already using a language with a strong type system, so it's confusing to me why you would choose to draw the line here.
Yes, because then I don't have to spend hours writing esoteric spaghetti code to prove something to the compiler that is trivially known to be true. Your error is assuming static lifetime checking is free. As an engineer, I use judgement to make context-dependent trade-offs.
If you like playing the compiler olympics, or your employer forces you to, please use Rust.
The problem is that the rules enforced by Rust are not restricted to lifetime rules; they reject a much, much larger superset that includes quite a lot of safe, legitimate, and valid code.
I'll have you know I made a variable void* just yesterday, to make my compiler shut up about the incorrect type :D
Edit: except when interfacing with C APIs.
Win some, lose some though, as the overall development workflow is light-years ahead of C++, mostly due to tooling.
Legislatures seem a lot more able to allocate large pots of money for major discrete projects than to guarantee an ongoing stream of revenue to a continuing project.
Of course it remains to be seen how this all plays out. Static lifetime checking can be done well or badly. Profiles can be done well or badly. Even if whatever we come up with is done well, that doesn't mean people will use it well (I know Rust programmers who just put unsafe everywhere).
It seems very fair to tell them to just use Rust and leave C++ alone.
So why not be honest and just use C++03, or 11, or whatever it is that works for you, and let the rest of the ecosystem actually evolve and keep the language we invested so much effort into as a viable alternative? There's zero benefit, except to MS, who want to sell this year's Visual Studio to all the companies with 80's-era C++...
I don't demand that C++ evolution be halted. I support the current trajectory of not adding viral annotations for the sake of implementing static lifetime checking. I want C++ to evolve into a better version of itself, I don't want it to become something it's not. If you want static lifetime checking, please use Rust. It already exists and it's great for people who need static lifetime checking.
It’s clear it’s imperfect. But it's not clear there is an obvious path to a nearby local maximum.
Design choices have tradeoffs.
And even if that were true, who would take advantage of that “better” language in a purely abstract sense? New language standards primarily exist to benefit existing C++ code bases, and the cohort of engineers who work on them. You have to consider that social reality.
But this isn't one thing. Part of the problem is contradictions even in the definition of this trajectory. An important example: Will it be "you don't pay for what you don't use", or will it be "stable ABI"? There are adopted papers which say both - as the linked article indicates.
> please just consider another language, e.g. Rust
I'm not one of those people necessarily, but - Rust has its own set of design goals and problems. It is legitimate to want C++ to go one way rather than another.
> To Rust advocates, you can have the US government and big tech.
Now you are kind of contradicting yourself, because wide applicability is definitely a goal for C++.
"I don't want to install air bags and these shiny safety gadgets into my cars. We have been shipping cars without them for years and it works for us and our customers."
The problem is that it doesn't actually work as well as you think, and you are putting people at risk without realizing it.
(Yes, I know about airbag vests. Let's analogize those with external static checkers.)
As usual, the answer is quite simple: "please use rust". We promise to never mention when we break out nasm.
Driver anecdote: I have antilock brakes on my Tundra, but they are annoyingly counterproductive in 4WD descending 6" or larger sandy rocky steps. Do antilock brakes work overall best for the less capable mode? Of course! Do they work best for me? No.
The profiles proposal suggests adding static lifetime checking, "without viral annotations." I use quotations because I don't really agree with this framing, but whatever. The paper is here if you'd like to read it yourself: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p30...
The core idea here is that you add annotations to opt in or out of certain checks. And opting in may be a compiler flag, requiring no changes to source code. So that would be "backwards compatibility" in that sense. Of course, code may fail these checks, so you'll have to add annotations to opt out, or re-write the code. We will see in practice how much change is required once implementations exist and are tried out.
But the other part is, these profiles do not attempt to cover all valid cases. And what I mean by that is, there are some lifetime issues that this proposal does not attempt to analyze. And, where the analysis is similar, they offer a subset of what other proposals do. These decisions were made because the authors believe that they'll reduce a significant number of issues, and are easier to adopt. And that's worth it instead of going for more checks.
The competing proposal, Safe C++, has you opt into safety checks on a file-per-file basis. So in that sense, it is also backwards compatible: all existing code compiles as-is. When you opt in to those checks, it adds new syntax, similar to Rust, to do the safety analysis. So you gain this benefit only for new code, but you also get much more power. This syntax is necessary to communicate programmer intent to the checks, but it is the "viral annotations" that the proponents of profiles don't like.
So, basically, that's the thing: both are backwards compatible, but offer very different tradeoffs in the design space.
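To make the tradeoff concrete, here's the classic sort of local lifetime bug (my own illustrative example, not one taken from either paper) that an annotation-free analysis can plausibly catch, since everything is visible in one translation unit:

    #include <string>
    #include <string_view>

    // Returns a view into the argument; fine only if the argument outlives it.
    std::string_view first_word(const std::string& s) {
        return std::string_view(s).substr(0, s.find(' '));
    }

    int main() {
        // The temporary std::string dies at the end of the full expression,
        // so `w` dangles immediately.
        std::string_view w = first_word(std::string("hello world"));
        (void)w; // any use of w here is undefined behaviour
    }

Where the proposals genuinely diverge is in the cases that need programmer intent to cross function or library boundaries, which is exactly what the Safe C++ annotations exist to express.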
Rust shines in the other 95% of code. I spend some time every morning cleaning up the sorts of issues Rust prevents that my coworkers have managed to commit despite tooling safeguards. I try for 3 a day, the list is growing, and I don't have to dig deep to find them. My coworkers aren't stupid people, they're intelligent people making simple mistakes because they aren't computers. It won't matter how often I tell them "you made X mistake on Y line, which violates Z rule" because the issue is not their knowledge, it's the inherent inability of humans to follow onerous technical rules without mistakes.
If having strict safety means I can't express my mental models in code, I don't want it. It will slow me down. It will make it harder to write software that's useful.
Remember people, we are here to make things that are useful to people. If safety gets in the way of that, then it's not worth it.
If such a thing came to C++, there would obviously be limitations around module boundaries, when different modules used a different Edition. But perhaps this could be a way forward that could allow both camps to have their cake and eat it too.
Imagine a world where the main difference between Python 2 and 3 was the frontend syntax parser, and each module could specify which syntax ("Edition") it used...
C++ is a mess in that it has too much historic baggage while trying to adapt to a fiercely changing landscape. Like the article says, it has to make drastic changes to keep up, but such changes will probably kill 80% of its target audiences. I think putting C++ in maintenance mode and keep it as a "legacy" language is the way to go. It is time to either switch to Rust, or pick one of its successor languages and put effort into it.
Rust has the concept of _crate_, which is very close to the concept of compilation unit in C++. You build a crate by invoking `rustc` with a particular set of arguments, just as you build a compilation unit by invoking `g++` or `clang++` with a particular set of arguments.
One of these arguments defines the edition, for Rust, just like it could for C++.
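Concretely (both flags are real; the file names are placeholders):

    rustc --edition=2018 lib.rs   # edition chosen per crate invocation
    g++ -std=c++17 -c unit.cpp    # standard chosen per translation unit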
I agree but also understand this is absolutely wishful thinking. There is so much inertia and natural resistance to change that C++ will be around for the next century barring nuclear armageddon.
That is not possible. Take the following function in C++: std::vector<something> doSomething(std::string); Simple enough, memory safe (at least the interface; who knows what happens inside), performant. But how do you call that function from anything else? If you want to use anything else with C++, it needs to speak C++, and that means vector and string need to interoperate.
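To illustrate, here's roughly what it takes to expose that one function over a plain C ABI so "anything else" can call it. A hedged sketch with invented names, not a recommendation:

    #include <algorithm>
    #include <cstddef>
    #include <cstdlib>
    #include <string>
    #include <vector>

    // Stand-in definition for the C++ interface under discussion.
    std::vector<int> doSomething(std::string s) {
        return std::vector<int>(s.size(), 42);
    }

    // The flattened shim: every std type re-expressed in C terms.
    extern "C" int* doSomething_c(const char* s, std::size_t len,
                                  std::size_t* out_len) {
        std::vector<int> v = doSomething(std::string(s, len));
        *out_len = v.size();
        int* out = static_cast<int*>(std::malloc(v.size() * sizeof(int)));
        std::copy(v.begin(), v.end(), out);
        return out; // caller must free(); ownership now lives in a comment
    }

Multiply that shim by every function in a real SDK and the interop tax becomes obvious.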
Nitpick: editions are specified per crate, not per module.
---
Also note that editions allow mostly syntactic changes (adding or removing syntax, or changing the meaning of existing syntax); however, they are greatly limited in what can be changed in the standard library, because ultimately that is a crate dependency shared by all other crates.
Is that an ABI thing? I thought all versions up to and including C++23 were ABI compatible.
Now there is the possibility that someone could come up with a new breaking syntax and want a C++26 marker. However nobody really wants that. In part because C++98 code rebuilt as C++11 often saw a significant runtime improvement. Even today C code built as C++23 probably runs faster than when compiled as C (the exceptions are rare - generally either the code doesn't compile as C++, or it compiles but runs wrong)
The other faction, which has lost faith in WG21 and wants a newer, safer, more nimble language with powerful tooling, is already heading for the exits.
Herb has even directly said that adding lifetime annotations to C++ would create "an off-ramp from C++"[1] to the other languages — and he's right, painful C++ interop is the primary thing slowing down adoption of Rust for new code in mixed codebases.
[1]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p34...
"newer" is hopefully a non-goal.
Unfortunately, an option that is both safer and nimble doesn't appear to exist. I'm still hopeful, but at the moment it looks like rust is our future. A fate only marginally better than C++.
That does not bode well for Microsoft. At least from the outside perspective it looks like he was the adult in the room, the driving force behind standards adoption and even trying to steer C++-the-language towards a better vision of the future.
If he is gone, MSVC will again be the unloved bastard child it has long been before Herb's efforts started to pay off. This is very disheartening news.
I'm happy he held out for this long even though he was being stonewalled every step of the way, like when Microsoft proposed std::span and it was adopted but minus the range checking (which was the whole point of std::span).
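For context on why that stung: with the range checking stripped out, the adopted std::span is only as safe as the index you hand it (the .at() mentioned downthread only arrives in C++26):

    #include <array>
    #include <span>

    int main() {
        std::array<int, 3> a{1, 2, 3};
        std::span<int> s{a};
        int x = s[10];      // no range check in the adopted design: silent UB
        // int y = s.at(10); // checked access that throws; C++26 only
        return x;
    }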
Now he has been pushing for a C++ preprocessor. Consider how desperate you have to be to even consider that as a potential solution for naysayers blocking your every move.
MSVC will continue to be used for many years, and especially the backend might see renewed effort. But I don't know about the C++ frontend specifically, I've seen complaints about more and more bugs on the cpp subreddit. It's possible MS will be investing a little less in C++.
Meanwhile, on the Windows side, it was made official at Ignite that a similar decision is now being followed for Windows as well.
Here's the official stuff, so whatever happens to MSVC is secondary:
https://azure.microsoft.com/fr-fr/blog/microsoft-azure-secur...
https://blogs.windows.com/windowsexperience/2024/11/19/windo...
He has been showing it, but not pushing it. The difference is subtle but important. He shows a lot of "what ifs", tries them, and pushes the useful ones back into the language. Reflection is on track for C++26 in large part because he inspired a lot of people with his metaclasses talk (a long time ago, but doing things right takes time).
.at() is added in C++26.
I'm sure it's never as clean a situation as anyone would like, but hey, world is a rough place sometimes.
Javascript/Node/Typescript has even more identifiable factions.
I think developing factions around these things is unfortunately normal as languages grow up and get used in different ways. Rust has arguably tried to stay away from this, but the flip side is a higher learning curve because it just doesn't let certain factions exist. Go is probably the best attempt to prevent factions and gain wide adoption, but even then the generics crowd forced the language to adopt them.
C++ does a lot, but has a big disengaged crowd, for many reasons, and that crowd will suffer from the push forward. Python and Node are similar.
Source: I am in both factions, as are my colleagues :)
Instead we have greybeards and lone warriors, and million-line legacy codebases, half of which have their own idea on what a string or a thread is.
I’m not sure where you got the perception that it’s dead.
C++ remains the only game in town in many domains.
That said, _unless you work in those domains_ there is no good reason to use C++ IMHO.
Apart from the legacy codebases, there’s lots of C++ greenfield development.
” the ability to include C++ code and libraries from others ”
Libraries in vcpkg - a large number of them - are compatible enough to be used in this sense. It’s possible your specific domain is lacking contributions, or you’ve been looking in the wrong places?
Yeah but not because people like the language but because there is no alternative.
At work almost every one of us devs doesn't enjoy working with C++, but since many dependencies in the embedded space are written in C++, you don't have much of a say in which language you choose. For example, Qt supports basically only C++.
Rust is the only mature contender in that space currently.
Basically, it is no different than renaming .js to .ts to take advantage of some stuff in Visual Studio Code, while keep writing plain old JavaScript.
Given that the general industry approach to technical debt is "yes, more please", it is unsurprising to me that any sufficiently old C++ project still has lots and lots of plain C inside it.
I wish I could say modules don't work, but I have yet to understand them, which is probably a big part of their problem.
Yes, languages that are beginner friendly are ... friendlier. Yes, languages that stick to one or a small number of programming paradigms are friendlier. But if you want the "flexible efficiency and raw power of C" and "something higher level than C", C++ is your baby.
Maybe it would be better if we all used Java, Rust, and Go, but C++ sings its siren von Neumann song to the wizards, and there will always be wizard musicologists who steer their projects toward those rocks and, when they have just enough wax in their ears, they sail right past the rocks and come out the other side of the straits leading the rest of the fleet.
You can choose to follow them or not, for there's no shame in coming in 4th.
Feels like the committee was smoking weed that day in la-la land. You can ignore all the safety stuff from Sean Baxter, but saying no to performance on the altar of permanent, unspecified ABI backward compatibility - when such was never mentioned as a design goal of C++ - means it's "Goodbye C++" for a long, long list of orgs and "wizards". The ABI was NEVER specified formally by the C++ standard, so why bother sacrificing the world for its immortal existence?
C++ is NO longer the choice of language for greenfield native projects and the committee takes the full blame.
C++ was always by far the most inefficient language to work with for me, because there's just so much chore and nonsense that you have to get through to get anything done, and almost none of it has any reasonable purpose; there's no efficiency tradeoff. I'm pretty sure that the insane build situation, UB in uninitialized variables, and unspecified argument evaluation order never really benefited anybody. They are just bad decisions in the language, and that's all.
beautiful, in equal parts true, sad, and endearing.
but also remember the Vasa.
However, I think the author is a little off on the root cause. They emphasize tooling: the ability to build reliably and cleanly from source. That's a piece of it, but a relatively small piece.
I think the real distinguishing factor between the two camps is automated testing. The author mentions testing a couple of times, but I want to emphasize how critical that is.
If you don't have a comprehensive set of test suites that you are willing to rely on when making code changes, then your source code is a black box. It doesn't matter if you have the world's greatest automated refactoring tools that output the most beautiful looking code changes. If you don't have automated tests to validate that the change doesn't break an app and cost the company money, you won't be able to land it.
Working on a "legacy C++ app" (like, for example, Madden NFL back when I was at EA) was like working on a giant black box. You could fairly confidently add new features and new code onto the side. But if you wanted to touch existing code, you needed a very compelling reason to do so in order to outweigh the risk of breaking something unexpectedly. Without automated tests, there was simply no reliable way to determine if a change caused a regression.
And, because C++ is C++, even entirely harmless seeming code changes can cause regressions. Once you've got things like reinterpret_cast<>, damn near any change can break damn near anything else.
So people working in these codebases behave sort of like surgeons with a "do no harm" philosophy. They touch as little as possible, as non-invasively as possible. Otherwise, the risk of harming the patient is too high.
It's a miserable way to program long-term. But it's really hard to get out of that mess once you're in it. It takes a monumental amount of political capital from engineering leadership to build a strong testing culture, re-architect a codebase to be testable, and write all the tests.
A lot of C++ committee changes aimed at legacy C++ developers are about "how can we help these people that are already in a mess survive?" That's a very different problem than asking, "Given a healthy, tested codebase, how can we make developers working in it go faster?"
Having also worked at a few gamedev studios, IME there isn't a real distinction between the two since it is always a matter of time for the former to become the latter.
Sometimes it doesn't even take that long, all it takes is a single innocuous vertical slice with a pointlessly immovable deadline to inject enough harm in a codebase so you spend the next year fighting bugs that shouldn't have existed in the first place while also having to do everything else at the same time (and all planned timeframes made with only the "everything else" in mind, of course).
IMO even if it doesn't sound good, it is much more practical to learn how to deal with the mud than assume pigs do not exist :-P
That was very much my experience at EA, but has definitely not been my experience at Google. While everyone struggles with tech debt, at Google I've worked in many codebases that have been continuously well-maintained with good test coverage for over a decade.
Really, once you build a culture that says, "People not on your team may edit your code without asking and will rely on your tests to make sure they don't break things," teams get highly incentivized to write tests.
I think that tests are a sure-fire way to improve the quality of your code, but I'd throw another piece into the ring: sanitizers [0]. Projects that have good tests and run them regularly with TSan/ASan/UBSan are, in my experience, much better to work on, because it's much less likely there are deep-seated issues lurking. It gives you increased confidence that you're not introducing hard-to-detect issues as you go.
These tools aren't just exclusive to C++. I've said the same thing about C, Go, Zig, Odin, etc. Projects that use them (and have good automated tests) tend to be in good shape, and projects that don't tend to take a long time to make any progress on.
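A tiny illustration of the workflow, using only the standard flags, nothing project-specific:

    // use_after_free.cpp - build and run with:
    //   clang++ -fsanitize=address -g use_after_free.cpp && ./a.out
    int main() {
        int* p = new int(42);
        delete p;
        return *p; // heap-use-after-free: ASan aborts and names this line
    }

Without the sanitizer this might "work" silently for years; with it, the first test run that touches the path fails loudly.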
“We must minimize the need to change existing code. For adoption in existing code, decades of experience has consistently shown that most customers with large code bases cannot and will not change even 1% of their lines of code in order to satisfy strictness rules, not even for safety reasons unless regulatory requirements compel them to do so.” – Herb Sutter
> with large code bases cannot and will not change even 1% of their lines of code in order to satisfy strictness rules
Do people really say this? Voice this in committee? I have been in a few companies, one fairly large, and all are happy to upgrade to newer standards, looking forward to it even, and already spend a lot of time updating their build systems. Changing 1% of code on top of that is probably not really that much in comparison.

Quite a few companies have millions and millions of lines of code. Changing 1% of it would mean changing more than 10K lines of code, perhaps even more than 100K. And these are much bigger code bases, where changing anything has a risk of breaking something — not just because you might make a mistake, but because your program is full of Undefined Behaviour, and changing anything might manifest latent bugs.
Given that, I'm not surprised people say that Sutter quote with a straight face.
But this is a multi-billion-dollar industry. If you're working on scripting a little browser "app" for a phone, things may be different.
Except, allegedly, at Google. But is there any evidence they actually do this, e.g. in public code bases? Or is it just hype?
This is one of the reasons why they are bad at open sourcing: their internal code almost never matches what is released.
“Our code is an asset” ⇒ code kept up-to-date
“Our code is a burden, but we need it” ⇒ change averse
Changing 1% across all modules is a nightmare. Changing one module which is 1% of the code is nothing.
The first was removing `long` from the code, since a lot of code assumed its size (is it like `int` or like `long long int`?) and, as machines were upgraded, that caused problems (sketched below).
The second was moving to C++11/14/17. Most of the difficulty was toolchains on unixen that did not support the new versions of the language, or for which support was incomplete, or for which upgrading to a version with support broke existing builds.
The third was moving to Linux from big iron unixen. As far as I understand, this initiative is still underway. It was already underway in 2011 when I joined the company.
This is a rich company with a large, healthy engineering department. I imagine that most other companies would not or could not bother.
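The `long` problem from that first migration, sketched (the data-model facts are standard; the code itself is just illustrative):

    #include <cstdint>

    // sizeof(long) depends on the platform's data model:
    //   LP64  (64-bit Linux/Unix): long is 8 bytes
    //   LLP64 (64-bit Windows):    long is 4 bytes, same as int
    // Code that baked in one answer broke as machines were upgraded.
    long counter = 0;            // width varies across platforms
    std::int64_t counter64 = 0;  // fixed-width type says what it means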
Being really good at C++ almost demands that you surrender entire lobes of your brain to mastering the language. It is too demanding and too dehumanizing. Developers need a language and a complete tool chain that is designed as a cohesive whole, with as little implicit behavior, special cases and clever tricks as possible. Simple and straight-forward. Performance tweaks, memory optimizations and anything else that is not straightforward should be done exclusively by the compiler. I.E. we should be leveraging computers to do what they do best, freeing our attention so we can focus on the next nifty feature we're adding.
Zig is trying to do much of this, and it is a huge undertaking. I think an even bigger undertaking than what Zig is attempting is needed. The new "language" would also include a sophisticated IDE/compiler/static-analyzer/AI-advisor/Unit-Test-Generator that could detect and block the vast majority of memory safety errors, data races and other difficult bugs, and reveal such issues as the code is being written. The tool chain would be sophisticated enough to handle the cognitive load rather than force the developer to bear that burden.
Oh I see, this is a fantasy.
It's all about the magnitude of the liability, not the direction.
You do not need to re-solve this problem, and when a similar problem occurs, you can adapt the existing solution to the new problem.
Another way to think about it: if code was not an asset, we would delete it immediately after compilation.
How do I know this? I migrated a codebase of about 20m lines of C++ at a major investment bank from pre-ANSI compilers to ANSI conformance across 3 platforms (Linux, Solaris and Windows). Not all the code ran on all 3 platforms (I'm looking at you, Solaris) but the vast majority did. Some of it was 20 years old before I touched it - we're talking pre-STL, not even just pre-ANSI. The team was me + one other dude for Linux and Solaris, and me + one other different dude for Windows, and to give you an idea, the target for gcc went from gcc 2.7[1] to gcc 4[2], so a pretty massive change. The build tooling was all CMake + a bunch of special custom shell we had developed to set env vars etc., and a CI/CD pipeline that was all custom (and years ahead of its time). Version control was CVS. So, single central code repo, and if there was a version conflict an expert (of which I was one, though it gives me cold sweats) had to go in and edit the RCS files by hand, and if they screwed up, all version control for everyone was totally hosed until someone restored from backup and redid the fix successfully.
While we were doing the port, to make things harder, there was a community of 667 developers[3] actively developing features on this codebase, and it had to get pushed out come hell or high water every 2 weeks. Also, this being the securities division of a major investment bank, if anything screwed up, real money would be lost.
It was a lot of work, but it got done. I did all my work using vim and quickfix lists (not any fancy pants tooling) including on windows but my windows colleague used visual C++ for his work.[4]
[1] Released in 1995
[2] Released in 2005
[3] yes. The CTO once memorably described it to me as "The number of the beast plus Kirat". Referring to one particularly prolific developer who is somewhat of a legend on Wall Street.
[4] This was in the era of "debugging the error novel" so you're talking 70 pages of ascii sometimes for a single error message with a template backtrace, and of course when you're porting you're getting tens of thousands of these errors. I actually wrote FAQs (for myself as much as anything) about when you were supposed to change "class" to "typename", when you needed "typedef typename" and when you just needed "typedef" etc. So glad I don't do that any more.
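For flavor, the kind of rule those FAQs had to spell out; this is standard C++, nothing bank-specific:

    // `typename` is required when naming a dependent type; a plain
    // `typedef` suffices when the type doesn't depend on a parameter.
    template <typename T>  // `class T` would be equivalent here; pure style
    struct Wrapper {
        typedef typename T::value_type value_type; // dependent: needs typename
        typedef int counter_type;                  // not dependent: plain typedef
    };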
But since you say version control was CVS, then I guess it was Goldman. They still have that sheizen for SecDB/Slang today.
And I assume that "Kirat" is Kirat Singh of Goldman SecDB/JPM Athena/BofA Quartz/Beacon?
Very impressive indeed.
This links to:
https://thephd.dev/finally-embed-in-c23
It was a fascinating story, particularly about how people finally came to terms with accepting that a seemingly ugly way of doing things really is the best way (you just can't "parse better").
The feature itself is interesting too.
#embed has to pretend - in principle - that we're going to conjure all these byte values into existence, as actual numbers, and then by the "as if" rule the compiler is not really going to do that because it would be crazy slow. The reality that we're just going to shove the data into the program as if it was an array is an (obviously, implemented everywhere) optimisation, rather than part of the language specification.
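Concretely, the C23 form under discussion (real syntax; the file name is a placeholder):

    // Expands "as if" to a comma-separated list of integer literals,
    // which is exactly the fiction described above.
    static const unsigned char firmware[] = {
        #embed "firmware.bin"
    };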
The analogous Rust `include_bytes!` just gets you a &'static [u8; N] -- an immutable reference to an array of N bytes which lives forever.
At first I thought OK, well, maybe the C approach lets you do some clever compile-time stuff that Rust can't do. Nope. If I have a compile-time function checksum, which calculates a 64-bit checksum of the slice passed by immutable reference, and a file of 128MB of data called firmware.bin, Rust is completely fine with let sum = checksum(include_bytes!("firmware.bin")); and that results in a 64-bit value, the 128MB file having evaporated after being checksummed.
This invokes the imagery of a 1950s Apollo era scientist saying something serious. But I promise you there is no visionary low level language authority in the background. It’s just a staffer being influenced by the circle of blogs prominent on programming Reddit and twitter.
> no overhead principle
It’s actually nice to hear they are asserting a more conservative outlook and have some guiding design principle.
Bjarne is more of a super-bureaucrat than a designer. In the early days he pulled C++ into whatever language movements were popular. For a while it looked like Rust was having that influence.
But the outcome has been a refinement of C++ library safety features which are moderate and easy to adopt.
In hindsight I would've probably written a few things differently, but I really didn't want to fall into a trap of getting stuck editing.
Oh, and you can't avoid it when, say, you are working on a trading bot and that's the only "supported" way to connect to an exchange.
In the end people usually just reverse engineer and reimplement to get rid of such a cursed blob. Fortunately, this works: the vendor can't effectively push all clients to update their SDK either, so all wire protocols are infinitely backward compatible.
Several months later, I learned I had experienced slight brain damage due to hypoxia and I’ve been slowly recovering ever since. The worst part of all of this is that I said in that post that I was enjoying golang. In other words, I had brain damage and suddenly found writing Go to be fun. Take from that what you will
OMG. ;) It's an interesting rant nonetheless. One example of this is [...] the new proposed (but not yet approved) Boost website. This is located at boost.io and I’m not going to turn that into a clickable link, and that’s because this proposed website brings with it a new logo. [...] This logo features a Nazi dog whistle. The Nazi SS lightning bolts.
The thing about dog whistles like this is that you can feign ignorance or act like someone is seeing something that isn’t there, but for something egregious it’s very hard to defend it in this case.
Of course, there’s other political dog whistles out there in the tech world right now. Justine Tunney named her C library, cosmopolitan5, which I personally believe is named after the term Rootless Cosmopolitan. This is a pejorative Soviet epithet which was used primarily during Joseph Stalin’s antisemitic campaign in the late 40s and early 50s. This is obviously much harder to prove6 as Justine has done a very good job of deleting some very eyebrow raising tweets over the years, even having them scrubbed from The Internet Archive’s Wayback Machine [...]
Justine, unfortunately, doesn’t appear to have made any amends either, at least publicly, or even acknowledged her past behavior, though she is more than happy to reference her time in the Occupy Wall Street movement. These days however, she’s busy working on llamafiles for Mozilla. For those of you not in the know, a llamafile is basically for turning an LLM’s weights into an executable.
And then he makes (yet another!) detour to AI and C++, which I am going to follow. It's a massive post though. Right now I am an hour in, probably about 75% done, and I am skipping most of the linked articles. Except for the Ender's Game parts. I highly recommend those.
To save people the trouble: it seemed like a manic rant intended to pick several bones (at least the author is self-aware enough to admit as much). It's heavy on the "trust me, I have sources" and light on actual content. It's got plenty of drama and insinuation, from calling people liars and narcissists to, finally, Nazis. It veers from committee drama, to Trump, to feminism, to AI... very hard to follow.
Worthy of a daytime soap opera but other than that there's nothing notable there. Except it does make me want to avoid all these people, on both sides of whatever drama this is.
EDIT: found one on wayback: https://web.archive.org/web/20241124225457/https://herecomes...
Were namespace versions determined not to solve this problem? That would be the most ironic thing of all, if the change-management mechanism introduced in C++11 to handle exactly this std::string situation were either unused, untrusted, or unworkable for the purpose for which it was intended.
If your library has a public function that takes a string argument, or returns a string or a struct containing a string, then that is implicitly an ABI dependency on either std::v1 or std::v2. The library would have to actively add versions for both ABIs. And if the stdlib type is used inside a struct/class, the ABI can't even differentiate between those types.
It would be up to the library to decide what to do. It could decide to only expose v1 or just v2.
> And if the stdlib type is used in a struct/class it can't even differentiate between those types in the ABI.
This could be solvable through annotations in the header (explicitly written, or implicit through the use of flags) that indicate the version of the std types wherever they are referenced implicitly (you could even have warnings and errors to that effect when you encounter one).
It seems like a solvable problem to me.
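For reference, the mechanism isn't hypothetical; it's essentially what libstdc++ did when the C++11 string layout changed. A compressed sketch, not the real declarations:

    namespace std {
      inline namespace __cxx11 {  // real name in libstdc++'s dual ABI
        template <class CharT /* , traits, allocator */>
        class basic_string { /* new, non-copy-on-write layout */ };
      }
    }
    // A function like `void f(std::string)` now mangles with __cxx11 in it,
    // so old-ABI and new-ABI code fail to link instead of corrupting memory.

The catch is the one raised above: a user-defined struct containing a std::string doesn't get re-mangled, so mixing ABIs through such a struct still links, and then breaks at runtime.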
On the other hand, having a bit more transparency into the workgroups and their way of doing things may allow the process become a bit more efficient, approachable, and maybe would allow shedding some of the problems which have accumulated due to being so isolated from the world.
Some of the alleged events really leave a bad taste in the mouth, and really cast a shadow of doubt over the future of C++.
Lastly, alienating people by shredding their work and bullying them emotionally is not the best way to build a next generation of caretakers for one of the biggest languages in the world. It might not fall overnight, but it'll certainly rot from its core if not tended properly. Nothing is too big to fail.
I strongly disagree with this. The more code you have, the more resources you have to spend maintaining it. There is a very relevant example close by in the post: the bit about Google having a clang-based tool that can refactor the entire codebase. Great! Problem is, an engineer had to spend their time writing that, and you had to pay that engineer money - all because you have an unmanageable amount of code.
The real tech asset is processes: the things you have figured out in order to manage such an ungodly amount of code. Your most senior engineers, specifically what's in their heads, are an asset too.
Package management belongs to the OS - or at least to something else.
Don't get me wrong, package management is a real problem and needs to be solved. I'm arguing against a language package manager; we need a language-agnostic package manager.
Languages need to be more agnostic than a package manager requires because I should not have to rope another organization into my trust model.
Cargo already goes too far in encouraging a single repository (crates.io) for everything through its default behavior. Who maintains crates.io? Where is the transparency? This is the most important information the user should know when deciding to use crates.io, which is whether or not they can trust the maintainers not to backdoor code, and it is rarely discussed or even mentioned!
The default cargo crate (template?) encourages people to use permissive licensing for their code. So that is an example where you are already making implicit political decisions on behalf of the ecosystem and developers. That is alarming and should not be for the language maintainers to decide at all.
In C/C++ you have a separation of the standard from the implementation. This is really what makes C/C++ code long-lived, because you do not have to worry about the standard being hijacked by a single group. You have a standard and multiple competing implementations, like the WWW. I cannot encourage the use of Rust while there is only a single widely-accepted implementation.
That, and a better package manager, so your wrong-clang-version problem cannot happen. Which is what I was trying to get at.
Python doesn't have a language package manager, you're free to use pip or poetry or uv or whatever, but it does have PEP 517/518, which allow all Python package managers to interact with a common package ecosystem which encompasses polyglot codebases.
C++ is only starting to address this problem with efforts like CPS. We have a plethora of packaging formats, Debian, pkg-config, conan, CMake configs, but they cannot speak fluently to one another so the package ecosystem is fractured, presenting an immense obstacle to any integration effort.
Basically, once you have an OS level package manager, you have issues of versioning and ABI. You have people writing to the lowest common denominator - see for example being limited to the compiler and libraries available on an old Red Hat version. This need to maintain ABI compatibility has been one of the hugest issues with evolving C++.
The OS package manager ends up being a Procrustean bed forcing everything into its mold whether or not it actually fits.
Also, this doesn't even have the issue of multiple operating systems and even distros which have different package managers.
Rust and Go having their own package managers has helped greatly with real world usage and evolution.
Point 1: I do not want my program to only run on only one OS, or to require custom code to make it multi-platform.
Point 2: What if there's no OS?
Will the GhostBSD maintainers pin the right version of Haskell's aeson package?
Will the Fedora Asahi devs stay on top of the latest Ocaml TLS developments?
Will MS package PureScript's code for DOM manipulation?
If we are talking about global shared dependencies, sure it may belong in the OS.
If we are talking about directly shared code, it may as well belong in the language layer.
If we are talking about combining independent opaque libraries, then it might belong in a different "pseudo os" level like NPM.
It clearly doesn't, unless you're a fan of DLL hell and outdated packages.
Let’s say we make a “thing” which contains packages for all participating languages.
98% of the time, aren’t users just going to go “filter down to my language” and just continue what they’re doing, except with a somewhat worse overall experience, depending on whatever the “lowest common denominator” API + semantics we use for this shared package management solution.
Multi-language build systems already exist, which happily serve the needs to those projects which find themselves needing cross-language (+distributed) builds. Could there be some easier versions of these? Sure, but I don’t feel like “throw everyone in the same big box” is the solution here.
OS package managers do a fundamentally different task than the dependency management tools used in development.
They ship a bunch of applications and the libraries you need to run the applications.
If you need a different version of libfoo than e.g. Firefox does, you're out of luck.
Need to support a customer with an older release which needs a different version of libfoo? Not gonna happen.
Unless you're talking about Nix or Guix, your OS package manager is not a substitute for a dependency management tools.
There must be some extremely ideological reason behind these horrible “modern” C++ standards.
There were some good trends happening during C++11, but it is completely out of control now.
My Rationale for Using C++ in 2024: (A) Extreme computational performance desired. (B) I learned C++ 20 years back. (C) C++ has good enough Cross-Platform (OS) compatibility.
This code was originally developed in the late 1980s.
A good packaging tool would have helped a lot.
When I contributed to LibreOffice (GSoC 2012) they were still on C++03!
Ha ha ha, that's funny. It uses pre-98 C++ code that's set in stone because of the extension/UNO APIs. Yes, you can use C++17 in a bunch of places, but not for the basic structures, classes, idioms, etc.
And - that's coming from a huge LibreOffice supporter. I speak at conventions, got the T-shirts, everything.
You should only track source versions and build things from source as needed.
I will not, thanks.
A good programming platform has to consist of tooling, which includes package managers, compilers, linters, etc. Ideally, in this orbit you would also have "Language Servers" or similar. At the very least, the compiler and language should be written with this in mind, e.g. built from the ground up to support incremental compilation and so on.
Go, C#, and Rust all have tooling-first and more importantly first-party tooling. The people who design the language MUST be able to iterate quickly with the people who make the compiler, who in turn should be able to walk down the hall and talk to the people who make the package manager, the package manager repository, and so on.
And I do take asbestos as a serious example. Asbestos is still manufactured and used! Believe it or not, there are "safe" uses of asbestos, and there are protocols around using them. Never mind the fact that there is a lot of FUD and dishonesty about where exactly the line falls between safe and not safe... for example, we are only now finding out how brake dust affects the wider environment, as we crawl out from under the tent of utter misinformation spread by a highly motivated, entrenched industry.
I feel like this is not a new human phenomenon. We made particularly poor choices in what tech we became dependent on, and lo and behold, the entrenched interests keep telling us it's not that bad and we should keep doing it because...reasons.
It will eventually play out the way it must; C++ might seem a lot more innocuous than asbestos, and in some ways that's true, but it resists all effort to reform it and will probably end up needing to just be phased out.
Uh, they took decades to implement a bunch of C99 features. Is that predictive? I suspect it is.
I feel it would be best for the C++ language if its development stopped. There's no way to fix its current problems. The fact that it stayed compatible with previous iterations over so many years is an accomplishment, almost a miracle; it should be cherished. Deviating from that direction doesn't make sense, but keeping it up does not make sense either.
Programming languages benefit far more from a robust implementation, tooling, and good technical documentation (which may read like a standard) than from a prescriptive standard. The latter generates enormous waste, for what?
> * Relatively modern, capable tech corporations that understand that their code is an asset. (This isn’t strictly big tech. Any sane greenfield C++ startup will also fall into this category.)
> * Everyone else. Every ancient corporation where people are still fighting over how to indent their code, and some young engineer is begging management to allow him to set up a linter.
Well said.
And because of this, a lot of the first is leaving for greener pastures.
I also think there's a place where it can easily be supplanted, but currently cross platform native software has Qt and bindings for it in other languages are mixed.
In performance-critical things, Rust still doesn't feel like the final answer, since you end up cloning a lot and refactors are very painful. Go obviously has its issues, since SIMD support is non-existent and there is limited control over garbage collection, though it works well for web APIs.
All these new features introduce some run-time overhead, it seems.
One of C++'s core tenets is (and has been since the 90's) zero-cost abstractions. Or really, "zero-runtime-cost abstractions"; compile times tend to increase.
Obviously some abstractions necessarily require more computation (e.g. raw pointers vs reference-counted smart pointers). But in many cases new features (if implemented correctly!) give better semantics and additional compile-time safety while still compiling down to equivalent binary.
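A standard illustration of the principle; at -O2 mainstream compilers typically emit equivalent code for both, though it's worth verifying on your own toolchain:

    #include <cstddef>
    #include <numeric>
    #include <vector>

    // Hand-rolled loop...
    int sum_loop(const std::vector<int>& v) {
        int s = 0;
        for (std::size_t i = 0; i < v.size(); ++i) s += v[i];
        return s;
    }

    // ...and the library abstraction: clearer semantics, same machine code.
    int sum_algo(const std::vector<int>& v) {
        return std::accumulate(v.begin(), v.end(), 0);
    }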
ABI stability in the context of the standards committee is about library ABI, specifically the standard library. When the committee updated the wording about C++'s std::string in C++11, it meant implementers needed to change the layout of a std::string, making this "new" std::string incompatible with the "old" std::string. Any libraries passing std::string across API boundaries needed to be recompiled with the "new" std::string.
This has no effect on FFIs for interop with other languages, which are not passing STL types across language boundaries to begin with (a std::string has no meaning in Python).
ABI stability for the standard library is motivated by large, old, corporate codebases which had poor API practices, passed STL types across ABI boundaries, and subsequently lost access to the source code of those libraries and applications or otherwise cannot recompile them for some reason. Many people question the wisdom of catering to such users.
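The failure mode that motivates all this, sketched with invented names; the same source on both sides of the boundary, different object layouts underneath:

    // lib.h - shipped in a prebuilt libfoo.so, compiled years ago with the
    // *old* std::string layout baked into its object code:
    #include <string>
    void log_message(const std::string& msg);

    // app.cpp - compiled today against a *new* std::string layout:
    #include "lib.h"
    int main() {
        log_message("hello"); // compiles and links cleanly, then reads the
    }                         // string's fields at the wrong offsets at runtime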