I'm perfectly happy managing my own complexity in C, or avoiding it entirely in Python. C++, on the other hand, seems to have been designed by compiler writers for their own enjoyment and/or job security. Every other "systems programming language" from D to Objective-C to Go to Rust to Nim presents a more coherent face to the programmer.
Being a C++ compiler writer (Zortech C++, Symantec C++, Digital Mars C++), I can assure you this is not true at all.
As to why C++ is so complex, my opinion is that it was designed a long time ago, that what is considered good practice in language design has moved on since, and that C++ is unwilling to let go of its old decisions.
(There's always some major and vocal user who has built their entire business around some ancient feature.)
For example, why does C++ still support EBCDIC?
Yeah, after I wrote that I realized it wasn't quite right. C++ is designed by compiler-writer wannabes. Architecture astronauts[1] on standards committees. They think they understand how compilers should work, and that adding support for this or that should be easy. "You just need to..." is their favorite opening. I see plenty of this in distributed storage, too. "It's so simple, I'd do it myself, but it's not worth my time, you go do what I said." The C++ designers seem hung up on an abstract model of machines and compilers that's a poor match for any real machine or compiler ever, and the actual compiler writers have to bridge the gap. Thank you for your efforts, which are Herculean in an Augean-stables kind of way.
[1] https://www.joelonsoftware.com/2001/04/21/dont-let-architect...
Stop right there! There is plenty of evidence that removing features from a language is fatal to adoption. Both Perl and Python have suffered from this.
Specifically for trigraphs (apart from those, EBCDIC support doesn't affect compilers on other systems), IBM have a vote and they voted not to remove them: https://isocpp.org/files/papers/N4210.pdf
Because there are companies like Unisys and IBM that want to sell C++ compilers to their mainframe customers.
You might imagine EBCDIC was a thing of the distant past, and you'd be wrong. As of at least 2013 there were still production systems using EBCDIC being actively developed. In COBOL. And not just at IBM.
> The ratio between what a C++ compiler will accept and what it will produce sane code for is huge.
As is the case for any programming language.
> C++, on the other hand, seems to have been designed by compiler writers for their own enjoyment and/or job security.
C++ is designed by its standards committee... If you know anything about the struggles compiler writers have had with implementing the standard, you'd know the standards committee definitely does not consist of compiler writers! It's really cheap to dismiss their efforts as motivated by job security, if you ask me... I'd recommend attending a meeting or reading some proceedings to convince yourself otherwise.
For one example, look at the profusion of cases in type deduction, depending on whether one is dealing with a value, an array, a pointer, or one of two different references, and whether qualified as const or volatile.
One might argue that these cases are too prevalent to be called 'corner' cases, but that doesn't exactly help! In C++11 and C++14 there was the indisputable corner case where auto and template type deduction differed with regard to braced initializer lists, though in a rare case of backwards-compatibility-breaking, it has now been fixed [1].
Scott Meyers, for one, has given examples of particular cases in the use of braced initialization, especially in the context of templates, that can be considered corner cases in that they are probably not likely to arise very often in most of the C++ code that is being written for applications.
[1] https://isocpp.org/blog/2014/03/n3922
[2] Scott Meyers, 'Effective Modern C++', pp 52-58.
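To make the braced-init corner case concrete, here's a small sketch (variable names are mine, purely illustrative) showing how auto deduction behaves under the post-N3922 rules:

```cpp
#include <initializer_list>
#include <type_traits>

auto a = 1;    // deduced as int
auto b = {1};  // copy-list-init: std::initializer_list<int>, unchanged by N3922
auto c{1};     // C++11/14 as published: std::initializer_list<int>; after N3922: int

// Template deduction, by contrast, simply refuses a braced list:
//   template <typename T> void f(T);
//   f({1});  // error: cannot deduce T from a braced-init-list

static_assert(std::is_same<decltype(a), int>::value, "a is int");
static_assert(std::is_same<decltype(b), std::initializer_list<int>>::value,
              "b is an initializer_list");
static_assert(std::is_same<decltype(c), int>::value, "c is int under N3922 rules");
```

Note that modern compilers apply the N3922 rule even in C++11/14 modes, which is the backwards-compatibility break mentioned above.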
I frequently find that constructing objects using the () syntax produces parse errors, because the compiler is expecting a function declaration. Replacing the () with {} just fixes it. It's really frustrating that bad design like this is maintained as a stumbling block for new users instead of being fixed.
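A minimal sketch of the parse trap being described (hypothetical Widget/Timer names):

```cpp
struct Timer {};

struct Widget {
    Widget() {}
    Widget(Timer) {}
    int value() const { return 42; }
};

// The "most vexing parse": this declares a FUNCTION named w1 that returns a
// Widget and takes a pointer to a function returning Timer -- not an object.
Widget w1(Timer());

// Braces can never be parsed as a function declaration, so this is an object:
Widget w2{Timer{}};
```

Any attempt to use w1 as an object fails with a confusing error about an undefined function, which is exactly the experience described above.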
When it comes to design, C++ is a good example of why having a benevolent dictator is better than a committee. I still think it's a huge mistake to not have a standard ABI and rely on C's ABI.
Moves and rvalue references (and whatever a prvalue is) and even RVO scare me. They make me want to pass pointers around, because at least then I know for sure what'll happen. (And, funnily enough, C++ seems worse than dynamic languages for this -- more magic around function calls and returns than C or Python or JavaScript.)
Personally I'm hoping Rust displaces it from most of these remaining niches but even if it does it will probably happen slowly.
For a certain class of program, you mean. For applications specifically, the advantages you mention are barely relevant. Usually only small parts of a whole application need low-level control of memory etc. Those can be written in C, with the rest written in a cleaner higher-level language that interfaces easily with C (there are many such).
C++ is proof that a single language can't satisfy all needs. It tries to do the low-level stuff as well as C, and fails because it can't produce libraries that are as easy to link to. Then it tries to do all the high-level stuff as well as other languages, and utterly fails because it can't get away from its low-level roots. D, Rust, and Nim all make better compromises that suit just about all C++ use cases. Go and Pony do a better job for some but not others. I won't say there's no room for C++ in a sane computing world, but its legitimate niche is very small indeed.
Only now is embedded development starting to accept C++, and C still rules there anyway.
Which means it took about 20 years to reach this point.
And still Rust will need to go through the same certification processes that C, C++, Ada and Java enjoy for such scenarios.
* lack of namespaces - all names with long prefixes look the same to me
* just text based macros
* no generics
* error handling usually based on int constants and output function parameters - in big projects it is hard to use them consistently without any type checking
* no polymorphism
* ton of undefined behavior (almost the same as C++)
The difference is, C lets you control how much baggage you carry along and C++ doesn't. If I want a higher-level abstraction in C, I can usually implement it pretty well using inlines and macros, void and function pointers. Will it be pretty? Hell no. Will it have all of the syntactic sugar that the C++ version does? Nope. But it will work and be usable and most importantly the complexity/risk that it carries along will be exactly what I asked for (because I know how to manage it). Using a feature in C++ means carrying along all of the baggage from its dependencies and interactions with every other feature.
If programming languages were cars, it's like the dealer saying I can't add one option to the base model. No, I have to buy the top-of-the-line model with dozens of features I don't actually care about and will never use, costing far more and oh by the way that model has door handles that will cut you. That's about when I go to the Python dealership down the street, or just stick with good old C.
C has plenty of polymorphism. The compiler just doesn't do it for you. In fact, C++ started out as C-with-classes since it was a pain to keep recreating the OOP-in-C boilerplate over and over. Besides, there are more kinds of polymorphism than virtual methods. You can be polymorphic with an array of function pointers.
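For instance, a dispatch table of function pointers gives you runtime polymorphism in plain C. A hypothetical Shape sketch (compiles as C or C++; names are made up for illustration):

```cpp
/* A hand-rolled vtable in plain C style: polymorphism via a struct of
 * function pointers. */

struct Shape;

typedef struct ShapeOps {
    double (*area)(const struct Shape *);
} ShapeOps;

typedef struct Shape {
    const ShapeOps *ops;  /* the "vtable" pointer */
    double a, b;          /* dimensions; meaning depends on the concrete shape */
} Shape;

static double rect_area(const Shape *s)     { return s->a * s->b; }
static double triangle_area(const Shape *s) { return s->a * s->b / 2.0; }

static const ShapeOps rect_ops     = { rect_area };
static const ShapeOps triangle_ops = { triangle_area };

/* The "virtual" call: dispatch through the function pointer. */
static double shape_area(const Shape *s) { return s->ops->area(s); }
```

This is exactly the boilerplate that C-with-classes automated: the compiler builds and fills the ops table for you.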
It's not what a compiler writer would want, by any stretch. Having helped write a C++ compiler, I can attest to that. I will agree that C is a nice language. It does exactly as you tell it.
The complexity, I would say, is what you get when you "design by committee".
Web standards have a similar problem: they keep growing and getting more complex. Try writing a browser from the ground up these days.
More importantly, for my niche I don't see anything that can readily replace C++. Rust has very little support for scientific computing. Julia is great, and will replace my high level statistical inference code, but it's not designed to let me design low-level close-to-the-metal data structures. Scala has memory issues, which will be hopefully less problematic once value types are implemented in the JVM. OCaml and F# look interesting, I haven't evaluated these carefully.
It's not the prettiest language by any stretch, but it's quite capable and fast and has excellent support across just about every platform.
There might be a "culture of complexity" in the community, but to remove the conflicting paradigms from C++ is to destroy what makes C++ useful. I don't believe C++ is complex in its DNA, but it is highly experimental, overwhelming to newcomers and experienced developers alike (since you have to truly understand any feature before using it), and easily misunderstood. It requires more strictness in design and implementation than other languages, and isn't my first choice for anything that doesn't require high performance. But since I'm in game development and audio synthesis, it's often my only choice, since nothing else hits that sweet spot of abstraction and performance.
every single c++ shop I've worked at since has said, 'well, yes, the language is a mess, but if you stick to a well-controlled subset, it's really pretty good'
and all of those shops, without exception, have dragged in every last weird and contradictory feature of what is a really enormous language. so I guess the 'sane subset' argument is ok in theory, but really not in practice.
i've actually seen some *really* clever mixin/metaprogramming with templates. it was a total disaster, and in a different environment it could be a really great approach. i could never understand it in complete detail, but if C is a .38 that you can use to blow your foot off, C++ is a 20ga shotgun with a pound of c4 strapped to your head.
That's not up to the individual coder coming later to a codebase. Or wanting to use a library that enforces those features, etc.
And the design of the features can impact how other features are implemented, even if one doesn't use them.
They have a ton of warts regarding annotations, usually worked around by using templates, because in that case they are inferred.
The semantics of shared are still being worked on.
The way const/immutable works makes some devs just give up and remove them from their code.
I can equally tell some Objective-C issues.
Yes, in general they are better than C++, but not without their own warts.
That's true. The thing with D const/immutable is they are transitive, and the compiler means it. It's not a suggestion.
The advantage of enforced transitive const/immutable, of course, is that it's foundational if you want to do functional-style programming.
They kinda do...
If that were the case, the Edison Design Group (https://en.wikipedia.org/wiki/Edison_Design_Group) wouldn't exist. It exists because compiler writers don't want to have to deal with parsing C++.
(Then there's Dinkumware, which serves the same purpose for library functions.)
In pretty much all senses of the word?
> getting even more so with the myriad of features they are adding each release.
It's far from adding a "myriad of features" with each release, and most of what it adds is library stuff; see, for 1.23: https://github.com/rust-lang/rust/blob/master/RELEASES.md#ve...
And with respect to non-library features currently in-flight, by and large they are "playing catch-up" to C/C++ for compatibility or to fulfil use cases for which the language is not currently convenient or sufficient e.g. generics-over-values, const functions, allocators.
Rust has an upfront feeling of complexity in lifetimes and the borrow checker, but here's the ugly truth: pointer lifetime issues don't exist any less in C++, the only difference is the compiler doesn't help you with them.
In C++, creating an instance of a class is fantastically complicated. The class must be initialized, and there is a zoo of different initialization forms: value, direct, aggregate, default, list, copy, etc. Which one foo {} invokes has been the subject of a spec bug. Some of these invoke a constructor, which is like a function, but isn't a function, multiplying the number of concepts involved further. Constructors suffer from bizarre syntactic limitations. They need a special syntax for handling exceptions. The form foo f(); famously declares a function instead of default initializing f. The [dcl.init] section of the spec is about 16 pages long and there are about another dozen about how constructors work in the special member functions section.
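A small, hand-picked sample of that zoo (illustrative names only):

```cpp
#include <cassert>
#include <vector>

void initialization_zoo() {
    std::vector<int> a(3, 5);  // direct-initialization: three elements, all 5
    std::vector<int> b{3, 5};  // list-initialization: two elements, 3 and 5
    assert(a.size() == 3 && a[0] == 5);
    assert(b.size() == 2 && b[0] == 3);

    int c{};  // value-initialization: guaranteed zero
    int d;    // default-initialization: indeterminate -- reading it is UB
    (void)d;
    assert(c == 0);
}
```

Swapping parentheses for braces on the same arguments silently changes which constructor runs and what the object contains.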
In Rust, there is exactly one way to create an instance of a struct: you provide a value for each of its fields.
Perhaps one answer could be, that much of Rust's complexity arises from the concepts of 'borrowing' and 'lifetimes' and how they are encoded in the language.
In C++, much of the complexity arises from the copy/reference semantics and the possibility of overriding standard behaviour. You need wider context to understand local code. So you need to understand how C++ works at quite a low level, and you might need to know more specifics of your C++ codebase than would be the case in Rust.
In C++ a large chunk of complexity comes from legacy of being a C superset and having to preserve backwards compatibility with all of its old features and syntax quirks, and often surprising edge cases arising from interactions between different features.
The real world is complex. Don't confuse hiding complexity with minimizing complexity.
Because slippery slope arguments never shone light on any situation.
>The real world is complex.
Which is neither here nor there. We're talking about programming languages -- where one can be 10x as complex as another, but as long as both are Turing complete, the more complex one can't really do anything substantially more.
So, this is not about more power to handle "the real world", but about ergonomics. Which nobody ever said was C++'s strong point.
>Don't confuse hiding complexity with minimizing complexity.
Well, we should also not confuse adding complexity with adding expressiveness.
It is a question of what you give priority design-wise, and if you look at what is important in c++ (zero-cost runtime abstractions, control of the resulting code), then ending up with something as complex as the c++ language is hard to prevent.
If I had the choice of having a C++2020 with absolutely no backwards compatibility but a greatly cleaned up language, or having yet another step down the path it's heading now, I'd choose the former even if it breaks backwards compatibility. But the language is going to follow the developers, and they want to continue down the path where they get to explain again and again and again the intricacies and delicacies of rvalue semantics, exotic template meta-programming and whatever the next big thing is going to be. All the while we seemingly still don't even have a clear road-map for providing simple modules.
I was enthusiastic back when C++11 hit and it felt like a great big push going on in the language development; I had the feeling that things were really going to take off... but now it feels like it just died out, and there's nothing that really gets me excited about the future of C++. I'm worried it's going to get more and more complex and less and less used.
The correct way to deal with backwards compatibility issues is the way Go does it, namely to write tools to automatically upgrade code to a newer API [1]. And as that smug Google guy explained here [2], they have something like this for C++, too. They just don't bother sharing it with us peasants.
The fundamental mistake of C++ is to religiously keep backwards-compatibility with everything for all eternity. It's so easy to say that backwards-compatibility is a good thing. It's obvious! But ultimately that decision is the reason for all the problems that C++ has.
It's not the 80s anymore, and the reason that C++ is still and will always remain hard to parse is... backwards compatibility.
Tools are great for minor upgrades of APIs and syntax, but key to C++ was compiling with existing system headers (no, realistically you do not want to maintain your own 'fixed' version), and most importantly OS and library ABI.
This DrDobbs article, in a 1993 issue dedicated to possible successors to C, is the only proof that it ever existed.
http://www.drdobbs.com/article/print?articleId=184409085&sit...
Print view is the only way to read it properly.
The lesson here is that languages should restrict their backwards-incompatible changes to only those that are amenable to automatic migration via limited gofix-like tools, but that runs counter to your argument that C++ ought to be casting off its C heritage (which I quite sympathize with).
In addition, C++ not only inherited C's static-typing rules, but strengthened them, and complicated them with inheritance (multiple, and polymorphic or not, according to your needs), function overloads and operator overloading. Now add generic types through templates, and the cases to be handled explodes to the point where you need a Turing-complete parser.
A good case can be made for each of C++'s features individually, which leads to justifications of the form "As we want both X and Y (and without sacrificing run-time efficiency), then there really isn't any alternative to [insert counter-intuitive issue here]..."
That it works at all is a testament to the skill of the people involved. The complications that remain seem qualitatively different than the sort of gratuitous inconsistencies that you find in PHP, for example.
Bugs in the language or compiler will make the language useless for any project which is expected to work.
I'm gonna be bold and say that neither you nor anyone else would put up with it. There are enough bugs in one's own code; there is rarely any space for unreliable infrastructure.
Also note that almost all C code is legal C++, so when you "just write idiomatic C", you're still writing C++. (Perhaps not idiomatic C++...)
struct Adder;
struct Adder *adder_create(void);
void adder_setup(struct Adder *, int, int);
int adder_operate(struct Adder *);
void adder_delete(struct Adder *);
You can easily imagine how one would trivially implement these functions in adder.c, allocating an object, mutating its state and deleting it. The caller doesn't know anything about struct Adder, nor does it know how the adding of two ints is implemented. It only needs to know that struct Adder can be pointed to. The header file defines a clear interface that can be compiled against, and the implementation can be changed without having to recompile all callers. Tried-and-true and boring, but it works: this is the way C has done it for decades.

Now, C++. You create a class Adder, declare a public constructor and destructor as well as setup() and operate(). Then you declare a couple of private ints to hold state and maybe some private helper methods, and then you realise that 1) you've just exposed parts of the private implementation in the public header and 2) you can potentially break compiled binaries even if you only change the private/protected parts of the class. Yes, that's a textbook example of how to define a class that completely sucks for any real-life encapsulation purpose. You see how quickly things are getting complex? This is where people began to think of more novel applications of C++ to fix the language itself.
So you define an abstract class IAdder in adder.h with pure virtual methods to act as a truly public interface, and derive an implementation class AdderImpl in adder.cpp. Great. Except you can't instantiate the private implementation. You'll need a public factory function outside the class or a static method such as IAdder::create() to construct an AdderImpl and return it as an IAdder. This isn't very clean and beautiful anymore and this was a simple example. There are more branches to be explored in the solution space but at this point we've basically had to create an ugly reimplementation of something that we thought would come free in a language that namely supports object oriented programming whose one fundamental selling point is easy encapsulation. And all that while the C counterpart is actually easy, understandable and simple, and requires no re-engineering to get it even work.
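For concreteness, here's a rough sketch of the IAdder/AdderImpl arrangement described above (the exact signatures are my guesses, not a canonical API):

```cpp
// adder.h -- the public interface: nothing about the implementation leaks out
#include <memory>

class IAdder {
public:
    virtual ~IAdder() = default;
    virtual void setup(int a, int b) = 0;
    virtual int operate() = 0;

    static std::unique_ptr<IAdder> create();  // factory, defined in adder.cpp
};

// adder.cpp -- the private implementation, invisible to callers
class AdderImpl : public IAdder {
    int a_ = 0, b_ = 0;  // private state, no longer exposed in the header
public:
    void setup(int a, int b) override { a_ = a; b_ = b; }
    int operate() override { return a_ + b_; }
};

std::unique_ptr<IAdder> IAdder::create() {
    return std::unique_ptr<IAdder>(new AdderImpl());
}
```

Note the extra machinery (vtable, factory, heap allocation) needed to reach the same binary-stable encapsulation the five-line C header gave for free.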
The thing with equals applies to most languages that allow redefining operators.
Even Java has gotten quite complex; I bet he would have quite a hard time succeeding at one of those quiz-question certifications (SCJP and friends).
And simplicity can also turn into more complexity: manual memory management in C vs C++, "templates/generics" in Go, libraries to work around language limitations in JavaScript (npm, yeoman, gulp, grunt, webpacker all at the same time).
Yes, there are alternatives, but when it comes to performance, expressiveness and actual deployability, C++ is pretty awesome.
Java, on the other hand, is a simple language. But the kind of unnecessary complexity I have seen in Java-land (EJBs, Spring, etc.) has no parallel in C++-land.
So going by your argument, I would choose C++ over Java to avoid the complexity jump, then profile the app, and if any part's too slow, improve that again in C++.
Also, since C++ is native, very memory-efficient data structures can be written. This property can be leveraged to write very small, very efficient and very resilient loops which can run for months at a time, and since you have absolute control over memory management, you can prevent unwanted bloat or leaks pretty easily.
Last but not least: CPUs weren't that powerful 10-15 years ago. My desktop computer was as fast as the first Raspberry Pi. Python, JS, even Java were very unpleasant experiences back then.
I've never had to go looking to find warts in C++.
A line like
if ( a == b )
in C++ is pretty obvious in what it does. It'll always be a value comparison, and if a and b are pointers to objects you are comparing the object identities and not their values. The meaning is exactly the same in Java, of course, but the fact that you're dealing with pointers is hidden, so you get a lot of confusion about the correct use of `==` and `.equals`.

The author certainly isn't wrong about a culture of complexity in certain elements, but, to be honest, I see that everywhere else too, and it needs to be fought wherever it occurs.
[1] - As usual, there are many ways of doing OO, not just C++/Java style.
In Java/C#, it doesn't always:
int x = 1; int y = 2; x = y;
The variable `x` does not refer to the object that `y` is referring to (as there is no reference involved at all).
Assignment is a procedure that makes `x` equal to `y` without modifying `y`. However, if `x` merely refers to the same object as `y`, then modifying that object through `y` after the assignment also changes what `x` sees. This destroys the ability to reason locally about the code, because each object essentially becomes as good as a global variable.
Even though C++ has areas where things get complicated, this is the one thing that C++ keeps very simple and consistent. There is only one definition of copy, assignment, and equality, whereas in Java there are multiple definitions (deep copy vs shallow copy, `==` vs `.equals`).
> That’s called value semantics, although I would prefer the term state semantics: objects do not have a value, they have state, and that’s what’s being transferred here.
No. It's value semantics, as these objects represent some entity. The interpretation of the state (or datum, to be more precise) is known as the object's value. For example, when copying a `std::vector`, the internal state of the copy will be different, as it will point to a new piece of memory; its value, however, will still be the same.
> But experience shows that making a copy of an object is hard.
The compiler already generates a copy constructor and copy assignment for you. It's only necessary to write one when dealing with low-level pointers. Using the standard built-in types and containers, writing a copy constructor is never needed.
1. Application level: Typically manipulating lots of strings and data massaging. I prefer Java or Python for this; the IDEs and ecosystem make it so much faster to get started.
2. Systems level: Typically a high-performance system like a DB manager or a fast processing library like a message producer etc. These things are time critical and need performance.
I used to really love C++ but I agree, it takes far too long just to start making things run. Sigh!
Most of the time this only shows a lack of deep knowledge of the language.
Just pick a copy of ANSI C and randomly ask a couple of UB questions.
You have a catalog of about 200 cases to choose from.
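For instance, two classic entries from that catalog, sketched in portable C-style code (function names are made up; the same cases are UB in C++):

```cpp
#include <limits.h>

int signed_inc(int x) {
    return x + 1;      /* UB if x == INT_MAX: signed integer overflow */
}

int shift_ub(int x, int n) {
    return x << n;     /* UB if n >= the width of int, or if x is negative */
}

/* Unsigned arithmetic, by contrast, is defined to wrap: */
unsigned unsigned_inc(unsigned x) {
    return x + 1u;     /* UINT_MAX + 1u == 0u, guaranteed by the standard */
}
```

Both functions look perfectly innocent, which is exactly why the quiz would be brutal.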
It is true that it is a complex language, but this is true for every tool that allows you to do so many different things.
The complexity is not a goal in itself, it's just that it can do all of those things if you need them.
Simple tools are great and will save you a lot of time, but the truth is that you won't always be able to do everything with them. C++ allows you to do everything, and it is true that the cost can be very high and requires a lot of thorough knowledge and expert learning of what happens, but there are projects and cases where you just need to use C++ because that's the only choice you have left.
I agree that alternatives like Rust or D would be great as replacements, but the problem remains: if compilers are not mature on most platforms, and if you don't have a large programmer base because the basics of the language are not simple enough, the language won't grow.
> Neither the performance issue that move semantics address nor the perfect forwarding problem exist in classic OO languages that use reference semantics and garbage collection for user-defined types.
I understand reference semantics but what are move semantics?
Also what is the "perfect forwarding problem"?
http://thbecker.net/articles/rvalue_references/section_01.ht...
Warning: Read only if you have a serious interest in C++.
C++ went off into template la-la land some years back. C++ templates were not supposed to be a programming language. But they turned into one, and not a good one.
Look up the implementation of "max" and "min" in C++ templates.
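For reference, a simplified sketch of roughly what such a template looks like, plus the obligatory demonstration that the same machinery is a compile-time language (underscored names to avoid clashing with the standard library):

```cpp
// Roughly what std::max looks like (simplified; the real one also has
// overloads for initializer lists and custom comparators):
template <typename T>
const T& max_(const T& a, const T& b) {
    return (a < b) ? b : a;
}

// The same template machinery is an accidental, Turing-complete,
// compile-time functional language:
template <unsigned N> struct Fact {
    static const unsigned value = N * Fact<N - 1>::value;
};
template <> struct Fact<0> {
    static const unsigned value = 1;
};

static_assert(Fact<5>::value == 120, "evaluated entirely at compile time");
```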
Now there's a push for "move semantics", to keep up with Rust and the cool kids. But not with language support and a borrow checker. With templates and cute tricks with the type system.
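To illustrate the "templates and cute tricks with the type system" point: std::move and std::forward are essentially just casts written with template machinery. A simplified, non-normative sketch (underscored names to avoid clashing with the standard library):

```cpp
#include <string>
#include <type_traits>

// std::move is roughly a cast to an rvalue reference:
template <typename T>
typename std::remove_reference<T>::type&& move_(T&& t) {
    return static_cast<typename std::remove_reference<T>::type&&>(t);
}

// std::forward (lvalue overload), the other half of the trick, relies on
// reference collapsing to pass lvalues as lvalues and rvalues as rvalues:
template <typename T>
T&& forward_(typename std::remove_reference<T>::type& t) {
    return static_cast<T&&>(t);
}

// Usage: the string's buffer is transferred out, not copied.
inline std::string steal(std::string s) { return move_(s); }
```

There is no language keyword for "move" anywhere; the whole mechanism rides on rvalue-reference overload resolution plus these casts.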
I don't get this. The performance issue move semantics solve doesn't exist if you just use automatic garbage collection? Is this what the article is saying?
It's not bytecode, so distributing binaries is a crappy problem too. I am of the mind that a dependency system above the build system(s) is probably the best bet. Not as low-level as a binary interface, but "I need library X >= version n.m.o..."
x = y;
behaves the same if the types are int and vector<int> (unlike e.g. Java and Python).
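A quick sketch of that consistency (illustrative function name):

```cpp
#include <cassert>
#include <vector>

void value_semantics_demo() {
    int xi = 1, yi = 2;
    xi = yi;                  // copies the value 2 into xi
    yi = 99;
    assert(xi == 2);          // later changes to yi don't touch xi

    std::vector<int> xv{1}, yv{2, 3};
    xv = yv;                  // same semantics: a full, deep copy
    yv.push_back(4);
    assert(xv.size() == 2);   // xv unaffected; in Java or Python both names
                              // would now alias a single list object
}
```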
1. How cumbersome a language is to use (as a beginner, as a confident developer, etc).
- C++ is rather easy to start with, but takes ages to master and surprises even power users daily.
- Rust is hard to start with: it states the complexity of systems upfront, in a tightly packed knot that has to be handled all at once, but once you're past that, it's rather consistent.
- Haskell is very hard to start with, it is basically unlearning every imperative habit, but again, after that it's a powerful and mostly consistent tool (not without its warts, but drastically less than C++). There is some tolerance of complexity in the ecosystem, but it is clearly encapsulated in libraries and justified by papers and research interests.
2. How complex is the abstract core of a language.
- Haskell has had an amazingly simple, elegant and consistent core for decades; on the other hand, modern Haskell has accumulated a lot of research-y stuff (which is still optional to use), which may be nice/cumbersome depending on the situation. Run-time behavior feels woefully underspecified though, and can be flaky and hard to reason about.
- the mental model of Rust is bigger and more complex than Haskell's (traits, generics, lifetimes/ownership, the underlying C-like memory model), but it's definitely practical (if somewhat spartan) and consistent.
- C++ does not have a coherent vision at all: it is a pile of organically grown features with interesting (and sometimes useful) interactions. This pile is outright impossible to reason about formally.
The only definition in which "C++ is not more complex" is the definition of being easy to use for a beginner in the language.
I'm quite happy with my current gig using Go. Looking back, the culture of complexity surrounding c++ is obvious, but talking with my peers who have only ever done c++ - it's like they have Stockholm Syndrome.
Go has its own share of idiosyncrasies, and it drives me nuts sometimes even more than C++ did, but these bursts of mental grind are less common and much shorter in comparison :). The arcane complexity of C++ (and tooling around it) is something that I don't miss at all.
The source of complexity isn't a mythical "culture of complexity", the complexity is there because it's inevitable if you want to implement powerful compile-time type reasoning. (And you definitely want compile-time reasoning because it's the only way to guarantee performance and correctness of programs.)
The case of Haskell and Rust proves that the issue isn't cultural, it's inherent to the problem domain.