I actually think Ada would be an easier sell today than it was back then. It seems to me that the software field overall has become more open to a wider variety of languages and concepts, and knowing Ada wouldn't be as widely perceived as career pigeonholing today. Plus, Ada is having a bit of a resurgence, with things like NVIDIA picking SPARK.
Secondly, when companies say "we can't hire enough X" what they really mean is "X are too expensive". They probably have some strict salary bands and nobody had the power to change them.
In other words, there are plenty of expensive good Ada and C++ programmers, but the only cheap programmers are crap C++ programmers.
But not because I think schools and colleges would jump at the opportunity and start training the next batch of students in said language just because some government department or a bunch of large corporations supported and/or mandated it. Mostly because that hasn't actually panned out in reality for as long as I can remember. Trust me, I _wish_ schools and colleges were that proactive, or even in touch with industry needs, but... (shrug!)
Like I said, I still think the original argument is flawed, at least in the general case, because any good organization shouldn't be hiring "language X" programmers; they should be hiring good programmers who show the ability to transfer their problem-solving skills across the panoply of languages out there. Investing in getting a _good_ programmer upskilled on a new language is not as expensive as most organizations make it out to be.
Now, if you go and pick some _really obscure_ (read "screwed up") programming language, there's not much out there that can help you either way, so... (shrug!)
The DoD did enforce a requirement for Ada, but universities and others did not follow.
The JSF C++ guidelines were created to circumvent the DoD Ada mandate (as discussed in the video).
Everyone likes to crap on C++ because it's (a) popular and (b) tries to make everyone happy with a ton of different paradigms built-in. But you can program nearly any system with it more scalably than anything else.
> And the F35 and America's combat readiness would be in a better place today with Ada instead of C++
What’s the problem with the F35 and combat readiness? Many EU countries are falling over each-other to buy it.
I know others who learned Ada on the job.
It’s not too terrible.
> And the F35 and America's combat readiness would be in a better place today with Ada instead of C++.
What is the evidence for this? Companies selling Ada products would almost certainly agree, since they have a horse in the race. Ada does not automatically lead to better, more robust, safer or fully correct software.
Your line of argument is dangerous and dishonest, as real life regrettably shows.[0]
[0]: https://en.wikipedia.org/wiki/Ariane_flight_V88
> The failure has become known as one of the most infamous and expensive software bugs in history.[2] The failure resulted in a loss of more than US$370 million.[3]
> The launch failure brought the high risks associated with complex computing systems to the attention of the general public, politicians, and executives, resulting in increased support for research on ensuring the reliability of safety-critical systems. The subsequent automated analysis of the Ariane code (written in Ada) was the first example of large-scale static code analysis by abstract interpretation.[9]
I’m sure I’m idealizing it, but at least I’m not demonizing it like folks did back in the day.
Are you sure? I cannot even find Ada in [0].
I tried modifying a Hello World example in Ada some weeks ago, and I cannot say that I liked the syntax. Some features were neat. I had some trouble figuring out building and organizing files. Like C++, and unlike Rust I think, it has multiple source file types, similar to how C++ has header files. I also had trouble with some flags, but I was trying to use some experimental features, so I think that part was on me.
[0]: https://redmonk.com/sogrady/2025/06/18/language-rankings-1-2...
https://www.ghs.com/products/ada_optimizing_compilers.html
https://www.ptc.com/en/products/developer-tools/apexada
https://www.ddci.com/solutions/products/ddci-developer-suite...
http://www.irvine.com/tech.html
http://www.ocsystems.com/w/index.php/OCS:PowerAda
http://www.rrsoftware.com/html/prodinf/janus95/j-ada95.htm
What is true is that with those vendors, and many others, like the UNIX vendors that used to have Ada compilers (Sun, for example), paying for the Ada compiler was extra, while C and C++ were already there on the UNIX developer SKU (a tradition that Sun started, having various UNIX SKUs).
So schools and many folks found it easier to just buy a C or C++ compiler than an Ada one, with its price tag.
Something that has helped Ada is the great work done by AdaCore, even if a few love hating them. They are the major sponsor of the ISO work, and of spreading Ada knowledge in the open source community.
The main issue is mission assurance. Using the stack or the heap means your variables aren't always at the same memory address. This can be bad if a particular memory cell has failed. If every variable has a fixed address, and one of those addresses goes bad, a patch can be loaded to move that address and the mission can continue.
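A minimal C sketch of the idea (names invented): anything with static storage duration gets a link-time-fixed address that shows up in the linker map, which is what makes the "patch around the bad cell" strategy possible.

#include <stdint.h>

static int32_t airspeed_kts;   /* address fixed at link time, visible in the map file */
static int32_t altitude_ft;    /* ditto */

void update_state(int32_t spd, int32_t alt) {
    airspeed_kts = spd;        /* always writes the same physical memory cell */
    altitude_ft  = alt;
}

If the cell backing airspeed_kts fails, a patch can retarget that single address and leave everything else untouched.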
Your mention of STL makes it sound like you're talking about C++. But I don't know of any C++ compiler that lets you completely avoid use of the stack, even if you disable the usual suspects (RTTI and exceptions). Sure, you'd have to avoid local variables, defined within a function's body (or at block scope), but that's nowhere near enough.
* The compiler would need to statically allocate space for every function's parameters and return address. That's actually how early compilers did work, but today it would be inefficient because there are surely many more functions defined in a program's binary than are executing at any given time. (Edit: I suppose you already need the actual code for those functions, so maybe allocating room for their parameters is not so bad.)
* It would also mean that recursion would not work, even mutual recursion (so you'd need runtime checks, because this would be hard to detect at compile/link time). I suspect this is less of a problem than it sounds, but I'm not aware of a C++ compiler that supports this mode.
* You'd also need to avoid creating any temporary variables at all e.g. y = a + b + c would not be allowed if a,b,c are non-trivial types. (y = a + b would be OK because the temporary could be constructed directly into y's footprint, or stored temporarily in the return space of the relevant operator+(), which again would be statically allocated).
Is that really what you meant? I suspect not, but without all that your point about avoiding the stack doesn't make any sense.
This seems like a rather manual way to go about things for which an automated solution can be devised, such as special ECC memory that also accounts for entire-cell failure with Reed-Solomon coding, or a boot process that blacklists bad cells, etc.
This is what makes remote debugging possible. It is impossible to do interactive remote debugging over an ultra-low-bandwidth link. If everything has a static address and deterministic state, you can keep an exact copy on the ground and debug there.
Where do you place the variables then? As global variables? And how do you detect that a memory cell has gone bad?
> If every variable has a fixed address, and one of those addresses goes bad, a patch can be loaded to move that address and the mission can continue.
You can and do put the stack and heap pool at fixed memory ranges, so you can always do this. I'm not sold on this reasoning at all.
I actually do this as well, but in addition I log out a message like, "value was neither found nor not found. This should never happen."
This is incredibly useful for debugging. When code is running at scale, nonzero probability events happen all the time, and being able to immediately understand what happened - even if I don't understand why - has been very valuable to me.
In fact, not using a default (the else clause equivalent) is ideal if you can explicitly cover all cases, because then if the possibilities expand (say a new value in an enum) you’ll be annoyed by the compiler to cover the new case, which might otherwise slip by.
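A small C sketch of that pattern (the enum and names are made up; -Wswitch is the relevant GCC/Clang warning):

#include <stdio.h>

typedef enum { MODE_IDLE, MODE_ARMED, MODE_FIRING } Mode;

static const char *mode_name(Mode m) {
    switch (m) {                   /* deliberately no default case */
    case MODE_IDLE:   return "idle";
    case MODE_ARMED:  return "armed";
    case MODE_FIRING: return "firing";
    }
    /* reached only if m somehow holds a value outside the enum */
    fprintf(stderr, "mode was neither known nor unknown. This should never happen.\n");
    return "?";
}

Adding a new enumerator later makes -Wswitch (or -Werror=switch) flag this switch until the new case is handled.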
> allocation/deallocation from/to the free store (heap) shall not occur after initialization.
> This works fine when the problem is roughly constant, as it was in, say, 2005. But what do things look like in modern AI-guided drones?

I can't think of anything about "modern AI-guided drones" that would change the fundamental mechanics. Some systems support very elastic and dynamic workloads under fixed allocation constraints.
In this way you can use pools or buffers of which you know exactly the size. But, unless your program is always using exactly the same amount of memory at all times, you now have to manage memory allocations in your pool/buffers.
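As a hedged illustration, a fixed pool in C might look like this (sizes and names invented): all memory is reserved up front, and the "allocator" can fail but never grows.

#include <stdbool.h>
#include <stddef.h>

typedef struct { int id; char payload[120]; } Msg;

#define MSG_SLOTS 64                /* whole pool reserved at link time */
static Msg  pool[MSG_SLOTS];
static bool in_use[MSG_SLOTS];

static Msg *msg_alloc(void) {
    for (size_t i = 0; i < MSG_SLOTS; i++) {
        if (!in_use[i]) {
            in_use[i] = true;
            return &pool[i];
        }
    }
    return NULL;                    /* exhausted: caller must handle it, the pool never grows */
}

static void msg_free(Msg *m) {
    in_use[m - pool] = false;
}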
stdio.h is fine in some embedded contexts, and very very not fine in others
Actual code I have seen with my own eyes (not in F-35 code).

It's a way to avoid removing an unused parameter from a method. Unused parameters are disallowed, but this is fine?
I am sceptical that these coding standards make for good code!
Notably, this document is from 2005. So that's after C++ was standardized, but before their second bite of that particular cherry, and twenty years before its author, Bjarne Stroustrup, suddenly decided, after years of insisting that C++ dialects are a terrible idea and would never be endorsed by the language committee, that in fact dialects (now named "profiles") are the magic ingredient to fix the festering problems with the language.
While Laurie's video is fun, I too am sceptical about the value of style guides, which is what these are. "TABS shall be avoided" or "Letters in function names shall be lowercase" isn't because somebody's aeroplane fell out of the sky - it's due to using a style Bjarne doesn't like.
And boiling down these guidelines to style guides is just incorrect. I've never had a 'nit: cyclomatic complexity, and uses dynamic allocation'.
(void) a;
I'm sure there are commonly-implemented compiler extensions, but this is the normal/native way and should always work.

While maybe 10% of rules are sensible, these sensible rules also tend to be blindingly obvious, or at least table stakes on embedded systems (e.g. don't try to allocate on a system which probably doesn't have a full libc in the first place).
_ = a;
And you would encounter it quite often because an unused variable is a compilation error: https://github.com/ziglang/zig/issues/335

Isn't it just bad design that makes experimenting harder and lets unused variables stay in the code in the final version?
It's extremely annoying until it's suddenly very useful and has prevented you doing something unintended.
(void) a;
Every C programmer beyond weaning knows that.

There are many areas of software where bureaucracy requires MISRA compliance, but that aren't really safety-critical. The code is a hot mess. There are other areas that require MISRA compliance and the domain is actually safety-critical (e.g. automotive software). Here, the saving grace is (1) low complexity of each CPU's codebase and (2) extensive testing.
To people who want actual safety, security, portability, I tell them to learn from examples set by the Linux kernel, SQLite, OpenSSL, FFMpeg, etc. Modern linters (even free ones) are actually valuable compared to MISRA compliance checkers.
In my opinion, the MISRA C++ 2023 revision is a massive improvement over the 2008 edition. It was a major rethink and has a lot more generally useful guidance. Either way, you need to tailor the standards to your project. Even the MISRA standards authors agree:
"""
Blind adherence to the letter without understanding is pointless.
Anyone who stipulates 100% MISRA-C coverage with no deviations does not understand what they are asking for.
In my opinion they should be taken out and... well... Just taken out.
- Chris Hill, Member of MISRA C Working Group (MISRA Matters Column, MTE, June 2012)
"""Note that both MISRA and AUTOSAR's guidelines have been combined into a single standard "MISRA C++ 2023" which has been updated for C++17.
Breaking Down the AUTOSAR C++14 Coding Guidelines - https://www.parasoft.com/blog/breaking-down-the-autosar-c14-...
Reading through the JSF++ coding standards I see they ban exceptions, ban the standard template library, ban multiple inheritance, ban dynamic casts, and essentially strip C++ down to bare metal with one crucial feature remaining: automatic destructors through RAII. When a variable goes out of scope, cleanup happens. That is the entire value proposition they are extracting from C++, and it made me wonder if C could achieve the same thing without dragging along the C++ compiler and all its complexity.
GLib is a utility library that extends C with better string handling, data structures, and portable system abstractions, but buried within it is a remarkably elegant solution to automatic resource management that leverages a GCC and Clang extension called the cleanup attribute. This attribute allows you to tag a variable with a function that gets called automatically when that variable goes out of scope, which is essentially what C++ destructors do but without the overhead of classes and virtual tables.
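Stripped of GLib, the attribute itself looks like this (a GCC/Clang extension, not standard C; a minimal sketch):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void free_str(char **p) {   /* the compiler passes a pointer TO the variable */
    free(*p);
}

int main(void) {
    __attribute__((cleanup(free_str))) char *s = strdup("hello");
    printf("%s\n", s);
    return 0;                      /* free_str(&s) runs automatically here */
}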
The heart of GLib's memory management system starts with two simple macros: g_autofree and g_autoptr. The g_autofree macro is deceptively simple. You declare a pointer with this attribute and when the pointer goes out of scope, g_free is automatically called on it. No manual memory management, no remembering to free at every return path, no cleanup sections with goto statements. The pointer is freed whether you return normally, return early due to an error, or even if somehow the code takes an unexpected path. This alone eliminates the majority of memory leaks in typical C programs because most memory management is just malloc and free, or in GLib's case, g_malloc and g_free.
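In practice it reads like this (a small sketch; the function is invented, the GLib calls are real):

#include <glib.h>

static gboolean greet(const char *who) {
    g_autofree gchar *msg = g_strdup_printf("hello, %s", who);
    if (who[0] == '\0')
        return FALSE;     /* g_free(msg) runs on this early return... */
    g_print("%s\n", msg);
    return TRUE;          /* ...and on the normal path too */
}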
The g_autoptr macro is more sophisticated. While g_autofree works for simple pointers to memory, g_autoptr handles complex types that need custom cleanup functions. A file handle needs fclose, a database connection needs a close function, a custom structure might need multiple cleanup steps. The g_autoptr macro takes a type name and automatically calls the appropriate cleanup function registered for that type. This is where GLib shows its maturity because the library has already registered cleanup functions for all its own types. GError structures are freed correctly, GFile objects are unreferenced, GInputStream objects are closed and released. Everything just works.
Behind these macros is something called G_DEFINE_AUTOPTR_CLEANUP_FUNC, which is how you teach GLib about your own types. You write a cleanup function that knows how to properly destroy your structure, then you invoke this macro with your type name and cleanup function, and from that moment forward you can use g_autoptr with your type. The macro generates the necessary glue code that connects the cleanup attribute to your function, handling all the pointer indirection correctly. This is critical because the cleanup attribute passes a pointer to your variable, not the variable itself, which means for a pointer variable it passes a double pointer, and getting this wrong leads to crashes or memory corruption.
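Registering a custom type might look like this (Device and its functions are hypothetical; the macro is real GLib):

#include <glib.h>

typedef struct { int fd; } Device;

static void device_close(Device *d) {
    if (d) {
        /* close d->fd, flush buffers, etc. */
        g_free(d);
    }
}
G_DEFINE_AUTOPTR_CLEANUP_FUNC(Device, device_close)

static void use_device(void) {
    g_autoptr(Device) dev = g_new0(Device, 1);
    /* ... any early return below this point still runs device_close(dev) ... */
}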
The third member of this family is g_auto, which handles stack-allocated types. Some GLib types, like GVariantBuilder, are meant to live on the stack but still need cleanup: the builder internally allocates memory even though the structure itself is on the stack. The g_auto macro ensures that when the structure goes out of scope, its cleanup function runs to free the internal allocations. Heap pointers, complex objects, and stack structures all get automatic cleanup.
What's interesting about this system is how it composes. You can have a function that opens a file, allocates several buffers, creates error objects, and builds complex data structures, and you can simply declare each resource with the appropriate auto macro. If any operation fails and you return early, every resource declared up to that point is automatically cleaned up in reverse order of declaration. This is identical to C++ destructors running in reverse order of construction, but you are writing pure C code that works with any GCC or Clang compiler from the past fifteen years.
The foundation beneath all this is GLib's memory allocation functions. The library provides g_malloc, g_new, g_realloc and friends which are drop-in replacements for the standard C allocation functions. These functions have better error handling because g_malloc never returns NULL. If allocation fails, the program aborts with a clear error message. This might sound extreme but for most applications it is actually the right behavior. When malloc returns NULL in traditional C code, most programmers either do not check it, check it incorrectly, or check it but then do not have a reasonable recovery path anyway. GLib acknowledges this reality and makes the contract explicit: if you cannot allocate memory, the program terminates cleanly rather than stumbling forward into undefined behavior.
Reference counting is another critical component of GLib's memory management, particularly for objects. The GObject system, which is GLib's object system for C, uses reference counting to manage object lifetimes. Every object has a reference count starting at one when created. When you want to keep a reference to an object, you call g_object_ref. When you are done with it, you call g_object_unref. When the reference count reaches zero, the object is automatically destroyed. This is the same model used by shared_ptr in C++ or reference counting in Python, but implemented in pure C.
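The contract in miniature (using a bare GObject for brevity; real code would use a derived type):

#include <glib-object.h>

void demo(void) {
    GObject *obj = g_object_new(G_TYPE_OBJECT, NULL);  /* refcount == 1 */
    g_object_ref(obj);                                 /* refcount == 2 */
    g_object_unref(obj);                               /* refcount == 1 */
    g_object_unref(obj);                               /* refcount == 0: finalized */
}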
This also integrates with the autoptr system. Many GLib types are reference counted, and their cleanup functions simply decrement the reference count. This means you can declare a local variable with g_autoptr, the reference count stays positive while you use it, and when the variable goes out of scope the reference is automatically released. If you were the last holder of that reference, the object is freed. If other parts of the code still hold references, the object stays alive. This solves the resource sharing problem that makes manual memory management so difficult in C.
GLib also provides memory pools through GMemChunk and the newer slice allocator, though the slice allocator is being phased out in favor of standard malloc since modern allocators have improved significantly. The concept was to reduce allocation overhead and fragmentation for programs that allocate many small objects of the same size. You create a pool for objects of a specific size and then allocate from that pool quickly without going through the general purpose allocator. When you are done with all objects from that pool, you can destroy the entire pool at once. This pattern shows up in many high-performance C programs but GLib provided it as a reusable component.
The error handling story in GLib deserves special attention because it demonstrates how automatic cleanup enables better error handling patterns. The GError type is a structure that carries error information including a domain, a code, and a message. Functions that can fail take a GError double pointer as their last parameter. If the function succeeds, it returns true or a valid value and leaves the error NULL. If it fails, it returns false or NULL and allocates a GError with details about what went wrong. The calling code checks the return value and if there was an error, examines the GError for details.
The critical part is that GError is automatically freed when declared with g_autoptr. You can write a function that calls ten different operations, each of which might set an error, and you can check each one and return early if something fails, and the error is automatically freed on all code paths. You never leak the error message string, never double-free it, never forget to free it. This is a massive improvement over traditional C error handling where you either ignore errors or write incredibly tedious cleanup code with goto statements jumping to labels at the end of the function.
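Put together, the pattern reads like this (g_file_get_contents is a real GLib call; the surrounding function is a sketch):

#include <glib.h>

static gboolean load_config(const char *path) {
    g_autoptr(GError) error = NULL;
    g_autofree gchar *contents = NULL;

    if (!g_file_get_contents(path, &contents, NULL, &error)) {
        g_printerr("could not read %s: %s\n", path, error->message);
        return FALSE;   /* error and contents are both freed here */
    }
    /* ... parse contents ... */
    return TRUE;        /* ...and freed here too */
}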
The GNOME developers could have switched to C++ or Rust or any modern language, but instead they invested in making C excellent at what C is good at. They added just enough infrastructure to eliminate the common pitfalls without fundamentally changing the language. A C programmer can read GLib code and understand it immediately because it is still just C. The auto macros are syntactic sugar over a compiler attribute, not a new language feature requiring a custom compiler.
This philosophy aligns pretty well with what the F-35 programmers want: the performance and predictability of C with the safety of automatic resource management. No hidden allocations, no virtual dispatch overhead, no exception unwinding cost, no template instantiation bloat. Just deterministic cleanup that happens exactly when you expect it to happen because it is tied to lexical scope, which is something you can see by reading the code.
I found it sort of surprising that the solution to modern C was not a new language or a massive departure from traditional practices. The cleanup attribute has been in GCC since 2003. Reference counting has been around forever. The innovation was putting these pieces together in a coherent system that feels natural to use and composes well.
Sometimes the right tool is not the newest or most fashionable one, but the one that solves your actual problem with the least additional complexity. GLib proves you can have that feature in C, today, with compilers that have been stable for decades, without giving up the simplicity and predictability that makes C valuable in the first place.
If you look around outside the Linux world, everyone was going into C++: the PC world with OS/2, MS-DOS and Windows, Apple, EPOC (later Symbian), BeOS, ... UNIX was playing with CORBA, OpenInventor, ...
Here is the original version of the GNU recommendation,
"Using a language other than C is like using a non-standard feature: it will cause trouble for users. Even if GCC supports the other language, users may find it inconvenient to have to install the compiler for that other language in order to build your program. So please write in C."
The GNU Coding Standard in 1994, http://web.mit.edu/gnu/doc/html/standards_7.html#SEC12
Moving a bit forward to 1998, when GNOME 1.0 was still being made ready,
"Using a language other than C is like using a non-standard feature: it will cause trouble for users. Even if GCC supports the other language, users may find it inconvenient to have to install the compiler for that other language in order to build your program. For example, if you write your program in C++, people will have to install the C++ compiler in order to compile your program. Thus, it is better if you write in C. "
https://www.ime.usp.br/~jose/standards.html#SEC9
Yes, the current version is a bit more welcoming to programming language variety,
https://www.gnu.org/prep/standards/html_node/Source-Language...
That is of course not to say that exceptions and error codes are the same.
The evidence for this claim was found in testing for the F-35 where it was dogfighting an older F-16. The results of the test were that the F-35 won almost every scenario except one, where a lightweight-fitted F-16 was teleported directly behind an F-35 weighed down by heavy missiles and won the fight. This one loss has spawned hundreds of articles about how the F-35 is junk that can't dogfight.
In the end the F-35 has a lot of fancy features that are not optional for modern operations. The jet has now found enough buyers across the West for economies of scale to kick in, and the cost is about $80 million each, which is cheaper than retrofitting stealth and sensors onto other airframes, like what you get with the F-15EX.
Definitely not a failure.
There have been over 1,200 F-35s built so far, with new ones being built at a rate of about 150 per year. For comparison, that’s nearly as many F-35s built per year as F-22s were built ever, and 1,200 is a large amount for a modern jet fighter. The extremely successful F-15 has seen about that many built since it first entered production over 50 years ago.
That doesn’t mean it must be good, but it’s a strong indicator. Especially since the US isn’t the only customer. Many other countries want it too. Some are shying away from it now, but only for political reasons because the US is no longer seen as a reliable supplier.
In terms of actual capabilities, it’s the best fighter jet out there save for the F-22, which was far more expensive and is no longer being made. It’s relatively cheap, comparable in cost to alternatives like the Gripen or Rafale while being much more capable.
There have been a lot of articles out there about how terrible it is. These fall into a few different categories:
* Reasonable critiques of its high development costs, overruns, and delays, baselessly extrapolated to “it’s bad.”
* Teething problems extrapolated to “it’s terrible” as if these things never get fixed.
* Analyses of outcomes from exercises that misunderstand the purpose and design of exercises. You might see that, say, an F-35 lost against an F-16 in some mock fights. But they’re not going to set up a lot of exercises where the F-35 and F-16 have a realistic engagement. The result of such an exercise would be that the F-16 gets shot out of the sky without ever knowing the F-35 was there. This is uninformative and a waste of time and money. So such a matchup will be done with restrictions that actually make it useful. This might end up in a dogfight, where the F-16 is legitimately superior. This then gets reported as “F-35 worse than F-16,” ignoring the fact that a real situation would have the F-35 victorious long before a dogfight could occur.
* Completely legitimate arguments that fighter jets are last century’s weapons, that drones and missiles are the future, and the F-35 is like the most advanced battleship in 1941: useful, powerful, but rapidly becoming obsolete. This may be true, but if it is, it only means the F-35 wasn’t the right thing to focus on, not that it’s a failure. The aircraft carrier was the decisive weapon of the Pacific war but that didn’t make the Iowa class battleships a failure.
The new 6th generation platforms being rolled out (B-21, F-47, et al) are all pure first-principles drone-warfare native platforms.
From a european perspective, I can tell you that the mood has shifted 180 degrees from "buy American fighters to solidify our ties with the US" to "can't rely on the US for anything which we'll need when the war comes".
Anyhow, a fair assessment is that the program has gone massively over timeline and budget, so in that sense it is a failure. However, the resulting aircraft is very clearly the best in its class, both in absolute capability and in value.
Going forward there's broad awareness in the government that the program management mistakes of the F-35 program cannot be repeated. There's a general consensus that 3 decade long development projects just won't be relevant in a world where drone concepts and similar are evolving rapidly on a year by year basis. There's also awareness the government needs to act more as the integrator that owns the project to avoid lock in issues.
Criticism is fair however: they did probably extend themselves too far with the helmet technology, and I do have concerns about touch screens in cockpits (a touch screen requires you to take your eyes off of a target to move your hand to the right location, rather than locating a button by touch).
I haven’t heard anything particularly bad about the software effort, other than the difficulties they had making the VR/AR helmet work (the component never made it to production afaik).
https://www.nwfdailynews.com/story/news/local/2021/08/02/f-3...
The electrical system performs poorly under short circuit conditions.
https://breakingdefense.com/2024/10/marine-corps-reveals-wha...
They haven't even finished delivering and now have to overhaul the entire fleet due to overheating.
https://nationalsecurityjournal.org/the-f-35-fighters-2-big-...
This program was a complete and total boondoggle. It was entirely the wrong thing to build in peacetime. It was a moonshot for no reason other than to mollify bored generals and greedy congresspeople.
Right now it is also the single most advanced combat airplane built in any numbers that exists anywhere in the world, and it guarantees that the USA will be able to convincingly assert air dominance in any conflict.
Ok, joking aside: If it is considered a failure, what 100B+ military programme has not been considered a failure?
In my totally unqualified opinion, the best cost performance fighter jet in the world is the Saab JAS 39 Gripen. It is very cheap to buy and operate, and has pretty good capabilities. It's a good option for militaries that don't have the infinite money glitch.
How does the code work with timing? It counts cycles?
Yes, I have done this. By the way, these measurements of course have to be part of the certification.
Does it have an upper limit for the longest run, or all branches have to have the same duration? I'm asking because I am curious if the function execution time being a constant is part of the program working correctly (scheduling). Somewhat related to how early programs worked correctly for 4.77 MHz and faster clock on CPU would break the program https://en.wikipedia.org/wiki/Turbo_button
I am working in my free time on code for a controller that has to do time-sensitive operations (actuate a solenoid/injector for a couple of milliseconds), and I am thinking about how to trigger the code so that the timing is accurate.
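On an ARM Cortex-M class part, one common approach is the DWT cycle counter. A minimal sketch, assuming a CMSIS device header (the register and mask names below are the standard CMSIS ones; everything else is invented):

#include <stdint.h>
#include "stm32f4xx.h"   /* assumption: any CMSIS device header providing DWT/CoreDebug */

static void cyccnt_init(void) {
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the trace block */
    DWT->CYCCNT = 0;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;             /* start the cycle counter */
}

static uint32_t cycles_for(void (*work)(void)) {
    uint32_t start = DWT->CYCCNT;
    work();                                 /* e.g. the solenoid actuation path */
    return DWT->CYCCNT - start;             /* unsigned math handles wraparound */
}

That gives sub-microsecond measurement against the core clock, though for firing the solenoid at a precise moment a hardware timer interrupt is usually the better trigger than polled cycle counts.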
At least before we had zero-cost exceptions. These days, I suspect the HFT crowd is back to counting microseconds or milliseconds as trades are being done smarter, not faster.
What leads to better code in terms of understandability and preventing errors: exceptions (what almost every language does) or error codes (like Go)?

Are there folks here who choose to use error codes and forgo exceptions completely?
In C++, which supports both, exceptions are commonly disabled at compile-time for systems code. This is pretty idiomatic, I've never worked on a C++ code base that used exceptions. On the other hand, high-level non-systems C++ code may use exceptions.
Highly recommend checking her other videos out if you like this
The rule is likely speaking to this code.
I was getting to a point in the code. I could tell by a log statement or some such. But I didn't know in what circumstances I was getting there - what path through the code. So I put in something like
char *p = 0;
*p = 1;
in order to cause a core dump. That core dump gave me the stack trace, which let me see how I got there.

But I never checked that in. If I had, I would expect a severe verbal beating at the code review. More to the point, it never made it into release.
Honestly I think that's probably the correct way to write high reliability code.
From https://news.ycombinator.com/item?id=45562815 :
> awesome-safety-critical: https://awesome-safety-critical.readthedocs.io/en/latest/
From "Safe C++ proposal is not being continued" (2025) https://news.ycombinator.com/item?id=45237019 :
> Safe C++ draft: https://safecpp.org/draft.html
Also there are efforts to standardize safe Rust; rust-lang/fls, rustfoundation/safety-critical-rust-consortium
> How does what FLS enables compare to these [unfortunately discontinued] Safe C++ proposals?
- no exceptions
- no recursion
- no malloc()/free() in the inner-loop
It is "C++", but we also follow the same standards. Static memory allocation, no exceptions, no recursion. We don't use templates. We barely use inheritance. It's more like C with classes.
The C++ was atrocious. Home-made reference counting that was thread-dangerous, but depending on what kind of object the multi-multi-multi diamond inheritance would use, sometimes it would increment, sometimes it wouldn't. Entire objects made out of weird inheritance chains. Even the naming system was crazy; "pencilFactory" wasn't a factory for making pencils, it was anything that was made by the factory for pencils. Inheritance rather than composition was very clearly the model; if some other object had function you needed, you would inherit from that also. Which led to some object inheriting from the same class a half-dozen times in all.
The multi-inheritance system was given weird control by objects that, on creation, defined what kinds of objects (from the set of all kinds they actually were) they could be cast to via a special function. But any time someone wanted one that wasn't on that list, they'd just cast to it using C++ anyway. You had to cast, because the functions were all deliberately private - to force you to cast. But not how C++ would expect you to cast, oh no!
Crazy, home made containers that were like Win32 opaque objects; you'd just get a void pointer to the object you wanted, and to get the next one pass that void pointer back in. Obviously trying to copy MS COM with IUnknown and other such home made QueryInterface nonsense, in effect creating their own inheritance system on top of C++.
What I really learned is that it's possible to create systems that maintain years of uptime and keep their frame accuracy even with the most atrocious, utterly insane architecture decisions that make it so clear the original architect was thinking in C the whole time and using C++ to build his own terrible implementation of C++, and THAT'S what he wrote it all in.
Gosh, this was a fun walk down memory lane.
I feel like that's the way to go since you don't obscure control flow. I have also been considering adding assertions like TigerBeetle does
https://github.com/tigerbeetle/tigerbeetle/blob/main/docs/TI...
Some large commercial software systems use C++ exceptions, though.
Until recently, pretty much all implementations seemed to have a global mutex on the throw path. With higher and higher core counts, the affordable throw rate in a process was getting surprisingly low. But the lock is gone in GCC/libstdc++ with glibc. Hopefully the other implementations follow, so that we don't end up with yet another error handling scheme for C++.
You can compile with exceptions enabled, use the STL, but strictly enforce no allocations after initialization. It depends on how strict is the spec you are trying to hit.
The idea of `become` is to signal "I believe this can be tail recursive" and then the compiler is either going to agree and deliver the optimized machine code, or disagree and your program won't compile, so in neither case have you introduced a stack overflow.
Rust's Drop mechanism throws a small spanner into this, in principle if every function foo makes a Goose, and then in most cases calls foo again, we shouldn't Drop each Goose until the functions return, which is too late, that's now our tail instead of the call. So the `become` feature AIUI will spot this, and Drop that Goose early (or refuse to compile) to support the optimization.
But ... that rewrite can increase the cyclomatic complexity of the code on which they have some hard limits, so perhaps that's why it isn't allowed? And the stack overflow, of course.
> no recursion
Does this actually mean no recursion, or does it just mean to limit stack use? Because processing a tree, for example, is recursive even if you use an array instead of the stack to keep track of your progress. The real trick is limiting memory consumption, which requires limiting input size.
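For instance, an iterative traversal with a fixed-depth explicit stack (a sketch; the Node type and the limit are made up) keeps recursion off the call stack, but still has to bound memory by bounding the input:

#include <stddef.h>

typedef struct Node { struct Node *left, *right; } Node;

#define MAX_PENDING 64   /* explicit bound: implies a bound on input size/shape */

int visit_all(Node *root) {
    Node *stack[MAX_PENDING];
    size_t top = 0;
    if (root)
        stack[top++] = root;
    while (top > 0) {
        Node *n = stack[--top];
        /* ... process n ... */
        if (n->left) {
            if (top == MAX_PENDING) return -1;   /* bound exceeded: fail, don't grow */
            stack[top++] = n->left;
        }
        if (n->right) {
            if (top == MAX_PENDING) return -1;
            stack[top++] = n->right;
        }
    }
    return 0;
}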
https://plato.stanford.edu/entries/technology/
https://bpb-us-e2.wpmucdn.com/sites.uci.edu/dist/a/3282/file...
That explains all the delays on the F-35...