But all these problems are quite doable. Nobody claims that gcc is small, yet I managed to get that working. Compiler makers can follow a few guidelines to make it much easier, see: http://www.dwheeler.com/trusting-trust/dissertation/html/whe... Check out the graph at the Debian Reproducible Builds project at https://wiki.debian.org/ReproducibleBuilds - yes, some packages don't reproduce bit-for-bit, but they've made a tremendous amount of progress and managed it for most packages.
You can see some related information at this page: http://www.dwheeler.com/trusting-trust/ including a video of me discussing the paper.
Our ideal world would be fully reproducible builds with a complete chain of custody. We have some of it, but not the whole kit and caboodle.
But we can't really do this so long as we rely on unreproducible upstream build configurations.
Of course, you then have to convince them what specifically to do. The reproducible builds project has some nice documentation: https://reproducible-builds.org/docs/ and I already mentioned my guidelines: http://www.dwheeler.com/trusting-trust/dissertation/html/whe... . You can also look at specific war stories, such as Tor's: https://blog.torproject.org/blog/deterministic-builds-part-t... or sbcl's: http://christophe.rhodes.io/notes/blog/posts/2014/reproducib...
We can also make it easier. One great thing is that the Debian reproducible builds group has been modifying tools to make it easier to create reproducible builds. That doesn't mean there's nothing left to do, but making it easier makes it way more likely. The "containerization of everything" also has the potential to make life easier - it makes it easier to start from some fixed point, and repeat a sequence of instructions from there.
This is the classic paper on reproducible builds that everybody has been building on since. A better overview: http://www.dwheeler.com/trusting-trust/
Older discussion, 7 years ago:
Lots of builds are recreatable but not reproducible (there is probably a better term of art here). You can go back to a point in time and build the version of the software as it was, but you are not guaranteed to get a bit-for-bit clone. (See https://reproducible-builds.org for a thorough discussion)
The problem is that there are lots of uncontrolled inputs to a build that aren't due to source code or compiler changes. Most famously there are timestamps and random numbers, which mess up all sorts of hashing-based approaches.
These can even be non-obvious. Just the other day a colleague and I were investigating the (small but unsettling) possibility that an old buildpack had been replaced maliciously. We compared the historical hash to the file: different. We rebuilt the historical buildpack with trusted inputs: still different.
Then we unzipped both versions and diff'd the directories: identical.
What had thrown our hashes off was that zipfiles, by default, include timestamps. We have a build that is recreatable but not reproducible.
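This zip-timestamp pitfall is easy to demonstrate. Here's a minimal Python sketch (the file name and dates are made up) showing two archives with identical contents hashing differently until the timestamps are pinned:

```python
import hashlib
import io
import zipfile

def zip_bytes(content: bytes, date_time) -> bytes:
    """Build an in-memory zip holding one file, with an explicit timestamp."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        info = zipfile.ZipInfo("app/main.txt", date_time=date_time)
        zf.writestr(info, content)
    return buf.getvalue()

payload = b"identical payload in both archives\n"

# Same content, archived at two different times:
a = zip_bytes(payload, (2016, 1, 1, 0, 0, 0))
b = zip_bytes(payload, (2016, 6, 15, 12, 30, 0))
print(hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest())  # False

# Pinning the timestamp (e.g. to zip's 1980 epoch) restores reproducibility:
c = zip_bytes(payload, (1980, 1, 1, 0, 0, 0))
d = zip_bytes(payload, (1980, 1, 1, 0, 0, 0))
print(hashlib.sha256(c).hexdigest() == hashlib.sha256(d).hexdigest())  # True
```

This is essentially what TorrentZip-style tools do: normalize the metadata so that identical contents always yield identical archives.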
Speaking of builds, we are able to reproducibly build some binaries but not others. Off the top of my head our most high-profile non-reproducible build is NodeJS. Some other binaries (Ruby and Python, in my not-at-all-complete recollection) are fully reproducible.
This difficulty with fully reproducing makes it hard to provide a fully trustworthy chain of custody. A company which uses Cloud Foundry has in fact stood up an independent copy of our build pipelines inside their own secure network, so that they can be completely autarkic for the build steps leading to a complete buildpack. This doesn't defend against malicious source, but it defends against malicious builds.
Disclosure: I work for Pivotal, the majority donor of engineering to Cloud Foundry. As you've probably guessed, I'm currently a fulltime contributor on the buildpacks team.
The biggest conceptual mistake we are making is that by default compilers always build for ~this~ machine, linking to these libraries. This means the state of the machine inherently changes with every compilation (i.e., compiling is no longer a purely functional operation). If I could go back in time and change automake and glibc, cross-compiling and explicit dependency handling would be the norm. (As an aside, containers would greatly benefit too, as you wouldn't need to package an entire Linux distribution with every binary.)
I am sometimes amazed, sometimes disappointed by this reproducibility problem. Computers are supposed to be machines that can do the same thing again and again without a mistake, but this is no longer the case. We have so many layers of complexity, and everything is bolted together with duct tape. We focus on developer convenience in the short term, but in the long term we completely lose determinism. Sure, we can write more code faster than before, but building software is more problematic than ever.
Yet, somehow, everything seems to be going in this direction; in fact some people celebrate it and compare it to biology or evolution. I just call it "accidentally stochastic computing".
Creating life is scientifically exhilarating, but incredibly dangerous.
(Some might find the documentation for the `guix challenge` command interesting: https://www.gnu.org/software/guix/manual/html_node/Invoking-...)
[0] https://www.gnu.org/software/guix/ [1] https://nixos.org/nix/
The one that springs to mind is video game emulation, from more than a decade ago - the communities there needed a means to reproducibly compress their ROMs, both for verifying dumps and for sharing large sets of ROMs. The tools created have been through many iterations, but they're still in use today in the form of TorrentZip[0] and various relatives like torrent7z.
"Investigate how we can allow users to independently verify/authenticate a final buildpack" (https://www.pivotaltracker.com/story/show/104469634)
"Explore: Compiled binaries should be reproducible" (https://www.pivotaltracker.com/story/show/104746074)
"determine whether the libfaketime reproducible build strategy will work across all of our binaries" (https://www.pivotaltracker.com/story/show/107752798)
"Investigate Why are our node builds not reproducible?" (https://www.pivotaltracker.com/story/show/128161137)
As well as supporting work to help independent verification of the "chain of custody". There are 25 of those under that label, if you use the search box.
The good part is that you have a clear goal.
I've been wondering if perhaps there's a "trusting trust" problem where we have 3d-printers that can print circuit-boards (eg CPUs)... and also print 3d-printers. The printer is sort of like a compiler in this case. How do you know it won't produce printers that will produce malicious CPUs? It might not be easy to do "bit-for-bit" comparisons between CPUs to make sure one is safe.
Since trusted compilers on untrusted hardware aren't trustworthy, I had hoped that 3d-printing might eventually allow us to trust our own printed hardware... but it might be turtles all the way down!
I can answer that question easily: "Yes, there's a problem :-)". DDC can be used to help, but there are caveats.
I talk about applying DDC to hardware in section 8.12: http://www.dwheeler.com/trusting-trust/dissertation/html/whe...
This quote is probably especially apt: "Countering this attack may be especially relevant for 3-D printers that can reproduce many of their own parts. An example of such a 3-D printer is the Replicating Rapid-prototyper (RepRap), a machine that can “print” many hardware items including many of the parts required to build a copy of the RepRap [Gaudin2008]. The primary goal of the RepRap project, according to its project website, is to “create and to give away a makes-useful-stuff machine that, among other things, allows its owner [to] cheaply and easily… make another such machine for someone else” [RepRap2009]."
I also note that, "... DDC can be applied to ICs to detect a hardware-based trusting trust attack. However, note that there are some important challenges when applying DDC to ICs..." and there's the additional problem that when you're done, you'll only have verified that specific printer - not a different one. Determining if software is bit-for-bit identical is easy; determining if two pieces of hardware are logically identical is not (in the general case) easy. No two hardware items are perfectly identical, and it's tricky to determine if something is an irrelevant difference or not.
If someone wants to write a follow-on paper focusing on hardware, I'd be delighted to read it :-).
https://news.ycombinator.com/item?id=12667632
I try to be clear in there that I'm not rebutting your work, which I respect, so much as the extreme amount of focus by all the IT forums on just a few problems that cause almost no failures in the real world vs the ones that cause all sorts of compiler & app issues (including security). I also present an alternative that addresses root causes, leveraging two of your own articles, that knocks out the interception risk as a side effect of just doing the computation and distribution securely.
You can see some of my comments below. This discussion is a little subtle, because I agree with you that many other activities need to be done, and you respect my work (thanks!). I don't think there's an "extreme" focus; to my knowledge nearly no money has been spent on DDC, and the amount of funding spent on reproducible builds is dwarfed by the funding spent on formal methods and related work (by orders of magnitude). I think the "focus" that you see is more the excitement about things that can be done now, without radical retooling.
e.g. we have compilers A and B, either of which may or may not have back doors. Is it possible to detect back doors via compilation?
A complex backdoor can detect compilation of A or B, and insert different backdoor code into each. But will those back doors be identical?
i.e. is there a compilation path (A -> B' -> B'', etc.) such that we can either detect that the backdoors are bit-for-bit identical, or that there are no back doors?
DDC works even if the trusted compiler has back doors and other nastiness - as long as the back doors won't affect the DDC results. This is noted in section 4.3: "... something is “trusted” if we have justified confidence that it does not have triggers and payloads that would affect the results of DDC. A trusted program or process may have triggers and payloads, as long as they do not affect the result. A trusted program or process may have defects, though as shown later, any defects that affect its result in DDC are likely to be detected. Methods to increase the level of confidence are discussed in chapter 6." http://www.dwheeler.com/trusting-trust/dissertation/html/whe...
Chapter 6 discusses various ways to make this likely: http://www.dwheeler.com/trusting-trust/dissertation/html/whe... While you're doing tests, it's probably best to do them on an isolated system or network. One interesting approach is to use old computers to compile newer compilers (possibly through emulation). It's not likely that the older computers will have malicious attacks or backdoors that will work against newer compilers.
You can also apply it multiple times using multiple different trusted compilers. In that case, an attack would have had to subvert all of those compilers (and/or their environment). This quickly becomes vanishingly unlikely.
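For intuition, here's a toy Python model of DDC. The compilers, payload, and "binaries" are all made-up stand-ins, not real tooling: a suspect binary is regenerated via an independent trusted compiler, and any Thompson-style self-replicating payload shows up as a bit-for-bit mismatch.

```python
import hashlib
from typing import NamedTuple, Optional

class Binary(NamedTuple):
    code: str               # determined solely by the source that was compiled
    payload: Optional[str]  # Thompson-style self-replicating backdoor, if any

def compile_with(compiler: Binary, source: str) -> Binary:
    """Toy deterministic compilation: a clean compiler's output depends only
    on the source; a subverted one re-inserts its payload whenever it
    recognizes that it's compiling compiler source."""
    code = hashlib.sha256(source.encode()).hexdigest()
    payload = compiler.payload if "compiler" in source else None
    return Binary(code, payload)

COMPILER_SRC = "source of the compiler"  # contains the trigger word "compiler"

clean = compile_with(Binary("bootstrap", None), COMPILER_SRC)
subverted = Binary(clean.code, "login-backdoor")  # same code, hidden payload
trusted = Binary("independent-compiler", None)    # e.g. an old, unrelated compiler

def ddc(suspect: Binary, source: str, trusted: Binary) -> bool:
    """Diverse double-compilation: regenerate the suspect via the trusted
    compiler and compare bit-for-bit with the suspect's self-build."""
    stage1 = compile_with(trusted, source)  # functionally = suspect, different bits
    stage2 = compile_with(stage1, source)   # should equal the self-built suspect
    self_built = compile_with(suspect, source)
    return stage2 == self_built

print(ddc(clean, COMPILER_SRC, trusted))      # True: nothing hidden
print(ddc(subverted, COMPILER_SRC, trusted))  # False: payload exposed
```

The model also shows why reproducibility is a prerequisite: if compilation weren't deterministic, the final bit-for-bit comparison would be meaningless.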
Just wanted to mention we are now having regular IRC meetings:
https://lists.reproducible-builds.org/pipermail/rb-general/2...
Happy hunting!
Here's a quick enumeration of the problems, in case people wonder why I gripe about this and the reproducible-builds fad:
1. What the compiler does needs to be fully specified and correct to ensure security.
2. The implementation of it in the language should conform to that spec or simply be correct itself.
3. No backdoors are in the compiler, the compilation process, etc. This must be easy to show.
4. The optimizations used don't break security/correctness.
5. The compiler can parse malicious input without code injection resulting.
6. The compilation of the compiler itself follows all of the above.
7. The resulting binary that everyone has is the same one, matching the source - with the same correct or malicious function, but with no malicious stuff added that isn't already in the source code. This equivalence is what everyone in the mainstream is focusing on. I already made an exception for Wheeler himself, given he did this and root-cause work.
8. The resulting binary will then be used on systems developed without mitigating problems above to compile other apps not mitigating problems above.
So, that's a big pile of problems. The Thompson attack, countering the Thompson attack, or reproducible builds collectively address the tiniest problem vs all the problems people actually encounter with compilers and compiler distribution. There are teams working on the latter that have produced nice solutions to a bunch of them. VLISP, FLINT, the assembly-to-LISP-to-HLL project, & CakeML-to-ASM come to mind. There are commercial products, like CompCert, available as well. Very little by the mainstream in FOSS or proprietary.
The "easy" approach to solve most of the real problem is a certifying compiler in a safe language, bootstrapped on a simple, local one whose source is distributed via secure SCM. In this case, you do not need a reproducible build in the vast majority of cases, since you've verified the source itself and have a verifying compiler to ASM. You'll even benefit from having no fixed binary, since your compiler can optimize the source for your machine or even add extra security to it (a la Softbound+CETS). Alternatively, you can get the binary that everyone can check via signatures on the secure SCM. You can even do reproducible builds on top of my scheme for the added assurance you get in reproducing bugs or correctness of specific compilations. Core assurance... 80/20 rule... comes from doing a compiler that's correct-by-construction as much as possible, easy for humans to review for backdoors, and on a secure repo & distribution system.
Meanwhile, the big problems are ignored, and these little, tactical solutions to smaller problems keep getting lots of attention. The same thing happened in the time frame between Karger and Thompson with Karger et al's other recommendations for building secure systems. We saw where that went in terms of the baseline of INFOSEC we had for decades. ;)
Note: I can provide links on request to definitive works on subversion, SCM, compiler correctness, whatever. I think the summary in this comment should be clear. Hopefully.
Note 2: Anyone that doubts I'm right can try an empirical approach of looking at bugs, vulnerabilities and compromises published for both GCC and things compiled with it. Look for number of times they said, "We were owned by the damned Thompson attack. If only we countered it with diverse, double compilation or reproducible builds." Compare that to failures in other areas on my list. How unimportant this stuff is vs higher-priority criteria should be self-evident at that point. And empirically proven.
Following Wirth's Oberon and VLISP Scheme, the easiest route is to leverage one of those in a layered process. Scheme, esp PreScheme, is easiest, but I know imperative programmers hate LISPs no matter how simple. So, I include a simple, imperative option.
So, here's the LISP example. You build an initial interpreter or AOT compiler with basic elements, macros, and assembly code. Easy to verify by eye or testing. You piece-by-piece build other features on top of it in isolated chunks, using the original representation, until you get a real language. You rewrite each chunk in the real language and integrate them. That's the first real compiler, compiled with the one you built piece by piece, starting from a root of trust that was a tiny, static LISP with matching ASM. You can use the first real compiler for everything else.
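To make the "tiny, easy-to-verify root of trust" idea concrete, here's a minimal S-expression evaluator sketch in Python, standing in for the initial LISP. (The real thing would pair each primitive with matching assembly; this is just to show how small the auditable core can be.)

```python
# Minimal S-expression evaluator: the whole core fits on one screen,
# so it can be audited by eye before anything larger is bootstrapped on it.
import operator

def tokenize(src: str):
    """Split source into tokens, treating parentheses as separate tokens."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Read one expression (an int, a symbol, or a nested list) from tokens."""
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    try:
        return int(tok)
    except ValueError:
        return tok  # a symbol

# The entire "standard library" of the bootstrap core:
ENV = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr, env=ENV):
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, *args = [evaluate(e, env) for e in expr]
    return op(*args)

print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))  # 7
```

Each later layer (macros, definitions, a real compiler) gets built in terms of a core this size, so trust only ever has to be placed in something a human can actually read.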
Wirth did something similar out of necessity with P-code and Lilith. In the P-code days, people needed compilers and standard libraries but couldn't write them. They could write basic system code on their OS's. So, he devised an idealized assembly that could be implemented by anyone in almost no code, with just some OS hooks for I/O etc. Then, he modified his Pascal compiler to emit P-code. So, ports & bootstrapping just required implementing one thing. It got ported to 70+ architectures/platforms in 2 years as a result.
The imperative strategy for anti-subversion is similar. Start with idealized, safe, abstract machine along lines of P-code with ASM implementations. Initial language might be Oberon subset with LISP or similar syntax just for effortless parsing. Initial compiler done in high-level language for human inspection with code side-by-side in subset language for that idealized ASM. It's designed to match high-level language, too. Create initial compiler that way then extend, check, compile, repeat just like Scheme version.
The simple, easy code of the initial compilers and high-level language for final compilers means anyone can knock them off in about any language. That will increase diversity across the board as many languages, runtimes, stdlibs, etc are implemented quite differently. Reproducible build techniques can be used on the source code and initial process of compilation if one likes. The real security, though, will be that many people reviewed the bootstrapping code, the ZIP file is hashed/signed, and users can check that source ZIP they acquired and what was reviewed match. Then they just compile and install it.
I encourage continued research work on solving steps 1-8 (and some related ones, too). As you noted, I've advocated for work on many of them, and have written about them. I'm no stranger to SCM security or formal methods :-). But many of these steps are notoriously difficult. CompCert is a great step forward in the research community. But CompCert only supports a subset of the C99 programming language, is proprietary, and produces executable code that performs significantly worse than the freely available gcc and clang/llvm when used at their usual optimization levels (-O2). Neither gcc nor clang/llvm is ever going to be supplanted by a subset proprietary compiler that generates poorly-performing code. In addition, where are the rest of the compilers, for all the rest of the programming languages? I don't think the ECMAScript ES6 specification meets point #1, and you can't get the rest until you have point #1.
When you say, "The implementation of it in the language should conform to that spec or simply be correct itself." - sounds good. I guess you'll use formal proofs. How do you know that those formal methods tools aren't subverted?
> The "easy" approach to solve most of the real problem is a certifying compiler in a safe language bootstrapped on a simple, local one whose source is distributed via secure SCM. Most people do not have access to a "certifying compiler", and in any case, that raises the question of how you certify that compiler. DDC lets you break the cycle.
Reproducible builds and DDC only solve specific problems, but they solve them in ways that can be implemented relatively quickly, at relatively low cost, and can use today's tools. I think that is what's exciting about them - they're much more obviously within reach. Yes, they do require some tweaks to build processes and build tools. But they don't require everyone to radically switch to totally unfamiliar approaches (e.g., to formal methods). And there are subverted binaries attacks - quite a number of them - which can be countered by reproducible builds. DDC can be viewed as the next step in reproducible builds - enabling detection of subverted binaries when a compiler/build tool is involved. I think there's no shame in solving focused security problems today, even if they don't solve absolutely everything; we should celebrate the successes we can get.
> Meanwhile, the big problems are ignored and these little, tactical solutions to smaller problems keep getting lots of attention. I don't think they're being ignored, but you should expect that projects are less likely to get funding if they require $$BIGNUM, take a long time, and have a significant risk of not turning into something people widely use. We don't all have $$BIGNUM. I think humanity can both work to fix smaller problems (with shorter-term payback) and work to make progress on bigger & harder problems.
This is the crux of my complaint. I know we plus some people here believe that. I don't think the majority do in practice, though. When it's compiler risk, the Thompson paper, yours, and reproducible builds are about all anyone talks about on the majority of forums. There's also warnings about optimizations reducing security or compilers just screwing up with solution being look at assembly. There's been virtually no interest outside high-assurance and functional programming fields in building robust compilers using whatever techniques can be done or addressing common security risks.
I think that's why you can make some of your later claims about what's out there for various languages. Virtually nobody cares enough to put time or money into it. They'll happily use crap compilers that are potentially subverted or full of 0-days so long as the binary was reproducible. That's the root problem on the demand side. At least we got the CompSci people staying on it, FOSSing key results. Look up K Framework's KCC compiler & C semantics for what a great reference might look like. Designed to look like GCC for usability, passed 99.2% of the torture suite by 2012, and didn't get locked up like CompCert. Yay!
http://www.kframework.org/index.php/Main_Page
http://fsl.cs.illinois.edu/pubs/ellison-rosu-2012-popl.pdf
" But CompCert only supports a subset of the C99 programming language, is proprietary, and produces executable code that performs significantly worse than Freely-available gcc and clang/llvm when used at their usual optimization levels (-O2)"
It's true. The proper response would've been to do a non-formal, open-source knockoff of CompCert or a similar tool in a safe, simple language. Then start adding optimizations. Most FOSS people in compilers didn't try, but there are some more CompSci teams working on it. The QBE backend was exciting, though, as a starting point for the first C compiler in the chain, if re-written with medium-assurance techniques.
" I guess you'll use formal proofs. How do you know that those formal methods tools aren't subverted?"
Oh, I saw that coming, cuz it worried me too. Let me present to you a page listing work to knock about everything related to formal methods out of the TCB, except the specs & logics themselves:
https://www.cl.cam.ac.uk/~mom22/
https://www.cs.utexas.edu/users/jared/milawa/Web/
Just follow... all... of his and his associates' links. They're all great. Quick summary: verified LISP 1.5 to machine code in HOL, verified SML (CakeML) the same way, translation validation that sort of compiles specs to assembly in HOL, a theorem prover composed of simpler provers whose root of trust starts with their verified LISP/ML/assembly tooling, an extraction mechanism in Coq or HOL to ML that's untrusted, HOL in HOL, a HOL Light designed to be tiny + use verified pieces, and now a HOL-to-hardware compiler in the works. These people are on a win streak.
I included specifically the paper that addresses your concern in addition to Myreen's page. It does for provers what my scheme would do for compilers by working from simple to complex with each step proving the next + verified implementations. They then do extra step of converting each layer to preceding layers to verify with those. It's remarkably thorough with you basically trusting simple specifications and implementations at each step + a tiny, first step. Were you as impressed reading it as I was that they both tackled the prover problem and were that thorough?
Note: It's for a FOL. K framework, some certified compilers, hardware verification, an Alloy-to-FOL scheme, Praxis's method, and Design-by-Contract are among stuff I found done in FOL. Already useful. Plus, I found a HOL-to-FOL converter proven in HOL. Just do that or the HOL/Light stuff in FOL... if possible... then we're from specs down to machine code with verified extraction & tiny TCB in Jared's link.
"And there are subverted binaries attacks - quite a number of them - which can be countered by reproducible builds."
This one I'm more open to consideration. The question being: if you have to build it to check in the first place, why aren't we countering the binary risk by solving the source distribution and authentication process? Users vet that it was the right source & makefile, then produce their own binary from it using local tools that they've hopefully checked. No longer a binary subversion risk. It was the requirement for A1-class systems. The source authentication aspect is very easy to implement: (a) have a central location with the source, (b) replicate it to others where someone at least looks at it, (c) matching hashes for the source files, and (d) signed archives of it available at multiple locations for comparison. I also like to point out that one needs something like this anyway for source-level security, signing/checking distributed items is already a thing, and people constantly develop more SCM bullshit for fun. Seemed low friction.
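Steps (c) and (d) really are low friction. Here's a hedged Python sketch (the file and "mirror" hashes are hypothetical placeholders) of checking a downloaded source archive against hashes published at multiple independent locations before building it locally:

```python
import hashlib
import os
import tempfile

def sha256_file(path: str) -> str:
    """Hash a file incrementally, so large source tarballs are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, published_hashes: list) -> bool:
    """Accept only if every independently published hash agrees with ours."""
    local = sha256_file(path)
    return bool(published_hashes) and all(h == local for h in published_hashes)

# Demo with a throwaway file standing in for a source tarball:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend this is compiler-1.0.tar.gz")
    path = f.name

mirror_hashes = [sha256_file(path), sha256_file(path)]  # two "mirrors" agree
print(verify(path, mirror_hashes))                  # True
print(verify(path, mirror_hashes + ["deadbeef"]))   # False: one mirror disagrees
os.unlink(path)
```

In practice you'd also check detached signatures (e.g. with gpg) against keys obtained out of band; the point is that diverse, cross-location agreement is cheap to check with tools every distro already ships.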
So, at that point, you are working only with stuff that wasn't MITM'd. Getting backdoored at that point means the problem was in the source supplied or in an existing binary (e.g. a compiler) on your system. The former requires the verifiable compilers I was advocating, with source checking & distribution as above. The latter, since distribution was secure, usually means the user is already compromised: if you find a subverted, subverting compiler on your machine, you erase that system, maybe buy new hardware given firmware-level attacks these days, reinstall the shit, carefully get a new compiler (preferably source), and reinstall it. They'll have to get the source or binary of the replacement compiler from a trusted place, vet that it came from there, build if source, apply an act of faith to believe it's not malicious, and then run it.
It's just not buying a lot vs secure distribution with diverse checking of source and testing of the binary. Diverse source checking against hashes and public keys is also super easy with the prerequisite tools on most Linux distros I've seen. From there, the source or local tool might also be malicious. From there, you can do a C interpreter with checks, compile it with Softbound+CETS followed by strong sandboxing, whatever you want to do if avoiding building a whole certifying compiler. Thing is, all of these still have a compiler in the root of trust whose code might be incorrect or malicious even if the binary output matches. I'm wanting to see a Hacker News post with 200-500 votes on countering that with an open-source project. Other than QBE and TCC, which were a nice start, got my props, and I found here. :)
Note: I have other middle-ground ideas, too. Maybe also rewriting an optimizing compiler like GCC, Clang, or SGI's OSS one to use just a subset of C that a simple interpreter or C compiler can take. One a user can verify by eye or hand, or code themselves. Bug-for-bug matching to get that initial bootstrap. Maybe even pay AbsInt to use CompCert one time on the initial bootstrap, with a simple OSS knockoff allowed just for verification of that. There's precedent with TrustInSoft and PolarSSL. Still gonna need to rewrite them over time for easier verification of correctness, the effect of optimizations on security features, and covert/side channel production. Inevitable. Might as well start while top compilers are only (absurdly large number) lines of code instead of (that number + similarly large number) later.