IIRC, C was specifically designed to allow single-pass compilation, right? I.e. in many languages you don't know what needs to be output without parsing the full AST, but in C, syntax directly implies semantics. I think I remember hearing this was because early computers couldn't necessarily fit the AST for an entire code file in memory at once
It explains the memory limits and what happened :)
> After the TMG version of B was working, Thompson rewrote B in itself (a bootstrapping step). During development, he continually struggled against memory limitations: each language addition inflated the compiler so it could barely fit, but each rewrite taking advantage of the feature reduced its size. For example, B introduced generalized assignment operators, using x=+y to add y to x. The notation came from Algol 68 [Wijngaarden 75] via McIlroy, who had incorporated it into his version of TMG. (In B and early C, the operator was spelled =+ instead of += ; this mistake, repaired in 1976, was induced by a seductively easy way of handling the first form in B's lexical analyzer.)
Compare that to, say, Rust, which would be pretty painful to single-pass compile with all the non-local behavior around traits.
Once you want to optimize or analyse, things become more complicated.
> Compare that to, say, Rust, which would be pretty painful to single-pass compile with all the non-local behavior around traits.
Type inference also spans a whole function, so you can't do it in a single pass through the code. (But it's still tamer than in eg Haskell, where type inference considers your whole program.)
E.g. to call a function later in the file, you need a prior declaration. Or else, an implicit one (possibly wrong) will be assumed from the call itself.
This is not true in some C++ situations, like class definitions: a class definition can contain functions with bodies (inline functions), and those functions can freely refer to each other in either direction. Type checking a class definition therefore requires all of it to be parsed.
A one pass language is advantageous even if you're building a serious multi-pass compiler with optimization. This is because that exercise doesn't require an AST! Multi-pass doesn't mean AST.
Building an AST doesn't require just more memory, but more code and development work: more complexity in the code to build the abstraction and to traverse it. It's useful if you need to manipulate or analyze the program in ways that are closely related to the source language. If the language cannot be checked in one pass, you might need one; you wouldn't want to be doing the checking on an intermediate representation, where you've lost the relationship to the code. AST building can be reused for other purposes, like code formatting, refactoring, communicating with an IDE for code completion and whatnot.
If the only thing you're going to do with an AST is walk it up and down to do some checks, and then to generate code, and you do all that in an order that could have been done without the AST (like a bottom-up, left to right traversal), then it was kind of a waste to construct it; those checks and generation could have been done as the phrase structure rules were parsed.
At first I thought it wasn't possible for C, but after thinking about it: as long as you disallow forward references and take a single source file as input, it's possible to compile a complete C program in one pass. Anything else requires a preprocessor (e.g. "#include") and/or linker (e.g. "extern" and prototypes) to solve. The implementation in the article dodges all of these and focuses on a very pure subset of C.
My cousin's ex had a workflow in the late 70's that involved two floppy drives and a little dance to compile and link. Later he got a 5M hard drive, which improved things a lot.
block
  ;; code for "i = 0"
  loop
    ;; code for "i < 5"
    i32.eqz
    br_if 1
    i32.const 1
    loop
      if
        ;; code for "i = i + 1"
        br 2
      else
      end
      ;; code for "j = j * 2 + 1"
      i32.const 0
    end
  end
end
It doesn't require cloning the lexer so probably would still fit in 500 lines? But yeah, in normal assembly it's way easier, even in one-pass:

;; code for "i = 0"
.loop_test:
;; code for "i < 5"
jz .loop_end
jmp .loop_body
.loop_incr:
;; code for "i = i + 1"
jmp .loop_test
.loop_body:
;; code for "j = j * 2 + 1"
jmp .loop_incr
.loop_end:
Of course, normally you'd want to re-arrange things like so:

;; code for "i = 0"
jmp .loop_test
.loop_body:
;; code for "j = j * 2 + 1"
.loop_incr:
;; code for "i = i + 1"
.loop_test:
;; code for "i < 5"
jnz .loop_body
.loop_end:
I propose the better loop syntax for languages with one-pass implementations, then: "for (i = 0) { j = j * 2 + 1; } (i = i + 1; i < 5);" :)

https://www.blackhat.com/presentations/win-usa-04/bh-win-04-...
(minus directly emitting opcodes, and fitting into 500 lines, of course.)
I am wondering if this complexity exists only for historical reasons. In other words, if you were to invent C today, would you just define int as always being 32 bits and long as 64, and provide much saner, well-defined rules for how the various data types relate to each other, without losing anything of what makes C a popular low-level language?
You'd lose something because those decisions would be impractical for 8-bit and 16-bit targets (which still exist in the world of embedded programming).
My main point is that a C programmer today is forced to learn dozens of rules just to cater for niche platforms they will probably never target. If you were to separate those two use cases, you could get a much neater C that targets modern 64-bit architectures with all the power of traditional C but a bit less portability.
That's a nice theory and is what we've got, but it falls down in a few places.
The first is that the "int" world has got a bit munged - some platforms make some slightly strange choices for long and short and so you can't always rely on it (although int is usually pretty sensible).
The other is that when doing unsigned maths, rollover is silent, so you generally need to know the exact size at coding time to ensure rollover doesn't happen where you don't expect it.
Together, these mean that you're generally just better off using uint32_t (etc.) all over the place, and you get more predictable results.
IMO, being under X lines of code is part of the readability—10,000 lines of code is hard to approach no matter how readable it otherwise is.
http://www.trs-80.org/tiny-pascal/
I figured out the basics of how a compiler works by going through it line by line.
I didn't see a link to the source in the article, but this seems to be it: https://sourceforge.net/p/tiny-pascal/code/HEAD/tree/NorthSt...
Pressing "build" in Turbo Pascal on my 386sx it was already done before you could even perceive any delay. Instant.
I wonder if this is a good path to becoming an extremely productive developer. If someone spends time developing projects like this, but for different areas... a kernel, a compressor, a renderer, a multimedia/network stack, AI/ML... will that turn a good dev into a 0.1 Bellard?
Off topic, but a log scale might be useful: 0.1 Bellard --> -10 deciBellards. That allows for: 0.001 Bellard --> -30 deciBellards.
Problem: Programmers with negative productivity cannot be represented on the same log scale.
Are they a 4σ programmer, 1σ programmer, -0.5σ programmer, -2σ programmer?
Plus, most people are "average" not negative productivity, and CDFs let you use really fun stuff like Beta Distributions (variable, shapable distributions) and Gamma Distributions (exponential distributions). They're super sweet as far as probability statistics.
[1] https://en.wikipedia.org/wiki/Cumulative_distribution_functi...
[2] https://en.wikipedia.org/wiki/Probability_density_function
This is similar to the problem of price-to-earnings ratio. The ratio goes asymptotic as earnings goes through zero. It would be better to quote earnings-to-price ratio. Another screwy reciprocal unit is miles per gallon for cars.
The feeling of writing code somewhere up in the skies, with no idea how things work underneath, has always really bugged me.
The function of the university in the near future will probably just be to have like-minded curious people to discuss ideas with, and to get a better grasp of what problems need to be solved (specifically scientific ideas, rather than just applying engineering).
The prestige element (specifically of certain universities over others, perhaps not university over high school) is dwindling, and hopefully will be abolished with this new generation.
The people behind this project avoided that caveat by simply not implementing C. Apparently they kept a bit of the syntax but then proceeded to cherry-pick features that suited them and not make an effort to even try to comply with any version of the standard.
Essentially all you need is a grammar, a parser library, and a couple of tree walkers over the AST: one to expand macros and one to convert to assembly.
A production compiler with all its optimisation steps is of course far more complex and more modern languages have many more features, but C is really pretty simple (the K&R book is concise and good!) as it was built to work on the computers of half a century ago.
https://www.amazon.com/500-Lines-Less-Amy-Brown/dp/132987127...
- demystifies compilers, interpreters, linkers/loaders and related systems software, which you now understand. This understanding will no doubt one day help in your debugging efforts;
- elevates you to become a higher level developer: you are now a tool smith who can make their own language if needed (e.g. to create domain specific languages embedded in larger systems you architect).
So congratulations, on top of other forms of abstraction, you have mastered meta-linguistic abstraction (see the latter part of Structure and Interpretation of Computer Programs, preferably the 1st or 2nd ed.).
It was super fun and interesting. But I wouldn't say it was a terribly useful exercise that has greatly enriched me as a programmer.
And somehow I have ended up with a very strong bias against DSLs.
Most DSLs are bad, partially because most people are bad at designing languages.
Embedded DSLs can be quite neat. E.g. Haskell makes it easy to embed DSL-like constructs inside your Haskell code.
(If you squint a bit, the ability to define your own functions in any language goes in that direction of allowing you to define your own mini-sub-language that handles your problem better. Haskell (and the Lisps) just dial that kind of approach up to 11.)
But DSLs are tempting, especially among the more passionate programmers, so at work we've ended up with a lot of them. All of them are tripping hazards.
Not sure most people would get much extra out of writing their own compiler, either.
It takes too much code in Python. (Not a phrase one gets to say often, but it’s generally true for tree processing code.) In, say, SML this sort of thing is wonderfully concise.
C4x86 | 0.6K (very close)
small C (x86) | 3.1K
Ritchie's earliest struct compiler | 2.3K
v7 Unix C compiler | 10.2K
chibicc | 8.4K
Biederman's romcc | 25.0K
This is more of a transpiler than an actual compiler.
Am I missing something?
Nowadays, people generally understand a compiler to be a program that reads, parses, and translates programs from one language to another. The fundamental structure of a machine code compiler and a WebAssembly compiler is virtually identical -- would this project somehow be more of a "real" compiler if instead of generating text it generated binary that encoded the exact same information? Would it become a "real" compiler if someone built a machine that runs on WebAssembly instead of running it virtually?
The popular opinion is that splitting hairs about this is useless, and the definition of a compiler has thus relaxed to include "transpilers" as well as machine code targeting compilers (at least in my dev circles).
> Notably, it doesn't support:
> structs :-( would be possible with more code, the fundamentals were there, I just couldn't squeeze it in
> enums / unions
> preprocessor directives (this would probably be 500 lines by itself...)
> floating point. would also be possible, the wasm_type stuff is in, again just couldn't squeeze it in
> 8 byte types (long/long long or double)
> some other small things like pre/post increments, in-place initialization, etc., which just didn't quite fit
> any sort of standard library or i/o that isn't returning an integer from main()
> casting expressions
(Respect to the author for doing this, I just couldn’t resist the obvious joke)
RatC did not need 500 lines for its preprocessor support, by the way.
https://root.cern.ch/root/html534/guides/users-guide/CINT.ht...