[1]: https://www.youtube.com/playlist?list=PLbtzT1TYeoMhTPzyTZboW...
The first lab: https://pdos.csail.mit.edu/6.1810/2023/labs/util.html
I do plan to work through the whole thing myself again at some point soon.
I heard the first version is close to VMS. Is it true? Is there any material on VMS? OpenVMS is open sourced but the version is too high.
So no: NT is nothing at all like VMS from a feature-set or likeness perspective, and nothing like it from an implementation perspective (VMS had a beautifully clean C standard library and compiler suite). It was very much the mini crowd's race to the bottom to play games in the micro world.
Check Dave Plummer's interview with Cutler on YouTube for more color.
I took him as an inspiration. His "What I really wanted to do was work on computers, not apply them to problems" rings so true with me, and his being a natural leader also makes me look up to him.
> I heard the first version is close to VMS.
No.
> Is it true?
No.
> Is there any material on VMS?
Tons. It came with metres of docs. It is still on sale. You can get a freebie hobbyist VM image.
> OpenVMS is open sourced
No, it isn't.
> but the version is too high.
Not true.
Always, always check your assumptions before asking a question. If you want to ask "because (a), can I (b)?", then check that (a) is true before asking. ALWAYS.
As someone who missed this part of history, I would love to have learned about it in college.
Basically a rather simple Makefile, gcc, qemu, xorriso...
I see lots of open-source hobbyist OS projects that require you to build your own GCC, which is not fun. Xv6 works with the standard gcc provided by your distro's package manager. On macOS: just install the homebrew-i386-elf-toolchain and tweak a few lines in the Makefile. Done. You are ready to build Xv6, which should take about a minute or less.
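For concreteness, the macOS setup described above can be sketched as follows. This is a hedged sketch: the exact Homebrew tap/formula name and the toolchain prefix vary by xv6 version (newer 6.1810 labs target RISC-V and need a riscv64 toolchain instead), so verify the names before running.

```shell
# Install a cross toolchain via Homebrew (formula name is an assumption
# based on the i386-elf tap mentioned above; check the tap's README).
brew install i386-elf-gcc

# In the xv6 Makefile, point the build at the cross tools, e.g.:
#   TOOLPREFIX = i386-elf-
# then build and boot under QEMU:
make
make qemu
```

The `TOOLPREFIX` variable is the usual knob the xv6 Makefile exposes for exactly this kind of cross-compilation tweak.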
The main difference between microkernels and monolithic kernels is how the address space is shared between userland and the kernel. I don't see how a microkernel design would be "better". Why teach a design that isn't widely used?
Old is not always a problem, but monolithic kernels face the same problems they did in the '80s. It's not surprising that Apple is moving away from kernel drivers and toward userspace functionality.
In theory, I like the concept of functional core, imperative shell - the imperative shell provides various functions as a kind of APIs, and the functional core handles all the business logic that involves the connections between the APIs. (It's also sometimes called hexagonal architecture.)
However, it is questionable whether it actually reduces complexity; I daresay it doesn't. Every interaction of different shell APIs (or even every interaction that serves a certain purpose) needs a controller in the core that makes decisions and mediates this interaction.
So when you split it up, you end up with more bureaucracy (something needs to call these APIs in between all the services), which brings additional overhead, and it's not clear whether the system as a whole actually becomes easier to understand. There might also be some benefit in testability, but it's unclear whether that helps much, because most of the bugs then move into the functional core making wrong decisions.
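For concreteness, a minimal sketch of the pattern under discussion (all names here are invented for illustration): the core makes a pure decision, the shell performs the side effects, and the "bureaucracy" is the glue wiring them together.

```python
# Functional core, imperative shell -- minimal illustrative sketch.

def decide_discount_percent(order_total, is_member):
    """Functional core: pure business logic, no I/O, trivially testable."""
    if is_member and order_total >= 100:
        return 10
    return 0

def checkout(order_total, is_member, charge):
    """Imperative shell: mediates between the core's decision and a
    side-effecting API (`charge` stands in for a payment call)."""
    pct = decide_discount_percent(order_total, is_member)
    charge(order_total * (100 - pct) // 100)

# The shell is exercised with a fake side effect; the core is tested directly.
charged = []
checkout(120, True, charged.append)
print(charged)  # [108]
```

Note where the complexity went: the core's decision is easy to unit-test, but a wrong decision there still produces a wrong charge, which is the objection raised above.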
https://git.minix3.org/index.cgi seems online?
https://en.wikipedia.org/wiki/A_Commentary_on_the_UNIX_Opera...
> a microkernel design would be better.
Why re-hash a 30-year-old debate? [0]
0: https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_deb...
It’s a bare metal Free Pascal environment for Raspberry Pi.