In agriculture, we have the doomsday seed vault [0] for exactly this purpose. If we anticipate a collapse of the current economic system or society, I think we should build a doomsday computer vault that keeps everything we need to rebuild the computing industry in an underground bunker. It keeps everything we need in a controlled environment: 8080s, Z80s, m68ks, motherboards, logic chips, I/O controllers, ROM/RAM, generic electronic parts, soldering irons, oscilloscopes, logic analyzers, schematics, documentation, textbooks, software. We also keep some complete, standalone computer systems such as desktops and laptops, and all the parts needed to service them. We also need to preserve the old semiconductor production lines around the world, though probably not in the same bunker. Even if we fail to build better systems, 8080s are already useful enough!
Meanwhile, in peacetime, we need to form a team of experts to draw up a roadmap for rebootstrapping computing technology using parts from the bunker: a step-by-step plan that can be easily followed and executed.
[0] https://en.wikipedia.org/wiki/Svalbard_Global_Seed_Vault
Indeed. It takes a civilisation to build an iPhone.
I don't think people appreciate that even within the highest-tech manufacturing industries there is a lot of tacit knowledge. People shake their fists about "technology transfer" to China, and before that Japan; but it has taken them decades to reach parity. And that's with running, copyable examples and all the parts of an existing supply chain widely available. Similarly, the process for making a nuclear bomb can be written in a short paper, but few countries have successfully replicated it.
"Post-collapse recovery" and "technology transfer" are the same problem, except that post-collapse recovery is cribbing from a dead example rather than a live one and in much worse circumstances.
Collapse recovery is a fun little competence fantasy to play out in your own head. Like "the rapture" for atheists. But within our lifetimes, we have to put in the work to avoid the collapse.
The point here is that we don't need to build an iPhone, we only need a radio. Building an 8080 is much simpler: the USSR did it, East Germany did it, China did it, all around the same time and without too much difficulty. It would certainly be much more difficult if the current civilization collapsed, but I think the author doesn't anticipate a total collapse, just a breakdown of the current economic system, so it should be doable.
> Similarly the process for making a nuclear bomb can be written in a short paper, but few countries have successfully replicated it.
My understanding is that the physics of achieving the nuclear explosion itself is relatively straightforward. The real difficulties are producing the weapons-grade materials and turning the explosion into a useful weapon, all under external sanctions and even sabotage.
Taiwan had a nuclear weapons project in the 1970s and made significant progress early on; if the U.S. hadn't discovered it and dismantled everything, it would have been interesting to see how it turned out.
1: https://www.theguardian.com/environment/2017/may/19/arctic-s...
This website got posted to HN in the past, definitely something that should be part of the attempt.
Especially since the people who restart from a Z80 will hit technological edges as their skills and needs grow. We faced the same in the '80s: going from 8-bit to 16, then 32; memory-addressing problems; being pushed onto new and barely compatible architectures; having programs to rewrite...
Then why not leave behind us a technology, still simple, but one that will spare them most of these issues?
I feel an 8-bit data bus and a 32-bit address bus would be a good way to make long-lived programs, edge-free when extending memory, while keeping processors and main boards reasonably simple. The address bus does not need to be fully wired in the first processor versions, so it can scale over time to more complex processors as skills and needs grow.
Besides, it would be smart to leave in a vault some kind of FPGA technology, with the sources needed to flash the FPGA components. Then there is no need for a production line for many different integrated components: only one output, and the components are specialized by flashing them.
Indeed, even microprocessors can be implemented on an FPGA.
Well, just ideas...
FPGA: https://en.wikipedia.org/wiki/Field-programmable_gate_array
Low-tech CPU to support 32-bit code: https://en.wikipedia.org/wiki/Motorola_68008 (8-bit data bus, 20 address lines, 32-bit programs, ~70,000 transistors)
Microprocessor on an FPGA: https://en.wikipedia.org/wiki/Soft_microprocessor
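To make the soft-microprocessor idea concrete, here's a minimal sketch in C of what a CPU boils down to: a fetch-decode-execute loop. The 4-instruction machine below is invented purely for illustration (not any real ISA); the point is that something this small is what gets described in an HDL and flashed onto the FPGA.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy fetch-decode-execute loop for an invented 4-instruction machine.
     * LOADI shifts the old accumulator into tmp, then loads an immediate. */
    enum { HALT, LOADI, ADD, PRINT };

    int main(void) {
        uint8_t mem[] = { LOADI, 40, LOADI, 2, ADD, PRINT, HALT };
        uint32_t acc = 0, tmp = 0, pc = 0;   /* 32-bit registers, 8-bit memory */
        for (;;) {
            uint8_t op = mem[pc++];          /* fetch */
            switch (op) {                    /* decode + execute */
            case LOADI: tmp = acc; acc = mem[pc++]; break;
            case ADD:   acc += tmp;          break;
            case PRINT: printf("%u\n", acc); break;
            case HALT:  return 0;
            }
        }
    }

Running it prints 42. A soft core on an FPGA is this same loop expressed as gates and registers instead of C.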
While the focus is on the data, it also necessarily involves occasional access to original hardware, which is often also stored for curatorial purposes in its own right.
What you need is a super-deep immortal vault that isn't anywhere near any known fault lines, whose ingress point is sufficiently above ground, which can be accessed safely even if under water, and which supplies its own electricity for thousands of years.
Everything outside the facility will have to be electrically neutral up to extreme voltages (due to the possibility of plasma storms that make the worst thunderstorms on Earth look like a nice day outside).
I disagree. If humanity collapses, and a new fledgling civilisation grows out of the ashes, then maybe we should let them follow their own path, and make their own discoveries and mistakes.
It's massive hubris to assume that future generations can only survive if they have access to our knowledge.
Many civilisations have fallen only to be replaced by newer, better ones, even when they've had no previous records to refer to.
We're just a stepping stone along the path of human evolution, we are no greater or lesser than the stones before us, or the stones that come after us.
A collapse would be the perfect and likely sole opportunity to ditch the mistakes of the past and forge something completely new with the benefit of a "foresight" forged from hindsight.
It reminds me of an '80s cartoon where Spaniards, aliens, and three children chase a golden city and destroy it out of greed when they find it.
One of the questions I keep coming back to for such a scenario, and still haven't come up with a great answer for, is how does someone living in a world without the ability to manufacture a computer still have computers that work 100+ years after the last one was made? Even manufacturing transistors without modern methods is non-trivial. Will a Z80 last 100+ years? I mean, maybe, if it's kept dry and not exposed to anything corrosive. I've got a Commodore 64 that's ~40 years old and still works...so, 100 years seems reachable, but there have to be extenuating circumstances to get to that "post-tech" world (though I guess in a post apocalyptic world, the value of computers would be seen as minimal for a few years while survival is the only concern, so just forgetting could be enough).
But the book would be more about the consequences of this - do they eventually take the thing apart and jump-start a silicon chip revolution in the 1950s, or (more likely I think) does the government destroy the machine as the UK government did to the Bletchley machines after WWII, and because there's no ground-up computer theory does it set back computing for decades?
So now ask somebody really smart in that technology, like say Jay Forrester[2] who had just finished inventing core memory, to analyze this magic beige plastic box. He could probably recognize that the PCB provided connectivity between parts, but what are the parts, these little flat plastic tiles? I don't think it would be possible to work out from first principles what the functional contents of a DRAM chip is, let alone the CPU. Even if they x-rayed it, supposing they had x-ray tech with enough resolution to resolve a chip, how could they figure out that those little blobs are transistors? Transistors hadn't been invented!
I think they'd have to concede this is "sufficiently advanced" tech, in Arthur Clarke's phrase, to be indistinguishable from magic.
From Greer's point of view, the factors that make today's hardware brittle are not technical but economic. Corporations have to make electronics at a profit, and at a price point that is accessible to the average working-class citizen. This business model would not be sustainable in either a fast-collapse or a slow-collapse scenario.
Instead, in the novel, governments take over the tech industry sometime in the second half of the 21st century and treat it as a strategic resource in their struggle not to be left out of the global musical-chairs game of climate change plus resource depletion. They run it at a loss, and put the best minds they can spare to the task of making a computing infrastructure that is built to last.
By the 25th century, when the novel's events take place, humanity has lost the ability to manufacture electronics, but computers built 350 years earlier are kept in working order by a cadre of highly trained specialists (most of whom have the skills of a Geek Squad employee, but still). Common people have maybe heard some wildly inaccurate legend about robots or computers. Wealthy individuals are probably better informed but still cannot own one at any price. The only computers depicted or spoken about are US government property operated at US military facilities (or maybe there was one at the Library of Congress; I don't really recall).
There's one post-collapse hacker in the novel, a secondary character who is part of the protagonist's crew. The author is not an engineering type and dances around this guy's actual skills, but I'd say he seems able to use a debugger/hex editor and read binaries. His greatest achievement, though, is fixing and setting up an ancient printer and recovering documents from a disk that was "erased" but not wiped clean.
... you find a controller ... you find their emulation Raspberry Pi ... all of a sudden, the world isn't as desolate.
They basically walk you through assembling and programming a full CPU from nothing but NAND gates in a hardware description language, and in the second part even add a compiler and a higher-level language to the stack.
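For a feel of the first step, here's a minimal sketch of the same idea in C rather than an HDL: every basic gate derived from NAND alone, as in the book's opening chapters.

    #include <stdio.h>

    /* The single primitive; everything below is built from it. */
    static int nand(int a, int b) { return !(a && b); }

    static int not_(int a)        { return nand(a, a); }
    static int and_(int a, int b) { return not_(nand(a, b)); }
    static int or_(int a, int b)  { return nand(not_(a), not_(b)); }
    static int xor_(int a, int b) { return and_(or_(a, b), nand(a, b)); }

    int main(void) {
        /* Exhaustive truth-table check of the derived gates. */
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                printf("a=%d b=%d  AND=%d OR=%d XOR=%d\n",
                       a, b, and_(a, b), or_(a, b), xor_(a, b));
        return 0;
    }

From there the book stacks adders, an ALU, registers, and a program counter on the same primitive.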
This, of course, presumes libraries are also mostly gone, since you don't need Wikipedia if you have a library.
In general, the scenario is that the whole world has broken down but is full of tech. So many machines to get working again. Machines beat muscle at scale.
There's a nice project page of one here[1], including an in-depth video about it here[2]. There's a collection of other relay computers here[3].
[1]: http://web.cecs.pdx.edu/~harry/Relay/
Ryzen processors have 80 MB of L3 cache! IIRC that may have been twice what was needed to run the color NeXT machines.
A lot of "man the old days were all we needed" recollections of Windows and DOS forget how crappy those OSes were...
But NeXT? That was basically a modern OS, with a bit less anti-aliasing. X Window systems STILL look horrible compared to it.
It fits in L3 cache. 5-10 NeXT hard disks fit in RAM!
It had a Web 1.0 browser, TCP/IP networking, all the productivity apps. It had DOOM.
Sure, Mach the microkernel was buggy and leaked memory, but kernels... there are lots of kernels now.
I think it would be a great project to start with NeXTSTEP and carefully build it back up with good security and multitasking.
It would be fun to run NeXTSTEP on bare metal with RAMdisks (spare a processor to do the writes to SSD), and compare with other systems, see if it feels ludicrously fast or not.
AROS is the next best OS to use, as it has a low memory footprint and can run on x86 systems: http://aros.sourceforge.net/
If you want Windows try ReactOS: https://reactos.org/
OS/2 try OSFree: http://osfree.org/
BeOS try HaikuOS: https://www.haiku-os.org/
All are low memory OSes.
Will try the other two, I guess. Not getting my hopes up.
Never heard of AROS...
ReactOS is bloated by definition, since it replicates Windows, isn't it? It probably uses less than mainline Windows, but it's just my opinion that the 1990-1997 period preceded some of the real bloat that came with the massive Moore's law gains of the GHz races after that.
OS/2 ... seems obscure to me, but I never used it. NeXT and BeOS had revolutionary capabilities; I think OS/2 was basically just preemptive multitasking for Windows. Is that right?
BeOS, which I've never used, is probably also a good starting spot. HaikuOS probably has more oomph behind it community-wise.
How long will 7nm chips work before electromigration destroys them?
https://semiengineering.com/chip-aging-becomes-design-proble...
The new Ryzen "only" has 64 MB of L3 cache, but it's split into 16 MB chunks per 4 cores. You can add in the L2 cache, since it's exclusive (at the cost of more access fragmentation), to get 18 MB/72 MB depending on how you want to count it.
The new Epyc is the same design, just more cores so you get 256 MB. Still "only" 16 MB accessible per group of 4 cores though.
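The arithmetic behind those figures, assuming Zen 2's 512 KB of L2 per core (an assumption on my part, but it's what makes 18 MB and 72 MB work out):

    #include <stdio.h>

    int main(void) {
        /* Assumed Zen 2 layout: 512 KB L2 per core, 16 MB L3 per 4-core CCX. */
        const double l2_per_core_mb = 0.5;
        const double l3_per_ccx_mb  = 16.0;
        const int cores_per_ccx = 4, ccx_count = 4;   /* 16-core Ryzen */

        double per_ccx  = l3_per_ccx_mb + cores_per_ccx * l2_per_core_mb;
        double per_chip = ccx_count * per_ccx;
        printf("per 4-core group: %.0f MB, whole chip: %.0f MB\n",
               per_ccx, per_chip);                    /* 18 MB and 72 MB */
        return 0;
    }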
You are correct that one can't snap their fingers and create the community that even FreeBSD has. The last big corporate-sponsor opportunity for this was the infant smartphone era of the Palm Pre (they owned the BeOS IP at that point, I think) and early Android.
Sigh, that reminds me that BeOS should have been the foundation of OS X, if not for the exorbitant buyout cost they were insisting on.
I think we could also get started now. Not necessarily de-escalating tech, but realizing that the fundamental supply of newer, more powerful chips might not last, even with a shift to more plentiful supplies of rare-earth metals, given our need to get off fossil fuels fast. In the more immediate term, it might be useful to lock in the minimum set of features that make the web and Internet useful, then distribute that as widely as possible on low-power commodity platforms with resilient networks that could survive superstorms knocking out huge swaths of devices in one fell swoop.
Low-power p2p protocols, mesh networking, recycling devices, component-platforms that allow scavenging and make-shift repairs, etc.
Until we can solve the green energy problem it might be nice to know that even if your community gets hit with a storm or flood, it's still possible to restore and maintain network services rapidly in the aftermath. Simply being able to send a message to someone would be a big deal.
The simple way would be Morse code over radio.
As web client tech stabilizes and telecom regulatory rollback continues, there may be an opportunity for localized solutions to be landed for all sorts of different purposes.
The whole reason the web works for business is that you can give something away to millions of people for ad revenue or sell something to very large groups for small amounts of money.
Localization reduces your customer base. The price a business has to charge would be higher. This is on top of the adoption problem.
The older gentleman who was polite enough to listen to me said, "It's ok guy, if that 401k doesn't exist, then neither will you".
And so I think I will not stockpile any computers for later. I do like the engineering spirit of this however.
There are a number of ways that can fail to be true without a large-scale societal collapse. Fraud involving pension funds has happened many times in the past (Bernie Madoff and Robert Maxwell being two high-profile examples). The last financial crisis brought a bit more attention to the topic of counterparty risk: the idea that your "safe" investment is, in many cases, only as safe as the institutions backing it. It's not necessarily a high-priority concern, but I think it's worth at least considering splitting your retirement savings across more than one account with different institutions.
There are also lots of conceivable larger scale crises with historical precedent (many in the 20th Century) that would render your retirement savings largely worthless without leaving you dead. In many of those you would have more pressing concerns than your 401K but it still seems like not a bad idea to have some physical things of value that you keep somewhere secure but accessible (cash, perhaps gold and/or silver).
If someday the US looks like Syria, your 401k will be worthless but you’ll probably survive.
The first link on CollapseOS’s announcement called “Winter is Coming” has it all:
So, first of all, how are you supposed to download this thing onto your homebrew computer, given that the internet will most likely be down?
"But if the collapse magnitude is right, then this project will change the course of our history, which makes it worth trying."
Mmmh, I think the author is a bit on the hyperbolic side here. I'm quite sure that anyone who can design and assemble a Z80 computer can quite comfortably code some basic utilities by himself just fine. All the others won't care a bit about your OS. Sorry if I sounded harsh, but I actually was.
Why plan for less than the raspberry pi level?
(SD cards seem like a good commodity to stockpile here, as he supports them, but they're likely incredibly hard to manufacture post-collapse.)
[0] https://www.livingcomputers.org
[1] https://twitter.com/TeriRadichel/status/1164369796307116033
If it can be useful then, it can also be useful today to all the (poor) tinkering people around the world. There are lots of alternative eco-villages and the like trying to be self-sufficient, doing all kinds of recycling and improvised technology. If this gets adopted by those people, then it might be useful.
But if they cannot use it today, then I don't see how a broken-down survivor group could use it.
If I were tasked with bootstrapping a post-apocalyptic computer from junk, a hard copy of a well-commented Forth implementation would be welcome assistance.
For those curious as to what a modern machine using Forth on bare metal as an operating system might feel like, check out Open Firmware: https://www.openfirmware.info/Open_Firmware
(If you have an OLPC sitting around in a closet somewhere from the Give One Get One program years ago, you already have a serviceable and physically robust Forth machine ready to roll! Same deal for some older PowerBooks and Sun workstations.)
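For a sense of how little a Forth kernel actually needs, here's a toy sketch in C: a data stack plus a handful of built-in words. (A real Forth adds a dictionary, a return stack, and user-defined words; this toy skips error checking and treats any unknown word as a number.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static long stack[64];
    static int sp = 0;
    static void push(long v) { stack[sp++] = v; }
    static long pop(void)    { return stack[--sp]; }

    /* Interpret one whitespace-separated word. */
    static void eval(const char *w) {
        if      (!strcmp(w, "+"))   { long b = pop(); push(pop() + b); }
        else if (!strcmp(w, "*"))   { long b = pop(); push(pop() * b); }
        else if (!strcmp(w, "dup")) { long a = pop(); push(a); push(a); }
        else if (!strcmp(w, "."))   { printf("%ld\n", pop()); }
        else                        { push(strtol(w, NULL, 10)); }
    }

    int main(void) {
        char line[256];
        while (fgets(line, sizeof line, stdin))         /* read a line */
            for (char *w = strtok(line, " \t\n"); w;    /* split into words */
                 w = strtok(NULL, " \t\n"))
                eval(w);                                /* interpret each word */
        return 0;
    }

echo "6 7 * ." piped into it prints 42. The whole outer interpreter is the read-split-eval loop at the bottom, which is why a commented listing fits comfortably in hard copy.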
A PDP-8 can be implemented in fewer transistors (original DEC wiring diagrams are on Bitsavers, and GitHub has source for several clones in Verilog), and DEC already shipped a moderately full software suite for it.
Climate change? It will cost trillions of dollars and billions of lives, but it will likely play out over the course of several decades. We will be stressing out about it, but it's not going to be an electronics-ending apocalypse.
Nuclear war? Please. The countries that have the capability are also level-headed enough to use them to play brinksmanship, despite what the news is telling us. These countries want deterrence, not to blow stuff up.
Disease? We're too widely distributed, and the most successful viruses are ones that infect but do not kill. Ebola is scary, but it's too destructive for its own good, which makes it easy to contain. The most successful virus is the common cold, and possibly HIV, which is certainly a serious problem, but nobody's out there building shelters because of it.
Water/food supply? Fresh water is a function of energy, and if anything is a plus about climate change, it's that we're going to have a lot of fresh water raining down on us as the Earth tries to compensate for higher temps.
Second-order effects from climate change will likely affect arable land, which is worrisome, but they may also open up new areas for growth and will likely play out over time, so I'm considering this more of a political problem.
The only things I can think of are either:
1) A sudden disappearance of the rare-earth metals needed to make electronics, which would be massively inconvenient, but we'd figure out a way around it, either by it suddenly becoming more valuable to recycle old electronics or by not needing them in the first place. Besides, if this happens we'd just get extra motivated to start mining asteroids.
2) Celestial events like an asteroid strike or a coronal mass ejection hitting Earth in the wrong way. The first is mitigated with asteroid tracking, and we're getting better at that; the second would make for an interesting six months, but I'm pretty sure we'd get back on track pretty quickly.
I am all for technology that does not depend on a complex global supply chain - we will need to manufacture simple but sophisticated tech in space and on Mars in the future - but this prepper BS is just fantasy driven by a hyper-apocalyptic news cycle schlepping around clickbait.
What am I not worried about that I should be? What massively apocalyptic event is going to happen in 10 years to turn us back to the middle ages? Seriously.
Au contraire, it's the belief that our system can continue as it has that is the real hyperbole. Collapse is just the baseline reality of civilizations.
- HISTORY: Collapse is a property of every civilization we've studied. Those people were as smart as us, if not smarter, and they were working with societies smaller and simpler than ours.
- ECONOMY: The way money is created and managed today is an ongoing experiment that almost ended in 2008, and we are still on uncharted ground. We can only keep paying for debt by increasing consumption the following year, yet our debt keeps increasing and our currency keeps devaluing, requiring ever more production and consumption. No one is planning for an end to this model of growth.
- TECH: Most of our infrastructure is built under the incentive of increased efficiency and profit, not long-term robustness, since profit has to be sacrificed to plan for contingencies like price fluctuations in supply. Short-term tech outcompetes the long term, easily. Strong but fragile. And then there are the incentivized inefficiencies of economies of scale: one calorie of food now requires ten calories of energy from our system to produce.
- COMPLEXITY: “More is different.” As everything becomes interconnected, things become entrenched in dynamics that are increasingly difficult to control and even to reason about. Rational decision-making must always be filtered through the interests of the current system, so there is a loss of agency in what we can do (read: incentives), and we are stuck trying to find creative solutions that must accept the framework of what may be a harmful system, often just making that system more effectively harmful.
- ENVIRONMENT: Some call it the sixth mass extinction. Whatever it is, the biosphere is changing dramatically. Soil is in a weird zombie state kept alive by oil. The basic line is that the value of life is diminished through the lens of our economy, as dead resources. So our model will continue bringing the real world into consistency with that deadness.
- MYTHS: When we live in a civilization that sanctifies all forms of advancement and improvement and growth, there is no fertile soil for the acceptance of limitation. We only have the vocabulary to label it pessimist. Thus, optimism becomes co-opted for the aspirations of a mythical techno-utopia beyond all conceivable boundary.
How would you define "civilization?" Because sure, every civilization has an expiration date, but for current computing technology to be lost requires a worldwide civilizational collapse. Current global civilization is a decentralized collection of many civilizations which have all shared and replicated the knowledge of computing.
>our debt keeps increasing
Public and private debt are separate things. Public debt has generally seen a continuous march upwards. Private debt has been peaky, with no upward trend. Debts are fine when the debt is incurred for a purpose that has a sufficient return on investment. Public debts of sovereign currency issuers can always be repaid, and the yields on those bonds are whatever the currency issuer decides. Further, debts shouldn't be judged nonviable just because of the quantity of existing debt; rather, the question at each point should be whether the investment is a good one.
> Soil is in a weird zombie state kept alive by oil
Soil is renewable, and can be made even with simple techniques. The terra preta soil of the Amazon rainforest was largely human-made, and thus the Amazon itself is largely a human construct. Creating it didn't require any oil.
>there is no fertile soil for the acceptance of limitation
Malthusian thinking has often been the default, and one of the most popular modes of thinking since the Enlightenment. The mid 20th century was full of best-selling Malthusian books by the Club of Rome, Paul Ehrlich, M. King Hubbert, and EF Schumacher. The entire fields of biology and ecology have been predicated on Malthusianism. Darwin was explicitly inspired by Malthus.
It has been to the great surprise of the intelligentsia of each successive generation that there hasn't been mass starvation. We've been able to do more and more with less and less. Any serious collapse hypothesis needs to factor in the history of losing bets on that side of the argument, and internalize why those predictions were wrong. It wasn't just luck every time.
You could buy a new EPYC server with solid state drives, grind it up and homogenize the whole thing in acid, and the resulting solution would have a smaller percentage of rare earth elements in it than the same mass of ordinary crustal rocks treated the same way.
Computers don't need rare earth elements. Nor do solar panels, nor do most wind turbines.
See for example the "Consumption" section in the USGS 2015 Minerals Yearbook Rare Earths report:
https://s3-us-west-2.amazonaws.com/prd-wret/assets/palladium...
In descending order of global consumption volume, rare earth elements are consumed by the manufacture of catalysts, magnets, glass polishing media, and metal alloys. Everything else is just miscellaneous.
https://www.cnet.com/news/we-arent-ready-for-a-solar-storm-s...
Near miss in July 2012: https://science.nasa.gov/science-news/science-at-nasa/2014/2...
Also, while I think a big solar flare would break a lot of stuff, I think we're better prepared for it than many give us credit for. Tens of millions of people might initially be without power; some fraction of them might need to wait a long time (months or even years) to get it back; and various bits of transport and production may get disrupted. Enough to require rationing and a major hit to quality of life, but not enough for any kind of catastrophic chain reaction, IMO.
Either way, I don't think that's a good premise on which to start this (or any) OS. There are much better arguments to be made for preferring a lightweight OS. Intellectual curiosity, for one.
When the product lifecycle changes from 1 year to 10+ years, you'll find that people will just keep their stuff around longer and the demand on the supply chain goes way down.
Plus, there will be a shitload of data centers with capacity that will no longer be necessary (because of reduced devices making requests, segregated internet, less connectivity) in apocalyptic scenarios. Those can probably be re-purposed.
We haven't had to get clever about computer conservation because there's been so much supply.
Also, the "middle ages" are going to take a good century at least. Think instead of the collapse of the Soviet Union (with some places playing the part of the Balkans/Caucasus), but worse...
Btw, rare earths are not so rare - it's just that the US got rid of this industry.
No large scale technology was lost. If anything, human civilization became more technologically advanced.
[0]: https://en.wikipedia.org/wiki/Hyperinflation_in_the_Weimar_R...
[1]: a charming novel about this: https://en.wikipedia.org/wiki/The_Black_Obelisk
Sure, we were able to make do a century or so ago, but not with 8 billion people and counting. People will die without some way to keep the various microcontroller-driven systems up and running. It's a long shot that we'd be able to adequately replace a microcontroller in a tractor ECM or a pacemaker or an air conditioning system or a water pump, but a slim chance is better than no chance at all, and the latter is exactly what we'll have unless we're thinking about and testing out solutions now, while we still have the resources to easily do so.
Not to mention the energy supply chain. If the supply chain required to make electronics collapses, that probably means the energy supply chain has also collapsed, or has at least been severely disrupted. That seems likely to be far more damaging, and far more quickly, than a lost ability to keep a microcontroller running. If I don't have gas for my car, it doesn't really matter whether I can fix it when it breaks down. (And I run out of gas in a few hundred miles, but repairs are required on the order of tens of thousands of miles.)
This is really what I was trying to get at with my first comment. The problems presented by a lack of ability to make new technology are the sorts of problems that take months or years to become critical, but in a true collapse setting, the issues that matter most would unfold in days or weeks.
(I feel like I should point out that I don't think any of this is particularly likely.)
As for scavenged parts, you're going to need a warehouse of manuals and datasheets, eh?
Depending on the details of your post-apocalyptic scenario planning, simple automation driven by relays or clockwork logic will be more likely than e.g. scavenged microcontrollers.
I applaud the spirit of the project though: I don't want to live on Gilligan's Island making everything out of coconuts and vines.
You're right! As a thought experiment, let's say I download CollapseOS and then switch off my internet.
I have in my house a normal complement of electronic devices. I have a soldering iron, some wire etc. I assume if I start taking things apart I'll find some Z80s. Those Z80s will be living on boards with clock chips and memory etc. Where do I even start?
https://www.opensourceecology.org/gvcs/
The Global Village Construction Set (GVCS) is a modular, DIY, low-cost, high-performance platform that allows for the easy fabrication of the 50 different Industrial Machines that it takes to build a small, sustainable civilization
http://fuzix.org/ - lots of 8-bit targets, z80 included
http://cowlark.com/cpmish/index.html - has a vi-like editor, assembler, and is cp/m compatible so it can run lots of old cp/m software like various compilers
In conjunction with that, it would be good to have an archive of useful software and data in a durable format where access to that data can also be bootstrapped. I'm not sure what that format would be...
It was designed to be extremely simple and reduced in scope to the minimum of what a processor needed. It went into space. Radiation hardened versions were made.
The original version had its functionality broken up into multiple chips. That could allow for easier repairs.
I don't know how many transistors were in it, but I doubt it's more than the Z80 or 6502.
The RCA 1802 is another one I'd consider. In fact, it will likely outlive the human race entirely, as it's aboard the Voyager spacecraft.
But you won't find them in calculators just lying around that you can scavenge. Remember, the narrative driving this is post-economic/supply chain apocalypse.
- Will the ICs last that long, can they?
- How will it get electricity if the sockets and voltage standards change?
- How do you make it durable to dropping, water, dust, etc?
- What sort of writable storage can last that long without degrading?
- How do you edit fonts as language changes over time?
- What sort of libraries and documentation do you include?
- Should you include some sort of Rosetta Stone for new users?
1. Yes; a 10°C reduction in temperature means a doubling of life. I've known Pentiums to last 10 years at 60°C+; just running processors at 30°C instead gives 80 years minimum. The main thing is to use leaded solder so you don't get electromigration problems.
2. Solar panels and batteries. Battery voltage is chemical and fixed by physics; nickel-iron batteries can be rebuilt and last forever. Solar panels can be oversized to provide enough energy even as they degrade over time, and/or the computer can just be used at a lower duty cycle.
3. Make it big and hard to move in a sturdy box.
4. Flash can last that long if it is periodically rewritten, kept cool, has redundancy, and isn't updated often.
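For what it's worth, the rule of thumb in point 1 (a rough Arrhenius-style approximation, not a precise reliability model) pencils out like this:

    #include <math.h>
    #include <stdio.h>

    /* Rule of thumb from above: every 10 C drop roughly doubles life. */
    static double projected_life(double base_years, double base_c, double new_c) {
        return base_years * pow(2.0, (base_c - new_c) / 10.0);
    }

    int main(void) {
        /* 10 years observed at 60 C, projected at 30 C: 3 doublings = 80 years. */
        printf("%.0f years\n", projected_life(10.0, 60.0, 30.0));
        return 0;
    }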
Would you like us to share notes on the book itself? I've practiced its teachings for the last 3 years or so, and I'd like to chat with someone about it.
Sorry if this comment is a bit irrelevant, but HN doesn't really have a DM system. /shrug
If society collapses and recovers relatively quickly, we can likely coast for 10-20 years on the computers that have already been built. This is what I'd expect with a point-in-time catastrophe that disrupts everything but then ends, letting us all set to work rebuilding. (Like a massive economic collapse, huge meteor strike, nuclear winter, etc.) Even if 95% of computers become inoperable, there's a lot you can do with the remaining 5%. Probably more than you could do with new stuff you build.
Another scenario is that we recover really slowly. This would be due to some kind of enduring factor that holds back humanity, like a really long-term famine or global political instability that we somehow cannot reset. In that case, what's the hurry to develop software that's ready to go? Maximizing compute capability doesn't seem like it would be the thing that tips the scales and allows society to get rolling again. For that you need to solve whatever the root problem is.
TL;DR: if we fall and there is nothing holding us down, we can bounce back up relatively quickly, in which case we don't need this. If there is something holding us down, it seems unlikely that computing is what we need to solve it.
Maybe there are other scenarios that I haven't thought of, though. Or ways that computing would help in the above scenarios.
Love the idea of making it run on simple 8-bit CPUs scavenged Fallout-style, but it seems to presume that no 'newer' technology would survive and remain functional.
Wonderful to see, nonetheless.
If we can't manufacture new smartphones, we need to have a baseline of computer to develop new computers that can eventually develop computers that can develop smartphones. Essentially he's proposing that if we lose our societal ability to compute advanced things, that we be able to fall back to the Z80 rather than the abacus.
Not sure I'm totally sold on it here, but it's an interesting topic to say the least.
https://collapseos.org/why.html
"The z80 has 9000 transistors. 9000! Compared to the millions we have in any modern CPU, that's nothing!"
The author thinks that when their imagined Mad Max society comes to be, they're going to be taking a soldering iron to old Segas and TI-84s. If for some reason you need to use computers in a developmental capacity (since the author's OS has an assembler and an `ed` clone) in a "post-collapse society," I don't think it would be that hard to find some discarded HP desktop or laptop to work on.
In one part, a high-security government installation was described with "ancient" PCs. They couldn't make new ones, so they kept whatever they could running, and the narrator's mind was blown thinking about how much energy they wasted.
I think one of the top priorities for a project like this should be making it easy to implement, considering that practically everything you would use nowadays to get help getting it working won't exist. No websites or forums or anything like that.
Android phones. Tens of millions of them. It must be the most ubiquitous computing platform by now...
I often think about hoarding a collection of software and media for an end of the world scenario. Then another year goes by and the world is still here.
I've thought about this a little, and I think rebooting vacuum tube technology from scratch is possible more easily. Not trivial, but possible. Once you get reliable triodes, you're on your way.
In my opinion, there should be a system in which all the blueprints for the technology are saved, and that machine should be self-sufficient: running on its own power and memory, and capable enough to educate, or at least give a basic idea of the structure, so that after the collapse anyone lucky enough to get this technology can improve it and build a new system.
I like the idea of Collapse OS; in a similar manner, create a machine which can run any software/OS, or which supports the most basic and most-used operations.
The same goes for books as well.
~Nauman
I suspect the argument against modern Intel chips is just their complexity. They need an incredibly complicated and somewhat fragile support infrastructure...you can't build a modern PC motherboard in your garage and you don't expect modern PCs to last decades. They're very common, though, and I suspect there will be plenty of PCs to scavenge, at least through our lifetimes. But, the next generation will probably have trouble keeping them going...I've got a 40 year old C64 still running with nearly all original parts, but I am nearly 100% certain my modern laptop will not last even a decade without repairs using parts that can't be manufactured without modern infrastructure.
Well that, and the fact that we already have plenty of OSes to run on x86(-64).
Looking at arch/ in linux's source:
alpha avr32 frv Kconfig microblaze openrisc score um
arc blackfin h8300 m32r mips parisc sh unicore32
arm c6x hexagon m68k mn10300 powerpc sparc x86
arm64 cris ia64 metag nios2 s390 tile xtensa
I'm surprised that it doesn't have support for the Z80 if it's so common. I'm also surprised that I can't see any mention of the Z80 in GCC's documentation.
https://education.ti.com/en/products/calculators/graphing-ca...
There was definitely a rad-hardened version of the 8085 (similar to the 8080, and therefore to the Z80), which was used on the Sojourner rover (among various other NASA and ESA spacecraft). Seems like RISC processors were more common for this, though (looks like most relatively-recent NASA spacecraft - including pretty much all of NASA's Mars landers after Sojourner - use(d) rad-hardened POWER CPUs, e.g. the RAD6000 and RAD750).
though Sojourner rover used a rad-hardened 80C85 [1]
[0] https://researcher.watson.ibm.com/researcher/view_group.php?...
[1] https://en.wikipedia.org/wiki/Comparison_of_embedded_compute...
"But if the collapse magnitude is right, then this project will change the course of our history, which makes it worth trying."
Let's say the jury is still out on that one? :D
Smart phones are a lot of things, but general purpose computers are not one of them.
As the author points out, probably useless, but still fascinating.
[1] https://en.wikipedia.org/wiki/The_Knowledge:_How_to_Rebuild_...
With an 8080 equivalent running a serial character-display terminal based on an oscilloscope CRT (1940s radar tech), you have an input/output device.
This leaves the main job of processing to another CPU, which could be 16-bit for arithmetic speed and efficiency. The late-70s, early-80s 8-bit machines were only underpowered because they were doing all of the video output with the same CPU. Separate computation from video generation and you get a much faster system.
8-bit CPUs rarely needed an OS. They were really only capable of running a single application at a time. All an operating system does is separate hostile C-code applications from each other. C is probably not the best starting point for rebooting society on 8-bit systems.
Forth, or some derivative might be better. Charles Moore's original 1968 listings for Forth on an IBM 1130 are available from here: https://github.com/ForthHub/discussion/issues/63
Remember also that every mid-1970s microprocessor generally relied on a minicomputer (built from TTL) for its software and logic design. Go back 10 years (to 1965) and the PDP-8 minicomputer was built from diode-transistor logic (DTL): discrete diodes, transistors, resistors, and capacitors. This sort of technology could probably be rebooted more easily in a post-apocalypse society.
The original 12-bit PDP-8 contained 10,148 diodes, 1,409 transistors, 5,615 resistors, and 1,674 capacitors. See: https://www.pdp8.net/straight8/functional_restore.shtml
Scale these figures by 1.33 and you have the approximate requirements for a 16-bit architecture.
Whilst over 50 years old, the PDP-8 could run BASIC at speeds not too dissimilar to the early 8-bit micros that appeared about 10 years later, around 1976.
It used a modular construction - and if you did find yourself with an excess of diodes and transistors, the best approach might be to build a series of logic modules - loosely based on the 7400 series, but using DTL for simplicity. If you were to standardise on a footprint similar to a 40 pin DIP, you could probably recreate about 8 NAND gates in such a device.
Some years ago I looked at the NAND-to-Tetris CPU and worked out a bitslice design based entirely on 2-input NANDs. Each bitslice needed 80 NANDs, so a 16-bit machine would need 1,280 gates. Memory would be difficult, but something could be implemented using shift registers. You could of course revert to storing charge on a CRT screen, which formed the basis of the memory on the Manchester Baby machine of 1948 (the Williams tube, holding 32 words of 32 bits).
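As a sanity check on those gate counts, here's the textbook 9-NAND full adder (the kind of cell such a bitslice repeats), simulated in C; a bare 16-bit ripple-carry adder alone would be 144 NANDs, so 80 per slice for a full ALU path sounds plausible.

    #include <stdio.h>

    static int nand(int a, int b) { return !(a && b); }

    /* Classic full adder built from exactly nine 2-input NANDs. */
    static void full_adder(int a, int b, int cin, int *sum, int *cout) {
        int n1 = nand(a, b);
        int n2 = nand(a, n1), n3 = nand(b, n1);
        int n4 = nand(n2, n3);              /* a XOR b */
        int n5 = nand(n4, cin);
        int n6 = nand(n4, n5), n7 = nand(cin, n5);
        *sum  = nand(n6, n7);               /* (a XOR b) XOR cin */
        *cout = nand(n5, n1);               /* majority(a, b, cin) */
    }

    int main(void) {
        for (int a = 0; a <= 1; a++)        /* exhaustive truth-table check */
            for (int b = 0; b <= 1; b++)
                for (int c = 0; c <= 1; c++) {
                    int s, co;
                    full_adder(a, b, c, &s, &co);
                    printf("%d+%d+%d -> carry %d, sum %d\n", a, b, c, co, s);
                }
        return 0;
    }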
Finally, never underestimate audio-frequency generation and storing signals as audio tones, something that CPUs are good at. Possibly use a rotating magnetic drum for storage.
In the summer of 1984, a friend and I, who both owned Sinclair ZX81s, set up a one-way data link between one machine and the other across our college dorms, using an FM transmitter bug and an FM radio receiver, over a distance of 300 feet.
The primary design requirements for a standalone computer system in a post-* world are simplicity, maintainability, and debuggability. It must be possible for a single user to do _everything_ in situ. There are very few existing systems that meet all three of these criteria across the whole hardware-firmware-software stack, and modern technology companies are actively moving away from this.
At all levels this requires extensive and open documentation and implementations, and ideally a real standard.
The hardware level would probably need a complete rethink. If you want good peripheral support (e.g. to be able to try to access whatever data device you come across), then you need a solution that doesn't require a subsystem kernel maintainer for everything, or you just give up on that. A potential fourth requirement here is a large supply of parts, since in most scenarios it is extremely unlikely that anyone will be able to get a fab working again for hundreds or thousands of years. Maybe radiation-hardened, large-feature-size ICs or something like that. The alternative would be a zillion RPis (with some alternate data-storage interface) in the hope that some of them survive and continue to work after hundreds of years, but this seems a much riskier bet than trying to actually engineer something to survive for a very long time. Above the IC level, the ability to replace parts without special tooling beyond maybe a soldering iron also seems important.
At the software level there are two existing systems that might serve: one of the Smalltalks, or one of the Lisps (my bias says Common Lisp, despite the warts). Assembly and C are just not a big enough lever for a single individual, and other things like Java seem to have been intentionally engineered to deprive individual users of power. The objective here is not to be fast; the objective is to retain access to computation at all, so that the knowledge of how to work with such systems is not lost. The requirements also pretty much preclude things like browsers, which are so monstrously complex that there is no hope that an individual could maintain a legacy artifact (or probably even compile one of the monsters) for interpreting modern web documents.
I do not think that we can expect the current incentive structure around software and hardware to accidentally create something that can meet these requirements. If anything it is going in the other direction as large corporations can employ technology that can _only_ be maintained by large engineering teams. We are putting little computers in everything, but they are useless to anyone in a world without a network.
[1] https://en.wikipedia.org/wiki/Setun
[2] https://web.archive.org/web/20080207064711/http://sovietcomp...
[3] http://www.computer-museum.ru/english/setun.htm
It is a stack machine, and it has something like Forth.
In which you can implement anything else, if you absolutely have to, as some have done with another stack-oriented system here:
[4] https://en.wikipedia.org/wiki/POP-11
[5] https://en.wikipedia.org/wiki/Poplog
[6] http://www.cs.bham.ac.uk/research/projects/poplog/freepoplog...
And then have some cybernetic monks preach the advantages of something like TRON
[7] https://en.wikipedia.org/wiki/TRON_project
applied to all of the above.
I'm thinking old phones, tablets, and portable computers will be more common. I keep several bootable USB drives with lots of ebooks, audiobooks, videos, software, and games, along with several old laptops/netbooks that were free. I also keep some of those files on microSD cards to make them accessible with tablets.
IMO collapse will be very boring, so lots of books, audio files, video games, and music would be nice to have, if they can be run off small off-grid solar setups.
Makes me wonder what possibilities become, er, possible if we up the computing power a few orders of magnitude to a Pi Zero W or Pi 4.
From what I understand, it's fairly easy to use a Pi as an LTE router for longer ranges and WiFi for shorter ranges. I wonder, if the right microSD cards were stockpiled, whether one would be able to reconnect several communities in a mesh.