Moving 72 racks was NOT easy. After paying substantial storage fees, I sold off a few of them and rented a 1,500 sq ft warehouse, which the rest filled completely. It took a while to get 220V/30A service in there to run just one of them for testing purposes. Installing IRIX was 10x worse than any other OS: imagine 8 CDs, each of which had to be inserted twice during the process. Luckily somebody listed a set on eBay. SGI was either already defunct or just very unfriendly to second-hand owners like myself.
The racks ran SGI Origin 2000s with CrayLink interconnects. Sold 'em off 1-8 at a time, mainly to render farms. Toy Story had been made on similar hardware. The original NFL broadcasts with that magic yellow first-down line were synthesized with similar hardware. One customer did the opening credits for a movie with one of my units.
I remember still having half of them around when Bitcoin first came out. It never occurred to me to try to mine with them, though I suspect if I'd been able to provide sufficient electrical service for the remainder, Satoshi and I would've been neck-and-neck for number of bitcoins in our respective wallets.
The whole exercise was probably worthwhile. I learned a lot, even if it does feel like seven lifetimes ago.
Installing IRIX doesn't require CDs; it's much, much easier done over the network. Back in the day it required some gymnastics to set up with a non-IRIX host; now Reanimator and LOVE exist to make IRIX net installs easy. There are huge SGI-fan forums still active with a wealth of hardware and software knowledge - SGIUG and SGInet managed to take over from nekochan when it went defunct a few years ago.
I have two Origin 350s with 1 GHz R16Ks (the last and fastest of the SGI big-MIPS CPUs) which I shoehorned V12 graphics into for a sort of rack-Tezro. I boot them up every so often to mess with video editing stuff - Smoke/Flame/Fire/Inferno and the old IRIX builds of Final Cut.
I think that by the time Bitcoin came out, Origin 2000s would have been pretty majorly outgunned for Bitcoin mining or any other compute task. They were interesting machines but weren't even particularly fast compared to their contemporaries; the places they differentiated were big OpenGL hardware (InfiniteReality) with a lot of texture memory (for large-scale rendering and visualization) and single-system-image multiprocessor computing (NUMAlink), neither of which would help with coin mining.
That was originally an XFL innovation.
https://www.sri.com/press/story/the-first-down-marker-how-te...
Installed Configuration: SGI ICE™ XA
E-Cells: 14 units weighing 1,500 lbs each
E-Racks: 28 units, all water-cooled
Nodes: 4,032 dual-socket units configured as quad-node blades
Processors: 8,064 units of E5-2697v4 (18-core, 2.3 GHz base frequency, Turbo up to 3.6 GHz, 145 W TDP)
Total Cores: 145,152
Memory: DDR4-2400 ECC single-rank, 64 GB per node, with 3 high-memory E-Cells having 128 GB per node, totaling 313,344 GB (see the consistency check below)
Topology: EDR Enhanced Hypercube
IB Switches: 224 units
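A quick consistency check on that memory total (a sketch; it assumes the 4,032 nodes are spread evenly across the 14 E-Cells, which isn't stated in the listing):

```python
# Check that the 313,344 GB figure is self-consistent: 64 GB/node everywhere,
# except 3 high-memory E-Cells at 128 GB/node. Assumes an even node split
# across the 14 E-Cells (288 nodes per cell) -- not confirmed by the listing.
nodes_per_cell = 4_032 // 14                 # 288
standard = 11 * nodes_per_cell * 64          # 202,752 GB
high_mem = 3 * nodes_per_cell * 128          # 110,592 GB
print(standard + high_mem)                   # 313,344 GB -- matches the spec
```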
Moving this system necessitates the engagement of a professional moving company. Please note that the four (4) attached documents detail the facility requirements and specifications. Due to their considerable weight, the racks require experienced movers equipped with proper Personal Protective Equipment (PPE) to ensure safe handling. The purchaser assumes responsibility for transferring the racks from the facility onto trucks using their own equipment.
Get little commemorative plaques for each one and sell for $200 each or so.
edit: it seems each motherboard is a dual-CPU board, so there are 4,032 nodes, but the nodes are in blades that likely need their rack for power. Still, I think individual cabinets would be cool to own.
There are 144 nodes per cabinet... so 28 cabinets. I'd pay a fair amount just to own a cabinet to stick in my garage if I was near there.
Using them as desktop PCs would likely be a challenge.
But you could probably make a decent profit on just the CPUs alone parted out, even with the moving/handling costs.
For privacy reasons, I won't say who originally owned the servers, but they had a cool custom paint job and were labelled YETI1 and YETI2. If the original owner is on HN, perhaps they will recognize the machines and provide more information.
https://slerp.xyz/img/misc/argo-lyra.jpg https://slerp.xyz/img/misc/argo-lyra-open.jpg
I wonder if there is a market for the SGI hardware.
The stuff I could quickly find was $1M+ in individual sales (not sure about auction sites and used).
DDR4 RAM: $734,400 @ $75 / 32 GB for 313,344 GB
CPUs: $483,840 @ $60 / (1) E5-2697v4 for 8,064 units
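For anyone checking the math (a sketch; it assumes the per-unit eBay prices quoted above and 32 GB sticks throughout, neither of which is guaranteed):

```python
# Back-of-the-envelope part-out value at the quoted eBay prices:
# $75 per 32 GB DDR4 stick, $60 per E5-2697v4.
sticks = 313_344 // 32                # 9,792 sticks
ram_value = sticks * 75               # $734,400
cpu_value = 8_064 * 60                # $483,840
print(f"RAM   ${ram_value:,}")
print(f"CPUs  ${cpu_value:,}")
print(f"Total ${ram_value + cpu_value:,}")   # ~$1.22M gross, before costs
```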
Figure the storage must also be enormous, though I'm not sure of the quantity, and the drives may not be sold with the system.
They're not kidding on the moving company; that's tens of tons of computer to haul somewhere: 28 racks @ ~2,500 lbs each. That means multiple semi-trucks, since the most they can usually haul is 44,000 lbs on US highways. Not sure if the E-Cell weight of 14 @ 1,500 lbs is in addition to the 28 @ 2,500 lbs? (See the sanity check below.)
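Rough weight arithmetic (a sketch; it assumes the E-Cell and rack weights are additive, which the ~95,000 lb total quoted elsewhere in the thread suggests):

```python
# Total shipping weight if E-Cells and racks are counted separately.
e_cells = 14 * 1_500            # 21,000 lbs
racks = 28 * 2_500              # 70,000 lbs
total = e_cells + racks         # 91,000 lbs, before switches/PDUs/cabling
trucks = -(-total // 44_000)    # ceiling divide by the 44,000 lb highway limit
print(total, trucks)            # 91,000 lbs -> 3 truckloads
```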
https://cowboystatedaily.com/2024/05/07/cheyenne-supercomput...
Can you imagine the RAMDisk? Yes, you can. Especially in 20 years when it will be the norm. And also the Windows version that will require half of it in order to run /s
It has a peak compute of 5.34 petaflops, 313TB of memory, and gobbles 1.7MW.
The speed of processing and interconnect is orders of magnitude faster for an A100 cluster - one 8-GPU pod server will cost around $200k, so around $600k more or less beats the supercomputer's performance (the prices I'm finding seem wildly variable, please correct me if I'm wrong).
The 5.6 PF you quote for 18 A100s would be in BF16. Not comparable.
The A100 can only do 9.746 TFLOPS in FP64.
So you would need 548 A100s to match the FP64 performance of Cheyenne.
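The arithmetic, for anyone following along (a sketch using the figures quoted in this subthread: 9.746 TFLOPS FP64 per A100, and 312 TFLOPS per A100 for BF16 tensor-core peak):

```python
# FP64 vs BF16 comparison against Cheyenne's 5.34 PF peak.
import math

cheyenne_tf = 5.34 * 1_000              # 5,340 TFLOPS
a100_fp64_tf = 9.746                    # non-tensor-core FP64 per A100
print(math.ceil(cheyenne_tf / a100_fp64_tf))   # 548 A100s to match in FP64

a100_bf16_tf = 312                      # BF16 tensor-core peak per A100
print(18 * a100_bf16_tf / 1_000)        # ~5.6 PF -- the BF16 number above
```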
The Pacific Locomotive Association is an organization with that problem. About 20 locomotives, stored at Brightside near Sunol. They've been able to get about half of them working. It's all volunteer. Jobs that took days in major shops take years in a volunteer operation.
For anyone unfamiliar, pattern makers would make wooden model versions of parts that were to be cast in metal. The pattern would be used to make the mold. He could use these various sanding machines to get 1/64" precision for complex geometries. It was fascinating to watch how he approached things, especially in comparison to modern CNC.
His major project outside of teaching the classes? Making patterns for a local steam locomotive restoration project. He had all these wooden versions of various parts of a locomotive sitting around.
I was surprised to see such an old locomotive in operation, but apparently it's still good (i.e. economic) enough to shuttle cars from factory to port. Guess the air pollution restrictions aren't all that tough for diesel trains.
EDIT: didn't realize this is big business in Germany: https://railmarket.com/eu/germany/locomotive-rental-leasing
— if the best stuff from then is still better than good stuff from now, it's art
— if the best stuff from then is worse than bad stuff from now, it's technology
- If it's useful as a medium for money laundering, it's art.
- If it's useful as a facilitator for money laundering, it's technology.
Even the RAM has aged out...
Very hard to justify running any of this; newer kit pays for itself quickly in reduced power and maintenance costs.
I lusted after a Cray T3E once that I coulda had for $1k plus trucking it across TN and NC; but even then I couldn't have run it. I'm two miles away from 3-phase power, and even then I couldn't have justified the power budget. At the time a slightly-less-scrap UltraSPARC 6k beat it on running costs even with higher initial costs, so I went with that instead. I did find a bunch of Alphas to do the byte swizzling tho. Ex-"Titanic" render farm nodes.
I've been away from needing low-budget big compute for a while, but having spent a few years watching the space I still can't help but go "ooo neat" and wonder what I could do with it.
I worked for an auction, and sellers accepted bids below the reserve price all the time. They just want to avoid a situation where an item sells at a “below market” price due to not having enough bidders in attendance - e.g. a single bidder is able to win the auction with a single lowball bid. If they see healthy bidding activity that’s often sufficient to convince them to part with the item below reserve.
Reserve prices are annoying for buyers, but below-reserve bids can provide really useful feedback for sellers.
We even had full-time staff whose job was to contact sellers after the auction ended and try to convince them to accept a below-reserve bid, or try to get the buyer and seller to meet somewhere in the middle. This worked frequently enough to make this the highest ROI group in our call center.
Also, consider this: if the reserve is too high, and no one bids on it, then everyone looking at it is going to wonder what it is really worth. If there are several other bidders, then that gives reassurance to the rest for the price they each are bidding.
It's the same reason an auction can go above the actual value of the thing: you get invested in your $x bid, so $x+5 doesn't feel like paying $x+5, but like "only $5 more to preserve your win".
See penny auction scams - https://utahjustice.com/penny-auction-scams - for an extreme example.
I've always been fascinated by the supercomputer space, in no small part because I've been sadly somewhat removed from it; the SGI and Cray machines are a bit before my time, but I've always looked back in wonder, thinking of how cool they might have been to play with back in the 80s and 90s.
The closest I get to that now is occasionally getting to spin up some kind of HPC cluster on a cloud provider, which is fun in its own right, but I don't know, there's just something insanely cool about the giant racks of servers whose sole purpose is to crunch numbers [1].
[1] To the pedants: I know all computers' job is to crunch numbers in some capacity, but a lot of computers and their respective operating systems like to pretend that they don't.
> The Cheyenne supercomputer at the NCAR-Wyoming Supercomputing Center (NWSC) in Cheyenne, Wyoming began operation as one of the world’s most powerful and energy-efficient computers. Ranked in November 2016 as the 20th most powerful computer in the world by Top500, the 5.34-petaflops system is capable of more than triple the amount of scientific computing performed by NCAR’s previous supercomputer, Yellowstone. It also is three times more energy efficient than Yellowstone, with a peak computation rate of more than 3 billion calculations per second for every watt of energy consumed.
There was a company that made a card that connected AMD servers like that. I don't know if such tech ever got down to commodity price points. If you had InfiniBand, there were also Distributed Shared Memory (DSM) libraries that simulated such machines on clusters. Data locality was even more important then, though.
https://www.nextplatform.com/2015/07/16/what-if-numa-scaling...
This uses 1.7 MW, or about $6k per day of electricity. It would take only about four months of powering this thing to pay for 2,000 5950X processors. Those would have similar compute power to the 8,000-odd Xeons in Cheyenne but consume about a quarter of the power.
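The numbers behind that (a sketch; the electricity rate is just what "$6k/day at 1.7 MW" implies, and the ~$360 5950X street price is my assumption, not from the listing):

```python
# Power cost vs. replacement-hardware cost.
kwh_per_day = 1_700 * 24              # 40,800 kWh/day at 1.7 MW
cost_per_day = kwh_per_day * 0.147    # ~$6,000/day at ~$0.147/kWh
ryzen_outlay = 2_000 * 360            # $720,000 for 2,000 5950Xs (assumed price)
print(round(ryzen_outlay / cost_per_day))   # ~120 days, i.e. about four months
```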
It would be neat if a subset of this could be made operational and booted once in a while at a computer history museum. I agree that it doesn't make sense to actually run it.
https://seattle.gov/city-light/business-solutions/business-b...
I think mining crypto with these would burn far too much energy compared to the ASICs in use.
ex: https://wccftech.com/iconic-5-34-pflops-cheyenne-supercomput...
Those alone are selling for about $40 on ebay.
Let's say you sold them for $15 each.
$120,000. Let's say the auction, the moving, and the teardown costs were $20,000.
Maybe worth it?
For example, in April ebay sold 79 of them. You're looking at ~100 months. And that's assuming constant demand.
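That works out as follows (a sketch; it assumes the April sell-through rate holds, which it almost certainly wouldn't once you flood the market):

```python
# Time to clear 8,064 CPUs at the observed eBay rate of 79/month.
print(round(8_064 / 79))   # ~102 months, call it 100
```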
In total, Cheyenne weighs in at around ~95,000 lbs. They require it to be broken down on site in three business days or less, and only offer 8 AM - 4 PM access. Therefore you'll need to support unbolting and palletizing all the cabinets for loading into a dry trailer, get a truck to show up, load, then FTL to the storage (or reinstall) location of your choosing for whatever's next.
Requirements for getting facility access seem to mean you bring both labor and any/all specialized equipment, including protective equipment to avoid damaging their floors while hauling away those 2,400 lb racks. You'll also need liability insurance of $1-2M across a wide range of categories (this isn't particularly unusual, or expensive, but it does mean you can't just show up with some mates and a truck you rented at Home Depot).
I'd guess you're looking at more like $25k just to move Cheyenne via truck, on top of whatever it sells for at auction, unless you are located almost next door to it in Wyoming. Going from WY to WA, the FTL costs alone would be $6-7k USD just for the freight (driver, dry trailer, and fuel surcharges); add a bit if your destination doesn't have a loading dock and needs transloading for the last mile onto one or more trucks (probably) equipped with tail lifts. All the rest is finding a local contractor with semi-skilled labor and equipment to break it down and get it ready to load on a truck, plus covering the "job insurance".
Warehouse costs if you're breaking it for parts won't be trivial either, and you'd be flooding the used market (unfavorably to you) were you to list 8,000 CPUs or 300 TB of DDR4; it could take months or even years to clear the parts without selling at a substantially depressed price.
It will probably take several thousand hours of labor to break this down for parts, assuming you need to sell them "Tested and Working" to achieve the $50/CPU unit pricing another commenter noticed on eBay; "Untested; Sold as Seen" won't get anywhere near the same $/unit (for CPUs, DRAM, or anything else). So even assuming $25/hr for fully burdened, relatively unskilled labor, you could well be talking up to $100k in labor to break, test, and list Cheyenne's guts on eBay, or even to sell onward in lots to other recyclers / used-parts resellers. I don't think I could even find $25/hr labor in WA capable of and willing to undertake this work; I fear it'd be more like $45-60/hr in this market in 2024 (and this alone makes the idea of bidding unviable). Rough numbers in the sketch below.
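To put rough numbers on that (a sketch; the per-unit handling times are hypothetical assumptions, not from any listing or quote):

```python
# Hypothetical teardown labor: pull, test, photograph, and list each part.
cpu_hours = 8_064 * 15 / 60    # ~2,016 hrs at 15 min per CPU (assumed)
dimm_hours = 9_792 * 5 / 60    # ~816 hrs at 5 min per DIMM (assumes 32 GB sticks)
hours = cpu_hours + dimm_hours # ~2,832 hrs, and that's CPUs and RAM only

for rate in (25, 45, 60):      # $/hr scenarios from the comment above
    print(f"${rate}/hr -> ${hours * rate:,.0f}")
# $25/hr -> $70,800   $45/hr -> $127,440   $60/hr -> $169,920
```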
A lot of the components like the Cooling Distribution Units are, in my opinion, worth little more than scrap metal unless you're really going to try and make Cheyenne live again (which makes no sense, it's a 1.7MW supercomputer costing thousands of dollars per day to power, and which has similar TFLOPs to something needing 100x less power if you'd bought 2024-era hardware instead).
Anything you ultimately can't sell as parts or shift as scrap metal (the leftovers) you are potentially going to have to junk (at cost) or send to specialist electronics recycling (at cost); your obligations here are going to vary by location, and this could also be expensive.
If anyone does take the plunge, please do write up your experiences!
It's just a question of whether you want to add air and refrigerant into the mix.
It seems they're decommissioning it partly due to "faulty quick disconnects causing water spray" though, so an air cooling stage would have had its benefits...
In the right climate, and at the right power density, you can use outside air for cooling, at least part of the time. That's unlikely at this scale of machine, but there was a lot of work in the 2010s on datacenter siting to find places where ambient cooling would significantly reduce the power needed to operate.
With a water-cooled setup you can move a lot more heat through your pipes just by increasing flow rate, so you need pumps instead of fans. And now your machine room isn't a mini-hurricane, and you can more flexibly deal with the waste heat.
Also, they likely had infrastructure available from the nuclear power they use.
Let's say you just wanted to have this transported to a warehouse. How much are we talking, between the transportation cost and the space to store it?
Take the Dell T7910, for example (I use one of these for my homelab): you can pick up a basic one with a low-end CPU and RAM for sometimes as little as $300. Dumping all these 18-core E5s and DDR4 ECC on the market should make it even cheaper to spec out. Currently they go for about $100-150 each for the CPUs, and ~$150-200 for the RAM. Not bad IMO.
Sorry I couldn't resist.