If servers can run on cheap ARM hardware that uses less electricity and runs far cooler without much of a performance difference, that's a massive improvement. Electricity savings would more than justify a switch. Apple has shown that it's possible at the desktop/laptop level. Now they or someone else can prove it works at the server level as well.
Arm servers have been sold for some years now. AWS, Oracle, and others have already made the switch. And the fastest supercomputer (Fugaku, in Japan) uses ARM.
The fact that Apple is making ARM mainstream means that if I switch to ARM, so will a lot of others, all working to make the ARM port of my stack a first-class consideration.
Suddenly the Graviton cores AWS has are becoming a whole lot more viable. I personally plan to move our API instances over to them once the Erlang VM's ARM port matures a bit.
Phones are a bit of a mixed bag, but their computing gear has been a bit old hat ever since the 1980s.
The M1 is the first time since PowerPC they took a risk. My guess is that it will turn out the same as PowerPC. I really want a Linux-powered ARM workstation, but Apple aren't going to make that.
PowerPC was developed by IBM. Apple got involved when Motorola could not deliver a faster 68k to beat Intel's 486 which was passing 50MHz. So they joined IBM along with Motorola and formed the AIM alliance (Apple, IBM, Motorola) to build better processors.
> The M1 is the first time since PowerPC they took a risk.
There's little to no risc (buh-dum-tish) in moving to Arm these days. It's a well-supported, well-understood architecture found everywhere.
> I really want a Linux-powered ARM workstation, but Apple aren't going to make that.
Apple has the power to steer its own ecosystem. When AMD was working on its Arm A-series server processors I kept thinking "This sounds like a backwards approach destined to fail. Why not start by making a performant Arm SoC with 2-4 cores and a GPU with a TDP of 10-20W? Target it at consoles/TV/laptops/desktop computing and jump-start the desktop Arm market, which will naturally lead to demand for Arm servers." The idea was an Arm SoC that could fill the gap between the low-power/performance Arm SoCs for mobile/embedded and the power-hungry yet performant x86 chips. Basically an AMD version of the M1. That could have really changed things, but the big issue AMD would face is: where's the Arm desktop software ecosystem? That's why Apple can take these "risks": they have full control over the whole stack.
The complexity of developing validation test suites for PowerPC alone would have sucked up all the software resources inside Motorola at the time, as all the old 68k OSes, support code, etc. had to be rebuilt from scratch for PowerPC. Not at all a small undertaking.
The 6502 was introduced at Wescon in September 1975.
Apple I was out in April 1976. That's seven months.
The KIM-1, a board made by MOS to demonstrate their 6502 chip, was also released in April 1976. Even they didn't beat Apple to it.
The Commodore PET was December 1977. The Rockwell AIM-65 was 1978. The Acorn System 1 was March 1979. The Atari 400 was November 1979.
In short: I don't know what the heck you're talking about.
Similarly, the Lisa was a very early 68000 machine. The Amiga and Atari ST were years after the Lisa and Mac. Only very expensive workstations from HP, Apollo and Sun were before the Apple Lisa.
If the pattern holds, the M1 is due to be a dead end.
No one got a Newton, but every mid-boss or manager got a Palm.
Not always. Firewire, for example.
Besides that, it's fun to debug with on older systems, or even new ones if they have it. If not, it can be added for anywhere between 30 and 50 universal credit units, via FireWire 800 cards with 1 to 4 ports in PCIe x1 to x4. It's fun to have in a homelab. You can even tunnel IP over it.
All of these ARM server chips are going to be very low in the pecking order for TSMC fab time, so they won't have the edge that Apple had by buying up the first slot on the latest manufacturing node. Even being a process node ahead, the M1 only just squeaks by with single-threaded performance comparable to Zen 3 cores, so ARM still has some catching up to do in the performance realm.
I don't think this is true. It is not only power consumption but also power backup: if you can use smaller diesel generators and smaller battery packs, this will lower the cost.
And why do you think datacenters have been raising temperatures? It saves a ton of money: https://www.datacenterknowledge.com/archives/2008/10/14/goog... HP estimated they saved 8 million. That is not a rounding error.
A server that uses less power will also generate less heat.
I have to admit it's been years since I worked for a company that owned datacenters, but at that time the highest costs were always power and connectivity.
100,000 sq. ft. is roughly enough space for about 250,000 1U servers. Say conservatively the servers are in the ballpark of $2,000 each. That's $500 million just in server hardware costs. If the total electric cost is around 3*$8 = ~$24 million over their lifetime, then we're talking about <5% of just the server expenses. Never mind the facility and staff costs (and, as you've said, connectivity).
So maybe not a rounding error, but way down the priorities list.
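Spelled out, that arithmetic looks like this; a back-of-the-envelope sketch in Python using only the parent's ballpark figures (which are of course rough):

    # Rough share of electricity vs. server hardware cost,
    # using the parent comment's ballpark figures.
    servers = 250_000                 # ~100,000 sq. ft. of 1U servers
    hardware = servers * 2_000        # $2,000 per server -> $500M
    electricity = 3 * 8_000_000       # ~$24M over the fleet's lifetime
    print(f"hardware:    ${hardware:,}")                 # $500,000,000
    print(f"electricity: ${electricity:,}")              # $24,000,000
    print(f"share:       {electricity / hardware:.1%}")  # 4.8%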
There's a caveat to that in the planning phase of a data center: A lot of the site infrastructure costs are a function of the total power requirement. So if - when building out a data center - you can get more power efficiency, then that does translate to a significant cost savings.
Cooling tends to be somewhere between a 20 and 60% add-on to the direct power consumption of a server [1].
1: https://www.missioncriticalmagazine.com/ext/resources/MC/Home/Files/PDFs/(TUI3011B)SimpleModelDetermingTrueTCO.pdf
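To make that add-on concrete, a small sketch of how it inflates a site's total power requirement (the 500 W per-server draw and the 250,000-server count are assumed example figures from elsewhere in the thread, not from the linked paper):

    # How a 20-60% cooling add-on inflates total site power,
    # assuming 250,000 servers drawing 500 W each.
    servers = 250_000
    watts_each = 500                          # assumed per-server draw
    it_load_mw = servers * watts_each / 1e6   # 125 MW of direct IT load
    for overhead in (0.20, 0.60):
        total = it_load_mw * (1 + overhead)
        print(f"{overhead:.0%} cooling add-on -> {total:.0f} MW total")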
https://www.irishtimes.com/news/politics/data-centres-could-...
Norway seems poised to build out significantly. They're actively inviting datacenters [1]. They also seem to have lower electricity costs for large businesses [2] vs Ireland [3]. FAANG are already populating the Nordics too [4].
1: https://www.datacenterdynamics.com/en/news/norway-wants-data-centers-to-locate-there-and-be-more-sustainable-too/
2: https://www.statista.com/statistics/595859/electricity-industry-price-norway/
3: https://www.statista.com/statistics/595806/electricity-industry-price-ireland/
4: https://www.zdnet.com/pictures/the-nordic-datacenter-boom/
The same climate benefits apply to northern England and Scotland, but Great Britain's grid has around 10 times the capacity of Ireland's.
(Ireland's grid's peak demand was 6.8GW, Great Britain's 63GW.)
Although the same argument then applies for siting the datacentres in continental Europe, where the grid is around 667GW. Perhaps this is part of Facebook's reasoning for a second datacentre in Denmark [1].
(Most of Denmark, including Facebook's existing datacentre, is connected to the continental European grid. The eastern islands are part of the Scandinavian grid.)
[1] https://www.thelocal.dk/20211013/facebook-eyes-second-danish...
units '500W * 1 year * 0.20 €/kWh' '€'
€ 876
I'm sure businesses get a discount, but we haven't paid for cooling yet. This is hardly a rounding error.
Not as large as some may expect once you factor in power delivery guarantees and redundancy. But basically yes: every single hyperscaler has been pointing out electricity as one of the largest items in their TCO, second only to hardware cost.
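Putting those pieces together: a sketch extending the €876 figure with the 20-60% cooling add-on cited upthread (the 500 W draw and 0.20 €/kWh price are the thread's own numbers):

    # Per-server annual electricity, with and without cooling overhead.
    watts = 500
    price = 0.20                              # EUR per kWh
    base = watts / 1000 * 365 * 24 * price    # ~EUR 876/year, as above
    print(f"direct draw: EUR {base:,.0f}/year")
    for overhead in (0.20, 0.60):
        print(f"with {overhead:.0%} cooling: EUR {base * (1 + overhead):,.0f}/year")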