> The strange CPU core layout is causing power problems; Radxa and Minisforum both told me Cix is working on power draw and on enabling features like ASPM. It seems that for stability, and to keep core-to-core memory access working with the big.medium.little core layout, Cix wants to keep the chip powered fairly high. 14 to 17 watts idle is beyond even modern Intel and AMD!
Admittedly, the 10G interfaces and fast RAM make up for some of it, but at least for a normal homelab setup, I can't think of an application needing RAM faster than even DDR3, especially at this power level.
A base Mac Mini (256GB/16GB) would cost me €720 while a Minisforum MS-R1 (1TB/32GB) would cost me €559 (minus a 25 euro discount for signing up to their newsletter if you accept that practice).
Price to performance the Apple solution may be better, but the prices aren't similar at all.
Upgrading the Mac to also feature 1TB of storage and 32GB of RAM, the price rises by a whopping €1000 to €1719.
I did not realize the EU versions were that much more expensive.
I do agree about the RAM/storage prices though. It's only worth it if you want the raw power, where the Mac handily beats this.
MinisForum makes disposable hardware. We used to use them for TV computers at work, and while they are cheap, they are fidgety with hardware and drivers, come with a hacked Windows Enterprise install by default, and generally last about 2 years before they hit the recycle pile.
Go for the Mac Mini; the hardware, including thermals, is built exceptionally well. That's why you still see 20-year-old Mac Minis running as home servers.
It runs Docker (including Docker Compose) and VMs, and has the usual RAID stuff.
They also do an ARM version for half the price, but I wanted the Intel GPU for transcoding.
The 10 GBit/s NUCs you find on eBay are enterprise-grade stuff: 10 Gbit/s hasn't really been a consumer thing. A used Fujitsu, Intel or Mellanox dual 10 Gbit/s bought on eBay isn't a "stained shit" that's "not guaranteed to be reliable". It's enterprise grade hardware.
(that said the machine in TFA looks nice)
Since this server seems to have pretty average performance/watt and cooling, I can't really see much advantage to ARM here, at least for typical server use cases.
Unless you're doing ARM development, but I feel like a Pi 4/5 is better for basic development.
An older but better ARM CPU with four Cortex-A78 cores (Armv8.2-A ISA) is available for embedded computers from Qualcomm, rebranded from Snapdragon to Dragonwing. There are a few credit-card-sized single-board computers with it, which are much faster than Raspberry Pi and the like.
Such SBCs are cheaper than the one from TFA and they are better for the purpose of software development.
The computer described in this article has the advantage of better I/O: the SoC has many more PCIe lanes, which allows the computer to have more and faster network interfaces.
If you want an ARM computer to be a true high-throughput network server, then this one is the best choice. Nevertheless, for a true network server, a mini-PC with an Intel or AMD CPU will have much, much better performance, at the same price or even a lower one.
Using ARM is justifiable only for the purpose of software development, or if you want a smaller volume and a lower power consumption than achievable by a NUC-sized computer. For these purposes, one of the SBCs with Qualcomm QCM6490 is a better choice.
While a credit-card-sized SBC has only one Ethernet port, you can connect as many Ethernet interfaces as you desire (using a USB hub and USB Ethernet adapters), as long as network throughput is not important and you just want to test some server software.
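For what it's worth, Linux just treats each USB NIC as another network link. A rough sketch with the `iproute2` tools (the `enx...` interface name and addresses are examples; with systemd/udev, USB NICs typically get MAC-based names, but yours may differ):

```shell
# List interfaces in brief form; USB NICs usually show up as enx<mac>
ip -br link

# Bring an example USB NIC up and give it a test address
# (enx001122334455 and 192.168.10.1/24 are placeholders)
sudo ip link set enx001122334455 up
sudo ip addr add 192.168.10.1/24 dev enx001122334455
```

Throughput will be limited by the USB bus, but for functionally testing multi-interface server software that's usually fine.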
The Minisforum computer from the parent article has only 2 advantages for software development, the Armv9 ISA and being available with more memory, i.e. 32 GB or 64 GB, while the smaller ARM SBCs are available with 8, 12 or 16 GB.
The article never explained why the author wanted an ARM setup. I can only consider this a spiritual thing, just like how the author avoids Debian without providing any concrete explanations.
Does FreeBSD work better?
However, I'm not aware of any RK3588 vendor that both supports UEFI and has a full-size PCIe slot like the MS-R1 does.
8 Cortex-A720 vs. 4 Cortex-A76 means at least 3 times better performance for optimized programs.
Also for I/O throughput, this computer has far more fast PCIe lanes than RK3588, allowing many fast peripherals.
Minisforum probably reused the x86 power supply for ARM. The x86 MS-01 and MS-A2 support GPUs, after all.
I'm not a hardware engineer, I've failed miserably in software engineering and now run a VPS host.
Caveat: I'm frequently mistaken, always keen to learn and reduce the error between my perception and reality!
I’m curious how hard hosting VPS as a business was to get off the ground? I’ve worked 5 years previously as a Linux sysadmin, but am getting pretty bored at my current job (administering Cisco VOIP systems). Think I’d rather go back to that
My Beelink Me Mini has an integrated PSU. Actually same with the EQR6 I got too.
Otherwise I'd probably have a few machines from this company.
Why is Fedora not considered good for a server?
Fedora releases are only supported for about 13 months, whereas Debian/Ubuntu have 5 years and RHEL/Alma/Rocky have 10 years.
I could see maintenance burden being a fair point: you're pushed to upgrade the system between releases more often than with other distros.
You also can't fall behind on the release cycle, because their package repos drop old releases very quickly and you're left stranded.
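To be concrete about what that cadence means in practice: the in-place upgrade itself is scripted via the `dnf system-upgrade` plugin, but you still have to run it roughly every 6-13 months to stay on a supported release. A sketch (the `--releasever` value is just an example):

```shell
# Fedora in-place release upgrade via dnf-plugin-system-upgrade
sudo dnf upgrade --refresh                        # get current release fully up to date
sudo dnf install dnf-plugin-system-upgrade        # if not already installed
sudo dnf system-upgrade download --releasever=41  # stage packages for the target release
sudo dnf system-upgrade reboot                    # reboot and apply the upgrade offline
```

The mechanics are easy; the recurring cost is re-validating your apps against each new userland, which is exactly the burden people here are describing.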
A friend recently converted his Fedora servers to RHEL 10 because he has kids now and just doesn't have time for the release cycle. So RHEL, Debian, Alma, or Rocky offer a lot more stability and a smaller maintenance burden for people who have a life.
For myself I've had nothing but positive experiences running Fedora on my servers.
For servers at work, I tried running Fedora. The idea was that it would be easier to have small, frequent updates rather than large, infrequent ones. It didn't work: app developers never had enough time to port their stuff to new releases of the underpinning software, so we frequently had servers on unsupported OS versions. We gave up and switched to Rocky Linux. We're now in the process of upgrading the Rocky 8-based stuff to Rocky 9, which was released in 2022.
> I’ve always wanted an ARM server in my homelab. But earlier, I either had to use an underpowered ARM system, or use Asahi...
What is stopping you using Mac with MacOS?
With full disk encryption enabled you need a keyboard and display attached at boot to unlock it. You then need to sign in to your account to start services. You can use an IP based KVM but that’s another thing to manage.
If you use Docker, it runs in a VM instead of natively.
With a Linux-based ARM box you can use full disk encryption, use Dropbear to SSH in on boot to unlock the disks, run native Docker, run Proxmox, etc.
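The Dropbear unlock setup is only a few commands on Debian/Ubuntu. A sketch, assuming the `dropbear-initramfs` package and a LUKS-encrypted root (file paths are the Debian ones; the authorized_keys location has moved between releases, so check your version):

```shell
# Install the initramfs-integrated Dropbear SSH server
sudo apt install dropbear-initramfs

# Authorize your key for the early-boot SSH session
# (newer Debian: /etc/dropbear/initramfs/, older: /etc/dropbear-initramfs/)
echo "ssh-ed25519 AAAA... you@laptop" | sudo tee -a /etc/dropbear/initramfs/authorized_keys

# Rebuild the initramfs so Dropbear and the key are included
sudo update-initramfs -u

# After a reboot, from another machine:
#   ssh root@<server-ip>
#   cryptroot-unlock        # prompts for the LUKS passphrase, then boot continues
```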
Mac Minis/Studios have the potential to be great low-power home servers, but Apple is not going down that route for consumers. I'd be curious whether they use their own silicon and their own server-oriented distro internally for some things.
"On a Mac with Apple silicon with macOS 26 or later, FileVault can be unlocked over SSH after a restart if Remote Login is turned on and a network connection is available."
https://support.apple.com/guide/security/managing-filevault-...
The full disk encryption I can live without. I'm assuming these limitations don't apply if it's disabled. [Ah, I just saw the other reply that this has now been fixed]
I was aware of the Docker in a VM issue. I haven't tested this out yet, but my expectation is this can be mitigated via https://github.com/apple/container ?
I appreciate any insights here.
The root of trust for Private Cloud Compute is our compute node: custom-built server hardware that brings the power and security of Apple silicon to the data center, with the same hardware security technologies used in iPhone, including the Secure Enclave and Secure Boot.
https://security.apple.com/blog/private-cloud-compute/

Granted, I don't know if it's really server-oriented or if they're a bunch of iPhones on cards plugged into existing servers.
On the flip side, an M4 mini is cheaper, faster, much smaller (with built in power supply) and much more efficient. Plus for most applications, they can run in a Linux container just as well.
At that price, why not a mac mini running linux? I think (skimming Asahi docs) the only things that would give you trouble don't matter to the headless usecase here?