I will say that Intel has kind of made the original X Elite chips irrelevant with their Lunar Lake chips. They have similar performance/battery life, and run cool (so you can use the laptop on your lap or in bed without it overheating), but have full Linux support today and you don't have to deal with x86 emulation. If anyone needs a thin & light Linux laptop today, they're probably your best option. Personally, I get 10-14 hours of real usage (not manufacturer "offline video playback with the brightness turned all the way down" numbers) on my Vivobook S14 running Fedora KDE. In the future, it'll be interesting to see how Intel's upcoming Panther Lake chips compare to Snapdragon X2.
No gaming - and I came in knowing full well that a lot of the mainstream programs don't play well with snapdragon.
What has amazed me the most is the battery life, and the apparent absence of the lag and micro-stuttering that you get on some other laptops.
So, in all, fine for light use. For anything serious, use a desktop.
Many people including myself do serious work on a macbook, which is also ARM. What's different about this qualcomm laptop that makes it inappropriate?
Unfortunately, even Intel is moving in that direction with its "legacy free" push, but I wonder if that's also because they're trying to emulate the success of smartphone SoC vendors.
Instead I paid the premium for a nicely specced MacBook Pro, which is honestly everything I wanted, save for Linux support. At least it's proper Unix, so I don't notice much difference in my terminal.
Depends why the Snapdragon chips were relevant in the first place! I got an ARM laptop for work so that I can locally build things for ARM that we want to be able to deploy to ARM servers.
(I'm keen about ARM and RISC-V systems, but I can never actually justify them given the spotty Linux situation and no actual use case)
That isn't to say that modern standby/s2-idle isn't super useful, because it is, but more for actual use cases where the machine can basically go to sleep with the screen on displaying something the user is interacting with.
One thing that I find suspicious is the large single-thread score delta between ARM and x86 at the moment. Real-world performance does not suggest that big a difference: the benchmarks suggest a 25% delta, but in actual use it seems to be less than 10%. Of course, Apple Silicon has the efficiency crown very much locked down.
Since they have become a marketing target the benchmarks have become much less useful.
I am so grateful to the Asahi Linux guys who made this whole thing work. What a tour de force! One day, we'll get the M4 Mac Mini on Asahi and that will be far superior to this Snapdragon X Elite anyway.
I remember working on a Qualcomm dev board over a decade ago, and they had just the worst documentation. The hardware wouldn't even respond correctly to what you told it to do. I don't know if that's standard, but without the huge desire there is to run Linux on Apple Silicon, I wouldn't have anticipated support approaching what Asahi has on M1/M2.
For all the flak Qualcomm takes, they do significantly more than Apple to get hardware support into the kernel. They are already working to mainline the X2 Elite.
The difference is that Apple only makes a few devices and there is a large community around them. It would be far less work to create a stellar Linux experience on a Lenovo X Elite laptop than on a M2 MacBook. But fewer people are lining up to do it on Lenovo. We expect Lenovo, Linaro, and Qualcomm to do it for us.
Fair enough. But we should not be praising Apple.
They have little experience producing code of high enough quality that it would be accepted into the Linux kernel. They have even less experience maintaining it for an extended period of time.
"Linux on Snapdragon X Elite: Linaro and Tuxedo Pave the Way for ARM64 Laptops"
291 points, 217 comments
If you want to change some settings of the device, you need to use their terrible Electron application.
Linux OTOH can only use the information it has from ACPI to accomplish things like CPU power states, etc. So you end up with issues like "the fans stop working after my laptop wakes from sleep" because of a broken ACPI implementation.
There are a couple of laptops with excellent battery life under Linux though, and if you can find a Lunar Lake laptop with an iGPU and IPS screen, you can idle around 3-4W and easily get 12+ hours of battery.
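Those runtime numbers are easy to sanity-check with simple division; here's a minimal sketch (the 55 Wh battery capacity is my assumed figure for a typical thin-and-light, not from the comment above):

```python
def runtime_hours(battery_wh: float, avg_draw_w: float) -> float:
    """Estimate battery runtime: capacity (Wh) divided by average draw (W)."""
    return battery_wh / avg_draw_w

# Assumed: 55 Wh battery idling at 4 W.
print(runtime_hours(55, 4))  # -> 13.75, consistent with "12+ hours" at 3-4 W
```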
It's extremely dependent on the hardware and driver quality. On ARM and contemporary x86 that's even more true, because (among other things) laptops suspend individual devices ("suspend-to-idle" or "S0ix" or "Modern Standby"), and any one device failing to suspend properly has a disproportionate impact.
That said, to a first approximation, this is a case where different people have wildly different experiences, and people who buy high-end well-supported hardware experience a completely different world than people who install Linux on whatever random hardware they have. For instance, Linux on a ThinkPad has excellent battery life, sometimes exceeding Windows.
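For what it's worth, you can check which suspend flavor your kernel is using via /sys/power/mem_sleep, where the bracketed entry is the active mode. A small sketch (the parsing helper is my own illustration, not a standard tool):

```python
def parse_mem_sleep(contents: str):
    """Parse /sys/power/mem_sleep: modes are space-separated and the
    active one is wrapped in brackets, e.g. '[s2idle] deep'."""
    tokens = contents.split()
    active = next(t.strip("[]") for t in tokens if t.startswith("["))
    return [t.strip("[]") for t in tokens], active

# On a Modern Standby laptop this typically reads '[s2idle] deep' or '[s2idle]'.
try:
    with open("/sys/power/mem_sleep") as f:
        print(parse_mem_sleep(f.read()))
except FileNotFoundError:
    pass  # not Linux, or a kernel without suspend support
```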
If you have a dGPU, the Linux implementation of power management and offloading actually consumes more power than Windows does, due to bad architectural design. Here is a talk from XDC2025 that plans to fix some of the issues: https://indico.freedesktop.org/event/10/contributions/425/
Desktop usage is a third-class citizen under Linux (servers first, embedded a distant second). Phones have good battery life because SoC and ODM engineers spend months tuning them, and they have first-party proprietary drivers. None of the laptop ODMs do such work to support Linux. Even their Windows tooling is arcane.
Unless users get drivers for all the minute PMICs and sensors, you'll never get the battery life you can get from a clean Windows install with all the drivers. MS and especially OEMs shoot themselves in the foot by filling the base OS with so much bloat that Linux actually ends up looking better compared to stock OEM installs.
I think I'm arguing it's both: the OS itself can optimize things for battery life, while also instilling awareness and providing API support so developers can consider it too.
This meant that by the time they started pushing devs to pay attention to QoS and such, good Mac apps had already been thoroughly multithreaded for years, making it relatively easy to toss things onto lower priority queues.
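The "toss things onto lower priority queues" pattern can be sketched in plain Python with a priority queue feeding a worker thread; this is only an illustration of the idea, not how libdispatch/GCD is actually implemented:

```python
import queue
import threading

URGENT, UTILITY, BACKGROUND = 0, 1, 2  # lower number = scheduled first

work = queue.PriorityQueue()
results = []
counter = 0  # tie-breaker so equal-priority jobs stay FIFO

def submit(prio, job):
    global counter
    work.put((prio, counter, job))
    counter += 1

def worker():
    while True:
        prio, seq, job = work.get()
        if job is None:  # sentinel: shut down
            break
        results.append(job())

# Enqueue a background indexing task, then an urgent UI task.
submit(BACKGROUND, lambda: "reindex")
submit(URGENT, lambda: "redraw")
work.put((99, 10**9, None))  # sentinel sorts last

t = threading.Thread(target=worker)
t.start()
t.join()
print(results)  # -> ['redraw', 'reindex']: urgent work ran first
```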
Try writing Apple Watch software.
Everything is about battery life.
That's in an extremely vanilla Debian stable install, running in the default "Balanced" power mode, without any power-related tuning or configuration.
That compares reasonably well with my 14" M3 Macbook Pro, which seems to be drawing around 3.5 W with a similar set of apps open.
Sure, the XPS is flattered in this comparison because it has a slightly smaller screen, but even accounting for that it would still be... fine? Easily enough to get through a full day of use, which is all I care about.
There's nothing special about this XPS, and I'd expect the Thinkpad models that have explicit Linux support to be equally fine. The key point is that the vendor has put some amount of care and attention into producing a supportable system.
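If you want to reproduce that kind of measurement, the kernel exposes instantaneous battery draw via sysfs; power_now is reported in microwatts. A minimal sketch (note some batteries expose current_now/voltage_now instead, which this ignores):

```python
import os

def battery_draw_watts(supply="BAT0", base="/sys/class/power_supply"):
    """Read instantaneous battery draw in watts from sysfs.
    power_now is in microwatts; returns None if the node is absent."""
    path = os.path.join(base, supply, "power_now")
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return int(f.read()) / 1_000_000

print(battery_draw_watts())
```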
But that's just one problem, I bet.
I personally never tested it, and I can't find definite benchmarks that confirm and measure the waste.
The comment about ACPI being the problem is slightly off base, since it's a huge part of the solution to good power management on modern hardware. There isn't another specification that allows the kind of fine-grained background power tuning of random busses/devices/etc. by tiny management cores whose entire purpose is monitoring activity and making the adjustments modern machines require. If one goes the DT route, as QC has done here, each machine needs a huge pile of custom mailbox-interface drivers upstreamed into the kernel, customized for every device and hardware update/change. They get away with this in the Android space because each device is literally a customized OS, and they don't have the upstream-turnaround problem because they don't upstream any of it. But that won't scale for general-purpose compute, as the parent article points out.
My wife is very sensitive to glossy screens, and we are having a hard time finding a new laptop for her, as most good ones are glossy now.
Apparently the Windows exclusivity period has ended, so Google will support Android and ChromeOS on Qualcomm X2-based devices in 2026, https://news.ycombinator.com/item?id=45368167
Generally, they are far nicer than Qualcomm when it comes to supporting standard technology.
https://news.ycombinator.com/item?id=45938410
BTW. I don't think Qualcomm SoCs running Windows was just about performance but more of a time-limited exclusivity deal with MS.
My guess is that it's the same as with Linux phones: there are large driver blobs supplied by the board producer that aren't open. But then... maybe we should invest time in microkernels? Maybe Linux is a dead end because of its monolithic architecture? Because I doubt the big companies will change...
Google has already built Chromebooks (which are Linux based) on them, so presumably the necessary drivers exist.
Outside of laptops, NVidia sells its Jetson Devkits and DGX workstations which run Linux and are pretty fast and ARM based.
And System76 also sells a high powered (and $$$) Linux workstation based on an NVidia ARM chipset
So at least for some ARM SOCs, performance issues have largely been solved.
First of all, the userspace is completely different; secondly, Android has over the years been aggressively changing the way background processes work (in the context of Android activities, not bare-bones UNIX), so it isn't the same as GNU/Linux, where anything goes.
Google and Samsung managed to make very successful Chromebooks together, but IIRC there was a bunch of back and forth to make the whole thing boot quickly and sip battery power.
For example, I had this Dell Elitebook where I installed Debian, wiping out Windows. On Windows, the system prompted for a BIOS update practically every week, but it's been years on Linux on the same BIOS. IIRC the updates were Windows-only, or required jumping through some complex rings of fire. Haven't bothered looking it up in a while...
I also had to disable some security protections before I could install Debian, though I guess there's a way around that if I research hard enough.
I was disappointed to see that no good Linux-compatible XPS is available anymore, because they are now based on the latest Snapdragon for bullshit Windows "AI" reasons.
The hard part isn't the money - it's identifying an addressable market that makes the investment worthwhile and assembling a team that can execute and deliver on it.
The market can't be a few hundred enthusiasts who want to spend $10k on a laptop. It has to be at least tens of thousands who would spend $1-2k. Even that probably won't get you to break-even when you consider the size (and specialty) of the team you need to do all this.
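To put rough numbers on that, here's a back-of-envelope sketch; every figure below is an assumption for illustration, not data from anywhere:

```python
# All figures are illustrative assumptions.
engineers = 15
cost_per_engineer = 300_000   # fully loaded, per year
years_to_ship = 3
dev_cost = engineers * cost_per_engineer * years_to_ship   # $13.5M

units_sold = 20_000
margin_per_unit = 300         # margin on a $1-2k laptop after BOM/logistics
gross_margin = units_sold * margin_per_unit                # $6.0M

print(gross_margin - dev_cost)  # -> -7500000: ~$7.5M short of break-even
```

Under these assumptions, even a successful niche run of 20k units doesn't cover the team that built it, which is the point about addressable market size.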
Only RISC-V is worth switching to.
Here is a list of major ARM licensees, categorized by the type of license they typically hold.
1. Architectural Licensees (Most Flexible)
These companies hold an Architectural License, which allows them to design their own CPU cores (and often GPUs/NPUs) that are compatible with the ARM instruction set. This is the highest level of partnership and requires significant engineering resources.
Apple: The most famous example. They design the "A-series" and "M-series" chips (e.g., A17 Pro, M4) for iPhones, iPads, and Macs. Their cores are often industry-leading in single-core performance.
Qualcomm: Historically used ARM's core designs but has increasingly moved to its own custom "Kryo" CPU cores (which are still ARM-compatible) for its Snapdragon processors. Their recent "Oryon" cores (in the Snapdragon X Elite) are a fully custom design for PCs.
NVIDIA: Designs its own "Denver" and "Grace" CPU cores for its superchips focused on AI and data centers. They also hold a license for the full ARM architecture for their future roadmap.
Samsung: Uses a mixed strategy. For its Exynos processors, some generations use semi-custom "M" series cores alongside ARM's stock cores.
Amazon (Annapurna Labs): Designs the "Graviton" series of processors for its AWS cloud services, offering high performance and cost efficiency for cloud workloads.
Google: Has developed its own custom ARM-based CPU cores, expected to power future Pixel devices and Google data centers.
Microsoft: Reported to be designing its own ARM-based server and consumer chips, following the trend of major cloud providers.
2. "Cores & IP" Licensees (The Common Path)
These companies license pre-designed CPU cores, GPU designs, and other system IP from ARM. They then integrate these components into their own System-on-a-Chip (SoC) designs. This is the most common licensing model.
MediaTek: A massive player in smartphones (especially mid-range and entry-level), smart TVs, and other consumer devices.
Broadcom: Uses ARM cores in its networking chips, set-top box SoCs, and data center solutions.
Texas Instruments (TI): Uses ARM cores extensively in its popular Sitara line of microprocessors for industrial and embedded applications.
NXP Semiconductors: A leader in automotive, industrial, and IoT microcontrollers and processors, almost exclusively using ARM cores.
STMicroelectronics (STM): A major force in microcontrollers (STM32 family) and automotive, heavily reliant on ARM Cortex-M and Cortex-A cores.
Renesas: A key supplier in the automotive and industrial sectors, using ARM cores in its R-Car and RA microcontroller families.
AMD: Uses ARM cores in some of its adaptive SoCs (Xilinx) and for security processors (e.g., the Platform Security Processor or PSP in Ryzen CPUs).
Intel: While primarily an x86 company, its foundry business (IFS) is an ARM licensee to enable chip manufacturing for others, and it has used ARM cores in some products like the now-discontinued Intel XScale.
None of these companies is able to license cores to third parties.
Only ARM can do that. ARM holds a monopoly.
>this is from DeepSeek, ymmv
DeepSeek would have told you this much, given the right prompt. Confirmation bias is unfortunately one hell of a bias.