Looks like it’s just a display.
The consensus back then was that spending more time in low-power states (where you draw ~0 W) was much more efficient than spending longer in the CPU's efficiency sweet spot while keeping all sorts of peripherals online that you didn't need anyway.
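Back-of-the-envelope, with entirely made-up power numbers, the "race to idle" arithmetic looks like this:

    # Toy race-to-idle comparison over the same wall-clock window.
    # All figures are hypothetical, just to show the shape of the math.
    burst_power_w, burst_time_s = 4.0, 1.0   # full speed, job done in 1 s
    idle_power_w = 0.05                      # deep sleep afterwards
    slow_power_w, slow_time_s = 1.5, 4.0     # "sweet spot" clock, job takes 4 s

    window_s = 4.0
    race_energy_j = burst_power_w * burst_time_s + idle_power_w * (window_s - burst_time_s)
    slow_energy_j = slow_power_w * slow_time_s

    print(f"race to idle:  {race_energy_j:.2f} J")  # 4.15 J
    print(f"slow & steady: {slow_energy_j:.2f} J")  # 6.00 J

Whether this wins in practice depends on how deep the idle states actually are and whether the peripherals can power down with the CPU, which was exactly the mobile argument.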
I remember when Google made a big deal out of "bundling" idle CPU and network requests, since bursting them out was more efficient than having the radio and CPU trotting along at low bandwidth.
Manufacturers chase benchmark results from YouTubers and magazines. Even a few percent difference in framerate can be the difference between everyone telling each other to buy one motherboard, processor, or graphics card over another.
Amusingly, you often get better performance by undervolting and lowering the processor's power limits. This keeps temperatures low, so you don't end up with the PC equivalent of the "Toyota Supra horsepower chart" meme.
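The power-limit half of that is easy to script on Linux through the RAPL powercap interface (undervolting itself is more platform-specific: BIOS, or tools like intel-undervolt). A minimal sketch, assuming the intel-rapl driver is loaded and you're root; paths and writability vary by platform, and AMD's RAPL zones are often read-only:

    # Sketch: lower the long-term (PL1) package power limit via powercap.
    # The interface is in microwatts. Run as root.
    RAPL = "/sys/class/powercap/intel-rapl:0"  # package 0

    def set_long_term_limit(watts: int) -> None:
        with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as f:
            f.write(str(watts * 1_000_000))

    with open(f"{RAPL}/constraint_0_power_limit_uw") as f:
        print("current PL1:", int(f.read()) / 1e6, "W")

    set_long_term_limit(65)  # hypothetical 65 W cap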
1400W for a desktop PC is... crazy. That's a Threadripper processor plus a bleeding-edge, top-of-the-line GPU, assuming that's not just them reading the max power draw off the nameplate of the PSU.
If their PC is actually drawing that much power, they could save far more money, CO2, etc. by undervolting both the CPU and GPU.
OTOH, if it's something like realtime game rendering without a frame limiter, throttling would reduce the frame rate, reducing the total amount of work done, and most likely the total energy expended.
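On the GPU side, at least NVIDIA boards let you cap power from userspace with the stock nvidia-smi tool. A rough sketch (needs root, and each card has its own supported limit range; the 250 W figure is made up):

    # Sketch: cap an NVIDIA GPU's board power with nvidia-smi.
    # -pl sets the limit in watts; the query flag reports current draw.
    import subprocess

    subprocess.run(["nvidia-smi", "-pl", "250"], check=True)  # hypothetical cap

    draw = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw,power.limit", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(draw)  # e.g. "198.45 W, 250.00 W"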
The ideal form factor might be a smart plug itself, but I can't find any with hackable firmware that also does Matter/Thread/WiFi.
The current implementation uniformly sets max frequency for all 128 cores, but I'm working on per-core frequency control that would allow much more granular optimization. I'll definitely measure aggregate consumption with your suggestion versus my current implementation to see the difference.
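For anyone following along, the knob itself is just the per-cpu cpufreq sysfs files. A minimal sketch of a per-core cap, assuming a driver (acpi-cpufreq, amd-pstate, ...) that honors scaling_max_freq, run as root:

    # Sketch: per-core max-frequency caps via Linux cpufreq sysfs.
    # Values are in kHz.
    def set_max_freq(cpu: int, khz: int) -> None:
        path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_max_freq"
        with open(path, "w") as f:
            f.write(str(khz))

    # e.g. cap cores 64-127 while leaving 0-63 at full speed
    for cpu in range(64, 128):
        set_max_freq(cpu, 1_500_000)  # hypothetical 1.5 GHz cap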
Ideally these goals are balanced (in some 'efficient' way) against matching electricity prices. It's not either/or; you want to do both.
Besides better amortizing the embodied energy [1], improving compute utilization could also mean increasing the quality of the compute workloads, i.e. doing tasks with high external benefits.
Love this project! Thanks for sharing.
[1] https://forums.anandtech.com/threads/embodied-energy-in-comp...
The stuff the chip and motherboard do, completely built-in, is light-years ahead of what you're doing. Your power-saving techniques (capping max frequency) are more than a decade out of date.
You'll get better performance and power savings to boot.
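If you do want to script something, bias the built-in mechanisms instead of capping clocks. A minimal sketch, assuming intel_pstate or amd-pstate in active mode, which exposes a per-core energy/performance preference (run as root):

    # Sketch: tell the hardware's own power management to favor efficiency.
    # Accepted values include: default, performance, balance_performance,
    # balance_power, power.
    import glob

    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference"):
        with open(path, "w") as f:
            f.write("power")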
This core will get to sleep less than the others.
You can also use the CPU "geometry" (which cores share a cache) to set the max frequency on its cache neighbors first, before recruiting the other cores.
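The sharing information is exported in sysfs, so "which cores share this L3" is one file read away. A sketch, assuming the usual layout where index3 is the L3 (on EPYC these groups are the CCX neighbors):

    # Sketch: find the cores sharing a given core's L3 via sysfs cache
    # topology, so they can be throttled as a group.
    def l3_neighbours(cpu: int) -> list[int]:
        path = f"/sys/devices/system/cpu/cpu{cpu}/cache/index3/shared_cpu_list"
        with open(path) as f:
            spec = f.read().strip()  # e.g. "0-7" or "0,64"
        cpus: list[int] = []
        for part in spec.split(","):
            if "-" in part:
                lo, hi = part.split("-")
                cpus.extend(range(int(lo), int(hi) + 1))
            else:
                cpus.append(int(part))
        return cpus

    print(l3_neighbours(0))  # e.g. [0, 1, 2, 3, 4, 5, 6, 7]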
The dual EPYC CPUs (128 cores) in my setup have a relatively high idle power draw compared to consumer chips. Even when "idle" they consume significant power keeping all those cores and I/O capabilities powered. By uniformly throttling when utilization is low, the automation reduces the baseline power consumption by a decent amount without much of a performance hit.
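The logic is essentially a small control loop. A sketch of its shape (threshold, interval, and frequency caps are made up here, and real code should add hysteresis), using load average as the utilization signal:

    # Sketch: uniform throttle when mostly idle, restore when busy.
    import glob, os, time

    LOW_KHZ, HIGH_KHZ = 1_500_000, 3_700_000  # hypothetical caps
    NCPU = os.cpu_count()

    def set_all_max_freq(khz: int) -> None:
        for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq"):
            with open(path, "w") as f:
                f.write(str(khz))

    while True:
        load_per_core = os.getloadavg()[0] / NCPU
        set_all_max_freq(LOW_KHZ if load_per_core < 0.10 else HIGH_KHZ)
        time.sleep(30)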
We don't pay for electricity directly (it's included in the rackspace rental), but we could reduce our carbon footprint by adjusting the timing of batch processing, perhaps based on the carbon intensity APIs from https://app.electricitymaps.com/
The first step, though, will be to quantify the savings. From being in the datacentre when batch jobs kick off, I have the impression that they cause a significant increase in power use, but I have no numbers.
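If we do get there, the polling side looks simple. A sketch against what I understand to be their v3 carbon-intensity endpoint; zone, token, and threshold are placeholders, and the endpoint/field names should be checked against their docs:

    # Sketch: defer batch work when grid carbon intensity is high.
    import requests

    resp = requests.get(
        "https://api.electricitymap.org/v3/carbon-intensity/latest",
        params={"zone": "DE"},                     # hypothetical zone
        headers={"auth-token": "YOUR_API_TOKEN"},  # placeholder
        timeout=10,
    )
    resp.raise_for_status()
    g_per_kwh = resp.json()["carbonIntensity"]  # gCO2eq/kWh

    if g_per_kwh < 200:  # made-up threshold
        print("grid is relatively clean, start the batch jobs")
    else:
        print(f"deferring: {g_per_kwh} gCO2eq/kWh")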
Only really makes sense for learning or super confidential info.