Also, a significant proportion (majority?) of them will have just 8 GB of memory, which is not exactly sufficient to run any complex AI/ML workload.
I expect OSes will expose an API which, when queried, indicates what level of AI inference is available — similar to video decoding/encoding, where clients can check whether hardware acceleration is available.
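To make the analogy concrete, here's a toy sketch of what such a capability query might look like. Everything here is invented for illustration (`query_inference_capabilities`, the field names, the memory heuristic) — it mirrors the shape of existing capability APIs like the browser's `navigator.mediaCapabilities.decodingInfo()`, which reports whether a video config is supported and power-efficient, but no OS actually ships this today:

```python
# Hypothetical sketch of an OS-level "can I run inference here?" query.
# All names and numbers below are made up for illustration.
from dataclasses import dataclass

@dataclass
class InferenceCapabilities:
    supported: bool           # can this machine run local inference at all?
    accelerator: str          # "npu" or "cpu" in this toy model
    max_model_size_gb: float  # rough ceiling on model weights

def query_inference_capabilities(total_memory_gb: float, has_npu: bool) -> InferenceCapabilities:
    # Toy stand-in for an OS call; a real implementation would probe hardware.
    accelerator = "npu" if has_npu else "cpu"
    # Assume ~6 GB is eaten by the OS and apps, so an 8 GB machine
    # leaves only a couple of GB for model weights.
    max_model = max(0.0, total_memory_gb - 6.0)
    return InferenceCapabilities(
        supported=max_model > 0,
        accelerator=accelerator,
        max_model_size_gb=max_model,
    )

caps = query_inference_capabilities(total_memory_gb=8.0, has_npu=True)
print(caps)  # an 8 GB machine barely clears the bar
```

The point of such an API is the same as with media capabilities: apps could degrade gracefully (smaller model, cloud fallback) instead of guessing from the hardware model name.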
Even their high-end prosumer hardware could be interesting as an AI workstation, given the available VRAM, if the software support were better.
Idk, every business I've worked at and all the places my friends work seem to be 90% Apple hardware, with a few Lenovos issued for special-case roles in finance or something.