Maybe they’ll manage to get LLMs running well locally with the new low-bit developments? Not my area. But for training/research it seems like Apple is DOA. They have the same problem as AMD: almost no one is doing research on their hardware or software stack.
Intentionally shipping low RAM/unified memory quantities seems short-sighted too. Maybe with a 16GB baseline they could do something special with local LLMs.
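For a rough sense of why the baseline matters: weight memory scales with parameter count times bits per weight. A quick back-of-envelope sketch (model sizes illustrative, ignoring KV cache and activation overhead):

```python
# Back-of-envelope weight memory for local LLM inference at various bit widths.
# Ignores KV cache, activations, and OS overhead -- real usage is higher.
def weight_gb(params_billion: float, bits: int) -> float:
    """Approximate weight memory in GB: params * bits-per-weight / 8."""
    return params_billion * 1e9 * bits / 8 / 1e9

for params in (7, 13, 70):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit: ~{weight_gb(params, bits):.1f} GB")
```

At 4-bit, a 7B model needs roughly 3.5 GB and a 13B about 6.5 GB of weights, which leaves headroom on a 16GB machine; a 70B at 4-bit (~35 GB) would not fit.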