I assume they have a moderate bet on on-device SLMs in addition to other ML models, but not much planned for LLMs, which at that scale might be good as generalists but very poor at guaranteeing success for each of the specific, minute tasks you want done.
In short: 8 GB holding tens of very small, fast, purpose-specific models is much better than a single 8 GB LLM trying to do everything.
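To make the trade-off concrete, here's a back-of-envelope sketch of the memory math. The parameter counts, model counts, and int8 quantization are my own illustrative assumptions, not anything Apple (or anyone else) has published:

```python
# Illustrative memory arithmetic: a fleet of small task-specific models
# vs. one generalist LLM, in the same on-device memory budget.

BYTES_PER_PARAM = 1  # assume int8 quantization (1 byte per weight)

def model_size_gb(params_billions, bytes_per_param=BYTES_PER_PARAM):
    """Approximate in-memory size of a quantized model in GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# One generalist 8B-parameter LLM at int8 -> ~8 GB (hypothetical size)
llm_gb = model_size_gb(8)

# Twenty task-specific 300M-parameter SLMs at int8 -> ~6 GB total,
# leaving headroom while covering many narrow tasks (hypothetical fleet)
slm_fleet_gb = 20 * model_size_gb(0.3)

print(f"single LLM: {llm_gb:.1f} GB, fleet of 20 SLMs: {slm_fleet_gb:.1f} GB")
```

The point isn't the exact numbers, just that at 1 byte per weight a model's footprint tracks its parameter count, so a budget that fits one big generalist can instead fit dozens of specialists.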