Try reading the thread before mindlessly replying.
"I fully expected them to announce a stupendous custom AI processor that would do state of the art LLMs entirely local."
A state-of-the-art LLM means GPT-4 or equivalent: a trillion-plus parameters. You won't run that locally on an iPhone any time soon.
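A rough back-of-envelope sketch makes the point. The numbers here are assumptions, not from the thread: roughly 1 trillion parameters, aggressive 4-bit quantization, and about 8 GB of RAM on a current flagship iPhone.

```python
# Back-of-envelope: can a ~1T-parameter model fit in iPhone RAM?
# All figures below are illustrative assumptions.

params = 1_000_000_000_000      # ~1 trillion parameters (assumed)
bytes_per_param = 0.5           # 4-bit quantization (assumed)
iphone_ram_gb = 8               # roughly a flagship iPhone (assumed)

model_gb = params * bytes_per_param / 1e9   # weights alone, in GB
shortfall = model_gb / iphone_ram_gb

print(f"Weights alone: ~{model_gb:.0f} GB vs ~{iphone_ram_gb} GB of RAM")
print(f"That's roughly {shortfall:.0f}x more than the phone has")
```

Even with heavy quantization, the weights alone would need on the order of hundreds of gigabytes, two orders of magnitude beyond what the device can hold, before counting activations or the KV cache.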