Because that's crazy to me. Humans can be reasoned with; AI can't. My experience with the likes of ChatGPT tells me that when the AI is wrong about something (which it very often is), there's no point trying to explain that it's wrong or how it's wrong. It will say something like "You're right, sorry for the confusion" and then follow up with the same or a similar error.
AI might eventually become an alright first line of support, but losing the option to speak with a human, an intelligent entity that can actually be reasoned with, seems dystopian.