I very much doubt this will be the case, and even if it were, it's unlikely to be effective. We know that people who have to check something that's right 99% of the time and wrong 1% of the time zone out and overlook issues. It's a big part of why self-driving cars can't be a 99% affair, and why people die when self-driving cars make bad decisions even though a driver is behind the wheel.
I think where profit can be extracted, companies will rely on good ol' lock-in, or hope the cost of switching is high enough to prevent a mass exodus from their platform. Everyone has a supercomputer in their pocket, but instead of improving typing we started adding "Sent from iPhone" as a way to excuse ourselves from having to proofread anything. I can't count how many times I've gotten mass emails with template variables that weren't interpolated, or emails that reference discussions that never happened. Ostensibly, a human was there to review all of this, but they shirked that responsibility because ultimately they could let it waste someone else's time. I see AI turbo-charging that.
I think we're being disingenuous with all of these automated tools in assuming an attentive, caring human will check all that work. It'll be more profitable, whether in terms of a company's capital or an individual's time, to wash our hands and go "whelp, that's AI for ya" when things go wrong.