I've noticed this in my own work with financial data. I used to manually sanity-check numbers from SEC filings and catch weird stuff all the time. Started leaning on LLMs to parse them faster and realized after a few weeks I was just... accepting whatever came back without thinking about it. Had to consciously force myself to go back to spot-checking.
The "System 3" framing is interesting but I think what's really happening is more like cognitive autopilot. We're not gaining a new reasoning system, we're just offloading the old ones and not noticing.