I've never seen a stats textbook or course discuss techniques for handling large amounts of data in any depth, but in data science that's a core part of what you do.
I ran production systems before DevOps and after. Again, it's very different - prior to DevOps, there was no emphasis at all on using software engineering techniques to manage and deploy software. At most you'd have some scripts, maybe kept in source control if you were lucky.
Now I run an AI company, and a key part of the ML we use involves generating structured text files from images. I suppose "predictive statistics" is technically a correct label, but the tools and techniques are so dramatically different that thinking of them as separate fields is more correct than incorrect.