The theory I heard worked in the other direction: if we assume scammers have a finite amount of time, it could be in their interest to minimize the number of "likely good targets" in order to increase the proportion of "very likely good targets". So all those untapped potential targets are just too similar to non-good targets for scammers to discriminate effectively, which is why they have so far focused on the lower-hanging fruit.
I mean, with Google Translate, spellcheckers, etc., improving all the time, at least some of those messages should have been improving as well, no? If their grammar has not improved at all over the last decade, then there might be a kernel of truth to the theory.