As the data set grows, the average frequency of the non-trivial frequent patterns goes down and the tail gets much longer. So with a 1% cutoff, the fraction of the data set that clears the cutoff shrinks as the data set grows. Take a look at Pareto distributions with high alpha to understand the statistics behind this.
Of course, this only holds if the new data is distinct from the old data. If you just copied your data set 10x and pretended it was a 10x larger data set, it would behave the way you expect.
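A minimal sketch of the effect, under one concrete set of assumptions (not from the comment itself): patterns follow a Zipf-like distribution with exponent 1, and the vocabulary of distinct patterns grows with the data set size, so new data really is distinct from old data. The 1% cutoff and the specific growth model are illustrative choices.

```python
import numpy as np

def coverage_above_cutoff(n, cutoff=0.01, seed=0):
    """Fraction of n samples belonging to patterns whose empirical
    frequency is at least `cutoff`, when the pattern vocabulary
    grows with n (so larger data sets have more distinct patterns)."""
    rng = np.random.default_rng(seed)
    ranks = np.arange(1, n + 1)        # vocabulary size scales with n
    weights = 1.0 / ranks              # Zipf-like weights, exponent 1
    probs = weights / weights.sum()
    samples = rng.choice(ranks, size=n, p=probs)
    _, counts = np.unique(samples, return_counts=True)
    frequent = counts[counts / n >= cutoff]
    return frequent.sum() / n

# Coverage at a fixed 1% cutoff drops as the data set grows,
# because the normalizing mass spreads over a longer tail.
print(f"n=1e3: {coverage_above_cutoff(1_000):.2f}")
print(f"n=1e6: {coverage_above_cutoff(1_000_000):.2f}")
```

If instead you sample from a fixed distribution (the "copied 10x" case), the coverage at the cutoff converges to a constant rather than shrinking.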