When you use Large Pages and you run out of contiguous 2MB chunks, what do you do then?
Unlike Linux, Windows actually guarantees memory to anything that successfully requested it. Windows never "takes back" committed memory and kills processes at random (contrast Linux's OOM killer). But this guarantee has its own cost on Windows: when commit runs out, new allocation requests fail, and important services that can't handle a failed allocation will crash instead.
So Large Pages naturally run out the longer a system runs. They are a limited resource: how often do you find a free, contiguous, aligned 2MB block of physical memory when most programs request memory in 4kB pages? And the longer a system runs, the more fragmented physical memory gets and the fewer free 2MB blocks exist.
I guess Linux handles the issue by keeping the normal page allocator and the reserved hugepage pools (2MB hugepages, 1GB "gigantic" pages) all separate. So you can run out of normal pages while having lots of hugepage space remaining, or vice versa. But this has the disadvantage of being wasteful (ex: a 1GB hugepage pool permanently eats up 1GB that the normal allocator can't ever use).
------------
Ultimately, applications aren't supposed to use the OS-level memory allocator as if it were malloc / free. When fragmentation hits you inside malloc / free, you only mess up your own process's memory.
But if fragmentation hits at the OS level, in physical memory, you're basically screwing the entire system.