It's when you get to hundreds of repositories or tens of gigabytes of code that local tools can no longer keep up. They aren't designed for this use case, and they usually rely on the files being searched staying in the disk cache to make repeated searches fast.
It may be possible for GitHub to shell out to grep for a single-repository search (I have no idea how the back end works, but I doubt it's impossible). However, I suspect almost everyone wants and expects this to work across multiple repositories, or across all of GitHub.
Since that isn't feasible across everything, they don't offer it even for a single repository, so that search doesn't behave differently in different situations. That's a fair approach, in my opinion.
On GitHub, tens of thousands of searches will be happening at once on the same search cluster. If they were all literally doing a "git grep" (a linear scan of the associated repo data), the disk caches would thrash back and forth between queries, and nothing could be answered in less than 30 seconds.
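To put rough numbers on that (every figure below is an assumption for illustration, not anything I know about GitHub's actual hardware), here is a back-of-envelope sketch in Python:

    # Cost of answering every query with a cold linear scan.
    # All figures are illustrative assumptions, not GitHub's real numbers.

    repo_size_bytes = 500 * 1024**2      # assume ~500 MB of searchable data per repo
    disk_throughput = 200 * 1024**2      # assume ~200 MB/s sustained cold read
    concurrent_queries = 10_000          # "tens of thousands of searches at once"

    scan_seconds = repo_size_bytes / disk_throughput
    print(f"one cold linear scan, uncontended: ~{scan_seconds:.1f} s")

    # With thousands of queries fighting over the same disks and page cache,
    # even an optimistic even split of throughput per query gives:
    per_query_share = disk_throughput / concurrent_queries
    print(f"per-query disk share: ~{per_query_share / 1024**2:.3f} MB/s")
    print(f"scan time at that share: ~{repo_size_bytes / per_query_share / 60:.0f} minutes")

Even a single uncontended scan takes a couple of seconds; once the queries start competing for the same I/O, the numbers get absurd.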
The only way for GitHub to answer code searches at scale in a reasonable time is to have a pre-built index.
If the index were per repo, that would be a kind of partitioned index, and no DBMS I know of can handle a partitioned object with 58 million partitions. It makes much more sense to have a single unpartitioned index... which effectively implies "cross-repo search" (because once you have an unpartitioned index built over all your repos, it costs nothing to let someone search that entire index at once).
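As a toy sketch of why cross-repo search falls out for free (a token-level inverted index of my own invention, not whatever GitHub actually runs; all names here are hypothetical):

    from collections import defaultdict

    class GlobalCodeIndex:
        """Toy unpartitioned index: one posting list per token, spanning every repo."""

        def __init__(self):
            # token -> set of (repo, path) postings; every repo shares the same structure
            self.postings = defaultdict(set)

        def add_file(self, repo, path, text):
            for token in set(text.split()):
                self.postings[token].add((repo, path))

        def search(self, token, repo=None):
            hits = self.postings.get(token, set())
            # Cross-repo search is the natural operation; a single-repo search
            # is just the same lookup with a filter applied afterwards.
            if repo is not None:
                hits = {h for h in hits if h[0] == repo}
            return hits

    index = GlobalCodeIndex()
    index.add_file("torvalds/linux", "kernel/fork.c", "long do_fork(...)")
    index.add_file("git/git", "builtin/grep.c", "int cmd_grep(...)")

    print(index.search("do_fork(...)"))                   # hits from every repo
    print(index.search("do_fork(...)", repo="git/git"))   # same lookup, filtered to one repo

The same kind of post-lookup filter is how you'd hide projects a user isn't allowed to see.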
You could then offer code search across the whole index (minus any private projects you aren't a member of) as a paid offering.