On GitHub, tens of thousands of searches will be happening at once on the same search cluster. If they were all literally doing a "git grep" (a linear scan of the associated repo data), the disk caches would thrash back and forth between queries, and nothing could be answered in less than 30 seconds.
The only way for GitHub to respond to code searches at scale in a reasonable time is to have a pre-built index.
If the index were per repo, that'd be a kind of partitioned index, and there's no DBMS that I know of that can handle a partitioned object with 58 million partitions. It makes much more sense to have an unpartitioned index... which effectively implies "cross-repo search" (because, once you have an unpartitioned index built over all your repos, it costs nothing to let someone search that entire index at once).
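A toy sketch of that idea: one shared posting structure covering every repo, where scoping a query to a single repo is just a post-filter on the same index. This is purely illustrative (simple whitespace tokens, a hypothetical `SharedCodeIndex` class); real code-search engines use far more sophisticated trigram/n-gram postings.

```python
from collections import defaultdict

class SharedCodeIndex:
    """Toy unpartitioned index: one postings map shared across all repos."""

    def __init__(self):
        # token -> set of (repo, path) postings
        self.postings = defaultdict(set)

    def add_file(self, repo, path, text):
        # Naive whitespace tokenization, for illustration only
        for token in set(text.split()):
            self.postings[token].add((repo, path))

    def search(self, token, repo=None):
        hits = self.postings.get(token, set())
        # Cross-repo search is the natural default; restricting to one
        # repo is just a filter over the same shared postings list.
        if repo is not None:
            hits = {h for h in hits if h[0] == repo}
        return sorted(hits)

idx = SharedCodeIndex()
idx.add_file("alice/app", "main.py", "import requests")
idx.add_file("bob/lib", "http.py", "import requests")
print(idx.search("requests"))                  # hits from both repos
print(idx.search("requests", repo="bob/lib"))  # scoped to one repo
```

The point of the sketch is the asymmetry: a per-repo (partitioned) design needs 58 million separate structures, while the unpartitioned design gets repo-scoped search for free as a filter, and whole-corpus search at no extra cost.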