I think it would depend, wouldn't it? If the query can be answered directly from an index (i.e., some index contains all of the columns the query needs), then an index-only scan suffices, and it's faster because the index is smaller than the table (it omits all the other columns), so there's less data to read. An index used this way is usually called a "covering index", and I believe most modern DB query optimizers will take advantage of one when it exists.
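This is easy to see with SQLite's `EXPLAIN QUERY PLAN` — a quick sketch (table, column, and index names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT, bio TEXT);
    CREATE INDEX idx_name_email ON users(name, email);
""")

# The query touches only columns present in idx_name_email, so SQLite
# answers it from the index alone and never visits the table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT email FROM users WHERE name = 'alice'"
).fetchall()
for row in plan:
    print(row[-1])  # the plan says "... USING COVERING INDEX idx_name_email ..."
```

If you instead `SELECT bio`, the word "COVERING" disappears from the plan: the index still narrows the search, but each match now costs an extra lookup into the main table.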
If there isn't such an index, then it's a toss-up: going back to the main table to fetch each matching row has a cost, but if the query matches only a few rows, the index scan is still worth it. If it matches many rows, that per-row indirection will probably outweigh the benefit of the index scan, and we'd be better off with a table scan. Making that choice requires the optimizer to estimate how many rows the query will match. Modern optimizers do keep statistics (row counts, histograms) for exactly this purpose, but my guess is those statistics are of limited help for a generalized LIKE expression: a prefix pattern like 'abc%' maps to a key range the histograms can estimate, while a leading-wildcard pattern like '%abc%' gives the optimizer nothing to look up, so as far as I know it falls back on a hard-coded default selectivity.
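The leading-wildcard case is also visible in SQLite's plans — a sketch with hypothetical names, where the pattern defeats the index entirely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT, email TEXT);
    CREATE INDEX idx_name ON t(name);
""")

# SELECT email isn't covered by idx_name, and the leading wildcard
# means the index can't bound the matching range either, so SQLite
# plans a full table scan instead of an index search.
detail = conn.execute(
    "EXPLAIN QUERY PLAN SELECT email FROM t WHERE name LIKE '%smith%'"
).fetchone()[-1]
print(detail)  # a SCAN of t, not a SEARCH via idx_name
```

Swap the pattern for an indexable one and the plan flips to a SEARCH on the index, which is exactly the few-rows-vs-many-rows trade-off the estimator is supposed to adjudicate.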