If you look at the end-to-end problem of 'what is the minimum amount of data I need during this request?' vs. 'how much data do I fetch, and what is my total latency / number of round trips to the db doing so?', I think that for most ORM patterns using lazy loading the primary target is reducing round trips, and for most hand-rolled queries (or ORMs tweaked to do eager loading) the primary target is deduplicating the results.
My take is that a decent approximation is one query per relation you're fetching: if you load 10 entities A in a transaction and each has 20 entities B attached, ideally you want 2 queries, one for the 10 A's and one for the 200 B's. Lazy loading gives you 1 query for the A's plus 10 queries for the B's (the classic N+1 problem), while eager loading via a join duplicates each A's data 20 times in the result set (and that problem gets worse as your graph grows with more one-to-many relations).
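The two-query pattern can be sketched directly in SQL; here is a minimal illustration with a hypothetical schema (tables `a` and `b`, foreign key `a_id`), using a batched `IN` query for the children instead of one query per parent (lazy) or a duplicating join (eager):

```python
import sqlite3

# Hypothetical schema: parents A with children B via a_id.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE b (id INTEGER PRIMARY KEY, a_id INTEGER REFERENCES a(id), value TEXT);
    INSERT INTO a VALUES (1, 'first'), (2, 'second');
    INSERT INTO b VALUES (10, 1, 'x'), (11, 1, 'y'), (12, 2, 'z');
""")

# Query 1: fetch the A entities.
a_rows = conn.execute("SELECT id, name FROM a").fetchall()
a_ids = [row[0] for row in a_rows]

# Query 2: fetch ALL related B entities in one batched query,
# rather than N separate queries or a JOIN that repeats A's columns per row.
placeholders = ",".join("?" * len(a_ids))
b_rows = conn.execute(
    f"SELECT id, a_id, value FROM b WHERE a_id IN ({placeholders})", a_ids
).fetchall()

# Stitch the B's onto their parent A in memory.
children = {a_id: [] for a_id in a_ids}
for b_id, a_id, value in b_rows:
    children[a_id].append((b_id, value))

print(children)  # {1: [(10, 'x'), (11, 'y')], 2: [(12, 'z')]}
```

This is roughly what ORM "batch" or "select-in" eager loading strategies do under the hood: round trips stay constant in the number of relations, and no parent row is transferred more than once.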
Once the raw data transfer between database and backend becomes the limit, optimizing that protocol comes into play, but at least in the use cases I tend to have that's not usually the bottleneck. Besides, I'll typically serialize the fetched data to send it out over HTTP again, which has essentially the same challenges unless you're using protobuf or the like.