What you say would be true if images/js/css were truly served from the same IP address (not hostname!). In reality, people use CDNs to deliver static assets like images/js/css, and only the API calls warm up the TCP connection to the actual data backend. Things like DNS load balancing would also break the warm-up, because the congestion control caches operate on IPs, not hostnames.
Additionally, it's really hard to benchmark this and claim one option is "faster". You always measure under a single set of networking conditions (latency, packet loss rate, bandwidth). So if a benchmark between two machines yields faster results using technology A, the same benchmark may return completely different results for different link parameters. Point being: optimizing for a single set of link parameters is unfeasible; you'd have to vary the link conditions and settle on some metric that defines "fast". An average across all parameter sets? Or a weighted average based on, say, the 98th percentile of your user base, etc.?
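A toy sketch of that point, with entirely made-up numbers: one "which is faster" verdict per link profile, weighted by what share of your user base actually sees that link. The same two technologies can rank differently per profile and differently again in the weighted aggregate.

```python
# Hypothetical measured page-load times (ms) for technologies A and B
# under three link profiles, keyed by (latency_ms, loss_pct).
results = {
    (10, 0.0):  {"A": 120,  "B": 140},  # low-latency, clean link: A wins
    (80, 0.5):  {"A": 450,  "B": 380},  # mobile-ish link: B wins
    (200, 2.0): {"A": 1100, "B": 700},  # lossy long-haul link: B wins big
}

# Assumed share of the user base on each link profile (sums to 1.0).
user_share = {(10, 0.0): 0.6, (80, 0.5): 0.3, (200, 2.0): 0.1}

def weighted_mean_ms(tech):
    """Load time averaged over link profiles, weighted by user share."""
    return sum(results[link][tech] * user_share[link] for link in results)

a, b = weighted_mean_ms("A"), weighted_mean_ms("B")
print(f"A: {a:.0f} ms, B: {b:.0f} ms")
```

With these numbers A wins on the majority link, but B wins on the weighted average, so the answer to "which is faster" depends entirely on how you aggregate.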
Regarding improving the benchmarks: it is really hard, since (a) Docker containers cannot really modify TCP stack settings on the Docker host, and (b) client and server would have to flush their TCP congestion control caches at the same time, and only after both have flushed can the next run be conducted.
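For reference, on a Linux host the cached per-destination state lives in the kernel's TCP metrics cache, which iproute2 can inspect and flush (root required, and it has to run on the host, not inside the container):

```shell
# Show what the kernel has cached per destination IP (cwnd, rtt, ...):
ip tcp_metrics show

# Flush the cached congestion-control state before the next benchmark run
# -- this must happen on BOTH the client and the server host:
ip tcp_metrics flush all

# Or disable saving metrics across connections entirely for the benchmark:
sysctl -w net.ipv4.tcp_no_metrics_save=1
```

The `tcp_no_metrics_save` sysctl sidesteps the synchronization problem in (b) entirely, at the cost of benchmarking only cold-start behavior.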
EDIT: Regarding serving static assets to warm up the connection: in that case, you'd have to include the page-load time for downloading those assets (including time to parse and execute JS) in your measurement and overall comparison. Switching the API protocol from REST to something else will probably not have that big an impact on total load time then. Put differently: if you spend 80% of your time downloading index.html, CSS, some JavaScript etc., and querying your API accounts for only 20% of the time, you can only optimize within that 20%. Even if you cut load times for the API calls in half, the overall speedup for the whole page load would be just 10%.
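That last sentence is just Amdahl's-law arithmetic, which generalizes to any API share and any API speedup factor:

```python
def overall_saving(api_share, api_speedup):
    """Fraction of total page-load time saved when only the API part
    gets faster. api_share: API fraction of total time (0..1);
    api_speedup: factor by which API calls get faster (2.0 = halved)."""
    new_total = (1 - api_share) + api_share / api_speedup
    return 1 - new_total  # saved fraction of the original total

# The 80/20 example from the comment: halving the 20% API part.
print(f"{overall_saving(0.20, 2.0):.0%}")  # -> 10%
```

Even an infinitely fast API (`api_speedup -> inf`) caps out at the API's share of total time, 20% here.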