Depends on the workload.
Normally you would go read() -> write() so:
1. Disk -> page cache (DMA)
2. Kernel -> user copy (read)
3. User -> kernel copy (write)
4. Kernel -> NIC (DMA)
sendfile():
1. Disk -> page cache (DMA)
2. Kernel -> NIC (DMA)
No user-space copies; the kernel wires those page-cache pages straight to the socket.
So basically, it eliminates one to two memory copies (two if the NIC supports scatter-gather DMA; without it, one kernel-internal copy from the page cache into the socket buffer remains), along with the associated cache pollution and memory bandwidth overhead. If you are running high-QPS services where syscall and copy overheads dominate, e.g. CDNs or static file serving, the gains can be big. Based on my observations this can mean double-digit reductions in CPU usage and up to ~2x higher throughput.