Anecdotally, most desktops I used lived much longer than most servers I worked with. I know a Compaq desktop that worked as a server for 11 years. Still does. One hardware failure in that entire time. (Power supply.) Could be a result of lower workload, but still...
Putting all of this aside though, the real secret to the superiority of hardware meant for the datacenter often comes down to the practice of binning. Manufacturers have different tolerances for the products they produce; that's why Intel has a million models of CPUs, Seagate produces so many different hard drives, and Samsung sells DRAM chips to other vendors while also making its own DIMMs.
The best-performing parts are binned into the server-y bins; the rest move down the list until they fit another bin. No manufacturer wants to discard parts if they can possibly avoid it.
Sometimes you're actually getting a deal, but a lot of the time you're just trading reliability for cost when you use lower binned items like desktop hard drives in a server environment. Sometimes that trade-off is worth it though.
ECC RAM and redundant PSUs alone should be unquestioned advantages over standard desktops.
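On Linux you can actually see whether ECC is earning its keep by reading the EDAC sysfs counters; a minimal sketch, assuming an EDAC driver is loaded for your memory controller (the `mc*` paths won't exist otherwise, and the loop just prints nothing):

```shell
# Print corrected (ce) and uncorrected (ue) ECC error counts per memory
# controller via the Linux EDAC subsystem. A nonzero ce_count means ECC
# has silently fixed bit flips that non-ECC RAM would have passed through.
for mc in /sys/devices/system/edac/mc/mc*; do
    [ -d "$mc" ] || continue   # skip if no EDAC controllers are present
    echo "$(basename "$mc"): corrected=$(cat "$mc/ce_count") uncorrected=$(cat "$mc/ue_count")"
done
```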
Not to mention hardware raid controllers, IPMIs, designs that allow fans and other parts to be serviced without downtime... :P