We recently had three servers each lose two drives within hours of each other, with about two weeks between each server's failures. These were 3 out of 4 servers that had been configured at the same time, with drives from the same delivery - clearly something had gone wrong.
Usually we try to mix drive types, but we didn't have enough suitable drives when we had to bring these up. Thankfully we have everything replicated multiple times, and we specifically avoided replicating anything from one of the new servers to another.
When we brought them back online we got a chance to juggle drives around, so now they're nicely mixed in case we get more failures.
For my private setup, I've gone with a mirror + rsync to a separate drive with an entirely different filesystem + Crashplan. Setups like that seem paranoid until you suffer a catastrophic loss or near loss...
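Roughly, the rsync leg of that looks something like the sketch below - the paths are made up for illustration, the mirror and Crashplan run separately, and the only point is copying onto a separate drive with a different filesystem:

```python
#!/usr/bin/env python3
"""Minimal sketch of the rsync step, assuming hypothetical mount points."""
import subprocess

SOURCE = "/data/"             # hypothetical: the mirrored pool
TARGET = "/mnt/backup-ext4/"  # hypothetical: separate drive, different filesystem

def mirror():
    # --archive preserves permissions and timestamps; --delete keeps the
    # target an exact mirror by removing files no longer in the source
    subprocess.run(
        ["rsync", "--archive", "--delete", SOURCE, TARGET],
        check=True,
    )

if __name__ == "__main__":
    mirror()
```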
My first big scare like that was a decade or so ago, when we had a 10 or 12 drive array of IBM Deskstars (the infamous "Deathstars") that started failing, one drive after another, about a week apart, and the array only just managed to rebuild each time... It was especially bad because the rebuilds slowed the array down so much that we couldn't complete a daily backup while also running our service, and taking downtime was unacceptable. So our backups lagged further and further behind while we waited for the next drive failure... Those were some tense weeks.