Anyway, as with other BOINC projects, they sent each work unit (simulation run) to at least two (or was it three?) different computers and compared the results to ensure correctness. And they found that quite a lot of work units came back with disagreeing results and had to be sent out to more computers for validation.
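For those unfamiliar with BOINC, the server-side validation is conceptually something like the sketch below (rough Python, not BOINC's actual validator code; the function name and quorum handling are made up for illustration):

    from collections import Counter

    def check_quorum(results, min_quorum=2):
        # Group identical results and accept one as canonical once
        # min_quorum hosts agree on it bit for bit.
        if not results:
            return None
        value, count = Counter(results).most_common(1)[0]
        return value if count >= min_quorum else None  # None -> reissue to another host

    # Two hosts agree -> result accepted as canonical.
    print(check_quorum(["0x1.91eb85p-1", "0x1.91eb85p-1"]))
    # Hosts disagree (say, Intel vs AMD) -> no quorum, send to yet another host.
    print(check_quorum(["0x1.91eb85p-1", "0x1.91eb86p-1"]))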
After some digging and eliminating factors like overclocked CPUs, they found that usually all the Intel machines would agree with each other and all the AMD machines would agree with each other, but Intel and AMD would disagree. Like, a run that would hit the detector wall after 30 revolutions on Intels could go on for many thousands of revolutions on AMDs.
Further digging traced the discrepancy to the lowest bits of the transcendental operations in the two vendors' FPUs[1]. After switching to a software library for these operations, at the cost of a few % in speed, they got Intels and AMDs to agree bit for bit.
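The root cause makes sense once you know that IEEE 754 only guarantees correct rounding for +, -, *, / and sqrt; hardware transcendental instructions like x87's FSIN are only accurate to within a couple of ULPs, and Intel and AMD implement them differently. A software library builds sin/cos/etc. out of the correctly-rounded basic ops, so a fixed sequence of them gives bit-identical results on any compliant CPU (IIRC the library SixTrack adopted was crlibm, but don't quote me on that). A toy illustration of the principle, not real library code (a real implementation does careful argument reduction and error analysis):

    import math

    def soft_sin(x):
        # Toy 7th-order Taylor polynomial for sin, only decent near 0.
        # Every step is an IEEE 754 basic op, so the result is
        # bit-identical on every compliant CPU, unlike hardware FSIN.
        x2 = x * x
        return x * (1.0 + x2 * (-1.0/6.0 + x2 * (1.0/120.0 - x2 / 5040.0)))

    print(soft_sin(0.1))   # same bits on every machine
    print(math.sin(0.1))   # goes to the platform libm/FPU, last bits may vary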
So yeah, when you do a large number of iterated operations like this, even a single LSB of difference can snowball into completely different trajectories.
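To put a number on it, here's a quick sketch (needs Python 3.9+ for math.nextafter) iterating the Chirikov standard map, a classic kicked-oscillator toy model, from two starting points that differ by exactly one ULP. In the chaotic regime the separation grows exponentially, so the last-bit difference reaches order 1 within a few dozen iterations:

    import math

    def kicked_step(x, p, K=5.0):
        # One iteration of the Chirikov standard map; K=5.0 is an
        # arbitrary choice that puts the orbit deep in the chaotic regime.
        p = p + K * math.sin(x)
        x = (x + p) % (2.0 * math.pi)
        return x, p

    x1, p1 = 1.0, 0.3
    x2, p2 = math.nextafter(1.0, 2.0), 0.3   # exactly one ULP away from x1

    for turn in range(1, 61):
        x1, p1 = kicked_step(x1, p1)
        x2, p2 = kicked_step(x2, p2)
        if turn % 10 == 0:
            print(f"turn {turn:2d}: |x1 - x2| = {abs(x1 - x2):.1e}")

And real tracking runs go on for orders of magnitude more turns than this toy loop, so even far smaller per-operation differences get amplified the same way.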
As an aside, LHC@Home was initially run almost like a hobby project by a few researchers connected to the LHC, without much official support. However, the data the project produced was AFAIK highly beneficial to the machine commissioning, and it later became a more official part of the High Luminosity upgrade.
[1]: https://cds.cern.ch/record/1463291/files/CERN-ATS-2012-159.p...