Cache conflicts happen between processes in the same way, for the same reasons, and just as often as they do between threads.
An essentially identical architecture that defined the same object within a single address space is, quite clearly, not going to be subject to that bug.
Well, then it would be the same architecture, because the one you described (one process with multiple threads) is also a single address space.
Still, I think you're wrong. The cache system has no awareness of threads or processes, so you get spurious evictions all the time: from completely unrelated programs, no matter what you do, or from unrelated data in the same program that just happens to map to the same cache line.
If you still disagree with me and can give me a specific example, I'd really appreciate it, because if I am misunderstanding something, it's a big deal and I want to double-check myself.
Now take the same architecture and spawn a hundred processes. Those Foo objects now live in different physical pages, and so writes to them from the different processes live happily in each core's L1 cache.
Obviously not all architectures work like this. If Foo is "really" shared, then nothing can help the contention. But usually it isn't; it's just that the code was written by someone who wasn't thinking about cache contention. That kind of performance bug is really easy to write, and a reasonable fix for it is "don't use multithreaded architectures when you don't need them".