LLVM is geared towards providing a single toolchain for frontends to target, but it is essentially built for von Neumann, serial-ish, homogeneous machines. You can get an overview of its philosophy in Lattner's original 2004 paper [1].
MLIR (Lattner et al. 2021 [2]) aims to generalize the compiler toolchain to support different compute paradigms and heterogeneous targets (polyhedral compilation, parallel kernels, affine loops, and so on) by defining a declarative framework of IR _dialects_ and transforms between them.
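To make the dialect idea concrete, here's a rough sketch (hand-written, so take the exact syntax with a grain of salt) of a loop expressed in MLIR's affine dialect, where polyhedral-style passes can analyse and transform it before it is progressively lowered through other dialects down to LLVM IR:

```mlir
// Hypothetical example: scale a buffer in place. The affine.for /
// affine.load / affine.store ops expose the loop structure to
// polyhedral analyses; later lowering passes rewrite them into
// scf/cf dialect ops and eventually LLVM IR.
func.func @scale(%A: memref<128xf32>, %c: f32) {
  affine.for %i = 0 to 128 {
    %v = affine.load %A[%i] : memref<128xf32>
    %r = arith.mulf %v, %c : f32
    affine.store %r, %A[%i] : memref<128xf32>
  }
  return
}
```

The point is that "the IR" isn't one fixed instruction set: the same program coexists at several abstraction levels, and each dialect carries exactly the structure its transforms need.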
I'm writing my MSc thesis on using C/C++ itself as the IR, on the assumption that it has enough compiler support across targets to serve as a platform for optimisation transforms via a source-to-source compiler. MLIR seems like the better first-principles approach, though it may suffer from the impracticality of adopting it _right now_.
[1] "LLVM: A Compilation Framework for Lifelong Program Analysis & Transformation", Lattner and Adve. https://llvm.org/pubs/2004-01-30-CGO-LLVM.pdf
[2] “MLIR: Scaling Compiler Infrastructure for Domain Specific Computation”, Lattner et al. https://ieeexplore.ieee.org/document/9370308