Is it? The microarchitectures of the big MPUs are essentially RISC -- as they always have been -- but microcode isn't written 100% from scratch any more.
At the compiler level it's kind of true, and for what turned out to be generally good reasons: compilers aren't as smart as Radin & co. thought they'd be, and concomitant ideas like delay slots turned out to be incompatible with advances in memory architecture. So in that regard I'd say RISC isn't technically superior at all, which was a surprise to me and many other people.
However, CISC evolved too. The original CISC architectures that RISC was a reaction to had lots of features for programmers (think of the VAX string-processing or function-call instructions!). Nobody writes code like that any more (except for MPUs); those residual instructions are in fact much slower than compiled code because Intel et al. won't pay anyone to optimize them. Instead the focus (apart from vector extensions and some housekeeping) has been on adding instructions that communicate the programmer's overall intent better to the CPU's instruction decoder and scheduler -- and that are structured in a way that's easier for compilers to emit than for humans to write.
So which won?