You are right, my bad. I take that back.
A bit more detail from Pierre-Marie Pédrot:
> Lean does not escape this limitation, as it defines the equality type inductively as in Coq. Rather, in a mastermind PR campaign, they successfully convinced non-experts of type theory that they could give them quotient types without breaking everything around.
> The first thing they do is to axiomatize quotient types, which is somewhat fine and can be done the same in Coq. As with any use of axioms, this breaks canonicity, meaning your computations may get stuck on some opaque term. (While slightly cumbersome, axiomatizing quotients already solves 99.9% of the problems of the working mathematician.)
> Now, here is where they do evil things that would make decent type theorists shake in terror: they add a definitional reduction rule over the quotient induction principle. We could do the same in the Coq kernel, but there is a very good reason not to: this definitional rule breaks subject reduction, a property deemed more important than canonicity.
> In Lean, you can write a term t : A where t reduces to u, but u doesn't have type A (or, equivalently, may not typecheck at all). This is BAD for a flurry of reasons, mark my words. Reinstating subject reduction while keeping quotients requires an undecidable type-checking algorithm, which is another can of worms you don't want to wander into. The same kind of trouble arises from similar Lean extensions like choice.
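To make the contrast concrete, here is a hedged Lean 4 sketch of the two regimes the quote describes. The names `q`, `qmk`, `qsound`, and `qlift` are hypothetical stand-ins for an axiomatized quotient interface; `Quot.mk`, `Quot.lift`, and `Classical.choice` are Lean's actual primitives.

```lean
universe u v

-- (1) Quotients as pure axioms, as one could do in Coq. The names
-- q/qmk/qsound/qlift are hypothetical, for illustration only.
axiom q {α : Sort u} (r : α → α → Prop) : Sort u
axiom qmk {α : Sort u} (r : α → α → Prop) : α → q r
axiom qsound {α : Sort u} {r : α → α → Prop} {a b : α} :
    r a b → qmk r a = qmk r b
axiom qlift {α : Sort u} {r : α → α → Prop} {β : Sort v}
    (f : α → β) : (∀ a b, r a b → f a = f b) → q r → β

-- With axioms only, `qlift f h (qmk r a)` is a stuck term: it is
-- propositionally equal to `f a` but does not *reduce* to it, so
-- canonicity fails while subject reduction is untouched.

-- (2) Lean's built-in quotients add the definitional rule
--     Quot.lift f h (Quot.mk r a) ≡ f a
-- so the analogous computation goes through by `rfl`:
def sameParity (a b : Nat) : Prop := a % 2 = b % 2

example : Quot.lift (· % 2) (fun _ _ h => h) (Quot.mk sameParity 5) = 1 :=
  rfl

-- (3) Classical.choice is an axiom, so terms using it are stuck just
-- as in regime (1): this definition typechecks but never computes
-- down to a numeral.
noncomputable def someNat : Nat := Classical.choice ⟨0⟩
```

This is only a sketch of the trade-off, not the counterexample to subject reduction itself; constructing an actual term t : A that reduces to an ill-typed u takes more machinery than fits here.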