Perhaps that has to do with the fact that aligning LLM-based AI systems has become a quasi-predictable engineering problem, solvable via a "target, measure, iterate" cycle, rather than the deeply philosophical and moral task early AI alignment researchers expected it to be.