We may be playing semantics games here?
Mechanisms that optimize for increased engagement via dynamic suggestions for a user's feed or ~related content are not moderation (unless, perhaps, the algorithmic petting zoo is the only way to use the service).
This is exactly why I'm drawing a distinction.
Many of a platform's legal and civil liabilities for user-submitted content are poorly correlated with how many people see it and whether it is promoted by The Algorithm (though promotion probably correlates with the chance it gets noticed). This is ~compliance work.
Their reputational liabilities are somewhat more correlated with whether anyone is actually encountering the content (and are more about how the content affects people than about its legality). This is ~PR work.