Could a possible solution there be to use the same language-detection platforms used for detecting terrorist activity to also flag possible grooming for human moderator review? Or might that be too subjective for current language models, leading to many false positives?
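For what it’s worth, here is a minimal sketch of the human-in-the-loop version of that idea: a classifier score gates whether a message enters a moderator review queue, so a false positive costs review time rather than a wrongful ban. Everything in it (`score_grooming_risk`, the keyword placeholder, the threshold value) is a hypothetical stand-in for a real fine-tuned model:

```python
# Minimal sketch of threshold-gated flagging for human review.
# `score_grooming_risk` is a hypothetical stand-in for a real fine-tuned
# text classifier; the keyword heuristic exists only so the example runs.
from dataclasses import dataclass

@dataclass
class Message:
    user_id: str
    text: str

def score_grooming_risk(msg: Message) -> float:
    """Placeholder scorer returning a risk value in [0, 1]."""
    suspicious = ("our secret", "don't tell your parents")
    hits = sum(phrase in msg.text.lower() for phrase in suspicious)
    return min(1.0, hits / len(suspicious))

# A high threshold trades recall for fewer false positives, since every
# flag costs a human moderator's time rather than auto-actioning the user.
REVIEW_THRESHOLD = 0.5

def route(msg: Message, review_queue: list[Message]) -> None:
    """Queue the message for human review if the score clears the bar."""
    if score_grooming_risk(msg) >= REVIEW_THRESHOLD:
        review_queue.append(msg)

queue: list[Message] = []
route(Message("u1", "Remember, this is our secret."), queue)
print(len(queue))  # 1: flagged for a moderator, not auto-banned
```

The key design point is that the model never acts on its own: tuning the threshold only shifts how much moderator time is spent on false positives, which is what makes a subjective, noisy classifier tolerable here.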
This is far too pat a dismissal of something that happens regularly. You can argue that it’s not frequent enough to justify this action, or that it would happen anyway through other means, but it’s a real problem that isn’t so freakishly rare we can dismiss it.