Or is this task still considered a hard one?
LLMs seem to understand text much better than any previous technology: anaphora resolution, complex tenses, part-of-speech ambiguity, rare constructions, and even cross-language boundaries no longer appear to be hard problems for them.
There are so many research papers published on LLMs and transformers now, covering all kinds of applications, but they're still not quite there.