Your understanding of 'understanding a language' is obviously different from mine when you write that "the language model discussed above would presumably understand Alienese just as well as it would understand English" and "language models today seem to be quite good at understanding English".
Language models don't understand any natural language; they're very good at manipulating it (and us!) by continuing patterns at every scale, from letters (orthography) up to phrases and paragraphs, with seeming utility and correctness. In that regard, yes, the aforementioned model will likely have no difficulty producing novel outputs that appear just as useful and correct to Alienese speakers as its English outputs do to us.

However, this assumption should come with a disclaimer. Unless someone produces a reliable test of the utility and correctness of the same LM across a variety of natural and invented languages with divergent grammars (including, e.g., polysynthetic languages, which have a very different view of what constitutes a 'word'), without having to tweak any of the many finicky parameters of these models, we can't be sure the model won't produce garbage when trained on the next 'exotic' language. Who knows: English uses very few infixes, and a lot of its grammar takes place between fairly constant, fairly short words; a model with a set of parameters that works well for such languages may not be very good at languages whose words are built from many specific prefixes, infixes and suffixes, each as expressive as an entire phrase in English. Just like the current generation of text-to-image generators: pretty good at a lot of things, but then they screw up when asked to picture a cornfield.
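To make the word-segmentation worry concrete, here's a minimal toy sketch (not any real tokenizer, and the vocabulary is made up for illustration): a greedy longest-match subword splitter whose vocabulary was built from short English-like units. An English word splits into a few meaningful pieces, while a long polysynthetic-style word (the example is a commonly cited Inuktitut form) shatters into single characters the model has no handle on.

```python
# Toy illustration only: a greedy longest-match subword tokenizer.
# The vocabulary is hypothetical, standing in for units learned from English text.
def tokenize(word, vocab):
    """Split `word` greedily into the longest known subwords,
    falling back to single characters when nothing matches."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab or j == i + 1:
                tokens.append(word[i:j])
                i = j
                break
    return tokens

# Hypothetical vocabulary of common short English units.
vocab = {"the", "dog", "run", "s", "ing", "un", "believ", "able"}

print(tokenize("running", vocab))
# -> ['run', 'n', 'ing']  (a few recognizable pieces)

print(tokenize("tavvakiqutiqarpiit", vocab))
# -> eighteen single-character fragments; the word's internal
#    structure is invisible at this granularity
```

The point isn't that real subword tokenizers are this naive, only that segmentation choices tuned on one language's morphology needn't transfer to another's.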