The article is about uncertainty at the level of the next token rather than of the entire response, which is different: "The capital of Germany is" followed by "Berlin" is correct, but it would also have been valid for the full answer to continue ", since reunification in 1990, Berlin; before this…" — correct at the conceptual level, yet uncertain at the token level.
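A toy sketch of that point, using made-up probabilities rather than any real model's output: two continuations that both lead to the same correct answer can split the next-token probability mass, so token-level entropy can be high even when conceptual uncertainty is essentially zero.

```python
import math

# Hypothetical next-token distribution after "The capital of Germany is"
# (illustrative numbers only, not taken from any real model).
next_token_probs = {
    "Berlin": 0.55,   # direct answer
    ",": 0.40,        # leads to ", since reunification in 1990, Berlin..."
    "located": 0.05,  # some other phrasing
}

# Token-level entropy is clearly nonzero...
entropy = -sum(p * math.log2(p) for p in next_token_probs.values())
print(f"token-level entropy: {entropy:.2f} bits")

# ...yet the two most likely continuations both end up naming Berlin,
# so uncertainty about the *answer* is nearly zero.
p_answer_is_berlin = next_token_probs["Berlin"] + next_token_probs[","]
print(f"P(answer names Berlin): {p_answer_is_berlin:.2f}")
```

The gap between the two printed numbers is exactly the mismatch the comment describes: entropy over tokens is not the same thing as uncertainty over answers.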
Most users aren't aware of the maths and use these words in a more everyday manner, to the annoyance of those of us who care about the precise technical definitions.
The listed types of uncertainty can and do have different uses in different cases.
Especially the difference between "I don't know the answer" and "I do know absolutely that the answer is that nobody knows".
For a chatbot it's also important to say "I don't understand your question" when appropriate, rather than just "dunno" in response to e.g. "how do I flopragate my lycanthrope?"
The article is talking about inference, but most models people actually use have gone through RLHF or DPO, so the uncertainty at inference mixes all of these dimensions together. A single token choice can effectively be a branch at the conceptual level.