Hallucinations manifest in different ways; using nonexistent APIs is just one of them. The LLM could just as well hallucinate code that doesn't fix a problem, or hallucinate that a problem exists in the first place, all while using real APIs. This might not be a major issue for tasks like programming, where humans can verify the output relatively easily, but in scientific fields verification can be far more labor-intensive, sometimes practically infeasible, as a recent example[1] shows. So hallucination is a problem that covers any fabricated output that isn't grounded in reality.
Which isn't to say it's universally a problem. In some applications, such as image, video, or audio generation, especially in the entertainment industry, hallucinations can be desirable: they're partly what we identify as "creativity", and the results can be fun and interesting. But in applications where facts and reality matter, they're a big problem.
[1]: https://news.ycombinator.com/item?id=44174965