Any large language model produces embedding representations at every layer, and these can be extracted trivially. In that sense, large language models are already embedding models.
This leaderboard doesn't compare these custom-tailored embedding models against the obvious baseline: mean pooling over the hidden states of any ordinary LLM, which is easy to set up with sentence-transformers.
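For concreteness, here's a minimal sketch of what that mean-pooling baseline computes: average the token hidden states, weighted by the attention mask so padding tokens are ignored. This is the same operation as sentence-transformers' `Pooling` module with `pooling_mode="mean"`; the NumPy version below just makes the arithmetic explicit (function name and shapes are illustrative, not from any particular library).

```python
import numpy as np

def mean_pool(hidden_states, attention_mask):
    """Masked mean pooling over the sequence dimension.

    hidden_states:  (batch, seq_len, dim) final-layer activations from an LLM
    attention_mask: (batch, seq_len) with 1 for real tokens, 0 for padding
    returns:        (batch, dim) one embedding vector per input sequence
    """
    mask = attention_mask[..., None].astype(hidden_states.dtype)
    summed = (hidden_states * mask).sum(axis=1)          # sum of real-token vectors
    counts = mask.sum(axis=1).clip(min=1e-9)             # avoid division by zero
    return summed / counts

# Toy example: batch of 1, three tokens, last one is padding.
hidden = np.array([[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pool(hidden, mask))  # averages only the first two token vectors
```

With sentence-transformers this baseline is roughly two lines: wrap the LLM with `models.Transformer(...)`, attach `models.Pooling(dim, pooling_mode="mean")`, and pass both to `SentenceTransformer(modules=[...])`.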