Language models obviously exhibit some form of intelligence right now. GPT-4 can take SAT tests, play chess, write poetry, predict what will happen in different social scenarios, answer theory-of-mind questions, ask questions of its own, solve programming puzzles, and so on. On some measures GPTs are clearly below human level, on some they are far beyond it, and on some they fall within the human range. The question of whether language models have any form of intelligence has been answered by existence proof: yes, they can and do.
What definition or description of intelligence are you using such that you doubt language models could have it? Would you have held that same definition in 2010?