This is a philosophical question more than a technical one. We already have LLMs whose output appears intelligent and is close enough to be useful in many contexts where "intelligent" output would help, so the technical question is largely settled. An essay called The Bitter Lesson[0] argues that, with enough computation and scale, general learning methods can capture every kind of context.
The philosophical question hinges on whether transformer-style behavior actually is intelligence. Some suggest the human brain itself operates this way, and that belief underlies much of the vocabulary and framing of machine learning. Others hold, philosophically, that intelligence is more than the sum of these nodes and signals.
Will medical science ever be able to answer the question? Can humans escape the observer paradox far enough to discover how their own intelligence operates, when any such discovery would itself be a product of that intelligence?
Personally, I think intelligence is more than node firings but less than mystical, spiritual soup. While we don't know how it really works, we can push LLMs close enough to "intelligence" to serve many useful purposes.
[0] http://www.incompleteideas.net/IncIdeas/BitterLesson.html