Here are some definitions of intelligence, for example:
> The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment.
> "...the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills".
> Goal-directed adaptive behavior.
> a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation
But even a housefly possesses levels of intelligence regarding flight and spatial awareness that dominate any LLM. Would it be fair to say a fly is more intelligent than an LLM? It certainly is along a narrow set of axes.
> Because the only brute-forced aspect of LLM intelligence is its creation.
I would consider statistical reasoning systems that can simulate aspects of human thought to be a form of brute force. Not quite an exhaustive search, but massively compressed experience plus pattern matching.
But regardless, even if both forms of intelligence arrived via some form of brute force, what matters more to me is the result: what the process of employing that intelligence looks like.
> This very post, with the transcript available is an example of how untrue it is.
The transcript lacks the vector embeddings of the model's reasoning. It's literally just a summary from the model - not even that, really.
> Do you realize how much compute it would take to run a full simulation of the human brain on a computer ? The most powerful super computer on the planet could not run this in real time.
You're so close to getting it lol