I still find that GPT and the other models I use struggle with a sort of tunnel vision.
Does that mean it uses the data it has to the maximum possible level to produce new reasoning (adding to what lesser algorithms could produce)? IOW, are we still in a regime where, given a data set, an A.I. can produce up to N reasoning capabilities and no more than that? IOW, is reasoning bound by knowledge? And if so, could we just start from a data/knowledge set, add some randomness and self-play, and wait until some form of reasoning emerges?
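To make that last idea concrete, here is a toy sketch of my own (nothing from the thread, and nothing like GPT scale): the only "knowledge" is the rules of a tiny subtraction game, and a tabular policy improves purely through injected randomness plus self-play. The variable names, epsilon, and learning rate are all arbitrary choices for illustration.

```python
import random
from collections import defaultdict

# Toy game: start with N stones, each turn remove 1-3; whoever takes the last stone wins.
N_STONES = 10
ACTIONS = (1, 2, 3)
value = defaultdict(float)   # value[(stones_left, action)] -> estimated win rate
EPSILON = 0.2                # the "randomness" injected into self-play
ALPHA = 0.1                  # learning rate

def choose(stones):
    """Epsilon-greedy move selection from the current value table."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: value[(stones, a)])

def self_play_episode():
    """Both sides share the same policy; each move is updated toward its outcome."""
    stones, moves = N_STONES, []          # moves[i] is made by player i % 2
    while stones > 0:
        a = choose(stones)
        moves.append((stones, a))
        stones -= a
    winner = (len(moves) - 1) % 2         # the player who took the last stone
    for i, (s, a) in enumerate(moves):
        reward = 1.0 if i % 2 == winner else 0.0
        value[(s, a)] += ALPHA * (reward - value[(s, a)])

for _ in range(20000):
    self_play_episode()

# With no knowledge beyond the rules, the policy typically converges toward the
# known strategy: leave a multiple of 4 stones for the opponent.
print({s: max((a for a in ACTIONS if a <= s), key=lambda a: value[(s, a)])
       for s in range(1, N_STONES + 1)})
```

In this tiny domain something strategy-like does emerge from randomness and self-play alone, because the game provides its own verifiable reward; whether that transfers to open-ended reasoning, where there is no built-in win condition, is exactly the open question above.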