The better you know a field, the more incremental it looks. In other words, perceived incrementality is largely a function of how much attention you pay and how deeply you research it. Relativity and quantum mechanics were incremental. Copernicus and Kepler were incremental. Deep learning itself was incremental: it was built on nearly identical networks from the 90s (CNNs), which used methods from the 80s (backpropagation) on architectures from the 70s (the neocognitron), with activation functions from the 60s and the basic neuron model from the 40s (McCulloch and Pitts), which was itself just a mathematization of biological observations made via microscopy, integrated with mathematical logic and the electrical logic gates developed around the same time (Shannon). That in turn is just logic as formalized by Gödel and others, which traces back to Hilbert's program, which can be extrapolated from Leibniz, and so on. It's easy to say "it's really just previous thing X plus previous thing Y, nothing new under the sun" about literally anything.
"It just suddenly appeared out of nowhere" is just a perception based on missing info. Many average people think ChatGPT was a sudden innovation specifically by OpenAI seemingly out of nowhere. Because they didn't follow it.