It is quite incredible that essentially nothing changed about the architecture between GPT-2 and GPT-3 (just far more parameters), yet it acquired fundamentally new behavior - that of performing arithmetic calculations - despite not having large amounts of training data on the subject. I think this is the kind of phenomenon that shows we are quite poor at estimating what these systems will be capable of when scaled up. So acting as if we're sure scaling won't lead to improvements in AI is as idiotic as claiming that it certainly will. There are far too many people on Hacker News who follow this fad of being dismissive of AI, because they make the common mistake of equating cynicism with intelligence.