That aside, AI seems to have had the most empirical success by not imposing hard constraints or structure, but by letting models learn completely "organically". The computationalists (the folks who have historically favored the "AI has to have things like logical consistency embedded into its structure" kind of thinking) seem to have basically lost, empirically. Who even knows what Soar[1] is nowadays? Maybe some marriage of the two paradigms will lead to better results, but I doubt things will head in that direction anytime soon, given how far simply using parallelizable architectures and adding more parameters has gotten us.
[1] https://en.wikipedia.org/wiki/Soar_(cognitive_architecture)