It seems quite clear to me that human brains are not actually doing much symbolic logic. What symbolic logic we do do has been bolted on using other faculties.

I agree. But my interest is in engineering something that works, not necessarily in creating an exact replica of the human brain. That's why my interest falls into the domain of symbolic / sub-symbolic integration: it strikes me as a faster path to more usable computer intelligence.
I have no problem believing that a sufficiently large ANN, with the right training and inference algorithms, could achieve AGI. My problem is twofold: (a) right now, achieving that seems very out of reach to me (though I could be wrong), and (b) it seems unnecessary to remain wedded to 100% (or even 90% or 80%) fidelity with our biological brains. After all, if we want something just like a human brain, we just need a man, a woman, and nine months of time.
Anyway, I think it's OK to engineer "short cuts" by taking things we know computers are good at, and things we already know how to do, and combining them with ANNs to make something useful. Will it ever yield AGI? I have no way of knowing. And even if it does, would that approach actually be faster than a pure ANN approach? Again, I don't know. But for now, I spend my time on symbolic/sub-symbolic integration nonetheless.
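To make the kind of combination I mean concrete, here's a minimal toy sketch (every name, weight, and rule here is invented for illustration, not a real system): a stand-in for an ANN produces soft confidences for named predicates, a thresholding step turns those into crisp symbols, and an ordinary rule engine reasons over the symbols.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def perceive(raw):
    """Sub-symbolic stage: a stand-in for a trained ANN that maps raw
    measurements to soft confidences for named predicates."""
    return {
        "has_wings": sigmoid(raw["wing_span"] - 0.5),
        "lays_eggs": sigmoid(raw["egg_count"] - 0.5),
    }

def symbolize(confidences, threshold=0.5):
    """Bridge: threshold soft scores into crisp symbolic facts."""
    return {pred for pred, conf in confidences.items() if conf > threshold}

# Symbolic stage: if every predicate in the body holds, derive the head.
RULES = [
    ({"has_wings", "lays_eggs"}, "bird"),
]

def infer(symbols):
    """Apply rules to a fixed point over the crisp facts."""
    derived = set(symbols)
    changed = True
    while changed:
        changed = False
        for body, head in RULES:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

facts = infer(symbolize(perceive({"wing_span": 3.0, "egg_count": 2.0})))
```

The appeal, to me, is that each half does what it's good at: the sub-symbolic side handles noisy perception, and the symbolic side gives you inspectable, composable rules you can add to without retraining anything.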
I think the problem is that reasoning about our own minds is incredibly tough.
Yes, definitely.