What you might see as logical operations "not mattering", I would see as logical operations integrated so deeply into reflexive operations that it's hard to tell where one ends and the other begins. The contrast is that humans can do pattern recognition in a neural-net fashion, forming something like the multidimensional average of a set of things. But a human can also receive a language-level input that some characteristic is or isn't important for recognizing a given thing, and incorporate that advice directly into their broad-average concepts. Deep learning currently can't do that kind of thing - well, not in a non-kludgey way.
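To make the contrast concrete, here's a minimal sketch of the kind of thing I mean: a concept stored as a centroid (a "multidimensional average" of examples) that can take a single language-level hint like "color doesn't matter" as a per-feature mask, with no retraining. Everything here is illustrative - the class, the feature names, and the toy numbers are all made up for the example, not any existing API.

```python
import numpy as np

class PrototypeConcept:
    """A concept as the multidimensional average of its examples."""

    def __init__(self, examples, feature_names):
        self.feature_names = list(feature_names)
        self.centroid = np.mean(examples, axis=0)  # broad average over examples
        self.mask = np.ones(len(feature_names))    # every feature matters by default

    def ignore(self, feature):
        # One sentence of advice updates the concept directly --
        # no new training data, no gradient pass.
        self.mask[self.feature_names.index(feature)] = 0.0

    def distance(self, x):
        # Masked Euclidean distance to the prototype.
        d = (np.asarray(x) - self.centroid) * self.mask
        return float(np.sqrt(d @ d))

# "cup" examples as (height, width, color_hue) rows -- toy numbers.
cups = np.array([[10.0, 8.0, 0.1],
                 [ 9.0, 7.0, 0.9]])
concept = PrototypeConcept(cups, ["height", "width", "color"])

red_cup = [9.5, 7.5, 0.95]
print(concept.distance(red_cup))  # color difference inflates the distance

concept.ignore("color")           # "color isn't important for cups"
print(concept.distance(red_cup))  # now judged on shape alone
```

The point of the sketch is the `ignore` call: a symbolic, language-level instruction modifies a statistical, averaged representation in one step. Doing the equivalent to the internal features of a trained deep net is exactly the part that's kludgey today.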
Similarly, I have a soft spot for the view that a mind is only as good as its set of inputs.
It depends on what you mean by that. A human can take inputs learned on one thing and apply them seamlessly to another. Neural nets tend to be heavily dependent on the task-specific content fed to them.