The logic seems to be: Thinking is mysterious [true]. Neural nets aren't mysterious [true]. Therefore neural nets must not be thinking [?].
That's a fallacy. Compare: The location of the buried treasure is mysterious. The place I'm about to dig isn't mysterious. Therefore, the treasure must not be where I'm about to dig.
It's a fallacy because as soon as you find the treasure, its location isn't mysterious anymore. Mysteriousness isn't an inherent property of things; it's a statement about our own limited knowledge, which changes over time.
The same will be true of thinking (for any definition of that term you might care to use). When we figure out how to make artificial systems that think, thinking will no longer be (completely) mysterious.