If you're content with the current orthodox approach(es), then just browse around GitHub, find one of those "Awesome X" lists, like "Awesome Machine Learning"[1], "Awesome Deep Learning"[2], or "Awesome Artificial Intelligence"[3], and go to town.
If you want to go deeper, including taking a step back in time and retracing the path(s) taken to explore whether you might want to choose a different fork... well, that's doable, but it's a slog. I should probably write up a reading list for this approach and put it up on GitHub. It leads to some weird places, though. Right before jumping over to HN and noticing this post, I'd spent the last hour or so trying to track down two obscure Russian books on neural nets from the 1980s/1990s. Print copies don't appear to be available in the US (and they've definitely never been translated into English), and the nearest library to me with a copy of one of them is the Library of Congress in D.C.
Do you need to go down that particular rabbit hole? Probably not. I'm just particularly interested in revisiting some earlier techniques and theorizing about NNs that have fallen out of favor and aren't really taught much anymore.
[1]: https://github.com/josephmisiti/awesome-machine-learning
[2]: https://github.com/ChristosChristofidis/awesome-deep-learnin...
[3]: https://github.com/owainlewis/awesome-artificial-intelligenc...