With all due respect to Minsky, I find this zen-style story a little silly. If Minsky wanted to say something informative, why didn't he use formal concepts like Jeffreys priors, mixing time, high-dimensional varieties, minimum description length, entropy, etc.? Is that style of storytelling a projection from a high-dimensional mind onto a zero-dimensional dumb-style space? Is it a PCA reduction from ideas to clichés? I apologize in advance for being harsh, but I am entitled to speak from my heart, and I reiterate my appreciation for Minsky's work.
It would be nice to use a more informative language when giving advice. If this story were tagged as "popular story for dummies" I would feel we were making real progress.
Just one of Minsky's great ideas related to reinforcement learning is the credit assignment problem: how do you distribute credit for success among the many decisions that may have been involved in producing it? In "Steps Toward Artificial Intelligence" (Minsky, 1961): "All of the methods we discuss are, in a sense, directed toward solving this problem."
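To make the problem concrete, here is a minimal sketch of one modern answer to Minsky's question: temporal credit assignment via discounted returns, as used in policy-gradient methods. This is a later formulation, not Minsky's own 1961 machinery.

```python
# Minimal sketch: assign credit to each decision in an episode by the
# discounted sum of rewards that followed it (temporal credit assignment).

def discounted_returns(rewards, gamma=0.9):
    """Return, for each time step, the discounted sum of future rewards."""
    returns = [0.0] * len(rewards)
    running = 0.0
    # Walk the episode backwards, accumulating discounted reward.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# A sparse-reward episode: only the final decision "wins" (+1).
credit = discounted_returns([0.0, 0.0, 0.0, 1.0], gamma=0.9)
print(credit)  # earlier decisions receive exponentially less credit
```

The discount factor gamma encodes one crude heuristic for Minsky's question: decisions closer to the success get more of the credit.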
That book is linked from HN (1), and it has just one comment, so I think that NDNS (no dumb nerd stories) will never become popular.
(1) https://news.ycombinator.com/item?id=10972522
More from (2): in 1951 Minsky built the world's first "randomly wired neural network learning machine," called the stochastic neural-analog reinforcement computer (SNARC).
https://www.geek.com/blurb/marvin-minsky-ai-has-been-brain-d...
A fair paper: "Exploring Randomly Wired Neural Networks for Image Recognition."
Was Sussman at the edge of envisioning deep learning? Then, in fact, the room has disappeared!