Being a truly good web crawler takes a lot of work, and being a polite web crawler takes yet more work on top of that, of a different kind.
And then, of course, you add the bad coding practices on top of that: ignoring robots.txt, or using robots.txt as a list of URLs to scrape (which can be either deliberate or accidental); hammering the same pages over and over; preferentially "retrying" the very pages that are timing out, because you've found the page that locks the DB for 30 seconds in a hard query that even the site's owners didn't know was possible until you showed them, by taking down the rest of their site in the process... it just goes downhill from there. Being "not bad" is already not good enough, and there's plenty of "bad" out there.
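To make the contrast concrete, here's roughly what the bare minimum of "not bad" looks like: check robots.txt before fetching, and back off on timeouts instead of retrying harder. This is only a sketch; the bot name and URLs are made up, and real crawlers also need crawl-delay handling, per-host rate limits, and so on.

```python
import time
import urllib.request
import urllib.robotparser

USER_AGENT = "ExampleBot/1.0"  # hypothetical crawler name

def fetch_politely(url, robots, max_attempts=3):
    """Fetch a URL only if robots.txt allows it, backing off on failure
    instead of hammering a page that is already struggling."""
    if not robots.can_fetch(USER_AGENT, url):
        # Disallowed: skip it entirely, don't treat the robots.txt entries
        # as a handy list of things to scrape.
        return None
    delay = 5.0
    for _ in range(max_attempts):
        try:
            req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
            with urllib.request.urlopen(req, timeout=30) as resp:
                return resp.read()
        except OSError:
            # A timeout is a signal the page is expensive for the server;
            # wait longer before trying again, don't retry it harder.
            time.sleep(delay)
            delay *= 2
    # Give up rather than queue the slow page for "priority retry".
    return None

robots = urllib.robotparser.RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()
fetch_politely("https://example.com/some-page", robots)
```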