Some of our prior work, AutoMan (
https://dl.acm.org/doi/10.1145/2927928,
https://www.microsoft.com/en-us/research/publication/voxpl-p...), developed this very idea almost a decade ago, and it explicitly addresses many of the issues around exploitative practices. By default, the system pays the minimum wage, and we usually paid workers quite a bit more than that while developing it. You can read more in our CACM letter response (
http://www.cs.williams.edu/~dbarowy/acknowledge_crowdworkers...); scroll down to the bit titled "Acknowledge Crowdworkers in Crowdwork Research".
Also, while it does not have the convenience of being a web service, you can download and use AutoMan today (https://docs.automanlang.org/). Most importantly, AutoMan provides statistical quality guarantees. It looks like Human Lambdas uses a manual auditing approach, which does not scale: it's surprisingly common for crowdsourcing tasks to be as hard to audit as they are to do in the first place. For the kinds of tasks that AutoMan supports, this is basically a non-issue.
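To give a flavor of what a statistical quality guarantee means here (this is a minimal illustrative sketch of confidence-thresholded voting, not AutoMan's actual algorithm; the function names and the union-bound test are my own simplification): for a multiple-choice task, you keep collecting redundant answers until the probability that the observed level of agreement could have arisen from workers guessing at random drops below your error tolerance.

```python
from collections import Counter
from math import comb

def chance_of_agreement(votes_for_top, total_votes, num_options):
    """Upper bound on the probability that random guessers would give
    any single option at least `votes_for_top` of `total_votes` votes
    (binomial tail, union-bounded over the options)."""
    p = 1 / num_options
    tail = sum(comb(total_votes, i) * p**i * (1 - p) ** (total_votes - i)
               for i in range(votes_for_top, total_votes + 1))
    return min(1.0, num_options * tail)

def enough_agreement(answers, num_options, confidence=0.95):
    """True once the top answer's lead can't plausibly be chance:
    keep soliciting more workers until this returns True."""
    top_votes = Counter(answers).most_common(1)[0][1]
    return chance_of_agreement(top_votes, len(answers), num_options) <= 1 - confidence
```

For example, with a 4-option question, three unanimous answers are not yet enough at 95% confidence (the chance bound is about 6.3%), but four unanimous answers are (about 1.6%) — so the scheme automatically decides when to stop paying for more redundancy.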
I still actively maintain the library, and I am always happy to talk to people about use cases.