still shocks me. It's a company that uses AI to produce survey results. I'll let you read their pitch/description and decide for yourself, but I think it's very fair to say that this is a service for fabricating survey results to validate whatever idea you had beforehand. Even side-stepping that, they claim to have overcome bias in their datasets, yet refuse to elaborate on 1) how they did that and 2) how they could prove they did.
As long as this community, which is far more technically sophisticated than the general public, isn't laughing companies like that out of the room, we're in serious trouble.
There was also this thread https://news.ycombinator.com/item?id=37259753 about an individual's project to provide an AI therapist. While people here and there did mention the downsides of having a program provide medical treatment, the overall sentiment wasn't at all negative.
I'm not even some AI luddite: I use and greatly benefit from some AI tools. But just like crypto, AI isn't the be-all and end-all of technology. The difference is that where crypto is primarily a financial risk to people duped into using dubious-at-best, scams-at-worst products, AI will cause real, concrete harm, e.g., https://www.euronews.com/next/2023/03/31/man-ends-his-life-a...