Hypothetically, if you believe there's no such thing as a soul or consciousness, that it's all just neurons which can be simulated, and that we're close to being able to simulate them, you're much more likely to think lofty AI goals can be achieved.
If you follow a religious tradition like Shinto where even things like rocks can have spirits - the idea of your phone having a certain, limited form of intelligence might already be cool with you.
If you think that, much like a camera does most of the work in photography yet the photographer takes the credit, the output of a person using AI is nobody's work but the user's, you might be completely fine with an AI-written wedding speech.
If you think the relentless march of technology can't be stopped and can barely be directed, you might think advanced AIs are coming anyway, and if we don't invent them the Chinese will, so you might be fine with pretty much whatever.
If you're extremely trusting of big corporations, seeing them as more moral than the government, or if you think censorship is vital to maintaining AI safety and stamping out deepfakes, you might think it a great thing for these technologies to be jealously guarded by a handful of huge corporations.
Or hell, maybe you're just a parent who's had their kid want to hear the same Peppa Pig book 90 nights in a row and you've got a hankering for something that would introduce a bit of variety.
Of course, these are all things reasonable people can disagree on, but if you didn't like OpenAI's work, would you end up working at OpenAI?