I don't think your account is AI just from these few comments, but I would like to point out that most rubrics one might use to determine what is "obviously AI" might end up catching the way you talk too.
If there were a truly accurate tell, some algorithm you could feed a few sentences and it could tell you "yep, this is 100% AI", then yeah, sure, use that. But I don't know how you could realistically build that machine, especially when it comes to generated text.
Like, why would my comment (or yours, or anyone else's) pass or fail the LLM check if I/you/someone else used specific prompts or another LLM to edit the output? It seems like these tools would work on 99.9% of outputs, but those outputs likely weren't created in an adversarial way.
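To make the adversarial point concrete, here's a deliberately naive sketch of a surface-feature detector (the `TELLS` list and `tell_score` are made up for illustration, not any real product): any check built on fixed tells is defeated by a single edit pass that rephrases around them.

```python
# Naive "AI tell" counter: scores text by surface features people often
# cite as tells (em-dash, smart quotes, certain pet words). Purely
# illustrative -- a quick rewrite removes every tell without changing
# the meaning, which is exactly the adversarial-editing problem.
TELLS = ["\u2014", "\u201c", "\u201d", "delve", "tapestry"]

def tell_score(text: str) -> int:
    lowered = text.lower()
    return sum(lowered.count(t) for t in TELLS)

original = "Let's delve into the rich tapestry of ideas \u2014 a journey."
edited = "Let's dig into the mix of ideas - a journey."  # same meaning, no tells

print(tell_score(original))
print(tell_score(edited))
```

Same idea, two surface forms, opposite verdicts. That's why a 99.9% hit rate on unedited outputs says nothing about outputs someone bothered to launder.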
I've always found this funny. Doesn't macOS' default text substitution enable (annoying to me) things like em-dash, smart quotes, etc?
...until you start scrolling down the page and it becomes screamingly obvious that everything it says comes from the same template.
Maybe the problem isn't just that AI produces gobs of useless crap. Maybe what's worse is that it can produce even more mediocre crap that crowds out the good?
All oatmeal, no steak, leads to "starvation" by poor nutrition.