I appreciate the pushback on this process; it made me think.
I actually asked the LLM for supporting or refuting sources, so I didn’t think I was cherry picking. Looking at its response… maybe ChatGPT didn’t pick up on the “refuting” detail, or maybe observationist was correct. Next time, running two separate prompts, one to “find supporting” sources and another to “find refuting” ones, might better ensure coverage of both sides.
My value add in the human+AI workflow was checking the links. They seem high quality and directly applicable to the statements made. I took pressure off observationist to go find directly applicable links (and I saved time by not googling each separate fact). That said, I probably didn’t need to requote ChatGPT in full. I liked the full answer because it assured me ChatGPT was responding to each claim, but the important thing was the links. So, more efficiency was possible in my yc comment.