Yes, for a few reasons:
- I control the system prompt to guide the model to do exactly what I want, rather than begging and cajoling ChatGPT. I will often use the API playground for this reason where someone else might use ChatGPT.
- For more complex problems, having the input and output in Python gives me a ton of options for data manipulation, storage, etc.
- Using API calls lets me daisy-chain different models and other APIs/tools together. ChatGPT can sort of do this via plugins, but I haven't found it to be a great experience.
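To make the daisy-chaining point concrete, here's a minimal sketch of the pattern: each step is just a function from text to text, so model calls and ordinary Python tools compose the same way. `call_model` is a hypothetical stand-in for a real API call (in practice it would hit something like OpenAI's chat completions endpoint with a custom system prompt); it's stubbed out here so the sketch runs offline.

```python
def call_model(system_prompt: str, user_input: str) -> str:
    """Hypothetical stand-in for a chat-completion API call.

    A real version would send something like:
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": user_input}]
    Stubbed so the pipeline below runs without a key or network.
    """
    return f"[{system_prompt}] {user_input}"

def extract_keywords(text: str) -> str:
    # An ordinary Python "tool" sitting between two model calls:
    # keep only the longer words as crude keywords.
    return ", ".join(w for w in text.split() if len(w) > 4)

def pipeline(raw_input: str) -> str:
    # Model call -> Python tool -> model call, chained freely.
    summary = call_model("Summarize tersely.", raw_input)
    keywords = extract_keywords(summary)
    return call_model("Write a title from these keywords.", keywords)

print(pipeline("some long document text about local language models"))
```

The point isn't the stub itself: once every step is a plain function, you can swap in different models, cache intermediate results, or bolt on any other API without fighting a chat UI.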
Head-to-head, GPT-4 is definitely better than the local models. It's not a clear winner once you get into fine-tuning/ensemble workflows, though. It's also less attractive when a personal project needs to chew through a ton of tokens, since GPT-4 gets expensive fast.