GPT-J is 6B parameters, while the biggest version of GPT-3 available through the API is 175B; those two models are nothing alike in terms of quality. Even the 6B version of GPT-3 (curie) is better quality than GPT-J, IIRC.
So if you need better quality than GPT-J there are basically no alternatives.
And even if 6B is enough for you but you care about latency, OpenAI has the best inference runtime by far, and you are not going to replicate that on your own cloud or bare-metal setup. Unless, that is, your scenario specifically benefits from your API and your other services being colocated.
Edit: I forgot about finetuning. OpenAI gives you the ability to finetune all of their model variants. Maybe you already have the knowledge to finetune something like GPT-J yourself, but I would guess that most potential users of the API do not.