The nice part about that one is that there's no separate server, no telemetry, and the backing file format is simple JSON you can import directly from your production application. However, the range of supported models is smaller (basically only LLaMA-style and OpenAI-style interfaces are supported).
* Testing prompt behavior across various LLMs
* Sharing those prompts across multiple applications
We currently use a Jupyter notebook to iterate on, test, and validate prompts, then move those prompts to our production app, which is written in C#. If there were a C# SDK, I could use this tool to create a prompts config file and share it between the Jupyter notebook and the C# app. The config file could also be added to version control.
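To make the workflow concrete, here's a minimal sketch of the notebook side. The JSON schema (keys like `template`, `model`, `temperature`) is entirely hypothetical, not the tool's actual format; the point is just that a plain-JSON config can be written from the notebook, committed to version control, and deserialized by the C# app.

```python
import json

# Hypothetical prompts config; the real tool's schema may differ.
prompts = {
    "summarize": {
        "template": "Summarize the following text in {max_words} words:\n\n{text}",
        "model": "gpt-4o",
        "temperature": 0.2,
    }
}

# Notebook side: write the config file so it can be version-controlled
# and shared with the production app.
with open("prompts.json", "w") as f:
    json.dump(prompts, f, indent=2)

# Either side: load the config and render a template with concrete values.
with open("prompts.json") as f:
    loaded = json.load(f)

rendered = loaded["summarize"]["template"].format(max_words=50, text="...")
```

On the C# side the same file could be read with `System.Text.Json`, so both environments stay in sync from one source of truth.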
Having said that, I don't understand why it saves the output of the LLM, so maybe I'm missing something.