> Deal with invalid calls? Encode and tie results to the original call? Deal with error states? Is it custom work to bring in each new api or do you have common pieces dealing with, say, rest APIs or shelling out, etc?
Why would any LLM framework deal with that? That is basic architecture 101. I don't want to stack another architecture on top of an existing one.
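To make that concrete, here's a minimal sketch (all names hypothetical) of the kind of plain dispatcher the parent is asking about: it handles invalid calls, catches error states, and ties each result back to the original call id, in a couple dozen lines with no framework:

```python
import json
import uuid

def get_weather(city: str) -> dict:
    # Stand-in tool; any ordinary function works here.
    return {"city": city, "temp_c": 21}

TOOLS = {"get_weather": get_weather}

def dispatch(call: dict) -> dict:
    """Run one model-requested call and tie the result back to its id."""
    call_id = call.get("id") or str(uuid.uuid4())
    name = call.get("name")
    args = call.get("arguments", {})
    if name not in TOOLS:                      # invalid call
        return {"id": call_id, "error": f"unknown tool: {name}"}
    try:
        return {"id": call_id, "result": TOOLS[name](**args)}
    except TypeError as e:                     # malformed arguments
        return {"id": call_id, "error": f"bad arguments: {e}"}
    except Exception as e:                     # tool-side failure
        return {"id": call_id, "error": str(e)}

out = dispatch({"id": "c1", "name": "get_weather", "arguments": {"city": "Oslo"}})
print(json.dumps(out))
```

Common pieces for REST calls or shelling out are just more entries in that table, each wrapped the same way.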
>If it’s reused, then is it that different from creating abstractions?
Because you have control over the abstractions. You have control over what goes into the context. You have control over updating those abstractions and prompts based on your context. You have control over choosing your models instead of depending on models supported by the library or the tool you're using.
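A hedged sketch of what "control over the abstractions" can mean in practice (everything here is hypothetical, with a stand-in instead of a real API client): the prompts, the context assembly, and the model choice are all plain data and functions you can edit directly:

```python
# The whole "abstraction": a dict of prompt templates plus one
# callable per model/provider -- all of it yours to change.
PROMPTS = {"summarize": "Summarize in one sentence:\n{text}"}

def fake_model(prompt: str) -> str:
    # Stand-in; swap in any provider's client without touching callers.
    return prompt.splitlines()[-1][:40]

MODELS = {"default": fake_model}

def run(task: str, model: str = "default", **kw) -> str:
    # You decide what goes into the context and which model runs it;
    # there is no framework layer in between to work around.
    return MODELS[model](PROMPTS[task].format(**kw))

print(run("summarize", text="hello"))
```

Swapping models or rewording a prompt is a one-line change, rather than waiting for a library to support it.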
>As an aside - models are getting explicitly trained to use tool calls rather than custom things.
That's great, but they're also great at generating code, and guess what the code does? It calls functions.
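A minimal sketch of that point (the model output string here is made up): instead of a tool-call schema, the model emits plain code that calls your functions, and you execute it against only the names you choose to expose:

```python
def get_weather(city: str) -> dict:
    # An ordinary function -- no tool schema needed.
    return {"city": city, "temp_c": 21}

# Pretend this string came back from the model.
model_output = 'result = get_weather("Oslo")'

# Expose only the functions you want callable, then run the code.
namespace = {"get_weather": get_weather}
exec(model_output, namespace)
print(namespace["result"])
```

In a real system you'd sandbox this execution, but the shape of the idea is just: generated code, calling functions.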