However, AFAIK it's only ever used at inference time; an interpreter isn't included during LLM training, is it? I wonder whether it would be possible to fine-tune a model for coding with an interpreter in the loop. Though if no one has done it yet, there is presumably a good reason why not.
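To make the idea concrete, here's a rough sketch of what the interpreter side of such a fine-tune might look like: execute the model's generated code in a subprocess and turn pass/fail into a scalar reward that an RL-style fine-tuning loop could consume. Everything here is hypothetical: `interpreter_reward` is a name I made up, and `model.sample` / `model.reinforce` are stand-ins for whatever policy-gradient machinery (e.g. PPO) you'd actually plug in.

```python
import subprocess
import tempfile

def interpreter_reward(generated_code: str, test_snippet: str,
                       timeout: float = 5.0) -> float:
    """Run model-generated code plus a test snippet in a subprocess.

    Returns 1.0 if the program exits cleanly (tests pass), 0.0 otherwise.
    A real setup would want sandboxing, partial credit, etc. -- this is
    just the simplest possible execution-feedback signal.
    """
    program = generated_code + "\n\n" + test_snippet
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run(["python", path],
                                capture_output=True, timeout=timeout)
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0  # infinite loops and hangs get zero reward

# Hypothetical training loop (pseudocode -- the model API is assumed):
# for prompt, tests in dataset:
#     completion = model.sample(prompt)
#     r = interpreter_reward(completion, tests)
#     model.reinforce(prompt, completion, reward=r)
```

The obvious catch, and maybe part of the "good reason why not", is that actually running untrusted generated code at training scale means sandboxing and timing out millions of programs, which is a lot of infrastructure compared to plain next-token training.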