Cheers, thanks for your interest:
Telosnex, @ telosnex.com --- fwiw, general positioning is around paid AIs, but there's a labor-of-love, llama.cpp-backed, on-device LLM integration that makes them true peers, both in UI and functionality. Albeit with a warning sign, because normie testers all too often wander into trying it on their phone and killing their battery.
My curse is the standard engineer one: the only place I really mention it is in one-off comments like this, to lend some authority to a point I want to make. I'm always one release away from it being perfect enough to talk up regularly.
I really really need to snap myself awake and ban myself from the IDE for a month.
But this next release is a BFD: full agentic coding, with tons of tools baked in, and I'm so damn proud to see the extra month I've spent getting llama.cpp tools working agentically too. (https://x.com/jpohhhh/status/1897717300330926109. Real thanks is due to @ochafik at Google; he spent a very long time making a lot of haphazard stuff in llama.cpp coalesce. Also Phi-4 Mini: after my llama.cpp patch, it's the first local LLM that is reasonably fast and an actual drop-in replacement for RAG and tools.)
Please feel free to reach out if you try it and have any thoughts, positive or negative: james @ the app name.com