1. Show HN: Go LLM inference with a Vulkan GPU back end that beats Ollama's CUDA (github.com) · 1 point · computerex · 18 days ago · 0 comments
2. Show HN: I wrote an LLM inference engine in pure Go – 48 tok/s, zero dependencies (github.com) · 2 points · computerex · 19 days ago · 0 comments
3. Show HN: 100% local speech dictation app with wakeword detection (mohdali7.gumroad.com) · 1 point · computerex · 21 days ago · 0 comments