I've had this in mind for a while, and I finally built it: it shows system information like regular Neofetch, but with extra features for people running local LLMs (Ollama, llama.cpp, etc.).
For example:

- How much VRAM does your GPU have, and which vendor (NVIDIA, AMD, Intel, Apple M series)?
- How many billions of parameters can your machine comfortably run (is 70B or 13B more sensible)?
- What each GGUF quantization means (Q4_K_M vs Q8_0 vs …)
- A comparison of Ollama / llama.cpp / vLLM / LM Studio
- Disk speed test + JSON/Markdown export
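For the curious, the VRAM-to-parameter estimate boils down to simple arithmetic. This is just a rough sketch of the idea, not the tool's actual logic; the bits-per-weight figures are approximate averages for each GGUF quantization, and the overhead allowance is a made-up placeholder:

```python
# Rough sketch (NOT the tool's actual code): estimate how many billion
# parameters fit in a given amount of VRAM for common GGUF quantizations.
# Bits-per-weight values are approximate averages including quant overhead.

BITS_PER_WEIGHT = {
    "Q4_K_M": 4.85,  # ~4-bit k-quant, medium
    "Q8_0": 8.5,     # ~8-bit
    "F16": 16.0,     # unquantized half precision
}

def max_params_billion(vram_gb: float, quant: str, overhead_gb: float = 1.5) -> float:
    """Very rough upper bound on model size (billions of parameters)
    that fits in vram_gb, reserving overhead_gb for KV cache and buffers."""
    usable_bytes = (vram_gb - overhead_gb) * 1024**3
    bytes_per_weight = BITS_PER_WEIGHT[quant] / 8
    return usable_bytes / bytes_per_weight / 1e9

for quant in BITS_PER_WEIGHT:
    print(f"24 GB VRAM, {quant}: ~{max_params_billion(24, quant):.0f}B params")
```

So a 24 GB card lands around the 30-40B range at Q4, which is why 70B usually needs offloading or a multi-GPU setup.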
Simple installation:

```
pip install llm-neofetch-plus
```

Run it with `-d 3` for the detailed version, which shows suggestions etc.:

```
llm-neofetch -d 3
```
GitHub: https://github.com/HFerrahoglu/llm-neofetch-plus
If anyone tries it, I'd love to hear whether you liked it and what I should change. Thanks!