Or anything which is free (at least as in beer) and readily bundled in distro-specific installation packages?
Edit: And Arch packages ollama officially - https://archlinux.org/packages/?sort=&q=llama&maintainer=&fl... - and a few things in the AUR - https://aur.archlinux.org/packages?O=0&K=llama
Running one as a background desktop assistant is a whole different animal than calling a Microsoft API.
Huh? That's not true at all. It only uses processing power (CPU) while it's actually generating text; otherwise it sits and waits. It does occupy memory (RAM or VRAM) if you don't unload it, but you can configure it to start up when you need it and shut down when you don't.
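With ollama specifically, the unload behaviour is controllable per request: the /api/generate endpoint accepts a keep_alive field, and keep_alive: 0 asks the server to drop the model from memory as soon as the response is done. A minimal sketch (the model name is just an example, and the generate call assumes a locally running ollama server on its default port):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default local endpoint

def build_generate_request(model: str, prompt: str, keep_alive=0) -> bytes:
    """Build the JSON body for ollama's /api/generate.

    keep_alive=0 tells the server to unload the model from RAM/VRAM
    right after generating, so it only occupies memory while working.
    """
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
        "keep_alive": keep_alive,
    }).encode()

def generate(model: str, prompt: str) -> str:
    """Send the request to a local ollama server (needs one listening)."""
    req = request.Request(OLLAMA_URL,
                          data=build_generate_request(model, prompt),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Leaving keep_alive at the server default instead keeps the model warm between requests, which is the usual trade-off: faster responses versus occupied VRAM.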
A search in Fedora yields a single GSoC project[0] limited in scope to NetworkManager, and it's not clear if anyone is actually working on it.
If the use case you're interested in is having the LLM do things for you in SaaS applications, that wouldn't need deep integration. But considering Google has yet to deliver a Google Drive client for Linux, I wouldn't hold my breath waiting for a native Linux AI assistant.
Your best option right now is to interface with the assistants through their web interface and hope they have plugins/extensions to interact with things you want.
Other than that, some people have built prototypes running LLMs locally that talk to things like Home Assistant. But again, no deep desktop integration.
0 - https://docs.fedoraproject.org/en-US/mentored-projects/gsoc/...
The other day I wanted to figure out how to turn my dock red when the VPN dropped in GNOME. I found the file that controls my WireGuard GNOME Shell extension, and with the help of GPT-3.5 and some very rudimentary JS knowledge (I'm a backend dev, don't hate me), I was able to add a JS function to toggle the color on VPN up/down events. It didn't even take me an hour, and I'd never have thought to try it before GPT.
Sure, things are janky now, but the future potential of LLMs with Linux and OSS is huge.
A simple chat window and an automated script to install an existing small model should be doable, but that doesn't sound very exciting to me.
But mid-term, having a locally run LLM integrated into the OS that scans my files and can summarize folders would be nice. I have big folders full of mixed stuff; AI would be handy for sorting that. I believe some people are working on something like this, but the bulk of it is not OS-specific. And not OSS.
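The "summarize this folder" idea splits into two parts: gathering a snapshot of the folder's contents, and handing that to a model. A rough sketch of the gathering half; the resulting prompt would go to whatever local model you wire in (function names and limits here are illustrative, not from any existing tool):

```python
from pathlib import Path

SNIPPET_CHARS = 500  # how much of each file to sample

def folder_snapshot(folder: str, max_files: int = 50) -> str:
    """Collect file names plus the first few hundred characters of each
    file, as raw material for a summarization prompt."""
    parts = []
    count = 0
    for path in sorted(Path(folder).rglob("*")):
        if not path.is_file():
            continue
        if count >= max_files:
            break
        count += 1
        try:
            snippet = path.read_text(errors="ignore")[:SNIPPET_CHARS]
        except OSError:
            snippet = "<unreadable>"
        parts.append(f"### {path.name}\n{snippet}")
    return "\n\n".join(parts)

def build_summary_prompt(folder: str) -> str:
    """Wrap the snapshot in an instruction for the model."""
    return ("Summarize what this folder contains and suggest how to sort it:\n\n"
            + folder_snapshot(folder))
```

The hard parts this skips — binary files, huge trees, and fitting everything into the model's context window — are exactly where the real engineering would be.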
But don't most LLMs max out at around a 32k-token context?
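Many do, and the standard workaround for content that doesn't fit is map-reduce summarization: summarize each chunk separately, then summarize the summaries. A sketch with the model call stubbed out as a plain callable, since the backend would be whatever you run locally (chunking by characters rather than tokens is a simplification):

```python
def chunk(text: str, size: int) -> list[str]:
    """Split text into pieces that each fit the model's context window."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def map_reduce_summarize(text: str, llm, context_chars: int = 8000) -> str:
    """Recursively fold text down until a single call fits in context.

    `llm` is any callable prompt -> summary; it stands in for a local
    model invocation.
    """
    if len(text) <= context_chars:
        return llm(f"Summarize:\n{text}")
    partials = [llm(f"Summarize:\n{c}") for c in chunk(text, context_chars)]
    return map_reduce_summarize("\n".join(partials), llm, context_chars)
```

This terminates as long as each summary is shorter than the chunk it came from, which in practice you'd enforce in the prompt or by truncation.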
Can you tell me what exactly you want it to do? You have a bunch of files in folders and you want the AI to tell you what exactly?
On Mac, when I press Command + Space, it brings up Spotlight search.
Couldn't the equivalent of that easily be added as some kind of LLM prompt on GNOME/KDE/XFCE?
I don't quite know what you'd ask it or do with it that would be of much value. It seems like a quicker way of asking an LLM questions, a wrapper around either the CLI or an Electron-wrapped HTML app (like this: https://github.com/lencx/ChatGPT).
Both GNOME and KDE already have that. It shouldn't be too hard to implement what you're thinking if the APIs/services are available.
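For GNOME specifically, the overview search is already pluggable: an application can expose a D-Bus service implementing the org.gnome.Shell.SearchProvider2 interface and register it with a small ini file under /usr/share/gnome-shell/search-providers/. The names below are hypothetical placeholders for a would-be LLM prompt provider, not an existing project:

```ini
# /usr/share/gnome-shell/search-providers/llm-prompt.ini (hypothetical)
[Shell Search Provider]
DesktopId=llm-prompt.desktop
BusName=org.example.LLMPrompt
ObjectPath=/org/example/LLMPrompt
Version=2
```

The D-Bus service behind it would receive the typed query, forward it to a local model, and return the response as search results.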
My frontend side is very weak so it’s going to be very barebones but contributions are welcome once it’s stable:
I really don't think that's true. There have always been distros based on being tiny, of course, but I think most of the normal distros are concerned with hitting a happy medium of size and features. Otherwise I can't imagine why anything would ship GNOME or KDE over LXDE, or why LibreOffice would be installed by default. So the question is more where LLMs are on cost/benefit... which, granted, may not be there yet, but I could easily see it turning into a checkbox at install time: "this machine has 16+GB of RAM; add SomeLLM?"
It's not distro-bundled (yet), but I have it running on Fedora Linux 39 on a NUC with 16GB of RAM. Performance is good enough for me.