I tried my best to use this tool and many other all-in-one tools (asdf, rtx, etc.), but individual tools like tfenv, pyenv, and nvm feel so much more ergonomic than the all-in-one tools that I don't mind having so many of them
Which has been turned into some kind of system aimed at generating/distributing F/OSS revenue based on usage via crypto. Pkgx, the package manager that drives it, used to be called 'tea'.
Here's a previous discussion: https://news.ycombinator.com/item?id=30778924
And a more recent one: https://news.ycombinator.com/item?id=33681216
This was at the "let's download the package repository to get started" step. So, meh...
1. Install Nix from https://nixos.org/download.html
2. Find the package from https://search.nixos.org/packages
3. Start a shell with `nix-shell -p <name>`, have the package available in it
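For something less one-off than `nix-shell -p`, the same idea can be written down as a `shell.nix` in the project directory. A minimal sketch (`httpie` is just an example package):

```nix
# shell.nix — enter the environment with a plain `nix-shell`
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  packages = [ pkgs.httpie ];
}
```

Leaving the shell drops the package from your PATH again; the store keeps a cached copy.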
Sorry, no.
It surely is complex, but I think you're fundamentally devaluing the importance of dependency management. Your important 'stuff to do' is built on top of a lot of software; getting that supply chain right (and reproducible) is at least as important.
For example, if you have a JS project with a package.json, Nix offers node2nix as a way to transform that package.json into Nix expressions. But in an alternate universe, npm and lockfiles would "work" well enough that we wouldn't need to rely on Nix for package pinning.
There's all this work put into reproducibility that assumes the answer is to add wrappers around the existing tooling. That's fine as a last resort, but if those efforts went into each language's ecosystem instead, maybe we would end up in a scenario where each packaging tool didn't have to come up with its own magic way of doing things.
Nix and Bazel are complex because they try too hard to work well despite the tooling, rather than getting the tooling to a place where all these layers of hacks aren't needed. And so, downstream of that, "simple" tools become too complex from all the incidental complexity this way of doing things introduces.
This feels like a solution for a problem that doesn’t exist.
This is solved for many languages: have one directory per project (node_modules, target) and, optionally, a user cache so you don't have to redownload stuff. Or have a single user cache, but still separate packages by version. I think people know how painful C and C++ versioning is and don't want to go through that again. Even with C/C++, CMake, automake, and Bazel are there to wrangle package versions.
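The layout described above can be sketched in a few lines; the paths here are illustrative, not any real tool's convention:

```python
from pathlib import Path

def project_deps(project: Path, name: str) -> Path:
    """Per-project dependency directory, like node_modules/ or target/."""
    return project / "deps" / name

def user_cache(home: Path, name: str, version: str) -> Path:
    """One shared user cache, still keyed by version so projects never clash."""
    return home / ".cache" / "deps" / name / version

print(project_deps(Path("/work/app"), "left-pad"))
# → /work/app/deps/left-pad
print(user_cache(Path("/home/u"), "left-pad", "1.3.0"))
# → /home/u/.cache/deps/left-pad/1.3.0
```

The point is just that version-keyed cache paths make a shared cache safe: two projects wanting different versions resolve to different directories.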
Though I do agree that things like Firefox or Thunderbird don't need to be hidden in a mysterious cache location.
That’s the part I don’t get; this reads like it’s replacing brew. Libraries are entirely different from applications. I can easily see wanting a library temporarily to satisfy a version constraint, but something I’m going to use directly? Why?
If it’s for the interpreter (Python, for example), and someone has pinned it to a specific requirement rather than letting it float up, that’s a problem with the author IMO. I shouldn’t need to install 3.7.3 when semver (if taken seriously, which Python does) states that >= 3.7 suffices.
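For what it's worth, Python packaging already has a standard way to express exactly that floor instead of a pin: `requires-python` in `pyproject.toml` (PEP 621). The project metadata below is illustrative:

```toml
[project]
name = "example"          # illustrative project name
requires-python = ">=3.7" # any 3.7+ interpreter satisfies this; no pin to 3.7.3
```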
I have no comment on Node because the entire ecosystem is a hellscape.
To each their own. If your workflow works for you, have at it.
> written by homebrew author (brew is notoriously slow)
> written in typescript
hmmm... I have my doubts about that claim, especially when there's no evidence to support it.
$ bun
command not found: bun
^^ type `pkgx` to run that
$ pkgx
running `bun`…
Bun: a fast JavaScript runtime, package manager, bundler and test runner.
# …I'm also genuinely surprised they abandoned the sha256 from brew (e.g. "welp, it is what it is" https://github.com/pkgxdev/pantry/blob/main/projects/httpie.... ). Ah, it's an implied .sha256 path from their magic distribution something something: https://dist.pkgx.dev/?prefix=httpie.io/
I actually find it useful: now and then I want to try something temporarily for an intermediate result, and having it permanently installed doesn’t yield benefits.
Good example: I generate AsyncAPI docs, and doing an npx is fine. The alternative approach of pulling a Docker image and then executing something with all the volume sharing, ports, and whatnot is a bit cumbersome, because I'm too lazy to press that many keys.
If Max added “offline caching” to pkgx, you could still continue to use utilities even after the shell session ended or if you lost Internet connectivity!
/s