Make is one of my favorite tools, and I believe the author built a simple and robust system. I have a certain degree of envy.
I made something similar but with Ansible wrapped as an uv script.
What I like about Ansible is that it's higher level, so I'm able to do complex modifications to my machine without having to write them, or handle the errors, myself, because the community behind the tasks has already done it. Ansible's out-of-the-box idempotency is also very nice. Here is my ansible/uv-script project if anyone is interested: https://camilo.matajira.com/?p=591
Or just use Nix with Home Manager. Battle tested, lots of built-in functionality, works perfectly. The author claims the learning curve for it is weeks, but I had my setup up and running in 1-2 hours at most and have been super happy with it.
IshKebab 18 hours ago [-]
This must be a different "just" from the just I'm used to!
Weeks sounds way more accurate than 1-2 hours.
loveparade 18 hours ago [-]
Now with LLMs it's even easier. Writing Nix code is hard, but reading it is straightforward because it's declarative, so you can easily review what an LLM produces. And it's not much code either; a simple Home Manager setup is maybe 100 lines total.
1. Install Nix / Determinate Nix
2. Tell your favorite LLM to set up https://github.com/nix-darwin/nix-darwin with Home Manager if you are on a Mac, or just Home Manager if you are on Linux
3. Review the code and ask for clarifications
You'll have a setup in 20 minutes.
Ah yeah I wouldn't count that as being a small learning curve because you haven't actually learnt anything.
Valid approach though I guess.
loveparade 14 hours ago [-]
I'd disagree. You're not learning anything if you close your eyes and tell your LLM "set up home manager" - but you'll learn a lot if you read the code it produces, ask clarifying questions, and actually try to understand what is happening. It's just a tool that helps you avoid doing tons of research manually by searching google, and helps you avoid dealing with ugly nix syntax.
I recently used Claude Code to help me learn Nix + home-manager!
For anyone considering it: it's been fun, genuinely useful in my day-to-day, and I can't recommend it enough. I now have a source-controlled toolkit that I can take with me anywhere I go.
l1ng0 11 hours ago [-]
If the wheel was a stellated rhombicosidodecahedron
ika 22 hours ago [-]
I agree. I started with Nix flakes in my project and fell in love with them. Then I started using Home Manager, and now I feel complete. I even played with nix-darwin and NixOS. It's an amazing piece of software.
dewey 20 hours ago [-]
I've gotten used to it, and with LLMs it's easier to set up the config without learning all the obscure syntax, but on macOS it still feels very un-native compared to Homebrew. Having to sudo all the time feels weird for just updating user-space apps and configs.
rekado 19 hours ago [-]
I use a Guix manifest for every project, which describes what dev tools and dependencies I want. When I enter a directory the shell automatically evaluates the manifest and all my tools are ready.
With tooling for deployment I prefer to heed an adaptation of Greenspun's Tenth Rule. Neither Guix nor Nix are really all that "complex" from a user's perspective.
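For anyone unfamiliar, such a per-project manifest is only a few lines of Scheme. A sketch (the package names are illustrative, not the commenter's actual list):

```scheme
;; manifest.scm -- per-project dev tools, evaluated on entering the directory
(specifications->manifest
 '("gcc-toolchain"
   "gnu-make"
   "ripgrep"))
```

`guix shell -m manifest.scm` builds the environment, and a direnv hook can run it automatically on `cd`, which is the behavior described above.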
skeledrew 15 hours ago [-]
I very much have this problem, but this doesn't solve it. I've tried tracking my installs before and it doesn't work. Thing is, I just install stuff on demand and never think about recording the installs... until I need that record, especially when I'm solving an issue. What I need is a universal automatic tracker that just captures it all.
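One cheap way to approximate that universal tracker is a shell shim that records install commands before forwarding them. This is only a sketch: the log path and the `apt` wrapper are illustrative, and you'd add a wrapper per package manager you use.

```shell
# Append a timestamped record of any install command to a running log.
log_install() {
    local log="${INSTALL_LOG:-$HOME/.install-history}"
    printf '%s\t%s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*" >> "$log"
}

# Shadow the real package manager so installs get recorded automatically.
apt() {
    [ "$1" = "install" ] && log_install apt "$@"
    command apt "$@"   # forward to the real apt
}
```

Dropped into a shell rc, `~/.install-history` becomes the record you only think about when you actually need it.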
> Every developer on Linux already knows both.
I've been developing on Linux for over 10 years and I don't. It's like exiting vim: whenever I want to do anything beyond running a command or basic variable use, I have to go look it up online. Every time.
jiehong 15 hours ago [-]
I'd recommend trying to get one package manager to handle all of it, like a Brewfile [0].
[0]: https://docs.brew.sh/Brew-Bundle-and-Brewfile#types
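For context, a Brewfile is a declarative list that `brew bundle` consumes; a sketch (the entries are illustrative):

```ruby
# Brewfile -- one file describing everything brew should have installed
brew "ripgrep"        # CLI formulae
brew "fzf"
cask "wezterm"        # GUI apps (casks are macOS-only)
```

`brew bundle install` converges the machine on the list, and `brew bundle dump` regenerates the file from what's currently installed, which also helps with the "I never record my installs" problem mentioned above.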
> Docker/containers solve isolation, not tool installation. You do not want to run your editor, terminal, and CLI tools inside containers.
I'm not in agreement here. You can have a Dockerfile in which all tools get installed. You build it, and tag it with, let's say `proj-builder`.
Then you can run commands with a mounted volume like `docker run --volume $(pwd):/sources <tool call>`. And alias it.
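Concretely, that pattern is a tools-only image plus an alias. A sketch along the lines the comment describes (the base image and tool list are my assumptions):

```dockerfile
# Dockerfile for the `proj-builder` tools image
FROM debian:bookworm-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential git ripgrep \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /sources
```

`docker build -t proj-builder .` once, then something like `alias pb='docker run --rm -it --volume "$(pwd)":/sources proj-builder'` makes the containerized tools feel local.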
That's unwanted overhead IMO. And I definitely don't want to be running my regular stuff in containers; like I did a full disable and yank on snap so I never accidentally install anything with it. And every time I get into a situation where I have to reach for docker, I find that I suddenly have to be watching my disk space. Absolutely hate it.
theowaway213456 22 hours ago [-]
Five years ago, I would've loved this. I love the simplicity and power of good old Make. And I obsess over my workstation's configuration. I used to have a massive bash script I would use to reprovision my workstation after every clean upgrade of Ubuntu.
But these days, I just tell codex to install things for me. I basically use it as a universal package manager. It's more reliable honestly than trying to keep up to date with "what's the current recommended way to install this package?"
I also have it keep a list of packages I have installed, which is synced to GitHub every time the list changes.
Gigachad 20 hours ago [-]
I feel like even iPad kids are more capable with a computer than HN users these days.
I like the idea, and I would like to use it. It still requires me to be conscious enough to add a new tool to it.
I'd love to see something like a "discover" option that attempts to scan what things I have added to assist me in building the make for the next time.
duskdozer 17 hours ago [-]
I've ended up using a pseudo-make bash script with a helper that runs functions only once, mainly because I find adding new stuff to a Makefile more annoying, and less intuitive and readable. I haven't come up with anything easier so far.
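For comparison, the "run functions only once" helper can be as small as a stamp-file check. This is a sketch of that idea, not the commenter's script; the stamp directory and the example task are illustrative.

```shell
# Run a named task function at most once, make-style, using stamp files.
STAMP_DIR="${STAMP_DIR:-$HOME/.local/state/setup-stamps}"

once() {
    local task="$1"
    mkdir -p "$STAMP_DIR"
    if [ -e "$STAMP_DIR/$task" ]; then
        echo "skip: $task"
        return 0
    fi
    "$task" && touch "$STAMP_DIR/$task"
}

install_ripgrep() {   # example task; a real body would call the package manager
    echo "installing ripgrep"
}
```

`once install_ripgrep` does the work the first time and is a no-op afterwards; removing the stamp file forces a rerun.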
redoh 17 hours ago [-]
The fzf integration is a really nice touch here. Half the battle with dev tool management isn't installing things, it's remembering what you installed and how six months later. I know everyone's going to recommend Nix (and they already have), but there's something to be said for a solution where the entire logic fits in your head on first read. I've had a similar Makefile-based setup for years and the biggest win is onboarding new team members who can just read the targets and immediately know what's available.
sudonem 16 hours ago [-]
My approach might be an outlier, but I’ll share since it’s a bit more platform agnostic.
I do almost all of my work in the terminal, so I had already been using chezmoi to manage my dotfiles for a few years. Eventually I added an Ansible bootstrapping playbook that runs whenever I set up a new environment to install and configure whatever I like.
I’m already living & breathing Ansible most days so it wasn’t a heavy lift, but it’s a pretty flexible approach that doesn’t bind me to any specific type of package manager or distro.
fer 17 hours ago [-]
There's already a bunch of comments about Nix, so I don't want to repeat them, but really Nix is less complex than a handcrafted series of Makefiles, and significantly more versatile.
With home-manager I have the same packages, same versions, same configuration, across macOS, NixOS, Amazon Linux, Debian/Ubuntu... That made me completely abandon ansible to manage my homelab/vms.
Also adding flake.nix+direnv on a per project basis is just magical; I don't want to think how much time I would have wasted otherwise battling library versioning, linking failures, etc.
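For reference, the per-project piece is one small file; a sketch (the package choices are illustrative, and a real flake would usually cover more systems):

```nix
# flake.nix -- per-project dev shell
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.go pkgs.gopls pkgs.ripgrep ];
      };
    };
}
```

With a one-line `.envrc` of `use flake`, direnv drops you into this shell on `cd`, which is the "magical" part described above.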
Brian_K_White 16 hours ago [-]
"This problem has already been solved in Canada. Just move to Canada."
Make is generic. Nix is not.
Before I even look at the actual code I already know that it is something I can use immediately on my existing system, no matter what that happens to be, right now, without changing anything else.
It doesn't matter how great nix is because it's not alpine or xubuntu or suse or freebsd or sco osr5 or solaris or cygwin, it's nix.
Even if you're only talking about nix the package manager, or nix the language, and not nix the os, it actually still applies because Make is everywhere and nix is not.
Even if this thing has bash-isms and gnumake-isms, I bet with minimal grief I can still use it on a Xenix system that doesn't even have a compiler (so no building nix) but does have ksh93 and make, even without leaning on the old versions of actual gnu make and bash that do exist.
fer 14 hours ago [-]
>Make is generic. Nix is not. Before I even look at the actual code I already know that it is something I can use immediately on my existing system.
Hard disagree on this one. It's a series of Makefiles that depend on apt (or whatever package manager you choose), so for any heterogeneous environment it's going to constantly be an uphill battle to keep working in terms of package naming, existence of dependencies, etc. You'd find yourself reinventing Ansible, but worse.
> It doesn't matter how great nix is because it's not alpine or xubuntu or suse or freebsd or sco osr5 or solaris or cygwin, it's nix.
Nix runs fine on most (all?) modern Linux distros, macOS, even WSL, and there are workarounds to make it run on BSD, though I admittedly haven't tested those.
> Even if this thing has bash-isms and gnumake-isms, I bet with minimal grief I can still use it on a Xenix system that doesn't even have a compiler (so no building nix) but does have ksh93 and make, even without leaning on the old versions of actual gnu make and bash that do exist.
Use it on Xenix (which last shipped in 1991) to do what? The package management was tarballs and compiling. Instead of reinventing Ansible, you'd be reinventing pkgsrc. Not sure what your point here is.
landdate 19 hours ago [-]
Alternatively, you can use the guix package manager. See here: https://guix.gnu.org/
Configuration is in scheme (guile) so that may be a turn off though.
0xbadcafebee 20 hours ago [-]
I codify all my AI install/setup/running junk (https://codeberg.org/mutablecc/ai-agent-coding) with Makefiles. You can make DRY Makefiles real easy, reuse them, override settings, without the fancy stuff in the author's post. The more you build up a reusable Makefile, the easier everything gets. But at the same time: don't be afraid to write a one-off, three-line, do-almost-nothing Makefile. If it's so simple it seems stupid, it's probably just right.
The main difference is that I initially only needed a mechanism to check whether my "Manually-Installed or Source-Compiled" (MISC) packages have updates, but now it also supports installing/upgrading too.
In other words, things I am forced to do by hand outside of a package manager, I now only do by hand once, save it as an 'install' script, and then incorporate it into this system for future use and to check for updates. Pretty happy with it.
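As a sketch of the DRY pattern described above (the file names, the `install` rule, and the update-check script are illustrative, not from the linked repo):

```make
# Makefile -- small and reusable; ?= lets callers override settings
PREFIX ?= $(HOME)/.local          # override with: make install PREFIX=/opt

-include common.mk                # shared rules pulled in if present

.PHONY: install check-updates
install:
	install -Dm755 mytool $(PREFIX)/bin/mytool

check-updates:                    # the MISC-style "is there a newer version?" hook
	./scripts/check-version.sh mytool
```

The `?=` assignments and an optional shared include are what make the same skeleton reusable across projects without any of the fancy stuff.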
weitzj 12 hours ago [-]
I tried Nix. Worked. Then I forgot the syntax.
Therefore my middle ground is devbox.
It is like Python virtualenv but backed by Nix. So I have a devbox.json file to define packages, and devbox does the Nix part for me.
I get a macOS and Linux setup from this, for both aarch64 and x86.
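For reference, the whole contract is one small file; a sketch (the package pins are illustrative):

```json
{
  "packages": ["go@1.22", "ripgrep@latest", "nodejs@22"],
  "shell": {
    "init_hook": ["echo 'devbox environment ready'"]
  }
}
```

`devbox shell` reads this and has Nix materialize exactly those packages, on macOS or Linux, aarch64 or x86, without you ever writing Nix syntax.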
stevekemp 19 hours ago [-]
I like the way that golang supports the use of tools in the go.mod file.
Something like:
go get -tool github.com/golangci/golangci-lint/cmd/golangci-lint@v1.64.4
And you're ready to go, with everything confined, venv-style, to the directory.
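After that command, the go.mod gains a `tool` directive (Go 1.24+); roughly like this, with an illustrative module path:

```
// go.mod (shown for illustration; not a .go source file)
module example.com/myproject

go 1.24

tool github.com/golangci/golangci-lint/cmd/golangci-lint
```

`go tool golangci-lint run` then executes the version pinned in go.mod, so every contributor gets the same linter.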
Daegalus 11 hours ago [-]
You can literally use Brew for a lot of things. Linux brew is pretty good these days.
If you want this Makefile way, use Justfiles, which are modernized Makefiles, so you don't have 30 years of cruft being injected into it.
But at the end of the day, Mise exists and is directly targeted at this.
I personally use Brew bundle files + Dotter for my home/tools management. Brew bundles also support Flatpaks these days, so I can install those too.
embedding-shape 11 hours ago [-]
> Linux brew is pretty good these days.
What's the use case for homebrew on Linux these days? Most distributions have their own package manager, which you almost always end up using anyways, so already you're adding an extra package manager. Besides that, most of the community isn't using homebrew, so clearly won't have "more" packages, and the packages it'll have will be less reviewed than the ones in your distribution. I can't really see any point to use homebrew on Linux except "I used to use it on macOS", which doesn't feel that strong of a use case really.
Daegalus 10 hours ago [-]
I use an immutable distribution; I don't use the package manager, as it's the antithesis of the concept. The current most popular immutable distros (Bluefin, Bazzite, Aurora, etc.) use Brew for CLI tools, or even for some apps that are tricky to get full functionality from as Flatpaks but can't do a system install.
Sooo, I don't have a system package manager to use to add more packages, not without building my own image on top of Bluefin/Bazzite.
Also, all the packages on Brew are fairly well tested; while mostly on macOS, they officially release Linux prebuilts that get tested equally. Brew has been around for ages.
And I haven't used macOS for 8-9 years, and only for a small stint back then. Not long enough for it to shape my habits. Per the official stats (https://formulae.brew.sh/analytics/os-version/90d/), Ubuntu makes up ~20% of Brew usage, and the Universal Blue family is about 2% and growing.
There is absolutely a use case for it, and it's just as good if not better: most tools are more likely to be statically built, so you don't have a giant dependency mess and other nonsense to deal with. It's cleaner.
On an immutable distro, it's a lot of Flatpak, AppImage, and Brew/Mise/etc. Layering packages is greatly discouraged, and as the ecosystem moves toward bootc images over OSTree ones, the option will go away entirely. You either build a custom image with your custom stuff layered on yourself (there are templates and GitHub CI setups to help with it), or you use other package managers.
Another win with Brew is that I can reproduce my tools and environment quickly and don't have to deal with distro quirks. Brew works the same on almost every distro: same paths, same behavior. It even offers Brewfiles to let me specify my setup.
I recently switched jobs and had my work setup installed and created within minutes of booting into a fresh install, and I was working shortly after.
embedding-shape 10 hours ago [-]
> I use an immutable distribution, i dont use the package manager as it is antithesis to the concept
I don't think "immutable distribution" typically means "can't install applications"; it's more about the system files than absolutely everything, similar to how "functional programming" doesn't mean "no side effects allowed anywhere", because then you couldn't draw to the screen. All those OSes have included utilities for installing packages ("programs"), otherwise they wouldn't be very useful.
Besides that, even going by your own understanding, if you install Homebrew on an immutable distribution, doesn't that mean Homebrew is the "antithesis of the concept" too, as much as any other package/program manager?
Daegalus 10 hours ago [-]
No, because installing something in userspace is different from the system. Most package managers install to system locations, like /usr and so on. Homebrew installs into /home/linuxbrew/.linuxbrew and is usable from userspace.
Immutable might not be the best term; it's more atomic. And while you can install packages with rpm-ostree, for example, they get layered on top, and the more packages you layer, the more likely an upgrade or a rebase fails. Hence you build a custom image, or adopt a userspace solution.
The method to install applications is, again, userspace-focused: for GUI apps it's Flatpak and AppImage. For CLI tools it can be AppImage, but otherwise it's Mise, Brew, asdf, or even Nix.
The antithesis is installing applications onto the immutable portion of the system, or messing with it in any way (by layering packages on top of the immutable parts). Installing into userspace is the preferred method. So these "immutable distributions" do have ways to install "packages (programs)": Flatpak, Brew, AppImage, etc., not the system package manager.
It is why they are moving away from even having layering as an option.
embedding-shape 7 hours ago [-]
> No, because installing something in the userspace is different from system. Most package managers install to system locations, like /usr and so on. Homebrew installs into /home/linuxbrew/.linuxbrew and is useable from userspace.
I see, so it's the default settings of the package managers you don't like? And prefer to use homebrew with sane defaults, rather than configuring your package manager to install things somewhere else?
I guess I was confused about the whole "immutable and no package manager" but then also "immutable and yes, other package manager" thing, but if it makes sense for you, I'm happy you found a setup that works for you :)
I'm surprised this is so far down; it has changed so much of my setup. I've swapped from pretty much all managers to this and it's been a life-changer.
tmarice 21 hours ago [-]
I’ve been using devenv.sh for the last year for this, and never been happier.
axegon_ 19 hours ago [-]
I used to do that, but there are a few catches. As much as I brush off people who use any OS other than Linux, there comes a time when you have to do something on another operating system. A lesson I learned the hard way: Make on Windows sucks royally. While I agree with the general idea, and I also tend to be conservative about new technologies (even more so with all the slop-coding lately), just [1] is now a very mature and well-thought-out alternative.
[1] https://github.com/casey/just
I love this for two reasons. 1) it's using make. I love make. I am a noob and only use it on its surface but I'm a huge fan. and 2) kinda related to 1) I learned a ton about make from this very project.
Kudos to you!
bargainbin 22 hours ago [-]
If you haven't tried it, I highly recommend Mise (https://mise.jdx.dev/). It manages everything at the user level, so it's not as "all-encompassing" as Nix and is readily compatible with immutable distros.
Your solution is akin to putting your dotfiles in the code repo, which is going to cause issues with languages with poor version compatibility (such as node and python) when switching between old projects.
Also, bold of you to assume developers know make and bash just because they’re using Linux!
ManuelKiessling 21 hours ago [-]
These days, all dev tooling of my projects lives behind mise tasks, and the runtime for my projects is Docker.
This means that getting a project in shape for development on a new system looks like this:
- clone project
- `mise run setup`
I have zero dev tools on my host; projects are 100% self-contained. Pure bliss.
See https://github.com/dx-tooling/sitebuilder-webapp for an example.
I cannot endorse mise more highly. I commit it to my repos to make sure every engineer has the same environment. I use it in CI for consistency there as well. I keep all commands that would normally be documented in a readme as mise tasks. I use mise to load the environment, independent of language specific tools like dotenv. I use a gitignored mise.local to put real creds into the environment for testing.
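A minimal sketch of that layout (the tool versions and task bodies are illustrative, not from the commenter's repos):

```toml
# mise.toml -- checked into the repo so every engineer gets the same env
[tools]
node = "22"
"npm:prettier" = "latest"

[env]
_.file = ".env"              # dotenv loading, no language-specific tool needed

[tasks.setup]
description = "install toolchain and project deps"
run = "npm ci"

[tasks.test]
run = "npm test"
```

`mise run setup` and `mise run test` replace the command lists that would otherwise live in a README, and a gitignored local config file holds the real credentials mentioned above.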
zelphirkalt 19 hours ago [-]
Question about Mise: Does it manage checksums or a lock file per environment somewhere? I scrolled through the getting started page and didn't see anything at first glance.
ricardobeat 18 hours ago [-]
Releases are signed, but lockfiles are not commonly used for this purpose. For your home env you'll usually want the latest version of every tool.
When installing tools, or via mise.toml, you can define version ranges with the precision you'd like - "3" / "3.1" / "3.1.2".
figmert 17 hours ago [-]
Mise supports lock files but also validates checksums when possible.
0xbadcafebee 20 hours ago [-]
I use mise, but its conclusion that everybody needs to write an aqua plugin now is annoying. They need to make plugin-making a lot easier.
saint_yossarian 19 hours ago [-]
What conclusion do you mean? Aqua is just one of the many backends it supports.
For example there's also the GitHub backend which lets you install binaries from releases, no plugin needed at all.
0xbadcafebee 11 hours ago [-]
https://github.com/mise-plugins <-- First they say "Try to get your tool into aqua or see if it can be installed with the github backend, then it may be added to the mise registry", and then later they say "The rest of this doc is outdated and does not reflect the current state of preferring aqua/ubi.".
Overall there's too many ways to install things and it's not easy to add any of them. Asdf plugins were easy, but insecure (which could be fixed, but whatever). Everything else requires more research because it's more technical.
saint_yossarian 8 hours ago [-]
> it's not easy to add any of them
For most of them there's nothing to add, though: you simply publish tools on GitHub/Cargo/etc. and mise will know how to install them. https://mise.jdx.dev/registry.html#backends has a bit more current info.
Only if they have a plugin that describes how to install them. Many popular tools are much more complex to install and set up than just downloading a binary and making it executable. For those you need to create a plugin for mise to be able to install them. Luckily, very often some other generous person has gone through all the trouble of learning how to make the plugin, going to the official repos, making a PR, and finally getting it merged. But if somebody hasn't done that already, it's painful (more painful than, say, an asdf plugin). It depends on the language, on the tool and system requirements, etc. Overall it's kind of a mess. Mise leaves you with the trouble of figuring all that out, rather than making some kind of convenience function to get the process started easily.