A KISS deployment tool to keep your NixOS fleet (servers & workstations) up to date.
The name was chosen because bento boxes are good, they convey the idea of "ready to use", and the name doesn't contain "nix".
There is currently no other tool to manage a fleet of NixOS systems that may be workstations anywhere in the world or servers in a datacenter, whether they use flakes or not.
- secure 🛡️: each client can only access its own configuration files (SSH authentication and SFTP chroot)
- efficient 🏂🏾: configurations can be built on the central management server to serve binary packages if it is used as a substituter by the clients
- organized 💼: system administrators have all the configuration files in one repository for easy management
- peace of mind 🧘🏿: configurations validity can be verified locally by system administrators
- smart 💡: secrets (arbitrary files) can (soon) be deployed without storing them in the nix store
- robustness in mind 🦾: clients just need to be able to reach a remote SSH server, and there are many ways to bypass firewalls (corkscrew, VPN, Tor hidden service, I2P, ...)
- extensible 🧰 🪡: you can change every component, if you prefer using GitHub repositories to fetch configuration files instead of a remote sftp server, you can change it
- for all NixOS 💻🏭📱: it can be used for remote workstations, smartphones running NixOS, or servers in a datacenter
This setup needs a machine that is online most of the time. The NixOS systems (clients) will regularly check for updates on this machine over SSH.
A public IP is not absolutely required: you can use a Tor hidden service, I2P tunnels, a VPN or whatever floats your boat, as long as it lets the clients connect over SSH.
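To make this concrete, here is a hedged sketch of reaching the central server through a Tor hidden service from a client, using plain NixOS options. The host alias, the `.onion` address, the user and the exact setup are placeholders for illustration; they are not part of bento.

```nix
{ pkgs, ... }:
{
  # Hypothetical illustration: run a local Tor client so the machine can
  # reach a hidden service (SOCKS proxy on 127.0.0.1:9050).
  services.tor = {
    enable = true;
    client.enable = true;
  };

  # Teach the system-wide ssh client (used as root on the bento client)
  # how to reach the management server. Alias, .onion address and user
  # below are placeholders.
  programs.ssh.extraConfig = ''
    Host bento-server
      HostName replacewithyouronionaddress.onion
      User t470
      ProxyCommand ${pkgs.netcat-openbsd}/bin/nc -X 5 -x 127.0.0.1:9050 %h %p
  '';
}
```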
The SSH server holds all the configuration files for the machines. When you make a change, you run a script that copies the configuration files into a new directory used by each client as an SFTP chroot. Each client regularly polls its dedicated SFTP directory for changes; if something changed, it downloads all the configuration files and runs nixos-rebuild. Bento automatically detects whether the configuration uses flakes or not.
Bento is just a framework and a few scripts to make this happen; ideally these would be a single command in `$PATH` instead of scripts in your configuration directory:
- `populate_chroot.sh`: creates a copy of the configuration files of each host found in `hosts/` into the corresponding chroot directory (default is `/home/chroot/$machine/`)
- `fleet.nix`: a file that must be included in the SSH host server configuration; it declares the hosts with their names and SSH keys, creates the chroots and enables SFTP for each of them. You basically need to update this file when a key changes or a host is added/removed (a sketch of the kind of sshd setup this implies follows this list)
- `local_build.sh`: iterates over each host configuration to run `dry-build`, but you can pass `build` as a parameter; this ensures each configuration works, and if you use this system as a substituter you can build the configurations there to relieve the clients from compiling everything themselves
- `utils/bento.nix`: a module that has to be imported into each host configuration; it adds a systemd timer triggering a service that looks for changes and potentially triggers a rebuild
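To give an idea of what `fleet.nix` has to set up on the server, here is a hedged sketch of a per-host SFTP chroot written with standard NixOS options. This is not the actual content of `fleet.nix`, only an illustration of the pattern, with `t470` as the example host and a placeholder key.

```nix
{
  # One dedicated user per managed host, authenticated with that host's key.
  users.users.t470 = {
    isNormalUser = true;
    openssh.authorizedKeys.keys = [ "ssh-ed25519 AAAA... root@t470" ];
  };

  # Lock the user into its own chroot and restrict it to SFTP, so it can
  # only read the configuration files prepared for it.
  services.openssh.extraConfig = ''
    Match User t470
      ChrootDirectory /home/chroot/t470
      ForceCommand internal-sftp
      AllowTcpForwarding no
      X11Forwarding no
  '';
}
```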
On the client, the system configuration is stored in `/var/bento/`, a directory that also contains the scripts `update.sh` and `bootstrap.sh` used to look for changes and trigger a rebuild.
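The client-side mechanics boil down to a systemd timer firing a oneshot service. The sketch below shows that pattern only; it is not the actual `utils/bento.nix`. The unit names match the ones mentioned later in this document, but the polling interval and the call to `update.sh` are assumptions.

```nix
{ pkgs, ... }:
{
  systemd.services.bento-upgrade = {
    description = "Check the bento SFTP chroot for changes and rebuild";
    serviceConfig.Type = "oneshot";
    path = [ pkgs.openssh pkgs.nix pkgs.nixos-rebuild ];
    # update.sh fetches the configuration over SFTP and runs nixos-rebuild
    # when something changed (assumed behaviour, see the bento repository).
    script = "/var/bento/update.sh";
  };

  systemd.timers.bento-upgrade = {
    wantedBy = [ "timers.target" ];
    timerConfig = {
      OnBootSec = "5min";          # first check shortly after boot (assumption)
      OnUnitInactiveSec = "10min"; # polling interval (assumption)
    };
  };
}
```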
There is a diagram showing the design pattern of bento.
Here is the typical directory layout when using bento with three hosts, `router`, `nas` and `t470`:
├── fleet.nix
├── hosts
│ ├── router
│ │ ├── configuration.nix
│ │ ├── hardware-configuration.nix
│ │ └── utils -> ../../utils/
│ ├── nas
│ │ ├── configuration.nix
│ │ ├── flake.lock
│ │ ├── flake.nix
│ │ ├── hardware-configuration.nix
│ │ └── utils -> ../../utils/
│ └── t470
│ ├── configuration.nix
│ ├── default-spec.nix
│ ├── flake.lock
│ ├── flake.nix
│ ├── hardware-configuration.nix
│ ├── home.nix
│ ├── minecraft.nix
│ ├── nfs.nix
│ ├── nvidia.nix
│ └── utils -> ../../utils/
├── local_build.sh
├── populate_chroot.sh
├── README.md
└── utils
└── bento.nix
- make configuration changes per host in `hosts/`, or in a global include file in `utils` (you can rename it as you wish)
- OPTIONAL: run `./local_build.sh` to check that the configurations are valid
- OPTIONAL: run `./local_build.sh build` to build the systems locally and make them available in the store; this is useful if you want to serve the results as a substituter (requires configuration on each client, sketched after this list)
- run `./populate_chroot.sh`
- the hosts will pick up the changes and run a rebuild
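For the optional `./local_build.sh build` step, the clients only benefit if they are told to use the management server as a binary cache. Here is a hedged sketch of that client-side configuration, assuming the server exposes its store with something like nix-serve on port 5000; the hostname and public key are placeholders.

```nix
{
  nix.settings = {
    # The management server exposing its /nix/store, for example with nix-serve.
    substituters = [ "http://bento-server.example:5000" ];
    # The server's signing public key (placeholder value).
    trusted-public-keys = [ "bento-server.example:c2lnbmluZy1rZXktcGxhY2Vob2xkZXI=" ];
  };
}
```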
Here are the steps to add a server named `kikimora` to bento:
- generate an SSH key on `kikimora` for the root user
- add kikimora's public key to bento's `fleet.nix` file
- reconfigure the SSH host to allow kikimora's key (its configuration should include the `fleet.nix` file)
- copy kikimora's config (usually `/etc/nixos/`) into bento's `hosts/kikimora/` directory
- add `utils/bento.nix` to its config: in `hosts/kikimora` run `ln -s ../../utils .` and add `./utils/bento.nix` to the `imports` list (see the excerpt after this list)
- check kikimora's config locally with `./local_build.sh`; you can check only kikimora with `env NAME=kikimora ./local_build.sh`
- populate the chroot with `sudo ./populate_chroot.sh` to copy the files into `/home/chroot/kikimora/`
- run the bootstrap script on kikimora to switch to the new configuration fetched over SFTP and to enable the timer polling for upgrades
- you can read bento's log with `journalctl -u bento-upgrade.service` and see the next timer run with `systemctl status bento-upgrade.timer`
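For the `utils/bento.nix` step above, the change to kikimora's configuration is just one extra entry in `imports`; the other entry shown here is only an example.

```nix
# hosts/kikimora/configuration.nix (excerpt)
{
  imports = [
    ./hardware-configuration.nix
    ./utils/bento.nix   # through the symlink created with `ln -s ../../utils .`
  ];
}
```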
Here are the steps to deploy a change to a host managed with bento:
- edit its configuration files in `hosts/the_host_name/something.nix` to make the changes
- OPTIONAL: run `./local_build.sh` to check that the configurations are valid
- run `sudo ./populate_chroot.sh`
- wait for that system's timer to trigger the update
If you don't want to wait for the timer, you can ssh into the machine and run `systemctl start bento-upgrade.service`.
Planned improvements and ideas:
- being able to create a podman-compatible NixOS image that would be used as the chroot server, to avoid reconfiguring the host and using sudo to distribute files
- auto rollback, like deploy-rs' "magic rollback", in case connectivity is lost after an upgrade
- `local_build.sh` and `populate_chroot.sh` should be a single command installed in `$PATH`
- upgrades could be triggered by the user by accessing a local socket, like opening a web page in a web browser; returning some output would be even better
- a way to tell a client (when using flakes) to update its flake inputs on every run, even if the configuration did not change, to keep it up to date
- a systray info widget could tell the user an upgrade has been done
- DONE: updates should add a log file in the sftp chroot, whether successful or not
- the sftp server could be on another server than the one holding the configuration files
- provide more useful modules in the utility nix file (for instance, automatically use the host as a binary cache)
- keep local information about how to ssh to each client to ease triggering a rebuild (like a file containing the ssh command line)