Setting up Incus with ZFS on NixOS

In this guide I set up a NixOS system with a ZFS root and bridged networking for use as an Incus container/VM host.

I recently picked up a used Lenovo m910q tiny PC and decided I wanted to try out Incus for container and virtual machine management.

The m910q came with an Intel i5-6500T CPU, 8GiB of RAM, and a 128GB NVMe SSD, which should be good for a few lightweight containers/VMs. I stuck another 8GiB of RAM in the machine and started to investigate options for the host OS.

I loaded Debian on the machine and started to play with Incus. Things were good (Debian always is 😄) but not perfect. The machine only had a single drive in it, and I wanted to run ZFS without faffing around with DKMS.

I very briefly considered Ubuntu as it has ZFS support baked in, but decided on NixOS for its ease of root-on-ZFS installation and declarative nature.

After setting things up I decided to write this guide on how I did it. Mostly for my own reference, but if it's of interest to someone else then great!

If you follow this guide, you should end up with:

  • a host system running NixOS on ZFS,
  • a bridge to the physical network plus an internal/NATted bridge managed by Incus,
  • Incus pre-seeded with profiles for both bridges,
  • a system that's ready to go as soon as it reboots from installation.

Partitioning/setting up the drive

Boot your machine from the NixOS installation media (don't forget to switch off Secure Boot in the BIOS).

Ignore the installer that pops up and open a terminal instead. We'll want to be root, so run the command sudo su -

In the m910q there's a single NVMe drive, /dev/nvme0n1. If you have more drives (or other drive types), make sure you pick the right one: the next steps will erase all data from the drive you specify!
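If you're unsure which device is which, lsblk will list each drive with its size and model:

lsblk -o NAME,SIZE,MODEL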

Now let's partition the drive.

parted /dev/nvme0n1 -- mklabel gpt
parted /dev/nvme0n1 -- mkpart ESP fat32 1MB 1GB
parted /dev/nvme0n1 -- mkpart zfs "" 1GB -4GB
parted /dev/nvme0n1 -- mkpart swap linux-swap -4GB 100%
parted /dev/nvme0n1 -- set 1 esp on

This creates the partition table for the drive, creates EFI, ZFS and Swap partitions, then sets the EFI partition as bootable.
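If you'd like to double-check the layout before going further, parted can print it back:

parted /dev/nvme0n1 -- print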

Now we set up a zpool on the ZFS partition.

zpool create -O compression=on -O mountpoint=none -O xattr=sa -O acltype=posixacl -o ashift=12 tank /dev/nvme0n1p2
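As a quick sanity check that the pool was created with the intended properties:

zpool get ashift tank
zfs get compression,xattr,acltype tank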

Next up is creating some datasets. The first four are for the NixOS system; the final one, tank/incus, keeps its default (non-legacy) mountpoint because Incus will manage it itself.

zfs create -o mountpoint=legacy tank/root
zfs create -o mountpoint=legacy tank/nix
zfs create -o mountpoint=legacy tank/var
zfs create -o mountpoint=legacy tank/home
zfs create tank/incus

Now we mount the datasets necessary for installation:

mount -t zfs tank/root /mnt
mkdir /mnt/nix /mnt/var /mnt/home
mount -t zfs tank/nix /mnt/nix
mount -t zfs tank/var /mnt/var
mount -t zfs tank/home /mnt/home
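You can confirm everything is mounted where expected with:

findmnt -R /mnt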

Format the boot partition, then mount it:

mkfs.fat -F 32 -n boot /dev/nvme0n1p1
mkdir -p /mnt/boot
mount -o umask=077 /dev/disk/by-label/boot /mnt/boot

and finally, format and enable the swap partition:

mkswap -L swap /dev/nvme0n1p3
swapon /dev/nvme0n1p3

That's the drive setup. Now on to the configuration.nix file.

Creating the configuration.nix file

Run

nixos-generate-config --root /mnt
rm /mnt/etc/nixos/configuration.nix
nano -w /mnt/etc/nixos/configuration.nix

Now copy and paste the following into the empty configuration.nix file:

{ config, pkgs, ... }:

{
  imports =
    [ # Include the results of the hardware scan.
      ./hardware-configuration.nix
    ];

  # Bootloader.

  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;
  boot.kernelModules = [ "drivetemp" ];
  boot.supportedFilesystems = [ "zfs" ];
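  # KVM tweaks: allow nested virtualisation and quieten unhandled-MSR log noise.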
  boot.extraModprobeConfig = ''
      options kvm_intel nested=1 enable_shadow_vmcs=1 enable_apicv=1 ept=1
      options kvm ignore_msrs=1 report_ignored_msrs=0
    '';

  fileSystems."/" = {
    device = "tank/root";
    fsType = "zfs";
    };
  fileSystems."/nix" = {
    device = "tank/nix";
    fsType = "zfs";
    };
  fileSystems."/var" = {
    device = "tank/var";
    fsType = "zfs";
    };
  fileSystems."/home" = {
    device = "tank/home";
    fsType = "zfs";
    };

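  # Compressed swap in RAM, on top of the on-disk swap partition.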
  zramSwap = {
    enable = true;
    algorithm = "zstd";
  };

  # Networking
  networking = {
    hostId = "330228af";
    hostName = "blahblahblah";
    tempAddresses = "disabled";
    nftables.enable = true;
    useDHCP = false;
    bridges = {
      externalbr0 = {
        interfaces = [ "enp0s31f6" ];
      };
    };
    interfaces = {
      externalbr0 = {
        useDHCP = true;
        macAddress = "87:11:03:5a:2f:58";
      };
      tailscale0 = {
        useDHCP = false;
      };
    };
    # Firewall
    firewall = {
      enable = true;
      # Allow all traffic from the tailnet & Incus' internal (NATted) bridge
      trustedInterfaces = [
        "tailscale0"
        "internalbr0"
      ];
      allowedTCPPorts = [
        8443
      ];
      allowedUDPPorts = [
        config.services.tailscale.port
      ];
    };
  };

  # System Settings.
  time.timeZone = "Europe/London";
  console.keyMap = "uk";
  i18n.defaultLocale = "en_GB.UTF-8";
  system.stateVersion = "24.11";
  system.autoUpgrade = {
    enable = true;
    dates = "weekly";
    allowReboot = true;
  };

  environment.systemPackages = with pkgs; [
    lm_sensors
    tmux
  ];

  # Services
  services = {

    # ZFS
    zfs = {
      autoScrub.enable = true;
      trim.enable = true;
    };

    # Firmware updates
    fwupd.enable = true;

    # Tailscale
    tailscale.enable = true;

    # OpenSSH
    openssh = {
      enable = true;
      settings = {
        PasswordAuthentication = false;
        KbdInteractiveAuthentication = false;
        PermitRootLogin = "no";
      };
    };
  };

  # Virtualisation

  virtualisation = {
    # Incus (Virtual Machine and System Container management)
    incus = {
      enable = true;
      ui.enable = true;
      package = pkgs.incus-lts; # use 'pkgs.incus' for feature releases
      preseed = {
        networks = [
          {
            name = "internalbr0";
            type = "bridge";
            description = "Internal/NATted bridge";
            config = {
              "ipv4.address" = "auto";
              "ipv4.nat" = "true";
              "ipv6.address" = "auto";
              "ipv6.nat" = "true";
            };
          }
        ];
        profiles = [
          {
            name = "default";
            description = "Default Incus Profile";
            devices = {
              eth0 = {
                name = "eth0";
                network = "internalbr0";
                type = "nic";
              };
              root = {
                path = "/";
                pool = "default";
                type = "disk";
              };
            };
          }
          {
            name = "bridged";
            description = "Instances bridged to LAN";
            devices = {
              eth0 = {
                name = "eth0";
                nictype = "bridged";
                parent = "externalbr0";
                type = "nic";
              };
              root = {
                path = "/";
                pool = "default";
                type = "disk";
              };
            };
          }
        ];
        storage_pools = [
          {
            name = "default";
            driver = "zfs";
            config = {
              source = "tank/incus";
            };
          }
        ];
      };
    };
  };

  # Users. Don't forget to set a password with "passwd"!
  users.users.USERNAME = {
    isNormalUser = true;
    description = "FIRSTNAME LASTNAME";
    extraGroups = [ "wheel" "incus-admin" ];
    openssh.authorizedKeys.keys = [
      "ssh-ed25519 AAAAC... 1st@SSHKEY"
      "ssh-ed25519 AAAAC... 2nd@SSHKEY"
      "ssh-ed25519 AAAAC... 3rd@SSHKEY"
    ];
  };

}

A few things you will want to adjust

Under #System Settings:

  • time.timeZone
  • console.keyMap
  • defaultLocale

Under #Users:

  • USERNAME
  • Add your SSH keys

Under #Networking:

  • hostId
  • hostName
  • enp0s31f6
  • macAddress

hostId should be unique among your machines. You can generate one with head -c 8 /etc/machine-id.

You can generate a random MAC address with any online generator, or in the shell; just make sure it's a locally administered unicast address.
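For example, a quick bash one-liner (the 02: prefix marks the address as locally administered and unicast):

printf '02:%02x:%02x:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))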

enp0s31f6 is the ethernet device. If you're not using a Lenovo m910q then it'll likely be different. Running ip link show will give you the correct value.

If your zpool is called something other than tank then you'll also need to change it in the Incus storage section of configuration.nix.

The configuration.nix above uses the LTS version of Incus. If you'd prefer the feature/development version then change package = pkgs.incus-lts; to package = pkgs.incus; in configuration.nix.

Installation

That's the configuration done. Let's install.

nixos-install

After a short while you'll be asked for a root password. Enter it.

Now set the password for the user you created in the configuration.nix file:

nixos-enter --root /mnt -c 'passwd USERNAME'

and reboot:

reboot

Your new Incus node should now come up. It'll be on a different IP address due to the bridge using the new MAC address specified in configuration.nix. Set up an IP reservation on your router for that MAC.
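If you can't spot it on the router, log in at the console and the bridge's new address will show up with:

ip -4 addr show externalbr0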

Let's run some instances!

Your Incus host should be up and running; now let's create some containers:

incus launch images:debian/bookworm test1

Incus will pull the container image and, after a short while, launch it.
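To poke around inside it, incus exec will give you a shell:

incus exec test1 -- bash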

incus list

should give you some info about your container. Notice that it's NATted from your LAN. If you want a container that's accessible from other machines on your LAN, you'll need to use the bridged profile:

incus launch images:debian/bookworm test2 --profile bridged

Another incus list should show that the test2 container has IP addresses on your LAN, acquired via DHCP.

If you want to launch VMs rather than containers, tack --vm onto the end of the command:

incus launch images:debian/bookworm test3 --profile bridged --vm

If you opted for the feature/development version of Incus then OCI (Podman/Docker-style) App containers are also supported. Add an OCI container registry like so:

incus remote add docker https://docker.io --protocol=oci

then launch an App container:

incus launch docker:httpd test4 --profile bridged

More early steps with Incus can be found in the getting started guide.

Final touches

If you want to enable Tailscale, run sudo tailscale up and authenticate.

If you want to access the Incus web UI or API over Tailscale:

incus config set core.https_address <TAILSCALE_ADDRESS>:8443

or, to listen on all interfaces:

incus config set core.https_address :8443
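Clients connecting to the UI/API also need to be trusted. Recent Incus releases can issue a trust token with something like the following, where myclient is just a label of your choosing:

incus config trust add myclient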