Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed

Leveraging ZFS for Domino native or via Proxmox LXC containers

Daniel Nashed – 7 July 2025 07:06:18

ZFS is a very interesting file system for many reasons.
It offers compression, deduplication, snapshots, encryption and a very flexible volume manager.
ZFS is also the file system Proxmox leverages as its strategic choice for local disks.

On Proxmox you can use ZFS in three different ways:


1. Proxmox host level

2. LXC container with a direct mount, without another file system layered inside the container

3. VM with a zvol, which is essentially a raw device provided to the VM to put its own file system on top


The direct mount into the LXC container is a very interesting option which I have tested before.

Bringing ZFS into the picture makes this even more compelling.


Of course this option only makes sense if you use ZFS natively on Proxmox.

If you are running a larger Proxmox cluster, your storage is likely to use other options like Ceph.

But the following is also intended as food for thought to look into your own optimized storage.


One way that always works is to provide ZFS storage to a server over NFS.

NFS support is built into ZFS and allows access to ZFS datasets over the network.


A simple configuration could look like the following.

This scenario works with any machine that supports native ZFS.

It could be a Linux machine or a Proxmox host. Or an appliance like TrueNAS.



-- NFS Server --


Server Side on Ubuntu


Install packages for ZFS and NFS


apt install zfsutils-linux nfs-kernel-server


Create a pool and a dataset with the right attributes for backup


zpool create tank /dev/sdb

zfs create -o mountpoint=/local/backup tank/backup

zfs set atime=off tank/backup

zfs set dedup=on tank/backup

zfs set recordsize=16K tank/backup



Enable NFS read/write sharing for the dataset


zfs set sharenfs="rw=@192.168.96.42/32" tank/backup
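For reference, setting sharenfs lets ZFS manage the NFS export itself. A hand-maintained /etc/exports entry with roughly the same effect (client IP taken from the example above) would look like this:

```
/local/backup 192.168.96.42/32(rw,no_subtree_check)
```

After editing /etc/exports manually you would run exportfs -ra, a step the sharenfs property takes care of automatically.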



Client Side on Ubuntu


Install package for NFS client


apt install nfs-common


Create a directory and mount the NFS volume (leaving out special mount options like noatime, etc.)


mkdir -p /local/backup

chown notes:notes /local/backup

mount -t nfs 192.168.96.42:/local/backup /local/backup
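To survive a reboot, the client mount can also go into /etc/fstab. This is just a sketch with common options (same IP and paths as above; adjust the options to your needs):

```
192.168.96.42:/local/backup  /local/backup  nfs  defaults,noatime,_netdev  0  0
```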



The resulting performance for a Domino backup of larger NSF files:


Data Rate: 521.2 MB/sec



-- Proxmox LXC container mount --


If you are running an LXC container on Proxmox, you can create a ZFS dataset and directly mount it into the LXC container without any additional overhead.


Create a ZFS dataset with the right options


zfs create rpool/backup

zfs set atime=off rpool/backup

zfs set compression=lz4 rpool/backup

zfs set dedup=on rpool/backup

zfs set recordsize=16K rpool/backup

chown 101000:101000 /rpool/backup


Modify the settings of your LXC container, for example /etc/pve/lxc/101.conf.

Append the following type of line and restart your LXC container


lxc.mount.entry = /rpool/backup local/backup none bind,create=dir 0 0
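The chown 101000:101000 above matches the default uid mapping of unprivileged LXC containers: container uids are shifted by 100000 on the host. Assuming the notes user has uid/gid 1000 inside the container, the matching host owner can be derived like this:

```shell
# Default unprivileged LXC idmap: container uid 0 maps to host uid 100000
# (see /etc/subuid on the Proxmox host).
BASE=100000       # default host uid offset for unprivileged containers
NOTES_UID=1000    # assumed uid of the notes user inside the container
echo $((BASE + NOTES_UID))   # → 101000, the owner to use on the host
```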


Aligning the recordsize to 16K improves deduplication but reduces the performance a bit.

The performance is still almost double that of native NFS, and the 16K block size is probably the better match.


Data Rate:   955.1 MB/sec  (with  16K recordsize)

Data Rate: 1,430.3 MB/sec  (with 128K recordsize)



Why is the performance better mounting a volume into the LXC container?


With NFS, the network connection is used. Even though this is the local network on the same Proxmox host, it causes overhead and limits the performance to the speed of the network.

Leveraging the underlying ZFS directly avoids this overhead and provides the full performance of the underlying storage.


The fast SSDs used could provide much higher performance without deduplication.

But this is ZFS write performance with deduplication enabled, which is quite impressive.


I have been using my mail file as test data. In real life, with more data, the performance might drop. But this shows the potential of the setup.



Other benefits


Another big advantage of using native ZFS datasets is the very flexible storage allocation.

Mounting a dataset into the container gives you the full flexibility of the pool, as you can see in the example below.


The more I look into Proxmox and LXC containers, the more I would want to consider LXC containers on Proxmox for hosting Domino servers.


---


Example list of volumes:


root@pve:/rpool# zfs list

NAME                           USED  AVAIL  REFER  MOUNTPOINT

rpool                          482G  1.33T   104K  /rpool

rpool/ROOT                    4.46G  1.33T    96K  /rpool/ROOT

rpool/backup                  58.6G  1.33T  58.6G  /rpool/backup

rpool/data/subvol-100-disk-0  13.1G  86.9G  13.1G  /rpool/data/subvol-100-disk-0



Deduplication status of the pool after a couple of backups:


zpool list

NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT

rpool  1.81T   441G  1.38T        -         -     3%    23%  3.29x    ONLINE  -
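The DEDUP column of 3.29x means the pool stores about 3.29 bytes of logical data per allocated byte. A rough back-of-the-envelope check of how much logical data sits behind the 441G allocation (integer math, ratio scaled by 100):

```shell
ALLOC_G=441      # allocated space from zpool list
RATIO_X100=329   # dedup ratio 3.29x, scaled by 100 for integer math
echo "$((ALLOC_G * RATIO_X100 / 100))G logical"   # → 1450G logical
```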

