Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...



High Domino Backup performance with native ZFS storage on Proxmox

Daniel Nashed – 17 March 2024 10:50:56

Introduction


Domino 12+ default native backup is a very easy-to-use option, which also works in Docker containers.

The resulting backup to a file target is always consistent, because delta information is always applied to the backup file.


But a file target raises the challenge that all NSF data is copied to the target file share or disk. Therefore, a de-duplicating target is highly recommended.

I took a detailed look at ZFS in my new local setup to test performance.



Protect your target file copy data


In addition to the file copy operation, the resulting target should always be protected against ransomware attacks.

There are multiple ways to protect the resulting file copy data, which isn't in the scope of this performance write-up.
Valid approaches include taking a snapshot of the resulting ZFS data or copying the consistent NSF data to different backup media.

Any kind of secondary backup would work, because the data is consistent and does not need recovery operations on restore.
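One way to implement such protection is a read-only ZFS snapshot of the backup dataset right after the backup completes. A minimal sketch, using the backup dataset name from the test setup in this post; the snapshot naming scheme is an assumption:

```shell
# Sketch: freeze a completed backup in a read-only ZFS snapshot.
# The snapshot naming scheme is an assumption, not a Domino default.
SNAP="tank/subvol-100-disk-3@backup-$(date +%Y-%m-%d-%H%M)"
zfs snapshot "$SNAP"

# A hold prevents the snapshot from being destroyed until released:
zfs hold keep "$SNAP"

# List all snapshots of the backup dataset:
zfs list -t snapshot -r tank/subvol-100-disk-3
```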



ZFS File System


ZFS is a quite special, enterprise-grade file system offering a couple of very interesting options.


It comes with its own very flexible pool management of native disks and also provides enterprise-grade software RAID.

Besides snapshots, it also supports compression and de-duplication.


In addition to saving space, compression reduces the I/O load at only minimal CPU overhead, which works perfectly with Domino NSF data.
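Compression is enabled per dataset. A minimal sketch, again using the backup dataset from this setup (lz4 is the common low-overhead choice):

```shell
# Enable lightweight lz4 compression on the backup dataset:
zfs set compression=lz4 tank/subvol-100-disk-3

# Check the achieved ratio once data has been written:
zfs get compression,compressratio tank/subvol-100-disk-3
```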


De-duplication, in contrast, isn't a good idea for active Domino NSF data. But it is a perfect match for a Domino backup target.


My ZFS backup performance on my Hetzner server isn't great. With a native setup of ZFS directly on the Proxmox hypervisor, the performance looks dramatically better.



Test Setup


Hardware


Intel NUC Intel(R) Core(TM) i3-8109U CPU @ 3.00GHz (NUC8i3BEH)

Samsung 980 PRO NVMe M.2 SSD 2TB



Software


Proxmox 8.1.4

LXC container with Ubuntu 22.04.4 LTS (Jammy Jellyfish)

Domino 14.0 container


File System


With an LXC container the file system is a ZFS file system directly mounted from the host. I added a root data disk and a backup volume.



Backup Setup and Test


Domino backup comes with a standard configuration. The default target is /local/backup.
The directory inside the container points to the ZFS sub-volume tuned as a backup target.


I increased the backup file copy buffer from 128 KB to 1 MB via a notes.ini parameter: FILE_COPY_BUFFER_SIZE=1048576.

It turned out that for ZFS with a 128 KB record size, this didn't make a big performance difference.
But it is still a recommended parameter to give file copy operations a bigger buffer and optimize file I/O operations on the Linux side.
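Setting the parameter is a one-line change in notes.ini. A sketch, assuming /local/notesdata as the data directory (the usual path in the Domino container image; adjust it to your environment):

```shell
# Sketch: persist the larger 1 MB file copy buffer in notes.ini.
# /local/notesdata is an assumption; adjust the path for your environment.
NOTESINI="${NOTESINI:-/local/notesdata/notes.ini}"

# Append the parameter only if it is not already set:
grep -q '^FILE_COPY_BUFFER_SIZE=' "$NOTESINI" || \
  echo 'FILE_COPY_BUFFER_SIZE=1048576' >> "$NOTESINI"
```

The parameter takes effect after a server restart.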


For testing, I copied my 4.6 GB production mail file to a fresh server and enabled de-duplication on the ZFS target volume:


zfs set atime=off tank/subvol-100-disk-3

zfs set dedup=on  tank/subvol-100-disk-3
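The resulting settings and the achieved ratios can be verified at any time (the de-duplication ratio is reported per pool):

```shell
# Verify the dataset settings:
zfs get atime,dedup,compression,compressratio tank/subvol-100-disk-3

# The overall de-duplication ratio is a pool-level property:
zpool get dedupratio tank
```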



Basic backup performance is up to 500 MB/sec with compression and de-duplication.


The first backup already showed great performance that I didn't expect.

Performance varies a bit. But even 20% less performance would already be beyond anything I have seen in most corporate environments.


load backup

Backup: Domino Database Backup

Backup: Started

Backup: Pruning backups

Backup: BackupNode: [my-domino-server], BackupName: [default], Translog Mode: [CIRCULAR],  Backup Mode: [FULL]

Backup: LastBackupTime: 03/17/2024 09:28:13

Backup: Starting backup for 123 database(s)

Backup:

Backup: --- Backup Summary ---

Backup: Previous Backup  : 03/17/2024 09:28:13

Backup: Start Time       : 03/17/2024 09:29:02

Backup: End Time         : 03/17/2024 09:29:18

Backup: Runtime          : 00:00:15.47

Backup:

Backup: All              :   123

Backup: Processed        :   123

Backup: Excluded         :     0

Backup: Pending Compact  :     0

Backup: Compact Retries  :     0

Backup: Backup Errors    :     0

Backup: Not Modified     :     0

Backup: Delta Files      :     0

Backup: Delta applied    :     0

Backup:

Backup: Total DB Size    :     7.3 GB

Backup: Total DeltaSize  :     0.0 Bytes

Backup: Data Rate        :   496.9 MB/sec

Backup: --- Backup Summary ---

Backup:

Backup: Finished



More test results


Another first backup resulted in 581.0 MB/sec.

A second backup (immediately afterwards) had almost the same speed: 577.3 MB/sec.


Backup performance after a DBMT run varied and could get lower than the 540.3 MB/sec measured here.

It also depends on how much the data changes. I saw performance drop to around 270 MB/sec in some cases for data that changed a lot.



Looking at the de-duplication rates


The first backup resulted in almost zero de-duplication, which was sort of expected with a single mail file.



Second backup


But the second backup already shows the benefit of ZFS de-duplication:


zpool list

NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT

tank  1.81T  90.5G  1.72T        -         -     0%     4%  2.00x    ONLINE  -


Backup after DBMT


DBMT rewrites the whole NSF file and should be run at most once per week.

Maybe even less often when DAOS and NIFNSF are enabled.
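DBMT is usually scheduled via a program document; a manual run from the Domino server console could look like the following sketch (the thread options are assumptions; check the DBMT documentation for your version):

```
load dbmt -compactThreads 4 -ftiThreads 2
```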


But even after a DBMT run the de-duplication rate was quite good in my test. This might vary with real-world changing data.



zpool list

NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT

tank  1.81T  90.7G  1.72T        -         -     0%     4%  2.85x    ONLINE  -


Conclusion and additional thoughts


ZFS can be a very efficient backup target from a performance and cost point of view.

In my case the ZFS target was on the same Proxmox server. But remote Linux hosts running ZFS natively, accessed over NFS or CIFS, would work as well.

The performance will very likely not be the same, because of the overhead added by NFS, the network, etc.


Proxmox might become more important for Domino in the future.

Local Proxmox ZFS storage in combination with Domino clustering can be a valid approach, including backup with the right backup protection strategy.

The ZFS backup volume should be at least a separate ZFS disk pool.
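Creating such a separate pool is straightforward; the device name and dataset name below are placeholders for illustration:

```shell
# Sketch: dedicated backup pool on its own disk (/dev/nvme1n1 is a placeholder).
zpool create backup /dev/nvme1n1

# Backup dataset with the tuning used in this post:
zfs create -o compression=lz4 -o dedup=on -o atime=off backup/domino
```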


Remote ZFS over NFS


In larger production environments the ZFS pool will probably be on a different box, which makes the network the next bottleneck to look into.

A 1 GBit network card can only handle around 112 MB/sec at most. But in corporate environments server-to-server communication should hopefully be handled by 10 GBit NICs.

For a small server or home-office backup, a local ZFS pool with periodic backup to an external disk would be a valid approach.
A 112 MB/sec backup over a 1 GBit NIC would probably be more than sufficient in smaller environments.
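Exporting a ZFS dataset over NFS is built into ZFS itself. A sketch, assuming a hypothetical storage host, dataset name, and client subnet:

```shell
# On the storage host: share the backup dataset via NFS
# (dataset name and client subnet are assumptions for illustration):
zfs set sharenfs='rw=@192.168.1.0/24' tank/dominobackup

# On the Domino host: mount the exported dataset as the backup target:
mount -t nfs storage-host:/tank/dominobackup /local/backup
```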


Local native ZFS is the fastest option


The best optimization is probably a local ZFS pool, because compression and de-duplication are handled locally and only the delta has to be written.

That's also why the native ZFS sub-volume mount in my LXC container setup is so important.


For a ZFS zvol in a VM combined with a local file system, the performance would probably look completely different as well.

The setup used here removes all intermediate layers and just uses native ZFS for backup operations, almost comparable to a physical host without any virtualization.
