Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed

 RAG 

If you’re interested in RAG, you should watch the OpenRAG Submit video

Daniel Nashed – 27 January 2026 20:53:01

Niklas Heidloff published a short blog post that pointed me to it:
https://heidloff.net/article/openrag/
That post was the trigger to look at this excellent OpenRAG Submit video.



OpenRAG Submit

https://www.youtube.com/watch?v=y3Tr1i_ynvI


Even if you never plan to use OpenRAG, this video is absolutely worth watching.
What makes it stand out is that it explains RAG by breaking it down into individual components, instead of treating it as a black box. It focuses on architecture, responsibilities, and trade-offs rather than on a single framework or product.

This is not just an OpenRAG video — it’s one of the clearest explanations of modern RAG architectures I’ve seen so far.

The speakers are clearly experts in their fields, but just as importantly, they are excellent communicators. The presentation is structured, easy to follow, and genuinely enjoyable to watch.
And yes — I  love the Run MCP T-shirt.


My Takeaways from the video

 
  • Many tools used in AI, RAG, and related components are open source
    That makes perfect sense. Openness, transparency, and extensibility are essential in this space.
    This also helps to accelerate development in the AI field.


  • RAG does not automatically mean “vector database”
    Vector databases are useful, but they are not mandatory. The video does a great job of explaining alternative approaches and when they make sense.


  • MCP servers are an important part of the stack
    They play a key role in how components communicate and integrate.


  • Data ingestion and tokenization really matter
    How data is ingested — and how it is tokenized — has a major impact on results. Tokenization strategies change over time and should be flexible and pluggable.


  • Everything should be pluggable
    All components in a RAG architecture should be replaceable. One of the most important emerging standards enabling this is MCP.


Domino Prometheus exporter -- Support for downtime statistics

Daniel Nashed – 27 January 2026 23:29:18

Measuring downtime is an interesting challenge and can be tricky. Here is an idea I am working on.
In the first step I always removed the .prom files when the task was shut down.
Now I am keeping domino.prom and only keep the stats that still make sense if the server is down.

As long as the Node Exporter is still running, the statistics are still provided via the /metrics endpoint.

DominoHealth_stat_update_timestamp

Updated whenever the stats are updated.
If the timestamp is not changing over a certain time, Grafana can detect that a server is down.
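
As a rough illustration of the idea from the shell (in Grafana this would be a query instead; the endpoint and port are assumptions):

# Minimal sketch: read the last update timestamp from the exporter output
# and compute its age (adjust host/port to your Node Exporter setup)
NOW=$(date +%s)
LAST=$(curl -s http://localhost:9100/metrics | awk '/^DominoHealth_stat_update_timestamp/ {print int($2)}')
echo "Last Domino stat update was $((NOW - LAST)) seconds ago"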


Maintenance status?

But what if the server is down for maintenance?

I added a way to set maintenance for a server via "tell domprom maintenance ..." commands.

DominoHealth_maintenance_status
Provides the current status

DominoHealth_maintenance_start_timestamp
Provides the start time of the maintenance window

DominoHealth_maintenance_end_timestamp
Provides the end time of the maintenance window

In addition the statistics report the last update time for the stats and also whether the server is restricted.
It took me a moment to get the statistics right, including showing the configuration.

The maintenance window is stored in memory and in notes.ini to preserve it at server restart.

I think this makes sense for uptime monitoring which will be part of a separate Grafana dashboard.

tell domprom status

01/27/2026 00:25:20   Collection  Interval :   64 seconds)
01/27/2026 00:25:20   Transaction Interval :  180 seconds)
01/27/2026 00:25:20   I/O Stats   Interval :  -Disabled-)
01/27/2026 00:25:20   Statistics File      :  /local/notesdata/domino/stats/domino.prom
01/27/2026 00:25:20   Transactions File    :  /local/notesdata/domino/stats/domino_trans.prom
01/27/2026 00:25:20   Maintenance start    :  01/27/2026 00:09:02 (since 16.28 minutes)
01/27/2026 00:25:20   Maintenance end      :  01/27/2026 01:49:02 (will end in 1.40 hours)


# HELP DominoHealth_Exporter_Build Domino Prometheus Exporter build version 1.0.3
# TYPE DominoHealth_Exporter_Build gauge
DominoHealth_Exporter_Build 10003
# HELP DominoHealth_stat_update_timestamp Domino Statistics last update epoch time
# TYPE DominoHealth_stat_update_timestamp gauge
DominoHealth_stat_update_timestamp 1769469174
# HELP DominoHealth_maintenance_start_timestamp Start of maintenance window in epoch time (01/27/2026 00:09:02)
# TYPE DominoHealth_maintenance_start_timestamp gauge
DominoHealth_maintenance_start_timestamp 1769472542
# HELP DominoHealth_maintenance_end_timestamp End of maintenance window in epoch time (01/27/2026 01:49:02)
# TYPE DominoHealth_maintenance_end_timestamp gauge
DominoHealth_maintenance_end_timestamp 1769478542
# HELP DominoHealth_maintenance_status Domino maintenance status
# TYPE DominoHealth_maintenance_status gauge
DominoHealth_maintenance_status 1
# HELP DominoHealth_server_restricted_status Domino server restricted status
# TYPE DominoHealth_server_restricted_status gauge
DominoHealth_server_restricted_status 4
# HELP DominoHealth_stat_shutdown_timestamp Domino statistic shutdown epoch time
# TYPE DominoHealth_stat_shutdown_timestamp gauge
DominoHealth_stat_shutdown_timestamp 1769472773
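
A small sketch of how a monitoring script could combine these metrics to decide whether a detected downtime falls into a maintenance window (endpoint and port are again assumptions):

NOW=$(date +%s)
curl -s http://localhost:9100/metrics | awk -v now="$NOW" '
  /^DominoHealth_maintenance_start_timestamp/ { start = $2 }
  /^DominoHealth_maintenance_end_timestamp/   { end   = $2 }
  END {
    if (start > 0 && now >= start && now <= end)
      print "Server is inside a maintenance window"
    else
      print "No active maintenance window"
  }'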


SPR #DNADDMUMFD: Certstore import fails with certificates with email or IP SANs

Daniel Nashed – 25 January 2026 16:22:46


Domino CertMgr only leverages DNS SAN attributes when generating CSRs in the manual flow and for ACME (Let's Encrypt & Co).
But an imported certificate can contain different types of SANs (Subject Alternative Names).


  • The email attribute isn't intended for web servers and causes certstore.nsf to show an error in the UI because an e-mail address is not a proper DNS name.
  • IP addresses could be used for web servers in general. But Domino does not leverage IP SANs.

The parsing of IP addresses currently fails, adds "garbage" to the host name field and sets the status of the certificate to invalid.
Christian pinged me about this issue and reported that he was able to manually fix the host name field and change the status of the TLS credentials document to make it load.

I would generally not use e-mail addresses as SANs for web server certificates (they can still be part of the CN).
For now also avoid IP addresses until the SPR is fixed.


The issue wasn't customer reported until this week (thanks Christian).
I found it a while ago and it got fixed in 14.5.1 (planned to ship 2026/3).

The fix will only read DNS SANs for imported certificates (see the Subject Alternative Name line in the example below).
The certificate itself stays unchanged and works as it is.
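
For reference, a test certificate with mixed SAN types like the one shown below could be created with OpenSSL (1.1.1 or newer for -addext; all names and addresses are placeholders):

# Self-signed ECDSA test certificate with DNS, IP and email SANs
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes \
  -keyout key.pem -out cert.pem -days 825 -subj "/CN=example.com" \
  -addext "subjectAltName=DNS:example.com,DNS:www.example.com,IP:192.168.1.10,email:admin@example.com"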


openssl x509 -in cert.pem -text -noout



Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            43:2d:87:c4:a2:ea:a8:e5:df:69:13:16:5d:86:89:f0:7a:9b:b0:37
        Signature Algorithm: ecdsa-with-SHA256
        Issuer: CN = example.com
        Validity
            Not Before: Jan 25 15:40:55 2026 GMT
            Not After : Apr 29 15:40:55 2028 GMT
        Subject: CN = example.com
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:c7:45:2b:81:97:aa:93:1f:eb:03:c5:86:07:5e:
                    27:65:a5:0f:72:f8:30:7a:b2:8b:91:ea:f2:7f:9d:
                    02:be:fe:6e:dd:f2:a6:13:fe:42:f9:b5:7a:5a:b2:
                    e5:34:c0:64:e7:b9:0d:64:9d:34:38:2e:b2:2e:69:
                    8a:0a:e7:ce:6c
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Subject Key Identifier:
                43:E9:3E:38:65:B4:8A:C9:82:FB:CB:FA:34:0C:75:36:C4:E0:AE:02
            X509v3 Authority Key Identifier:
                43:E9:3E:38:65:B4:8A:C9:82:FB:CB:FA:34:0C:75:36:C4:E0:AE:02
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Subject Alternative Name:
                DNS:example.com, DNS:www.example.com, IP Address:192.168.1.10, email:admin@example.com
    Signature Algorithm: ecdsa-with-SHA256
    Signature Value:
        30:45:02:20:04:9c:63:f0:ce:b5:5f:ae:15:b9:8f:34:6b:35:
        63:f2:e6:34:08:76:4f:3c:44:61:b0:ee:60:9d:2e:5b:e4:5f:
        02:21:00:d3:a6:04:ee:90:df:cc:75:ba:5a:84:24:6d:53:70:
        ba:ab:81:a5:cc:de:5c:0c:43:31:71:df:a7:5b:d6:cd:1e


 Domino  DAOS  daostune 

Leveraging DAOS for storage optimization and backup

Daniel Nashed – 25 January 2026 15:07:32

Domino DAOS (Domino Attachment and Object Service) was introduced in HCL Domino 8.5, released in 2008.

It replaced the old approach "Single Copy Object Store", which never really worked well.


In contrast, DAOS is a much simpler and more robust approach: the server generates a unique hash for each backend object behind an attachment and creates an NLO file (Notes Large Object) stored outside the .nsf file.

But still not everyone is leveraging DAOS -- it can also have great benefits for backup.


Benefits:


  • Deduplication can save ~30%, depending on how attachments are spread

  • Up to 60% - 70% of storage could be either deduplicated or stored in DAOS, which shrinks the .nsf file to around 30% - 40% of its original size.

  • .nsf requires high-performance disks and generates many small I/O operations
    In contrast DAOS uses larger I/O operations and works well on standard disks -- which is already a great optimization

  • The smaller .nsf files are also easier to maintain. DBMT and other operations work much faster on smaller databases

  • DAOS .nlo files are written once and a reference count is maintained in daoscat.nsf.
    In contrast to .nsf, this makes .nlo files the perfect incremental backup candidate -- which doesn't require a Domino-aware backup (see the sketch after this list)

  • This means you can focus your online backup on the much smaller .nsf files
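
A minimal sketch of what such an incremental copy could look like (the paths are assumptions; a real setup would also back up daoscat.nsf and handle retention):

# .nlo files are write-once, so a plain incremental copy only transfers new objects
rsync -a --ignore-existing /local/notesdata/DAOS/ /backup/daos/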


Consider DAOS also for backup purposes


If you have never used DAOS, you should really consider looking into it and reviewing your backup strategy.

When encryption is disabled, the DAOS store is also a candidate for cross-server deduplication.

You could point DAOS for multiple servers to a deduplicating storage provider to gain more storage benefits and central DAOS backup.



DAOS Tune


How do you find out what DAOS would look like for your server?


The new DAOS Tune replaces the earlier DAOS Estimator.

It is completely rewritten, much faster and provides more details.

DAOS Tune can run without specifying any parameters, but also supports more detailed analysis options.


Here is a sample scan for the DNUG production server -- which is not fully representative but shows the principle.


In our case we would only gain 15% deduplication.
But you can see that the remaining space in NSF is around 50% -- even on this small server.


When analyzing our DNUG server I noticed DAOS Tune wasn't yet in the container image.
You can now automatically add it to the container image via the -daostune build option.
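
As a hedged sketch (assuming the standard build script of the Domino container project), the build call could look like this:

# Add DAOS Tune while building the Domino container image
# (option name taken from the text above)
./build.sh domino -daostune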


Image:Leveraging DAOS for storage optimization and backup
 Domino  Backup 

Extending Domino Backup Integrations with a second backup tier

Daniel Nashed – 25 January 2026 12:06:26

Domino Backup can be used as a simple backup without any external backup solution.
But it can also be part of a multi-tier backup strategy.

The simplest backup enabled out of the box is writing to a local or remote storage location.
Usually this is deduplicating storage like ZFS.
ZFS can also be used to create a snapshot after the backup finishes. But you would need a trigger to invoke the snapshot.

For containers or with remote storage locations an in-sync trigger would be challenging.
In this case a trigger file to create a snapshot or a second tier backup on the remote storage would be helpful.

Here is an example of what you could write from the Post Backup command instead of executing an inline action:


/local/backup/nsf/backup_moby.lab.dnug.eu_FULL_20260125113609.req

BackupNode: moby.lab.dnug.eu
BackupRefDate: 20260125113609
BackupRetentionDateTimeText=20260201113609
Directory=/local/backup/nsf/moby.lab.dnug.eu/FULL/20260125113609


---

What is important to know is that the formula is invoked on the draft backup log document.
Not all fields have been written at that point - for example the counters for processed backups are not written.
But all the meta information is already available, like:



BackupNode
BackupMode
BackupRefDate
BackupRetentionDateTimeText
BackupTargetDir



This helps to build all kinds of integrations.
In my case I am looking into flows like this:


  1. Standard Domino Backup to a ZFS deduplicating storage
  2. Have the backend create a ZFS snapshot triggered by the tag file
  3. In addition use a second tier for DR recovery, using Restic to back up to S3 or Borg to back up to a repository via SSH (see the sketch below)
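
A hedged sketch of such a trigger handler for steps 2 and 3 (paths, pool and repository names are assumptions; credentials and error handling are omitted):

#!/bin/bash
# React to backup trigger files: create a ZFS snapshot and a Restic second tier copy
TRIGGER_DIR=/local/backup/nsf

for req in "$TRIGGER_DIR"/*.req; do
  [ -e "$req" ] || continue

  # Reference date written by the Post Backup formula (see the .req example above)
  REFDATE=$(sed -n 's/^BackupRefDate[:=][[:space:]]*//p' "$req")

  # Step 2: snapshot the backup file system
  zfs snapshot "tank/backup@$REFDATE"

  # Step 3: second tier copy to an S3 bucket via Restic
  restic -r s3:s3.example.com/domino-backup backup "$TRIGGER_DIR"

  rm -f "$req"
done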

The standard restore process would be used against the ZFS file storage.

Only in a DR situation:

  1. The local snapshot can be restored
  2. Or, if it is completely broken, restore from Restic/Borg Backup



Restic Backup would work well for an S3-based second tier backup.
Borg Backup would be a good choice for a remote DR location like a Hetzner StorageBox, because Borg Backup only sends the data once and is very good at deduplication and metadata caching.


The key point here is that the primary backup and restore stay very simple.
This concept would also work with Domino in Docker/Podman and Kubernetes containers, where you would just assign another volume (e.g. of a different storage class).



Image:Extending Domino Backup Integrations with a second backup tier


Virtualization at its best -- Windows 11 on Ubuntu Desktop using KVM

Daniel Nashed – 22 January 2026 01:45:09

KVM is part of Linux and well integrated. Now that I have Ubuntu running end to end (on a USB stick) I am adding KVM to give Windows 11 a try.
It is using ZFS for storing the virtual Windows disk. The installation was super easy and the performance is great.
I am still using a remote desktop connection to install it, but the notebook is next to me and runs everything native.
To stay on my primary machine with my main keyboard and to take screen prints I am using the RDP connection.

The KVM integration allows running full screen. For a moment you even forget you are running on an Ubuntu desktop.
Yes, this isn't what we want. We only want to run Windows applications if really needed.

The KVM integration is great. Besides the GUI there are simple commands to query settings, mount disks, etc. (see the examples below).
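
For example (a sketch; the domain name win11 is an assumption, use whatever name you gave the VM):

virsh list --all         # show all defined VMs and their state
virsh dominfo win11      # query settings of the Windows guest
virsh domblklist win11   # list the attached disks
virsh start win11        # boot the guest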

I also installed Nomad web running against a Domino server on the Linux host on Docker.
You can mix and match any combination. It just works and the performance is great.

I am not sure where I end up. I need a Notes client with a designer and I need a Windows C/C++ compiler.
My notebook will probably stay on Windows using WSL with Ubuntu.


But this is a great environment to have a transportable Linux including a Windows VM if needed.
Everything is on a stick. In my case a fast Samsung 256 GB stick. It would perform even better with an SSD.

But at no point during my testing did I notice that it was running from a USB stick.

Now that I got it working forward, backward and mixed up on multiple layers, I am prepared for any type of setup.

If you try this out on your own, ChatGPT is your best buddy when you are stuck!

This environment is now waiting for Notes/Domino 14.5.1 EA1 ...



Image:Virtualization at it’s best -- Windows 11 on Ubuntu Desktop using KVM
 Ubuntu 

Ubuntu Desktop is pretty cool and supports RDP - what? really?

Daniel Nashed – 21 January 2026 16:20:33

OK now that I have it working on a USB stick and can boot it any time, I am looking into more functionality.
The best way to run it would be native on a desktop or notebook.

But what if you need a jump host to connect to a customer environment or for other purposes?
Or you want to try out something and need to connect remotely to the desktop?

I thought the answer would be X11 and VNC. But ChatGPT came up with something better.

The answer surprised me, so let me share what ChatGPT explained:


-- snip --

Why RDP makes sense on Linux today
Modern Linux desktops are Wayland-based, composited, and GPU-driven, where exporting a full desktop via X11 no longer works well and VNC’s pixel-based approach scales poorly.
RDP efficiently transports rendered frames and input, works cleanly with Wayland, and supports encryption and dynamic resolution. On Linux it is just a protocol: GNOME Remote Desktop uses it for screen sharing, while xrdp provides true remote login.

-- snip --

It's fully integrated into the Ubuntu desktop and can just be enabled in the settings.
I would not expose that directly on a hosting provider, but it can be tunneled to the Linux box via SSH nicely (this is how I operate all hosted Windows servers too) -- see the sketch below.
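
A minimal sketch of such a tunnel (host name is a placeholder; 3389 is the standard RDP port):

# Forward the remote RDP port to the local machine over SSH,
# then point the RDP client at localhost:3390
ssh -L 3390:localhost:3389 user@ubuntu-box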

Once enabled you can just use RDP to connect to it.

Sounds really weird, but it really works very, very well, including copy & paste.

For testing this is very convenient. But in most cases you would run a native Linux host and use RDP to connect to Windows boxes.

Ubuntu Desktop is amazingly easy to use and very cleaned up. It all just makes sense, and so do the controls.

Now I have a transportable workstation which I could even boot from VMware if needed as a machine in parallel. And I can RDP from the host into the guest.

I could imagine more scenarios which would be possible. Not all of them make sense in production.
But I am also thinking about test environments and sandbox environments.

-- Daniel

Image:Ubuntu Desktop is pretty cool and supports RDP - what? really?



 Ubuntu 

Ubuntu USB Stick Day 3 - Native USB install

Daniel Nashed – 21 January 2026 14:13:24

When I woke up this morning I had another idea. I don't know why I did not start like this.

But research showed that this approach only works with modern UEFI configurations. The boot would be problematic with a standard BIOS.


The idea:


Instead of using a Live USB stick with persistent data mode, I am just installing Ubuntu natively on a USB stick.

No more read-only file-system. No copy-on-write mode for the persistent data. No hacks to get another normal file-system.


The steps are pretty simple now that I know it works. And I even took a different approach to set it up.


  • I created a dummy VMware Workstation VM and booted from ISO/DVD.
  • Then I mounted an empty USB stick from the host
  • The dummy disk stays untouched


The same would also work with two USB sticks, but this way I made sure my SSD with Windows on it would not be touched and I had a fast retry test environment.


Once started I used the normal installer to do a custom partitioned install on the USB stick:


  • Custom install instead of the default
  • Create a boot partition on the USB stick
  • Add a sufficiently sized ext4 partition and set the mount point to /
  • Keep some free space for ZFS later and create an empty partition


The installation just proceeds the standard way and you can boot from the USB stick.

In my case I booted in the VMware environment first before using the stick to boot my notebook natively.
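
A quick way to sanity-check the resulting layout after the first boot (the device name /dev/sda is an assumption and may differ on your machine):

lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sda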


All the remaining steps to set up ZFS and Docker remain the same, and the result looks like this:


Filesystem       Size  Used Avail Use% Mounted on
/dev/sda2         39G   11G   26G  30% /
tmpfs             16G  8.0K   16G   1% /dev/shm
tank             186G  128K  186G   1% /tank
tank/local       189G  2.9G  186G   2% /local
tank/docker      187G  438M  186G   1% /tank/docker
tank/containerd  186G  256K  186G   1% /tank/containerd



---


Sometimes it takes more than one iteration to get it right.

I think this is now a quite flexible setup that can even be used for production.
For sure it is a good backup if you run into issues with your normal machine configuration.

 Notes  Domino 

Notes/Domino 14.5.1, planned to ship in Q1/2026, will replace the 14.5 code stream

Daniel Nashed – 21 January 2026 23:24:07

When asking support for a fix to be included in 14.5 FP2, I was surprised that there is no fixpack planned for that code stream.
Similar to 12.0.2 replacing 12.0.1, the Notes/Domino 14.5.1 release is planned to be the next version after 14.5 FP1.

According to the public fixlist this will be in Q1, as shown below.
That makes it even more important to look into EA2 soon, if this is the version we are going to update current 14.5 FP1 installations to.


The DNUG Domino focus group is planning an event on 24 March focusing on this new release for client and server.

The container image will be updated and Domino autoupdate should be available to update automagically from EA1 to EA2 as well.
Stay tuned for details and see you in the forum for EA2.


Image:Notes/Domino 14.5.1 planned to ship in Q1/2026 is planned to replace the 14.5 code stream

Ubuntu USB Stick Day 2 - Using free space to install ZFS

Daniel Nashed – 20 January 2026 20:32:36

Yesterday I used /dev/shm to provide disk storage for containerd and Docker.

Today I found a way to get some space from the persistent partition /cow and create a ZFS pool.

Rufus can't make this adjustment. But there is a simple trick:


Just create the USB image with as much persistent storage as possible.
Then use GParted on Linux to make the partition smaller and create another partition.

After that I booted from the USB stick and did all the installation again using ZFS for Docker.


--- Update 21.1.2026 ---

It turned out that today with UEFI you can install directly on a USB stick and boot from there.
No tricks needed. But everything I did was a very good learning experience, and some of it will be helpful in other scenarios, like the trick to use tmpfs to speed things up in a lab environment.
Check this blog post for the final solution -->
https://blog.nashcom.de/nashcomblog.nsf/dx/ubuntu-usb-stick-day-3-native-usb-install.htm
But this information is still mostly useful and shows how I came up with the new layout.
In the new layout I leave free space on a natively installed Linux on USB, allocating an unformatted partition to add ZFS once Linux is installed.

---


New disk layout


df -h

Filesystem       Size  Used Avail Use% Mounted on
/dev/sda1        5.8G  5.3G  529M  92% /cdrom
/cow              30G  5.8G   22G  21% /
tmpfs             16G  8.0K   16G   1% /dev/shm
tmpfs             16G  8.0K   16G   1% /tmp
tank              21G  128K   21G   1% /tank
tank/local        23G  1.4G   21G   7% /local
tank/containerd   21G  384K   21G   1% /tank/containerd
tank/docker       21G  3.3M   21G   1% /tank/docker


ZFS Filesystems


zfs list

NAME              USED  AVAIL  REFER  MOUNTPOINT
tank             2.81G  20.9G   104K  /tank
tank/containerd   272K  20.9G   272K  /tank/containerd
tank/docker      1.41G  20.9G  3.18M  /tank/docker
tank/local       1.39G  20.9G  1.39G  /local


That's pretty cool. But the original setup was a lot faster. Now I am bound to the USB stick performance.
Before I was just using RAM. At runtime we could still put the Domino data disk into tmpfs for testing ...



Installation notes



# Create zpool
zpool create -o ashift=12 -o autotrim=on tank /dev/sda3
zfs set compression=lz4 tank
zfs set atime=off tank
zfs set xattr=sa tank
zfs set recordsize=32K tank

# Create file-systems
zfs create -o mountpoint=/local tank/local
zfs create -o mountpoint=/tank/docker tank/docker
zfs create -o mountpoint=/tank/containerd tank/containerd

# Link storage location for containerd to the new location
ln -s /tank/containerd /var/lib/containerd

# Ensure Docker uses ZFS and the file-system created
mkdir -p /etc/docker

vi /etc/docker/daemon.json

{
  "data-root": "/tank/docker",
  "storage-driver": "zfs"
}

# Start Docker and containerd
systemctl start containerd
systemctl start docker
systemctl start docker.socket
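
A quick verification sketch that the pool is healthy and Docker really uses the ZFS storage driver:

zpool status tank                      # pool should report ONLINE
docker info --format '{{.Driver}}'     # should print: zfs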


