Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed

Virtualization at its best -- Windows 11 on Ubuntu Desktop using KVM

Daniel Nashed – 22 January 2026 01:45:09

KVM is part of Linux and well integrated. Now that I have Ubuntu running end to end (on a USB stick), I am adding KVM to give Windows 11 a try.
It uses ZFS for storing the virtual Windows disk. The installation was super easy and the performance is great.
I am still using a remote desktop connection to install it, but the notebook is right next to me and runs everything natively.
To stay on my primary machine with my main keyboard and to take screenshots, I am using the RDP connection.

The KVM integration allows running full screen. For a moment you even forget you are running on an Ubuntu desktop.
Yes, this isn't the goal. We only want to run Windows applications when really needed.

The KVM integration is great. Besides the GUI there are simple commands to query settings, mount disks, etc.
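For example, the libvirt command line can query and inspect guests. This is just a sketch; "win11" is a hypothetical VM name, and virt-filesystems requires the libguestfs tools to be installed:

```shell
virsh list --all            # show all defined VMs and their state
virsh dominfo win11         # basic info about a guest (CPUs, memory, state)
virsh domblklist win11      # which disk images are attached
virt-filesystems -d win11   # list file systems inside the guest disk (libguestfs)
```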

I also installed Nomad Web, running against a Domino server in Docker on the Linux host.
You can mix and match any combination. It just works and the performance is great.

I am not sure where I will end up. I need a Notes client with Designer, and I need a Windows C/C++ compiler.
My notebook will probably stay on Windows using WSL with Ubuntu.


But this is a great environment to have a transportable Linux including a Windows VM if needed.
Everything is on a stick. In my case a fast Samsung 256 GB stick. It would perform even better with an SSD.

But during all my testing I never once noticed I was running from a USB stick.

Now that I got it working forward, backward, and mixed up across multiple layers, I am prepared for any type of setup.

If you try this out on your own, ChatGPT is your best buddy when you are stuck!

This environment is now waiting for Notes/Domino 14.5.1 EA1 ...



Image:Virtualization at its best -- Windows 11 on Ubuntu Desktop using KVM
 Ubuntu 

Ubuntu Desktop is pretty cool and supports RDP - what? really?

Daniel Nashed – 21 January 2026 16:20:33

OK, now that I have it working on a USB stick and can boot it any time, I am looking into more functionality.
The best way to run it would be native on a desktop or notebook.

But what if you need a jump host to connect to a customer environment or for other purposes?
Or you want to try out something and need to connect remotely to the desktop?

I thought the answer would be X11 and VNC. But ChatGPT came up with something better.

The answer surprised me, so let me share what ChatGPT explained:


-- snip --

Why RDP makes sense on Linux today
Modern Linux desktops are Wayland-based, composited, and GPU-driven, where exporting a full desktop via X11 no longer works well and VNC’s pixel-based approach scales poorly.
RDP efficiently transports rendered frames and input, works cleanly with Wayland, and supports encryption and dynamic resolution. On Linux it is just a protocol: GNOME Remote Desktop uses it for screen sharing, while xrdp provides true remote login.

-- snip --

It's fully integrated into the Ubuntu desktop and can simply be enabled in Settings.
I would not expose it directly on a hosting provider, but it can nicely be tunneled to the Linux box via SSH (this is how I operate all hosted Windows servers, too).
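A typical SSH tunnel for this looks like the sketch below (host name and user are placeholders):

```shell
# Forward local port 3390 to the RDP port 3389 on the remote Linux box
ssh -L 3390:localhost:3389 user@linux-box

# Then point the RDP client at localhost:3390
```

The RDP traffic then never travels outside the encrypted SSH connection.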

Once enabled you can just use RDP to connect to it.

Sounds really weird. But it works very well, including copy & paste.

For testing this is very convenient. But in most cases you would run a native Linux host and use RDP to connect to Windows boxes.

Ubuntu Desktop is amazingly easy to use and very clean. It all just makes sense, and the controls make sense, too.

Now I have a transportable workstation which I could even boot in VMware as a parallel machine if needed. And I can RDP from the host into the guest.

I could imagine more scenarios which would be possible. Not all of them make sense in production.
But I am also thinking about test environments and sandbox environments.

-- Daniel

Image:Ubuntu Desktop is pretty cool and supports RDP - what? really?



 Ubuntu 

Ubuntu USB Stick Day 3 - Native USB install

Daniel Nashed – 21 January 2026 14:13:24

When I woke up this morning I had another idea. I don't know why I did not start like this.

But research showed that this approach only works with modern UEFI configurations. Booting would be problematic with a legacy BIOS.


The idea:


Instead of using a live USB stick with persistent data mode, I am just installing Ubuntu natively on a USB stick.

No more read-only file system. No copy-on-write mode for the persistent data. No hacks to get another normal file system.


The steps are pretty simple now that I know it works. And I even took a different approach to set it up.


  • I created a dummy VMware Workstation VM and booted from ISO/DVD.
  • Then I mounted an empty USB stick from the host.
  • A dummy disk stays untouched.


The same would also work with two USB sticks, but this way I made sure my SSD with Windows on it would not be touched, and I had a fast retry test environment.


Once started I used the normal installer to do a custom partitioned install on the USB stick:


  • Custom install instead of the default
  • Create a boot partition on the USB stick
  • Add a sufficiently sized ext4 partition and set the mount point to /
  • Keep some free space for ZFS later and create an empty partition


The installation then just proceeds the standard way, and you can boot from the USB stick.
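As a sketch, the same layout could also be created from a terminal with parted. The device name /dev/sdb is an assumption; verify with lsblk first:

```shell
# WARNING: destructive -- double-check the device name with lsblk
parted /dev/sdb -- mklabel gpt
parted /dev/sdb -- mkpart ESP fat32 1MiB 513MiB    # EFI boot partition
parted /dev/sdb -- set 1 esp on
parted /dev/sdb -- mkpart root ext4 513MiB 40GiB   # root file system
# Leave the remaining space unallocated for ZFS later

mkfs.fat -F32 /dev/sdb1
mkfs.ext4 /dev/sdb2
```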

In my case I booted in the VMware environment first, before using the stick to boot my notebook natively.


All the remaining steps to set up ZFS and Docker remain the same, and the result looks like this:


Filesystem             Size  Used Avail Use% Mounted on

/dev/sda2               39G   11G   26G  30% /

tmpfs                   16G  8.0K   16G   1% /dev/shm

tank                    186G  128K  186G   1% /tank

tank/local              189G  2.9G  186G   2% /local

tank/docker             187G  438M  186G   1% /tank/docker

tank/containerd         186G  256K  186G   1% /tank/containerd



---


Sometimes it takes more than one iteration to get it right.

I think this is now quite a flexible setup and could even be used for production.
For sure it is a good backup if you run into issues with your normal machine configuration.

 Notes  Domino 

Notes/Domino 14.5.1, planned to ship in Q1/2026, is going to replace the 14.5 code stream

Daniel Nashed – 21 January 2026 23:24:07

When asking support for a fix to be included in 14.5 FP2, I was surprised that there is no fix pack planned for that code stream.
Similar to 12.0.2 replacing 12.0.1, the Notes/Domino 14.5.1 release is planned to be the next version after 14.5 FP1.

According to the public fix list this will be in Q1, as shown below.
That makes it even more important to look into EA2 soon, if this is the version we are going to update current 14.5 FP1 installations to.


The DNUG Domino focus group is planning an event on 24 March focusing on this new release for client and server.

The container image will be updated, and Domino AutoUpdate should be able to update automagically from EA1 to EA2 as well.
Stay tuned for details and see you in the forum for EA2.


Image:Notes/Domino 14.5.1, planned to ship in Q1/2026, is going to replace the 14.5 code stream

Ubuntu USB Stick Day 2 - Using free space to install ZFS

Daniel Nashed – 20 January 2026 20:32:36

Yesterday I used /dev/shm to provide disk storage for containerd and Docker.

Today I found a way to get some space from the persistent partition /cow and create a ZFS pool.

RUFUS can't make this adjustment. But there is a simple trick:


Just create the USB image with as much persistent storage as possible.
Then use GParted on Linux to make the partition smaller and create another partition.

After that I booted from the USB stick and did all the installation again using ZFS for Docker.


--- Update 21.1.2026 ---

It turned out that today, with UEFI, you can install directly on a USB stick and boot from there.
No tricks needed. But everything I did was a very good learning experience, and some of it will be helpful in other scenarios -- like the trick to use tmpfs to speed things up in a lab environment.
Check this blog post for the final solution -->
https://blog.nashcom.de/nashcomblog.nsf/dx/ubuntu-usb-stick-day-3-native-usb-install.htm
But this information is still mostly useful, and it shows how I came up with the new layout.
In the new layout I leave free space on a natively installed Linux on USB, allocating an unformatted partition where ZFS is added once Linux is installed.

---


New disk layout


df -h

Filesystem       Size  Used Avail Use% Mounted on

/dev/sda1        5.8G  5.3G  529M  92% /cdrom

/cow              30G  5.8G   22G  21% /

tmpfs             16G  8.0K   16G   1% /dev/shm

tmpfs             16G  8.0K   16G   1% /tmp

tank              21G  128K   21G   1% /tank

tank/local        23G  1.4G   21G   7% /local

tank/containerd   21G  384K   21G   1% /tank/containerd

tank/docker       21G  3.3M   21G   1% /tank/docker


ZFS Filesystems


zfs list

NAME                         USED  AVAIL  REFER  MOUNTPOINT

tank                        2.81G  20.9G   104K  /tank

tank/containerd              272K  20.9G   272K  /tank/containerd

tank/docker                 1.41G  20.9G  3.18M  /tank/docker

tank/local                  1.39G  20.9G  1.39G  /local


That's pretty cool. But the original setup was a lot faster; now I am bound to the USB stick performance.
Before, I was just using RAM. At runtime we could still put the Domino data disk into tmpfs for testing ...



Installation notes



# Create zpool

zpool create -o ashift=12 -o autotrim=on tank /dev/sda3

zfs set compression=lz4 tank

zfs set atime=off tank

zfs set xattr=sa tank
zfs set recordsize=32K tank

# Create file-systems

zfs create -o mountpoint=/local tank/local

zfs create -o mountpoint=/tank/docker tank/docker

zfs create -o mountpoint=/tank/containerd tank/containerd


# Link storage location for containerd to the new location

ln -s /tank/containerd /var/lib/containerd


# Ensure Docker uses ZFS and the file-system created

mkdir -p /etc/docker


vi /etc/docker/daemon.json


{

"data-root": "/tank/docker",

"storage-driver": "zfs"

}


# Start Docker and containerd

systemctl start containerd

systemctl start docker

systemctl start docker.socket
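A quick sanity check that Docker actually picked up the ZFS driver and the new data root (assuming the services came up cleanly):

```shell
# Should print: zfs
docker info --format '{{.Driver}}'

# Should print: /tank/docker
docker info --format '{{.DockerRootDir}}'

# The datasets should show usage once images are pulled
zfs list tank/docker tank/containerd
```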



Hetzner Server Factory for Domino workshops

Daniel Nashed – 20 January 2026 17:03:48
Quite some time ago, I put together a small Domino database as a lab registration and automation tool. The original idea was simple and very pragmatic:
bring up Hetzner servers on demand for workshops, send notification emails to each participant, and tear everything down again once the workshop was over.


From a user perspective, the setup was intentionally minimal. You only had to provide your API keys for the DNS API and the Cloud API, and the database took care of the rest.
Over time, however, things changed. Hetzner merged the DNS API into the Hetzner Cloud API, and that was a good excuse for me to revisit the whole implementation.


Making the database smarter


Before my recent changes, the database required you to manually specify server type and location.
While that worked, it always felt a bit clunky. When I looked at it again, the obvious improvement was to stop hardcoding assumptions and instead pull all relevant metadata dynamically via REST.


So now the database reads:


  • locations and data centers
  • available images
  • server types and capabilities
  • and related configuration options
     
directly from the API and stores them locally. This not only reduces errors, but also makes the solution future-proof when Hetzner adds new regions or instance types.
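The database does this in LotusScript, but the same read-only metadata queries can be sketched with curl against the documented Hetzner Cloud API v1 routes ($HCLOUD_TOKEN is a placeholder for your API token):

```shell
# List locations, server types and system images (read-only calls)
curl -s -H "Authorization: Bearer $HCLOUD_TOKEN" \
    "https://api.hetzner.cloud/v1/locations"
curl -s -H "Authorization: Bearer $HCLOUD_TOKEN" \
    "https://api.hetzner.cloud/v1/server_types"
curl -s -H "Authorization: Bearer $HCLOUD_TOKEN" \
    "https://api.hetzner.cloud/v1/images?type=system"
```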

DNS, SSH, and networking — automated


The database was already providing the most important automation steps:


  • setting DNS records automatically for the assigned public IP
  • creating the corresponding IN-ARPA reverse DNS entries
  • assigning SSH keys during server provisioning
     
On top of that, I’ve now added support for:

  • labels, which are extremely useful for grouping and lifecycle management
  • automatic assignment to a private network, selected via a configuration profile
     

That last point is particularly important for lab deployments, where you want all machines to immediately see each other without any manual networking work.


Intended use: workshops and labs


This database is not (yet) an open-source project. That said, I’m more than happy to provide it to user groups that want to run workshops and need temporary cloud infrastructure.

In fact, we are already planning workshops this year for DNUG and Engage, and chances are we’ll need Hetzner-based lab environments spun up on the fly.
This database was designed for exactly that use case.

An interesting side effect: after adding the new functionality, I was asked to import existing DNUG lab data into the database.
That worked without any issues — even with read-only API tokens. Each database instance manages exactly one Hetzner project, defined solely by the API token it uses.


What’s next?


The next logical step is cloud-init integration. Not for anything fancy, but to bootstrap lab environments more efficiently — for example:


  • starting custom scripts pulled from Git
  • mounting and using an NFS share prepared as part of the lab


No magic involved. This is a straightforward REST API implementation in LotusScript talking to the Hetzner Cloud API — which, by the way, is easy to consume and very well designed.

I may also give the database UI a bit of love and improve navigation -- No, I’m not going to use Notes Restyle ...




Image:Hetzner Server Factory for Domino workshops
 Domino  Ubuntu  USB 

A wild configuration - Domino on Ubuntu Desktop 26.04 live USB stick boot with persistent mode

Daniel Nashed – 20 January 2026 00:41:22

--- Update 21.1.2026 ---

This was a good start, but it evolved over the week.
It turned out that today, with UEFI, you can install directly on a USB stick and boot from there.
No tricks needed. But everything I did was a very good learning experience, and some of it will be helpful in other scenarios -- like the trick to use tmpfs to speed things up in a lab environment.
Check this blog post for the final solution -->
https://blog.nashcom.de/nashcomblog.nsf/dx/ubuntu-usb-stick-day-3-native-usb-install.htm


This evening I have been playing with the Ubuntu Desktop live ISO booted from USB and tried to see how far I can get.

This was a learning experience about the components involved, and some of it will surely get refined over time.
The special part is that this is an environment booted from USB -- not a USB drive used to automatically install Ubuntu on a machine.
I have played with auto-installing Ubuntu via USB earlier, but this is a different scenario.

The notebook is untouched. And yes all the standard Linux desktop functionality works as well.
Like using the browser to access a local or remote server via HCL Nomad.


Ubuntu 26.04 LTS (Resolute Raccoon) Beta


To see how stable it already is, I downloaded the latest ISO for Ubuntu 26.04 and copied it to a USB stick using RUFUS.
The officially released version would be 24.04 LTS.


https://releases.ubuntu.com/26.04-snapshot1/

Ubuntu Desktop Live ISO


I first looked into how I can boot the live Desktop variant on my Thinkpad.

Without any modification, the current beta of the downloaded ISO (5.3 GB) worked like a charm.

Ubuntu Persistent data mode


But that does not persist any data. So I told RUFUS to reserve 200 GB of my 256 GB USB stick for data. That's a simple change when writing the ISO, by specifying the space to preserve.

Booting with a persistent data location, I get a /cow volume of 200 GB assigned after booting Ubuntu from the USB stick. All changes go into the copy-on-write file system automatically.

When I reboot all my data is still available!


Running Docker


I had to try out whether I could get Docker working. I could download images, but I could not start them.
Docker will not work because it can't use the file system.
The file system is already virtual, and Docker uses a similar mechanism.


I could have reserved extra space on the USB stick.
Or I could have added another USB stick or drive.

But I came up with another wild idea because I did not want to redo my USB stick and I did not want to add another one ...

Trick: Leverage /dev/shm


My notebook has 32 GB of RAM. By default, half of the memory is available as a tmpfs at /dev/shm.
A maximum of 16 GB of memory is fine for the images and volumes of a small Domino test server.

I pointed Docker and containerd to subdirectories in /dev/shm for their data.

Yes, the data will be gone when I reboot. But it is pretty fast. All Docker I/O is in RAM :-)


It's just for testing, and I can download the base images and rebuild the container image after I reboot my machine.
Instead of using a Docker volume, I pointed my Docker container to /dev/shm/local as well.
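This is roughly what the redirection looked like. A sketch only: the paths are my own choice, not defaults, and everything under /dev/shm is lost on reboot:

```shell
# Create tmpfs-backed directories for Docker, containerd and Domino data
mkdir -p /dev/shm/docker /dev/shm/containerd /dev/shm/local

# Point containerd's default state directory into tmpfs
ln -s /dev/shm/containerd /var/lib/containerd

# Point Docker's data root into tmpfs
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/dev/shm/docker"
}
EOF

systemctl restart containerd docker
```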


The result was a freaking fast Domino server. Once the data was cached, catalog.nsf was updated in a second.

Domino Backup also completed in a second as soon as the data was cached.



Conclusion / What to learn from it


This was a pretty fun experience, and there is a lot to learn from it about the individual components. The setup is really quick once you know what to do. Because a lot of it is in RAM, it is pretty fast.

Now I would probably use this for a Docker workshop. It would be possible to just hand out prepared USB sticks, if a notebook is allowed to be booted with Ubuntu.
Ubuntu comes with the right signatures to be booted on a Windows 11 machine without any issues. No modification to Secure Boot is needed (and changing it would be a disaster, because you would break your Windows 11).


Pointing Docker and containerd to /dev/shm was a wild trick. But everything else was really out of the box and would just work after adding persistent space to a USB stick using RUFUS.


References and further reading


On Windows, RUFUS is really the tool of choice. If you already have an Ubuntu machine, there are also other options.


https://documentation.ubuntu.com/desktop/en/latest/how-to/create-a-bootable-usb-stick

The documentation also mentions RUFUS on Windows, but persistence is not explained in this tutorial.

It's pretty simple though: you just use the slider in the RUFUS menu to reserve the space that should be preserved.



 Domino  Linux 

Domino Diagnostics Collect is available

Daniel Nashed – 11 January 2026 21:04:44
The first version of the Domino Diagnostic Collect script is available in the develop branch of the start script and the container project.
I am still testing it, but I already want feedback. When updating the start script or the container image, it is added to /usr/bin automatically like other scripts.


The configuration is flexible and supports three targets.


  • SCP
  • OwnCloud/NextCloud via WebDAV upload
  • Sending mail using nshmailx


I also added 7-Zip support for better compression and password support.
You can define a fixed password, prompt for a password, or let the script generate a random password on request.


You just invoke it via domdiagcollect.

The configuration is invoked via domdiagcollect cfg, and a configuration file is written on the first cfg invocation.

Domino Grafana Monitoring meets "show trans"

Daniel Nashed – 11 January 2026 15:01:59
In a current customer troubleshooting scenario, we are looking into different transaction types to narrow down what causes the servers to spike.
Transaction counts are always captured, but not shown or written anywhere automatically.

The Min, Max, and Average values are not helpful for Grafana, but turning the Count and Total values into Prometheus-compatible stats can be quite helpful.
The average is better calculated from the raw data instead of using the Domino average.


It's not intended for permanent monitoring. Usually the number of transactions is sufficient and can be reset and checked at any time.
But for performance troubleshooting it can be turned on in the latest version of the Domino Prometheus Node Exporter.


In this new version 1.0.x, the .prom files also have proper HELP and TYPE lines. The exporter no longer depends on the Node Exporter to add HELP and TYPE.

HELP contains the native Domino performance stat where available. But there are additional statistics turning important Domino text stats into numbers, and also turning some TIMEDATE values into epoch times -- which are then captured by Prometheus.

There is still work in progress -- especially beefing up the documentation. But I wanted to give a quick update about the new transaction option.


show trans


Function                       Count     Min     Max      Total    Average

OPEN_DB                          799       0       3        570          0

OPEN_NOTE                        298       0       2        103          0

DB_INFO_GET                      252       0       1         51          0

SEARCH                           504       0      15        918          1

DB_REPLINFO_GET                  756       0       5        193          0

REMOTE_CONSOLE                     2       0       0          0          0

OPEN_COLLECTION                  252       0       4        297          1

START_SERVER                       3      10    1383       1452        484

...


Prometheus stats example


# HELP DominoTrans_count Transaction count
# TYPE DominoTrans_count counter

DominoTrans_count{op="OPEN_NOTE"} 298
DominoTrans_count{op="START_SERVER"} 3

...

# HELP DominoTrans_total_ms Total transaction time in milliseconds
# TYPE DominoTrans_total_ms counter
DominoTrans_total_ms{op="OPEN_NOTE"} 103
DominoTrans_total_ms{op="START_SERVER"} 1452

...


This turns into useful Grafana dashboards


Once you have the statistics you can add them to dashboards.
The challenge is how to present the data. I am still looking into it.

rate or irate do not seem to match well for these kinds of stats, which are only collected every 4-8 minutes.

But increase seems to be a pretty good match. I did some performance testing today, which I accompanied with Grafana stats testing to fine-tune the first dashboard for transaction stats.
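For example, panel queries along these lines are a reasonable starting point (metric names as in the example above; the 10m range is my assumption, chosen to always cover at least one 4-8 minute collection interval):

```promql
# Transactions per operation over the last 10 minutes
increase(DominoTrans_count{op="OPEN_NOTE"}[10m])

# Average transaction time in ms, derived from the two counters
increase(DominoTrans_total_ms{op="OPEN_NOTE"}[10m])
  / increase(DominoTrans_count{op="OPEN_NOTE"}[10m])
```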

The new code for the Domino Prometheus exporter generating .prom files is already committed to the repository. But I am still testing and have not updated the exporter.
There is one breaking change: I decided to prefix the Domino stats with "Domino_", but there is a notes.ini setting to disable the prefix.

The Grafana project now turned into a version 1.0.0 with those new stats and new options.

There is still work to do. But it is pretty useful already and very straightforward to set up.
I am going to add more documentation and how-to materials while I first deploy it at a customer site, hopefully next week.

There is one detail about collecting the "show trans" stats: I am using the remote console operation, capturing and parsing the results, and suppressing the output in log.nsf and console.log by prefixing the command with an exclamation mark.


Image:Domino Grafana Monitoring meets "show trans"


Image:Domino Grafana Monitoring meets "show trans"
 Linux  Tools 

My favorite SSH client on Windows - MobaXterm

Daniel Nashed – 3 January 2026 10:41:04

Putty has been the tool of choice for many customers for years.

I switched from Putty to MobaXterm a long time ago.

It leverages the Putty back-end for SSH sessions and even it supports X11 sessions I mainly use it for SSH and SFTP.

But it is also a very good X11 implementation of you need it -- I am mainly using it for SSH terminal sessions.


MobaXterm also detects WSL on your machine and provides support for WSL sessions with the same user interface.

In addition to SSH it offers many other protocols and opens an SFTP file tab for your SSH session.

For me it also replaced WinSCP, which I used in earlier days along with PuTTY.


MobaXterm comes in a free edition with some limitations -- like the number of sessions.

I am using the paid, installed version, but there is also a portable edition if you can't install software on your machine.


https://mobaxterm.mobatek.net/


It comes with many additional options which most admins will never need.

I am using MobaXterm because of the great usability:


  • Font configuration
  • Increasing font size when needed via CTRL + mouse wheel
  • Great terminal settings
  • Great SSH support including jump host configurations
  • Support for many session tabs at once, including detaching and reattaching session windows
  • Split window mode
  • Securely stored passwords; SSH key passwords only need to be entered once while MobaXterm is running
  • Great session management including folders and export/import
     
Just to give you some of the great options -- there is much more.
For example, SFTP can also be used to round-trip edit files with the integrated editor or external editors like Notepad++ (another of my favorite tools).


If you are using PuTTY with WinSCP or other tools on Windows, I would really recommend trying out MobaXterm.

No, I am not getting any commission from them -- I even pay for it.
I really think this would help other Domino admins on Linux and AIX.



Image:My favorite SSH client on Windows - MobaXterm
