Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed

Domino container image: using a different editor than vi

Daniel Nashed – 5 June 2025 23:01:58

This came up in a discussion at Engage conference.
The functionality is not new but maybe not well known.

I am personally a big fan of vi, because I have been using it since the early days of Linux.

But there are other editors like nano or mcedit from Midnight Commander (MC).
Not all distributions come with all editors available.
For example, UBI includes nano but not MC.

The container image supports adding any package available for the distribution.


Here is a command-line example of how to install nano at build time:


-linuxpkg=nano
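The option above is passed to the container project's build script. A hypothetical invocation could look like this (the build.sh entry point and product name are assumptions based on the project's usual layout, not quoted from this post — check the project documentation):

```shell
# Hypothetical build call adding nano to the image at build time
# (script name and product argument are assumed)
./build.sh domino -linuxpkg=nano
```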


Once an alternate editor is installed, you can set environment variables via dominoctl (dominoctl env), which are passed to the running container.

The editor variable all involved scripts support is:

EDIT_COMMAND=nano


I just added another option which many Linux tools support:


EDITOR=nano


EDIT_COMMAND is checked first. If it is not set, the EDITOR variable is checked.

If nothing is specified "vi" is used.
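The fallback chain can be sketched as a small shell function (a minimal sketch of the described logic, not the actual start script code):

```shell
# Pick the editor: EDIT_COMMAND wins, then EDITOR, then the vi default
get_editor()
{
  if [ -n "$EDIT_COMMAND" ]; then
    echo "$EDIT_COMMAND"
  elif [ -n "$EDITOR" ]; then
    echo "$EDITOR"
  else
    echo "vi"
  fi
}
```

So setting EDIT_COMMAND=nano always wins, even if EDITOR points somewhere else.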


This option provides an easy way to customize the container image.

Sadly, not every additional package is available on UBI.
Depending on which command you want to install (for example ncdu), Ubuntu could be the better choice.


-from=ubuntu


selects the latest Ubuntu LTS version as the base image for your Domino container image.

Nomad Web 1.0.16 Spell Check and LS2CAPI support

Daniel Nashed – 3 June 2025 16:55:41

Nomad Web 1.0.16 just shipped. It comes with two great new features:

full spell check support and LS2CAPI.

The look & feel has also changed in some details.


I have already updated the Domino Container project and installed it in the DNUG Lab by updating the container image ...



Image:Nomad Web 1.0.16 Spell Check and LS2CAPI support

Red Hat Enterprise Linux 10 & UBI 10 available

Daniel Nashed – 1 June 2025 19:15:18

Earlier than announced, Red Hat Enterprise Linux 10 and the Universal Base Image (UBI) 10 are available.

There are no surprises because it is based on CentOS Stream 10, which I have looked into before.


RHEL 10 came out too late to be officially supported with the upcoming Domino 14.5 release on June 17.


The Linux kernel has been upgraded from 5.14 to 6.12.
glibc has been updated from 2.34 to 2.39.


That's not too big a jump. Ubuntu 24.04 is already running the same glibc 2.39 and a similarly recent Linux kernel 6.8.


I don't expect many customers to move immediately. But I have already added the UBI 10 image to the container project.

It's not yet the default, but you can select it via


-from=ubi10-minimal

-from=ubi10


This selects the new UBI 10 base image:


registry.access.redhat.com/ubi10/ubi-minimal

registry.access.redhat.com/ubi10


For containers, the kernel version is always the kernel version of the host.

Only the new glibc is involved here; the kernel stays the same.


I am going to switch the base image on all of my deployments to UBI 10 for a production level test.





Optimizing existing code - a bash example

Daniel Nashed – 1 June 2025 07:11:34

Once code is implemented and works, it is usually not looked at again.
Often performance issues come up later when more data is added to an application.

This isn't just true for normal applications, but also for bash scripts as the following example shows.


The script already had verbose logging, so I could figure out quickly which part of the script takes longer.
But it wasn't clear immediately what the issue was.


Pasting the code into ChatGPT and asking the right questions gave me a good indication.
The code was parsing strings in a loop. For a single invocation over a full file, using cut would be a good way to parse.


But it turned out that invoking cut for every operation caused quite some overhead and high CPU spikes.
ChatGPT had an interesting suggestion, which did not work initially, but gave me a good direction.


I replaced the invocation of cut with internal bash parsing. This not only reduces the CPU overhead but is also dramatically faster.
Analyzing and refactoring code can be very beneficial. But there needs to be optimization potential.
In my case it was simple: the server CPU spiked for about 30 seconds on that Linux machine just for a bash script rebuilding some HTML code.


I wasn't aware of this internal shell way to split strings into an array.

So asking ChatGPT and validating the ideas coming back can be very helpful.


But on the other hand, all of this only makes sense if there is optimization potential.

In my case it was easy to spot and address, and I have areas in another script that might benefit from the same type of optimization.



Existing code invoking the external command "cut"


while read LINE; do

  ENTRY=$(echo "$LINE" | cut -d'|' -f1)
  CATEGORY=$(echo "$ENTRY" | cut -d'/' -f1)
  SUB=$(echo "$ENTRY" | cut -d'/' -f2)
  COMBINED=${CATEGORY}_${SUB}
  FILE=$(echo "$LINE" | cut -d'|' -f2)
  DESCRIPTION=$(echo "$LINE" | cut -d'|' -f3)
  HASH=$(echo "$LINE" | cut -d'|' -f4)

  html_entry "$COMBINED.html" "$FILE" "$FILE" "$DESCRIPTION" "$HASH" "$SERVER_URL"

done < "$CATALOG_FILE"



New code leveraging bash-internal parsing


# read -r prevents backslash mangling
while read -r LINE; do

  IFS='|' read -r -a PARTS <<< "$LINE"

  ENTRY=${PARTS[0]}
  FILE=${PARTS[1]}
  DESCRIPTION=${PARTS[2]}
  HASH=${PARTS[3]}

  IFS='/' read -r -a PARTS <<< "$ENTRY"

  CATEGORY=${PARTS[0]}
  SUB=${PARTS[1]}
  COMBINED=${CATEGORY}_${SUB}

  html_entry "$COMBINED.html" "$FILE" "$FILE" "$DESCRIPTION" "$HASH" "$SERVER_URL"

done < "$CATALOG_FILE"
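As a standalone illustration of why the bash-internal variant wins, the following bash snippet parses one sample line both ways (the sample data is made up for this example; the field layout matches the catalog format above):

```shell
# One sample line in the catalog format: category/sub|file|description|hash
LINE='docs/admin|guide.pdf|Admin guide|abcd1234'

# Variant 1: external processes - every $( ... | cut ... ) forks new processes
CUT_CATEGORY=$(echo "$LINE" | cut -d'|' -f1 | cut -d'/' -f1)

# Variant 2: bash-internal - no forks at all
IFS='|' read -r -a PARTS <<< "$LINE"
IFS='/' read -r -a SUBPARTS <<< "${PARTS[0]}"
BASH_CATEGORY=${SUBPARTS[0]}

# Both yield the same result; in a loop over thousands of lines,
# the fork overhead of variant 1 dominates the runtime.
echo "$CUT_CATEGORY $BASH_CATEGORY"
```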


New Domino on Linux diagnostic script

Daniel Nashed – 27 May 2025 23:04:00

This is still work in progress. But I have been working on it since Engage.
I had a server hang in the middle of the night, which was hard to troubleshoot remotely from my notebook.

It would not have been easier on a Windows machine. But now it is going to be easier on Linux than on Windows.
I am adding a diagnostic menu to the Domino start script. It's going to be a separate script called from the start script.

The idea is to collect data and have an alternate way to transfer it -- even if the Domino server is down.
The first transfer option is SMTP mail via nshmailx.

But if the files get bigger, we might need another option, for example SCP.
Using SCP would not require nshmailx, but would not be as convenient.

Maybe an upload option would be a good idea, but that would require an agent on the receiving side.
There is probably no one size that fits all.

What do you think?


Image:New Domino on Linux diagnostic script

Engage session follow-up – Domino 14.5 AutoUpdate downloads

Daniel Nashed – 24 May 2025 17:40:45
Thanks to everyone who attended my 8:00 AM session on Wednesday. One topic raised during the session deserves a closer look: how Domino AutoUpdate retrieves installation artifacts.

To download product.jwt, software.jwt, and the Notes/Domino web kits, you need at least one server with outbound connectivity to the My HCL Software portal (MHS) and the HCL Domino fixlist servers.

Domino AutoUpdate supports HTTP proxy configurations, including authenticated proxies, which should work in most enterprise network environments.
All downloads are validated against the software.jwt, which includes signed metadata for all supported software packages. This model fits most connected environments.

Completely air-gapped setups are uncommon, and to date, there haven’t been strong or clearly defined requirements for full offline AutoUpdate workflows.

However, it’s still possible to override download URLs in AutoUpdate documents to manually provide software.jwt and web kits from internal sources.

To support these scenarios, I created an NGINX-based download proxy that utilizes the documented MHS API.


Initially developed for the Domino Download Script, the proxy has evolved into a flexible tool that can be deployed in several modes:

  • Internal software distribution portal
  • Backend data source for the Domino Download Script
  • Transparent proxy simulating the MHS API for Domino AutoUpdate

This makes it well-suited for secure environments requiring additional controls like antivirus scanning or staging downloads.


My personal use case is all of the above: caching downloaded web kits and hosting them from a local server.



I really appreciate your feedback and would like to hear about your specific requirements.
Please open a GitHub issue here for specific feedback.

GitHub Issues – Domino Start Script Project

https://github.com/nashcom/domino-startscript/issues

For very specific requirements, which cannot be discussed in public, ping me offline.



Related links:

Blog post introducing the download server:

https://blog.nashcom.de/nashcomblog.nsf/dx/new-project-domino-download-server.htm
Domino Download Server project on GitHub:

https://github.com/nashcom/domino-startscript/tree/main/domdownload-server



Nash!Com GitHub organization profile for open source projects

Daniel Nashed – 17 May 2025 18:31:52

There are a couple of open source projects I am involved with.

I have added a page to the organization as a quick overview of my services and my open source work.

This also includes an overview of the HCL open source projects I am involved with.


https://github.com/nashcom/

Many of those projects are Linux and container focused.

I added this overview in preparation for Engage next week.


If you are interested in Domino on Linux you should join us for the Linux round table next week.


-- Daniel




CentOS 9 Stream update broke my SSH server with custom port because of SELinux

Daniel Nashed – 17 May 2025 16:32:58

I just patched my CentOS 9 Stream server to the latest version.
The server came up, but SSH did not work any more.

It turned out that the SELinux enforced mode in combination with the policies for sshd was responsible for it.
My server runs on a custom SSH port.
I had to add that port to my SELinux configuration. Let's assume you want to add port 123.

You would need to allow the port like this:

semanage port -a -p tcp -t ssh_port_t 123

But first you need to check whether SELinux is in enforcing mode at all with this command:

getenforce
Enforcing


You should check the SELinux settings for the SSH port before and after the change via:

semanage port -l | grep ssh
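If the semanage command itself is missing, it ships in a separate package on RHEL-type distributions. A possible full sequence could look like this (the package name is the usual one on RHEL 9/CentOS Stream 9; verify it for your release):

```shell
# Install the SELinux management tools if semanage is not present
dnf install -y policycoreutils-python-utils

# Show the currently allowed SSH ports
semanage port -l | grep ssh

# Allow the custom SSH port (example: 123) and restart sshd
semanage port -a -p tcp -t ssh_port_t 123
systemctl restart sshd
```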

I have not seen this with updates on other distributions like Ubuntu.
But the latest CentOS patches caused this on one of my servers.

Maybe this helps in one or another case.

I am migrating most of my servers to Ubuntu. But I am keeping some for testing.

-- Daniel



Tool chain security dependencies in containers

Daniel Nashed – 17 May 2025 15:02:09

Building your own software from scratch with a small number of dependencies like OpenSSL and libcurl is straightforward on current Linux versions.
But as soon as you add external projects to your stack, they bring in more dependencies, which can raise security challenges.


In the container world there is strict vulnerability scanning.  Stacks like https://www.chainguard.dev/ provide great options to keep the stack you are building on secure.
But you might have external projects you rely on. You usually don't want to build everything from scratch.


Example for a dependency


The Prometheus Node Exporter is an optional component of the Domino container image.
It turns out it is built with Go, which can introduce vulnerabilities when the toolchain is not up to date.

Even if the project manages all its dependencies, an older version of the application might have an older Go run-time statically linked.
Linking Go statically is a common practice to avoid installing the run-time environment on the target system.
In my particular case the Node Exporter was outdated, and the newer version comes with a newer Go run-time statically linked.


Container scan tools


The good news is that Docker Scout and other vulnerability scanners show the CVEs and the version in which they are fixed.
glibc is dynamically linked, and patching it depends on the run-time environment. For a Linux machine this would be a normal update.

For a container image it means re-building the image with the latest Linux updates.
As a good practice, each software should show the version of the tool chain it was built with and is running on.
In this example you see the updated run-time for the current Node Exporter, which fixes the reported CVEs.
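For example, a local image can be checked like this (the image name is a placeholder, not from this post; both scanners are commonly used, pick whichever fits your environment):

```shell
# Scan a local image for known CVEs with Docker Scout
docker scout cves myregistry/domino:latest

# Alternatively with Trivy, another widely used scanner
trivy image myregistry/domino:latest
```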


Conclusion
     
  • As a developer you have to be aware of your dependencies and closely watch them
  • If it is reasonable to link dynamically, it can make a lot of sense
  • But if you expect the target to have older versions, it might be better to include them
    (for example Domino bundles the latest versions of OpenSSL, which are usually newer than what Linux ships)

  • When running containers you should scan the images and ensure you are running the latest versions
  • Making it easy for an admin to query all the dependencies is important as you see from the Node Exporter example


I have just updated the container image to use the latest Node Exporter.

Example Node Exporter


node_exporter --version
node_exporter, version 1.9.1 (branch: HEAD, revision: f2ec547b49af53815038a50265aa2adcd1275959)
  build user:       root@7023beaa563a
  build date:       20250401-15:19:01
  go version:       go1.23.7
  platform:         linux/amd64




Image:Tool chain security dependencies in containers


New Domino Container Control menu

Daniel Nashed – 17 May 2025 10:14:33

dominoctl helps you run your containers locally on Docker and Podman.
It was the only tool left without a menu. I just changed that in my final preparations for the Linux roundtable at Engage.


The menu is already added to the develop branch of the GitHub Domino Start Script project.
If there is anything missing or you have any other feedback, let me know.


https://nashcom.github.io/domino-startscript/dominoctl/

Image:New Domino Container Control menu

