Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed

 GitHub 

Nash!Com GitHub organization profile for open source projects

Daniel Nashed – 17 May 2025 18:31:52

There are a couple of open source projects I am involved with.

I have added a page to the organization as a quick overview of my services and my open source work.

This also includes an overview of the HCL open source projects I am involved with.


https://github.com/nashcom/

Many of those projects are Linux and container focused.

I added this overview in preparation for Engage next week.


If you are interested in Domino on Linux you should join us for the Linux round table next week.


-- Daniel



 CentOS  SELinux  SSH 

CentOS 9 Stream update broke my SSH server with a custom port because of SELinux

Daniel Nashed – 17 May 2025 16:32:58

I just patched my CentOS 9 Stream server to the latest version.
The server came up, but SSH did not work any more.

It turned out that the SELinux enforcing mode in combination with the policies for sshd was responsible for it.
My server runs SSH on a custom port.
I had to add that port to my SELinux configuration. Let's assume you want to add port 123.

You would need to allow the port like this:

semanage port -a -p tcp -t ssh_port_t 123

But first make sure SELinux is actually running in enforcing mode:

getenforce
Enforcing


You should check the SELinux settings for the SSH port before and after the change via:

semanage port -l | grep ssh
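
Putting the steps together, this is a minimal sketch for an enforcing system (port 123 as above; the sshd restart at the end is my addition):

# Check that SELinux is actually enforcing
getenforce

# Show the SSH port contexts before the change
semanage port -l | grep ssh

# Allow the custom SSH port
semanage port -a -p tcp -t ssh_port_t 123

# Verify the change and restart sshd
semanage port -l | grep ssh
systemctl restart sshd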

I have not seen this with updates on other distributions like Ubuntu.
But the latest CentOS patches caused this on one of my servers.

Maybe this helps in one case or another.

I am migrating most of my servers to Ubuntu. But I am keeping some for testing.

-- Daniel



Tool chain security dependencies in containers

Daniel Nashed – 17 May 2025 15:02:09

Building your own software from scratch with a small number of dependencies like OpenSSL and libcurl is straightforward on current Linux versions.
But as soon as you add external projects to your stack, you bring in more dependencies, which can raise security challenges.


In the container world there is strict vulnerability scanning.  Stacks like https://www.chainguard.dev/ provide great options to keep the stack you are building on secure.
But you might have external projects you rely on. You usually don't want to build everything from scratch.


Example for a dependency


The Prometheus Node Exporter is an optional component of the Domino container image.
It turns out it is built with Go, which can introduce vulnerabilities when the Go run-time is not up to date.

Even if the project manages all its dependencies, an older version of the application might have older components statically linked, for example an older Go run-time.
Linking Go statically is a common practice to avoid installing the run-time environment on the target system.
In my particular case the Node Exporter was outdated, and the newer version ships with a newer Go run-time statically linked.


Container scan tools


The good news is that Docker Scout and other vulnerability scanners show the CVEs and the versions in which they are fixed.
glibc is dynamically linked, so patching depends on the run-time environment. For a Linux machine this would be a normal update.

For a container image it would mean re-building the image with the latest Linux updates.
As a good practice, each piece of software should report the version of the tool chain it was built with and runs on.
In this example you see the updated run-time for the current Node Exporter, which fixes the reported CVEs.


Conclusion
     
  • As a developer you have to be aware of your dependencies and watch them closely
  • If it is reasonable to link dynamically, it can make a lot of sense
  • But if you expect the target to have older versions, it might be better to include them
    (for example Domino bundles the latest versions of OpenSSL, which are usually newer than what Linux ships)

  • When running containers you should scan the images and ensure you are running the latest versions (see the example below)
  • Making it easy for an admin to query all the dependencies is important, as you see from the Node Exporter example
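
For example, with Docker Scout a scan is a single command (the image name here is just a placeholder):

docker scout cves mycompany/domino:latest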


I have just updated the container image to use the latest Node Exporter.

Example Node Exporter


node_exporter --version
node_exporter, version 1.9.1 (branch: HEAD, revision: f2ec547b49af53815038a50265aa2adcd1275959)
  build user:       root@7023beaa563a
  build date:       20250401-15:19:01
  go version:       go1.23.7
  platform:         linux/amd64




Image:Tool chain security dependencies in containers


New Domino Container Control menu

Daniel Nashed – 17 May 2025 10:14:33

dominoctl helps you run your containers locally on Docker and Podman.
It was the only tool left without a menu. I just changed that during my final preparations for the Linux roundtable at Engage.


The menu has already been added to the develop branch of the Domino Start Script GitHub project.
If there is anything missing or you have any other feedback, let me know.


https://nashcom.github.io/domino-startscript/dominoctl/

Image:New Domino Container Control menu


 Domino  OTS 

Domino One Touch Setup (OTS) domain join token

Daniel Nashed – 11 May 2025 22:56:47

Many applications support a "join token". For example, Kubernetes creates a join token which contains the connection to the existing cluster and also the authentication needed to join the cluster.
With Domino OTS we can actually implement something very similar.


If you have an existing server, you can create an OTS JSON file pointing to the existing domain and server.
The only part that is missing is some kind of authentication.

We actually have this type of authentication: the server.id. But it would be a separate file.


So here is the idea:


  • We can encode the server.id in base64 and include it in the JSON.
  • The container entrypoint.sh script decodes it, stores it on disk and patches the JSON file.

The format looks like this and matches other OTS functionality like the password prompts.

Example:


"IDFilePath":"@Base64:AQABAC4BAAAAAAAA4..."


With this type of join token we have a single file to pass to your additional server.
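
Here is a minimal shell sketch of both sides (the JSON path serverSetup.server.IDFilePath is an assumption based on the standard OTS schema; file names are placeholders):

# Create the join token: embed server.id into an existing OTS JSON
B64=$(base64 -w0 server.id)
jq --arg id "@Base64:$B64" '.serverSetup.server.IDFilePath = $id' setup.json > setup-join.json

# What the entrypoint does conceptually: decode and store the ID file
VAL=$(jq -r '.serverSetup.server.IDFilePath' setup-join.json)
echo "${VAL#@Base64:}" | base64 -d > server.id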


How to create an OTS JSON from a server.id


If you have a server.id, you can generate all other information from the server.id and your NAB:
  • There is C-API code to read the server name from the server.id
  • With the server.id you can look up all other information from your current mail server, or let the script prompt for the right server to look up the information
  • This allows you to generate a full OTS JSON file

I wrote a LotusScript class to do exactly that. I have added it to the JSON generation database, which is available on the OpenNTF Net server via NRPC:


home.openntf.net/openntf-net!!nashcom/domino-ots.nsf


Maybe it would be a good idea to create a separate server registration database which would work with a cert.id or a Domino CA to provide a full end-to-end experience.

But the most important step is to enable the container image to consume this join token style with the @Base64: syntax.
I have just submitted the additional logic to the develop branch of the container project.


 Domino 

Configure an additional Notes port on a server

Daniel Nashed – 10 May 2025 15:51:34

The previous blog post dealt more with the background of having a second Notes TCP/IP port.
This post focuses on setting up a new Notes port end to end, using the DNUG Lab environment as an example.


The server I am configuring has two separate IP addresses on two different network cards.
But the same procedure would also work with IP addresses in the same network.


Some of the settings have to be specified via notes.ini directly.
Other settings can be configured in the UI, but result in notes.ini settings.


In my example I am using a public (159.69.82.118) and a private IP address (10.0.0.3).


The ports notes.ini setting can be managed with the port configuration in the admin client - similar to how you configure your local Notes client port.

The dialog is a bit hidden in the admin client.


Open the admin client and switch to the "Configuration" tab


In the menu select "Configuration -> Server -> Setup ports ..."


Image:Configure an additional Notes port on a server

After configuring the port, the notes.ini setting "ports" will contain two ports:


ports=TCPIP,TCPIP-LOCAL


Each port gets basic settings set automatically using the dialog.

The first line contains the settings for the port. The last part of it is a set of option bits containing port compression and encryption settings.

The second line contains the connection timeout also specified in the UI.



TCPIP=TCP, 0, 15, 0,,45088

TCPIP_TcpConnectTimeout=0,30


TCPIP-LOCAL=TCP,0,15,0,,45056

TCPIP-LOCAL_TcpConnectTimeout=0,30



Complete Notes port settings for hosting multiple ports


Because both ports by default use port 1352, you now have to bind each port to a specific IP address.
In this case we are assigning the public IP to the standard "TCPIP" port and the local IP to Port "TCPIP-LOCAL".


The prefix for all those parameters is always the port name you selected.

It's a bit confusing because the standard port itself is named "TCPIP" as well.


TCPIP_TcpIpAddress=0,159.69.82.118

TCPIP-LOCAL_TcpIpAddress=0,10.0.0.3



Specify the port used for internet protocols


Once you have another port, you want to make sure Internet traffic is sent through the external port by specifying the following notes.ini parameters.


SMTPNotesPort=TCPIP

POP3NotesPort=TCPIP

LDAPNotesPort=TCPIP

IMAPNotesPort=TCPIP



Set the cluster port


In a cluster environment you should also set the default port for cluster traffic to the new local port.


Server_Cluster_Default_Port=TCPIP-LOCAL
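
Putting it all together, the complete set of notes.ini settings from this example looks like this:

ports=TCPIP,TCPIP-LOCAL

TCPIP=TCP, 0, 15, 0,,45088
TCPIP_TcpConnectTimeout=0,30
TCPIP_TcpIpAddress=0,159.69.82.118

TCPIP-LOCAL=TCP,0,15,0,,45056
TCPIP-LOCAL_TcpConnectTimeout=0,30
TCPIP-LOCAL_TcpIpAddress=0,10.0.0.3

SMTPNotesPort=TCPIP
POP3NotesPort=TCPIP
LDAPNotesPort=TCPIP
IMAPNotesPort=TCPIP

Server_Cluster_Default_Port=TCPIP-LOCAL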



Check and complete port settings in server document


The server document contains a list of ports.

The driver field will be filled in by AdminP. But the other settings need to be completed manually.


Each port should map to a Notes named network (NNN), which should be the same for the same type of port on all servers located close to each other -- for example in the same LAN or cluster.

Servers in the same NNN see each other and can route mail without connection documents.


But usually I would recommend creating connection documents for each server to be in full control.



Image:Configure an additional Notes port on a server



Start/restart ports or better restart server


Restarting a port to bind it only to one IP is tricky.

You can stop and restart ports. But usually it is easier to restart your server.



Check port availability


First run "show port TCPIP-LOCAL".

If the port is not yet started try to start it manually:


start port TCPIP-LOCAL


Once the port is enabled on multiple servers, you can trace the connection.

The extended syntax uses the port delimiter "!!!".

The full path to a database is port!!!server!!db.nsf.

Ports are usually omitted; the server then chooses the right port or only has one port.
But this syntax also works when tracing connections:


trace TCPIP-LOCAL!!!linus.lab.dnug.eu/dnug-lab




Connection documents


Connection documents contain the port to use for the connection. Usually you see "TCPIP" in those connection documents when only one port is available.

Now you can switch the connection documents to the new local port to use the local connection between servers.



Image:Configure an additional Notes port on a server

TCP/IP Settings for IPv6


Running IPv6 introduces additional challenges and deserves its own blog post.

But here is the basic information for IPv6 addresses, which can also be assigned to separate ports.

With one Notes port you would only need to enable IPv6. But with multiple ports you will also need to bind the IPs to separate ports.


https://help.hcl-software.com/domino/14.0.0/admin/plan_examplesofusingnotesinivariableswithipv6_c.html
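
Based on the linked documentation, a rough sketch of an IPv6 setup looks like this (the address is a placeholder; check the documentation for the exact syntax details):

TCP_EnableIPV6=1
TCPIP_TcpIpAddress=0,[2001:db8::1]:1352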


Benefits of running Domino with multiple TCP/IP ports

Daniel Nashed – 10 May 2025 12:37:49

Introduction


Support for multiple TCP/IP ports has been part of HCL Domino since the early days. Back then, it was first essential to support multiple simultaneous modem connections. It also proved valuable for clustered servers using dedicated network cards.
While today’s networks offer 1 Gbit/s or even 10 Gbit/s speeds—making multiple ports less necessary from a raw bandwidth perspective—there are still compelling reasons to use multiple Notes ports in modern environments.



Historical Context and Evolution


In the days of 10 Mbit/s Ethernet, splitting user and server traffic across different ports and network cards made a lot of sense. This was sometimes even done with dedicated network cables between servers as a private LAN connection.

It helped optimize limited bandwidth and reduce contention. While raw network speeds have improved dramatically, the architectural benefits of multiple ports remain relevant in specific scenarios.



Performance Benefits


The main advantage of using multiple Notes ports is to separate user-to-server traffic from server-to-server traffic. This separation improves performance and scalability, especially under high load.

Each port has its own listener and thread pool, which allows more granular control and scalability for NRPC (Notes Remote Procedure Call) traffic.
You can assign specific ports to different types of connections—for example, routing all cluster replication traffic through a dedicated Notes port with a separate IP address and network card.

This strategy remains highly effective in optimizing performance in Domino environments with high cluster and server activity in general.


Introducing a separate Notes port on the same network card with a separate IP address is already beneficial, because the separate TCP/IP listener and queue and the dedicated thread pool provide most of the gain.
But depending on your hardware or network setup, you might already have separate network cards.



Cloud and Cost Considerations


In many cloud environments—particularly with service providers—data ingress and egress are billed separately. However, internal traffic (e.g., within a private 10.x.x.x network) is often free.
By setting up a dedicated Notes port for internal communication, you can route intra-server traffic over the private network. This approach helps reduce monthly costs while preserving performance.



Security and Performance Optimization


External-facing ports should always use encryption, and depending on your setup, enabling compression may also be beneficial.
However, for internal server-to-server connections — such as those between Citrix-hosted Notes clients and back-end servers — disabling compression and even encryption can significantly reduce CPU load and improve performance.


Of course, this optimization assumes you're operating in a trusted network environment.
Your security team must approve any unencrypted traffic. In some cases, traffic is already protected by VPN tunnels, in which case additional encryption at the Notes level may be redundant.

Having support for multiple Notes ports enables these optimizations without compromising external security policies.



Practical Example: DNUG LAB at Hetzner


In our DNUG LAB hosted at Hetzner, we implemented a dedicated internal network port for server-to-server communication using a private 10.x.x.x address.
This internal port is unencrypted and uncompressed, as it is isolated from the external network via firewall and network segmentation.

Even in a small lab environment, this setup has helped reduce costs and improve performance. All servers are configured with a second Notes port, and all connection documents point to the internal network.



Additional Security for Different Ports


You can define port-specific access controls, including group-based restrictions. While network segmentation is usually sufficient, the ability to explicitly restrict who can access each port adds another layer of security.

This is particularly useful in cloud deployments or large clustered environments, where server-to-server traffic can significantly exceed typical user traffic due to just-in-time streaming replication and inter-server communication.



Important Note: Directory Assistance Configuration


Be cautious with Directory Assistance (DA) configurations. If you specify a remote server for DA, it may use remote databases by default. This introduces additional load and creates potential failover issues.

To force DA to use a local replica, enter a single asterisk (*) in the server name field. This instructs Domino to always use the local copy, avoiding unnecessary inter-server traffic—even if both servers are in the same data center.



Conclusion


Domino has supported multiple network ports since its inception, and they still offer distinct advantages in specific scenarios.

For most standard servers, a single port is sufficient. But for large clusters, hosted environments, or cost-sensitive cloud deployments, using separate Notes ports can greatly enhance performance, optimize traffic routing, and reduce operational costs.


A follow-up post will walk through the steps to configure a separate Notes port. This article focused on the "why" — next, I will dive into the "how."



 OCR  Tika 

tesseract -- Teaching Tika to read image formats

Daniel Nashed – 10 May 2025 09:37:14

While looking into what I can do with Domino IQ, I did some experiments with LLMs in general.

Out of the box, Domino IQ today does not support images.
But sending images to an LLM might not even be the best option, depending on your use case.


If you are looking for real image processing, a visual model might be a good choice.

But in many cases you are looking for "text processing" in images, for example when processing invoice scans or similar documents.


There is something that has been available for a very long time called "OCR", which we might have forgotten during the AI hype.

It turns out that OCR in combination with an LLM could be a far more efficient and effective way to get text out of images.


So the idea is to pre-process images before running an LLM query.


Domino uses Apache Tika in the background to extract data from many formats.

But out of the box Tika cannot process images.



Tesseract -- an interesting project with a long history


It turns out that there is a free package which works stand-alone on the command line, as a C/C++ library to include in your applications, and also integrates with Tika.

In fact it is well integrated into Tika, even though it is a separate project.


One of the reasons it is separate is that it does not fit into the Java-based Apache project.

But Tika automatically detects it when installed, and it is included, for example, in the Ubuntu distribution.


You find details about the project here --> https://tesseract-ocr.github.io

But let me show you how simple it is to use it.


Once you have it installed on Linux, you can just run it from the command line.

The command line is pretty simple: you specify the input file and an output text file name.


Example:


tesseract invoice.png invoice
cat invoice.txt
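
Tesseract can also be told which language data to use via the -l option, for example with the German language package (tesseract-ocr-deu, which the container examples below install):

tesseract invoice.png invoice -l deu
cat invoice.txt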


Tika directly integrates with it and finds it once installed.


Notes/Domino leverages Apache Tika in the background, running it on localhost.
You should not try to use the Domino Tika instance, because it is controlled by the full-text index back-end of Domino and is started and stopped as needed.

But you can start your own Tika instance.


You can either download the latest version or use the one included in Domino.

In this example I am downloading it manually before running it.



Running Tika stand-alone


curl -L https://dlcdn.apache.org/tika/3.1.0/tika-server-standard-3.1.0.jar -o tika-server.jar
java -jar tika-server.jar > tika.log 2>&1 &


Tika provides a simple REST-based API. Notes/Domino uses the exact same interface.


With this interface you can get text from a file you send in a binary POST.
By the way, there are also other endpoints which classify attachments in detail.


For a full reference of the Tika REST API check this link -> https://cwiki.apache.org/confluence/display/TIKA/TikaServer

But in our case we just want to send a plain request to get the text from an image.

With Tesseract installed, Tika does support image formats.


This interface can be used from the command line or from your own applications -- provided you find a way to send binary data.

The LotusScript HTTP request class currently does not support sending binary data.

And it would be much more efficient to run the extraction on the server side.


But this is a general free option you can leverage in your applications, not just for scanning images.

You can use Tika for your needs. But you need your own instance running on a different port (because the embedded instance is currently only usable by Domino FT indexing).



curl -T invoice.png http://localhost:9998/rmeta/text | jq
curl -T invoice.png http://localhost:9998/rmeta/text | jq -r '.[0]."X-TIKA:content"'


Domino Tika supporting image indexing


But this does not only work on the command line. Once you have installed Tesseract, Notes/Domino can also index images in attachments.


You could install Tesseract OCR on Linux and have Domino Tika use image processing.


To get this working you also have to include those attachment types in FT indexing.
They are disabled by default because Tika normally cannot process them, so they are not sent to Tika for indexing.

But there is a way to specify your own list of attachment types.


In my testing Tesseract made the CPU quite busy for a couple of test attachments.

top showed that Tika invoked it multiple times in parallel during attachment re-indexing (updall -x db.nsf).



FT_USE_ATTACHMENT_WHITE_LIST=1

FT_INDEX_FILTER_ATTACHMENT_TYPES=*.jpg,*.png,*.pdf,*.pptx,*.ppt
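
As a sketch, the same settings can also be applied on the server console with the standard set config syntax, followed by triggering re-indexing:

set config FT_USE_ATTACHMENT_WHITE_LIST=1
set config FT_INDEX_FILTER_ATTACHMENT_TYPES=*.jpg,*.png,*.pdf,*.pptx,*.ppt
load updall -x db.nsf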



Building a container image with Tesseract support


The Domino container project supports adding Linux packages. Sadly the package is not available for Red Hat UBI.

But you can use Ubuntu as the base image of your container and just have the packages added at build time.



./build.sh menu -from=ubuntu -linuxpkg "tesseract-ocr tesseract-ocr-eng tesseract-ocr-deu"




Running your own Tika Server with Tesseract support


Here is a simple test using an Ubuntu docker container.

This could eventually be turned into its own container image.

You also need to download the Tika server separately. But in a Domino container you would already have Tika installed.


docker run --rm -it ubuntu bash

apt update

apt install -y openjdk-21-jdk curl jq tesseract-ocr tesseract-ocr-eng tesseract-ocr-deu



Alpine Linux would be the better choice for a container


Alpine also supports Tesseract, but it does not include Tika directly either.
Here is a simple command line to install it. Alpine is much lighter in terms of installed packages, as you will notice when you run these commands.



docker run --rm -it alpine sh

apk update
apk add openjdk21 curl jq tesseract-ocr tesseract-ocr-data-eng tesseract-ocr-data-deu
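
From there, downloading and running Tika works the same way as shown earlier (a sketch using the same Tika version):

curl -L https://dlcdn.apache.org/tika/3.1.0/tika-server-standard-3.1.0.jar -o tika-server.jar
java -jar tika-server.jar > tika.log 2>&1 &
curl -T invoice.png http://localhost:9998/rmeta/text | jq -r '.[0]."X-TIKA:content"'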




My Conclusion & your feedback


I would not add Tesseract to a Domino server for Tika and change the Tika indexing globally.

This was just to show how far we could go. And maybe HCL wants to look into the Tesseract option in some way.

It could also be built into Notes/Domino itself to allow text extraction from images.


I would look into Tika as a separate service you use for your own applications and leave FT indexing alone for now.

Tika itself with or without this extension is another tool in your arsenal for building cool applications.


The tika-server.jar comes with every Notes client and Domino server.
You could run it for your own applications today under your control.
The only challenge is really sending binary data POST requests to Tika.


Local Tesseract support on a server could invoke the binary like Tika does.
Or you could use their C library to add it to your own C-API based solutions.


I thought about building a DSAPI filter to provide Tika functionality.

And I would be interested to hear if this would make sense from your point of view.

I already have LibCurl code to talk to Tika from a performance troubleshooting project.

It can run on databases to extract attachment data and write the results into a document.


This blog post is meant to raise awareness for Tika and Tesseract.
It might be food for thought for your own integrations and requirements.


Did anyone work with Tesseract and/or Tika outside Domino before?

What is your feedback and what are your use cases?


I could build a simple Docker container for reuse, putting both components into one new Tika service.
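
A minimal sketch of such an image could look like this (untested; the packages match the Alpine example above, and the -h host option is an assumption based on the Tika server documentation):

FROM alpine
RUN apk add --no-cache openjdk21 curl jq tesseract-ocr tesseract-ocr-data-eng tesseract-ocr-data-deu
ADD https://dlcdn.apache.org/tika/3.1.0/tika-server-standard-3.1.0.jar /tika-server.jar
EXPOSE 9998
CMD ["java", "-jar", "/tika-server.jar", "-h", "0.0.0.0"]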

But again, the bigger challenge is how to access it from Notes/Domino.



 Docker  LLM  NVIDIA 

Docker Desktop LLM support with NVIDIA

Daniel Nashed – 5 May 2025 10:48:17

After my first simple test, I updated Docker Desktop on my lab machine.
There is a GPU option in the experimental settings which shows up when you have a matching GPU.


Once it is enabled, the following command loads the model into the NVIDIA GPU:


docker model run ai/qwen3:0.6B-Q4_0


To check details I ran a Docker container with Ubuntu, also using the GPU.


Besides nvidia-smi to check the card and driver version, I installed "nvtop" (which is part of Ubuntu) to see the GPU performance.


docker run --gpus all -it --rm ubuntu bash
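
Inside that container, installing and running the GPU monitor is straightforward (nvtop comes from the Ubuntu repositories):

apt update && apt install -y nvtop
nvtop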


Looking a bit deeper into the installed binaries, you can indeed see that Docker also leverages the llama.cpp project (https://github.com/ggml-org/llama.cpp).

Interestingly, the GPU option mentions that additional software will be downloaded. But so far I only found these files:



Directory of C:\Program Files\Docker\Docker\resources\model-runner\bin

05/05/2025  12:32    <DIR>          .
05/05/2025  12:32    <DIR>          ..
05/05/2025  12:31                71 com.docker.llama-server.digest
05/05/2025  12:31         1.838.320 com.docker.llama-server.exe
05/05/2025  12:31            44.784 com.docker.nv-gpu-info.exe
05/05/2025  12:31           481.520 ggml-base.dll
05/05/2025  12:31           492.272 ggml-cpu.dll
05/05/2025  12:31            65.776 ggml.dll
05/05/2025  12:31         1.212.144 llama.dll
              7 File(s)      4.134.887 bytes


Image:Docker Desktop LLM support with NVIDIA


Image:Docker Desktop LLM support with NVIDIA


 Docker  LLM 

Docker - a new player in the LLM business

Daniel Nashed – 5 May 2025 09:30:28

Docker has a new feature in beta: running models on Docker.
There is not much information about the underlying technology used.

But during installation you can see that it installs a llama-server (which Ollama and also Domino IQ are using).


Here is a link to the official documentation --> https://docs.docker.com/model-runner/

Docker provides a registry for models. For example: https://hub.docker.com/r/ai/qwen3

To pull a model you just use the new model command. The following is a good small model to test:


docker model pull ai/qwen3:0.6B-Q4_0



Once downloaded you can list models


docker model list

MODEL NAME          PARAMETERS  QUANTIZATION    ARCHITECTURE  MODEL ID      CREATED     SIZE
ai/qwen3            8.19 B      IQ2_XXS/Q4_K_M  qwen3         79fa56c07429  4 days ago  4.68 GiB
ai/qwen3:0.6B-Q4_0  751.63 M    Q4_0            qwen3         df9f2a333a63  4 days ago  441.67 MiB



There are multiple ways to access the AI components.


1. Command Line


From the command line you can just start a model, very similar to what Ollama does:


docker model run ai/qwen3:0.6B-Q4_0



2. Within containers


From within containers you can just use the API endpoints against http://model-runner.docker.internal/

For example the OpenAI endpoint:


POST /engines/llama.cpp/v1/chat/completions



3. Docker Socket


curl --unix-socket $HOME/.docker/run/docker.sock \

   localhost/exp/vDD4.40/engines/llama.cpp/v1/chat/completions



4. Expose a TCP socket on Docker host loopback interface


curl http://localhost:12434/engines/llama.cpp/v1/chat/completions
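
Since this is an OpenAI-compatible endpoint, a full request could look like this (a sketch using the small model pulled above):

curl http://localhost:12434/engines/llama.cpp/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ai/qwen3:0.6B-Q4_0",
    "messages": [{"role": "user", "content": "Say hello from Docker Model Runner"}]
  }'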


First look results


This looks like a great new option to run LLM models.


For my first test it looked like it was not using my GPU.

But even on my very old ThinkPad (I will test with the new GPU machine) the performance with this small model was OK.


This is just the beginning and there is more to discover. I just took a quick peek into it.


The integration into the registry and having everything from one vendor alone is interesting.

In addition it is part of the Docker stack, and companies would not need to use an open source project like Ollama directly.


This sounds like a smart Docker move to me.


Below are some screenshots from my test this morning.



Image:Docker - a new player in the LLM business


Image:Docker - a new player in the LLM business

Image:Docker - a new player in the LLM business

