Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed

 Loki 

Loki Integration next steps

Daniel Nashed – 7 February 2026 22:42:57

The first integration was based on promtail.
But meanwhile Alloy is the new tool. It is a 400 MB binary.

I have implemented Alloy to read log files. But there might be better ways to integrate it.

Detlev came up with the idea of annotating each log line with the process ID.
That's a bit tricky, but there is a way to implement it: pid.nbf contains the process IDs of all running Domino processes.
But Alloy or Promtail can't really annotate PIDs.

A custom program could provide this mapping by reading pid.nbf and evaluating the process ID/thread ID information.

The Domino console.log file rotates in some weird way.
The Start Script logs to notes.log, which does not rotate at run-time.


This brings up another idea: why write the log first before annotating it?

server | nshlog > /local/notesdata/notes.log

A small C++ program can annotate the log, write it to the log file, and also push it to Loki in parallel.

Pushing to the Alloy client didn't work out. Even though the Alloy client is 400 MB, the HTTP endpoint wasn't configurable.
But I came up with another way.

Loki HTTP API

https://grafana.com/docs/loki/latest/reference/loki-http-api/#ingest-logs

Logs can also be pushed directly to Loki, which turns out to be a more direct way.
The only challenge is to temporarily store logs in case Loki is not reachable.
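
For reference, this is the payload shape the push endpoint expects. A minimal sketch -- the host name and the "job" label are illustrative, and the timestamp is Unix epoch time in nanoseconds:

curl -X POST http://loki.example.com:3100/loki/api/v1/push \
  -H "Content-Type: application/json" \
  --data-raw '{"streams":[{"stream":{"job":"domino"},"values":[["1770500577000000000","Database server started"]]}]}'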

Other benefits of a custom annotator

Instead of trying to parse the timestamps -- which is not easy across different locales -- the annotation can use the ingestion time, because the log arrives almost in real time.

This makes annotation a lot easier. The resulting format is JSON and can be pushed directly via libcurl.

I wrote a first implementation of nshlog. The next step is to decouple the read part from the write part and run the push in a separate thread, to ensure no write operation gets stuck.
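
Here is a minimal sketch of that decoupling, assuming one reader thread feeding a queue and one pusher thread draining it. The names and the PushToLoki() stub are illustrative and not nshlog's actual code:

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

static std::queue<std::string> g_queue;   // lines waiting to be pushed
static std::mutex              g_mutex;
static std::condition_variable g_cond;
static bool                    g_done = false;

// Stub: in a real program this would be a libcurl POST to
// /loki/api/v1/push with the JSON payload shown in the curl example above.
static void PushToLoki(const std::string &line)
{
    (void)line;
}

// Reader: consume stdin (e.g. "server | nshlog"), write the log line,
// and hand it to the pusher without ever blocking on the network.
static void ReaderLoop()
{
    std::string line;
    while (std::getline(std::cin, line))
    {
        std::cout << line << "\n";   // stands in for writing notes.log
        {
            std::lock_guard<std::mutex> lock(g_mutex);
            g_queue.push(line);
        }
        g_cond.notify_one();
    }
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        g_done = true;
    }
    g_cond.notify_one();
}

// Pusher: drain the queue and send lines to Loki. A slow or unreachable
// Loki only grows the queue; it never stalls the reader.
static void PusherLoop()
{
    std::unique_lock<std::mutex> lock(g_mutex);
    while (!g_done || !g_queue.empty())
    {
        g_cond.wait(lock, [] { return g_done || !g_queue.empty(); });
        while (!g_queue.empty())
        {
            std::string line = std::move(g_queue.front());
            g_queue.pop();
            lock.unlock();
            PushToLoki(line);
            lock.lock();
        }
    }
}

int main()
{
    std::thread pusher(PusherLoop);
    ReaderLoop();
    pusher.join();
    return 0;
}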

The first tests look good. This might end up being another open-source GitHub project, which could avoid using the huge Alloy client.


Image:Loki Integration next steps
 NIFNSF 

Leftover NIFNSF .ndx files after DBMT runs

Daniel Nashed – 6 February 2026 22:29:51

We ran into this in a larger customer environment, and it turned out to be a bug.
The newer name format for NIFNSF files is the full database name plus the .ndx extension.

Example:  names_nsf.ndx

The old format, which is also created during DBMT runs, looks like this: n0007264.ndx

Once the compact is finished, the file can be safely deleted.
The issue is solved in the upcoming 14.5.1 release.

Here is the SPR and TN for reference

SPR#JPAIDK8EV9 / https://support.hcl-software.com/csm?id=kb_article&sysparm_article=KB0128064

The leftover .ndx files can have significant size. I just removed the files on my Linux servers.

Here is a simple command line to first find all matching files, and a second command line to delete them.

Example for Linux:

find /local/notesdata -name "n[0-9][0-9][0-9][0-9][0-9][0-9][0-9].ndx" -type f -mtime +1 -print
find /local/notesdata -name "n[0-9][0-9][0-9][0-9][0-9][0-9][0-9].ndx" -type f -mtime +1 -delete



 Grafana  Loki 

Is anyone using Grafana Loki for Domino Logs

Daniel Nashed – 6 February 2026 01:53:52

Grafana Loki can be a helpful tool to collect, search and visualize logs.
Has anyone looked into it already? In general? For Notes logs?


I have added an Alloy collector to collect Notes console logs, and I am looking into whether I want to annotate the log lines in some way.
If you are using it, I would be very interested to hear from you.

Besides Domino logs, I have looked into writing NGINX logs in JSON format to push them to Loki in structured form.


Here is an NGINX configuration example:


log_format loki_json escape=json
  '{'
    '"time":"$time_iso8601",'
    '"remote_addr":"$remote_addr",'
    '"request":"$request",'
    '"status":$status,'
    '"method":"$request_method",'
    '"uri":"$uri",'
    '"bytes_sent":$bytes_sent,'
    '"request_time":$request_time,'
    '"upstream_time":"$upstream_response_time"'
  '}';

access_log /nginx-logs/access.json loki_json;
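
A resulting access log line then looks like this (the values are illustrative):

{"time":"2026-02-06T01:53:52+01:00","remote_addr":"192.0.2.10","request":"GET /names.nsf HTTP/1.1","status":200,"method":"GET","uri":"/names.nsf","bytes_sent":4711,"request_time":0.004,"upstream_time":"0.003"}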


And I have uploaded an Alloy collection configuration here -->
https://github.com/nashcom/domino-grafana/blob/main/loki/alloy/nginx_config.alloy
It uses environment variables which are set by my exporter at startup.

I have played with the NGINX logs, and I could see getting HTTP request logs from Domino as well.


Domino log.nsf with event meta data?


But discussing with an admin buddy today, we had another idea which could be interesting.
Instead of reading the console.log, we could read from log.nsf and get the event type, severity etc. from the log document.


Additional logs?

We could do the same with mail routing logs, replication logs and security logs.
Would it make more sense to get structured data with event type and severity?

So far I am just getting console.log. But we could write the other log output to JSON files and collect them, eventually having one log file to scrape per log type.
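
For illustration, such a structured event line could look like this (a purely hypothetical format, not an existing Domino output):

{"time":"2026-02-06T01:53:52Z","event_type":"Mail","severity":"Warning (high)","server":"my-domino-server","message":"Router: Unable to deliver message"}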

In contrast to the Splunk universal forwarder, which offers a way to push data to the forwarder directly, with Loki we need a file.
But the same kind of interface could later be used for other integrations.


There is also a C API way to register for events to retrieve. I would need to look into whether this might be the better integration.
But first I am looking for feedback on what type of logging admins would be interested in pushing to Loki, Splunk or other tools.

I looked into Splunk earlier. It has a simple-to-use interface to talk to its universal forwarder.


But I want to establish purpose before action.

a.) Would you be OK with just a log file with every console message?
b.) Or would you want a more granular, categorized log filtered by Domino event generation, captured either via the C API or read from log.nsf?


Right now I am just using simple log forwarding.


Image:Is anyone using Grafana Loki for Domino Logs

In addition we could turn message logs and replication logs into Loki data.




Image:Is anyone using Grafana Loki for Domino Logs

Configure the Domino container to wait for OTS

Daniel Nashed – 4 February 2026 08:22:34

The Domino container image always waited a bit at first start until an OTS file showed up.
This was due to the expansion of the notes data tar file.

But there was no explicit feature to let the server wait longer for an OTS file.
If no OTS was present when starting the server binary, it fell back to the legacy Java-based listener for remote Domino setup.


A new variable DOMINO_WAIT_FOR_OTS=1 can now be used to always wait for OTS.

This works for any kind of container where the admin wants to provide the OTS JSON through a trusted channel like docker cp / kubectl cp.
In case of an additional server setup, this approach could also copy the server.id into the container before copying the OTS file, which triggers the setup to continue.
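
For example (the container and pod names are illustrative):

docker cp DominoAutoConfig.json domino:/local/notesdata/
# or on Kubernetes:
kubectl cp DominoAutoConfig.json domino-0:/local/notesdata/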

The trigger files are the following two possible JSON files in the Domino server's data directory:


  • DominoAutoConfig.json
  • DominoAutoConfigTemplate.json
     
This formalizes the wait as a feature which also works when the configuration does not happen immediately.
It doesn't replace any other setup method; it is another way to add flexibility and avoid a remote listener by using only a trusted channel.


Here is an example of how it looks in the container log:


Waiting for OTS JSON in /local/notesdata

... waiting 10 seconds for OTS
... waiting 20 seconds for OTS
... waiting 30 seconds for OTS

OTS configuration found: /local/notesdata/DominoAutoConfig.json

 USB 

USB Performance Part II

Daniel Nashed – 3 February 2026 22:20:51





I ordered three additional sticks on Amazon and did a quick test. The results are quite interesting.

All of them have been quite OK. Some are better than others, and we have a clear winner: the Samsung FIT stick is the smallest and most expensive, but has the best performance.

Some of the other sticks are a lot cheaper and some have some latency issues.
The read rate of the Kingston stick is pretty impressive.

The picture I took of the brand-new sticks shows they will pick up scratches soon, even if you can't see them in the picture yet.
All of them would fit on a key chain. The smallest one is the little Kingston guy.

If you need a permanent solution, the Samsung FIT would be my choice.
But for a conference or for an emergency stick on a key chain, Kingston or Intenso sounds like a plan.



Image:USB Performance Part II     Image:USB Performance Part II                   


 Grafana  Alloy  Loki 

Grafana Alloy configuration with environment variables

Daniel Nashed – 3 February 2026 17:21:48

My first approach was to replace variables using envsubst.
But it turns out there is an easier and better method.
You can reference environment variables, and it even allows you to specify empty values.

This is very helpful, especially for container configurations.
You can just define those variables in your container configuration -- for example a custom trusted root file and an authentication token.

The "trick" here is to use env("xyz") in the configuration.

The Domino server container entrypoint.sh makes sure the variables have a meaningful default -- for logging, for example.



logging {
  level = env("ALLOY_LOG_LEVEL")
}

loki.write "loki" {
  endpoint {
    url          = env("ALLOY_PUSH_TARGET")
    bearer_token = env("ALLOY_LOKI_TOKEN")

    tls_config {
      ca_file = env("ALLOY_LOKI_CA_FILE")
    }
  }
}

local.file_match "logfiles" {
  path_targets = [
    {
      __path__ = env("ALLOY_LOKI_LOGFILE"),
      job      = env("ALLOY_LOKI_JOB"),
    },
  ]
}

loki.source.file "domino_log" {
  targets    = local.file_match.logfiles.targets
  forward_to = [loki.write.loki.receiver]
}
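
The matching container environment could look like this (the values are illustrative; empty values are allowed as mentioned above):

ALLOY_LOG_LEVEL=info
ALLOY_PUSH_TARGET=http://grafana.example.com:3100/loki/api/v1/push
ALLOY_LOKI_TOKEN=
ALLOY_LOKI_CA_FILE=
ALLOY_LOKI_LOGFILE=/local/notesdata/notes.log
ALLOY_LOKI_JOB=domino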

 Grafana  Loki 

Grafana Loki - Central log collection

Daniel Nashed – 3 February 2026 01:25:55

Grafana Loki is pretty cool. I looked into it earlier and just revisited it this week.

It's a bit comparable to Splunk, but completely open source and part of the Grafana family.
Loki is well integrated into Grafana and can be a data source for dashboards.


https://grafana.com/oss/loki/

There are two different client components to collect logs.

Promtail is the older component. The newer component is Grafana Alloy, which can also collect other logs.
For K8s metrics endpoints, Prometheus Node Exporter is still the tool of choice.


I have just added Grafana Alloy to the container project. It's a simple install option (-alloy) and needs a single environment variable for configuration.


ALLOY_PUSH_TARGET=http://grafana.example.com:3100/loki/api/v1/push

On the server side you just need a Loki instance, which Grafana leverages as a data source.

The Domino Grafana project docker-compose stack is prepared for Loki.




Image:Grafana Loki - Central log collection
 USB  NVMe 

USB and SSD performance compared

Daniel Nashed – 1 February 2026 21:20:27

For a workshop I am looking into what type of USB sticks we want to use to boot Ubuntu from a stick.

Prices and performance differ, and there are orders of magnitude between them.
After testing I just ordered a couple of additional USB sticks for more testing.

But the list below shows the difference quite well.

There is an easy-to-use Windows tool to quickly test the performance: "winsat".

  • It's pretty clear that a USB 2.0 stick is very slow. That's not just because of the USB 2.0 standard; it also greatly depends on the chips used.
  • Good USB 3.0 hardware performs dramatically better.
  • An older Samsung NVMe is pretty good.
  • A current NVMe plays in a different league.

I might write up another post.


winsat disk -drive E



--- Old USB 2.0 stick ---


Very slow even for copying data



Disk  Random 16.0 Read                          8.97 MB/s
Disk  Sequential 64.0 Read                     17.79 MB/s
Disk  Sequential 64.0 Write                     5.98 MB/s
Average Read Time with Sequential Writes       12.861 ms
Latency: 95th Percentile                      596.072 ms
Latency: Maximum                             1170.906 ms
Average Read Time with Random Writes          173.835 ms
Total Run Time 00:08:10.41



--- Current Samsung FIT stick ---


Quite good transfer rates and latency



Disk  Random 16.0 Read                        53.69 MB/s
Disk  Sequential 64.0 Read                   147.78 MB/s
Disk  Sequential 64.0 Write                   58.54 MB/s
Average Read Time with Sequential Writes       3.451 ms
Latency: 95th Percentile                       5.336 ms
Latency: Maximum                              11.383 ms
Average Read Time with Random Writes           3.572 ms
Total Run Time 00:00:44.84



--- NVMe internal disk on an older notebook ---

I would have expected better read performance.

But the latency is dramatically better than for a USB stick!



Disk  Random 16.0 Read                       321.10 MB/s
Disk  Sequential 64.0 Read                   433.78 MB/s
Disk  Sequential 64.0 Write                   97.33 MB/s
Average Read Time with Sequential Writes       0.620 ms
Latency: 95th Percentile                       1.839 ms
Latency: Maximum                              14.415 ms
Average Read Time with Random Writes           0.591 ms
Total Run Time 00:00:40.08



--- NVMe internal disk on my new notebook ---

Dramatic increase in read and write performance.

Another 10 times better latency as well!



Disk  Random 16.0 Read                       1508.35 MB/s
Disk  Sequential 64.0 Read                   4414.35 MB/s
Disk  Sequential 64.0 Write                  1138.50 MB/s
Average Read Time with Sequential Writes        0.081 ms
Latency: 95th Percentile                        0.152 ms
Latency: Maximum                                1.208 ms
Average Read Time with Random Writes            0.084 ms
Total Run Time 00:00:07.33

 Domino  Linux  OTS 

Domino on Linux OTS changes for Domino 14+

Daniel Nashed – 1 February 2026 20:57:54

The Domino Start Script and the container have built-in OTS support.
There are OTS files for first-server and additional-server setup, which contain the SERVERSETUP_XXX variables as placeholders for the values you get prompted for at setup.
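
For reference, a few examples of such variables (the names follow the HCL OTS documentation; the values are illustrative):

SERVERSETUP_SERVER_TYPE=first
SERVERSETUP_SERVER_NAME=domino-server
SERVERSETUP_SERVER_DOMAINNAME=NashCom
SERVERSETUP_NETWORK_HOSTNAME=domino.example.com
SERVERSETUP_ORG_ORGNAME=NashCom
SERVERSETUP_ADMIN_LASTNAME=Admin
SERVERSETUP_ADMIN_PASSWORD=secret-password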

Domino 14.0 introduced new OTS functionality, which makes OTS setups more flexible --> https://github.com/HCL-TECH-SOFTWARE/domino-one-touch-setup?tab=readme-ov-file#document-operations

Find document operations support formulas since Domino 14.0, which makes lookups more straightforward. Otherwise server names would need to match exactly,
which is problematic for servers with C= and OU= name components.
The find operation with a formula is also much more flexible in general.

Separating the OTS configurations for Domino 14.0+ and older versions allows adding more specific configuration.

There are additional options in Domino 14.0+ which we might want to use over time.
In addition, some configuration settings are not needed on Domino 14+ -- for example iNotes redirect databases.

Below is how this is planned to look.

The menu structure is extensible and I just added new JSON entries and new OTS JSON files.
I also removed the default environment variables for additional servers, because they never match existing values.

The env file for additional servers did not make much sense. Those variables all depend on your environment and need to be typed in anyway.
For a first server it can make sense to quickly test a setup and have an example for each parameter.


Because some settings are not a good fit for larger servers, I added a way to specify an "info" in the menu configuration file.
And I added an info to all of the OTS configurations, as you can see below.

I am submitting it to the develop branch of the Domino container image and the start script.

dominoctl setup


[1] Domino 14+ First server JSON
[2] Domino 14+ Additional server JSON
[3] Domino 12.x

Select [1-3] 0 to cancel? 1


Info: Small server config - For a production server review tuning like: NSF_BUFFER_POOL_SIZE_MB=1024

SERVER_NAME: my-domino-server


 DKIM 

DKIM keys with RSA 2048 are now recommended

Daniel Nashed – 1 February 2026 18:33:57

There are two types of DKIM keys you can use:


  • RSA keys are the classical key type everyone supports
  • Ed25519 keys are based on elliptic curve crypto and are much shorter with better key strength.
     

Not every server supports Ed25519 keys yet. To ensure best compatibility,
you either have to stay with RSA keys or use dual key signing with an Ed25519 key and an RSA key.

Domino DKIM supports both key types and I am running dual keys.


Earlier the best practice was to use an RSA 1024 key, as long as it was sufficiently strong.
Now some providers require RSA 2048 keys to be fully compliant.



Why RSA 2048 keys are a challenge


The maximum length of a single string in a DNS TXT record is 255 bytes. An RSA 1024 key and an Ed25519 key fit into a single string.

But an RSA 2048 key needs to be split into multiple parts.


This is usually not a big deal -- but it depends on your DNS provider's interface.


  1. DNS TXT records need to be enclosed in quotes
     
  2. When splitting the DNS TXT record, each part needs to be quoted on its own.


How it looks and how to query DKIM TXT records



nslookup -type=txt rsa20260201._domainkey.nashcom.de


rsa20260201._domainkey.nashcom.de       text = "v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzr/zFXV9H1HSC54U9qxSPsRNs/bngeNqJfTe8mV058hPnBPp5m2CBfAZZUHvQ1gB7pic5nUJ5rX7NuSWFB/W+9kf0UG92dLWKseUT6h7QoNIUlz0bOnNV1aji62ZUWEf1wL6iwLmbHwLYO0l8wUreoWtvwNpsnJqeW5YxSBNEHPW8EWFFtBkQ29m0xlToVJU1"
"mm9Hexn9LLkDQko90naiFxkeZy84vTixmv8xIMQVlKxZi3Arwz/xdUrGPfFwQI6Uu3IMjKzHrlOeZA5tmqBdLRwvFisAuiCY2UudkJrRt0xPjC/tHCcYcKYjLcJaFa9YWHTG8aqeeg4ApVYcyZEPQIDAQAB;"


One DNS TXT record with multiple parts


At first sight it might look like multiple records. But it is really one record with multiple parts.


Some DNS GUIs let you just paste a single quoted string and chunk it on their own.


But in many DNS GUIs you have to create the chunks on your own and add one DNS entry with those multiple quoted parts.
Using the Hetzner DNS interface, you just specify multiple strings. The maximum length of each part without quotes is 255 bytes, exactly as shown in my example above.


I added logic to my Hetzner DNS TXT API integration to split the record. The Hetzner API expects one entry with multiple strings like this:


{"name":"rsa20260201._domainkey","type":"TXT","ttl":60,"records":[{"value": "\"v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzr/zFXV9H1HSC54U9qxSPsRNs/bngeNqJfTe8mV058hPnBPp5m2CBfAZZUHvQ1gB7pic5nUJ5rX7NuSWFB/W+9kf0UG92dLWKseUT6h7QoNIUlz0bOnNV1aji62ZUWEf1wL6iwLmbHwLYO0l8wUreoWtvwNpsnJqeW5YxSBNEHPW8EWFFtBkQ29m0xlToVJU1\" \"mm9Hexn9LLkDQko90naiFxkeZy84vTixmv8xIMQVlKxZi3Arwz/xdUrGPfFwQI6Uu3IMjKzHrlOeZA5tmqBdLRwvFisAuiCY2UudkJrRt0xPjC/tHCcYcKYjLcJaFa9YWHTG8aqeeg4ApVYcyZEPQIDAQAB;\"","comment":"Created by Domino CertMgr"}]}




Image:DKIM keys with RSA 2048 are now recommended
