Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed

 TLS 

Is TLS 1.3 required today? What are the best practices?

Daniel Nashed – 15 February 2026 11:39:25

TLS 1.3 Support for the Domino INET Stack?


The AHA idea for TLS 1.3 support is from 2008 and currently has 289 votes.


https://domino-ideas.hcltechsw.com/ideas/DOMINO-I-124

I agree that TLS 1.3 support is important to have in 2026! But it is not yet mandatory from a security point of view.

Here is how I see it in 2026. A separate blog post could cover which parts of Domino already support TLS 1.3.

This blog post is mainly about the internet stack -- meaning the internet tasks which share one network stack, where TLS 1.2 with RSA and ECDSA keys is the current standard.


I know there are some very specific requirements in some industries where servers are configured for TLS 1.3 only.
This would for example break Notes web services, which use the INET stack as well.

The TLS 1.3 topic comes up every couple of weeks, and this post is about how I see it today for Domino internet protocols on the server side.


What are required standards by BSI and NIST?

  • Both the German BSI and the US NIST recommend using TLS 1.3, but they still allow TLS 1.2 with the right ciphers.
  • NIST requires systems to support TLS 1.3, but does not mandate supporting only TLS 1.3.
    The background is likely that they want to move everyone to TLS 1.3, and that only works once everyone supports it.


General rules for standards -- not just security

  • Be as standards compliant as you can on your side
  • Tolerate as much non-compliance with RFCs and best practices as you reasonably can from others connecting to you

This would mean in the context of TLS

  • Support TLS 1.3 for everyone who already supports it
  • But also support TLS 1.2 with the right ciphers and curves

There might be special requirements for specially hardened systems that only want to support TLS 1.3 for specific reasons.
But today I don't see a reason why servers should deny TLS 1.2.


There are some important advantages in TLS 1.3. For example, forward secrecy is enforced by design. But those advantages don't mean that TLS 1.2 should not be used at all.



TLS 1.2 with modern ciphers is still a good and secure standard


When using ECDSA keys a Domino server automatically picks two good ciphers:

C02C, TLSv1.2, ECDHE-ECDSA-AES256-GCM-SHA384 , TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
C02B, TLSv1.2, ECDHE-ECDSA-AES128-GCM-SHA256 , TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256


For RSA keys the recommended out-of-the-box configuration is listed below.

The RSA ciphers are configured in the server document (the two ECDSA ciphers are controlled by notes.ini parameters and usually do not need to be changed).


C030, TLSv1.2, ECDHE-RSA-AES256-GCM-SHA384   , TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
009F, TLSv1.2, DHE-RSA-AES256-GCM-SHA384     , TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
C02F, TLSv1.2, ECDHE-RSA-AES128-GCM-SHA256   , TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
009E, TLSv1.2, DHE-RSA-AES128-GCM-SHA256     , TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
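
To verify which protocol and cipher a server actually negotiates, a quick openssl s_client check works (a minimal sketch; the hostname is a placeholder):

# Check the negotiated protocol and cipher for a TLS 1.2 connection
openssl s_client -connect domino.example.com:443 -tls1_2 < /dev/null 2>/dev/null | grep -E "Protocol|Cipher"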



Distinction by protocol



HTTPS


HTTPS has the strongest requirements, but also the best TLS 1.3 support in browsers and in libraries like libcurl used in many applications.

Browsers have supported TLS 1.3 and also ECDSA keys for a long time.



SMTP with STARTTLS


When looking at SMTP connections, not everyone uses TLS today.

But is it really time to block anyone who isn't using opportunistic TLS today?

Do you want to block mail coming into your system from a badly configured server?
You could require outgoing mail to only be sent over TLS. But blocking incoming messages is a different story IMHO.
On the other hand, because of GDPR we probably have to enforce TLS for every incoming and outgoing SMTP connection.

Blocking incoming messages is a bigger topic from a SPAM and content security point of view.
Most providers require TLS + DKIM and check SPF records. But that's a separate story.


SMTP with STARTTLS is often a different story than HTTPS. The remote stack often does not even support ECDSA, and staying with TLS 1.2 and RSA with a well-chosen cipher list is a best practice.

But that does not mean you should not start offering TLS 1.3 today.
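
To see what an SMTP server currently offers, openssl s_client can also speak STARTTLS (a minimal sketch; the hostname is a placeholder):

openssl s_client -connect mail.example.com:25 -starttls smtp < /dev/null 2>/dev/null | grep -E "Protocol|Cipher"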



Loadbalancers & Offloading TLS


The big difference between the two protocols is that off-loading HTTPS to a reverse proxy or load balancer is much easier than off-loading SMTP TLS.

For SMTP the only real option (unless you have an enterprise-grade load balancer with SMTP support) is a relay host in between.


In larger companies SMTP is secured by specially hardened SMTP appliances for anti-spam, anti-virus and policy enforcement.
Talking to those appliances using TLS 1.2 should be perfectly fine today.

So for outgoing and incoming SMTP traffic the SMTP appliance or specially hardened SMTP server in a DMZ is your first line of defense.

Not the internal mail server which speaks SMTP with STARTTLS or TLS to a relay host.



Revisiting HTTPS


Usually HTTP servers are behind a reverse proxy or load balancer to implement high availability.

In this context TLS can be off-loaded to the reverse proxy / load balancer.


In a secured infrastructure the TLS traffic is terminated on DMZ level on specially hardened appliances and forwarded only to explicitly allowed targets.

A simple load balancer would be, for example, NGINX, which fully supports TLS 1.3 with a dual stack for RSA and ECDSA.

Your back-end server can then still use TLS 1.2 with ECDSA and one of the two ciphers mentioned at the beginning of this post.
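
A quick way to confirm such a split setup (a hedged example; hostnames are placeholders): the load balancer should answer with TLS 1.3, while the Domino back-end still negotiates TLS 1.2 with an ECDSA cipher.

# Front-end (NGINX) should negotiate TLS 1.3
openssl s_client -connect www.example.com:443 -tls1_3 < /dev/null 2>/dev/null | grep -E "Protocol|Cipher"

# Back-end (Domino) still answers with TLS 1.2
openssl s_client -connect domino-backend.example.com:443 -tls1_2 < /dev/null 2>/dev/null | grep -E "Protocol|Cipher"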


But not only external facing HTTPS servers are behind load-balancers and reverse proxies. The same type of configuration also makes sense for internal servers.



Note about end to end TLS


In most scenarios the traffic should be end to end encrypted.
But that does not mean you need the same strong requirement for TLS between those systems.


You can use private CA ECDSA certificates and use the most efficient ciphers internally between your Domino servers and the load balancer.

Domino CertMgr can manage MicroCA certificates with automatic renewal including distributing them to your servers.


The public trusted certificate would be on the load balancer -- which would trust your Domino MicroCA for back-end certificate validation.
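
To confirm that the back-end certificate chain validates against the MicroCA, a simple check could look like this (a sketch; the file and host names are assumptions):

# Verify the Domino back-end certificate against the exported MicroCA root
openssl s_client -connect domino-backend.example.com:443 -CAfile microca-root.pem < /dev/null 2>/dev/null | grep "Verify return code"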



My personal conclusion and summary


For larger environments no "application" server -- and this is true for Domino as well as all other server types -- is directly internet facing.
Everyone with higher security standards uses hardened appliances in front of their servers.


Domino servers come with good TLS 1.2 cipher support for RSA and ECDSA today.


TLS 1.3 is still an important requirement and is a must have for the next Notes/Domino release after version 14.5.1 ships in March.


 HashiCorp  SSH  OIDC 

Leveraging HashiCorp Vault signing a SSH-key authenticated using Domino OIDC

Daniel Nashed – 15 February 2026 23:23:53

There are a couple of vault providers which can sign SSH keys.

A vault has a couple of responsibilities


  • First of all, it protects private keys against unauthorized usage
  • But a vault also provides mechanisms to authenticate
  • It also provides tight control over who can issue certificates

The following write up is what I configured for a first lab setup to get familiar with all the components.


  • Set up HashiCorp Vault in a Docker container
  • Configure an SSH signer
  • Create a Domino OIDC configuration for HashiCorp
  • Configure OIDC for authentication using a JWT token -- instead of using a web UI flow
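
For reference, the Vault side of the lab roughly follows this sketch (not the exact configuration; the role name "linux-admin" and the sign path match the traces further below, while the policy name, TTLs and the jwt role name are assumptions for illustration):

# Enable JWT auth against the Domino OIDC provider
vault auth enable jwt
vault write auth/jwt/config oidc_discovery_url="https://oidc.lab.dnug.eu/auth/protocol/oidc" bound_issuer="https://oidc.lab.dnug.eu/auth/protocol/oidc"
vault write auth/jwt/role/domino role_type=jwt bound_audiences="oidc-hashicorp" user_claim="sub" token_policies="ssh-sign" token_ttl=15m

# Enable the SSH secrets engine and create a signing role
vault secrets enable ssh
vault write ssh/config/ca generate_signing_key=true
vault write ssh/roles/linux-admin key_type=ca allow_user_certificates=true allowed_users="linux-admin" default_user="linux-admin" ttl=15m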


OIDC authentication using Domino OIDC


Notes/Domino 14.5.1 has a new LotusScript feature to request a JWT through the Notes session.


Token = session.getOIDCAccessToken (Server, ClientID, Issuer, Resource, Scopes)


https://help.hcl-software.com/dom_designer/14.5.1/basic/H_GETOIDCACCESSTOKEN_METHOD.html


This token can be turned into a HashiCorp Vault token for the REST API.

And finally a signature for a public SSH key can be requested.
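
On the REST API level the flow roughly looks like this (a sketch; the Vault address is a placeholder and the jwt role name "domino" is an assumption, while the sign path matches the capabilities trace below):

# 1. Exchange the Domino JWT for a Vault token
VAULT_ADDR="https://vault.example.com:8200"
VAULT_TOKEN=$(curl -s -X POST "$VAULT_ADDR/v1/auth/jwt/login" -d "{\"role\":\"domino\",\"jwt\":\"$DOMINO_JWT\"}" | jq -r .auth.client_token)

# 2. Ask the SSH secrets engine to sign the public key
curl -s -H "X-Vault-Token: $VAULT_TOKEN" -X POST "$VAULT_ADDR/v1/ssh/sign/linux-admin" -d "{\"public_key\": \"$(cat id_ed25519.pub)\"}" | jq -r .data.signed_key > id_ed25519-cert.pub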

Those components would be a good base for an enterprise grade application to centrally manage SSH access.

HashiCorp Vault is managed using the "vault" command line.





Simple flow diagram


Image:Leveraging HashiCorp Vault signing a SSH-key authenticated using Domino OIDC


Request Traces with details




------------------------------------------------------

Domino JWT

------------------------------------------------------


{
  "typ": "Bearer",
  "iss": "https://oidc.lab.dnug.eu/auth/protocol/oidc",
  "sub": "CN=Admin/O=dnug-lab",
  "aud": "oidc-hashicorp",
  "iat": 1771109257,
  "exp": 1771109557,
  "auth_time": 1771109257,
  "scope": "sub",
  "cn": "CN=Admin/O=dnug-lab",
  "jti": "22595ea3-dc2a-d97b-dbd8-cbd72c710ba9",
  "client_id": "oidc-hashicorp-dnuglab",
  "email": "admin@lab.dnug.eu",
  "family_name": "Admin",
  "name": "Admin"
}


------------------------------------------------------

Get Vault Token

-----------------------------------------------------


hvs.CAESIIY-CRMIpoXt6pzDi9b1YDt8vkz9eAjSUcTuNetc-62UGh4KHGh2cy5hZ0dVVXltU0NDMHZKNncwMlNWODZuRjY


------------------------------------------------------

Get Capabilities

------------------------------------------------------


{
  "capabilities": [
    "update"
  ],
  "ssh/sign/linux-admin": [
    "update"
  ],
  "request_id": "629d66fd-7480-8be4-a911-e29b58d06cce",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "capabilities": [
      "update"
    ],
    "ssh/sign/linux-admin": [
      "update"
    ]
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null,
  "mount_type": "system"
}


------------------------------------------------------

Sign SSH Key

------------------------------------------------------



------------------------------------------------------

Key Infos

------------------------------------------------------


ssh_linux_admin:

Type: ssh-ed25519-cert-v01@openssh.com user certificate

Public key: ED25519-CERT SHA256:kXHEfj/I3pS7r5LHRH/WyxXrfLw7JbuRlcYh0lzhhYM

Signing CA: ED25519 SHA256:rTUjegEvIpBca1S2HpkexZ/g1COp+UZ544smxSCOsjY (using ssh-ed25519)

Key ID: "vault-jwt-CN=Admin/O=dnug-lab-9171c47e3fc8de94bbaf92c7447fd6cb15eb7cbc3b25bb9195c621d25ce18583"

Serial: 3182202866998231282

Valid: from 2026-02-14T23:48:38 to 2026-02-15T00:04:08

Principals:

        linux-admin

Critical Options: (none)

Extensions: (none)

 SSH 

Using an SSH Certificate Authority (CA) for Centralized SSH Access

Daniel Nashed – 14 February 2026 17:28:46

Managing authorized_keys across many servers does not scale, and access can't easily be revoked without logging into each machine and removing the public key.

SSH certificates solve this by introducing a Certificate Authority (CA) model: servers trust one CA, and the CA signs user keys.
No more distributing individual public keys to every machine.

It also lowers the exposure by limiting what a user can do on a machine and how long a key can be used to log into the machine.
You can even restrict where logins may come from, and you can also enforce a specific command to be executed.

The following is just to explain how the components work together.
There are solutions like step-ca or HashiCorp Vault to automate operations.


But this write-up should help to understand how it works together.

Here is a reference to smallstep CA to explain it in another way -->
https://smallstep.com/docs/tutorials/ssh-certificate-login/

1. Create an SSH User CA


Your CA is just an SSH key pair used to sign other keys:



ssh-keygen -t ed25519 -f ca_user -C "SSH User CA"

  • ca_user → private CA key (keep secure!)
  • ca_user.pub → distributed to servers

Protect the private CA key carefully — it can issue valid login certificates.

2. Generate a User Key


Each user still has their own key pair:


ssh-keygen -t ed25519 -f id_ops



3. Sign the User Key


Use the CA to create a short-lived certificate:



ssh-keygen -s ca_user \
-I "nsh-ops" \
-z 1001 \
-n ops-notes \
-V +1m \
-O source-address=192.168.39.0/24 \
id_ops.pub


This creates:
 id_ops-cert.pub

Key options:


  • -n → allowed principals (roles)
  • -V → validity period (short-lived = safer)
  • -O source-address → restrict client IP
  • -I → certificate identity
  • -z → serial number


Optionally, restrict the certificate to only run a specific command:

-O force-command="/usr/bin/domino"
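
You can inspect the resulting certificate at any time to double-check identity, principals, validity and options:

ssh-keygen -L -f id_ops-cert.pub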


4. Configure the Server to Trust Your CA


Instead of storing user keys, configure SSH once:


echo 'TrustedUserCAKeys /etc/ssh/ca/ca_user.pub' >> /etc/ssh/sshd_config
echo 'AuthorizedPrincipalsFile /etc/ssh/auth_principals/%u' >> /etc/ssh/sshd_config
echo 'PasswordAuthentication no' >> /etc/ssh/sshd_config
echo 'PubkeyAuthentication yes' >> /etc/ssh/sshd_config


Install the CA public key:



mkdir -p /etc/ssh/ca
cp ca_user.pub /etc/ssh/ca/
chmod 644 /etc/ssh/ca/ca_user.pub


Restart SSH:



systemctl restart ssh


5. Control Access via Principals


If the certificate contains:

-n ops-notes

Allow it for the notes user by listing that principal in the user's principals file:

mkdir -p /etc/ssh/auth_principals
echo "ops-notes" > /etc/ssh/auth_principals/notes


Access is granted only if:

  • The certificate is signed by your CA
  • It is still valid
  • The principal matches
  • Source IP matches (if restricted)
     

No authorized_keys file required.
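
A quick end-to-end test could look like this (a sketch; the hostname is a placeholder). ssh automatically picks up id_ops-cert.pub if it sits next to the private key:

# Log in as the notes user using the CA-signed certificate
ssh -i id_ops notes@server.example.com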

Why Use SSH Certificates?


  • Central trust model
  • No key sprawl
  • Short-lived credentials
  • Role-based access via principals
  • Strong automation support
     

SSH certificates are one of the most powerful — and underused — features of OpenSSH.
For larger or dynamic infrastructures, they’re a major upgrade over traditional authorized_keys management.




 FIDO2  WSL  SSH 

Using FIDO2 (YubiKey) on WSL for SSH

Daniel Nashed – 14 February 2026 17:02:07


In a previous post I looked at FIDO2 keys in general.
On Windows 11 you can expose the USB key to your WSL Linux instance -- in my case an Ubuntu 24.04 instance.

Here are the steps:


Step 1 ‐ Install usbipd on Windows


WSL does not automatically see USB security keys. We use usbipd to attach the device.

Install the Windows driver:

winget install usbipd


Once installed, you can list the connected devices:

usbipd list


Connected:
BUSID  VID:PID    DEVICE                                                        STATE
4-1    1050:0407  USB Input Device, Microsoft Usbccid Smartcard Reader (WUDF)   Shared
4-10   8087:0033  Intel(R) Wireless Bluetooth(R)                                Not shared



Step 2 ‐ Attach the YubiKey to WSL



Bind the device:


usbipd bind --busid 4-1


Attach to WSL:


usbipd attach --wsl --busid 4-1


Now inside WSL list available devices:


lsusb


The output should show the device, and WSL can now access the FIDO2 interface.


Bus 001 Device 002: ID 1050:0407 Yubico.com Yubikey 4/5 OTP+U2F+CCID



Step 3 ‐ Create a FIDO2 SSH Key



Inside WSL create the key:

ssh-keygen -t ed25519-sk -O resident -O verify-required



What those options mean:

  • -t ed25519-sk → FIDO2 security key
  • -O resident → Key stored on the device (discoverable)
  • -O verify-required → PIN + touch required

You’ll be prompted for:
  • Security key PIN
  • Touch confirmation

Private key material never leaves the YubiKey.

Step 4 ‐ Retrieve the Public Key


Because we used a resident key, you can retrieve it from the device:

ssh-keygen -K

Example output:

sk-ssh-ed25519@openssh.com AAAAGnNrLXNzaC1l...= root@nsh-t14


Add that public key to your server’s authorized_keys.



Why This Is Better Than Traditional SSH Keys

Traditional Key                  FIDO2 Key
Private key stored on disk       Private key stored in hardware
Can be copied                    Cannot be extracted
No user presence check           Requires touch
Optional passphrase              Enforced PIN




Architecture Overview


Flow:

  1. Windows sees USB device
  2. usbipd attaches it to WSL
  3. OpenSSH inside WSL talks FIDO2
  4. YubiKey performs signing
  5. Server validates via public key

Final Thoughts


TPM is great for machine identity.
FIDO2 is great for human identity.

For privileged SSH access, especially in mixed Windows/WSL environments, this is one of the cleanest and most secure setups you can run today.

 TPM2  SSH 

Using the TPM 2.0 Chip to store Your SSH Private Key

Daniel Nashed – 14 February 2026 16:36:06

In the previous blog post I explained FIDO2 security keys to protect SSH credentials.
But what if we want hardware protection without an external token?


Most modern systems already include a TPM 2.0 chip. We can use it as a hardware-backed SSH key store.
This could also be a VM with a virtual TPM assigned.



Our goals:

  • The private key must never exist as a file
  • The key must be bound to this machine
  • Every use must require a PIN
  • It must work with standard OpenSSH

Install Required Software


Install the TPM tools and the PKCS#11 provider that OpenSSH can use.


apt install tpm2-tools libtpm2-pkcs11-tools libtpm2-pkcs11-1 opensc



Initialize the TPM Store


Create a persistent PKCS#11 store backed by the TPM.


tpm2_ptool init
tpm2_ptool listprimaries



Create a Token (Protected by PIN)



tpm2_ptool addtoken \
--label ssh-token \
--userpin 123456 \
--sopin 654321 \
--pid 1


The userpin will be required whenever the key is used.



Generate the SSH Key Inside the TPM


The private key is generated inside the TPM and cannot be exported.
There is no private key file in ~/.ssh.


tpm2_ptool addkey \
--label ssh-token \
--key-label ssh-key \
--userpin 123456 \
--algorithm ecc256



Extract the Public Key


Print a normal OpenSSH-compatible public key that you can place into authorized_keys:


ssh-keygen -D /usr/lib/x86_64-linux-gnu/pkcs11/libtpm2_pkcs11.so



Use the TPM-Backed Key


SSH will prompt for the TPM user PIN.
The TPM performs the signature internally — the private key never leaves the chip.


ssh -I /usr/lib/x86_64-linux-gnu/pkcs11/libtpm2_pkcs11.so notes@127.0.0.1



What This Protects Against


  • Copying private key files
  • Disk image theft
  • Backup extraction


The key is bound to this hardware and cannot be exported.


What It Does Not Protect Against


  • Root compromise of a running system
  • A compromised hypervisor


The TPM prevents key extraction, not runtime misuse.


With this setup, we now have an SSH key that:

  • Is hardware-bound
  • Cannot be copied
  • Requires a PIN
  • Works with standard OpenSSH


Some additional notes


The key can also be loaded into an SSH agent (see the sketch below) and also supports all kinds of other flows like signed keys.
My focus for this post is Linux, and the use case is VMs where we want to protect the key.
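
For example, the key can be added to a running ssh-agent through the same PKCS#11 library (a minimal sketch, assuming the library path used above; ssh-add will prompt for the TPM user PIN):

ssh-add -s /usr/lib/x86_64-linux-gnu/pkcs11/libtpm2_pkcs11.so
ssh-add -l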

TPM isn't available on WSL, and on a notebook a FIDO2 key would be the better option anyway.
But for VMs, a TPM can be a great option to protect SSH keys.




 YubiKey  SSH 

Modern SSH Authentication with FIDO2 Security Keys (ed25519-sk)

Daniel Nashed – 14 February 2026 01:42:35


Back in 2020, I described a “Paranoid SSH” setup using:

  • Public key authentication
  • Password authentication
  • TOTP via Google Authenticator
  • Hardware-protected OTP storage


That gave me something close to 3–4 factors.
Today, there is a cleaner and cryptographically stronger way:


FIDO2-backed SSH keys using ed25519-sk

  • No shared secrets.
  • No PAM modules.
  • No TOTP seeds in home directories.
  • No passwords required.

What Is an sk-SSH Key?


When you see a key like this:

sk-ssh-ed25519@openssh.com AAAAGnNrLXNzaC1lZDI1NTE5...


The sk- prefix means: Security Key backed.

The private key is not stored on disk.


It lives inside a hardware device such as a YubiKey.

OpenSSH delegates signing operations to the hardware token via FIDO2.


Security Model



With a hardware-backed SSH key, authentication requires:


  1. Possession – The physical security key
  2. Knowledge – The PIN protecting the key
  3. Presence – Touch confirmation on the device


Unlike TOTP:

  • No shared secret is stored server-side
  • No secret file exists in ~/.google_authenticator
  • No replayable codes
  • No PAM dependency

Even if the server is fully compromised, the attacker gains:

  • Only the public key
  • No reusable secret
  • No ability to generate signatures
Authentication strength does not degrade with server compromise.

Generating a FIDO2 SSH Key


Requirements:
  • OpenSSH ≥ 8.2
  • A FIDO2-compatible security key (e.g., YubiKey 5 series)

Generate a hardware-backed key:

ssh-keygen -t ed25519-sk -O resident -O verify-required



Options explained:


  • ed25519-sk → Hardware-backed ED25519 key
  • -O resident → Store credential inside the security key
  • -O verify-required → Require PIN verification for every authentication

You will be prompted for:

  • FIDO2 PIN
  • Touch confirmation

This creates:

  • id_ed25519_sk
  • id_ed25519_sk.pub

Important:

  • The file id_ed25519_sk is not the private key. It is only a reference stub.
  • The real private key never leaves the hardware.

Installing the Public Key on the Server


As usual:

ssh-copy-id -i id_ed25519_sk.pub user@server


Or manually append to:

~/.ssh/authorized_keys



Using the Key


Now simply connect:

ssh user@server


You will see:

Confirm user presence for key ED25519-SK

Then:

  1. Enter PIN
  2. Touch the key
  3. Login succeeds


No password required.



Why resident Matters



Using -O resident stores the credential inside the security key.

You can later retrieve it on another machine once the Yubikey is available via USB.
This reconstructs the stub file from the hardware.
Without -O resident, losing the stub file would make the credential unusable.



ssh-keygen -K



Comparison: TOTP vs FIDO2 SSH

Feature                           TOTP + PAM     FIDO2 ed25519-sk
Shared secret on server           Yes            No
Secret in home directory          Yes            No
Replayable codes                  Possible       No
Hardware enforced                 Optional       Yes
Offline brute-force protection    No             Yes (PIN retry limits)
PAM required                      Yes            No




FIDO2 replaces password + TOTP stacks with:
  • Asymmetric cryptography
  • Hardware-enforced signing
  • Built-in rate limiting

Minimal Server Configuration



You can now simplify SSH configuration significantly:

PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM no
AuthenticationMethods publickey

  • No password fallback.
  • No TOTP.
  • No additional modules.
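
Before restarting sshd it is worth validating the configuration (a hedged sketch; sshd -t only checks syntax, and keeping an existing session open while testing is wise):

# Validate sshd configuration syntax, then restart the service
sshd -t && systemctl restart ssh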

What Happens During Authentication

  1. Server sends a random challenge
  2. Client forwards challenge to the security key
  3. Security key verifies PIN
  4. User confirms presence (touch)
  5. Security key signs challenge
  6. Server verifies signature using stored public key
The private key never leaves the hardware.

Operational Considerations

  • Register at least two security keys
  • Store a backup in a safe location
  • Document recovery procedure
  • Avoid keeping password fallback enabled
If the security key is lost:
  • Revoke its public key from authorized_keys
  • Enroll a new hardware key

Conclusion


FIDO2-backed SSH keys replace:
  • Password authentication
  • TOTP-based 2FA
  • Shared-secret models


With:
  • Hardware-protected asymmetric cryptography
  • PIN-enforced access
  • User-presence confirmation
  • Zero shared secrets
For hardened infrastructure, this is a cleaner and stronger model than traditional “multi-factor via PAM”.

Linux tools for performance tracing and optimization

Daniel Nashed – 12 February 2026 10:28:24

This week I am working on an application to forward logs received from the Domino server via a pipe on STDIN.

The application uses multiple threads to queue, process and push changes to different targets.


To ensure stdin is immediately processed, it is running with multiple threads for different targets and also for retry when pushing directly to a remote server (like Loki via REST API).
This is a C++ helper process sitting next to Domino, not Domino itself.


Tight loops with the right timing, wake-ups, mutexes, queues are really important for a process like this.
Last night I was looking into performance optimizing the code. It turned out that changing the timing of some of the inner loops reduced CPU load and system calls.

Linux has awesome tools to analyze performance. strace is also useful to trace system calls in general. But in my context I am using it to summarize the calls not to trace each of them.

There are two great tools, which help to understand what a process is doing.

In contrast to "top" and other tools the focus is not on how much total CPU, context switched etc a process uses.
It's more about which fraction of the resources is used in which code area.


The following is a quick example of how it looks for my application, which has multiple threads and multiple worker loops.
The breakdown in profiling helps to understand what is going on. For larger applications the analysis needs to be narrowed down to a process and/or a short time window when a specific problem occurs.

Those two tools can also be useful for other applications. But it is of course difficult to optimize if you can't change the code.
For an application like Domino, where the application is layered, you can't change the lower part of the code.
But you have influence on your own applications in C API, LotusScript etc.

Analyzing on the Linux level is often only the first step to understand what resources are used, so you can continue tracing on higher levels what your application is doing.
In that case the Linux tools only give you an idea what to look for on higher levels and help to measure improvements when optimizing code.


In my case it was about looking into the C/C++ code itself and the system calls it was performing.
Using perf you see the system calls and your application code. strace focuses on system calls only.
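
Besides perf top, recording a short profile makes it easier to compare before and after an optimization (a sketch; the PID is from my test run):

# Record ~30 seconds with call graphs, then browse the interactive report
perf record -g -p 1853196 -- sleep 30
perf report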



perf top -p  1853196


Samples: 21  of event 'task-clock:uppp', 4000 Hz, Event count (approx.): 3463221 lost: 0/0 drop: 0/7

Overhead  Shared  Symbol

29.71%  domfwd  [.] sccp

14.51%  domfwd  [.] __stdio_close

14.51%  domfwd  [.] fopen

 7.43%  domfwd  [.] __clock_nanosleep

 4.84%  domfwd  [.] LoadPidMap(char const*, std::unordered_map<int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<int>, std::equal_to<int>, std::allocator<

 4.84%  domfwd  [.] PushThread(void*)

 4.84%  domfwd  [.] __fstatat

 4.84%  domfwd  [.] __intscan

 4.84%  domfwd  [.] __lockfile

 4.84%  domfwd  [.] __stdio_read

 4.84%  domfwd  [.] std::basic_filebuf<char, std::char_traits<char> >::basic_filebuf()



strace -c -p 1853196


strace: Process 1853196 detached

% time     seconds  usecs/call     calls    errors syscall

------ ----------- ----------- --------- --------- ----------------

62.65    0.008708          10       862           write

25.78    0.003583           4       754           futex

11.57    0.001608           9       168           read

------ ----------- ----------- --------- --------- ----------------

100.00    0.013899           7      1784           total

 Loki 

Loki Integration next steps

Daniel Nashed – 7 February 2026 22:42:57

The first integration was based on promtail.
But meanwhile Alloy is the new tool. It is a 400 MB binary.

I have implemented Alloy to read log files. But there might be better ways to integrate it.

Detlev came up with the idea of annotating each log line with the process ID.
That's a bit tricky, but there is a way to implement it: pid.nbf contains the process IDs for all running Domino processes.
But Alloy or promtail can't really annotate PIDs.

A custom program could provide this mapping by reading pid.nbf and evaluating the process ID/thread ID information.

The Domino console.log file rotates in some weird way.
The Start Script logs into notes.log which does not rotate at run-time.


This brings up another idea: why would we want to write the log first before annotating it?

server | nshlog > /local/notesdata/notes.log

The small C++ program can annotate the log and write it to the log file and also push it to Loki in parallel.

Pushing to the Alloy client didn't work out. Even though the Alloy client is 400 MB, the HTTP endpoint wasn't configurable.
But I came up with another way.

Loki HTTP API

https://grafana.com/docs/loki/latest/reference/loki-http-api/#ingest-logs

Logs can also be pushed directly to Loki, which turns out to be a more direct way.
The only challenge is to have a way to temporarily store logs in case Loki is not reachable.

Other benefits of a custom annotator

Instead of trying to parse the time format -- which is not that easy across different locales -- the annotation can use the ingestion time, because the log arrives almost in real-time.

This makes annotation a lot easier. The resulting format is JSON and can be pushed directly via libcurl.
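
A minimal push to the Loki HTTP API could look like this (a sketch; the Loki URL and the label set are assumptions -- nshlog does the same via libcurl):

# Loki expects a nanosecond timestamp and the log line as strings
curl -s -X POST "http://loki.example.com:3100/loki/api/v1/push" \
  -H "Content-Type: application/json" \
  -d "{\"streams\":[{\"stream\":{\"job\":\"domino-console\"},\"values\":[[\"$(date +%s%N)\",\"Test log line from nshlog\"]]}]}"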

I wrote a first implementation of nshlog. The next step is to decouple the read from the write part and have it in a separate thread to ensure no write operation gets stuck.

The first tests look good. This might end up as another open-source GitHub project, which could avoid using the huge Alloy client.


Image:Loki Integration next steps
 NIFNSF 

Leftover NIFNSF .ndx files after DBMT runs

Daniel Nashed – 6 February 2026 22:29:51

We ran into this in a larger customer environment and it turned out to be a bug.

The newer name format for NIFNSF files is the full database name plus the .ndx extension.


Example:  names_nsf.ndx

The old format, which is also created during DBMT runs, looks like this: n0007264.ndx


Once the compact is finished, these leftover files can be safely deleted.

The issue is not yet solved in the upcoming 14.5.1 release.


Here is the SPR and TN for reference


SPR#JPAIDK8EV9 /
https://support.hcl-software.com/csm?id=kb_article&sysparm_article=KB0128064

The leftover .ndx files can have significant size. I just removed the files on my Linux servers.


Here is a simple command line to first find all files and then a second command line to delete those files.


Example for Linux:


find /local/notesdata -name "n[0-9][0-9][0-9][0-9][0-9][0-9][0-9].ndx" -type f -mtime +1 -print

find /local/notesdata -name "n[0-9][0-9][0-9][0-9][0-9][0-9][0-9].ndx" -type f -mtime +1 -delete
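
To get an idea how much space the leftover files occupy before deleting them, something like this works on Linux (a sketch using GNU find):

find /local/notesdata -name "n[0-9][0-9][0-9][0-9][0-9][0-9][0-9].ndx" -type f -mtime +1 -printf "%s\n" | awk '{ sum += $1 } END { printf "%.1f MB\n", sum/1024/1024 }'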



 Grafana  Loki 

Is anyone using Grafana Loki for Domino Logs

Daniel Nashed – 6 February 2026 01:53:52

Grafana Loki can be a helpful tool to collect, search and visualize logs.
Has someone looked into it already? In general? For Notes logs?


I have added an Alloy collector to collect Notes console logs, and I am looking into whether I want to annotate the log lines in some way.
If you are using it, I would be very interested to hear from you.

Besides Domino logs, I have looked into writing NGINX logs in JSON format to push them into Loki in a structured way.


Here is an NGINX configuration example:


log_format loki_json escape=json
  '{'
    '"time":"$time_iso8601",'
    '"remote_addr":"$remote_addr",'
    '"request":"$request",'
    '"status":$status,'
    '"method":"$request_method",'
    '"uri":"$uri",'
    '"bytes_sent":$bytes_sent,'
    '"request_time":$request_time,'
    '"upstream_time":"$upstream_response_time"'
  '}';

access_log /nginx-logs/access.json loki_json;


And I have uploaded an Alloy collector configuration here -->
https://github.com/nashcom/domino-grafana/blob/main/loki/alloy/nginx_config.alloy
It is using environment variables which are set by my exporter at startup.

I have played with the NGINX logs and I could think of getting http request logs from Domino as well.


Domino log.nsf with event meta data?


But discussing with an admin buddy today, we had another idea which could be interesting.
Instead of reading the console.log, we could read from log.nsf and get the event types, severity etc. from the log document.


Additional logs?

We could do the same with mail routing logs, replication logs and security logs.
Would this make more sense to get structured data with event type and severity?

So far I am just getting console.log. But we could write out the other log output to JSON files and collect them, eventually having one log file to scrape per log type.

In contrast to the Splunk universal forwarder, which has a way to push data to the forwarder, with Loki we need a file.
But the same kind of interface could later be used for other integrations.


There is also a C API way to register for events. I would need to look into whether this might be the better integration.
But first I am looking for feedback on what type of logging admins would be interested in pushing to Loki, Splunk or other tools.

I looked into Splunk earlier. It has a simple to use interface to talk to their universal forwarder.


But I want to establish purpose before action.

a.) Would you be OK with just a log file with every console message?
b.) Or would you want a more granular, categorized log filtered by Domino event generation, captured either via the C API or read from log.nsf?


Right now I am just using a simple log forwarding.


Image:Is anyone using Grafana Loki for Domino Logs

In addition we could turn message logs and replication logs into Loki data.




Image:Is anyone using Grafana Loki for Domino Logs
