Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...

 

Daniel Nashed

 

Ubuntu 24.04 LTS - Noble Numbat - A first look

Daniel Nashed  19 April 2024 16:41:12
Image:Ubuntu 24.04 LTS - Noble Numbat - A first look


Kernel  : 6.8.0-22

glibc   : 2.39

OpenSSL : 3.0.13 30 Jan 2024

Curl    : libcurl/8.5.0 2023-12-06

NGINX   : nginx/1.24.0

OpenZFS : zfs-2.2.2




The new Ubuntu long-term support release becomes available next week, while we are at Engage.
A customer asked about Ubuntu LTS versions today, and I noticed the release is imminent.

The release notes are already public
https://discourse.ubuntu.com/t/noble-numbat-release-notes/39890
There is also a container image on Docker Hub (ubuntu:24.04), and a daily ISO image is available.

I took a quick look and addressed two smaller but fatal issues:
  • Domino start script not detecting systemd
  • Domino container failed because Canonical added the "ubuntu" user as 1000:1000 in the container, so I needed to move that user to 1001:1001 (see the sketch below)
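
A minimal sketch of how this can look in a container build (the exact commands in the community image may differ):

# Free up UID/GID 1000 by moving the pre-created "ubuntu" user,
# so the "notes" user can keep 1000:1000
groupmod -g 1001 ubuntu
usermod -u 1001 -g 1001 ubuntu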

Apart from that, Domino runs on Ubuntu 24.04 LTS. But it is not officially supported.
The kernel version is too new and has not been tested by HCL. We can expect HCL to look into 6.x kernels when supporting the next SUSE Enterprise service pack. But right now it is untested!


I can't advise you to run Domino on Ubuntu 24.04 at this point. I would stay with Ubuntu 22.04 LTS until 24.04.1 and a statement from HCL!
There is no need to move right now. And for servers you install today, you can migrate in place to newer versions later.


Let's discuss it at Engage next week at the Linux round table.


That advice is of course not true for early adopters who want to test it out and provide feedback.
For me it is essential to be ahead of the curve and solve problems early on before others hit them.
And for sure this would be a good discussion topic for the Linux session at Engage next week...

I hope to see many of you there!


-- Daniel


Test Results from a container build on an Ubuntu 24.04 LTS host with the matching container image


"testResults": {

  "harness": "DominoCommunityImage",

  "suite": "Regression",

  "testClient": "testing.notes.lab",

  "testServer": "testing.notes.lab",

  "platform": "Ubuntu Noble Numbat (development branch)",

  "platformVersion": "24.04 LTS (Noble Numbat)",

  "hostVersion": "24.04 LTS (Noble Numbat)",

  "hostPlatform": "Ubuntu Noble Numbat (development branch)",

  "testBuild": "14.0FP1",

  "imageSize": "3854944019",

  "containerPlatform": "docker",

  "containerPlatformVersion": "24.0.7",

  "kernelVersion": "6.8.0-22-generic",

  "kernelBuildTime": "#22-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr  4 22:30:32 UTC 2024",

  "glibcVersion": "2.39",

  "timezone": "Etc/UTC",

  "javaVersion": "17.0.10 2024-01-16",

  "tikaVersion": "2.9.2",

  "dominoAddons": "ontime=11.1.1,languagepackÞ,verse=3.2.1if1,nomad=1.0.11-14.0,traveler=14.0,domrestapi=1.0.11-14,capi=14.0,leap=1.1.3",
...



Updating autoupdate.nsf with the new template (14.0 08.03.2024)

Daniel Nashed  17 April 2024 22:52:58

The new fit & finish work and the new autocat.nsf integration require template changes.
Please make sure you are getting template version 14.0 from 08.03.2024 and not the earlier version from 03.11.2023 that shipped with Domino 14.

When deploying the container image I noticed an issue with the permissions of the folder from which the container image gets template updates for Fixpacks.

The directory /opt/hcl/domino/notes/latest/linux/data1_bck/140FP1/localnotesdata contains the updated templates.

But the directory can only be accessed by "root", while the container runs with the "notes" user.
This is not new to 14.0 FP1. The 12.0.2 Fixpacks had the same permissions, but nobody noticed the missing updates.

I fixed it in the HCL Community container build. But the HCL container image still has the file permissions that prevent the deployment.
For the HCL image you can remove /local/notesdata/domino_ver.txt, stop and remove the container, and run it again.
This initiates a full release template update, which also contains the FP templates.
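
For example (assuming a container named "domino"; adjust names and paths to your setup):

rm /local/notesdata/domino_ver.txt   # on the data volume
docker stop domino
docker rm domino
# run the container again as usual to trigger the full template update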



In general, if you are not using a container image, please make sure a design refresh runs on autoupdate.nsf to get the latest functionality.



Domino AutoUpdate AUT Catalog integration in action

Daniel Nashed  17 April 2024 22:02:53

When the new integration is enabled, client web-kits are pushed to AUT Catalog automatically.
The push also happens for existing web-kits once the document is updated with data containing the Metadata XML.


No manual steps needed. The documents and the new view have a button to directly jump into AUT Catalog.

The button on top only shows up for software pushed to AUT Catalog.


AUT Catalog sometimes has multiple documents for the same web-kit.


For example, the Standard and All Client (Admin/Design client) packages need the same FP.

The 32-bit and 64-bit client packages are also separate files and product documents in autocat.nsf.


Domino AutoUpdate knows all of the web-kits and dependencies and pushes documents accordingly.


It will also correct missing documents. It uses the AUT Catalog hash to ensure software is only pushed once,
and it also knows about the language versions of web-kits.


-- Daniel



Image:Domino AutoUpdate AUT Catalog integration in action

Notes/Domino 14.0FP1 released -- What’s new?

Daniel Nashed  16 April 2024 20:53:23

The What's New section of AutoNotify doesn't show up until you update to Domino 14.0 FP1.


This is actually one of the improvements in the AutoNotify back-end code in 14.0 FP1.
There are a couple of fit & finish changes in AutoUpdate as well.

The software.json data has been improved to use dynamic categories and can distinguish between different client types.
Besides that, there is a brand new AUT Catalog integration to automatically push client web-kits directly to autocat.nsf.

No more Metadata XML to download or manually attach. Configure it once to get web-kits automatically pushed to autocat.nsf.


Along with those autoupdate enhancements, there are also DAOS improvements.
This is the first time HCL added features in a Fixpack.


If you want to hear the details about AutoUpdate, including the Domino 14.0 FP1 enhancements, join my session at Engage next week.


If you can't wait for Engage, here is a link to the documentation --> https://help.hcltechsw.com/domino/14.0.0/admin/wn_140FP1.html.
My session will go into much more detail and explain the new functionality.


-- Daniel


Image:Notes/Domino 14.0FP1 released -- What’s new?

Adding TOTP to your own application

Daniel Nashed  15 April 2024 08:32:07

oathtool is the standard OATH tool on Linux. It comes as a command-line tool and as a dynamic or static library to be used in your own applications.

You can statically link the code into your application and generate TOTP codes and also validate them.

The homepage contains information about the command line tool "oathtool" and also the lib "liboath".


https://www.nongnu.org/oath-toolkit/



Here is an example of how to use it on the command line.

The example uses the Base32-encoded secret for "test".


oathtool --totp -b ORSXG5AK
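
oathtool can also validate a code: pass the code after the secret and use -w to allow a window of time steps. A quick sketch (the code is obviously just a placeholder):

oathtool --totp -b -w 3 ORSXG5AK 123456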



Key URI Format


When importing TOTP secrets into a TOTP client, it is very convenient to use a QR code.

Some clients don't even let you specify parameters like the signing algorithm manually.


There is a URI format documented here:


https://docs.yubico.com/yesdk/users-manual/application-oath/uri-string-format.html
https://github.com/google/google-authenticator/wiki/Key-Uri-Format

To create a QR code you can use the qrencode Linux tool, which can generate an ASCII graphics QR code.



Example code to generate a QR code for TOTP setup


echo "otpauth://totp/NashCom:nsh@acme.com?secret=$(echo test | base32)&issuer=NashCom&algorithm=SHA1&digits=6&period=30" | qrencode -tANSI256 -o -



Image:Adding TOTP to your own application

Example C code


Without error checking, the C code to generate a TOTP code boils down to this:


oath_init();   /* initialize liboath */

oath_base32_decode (SecretB32, strlen (SecretB32), &pSecret, &len);   /* decode the Base32 secret */

oath_totp_generate2 (pSecret, len, now, OATH_TOTP_DEFAULT_TIME_STEP_SIZE, OATH_TOTP_DEFAULT_START_TIME, 6, flags, szOTP);   /* 6-digit TOTP written to szOTP */

oath_done();   /* release liboath resources */


It took me a moment to bring all those pieces together.

Especially on the C code side, the important part is that you want to store the Base32-encoded secret and use the conversion routine to decode it again as input.

Don't try to store the decoded string and pass it manually.
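
For completeness, building a small tool around this could look like the following (the file name totp.c and the static library path are assumptions; the header comes from the liboath development package):

# dynamic linking against liboath
gcc -o totp totp.c -loath

# static linking; the library path varies by distribution
gcc -o totp totp.c /usr/lib64/liboath.a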


Conclusion


Now you have all the pieces to generate and verify TOTP codes, either on the command line or in your own application.

For security reasons I would not invoke the command-line tool from an application and instead statically link the lib into your application, as shown in my simple example.


My first use case will be my own "sudo su -" implementation, using TOTP to switch to root instead of a password.

The tricky part will now be to store the secret in a way that nobody can read it. But that's a different story.


DominoBackupRunFaster=1 with a file back-end

Daniel Nashed  11 April 2024 20:05:14

The standard configuration for Domino Backup is a file back-end. This mostly makes sense with de-duplicating storage.
This could be, for example, a NetApp appliance or any other de-duplicating storage device.


An appliance or Linux machine running ZFS as the file system with compression enabled is also a good backup target.

A plain backup to normal storage does not make much sense, because every backup would add the full size of your NSF files to the backup storage.


When Domino Backup was introduced in Domino 12.0, the native Domino file copy operations used a quite small block size, which led to low throughput rates on Windows and Linux depending on the back-end.

Therefore Domino Backup increased the buffer to 128 KB by default, with the option to increase it further up to 1 MB.


Depending on your storage back-end and file-system, the following parameter can be a true RunFaster=1 parameter for you.


notes.ini FILE_COPY_BUFFER_SIZE=1048576
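
The parameter can presumably also be set on the server console via the standard set config command (I would still verify the backup task picks it up, and restart it if in doubt):

set config FILE_COPY_BUFFER_SIZE=1048576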


If you are using Domino Backup with a file back-end, you should really try this out and report the difference back here, including your OS version and type of storage (disk, NFS mount, Windows share etc.).


See my recent Proxmox ZFS de-duplication blog post for ZFS de-dup performance.
The parameter was also listed there, but maybe it wasn't sufficiently highlighted.


-- Daniel

Linux - Using Cron to schedule periodic jobs like certificate updates

Daniel Nashed  10 April 2024 09:38:27

In all these years I have never looked into cron.
But it is really straightforward functionality, which Linux itself uses.


You can either schedule user-specific jobs (see the commands below) or use /etc/cron.d files or /etc/crontab.
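
User-specific jobs are managed with the standard crontab command:

crontab -e    # edit the current user's crontab
crontab -l    # list the current user's entries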


There is a certificate update script  -->
https://github.com/HCL-TECH-SOFTWARE/domino-cert-manager/blob/main/examples/nginx/cert_upd_nginx.sh

I have not automated it end to end yet.


A quick look into /etc/crontab shows how it works.


I also added Certificate URL Health checks on certstore.nsf on top.

This should automatically pull updated certs from certstore.nsf daily and update the NGINX config.


-- Daniel



SHELL=/bin/bash

PATH=/sbin:/bin:/usr/sbin:/usr/bin

MAILTO=root


# For details see man 4 crontabs


# Example of job definition:

# .---------------- minute (0 - 59)

# |  .------------- hour (0 - 23)

# |  |  .---------- day of month (1 - 31)

# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...

# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat

# |  |  |  |  |

# *  *  *  *  * user-name  command to be executed


07 07 * * * nginx /local/nginx/cert-update-dnug-lab.sh >> /local/nginx/cert-update.log 2>&1


Howto convert cert formats from and to PEM

Daniel Nashed  10 April 2024 08:17:05

CertMgr uses PEM internally for all operations. PEM is the most important format.
But you might get your files from your admin or a CA in different formats.


CertStore can import and export PEM and PKCS12 (PFX, P12).
But this might not always work the way you expect because of legacy encryption.


I just wrote a new howto document providing some background and OpenSSL command-line options.

https://opensource.hcltechsw.com/domino-cert-manager/howto_convert/
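
As a quick reference, the two most common conversions look like this (file names are placeholders; see the howto for the details and edge cases):

# PKCS12 (PFX/P12) to PEM, leaving the private key unencrypted
openssl pkcs12 -in certs.pfx -out certs.pem -nodes

# PEM certificate and key to PKCS12
openssl pkcs12 -export -in cert.pem -inkey key.pem -out certs.pfx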

If you are using CertMgr you might also want to look into another document added a while ago:

Anatomy of a TLS Credentials document

https://opensource.hcltechsw.com/domino-cert-manager/tls_credentials_anatomy/


I hope this type of information helps you understand some of the background and also helps you convert your certs.


-- Daniel

OpenSSL past and present -- what you need to know about standards and conversions

Daniel Nashed  10 April 2024 23:01:17


OpenSSL is an open source project that is part of most software today.
It is an integral component of Linux and the foundation much software is built on.


There are three major streams you should know about:


- 1.0.2 LTS

- 1.1.1 LTS

- 3.0.x LTS


The jump in the version number to version 3.0 is an indication of a major change.

- OpenSSL 1.0.2 should have been avoided for quite some time now, and I would personally also move off 1.x in general.
- OpenSSL 3.0 is modularized and supports loading different providers, like the FIPS provider.

But it also deprecates some older functionality, which must be loaded explicitly from a legacy module.
On the other hand, it uses some defaults which older software does not support.

Not all software has made the jump to at least OpenSSL 3.0.

Therefore it is important to understand some new defaults and some removed standards.



OpenSSL development changes

If you are an OpenSSL developer, you know a lot more changed under the covers, and a lot of functionality needs to be changed to be fully OpenSSL 3.x compatible.
The first functions you run into are the RSA/EC keys, which should be replaced with EVP keys in OpenSSL 3.0.
Using the fetch functionality is also important. You can find a good starting point in the documentation here -->
https://www.openssl.org/docs/manmaster/man7/ossl-guide-libcrypto-introduction.html

Looking at Milan's post today about an issue he ran into, I took a look at the old format he was using.
(https://milanmatejic.wordpress.com/2024/04/09/hcl-notes-crash-while-importing-pkcs12-database-to-the-hcl-domino-certificate-manager/)

The problem is already escalated to development, and there is an SPR and a fix going into the next release and Fixpacks.
Reading the old format without the MAC is problematic in general. But it should not crash.
Below is a command line to convert the PKCS12 file even with modern OpenSSL 3.0.x.


------

Export/Import/Conversion challenges

I created some test PKCS12 files without a MAC and noticed the -nomac option is only available in older OpenSSL versions.
Versions 3.0 and above do not have this functionality any more.

Windows still uses this old version and creates encrypted PKCS12 files and also encrypted PEM files with a quite old standard.


I tried to open the PKCS12 file with my own certificate command-line tool, which statically links the latest OpenSSL 3.x version, and ran into an error message importing the very old format.
The same happens when you try to open it with an OpenSSL 3.0 command line:


openssl pkcs12 -in mac.pfx -out export.pem -nodes

Error outputting keys and certificates

40672F2A027F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:../crypto/evp/evp_fetch.c:349:
Global default library context, Algorithm (RC2-40-CBC : 0), Properties ()

If you want to convert a legacy PKCS12 file, you need to specify the -legacy option, as shown below.
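
For the file from the example above, the working conversion then looks like this:

openssl pkcs12 -in mac.pfx -out export.pem -nodes -legacy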



Modern encryption standard used

Here is the text from the OpenSSL documentation which describes the new standard and the legacy option.


The default encryption algorithm is AES-256-CBC with PBKDF2 for key derivation.

When encountering problems loading legacy PKCS#12 files that involve, for example, RC2-40-CBC, try using the -legacy option and, if needed, the -provider-path option.


-legacy


Use legacy mode of operation and automatically load the legacy provider.
If OpenSSL is not installed system-wide, it is necessary to also use, for example, -provider-path ./providers or to set the environment variable OPENSSL_MODULES to point to the directory where the providers can be found.


In the legacy mode, the default algorithm for certificate encryption is RC2_CBC or 3DES_CBC depending on whether the RC2 cipher is enabled in the build.
The default algorithm for private key encryption is 3DES_CBC. If the legacy option is not specified, then the legacy provider is not loaded and the default encryption algorithm for both certificates and private keys is AES_256_CBC with PBKDF2 for key derivation.



Conclusion

Use the -legacy option if you really need it to read old formats. But make sure you use the new default, more modern and secure standards whenever you can.

Newer versions might at some point no longer be able to read older formats.

The CertStore functionality isn't directly built on OpenSSL code, but it uses the same modern standard for export.
Import and export are client code. The encryption is always performed on the client.

In case you need an older encryption standard for Java and other applications, there is a client-side notes.ini setting to lower the standard to 3DES:

PKCS12_EXPORT_LEGACY=1

But again, this is just a fallback intended for compatibility.

Importing PKCS12 and PEM encrypted files will still work with older formats without any setting.
Only the missing MAC is a problem, which can be avoided with the OpenSSL command line shown above.



Domino meets Grafana & Loki

Daniel Nashed  7 April 2024 08:17:19


The latest Sametime version offers a graphical statistics dashboard based on Grafana and Prometheus.

Domino statistics out of the box don't play well with Grafana.

Prometheus uses a pull model, and the Domino stats package added in Version 10 only supports the push model.
Sametime uses the push gateway. But because the Domino statistic names need to be transformed anyway, I wrote a small server task to provide the stats through node_exporter, which is already used to provide Linux system statistics.
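
One common way to feed external stats into node_exporter is its textfile collector. A minimal sketch, purely illustrative and not the actual server task (metric name and directory are assumptions):

# node_exporter must run with --collector.textfile.directory pointing here;
# files need the .prom extension and Prometheus exposition format
echo 'domino_server_users 42' > /var/lib/node_exporter/textfile_collector/domino.prom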


Besides statistics, I also looked into Grafana Loki to collect logs and make them available over the Grafana interface. The data is collected by Promtail.


After a couple of interesting experiences and iterations collecting the data, I am now at the stage where I can look into real world statistics in production.


For now I am building my own dashboards and trying to better understand the magic behind Grafana.

The key is to collect the data in the right way. Especially bringing the Domino stats into the same metrics collector used by the OS makes the statistics much easier to evaluate.


One next step could be converting some Domino text statistics into labels (e.g. the device names).

But it sounds like the platform stats (which Domino only collects once per minute) might not be as useful as the node_exporter native Linux stats.


Is anyone using Grafana today? What are your key metrics?

How did you build your integration to get the data out of Domino?

I am loving my panel already, and it brings up new ideas when checking my server. The drop of the SAI once in a while caught my interest...


-- Daniel



Image:Domino meets Grafana & Loki
