Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...

Domino Blog template search fix

Daniel Nashed  17 April 2021 08:53:58

The Domino community is still one of the most open and helpful communities I know.
I had an issue with the search in my blog after updating my server to Domino 11 and I had no idea where this was coming from.

Ben Erickson from Trusted Computer Consulting, LLC was searching for a post on my blog and noticed the search was broken.
He sent me this quick fix, which he got from HCL support, because he had run into the same issue.

The background of the problem is a security change.
Domino 11 sends "X-Content-Type-Options: nosniff" for security reasons, which doesn't work well with the content type the agent returns.
So the solution is to change the agent involved to generate the right content type.
Here is what Ben got from support and summarized out of the ticket:

-- snip --

1. Find the relevant code in the dXSearch agent and comment it out:

out.println("Content-Type: text/html;charset="+configdoc.getFirstItem("config_xmlencoding"));

2. replace it with the following:

out.println("Content-Type: application/x-javascript");

-- snip --
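If you want to verify the fix from the command line, you can check the response headers of the search agent. A quick sketch (the URL is a placeholder -- use your blog's dXSearch URL):

```shell
# The search response should now be served as application/x-javascript
curl -sI "https://blog.example.com/blog.nsf/dXSearch?query=domino" | grep -i "content-type"
```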

More blog template improvements?

I would really like to see more improvements for the blog template. There are some other issues I would like to see fixed.
For example, the design isn't responsive at all and does not work well on mobile devices.

Maybe someone has looked into this already and can help all the others still using the blog template?
There is no other blog template in the Notes space I could switch to, and I don't want to start over and lose all the history...

What do you think? What are others doing?

For now I am already happy that the search is working again...

Thanks, Ben, you made my day!

-- Daniel

Domino Unix Start Script Tip: debug logs

Daniel Nashed  14 April 2021 16:43:47

Maybe I should start writing up tips for some not so well known functions in the Domino start script for Linux & Unix.
I implemented some of the functionality just for my own convenience :-)

For example, when you want to debug an issue, just run "domino archivelog" once to archive your current logs into a ZIP.

Then do your test and run the command again with an additional tag:

domino archivelog domino11_connect

Using Domino config File [/etc/sysconfig/rc_domino_config]

Archived log file to '/local/notesdata/notes_210414_164321_domino11_connect.log'

This will archive the log with the current date and time as usual, but append the tag you specified.

You can continue with your next test case and later collect the ready-to-go ZIP files.
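The resulting file names follow a simple pattern; here is a minimal shell sketch of the naming scheme (illustrative only -- the real logic lives in the start script):

```shell
#!/bin/bash
# Sketch: the archive name is composed from a date/time stamp plus the optional tag
tag="domino11_connect"            # example tag
stamp=$(date +%y%m%d_%H%M%S)      # e.g. 210414_164321
archive="notes_${stamp}_${tag}.log"
echo "$archive"
```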

    Domino V12 Beta 3 CertMgr updates and a workaround

    Daniel Nashed  14 April 2021 07:59:17

    There have been a couple of important changes and additions in Beta 3 for CertMgr.
    Let me share the highlights from my point of view, and also give some tips to avoid smaller pitfalls I have heard about from others in the community testing the beta.

    "Servers with access" selection is important

    In Domino V12 Beta 3 it is very important to select a "Server with access" for a TLS Credential that you create or update.
    Without a server name, the new TLS cache will not use it.  In a one-server scenario in Beta 2 and earlier, the CertMgr server was always the same server as the server using the TLS Credential.

    Starting with Beta 3, domain-wide deployment is fully implemented and certstore.nsf can be replicated in the domain (loading certmgr on another server will automatically create the database).
    Even though a CertMgr server can technically always decrypt the private key of a TLS Credential, a server will not use TLS Credentials without being selected for them.

    You have to specify the server in "Servers with access" and let CertMgr re-encrypt the document (once you change the field you are prompted).

    Workaround for the server select button

    Sadly the new server select button, which should have made the selection of servers easier, isn't working in the Beta 3 template.
    You can either a) paste in the server names manually, b) change the code of the button, or c) add a smart icon with the code listed below.

    This button can also help in your own applications to select servers. There is no dialog showing only servers, and the normal names dialog cannot be customized.
    It would be a great addition to be able to filter names-field address dialogs: show only users, only groups, only servers -- or maybe even a custom formula?

    TLS Credentials cache

    There is a new TLS Credentials cache, used instead of the older KYR file cache as soon as certstore.nsf is available on a server.
    The new cache takes full advantage of the certstore.nsf database, with new logic also covering wildcard certificates, and it can distinguish between RSA and ECDSA keys.

    And most importantly, you don't need to restart any internet tasks like HTTP or SMTP to add or update a certificate :-)

    Trusted roots

    Beginning with Beta 3, the trusted roots are also part of certstore.nsf.
    You can import your own trusted roots and make them available to TLS Credentials documents.

    Trusted roots are important for client certificate authentication and had to be manually imported into KYR files in earlier versions. Now it is an easy-to-use dialog in certstore.nsf.
    You find the new dialog on the Security/Keys tab. See the example below -- of course I don't want to use Let's Encrypt for client certs .. This is just an example of how to select those trusted roots.

    Image:Domino V12 Beta 3 CertMgr updates and a workaround

    Let's Encrypt and the manual flow also use the new "Trusted Roots"

    Now that trusted roots are available in certstore.nsf, the separate trusted root information in ACME account documents has been removed.

    The Let's Encrypt flow and also the "manual certificate" operations flow automatically use the trusted roots in certstore.nsf.
    That means the trusted root is automatically added during those operations if it is found among the trusted roots in certstore.nsf.

    Beginning with Beta 3, certstore.ntf has also been updated with trusted roots for Let's Encrypt Staging and Production.

    But you might still see untrusted Staging certificates, because Let's Encrypt currently keeps changing the trusted root used for RSA keys.

    The new root for ECDSA keys and the Let's Encrypt production roots are working.

    I would ignore the missing trusted root warning for the Staging certificates. They cannot be used in production anyway.
    The Staging environment is intended for setup and testing. You can just switch to production any time if you feel like your configuration is OK.

    Pasting certificate PEM data with "automagical" processing

    Not everyone is eating certificates for breakfast.  It was always difficult to find the right trusted root and intermediate certificates.
    And the most difficult part was to bring the certificate chain into the right order:
    it had to be sorted from leaf certificate to root, with all the intermediate certificates in between.

    Beginning with Beta 3 you can paste PEM data in any order and the server will try to make sense of what has been pasted.

    - Any order can be used
    - Duplicate certificates or intermediate certs are ignored
    - Non-matching certs are ignored
    - Missing intermediates and trusted roots are added from the "Trusted Roots" documents in certstore.nsf

    So that means you can not only add your own CA trusted roots but also intermediate certificates to the new "Trusted Roots" documents.
    Intermediates will show up with a warning that they are not root certificates. But they are used for chain completion.

    The logic first finds the leaf certificate in the pasted PEM data and will then auto-complete the chain, from the pasted PEM data and from the Trusted Roots in certstore.nsf if not found in the pasted PEM data.

    Finally the "Certificate Health" check is run to verify if the certificate chain is OK.
    So hopefully in all your certificate operations you will see a green "Valid" for all your TLS Credentials in the future, as shown below.

    Image:Domino V12 Beta 3 CertMgr updates and a workaround

    Conclusion / Next Steps

    If you have not tried the Domino V12 Beta and you are interested in certificate management for web server certificates, you should really have a look into this final beta -- now that all functionality is in place.

    I have been helping many customers create KYR files for their environments, and for most customers, even after years, this was still a mystery.
    In the future you won't need cryptic tools like OpenSSL to create keys or CSRs. And the import process for certificates will be very straightforward.
    So beginning with Domino V12 you will not need any *.kyr files any more! But they are still supported!

    Oh, in case you missed it ... this was already in Beta 2: you can also use CertMgr to import your existing *.kyr files into certstore.nsf with a simple server console command.
    Either for a single .kyr file, or for all .kyr files specified in the server document and all internet sites.
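    As far as I recall from the beta documentation, the console commands look like this (treat the exact syntax as an assumption and check the Beta 3 documentation):

```
load certmgr -importkyr keyfile.kyr
load certmgr -importkyr all
```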

    I am looking forward to interesting presentations about CertMgr and certificate management with Domino V12.

    There is a lot to discover beyond the simple and standard use cases. And there are also additional integration stories coming up.
    For example if you want to use certificates created by CertMgr for Sametime Meeting servers or an NGINX server etc.

    The integration options for REST based DNS API providers to create DNS TXT records for DNS-01 challenges will be an interesting topic for future blog posts and presentations.

    HCL provides two out-of-the-box examples posted in the beta forum as ready-to-go DXL files to import, for Cloudflare and Hetzner (a German provider).
    And I have implemented and tested a couple of other provider interfaces like ACME DNS, Digital Ocean and some crazy variants with CNAME validations.

    There are plans to have an open source platform for the community to share those integrations -- Stay tuned.

    There is a lot to discover beyond the standard documentation. And there is a lot of room for your own integrations based on the provider interfaces.
    I have been using CertMgr already to automatically submit requests and pick up certificates via the OpenSSL command line and my own custom tools.

    -- Daniel

    Workaround code for the server select dialog issue

    REM {Assumption: the "Servers" view is in the Domino Directory (names.nsf)};
    DB := "names.nsf";
    Server := @Subset(@DbName; 1);

    @If (Server = ""; @Return (""); "");

    choice := @PickList( [Custom]; Server : DB; "Servers"; "Servers with access:"; ""; 3);

    @SetField ("Servers"; @Unique(@Name([Canonicalize]; choice : Servers)))

    Podman 3.x Health Script Issues with OCI Image Manifest Format

    Daniel Nashed  11 April 2021 23:09:27

    CentOS 8 Stream updated to Podman 3.1 -- in contrast to CentOS 8, which still comes with Podman 2.2.1.

    When building Docker images that contain a health check on Podman 3, you get the following error:

    WARN[0340] Healthcheck is not supported for OCI image format and will be ignored. Must use `docker` format

    It turns out that Podman changed the default to the OCI image format. I am not sure in which version exactly this started.

    But the OCI format doesn't support the standard health checks. And it probably makes sense to stay with the Docker V2 format for other reasons.

    When you look into the image with inspect, you see the manifest type and version:

    "ManifestType": "application/vnd.oci.image.manifest.v1+json"

    How to change the manifest format back to Docker format

    1. There is a --format option for the build command:


    Control the format for the built image’s manifest and configuration data. Recognized formats include oci (OCI image-spec v1.0, the default)
    and docker (version 2, using schema format 2 for the manifest).

    Note: You can also override the default format by setting the BUILDAH_FORMAT environment variable.
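    For example (the image name is just a placeholder):

```shell
# Build with the Docker v2 manifest format so HEALTHCHECK is honored
podman build --format docker -t mydomino:latest .

# Or set the default once per session
export BUILDAH_FORMAT=docker
```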

    2. Or you can specify the format via an environment variable:

    export BUILDAH_FORMAT=docker

    Once set, Podman will build using the Docker format again:

    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json"

    For our Domino on Docker community image I added the variable to our build configuration.
    If not specified in the configuration, it defaults to "docker".

    Continue using dots in servernames with Domino V12

    Daniel Nashed  10 April 2021 18:59:38

    Many of us have used a dot in our Domino server common names because, especially for internet-facing servers, this made DNS a lot easier.
    For example, one of my Domino servers is named:

    With Domino 12 the dot has been removed from the list of supported characters.

    So the characters "&-. _" are prevented from being used when registering a server.

    The underscore was arguable, because it is not officially supported in DNS.
    I recall that discussion from when I started with Notes ages ago.
    But the name choice also had to take NETBIOS into account.

    It is always good to prevent naming that might cause issues later.

    But in this case the dot is a really useful character and safe to use.

    I would still avoid underscores and stay with "-".
    I just checked: Let's Encrypt, for example, rejects underscores.

    There is a way to allow underscores and dots again in Domino V12.


    If you are looking into the final beta 3, this info might already help you testing ...

    -- Daniel

    Updating to Domino 11 FP3 in 30 seconds

    Daniel Nashed  8 April 2021 15:49:54

    Domino 11.0.1 FP3 has been released and I added FP3 to our Docker Community GitHub project in the develop tree today.

    To update my servers I have created a custom image.

    I first ran the standard image build for Domino.
    Afterwards I used the new domino_container script to build an add-on image containing my NashCom tools and pushed it to my JFrog registry.

    From there I can pull it on any of my servers.

    The domino_container script provides configuration options to specify all parameters, including an image name.
    In this case I changed the image name, but an update works the same way with the same image name.

    After pulling the image, the domino_container inspect command shows the current and the new version.
    To update my server I just run domino_container update.

    This command does the following:

    - Stops the container (which is running the domino start script inside)

    - Removes the container

    - Runs a new container with the new image

    .. and the Docker image automatically takes care of applying the new templates (if any -- not in a fixpack)
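    A rough sketch of what this corresponds to with plain podman commands (container and image names are examples; the real script also handles systemd, volumes and run options for you):

```shell
podman stop domino-nashcom                # Domino shuts down via the start script
podman rm domino-nashcom                  # remove the old container
podman run -d --name domino-nashcom \
  -v /local:/local \
  localhost/nashcom/domino:latest         # recreate from the new image; volumes keep the data
```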

    Let me share the output from my production server below.

    There isn't full documentation for the domino_container script yet. But I have a couple of business partners already using it.

    Also in our DNUG Container workshop -- after all the other theory and hands-on with Docker and K8s -- we will also have a brief look into the container script.

    It really simplifies Domino on Docker & Podman deployments.

    Workshop link with a few seats left -->

    Before I forget.. Here is the link to the FP3 fixlist -->

    -- Daniel

    domino_container inspect

    (Using config file /etc/sysconfig/domino_container)

    Info: New Image Version available!


    Runtime        :  Podman 2.2.1

    Status         :  running

    Health         :  healthy

    Started        :  30.03.2021 12:51:43

    Name           :  domino-nashcom

    Current Image  :  localhost/nashcom/domino:latest

    New Image      :

    Version CNT    :  11.0.1FP2

    Version IMG    :  11.0.1FP3

    Domino Ver CNT :  11.0.1FP2

    Domino Ver IMG :  11.0.1FP3

    BuildTime CNT  :  30.03.2021 12:50:03

    BuildTime IMG  :  08.04.2021 15:26:20

    Hostname       :

    Volumes        :  /local/podman/local

    Mounts         :  /local


    Container ID   :  d33faab27a09

    Image-ID CNT   :  db7126714fe7

    Image-ID IMG   :  a78a7e5f38ed


    Image Size     :  1652 MB

    NetworkMode    :  host

    Driver         :  overlay

    [root@notes domino-docker]# domino_container update

    (Using config file /etc/sysconfig/domino_container)

    Info: New Image Version available!

    Updating Container [domino-nashcom] ...

    Stopping Container ...

    Stopping systemd domino_container.service

    Creating & starting new Container [domino-nashcom] ...

    Starting systemd domino_container.service

    Successfully updated Container [domino-nashcom]

    [root@notes domino-docker]# domino_container inspect

    (Using config file /etc/sysconfig/domino_container)


    Runtime        :  Podman 2.2.1

    Status         :  running

    Health         :  healthy

    Started        :  08.04.2021 15:32:32

    Name           :  domino-nashcom

    Image          :

    Version        :  11.0.1FP3

    Domino Ver     :  11.0.1FP3

    BuildTime      :  08.04.2021 15:26:20

    Hostname       :

    Volumes        :  /local/podman/local

    Mounts         :  /local


    Container ID   :  ede855e4652c

    Image-ID       :  a78a7e5f38ed


    Image Size     :  1652 MB

    NetworkMode    :  host

    Driver         :  overlay

    Domino V12 Backup & Restore leveraging native Borg Backup Integration

    Daniel Nashed  2 April 2021 18:29:15


    Domino V12 Beta 3 introduces Backup & Restore functionality designed for easy integration with third-party backup solutions.

    The server tasks "backup" and "restore" are accompanied by a new dominobackup.ntf/nsf which holds all the configuration, logs and a database inventory.
    There is an admin-friendly Notes based UI which allows you to restore databases with many options, including restoring deleted documents and folders into the original database.

    The solution is intended to complement existing backup solutions and to make it easier for customers and partners to integrate with existing backup solutions which are not Domino aware today.

    This includes simple solutions based on

    • Integrated file backup operations
    • Custom scripted integration
    • and also Snapshot backups!

    It's all a matter of providing the right integration scripts.

    Domino Backup & Restore itself is not intended to be a full backup application in the classical sense.
    It is more of a middleware and integration point on the one side, while providing very flexible restore operations on the other.

    The focus is on NSF backup and transaction log backup -- the two components a classical file backup cannot cover without an application-specific backup agent.

    That means you still have to back up additional files in the data directory, like notes.ini, and also the DAOS repository, with a standard file backup -- or have them included in a snapshot.

    Storage Back-End for the default file copy implementation

    By default Domino Backup is configured to perform a file copy backup to a location you define in the main configuration tab.

    But running a full backup of all your NSF files to a central location every night doesn't really scale from a resource-usage point of view.

    If you have de-duplicating and compressing storage, it can still be a valid option even for larger environments.

    There are many solutions, and even the free ZFS file system offers dramatic storage reduction for nightly backups of Domino databases.
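    As an illustration, enabling compression and de-duplication on a ZFS dataset used as a backup target only takes a few commands (pool and dataset names are examples):

```shell
zfs create tank/dominobackup
zfs set compression=lz4 tank/dominobackup
zfs set dedup=on tank/dominobackup        # dedup needs plenty of RAM -- test before production use
zfs get compressratio,used tank/dominobackup
```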

    Besides ZFS, there is for example the backup vendor Cohesity, offering a file share (internally called a "view") to back up data.
    This share is de-duplicated and compressed and can be part of a backup job on the Cohesity side.

    You can have similar storage optimization also on storage appliances from NetApp and others.

    One of the advantages is that delta files created during backup can be automatically applied to the databases.
    The resulting backup is consistent on its own -- without the need for a restore operation.

    I have customers running a similar configuration with Cohesity backup in production today, with very good de-duplication results.

    Domino Storage Optimization first

    In this customer case we looked into optimizing the compact strategy to further reduce physical backup storage usage,
    mainly benefiting from all the storage optimizations for NSF (data/design compression, off-loading attachments to DAOS, moving view indexes out via NIFNSF).

    A lot of storage optimization options have been introduced over the years, and you should take advantage of them for your backups too -- not just for performance during the day.

    The DBMT server task can be leveraged to compact databases with space pre-allocation once per week, or even only every two weeks!
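    For example, a console command along these lines runs DBMT compacts in a nightly window (the parameters are illustrative; check the DBMT documentation for your environment):

```
load dbmt -compactThreads 4 -range 1:00AM 5:00AM
```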

    Example Implementation: Borg Backup

    Besides file backup operations, there are more flexible options to integrate with existing file backup applications and make those applications Domino-aware.

    To show how to fully integrate Domino Backup with an existing backup solution I am using Borg Backup for the following reasons:
    • Open source & free solution available in CentOS (epel-release repository)
    • Backup de-duplication and compression support!
    • Support for encrypted backups
    • Local and remote repositories (securely communicating over SSH)
    • Very easy to set up
    • Command-line interface available for backup and restore

    Just showing an example would only be going half way. I have already moved one of my servers to Podman, running
    our Domino Docker community image, and I have been looking for an efficient, fully supported Domino backup solution.

    Bringing Borg Backup into a container environment also makes it very easy to deploy.

    So I took the following components and glued them together with all the software and options needed.

    I can't go into all the details in this blog article, but it is all open source and included in our Docker project. You can have a look yourself (and ask questions) ...

    I will also write another short blog article describing all the steps involved when using all the components described here.

    Borg Backup

    Domino V12 Backup & Restore Beta 3

    Domino Docker Community Project

    Nash!Com Domino Container Script (domino_container)

    Used and included in the Docker project

    Borg Backup Integration

    Let's look into all the components involved step by step. There is a separate blog entry for installing Borg Backup inside Docker.

    I added the Borg Backup software along with the OpenSSH client and the FUSE software to our Domino Docker container.

    So when you build your image using the "-borg" option, the build script automagically includes all components required for Borg Backup.

    This also includes integration shell scripts leveraged by Domino Backup & Restore later.

    Those scripts are technically part of the NashCom start script and are added to the /opt/hcl/domino/backup/borg directory during installation.
    Of course this integration can also be leveraged on any other Domino V12 on Linux installation without containers.

    This directory also contains the default DXL configuration file.
    All the references and names in this configuration file already point to the right script locations.

    You can simply import them into your dominobackup.nsf via the action menu.

    The existing file-system copy backup configuration is enabled by default and needs to be disabled.

    New directories in the container

    The Domino Docker project already defines a couple of directories which can be used as volume mount points.

    For general backup and restore operations there are two new directories:

    --> a directory which can hold file-system backup data for NSF/translog and which also holds the backup log files by default (configurable in the config document).
    --> an empty mount point with proper access permissions, allowing backup solutions like Borg Backup to mount their restore data for copying restored databases back.

    When you choose the Borg Backup build option, another default directory is created:

    --> can be used with a separate volume for a local Borg Backup repository.

    Those directories are all created automatically at image build time, so they are available later as volume mount targets.

    Borg Backup scripts

    The software used is all located in the extras directory of my start script -->
    So you can also leverage those scripts natively on Linux, without the Docker project.

    Running a new container with Borg Backup support

    Now that Borg Backup is added to your image, you need to create a new container with the Borg software installed.

    In my previous blog post -->
    I already explained that the restore operation needs to mount a Borg backup.

    That's where the /local/restore mount point is used. But user-space mounts only work if you have the /dev/fuse device available in your container.

    To add this to your container, add the following options to your docker or podman run statement:
    --device /dev/fuse --cap-add SYS_ADMIN
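    A minimal run sketch with those options in place (names and volumes are examples):

```shell
podman run -d --name domino \
  --device /dev/fuse --cap-add SYS_ADMIN \
  -v /local:/local \
  nashcom/domino:latest
```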

    In case you are running the Domino container script, there is a new option available which enables this automatically for you.

    # Domino V12 Borg Backup support (enables FUSE device)


    If you already have the domino_container script installed, you have to update the script to get the new logic.
    You just need to execute the install_domino_container script again.

    Final Steps to get your first backup running

    Now you can import the DXL configuration into dominobackup.nsf and disable the existing configuration.
    The database is created when you run the backup task for the first time.

    Once the database is created, you create a new Borg Backup repository in the default location (you should add a separate volume for /local/borg).

    Inside the container, run the following command to create a new backup repository.
    The configuration you imported via DXL on the Domino backup side already contains this target location.

    borg init --encryption=repokey-blake2 /local/borg
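    To sanity-check the new repository you can query it right away (Borg will ask for the repokey passphrase):

```shell
borg info /local/borg     # shows repository ID, location and encryption mode
borg list /local/borg     # lists archives -- empty right after init
```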

    Now that you have all components in place, you can start your first backup and restore using the standard Domino Backup & Restore operations.

    load backup

    I would start with something simple first. The following command line takes a backup of your log.nsf:

    load backup log.nsf

    Next steps

    So this was a lot of material to cover. I will write up another blog entry showing all the steps again as a script.

    And a fellow HCL Ambassador already tested it in his environment and will show it running with production data in a separate blog post on his blog.

    There is a lot of potential using de-duplicating and compression technology with Domino NSF backup.

    See this as a starting point for more information to come.
    This might be also a good starting point for a white paper covering different scenarios and best practices.

    All the information in this blog post is meant to be shared and re-used in other articles or presentations.

    If you are looking into implementing scripts on your own for other backup scenarios, I would be very interested to hear from you.

    From what I heard HCL is planning an open source repository to host integrations like this.

    The scripts included in the GitHub repository are open source and intended for reuse, adaptation, and use as an example implementation.

    Here is a link -->

    And there is also a referenced "zero to hero" script, going from plain CentOS 8 via the Docker community project to Domino V12, including Podman and Borg Backup installation.

    Let me know what you think ..

    -- Daniel

    Domino V12 Beta 3 is almost here ...

    Daniel Nashed  30 March 2021 09:36:47

    Domino V12 Beta 3 is almost here ...

    In case you missed it ... there is still time to register for today's Beta 3 event, which will be the starting point for Beta 3 -->

    The updated online documentation already gives an idea of what is new -->

    But don't ruin your webinar experience -- wait for the event this afternoon ;-)

    Image:Domino V12 Beta 3 is almost here ...

    Free DNS wild-card service from Japan

    Daniel Nashed  30 March 2021 07:33:19

    If you are using a home lab and want to test with many different hosts, having a wildcard DNS entry can be helpful.
    This will not work with Let's Encrypt DNS-01 challenges, because those need DNS TXT records, which are not yet available at MyDNS in a way they can be consumed today.

    It is still a very interesting option, because of the sub-domain you can point to your server.
    And they have a very simple-to-use HTTP request option to update your IP.

    I have created a very simple script to update my IP at MyDNS.
    To determine my public IP I am using the Google STUN servers, which the Sametime meeting server also uses by default.
    You need a TURN client to query the STUN servers and find out your public IP.

    CentOS has the required software included in epel-release.

    yum install -y epel-release coturn-utils

    After installing the TURN client, the following type of script will set your current IP address.

    -- --


    IP=$(turnutils_stunclient -p 19302 | grep "addr:" | head -1 | awk -F "addr: " '{print $2}' | cut -f1 -d:)
    echo "My IP: [$IP]"
    curl "$MID&PWD=$PWD&IPV4ADDR=$IP"
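    A simple way to keep the record current is to run the script from cron; an example crontab entry (the script path is a placeholder):

```
*/15 * * * * /usr/local/bin/mydns-update.sh >/dev/null 2>&1
```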

    This can be quite useful for Let's Encrypt HTTP-01 challenges or test servers at home in general.

    I have many different integrations for my hosted servers and also for my home servers.
    This includes an ACME DNS server, Cloudflare-hosted domains, Hetzner-hosted domains, sub-domains at Digital Ocean, etc.

    But MyDNS is a very simple option to get started with Domino V12 CertMgr and HTTP-01 challenges without a static IP.
    And it allows you to use more than one host name, for example for an SNI configuration.

    Sadly this does not allow requesting wildcard certificates from Let's Encrypt and other ACME providers -- which require DNS-01 challenges.
    As soon as they fully support setting DNS TXT records in a way I can consume with a scripted flow, this will become a great option for looking into DNS-01 flows with ACME as well.

    Free Windows workstation backup from Veeam

    Daniel Nashed  28 March 2021 17:26:44

    Right now I am looking into many different backup solutions to see how they can best be integrated .. Some might already have an idea why ..

    Today I looked into the Veeam backup client for Windows. For command-line integration it doesn't look like a good match so far.
    But the same Veeam client used on servers not only runs against their backup server, but also works stand-alone, for example with USB drives.
    And the best part: it is free!  Have a look -->
    I had been looking for a new local backup solution for my notebook for a while ..  Acronis backup became almost unusable in its last releases.

    So I used a local SSD attached to my notebook and took a full backup.
    Afterwards I took a delta backup, which was done in just 5 minutes.

    The backup is de-duplicated and compressed and freaking fast!  The full backup took 53 minutes for the whole notebook with a quite full 1 TB disk.
    This is the best solution for Windows I am aware of. And it is free, using enterprise backup software from one of the leading companies!

    See the pictures from my first full run and the incremental run.

    This is really cool! And this might be also a good solution for friends & family :-)

    Image:Free Windows workstation backup from Veeam

    Image:Free Windows workstation backup from Veeam



      • [IBM Lotus Domino]
      • [Domino on Linux]
      • [Nash!Com]
      • [Daniel Nashed]