Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...

Changing your API Tokens regularly - especially after a recorded presentation published on YouTube

Daniel Nashed  18 June 2021 08:52:02

Changing API tokens regularly is a general best practice.
This is especially important if you do a recorded live demo with your production accounts and show the configuration.
I didn't know in advance which accounts I was going to use for my OpenNTF demos.
But I was well aware I would have to change many of my tokens afterwards, because they are also shown in the tracing functionality I demoed.

So I didn't pay much attention to hiding any of the tokens shown on screen and just changed all of them this morning ;-)

Restrict API permissions
Cloudflare has since changed their API, and you no longer need a full-access authentication token for DNS TXT records.
You can now perform all required operations with a scoped API token that can be restricted to just the DNS operations needed.
And the permissions are configurable per domain. Some other providers only have one API key per account covering all operations.

So if you are looking for a provider for DNS TXT integration and want to deploy for example DNS-01 API validation leveraging CNAME delegation, I would really recommend looking into the free basic account at Cloudflare.

Cloudflare is widely used and has a very good, flexible API that lets you narrow down permissions. That's part of the reason why it is the reference API not only for CertMgr but also for other ACME DNS integrations.
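
For illustration, this is roughly what creating an ACME DNS TXT record via the Cloudflare v4 API looks like with such a scoped token (zone ID, token and hostname are placeholders):

curl -X POST "https://api.cloudflare.com/client/v4/zones/ZONE_ID/dns_records" \
  -H "Authorization: Bearer SCOPED_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"type":"TXT","name":"_acme-challenge.example.com","content":"ACME_CHALLENGE_TOKEN","ttl":120}'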

-- Daniel

Domino CertMgr OpenNTF webinar and brand new HCL GitHub repository

Daniel Nashed  18 June 2021 04:45:54

OpenNTF webinar

OpenNTF invited me to an online webinar to speak about the new Domino V12 Certificate Manager and all components involved.
This was the first public presentation with all the technical details and background.
If you are interested in the new Domino V12 CertMgr functionality, you should at least look into the slides. Or watch the replay on YouTube linked below.


HCL GitHub repository for Domino V12 CertMgr

In the webinar I had the pleasure to announce a new open source GitHub repository, which is mainly intended to provide, share and collaborate on CertMgr DNS provider configurations.
But you will also find other CertMgr-related information there.


CertMgr Lab Environment leveraging Let's Encrypt Pebble

I just added a Let's Encrypt Pebble based test environment this morning. It's described in the bonus material of the OpenNTF webinar.
And the directory in the GitHub repo comes with a detailed readme to set up your Docker environment.
You can use it for testing in a local environment without any inbound HTTP connection on port 80 for ACME HTTP-01 challenges, and also for DNS-01 challenges without needing a DNS provider!

The Docker based environment can also be used to trace and understand the ACME flows.
Once you have it set up, you see the incoming requests from CertMgr flowing live on the Pebble console.
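
If you just want a quick look before following the full readme, a minimal sketch of starting Pebble standalone with Docker looks like this (image name and ports as documented by the Pebble project; the repo readme covers the complete CertMgr setup):

# run Pebble with the ACME endpoint on port 14000 and the management interface on 15000
docker run -d --name pebble -e "PEBBLE_VA_NOSLEEP=1" -p 14000:14000 -p 15000:15000 letsencrypt/pebble

# point CertMgr at the test ACME directory:
# https://localhost:14000/dir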


There have been a lot of questions in the webinar chat. Even though this was one of my few presentations where I finished on time including the demos, not all of them could be answered.
I am waiting for the summary of the questions and will make sure they are all covered. The answers will be posted soon.

Still, even with a 90 minute session, not all details can be covered. I think the GitHub repository will be a great place for the community to collect DNS TXT API related information.
Writing new integrations for DNS API providers would be a session on its own. So I am really happy that we got this GitHub project.

You will find configurations for some well known DNS providers. If you have a DNS provider not listed and they have a REST based interface, I really want to hear from you.
Nobody can get accounts for all the different providers. But what has been built into CertMgr should make it really easy to use REST based interfaces and adapt them in minutes.


Resources

June OpenNTF Webinar - Domino V12 Certification Manager

Slides:
https://blog.nashcom.de/presentations/openntf2021_domino_certmgr.pdf

YouTube Video:
https://www.youtube.com/watch?v=sFYdVILM9gU


HCL GitHub Repository for Domino V12 CertMgr
https://github.com/HCL-TECH-SOFTWARE/domino-cert-manager


Let's Encrypt Pebble based lab environment for CertMgr testing
https://github.com/HCL-TECH-SOFTWARE/domino-cert-manager/tree/main/lab/acme


Domino running RSA and ECDSA keys at the same time

Daniel Nashed  15 June 2021 10:17:05


Domino V12 introduced ECDSA support. But you will still need RSA keys in most cases, because not every client is able to handle ECDSA keys and the corresponding ciphers.
This means you need to have both key types configured at the same time.
Depending on how you map your TLS Credentials via server doc/internet sites, it might not work as you expect.


Let me try to explain how the new TLS Cache works and which options you have.


If you looked into the betas, you noticed that HCL tried to get rid of the "kyr" file names completely.
The physical files are not needed any more, and the certificates and keys are stored and distributed securely in certstore.nsf.
But some kind of mapping is needed between the internet site and the TLS Cache, which serves the TLS Credentials for each servertask separately.


Mapping TLS Credentials


If a client is not using SNI (Server Name Indication) to indicate which host it wants to reach, the mapping to the internet site can only be done by either IP address or default internet site.
And in this case the internet task involved (for example HTTP) can only use the configured keyfile name to look up the right TLS Credentials document.

So the name passed is the value set in the keyfile name field in the internet site or server document.

This keyfile/tag lookup is similar to what you know from earlier Domino releases -- it just takes the first matching keyfile name.
There is no special logic to handle multiple keyfile names -- or wildcards -- when performing keyfile lookups.

--> This means you should never specify the same keyfile name for two TLS Credentials documents for the same Domino server.

In a mixed environment with Domino V12 and earlier servers, specifying the name "keyfile.kyr" in one internet site/server document to define the default TLS Credentials is the safest bet.
You have to specify the same keyfile.kyr name in the TLS Credentials document to have a direct lookup mapping.


Your configuration for the RSA TLS Credential document could look like this:


Image:Domino running RSA and ECDSA keys at the same time


Default key used if no key found


The TLS Cache also tries to provide a default kyr file for RSA and ECDSA separately.
Because a server cannot just map any key in the wild, only certain keys are used as default keys.
  • Hostname specified in the server document
  • TLS Credentials specifying the default "keyfile.kyr" (with exactly this default name) are selected as default TLS Credentials.

But this should just be the last resort if no other mapping works.
I would always recommend ensuring all requests are mapped correctly.


Wildcard certificates


For wildcard certificates this can be a tricky configuration. And this differs depending on your environment:


Domino V12 only servers


If all your servers hosting an internet site are Domino V12 and using certstore.nsf, you can specify the wildcard name like *.nashcom.de in the keyfile name field of your TLS Credentials documents.

This will make sure the right key is always mapped, because hostname lookups have extended logic including wildcard certificate matching.
This logic not only works for SNI requests; the keyfile name in the internet site is matched with the same logic.
Plain keyfile names, in contrast, are just matched on the first entry found.


Mixed server environment


In a mixed server environment, configuring the default keyfile.kyr for the RSA TLS Credentials document will ensure the right key is always looked up.
If a server doesn't support ECDSA, it will always fall back to RSA in this case.


Mixed requests with RSA and ECDSA


The type of key used is determined by the hash algorithms requested by the client.
The beginning of the handshake has neither this information nor the name of the server.
So the first lookup is always by the keyfile name specified in the server doc / internet site that is used by default.

If the client passed hash algorithms, this determines the key type used.
If no hash algorithms are specified, by default HTTP uses ECDSA and all other internet protocols will still use RSA if present.

You can change this default behavior via notes.ini if required:

For HTTP --> notes.ini TLSCACHE_HTTP_PREFER_RSA=1
For all other internet protocols, example for SMTP --> notes.ini TLSCACHE_SMTP_PREFER_ECDSA=1

This should usually not be needed, but has been added to ensure all cases are covered.


Logging and testing connections


The parameter CERTSTORE_CACHELOG=1 enables logging/debugging for the TLS Cache.

It will show all parameters of the TLS Cache request and, in a second log line, return the results.
The same thread should always have two matching lines in console.log.

Below you see the first part of the handshake in the first two log lines.
The next two lines are the second part of the handshake, where you see the requested hash algorithms and the SNI name.


HTTP request


openssl s_client -connect blog.nashcom.de:443

[792026:000012-00007FF22F403700] TLSCache-Log-http: CacheLookupRequest -> Host/Tag: [keyfile.kyr] RSAHashAlgs: 0x0, ECDSAHashAlgs: 0x0, Key: 1, Cert: 1, OCSP: 1 TrustPolicy: 1 DNList: 0
[792026:000012-00007FF22F403700] TLSCache-Log-http: CacheLookupResult: [keyfile.kyr] -> [RSA 4096] Flags: 0x0, RSAHashAlgs: 0x1, ECDSAHashAlgs: 0x1, DefaultAssigned: 0 -> Err: 0x0
[792026:000012-00007FF22F403700] TLSCache-Log-http: CacheLookupRequest -> Host/Tag: [blog.nashcom.de] RSAHashAlgs: 0x7C, ECDSAHashAlgs: 0x7C, Key: 1, Cert: 1, OCSP: 1 TrustPolicy: 1 DNList: 0
[792026:000012-00007FF22F403700] TLSCache-Log-http: CacheLookupResult: [blog.nashcom.de] -> [ECDSA NIST P-256] Flags: 0x0, RSAHashAlgs: 0x7C, ECDSAHashAlgs: 0x7C, DefaultAssigned: 0 -> Err: 0x0


openssl s_client -sigalgs "RSA+SHA256" -connect blog.nashcom.de:443

[792026:000013-00007FF22F201700] TLSCache-Log-http: CacheLookupRequest -> Host/Tag: [keyfile.kyr] RSAHashAlgs: 0x0, ECDSAHashAlgs: 0x0, Key: 1, Cert: 1, OCSP: 1 TrustPolicy: 1 DNList: 0
[792026:000013-00007FF22F201700] TLSCache-Log-http: CacheLookupResult: [keyfile.kyr] -> [RSA 4096] Flags: 0x0, RSAHashAlgs: 0x1, ECDSAHashAlgs: 0x1, DefaultAssigned: 0 -> Err: 0x0
[792026:000013-00007FF22F201700] TLSCache-Log-http: CacheLookupRequest -> Host/Tag: [blog.nashcom.de] RSAHashAlgs: 0x10, ECDSAHashAlgs: 0x0, Key: 1, Cert: 1, OCSP: 1 TrustPolicy: 1 DNList: 0
[792026:000013-00007FF22F201700] TLSCache-Log-http: CacheLookupResult: [blog.nashcom.de] -> [RSA 4096] Flags: 0x0, RSAHashAlgs: 0x10, ECDSAHashAlgs: 0x0, DefaultAssigned: 0 -> Err: 0x0
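

To force an ECDSA lookup on HTTPS in the same way, you can restrict the client's signature algorithms to ECDSA (same approach as the RSA example above):

openssl s_client -sigalgs "ECDSA+SHA256" -connect blog.nashcom.de:443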


For SMTP STARTTLS connections you can use a command line like this:

openssl s_client -connect mail.nashcom.de:25 -starttls smtp -crlf
openssl s_client -sigalgs "RSA+SHA256" -connect mail.nashcom.de:25 -starttls smtp -crlf
openssl s_client -sigalgs "ECDSA+SHA256" -connect mail.nashcom.de:25 -starttls smtp -crlf

HCL Domino Online Meeting Integration available

Daniel Nashed  9 June 2021 18:35:59

Wow, this is great news! And this is the first time HCL publishes something like this on GitHub as open source.
The Domino meeting integration for the following providers is planned to be included in 12.0.1. But if you are on 11.0.1 FP3 clients, you can start using it today!
  • Zoom
  • Microsoft Teams
  • Webex
  • GoToMeeting
  • HCL Sametime
There is a way to customize your standard templates, and there is a small web server you have to run for the integration through a Notes database.
The server component can be run for example as a Docker container. I quickly started it today during the DNUG conference meeting when we got the news that it's available.
I have not looked into the template side yet. Just wanted to get this great news out ..

The best starting point for the documentation is here --> https://opensource.hcltechsw.com/domino-online-meeting-integration/
And the GitHub repository for this project is here --> https://github.com/HCL-TECH-SOFTWARE/domino-online-meeting-integration


-- Daniel



Anyone using ZFS on Linux for Domino?

Daniel Nashed  7 June 2021 09:24:04


ZFS (https://openzfs.org) is a quite interesting file system. I have looked into it before but never had a strong requirement to use it.
Besides the license challenges and the fact that it isn't part of the major distributions, I really like what I see at first look.


For Domino Backup we have another good reason to look into ZFS.

Besides Btrfs, it is the only Linux file system with out-of-the-box snapshot support.

For ext4 you can possibly get something similar working with LVM snapshots. But this doesn't look like a scalable, easy-to-maintain approach.


I have a customer running on Proxmox virtualization with ZFS for a long time. And they are quite happy.

For them it was a logical choice because Proxmox is leveraging ZFS in their platform.


ZFS makes a lot of sense, and different settings make sense for different environments and data loads.

There are mainly 3 aspects I am interested in:

  • Snapshots

    This is my current main motivation. With Domino Backup, having a way to snapshot the file system would be a very convenient way to back up and restore.
    What ZFS offers is exactly what we need.

    - command-line integration for creating a snapshot
    - tagging snapshots with a reference to find them
    - mounting snapshots for restore
  • Deduplication

    When it comes to Domino live data, deduplication can save you up to 30% of space.
    But the downside would be reduced performance and higher RAM requirements.
    So deduplication is probably not a good idea for live data.
  • Compression

    Similar to deduplication, this saves some disk space but also reduces performance.
    I have tested with my own 10 GB mail file: compression provided the same ~30% reduction, while deduplication yielded zero savings on top of it.
But there are other advantages as well:
  • On-line Defragmentation
  • Pooled, easy-to-use, flexible storage
  • Easy to extend
  • Support for additional data sets for resilience


Deduplication & Compression not suitable for NSF & Co - But as a backup target!

Deduplication and compression don't sound like a good match for Domino NSF, translog, NIF or FT data.

But they sound like a very good option for DAOS files when hosted for multiple servers -- for example one server hosting them over NFS.


Using ZFS as a remote storage pool to back up your Domino data (based on snapshots) sounds like a great solution, offering a high level of deduplication.

I have tested with databases modified over time (without a copy style compact of course). And the data deduplication rate was very high.



Snapshots


My current focus is Domino Backup optimization.


ZFS offers a very easy to use and flexible interface to create snapshots.

And they can be mounted as a volume on the fly.


So they can either be used as a backup themselves, or provide a consistent source for a backup with Domino V12 backup in snapshot mode.
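
A rough sketch of that flow with the standard ZFS tools (the pool/dataset name tank/domino is just an example):

# create a snapshot tagged with a backup reference
zfs snapshot tank/domino@backup-2021-06-07

# list snapshots to find them again
zfs list -t snapshot

# snapshots are accessible read-only via the hidden .zfs directory ...
ls /tank/domino/.zfs/snapshot/backup-2021-06-07

# ... or can be cloned to a writable dataset for a restore
zfs clone tank/domino@backup-2021-06-07 tank/restore

# remove the snapshot once it is no longer needed
zfs destroy tank/domino@backup-2021-06-07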



Your feedback


I am currently looking into it. But I would really like to hear if anyone out there is already using ZFS for Domino production workloads.

And especially what experience you have with features like deduplication and compression.


Running Domino on IPv6

Daniel Nashed  6 June 2021 16:26:28


In all my years working with Domino, IPv6 was never on the list of things customers asked me for. I know about very few customers using it for Domino.


Domino doesn't use IPv6 by default -- which is good, because it could have side effects in corporate environments, and you want to actively decide to use it.

We had side effects with Traveler, where Java uses IPv6 automatically once it is enabled on the machine, and we had to actively disable it on the Java side.


Enabling it is quite straightforward. There is one switch to enable it:


notes.ini TCP_EnableIPv6=1



Once you set the parameter you have to restart your Domino server! A restart of the port isn't sufficient.


There is one other change depending on your configuration.

In some cases you have to specify IPv6 addresses in the configuration. And because IPv6 addresses contain ":" chars, you have to put them in brackets like the following:


TCPIP6_TCPIPADDRESS=0,[fe80::209:6bff:fecd:5b93]:1352


This format is also needed for internet sites.


Image:Running Domino on IPv6

For example for an inbound SMTP internet site where you have specified an IPv4 address, you have to add the IPv6 address.


Besides that, the server behaves pretty much the same. In all places -- even C-API SMTP callbacks, logging, CGI variables etc. -- we just get the IPv6 address.


So it is quite straightforward -- especially on Linux, where you can use the same port.


Still looking into it -- especially the edge cases, logging and API integration.

On Docker you need to enable IPv6 explicitly in /etc/docker/daemon.json:

{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
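
After changing daemon.json, the Docker daemon needs to be restarted, for example on a systemd based distribution:

systemctl restart docker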

But once enabled it works like a charm on Docker as well.

What are your experiences with IPv6 on Domino?
Is anyone using it? I would be interested to hear ..



References


Domino IPV6 documentation

https://help.hcltechsw.com/domino/12.0.0/admin/plan_ipv6andlotusdomino_c.html

Docker IPv6 documentation

https://docs.docker.com/config/daemon/ipv6

Running out of IPv4 addresses

https://en.wikipedia.org/wiki/IPv4_address_exhaustion


Domino V12 CertMgr corporate CA deployment automation

Daniel Nashed  1 June 2021 07:04:57

Now that Domino V12 is released, we can look into more deployment scenarios.
I have looked into many different scenarios over the couple of months during the beta, played with many integrations and described them in earlier posts.
Let me describe the main use cases I see today. And I am very interested to hear what you would like to integrate with.


1. Servers in public internet

The first deployment scenario would be Let's Encrypt and other ACME compatible CAs with HTTP-01 challenges for servers directly accessible on the internet.
Usually the simplest way is to leverage HTTP-01 challenges to confirm your web server identity.
The only requirement is to run the DSAPI filter shipped with CertMgr to allow ACME HTTP-01 challenge integration.
You can of course also use DNS-01 challenges if your DNS provider has an interface to automate the creation of DNS TXT records
(there will be more DNS TXT integrations available; if you have one you want to use, I would be interested to hear).


2. Servers in intranet using public DNS domains.

Some companies use the same DNS names in intranet and extranet, either via split DNS or other name resolution.
In that case you can either use HTTP-01 challenges on the public facing server or, if the server isn't exposed, use the DNS-01 API.

I have already set up a larger domain in production with wildcard DNS entries for sub-domains, where a single server requests the certificates from Let's Encrypt.
Interestingly, this is the first command-line interface integration, using the AWS CLI for the Amazon Route 53 DNS integration.
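
For reference, such a DNS TXT update via the AWS CLI looks roughly like this (hosted zone ID, hostname and token are placeholders):

aws route53 change-resource-record-sets --hosted-zone-id ZONE_ID --change-batch '
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "_acme-challenge.example.com",
      "Type": "TXT",
      "TTL": 60,
      "ResourceRecords": [{ "Value": "\"ACME_CHALLENGE_TOKEN\"" }]
    }
  }]
}'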


3. ACME protocol internally with a SmallStep CA


As posted a while ago, you can set up your own internal CA or sub-CA to leverage the ACME protocol for deploying internal certificates.

This would work for Domino V12 and probably many other applications with an ACME interface.
Not all applications might offer flexible configuration. The Domino ACME implementation has been tested with different ACME providers and is easy and flexible to configure.

So you would basically use ACME technology to distribute certificates, after validating the requests in the same way Let's Encrypt validates servers on the public internet.
This would be a very elegant and easy-to-use flow -- provided your company allows you to host a sub-CA derived from your corporate CA.

But this would also work in parallel if you deploy a completely new CA and the corresponding trusted roots.
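
For example, with a Smallstep CA the ACME functionality is just an additional provisioner (a sketch assuming step-ca is already initialized; see my earlier post for the full setup):

# add an ACME provisioner to an existing step-ca instance
step ca provisioner add acme --type ACME

# clients like CertMgr then use the ACME directory URL:
# https://<ca-host>/acme/acme/directory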

4. Manual flows integrating with any CA

Besides the ACME integration, CertMgr offers manual flows.

[A] Create a request for a SAN certificate with one or multiple hostnames.
CertMgr automatically creates the private key and CSR for you.

[B] You copy & paste the CSR to your CA for signing.

[C] And finally you paste the certificate back into certstore.nsf to have it automatically processed by CertMgr.


The certificate chain is automatically sorted from the certs you pasted and completed with existing trusted roots in certstore.nsf.
If you need additional trusted roots or intermediate certs, you can import them the same way into certstore.nsf.
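
A quick way to sanity-check a certificate before pasting it back is standard OpenSSL tooling, for example:

openssl x509 -in cert.pem -noout -subject -issuer -dates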

5. Custom flows based on the manual flow functionality

CertMgr has been designed for flexibility and extensibility. There are currently no other automated options out of the box. But you can integrate the back-end operations of the manual flow with your own CA operations.
Having an open interface for integration, similar to what has been implemented for the DNS TXT providers, would be high on my wish list for the next version.

But if you want to integrate your current CA flow with CertMgr and certstore.nsf it's already possible today.

I have just implemented my own "mini CA" based on the existing flows, leveraging what CertMgr already does for us automatically.

[I] Let CertMgr create a private key and a CSR.
[II] Write, for example, an agent that checks for documents in a certain status -- waiting for the manual copy CSR operation -- and let the agent transfer the CSR to your CA.
[III] Get the certificate and chain back (or have the chain already present in certstore.nsf) and submit the request to CertMgr.
[IV] CertMgr will import the certificate and run the health check operations.

This simple flow would be possible today and I have used it already for my own integrations.
Once the certificate is stored in certstore.nsf you can get it deployed domain wide to the right servers and have a very easy to use interface with all information about your certificates.

I see a lot of room for automation and integration. Not all of it has to be provided by HCL.
But the design of CertMgr and certstore.nsf allows integration today.

And even if you don't fully automate the process, the manual flow is already a great help!
No more error-prone manual certificate handling! No old, non-standard kyr files. Only the standard PEM format.


-- Daniel







Important security change Domino 11.0.1 FP3 and 12.0 on Windows

Daniel Nashed  31 May 2021 06:59:47

The following change has been introduced in 11.0.1 FP3 and higher on Windows.
See this technote for all details. But let me explain what this means for you if you are using add-on software.


https://support.hcltechsw.com/csm?id=kb_article&sysparm_article=KB0090343


Other platforms like Linux/UNIX and OS400 have always enforced this type of security.
On Windows it was still possible to start an additional process as a different user and interact with the Domino server.

I have never been a fan of starting processes from outside the server.
The best way to invoke any add-on application is to run it from the server console or to implement and configure it as a servertask.

Not all applications and use cases might allow this type of integration.
But now you have to start the new process with the exact same environment/user as the server process.


In consequence, on a server usually started with the SYSTEM account, you can only use external tasks/code like virus scanners and backup tools if they are also started with the SYSTEM account.
There are rare cases where a server is started with a different account. In that case the additional processes also need to be started by this exact same user.
This makes a lot of sense and will also prevent instability issues.

There is a way to configure additional accounts with access to the server, described in the technote.
But I would really try to configure your environment consistently so you don't need two different accounts.

For details read the technote. If you have questions, let me know. I might update my blog post with a FAQ based on your questions.


Update 31.5.2021 19:00:


After offline discussions with a couple of partners, I had a quick look for workarounds on the Microsoft side to see what would be the easiest way to keep their applications running.


It turned out that the SYSTEM account isn't really as secure as one would think.

I never looked into how I could execute a program using the SYSTEM account, and I thought I would need to write my own service.


After a minute of consulting the web, I was surprised that there is a very easy way to run commands as the local SYSTEM account.


psexec.exe from the famous Sysinternals tools -> https://docs.microsoft.com/en-us/sysinternals/downloads/psexec provides the "-s" option.

One use case an admin asked for was to run commands via Ansible scripts to shut down the HTTP task and restart it later.



psexec.exe -w c:\domino\bin -s "c:\domino\bin\nserver.exe" -c "tell http quit"



For tasks like a command-line restore there is even an interactive mode, which allows you to open a new command-line window running under the SYSTEM account.

See this example. When you run this shell, you can execute any command as the SYSTEM account.


psexec.exe -s -i cmd.exe


whoami

nt authority\system



Conclusion:


Windows is still not where Linux and Unix have been from day one.

Linux and other Unix platforms are designed from ground up as multi-user and multi-tasking operating systems with very clear security models.


Domino always runs in user mode. In fact, you can't even start Domino as the root user.

Only some very restricted resources like kernel tuning and binding to restricted ports below 1024 require root permissions, handled by binaries with the setuid bit set.


If you specify the right umask, all file permissions will be automatically correct and you could even run multiple partitions securely.


On Windows the only real security would be to completely restrict who can access the Windows machine.

And it might even be difficult in a Windows domain to restrict the number of admins with access to the Domino servers.



Another side comment -- sudo delegations for Domino admin tasks


As many of you know, I have really been a big Linux fan since day one. And the security model and the way to manage a Linux machine are much clearer than on Windows.


On Linux there is a very nice way to delegate system permissions.

The "sudo" command can be used to allow users and groups to execute certain commands with root permission.


So for example my Domino start script is located in a read-only location that only root can write to.

A Linux admin with root access could allow a normal user to just operate the Domino server leveraging the start script.

This would allow quite granular control on the Linux level.
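
A minimal sketch of such a sudoers rule (user name and script path are just examples; edit with visudo):

# /etc/sudoers.d/domino
# allow the user "notes" to run only the Domino start script as root
notes ALL=(root) NOPASSWD: /usr/bin/domino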


-- Daniel



Domino Docker Community Image updated to V12

Daniel Nashed  30 May 2021 13:09:08

We updated the "develop" branch of the Domino Docker Community project regularly during the beta time frame.
Now that Domino V12 has shipped, we updated both the develop branch and the master branch with the V12 products and removed the beta versions.

The current versions are:
  • HCL Domino 12.0.0
  • HCL Traveler 12.0.0

Optional Components
  • HCL Verse 2.1.0
  • Borg Backup  (Can be used in combination with Domino V12 on Linux)

Additional Image
  • HCL Volt 1.0.3.18

Older versions can still be built, but need to be explicitly specified.

To build all the images, just upload the new software to your software directory (or remote download repository) and start the build.
If software is missing, the script tells you what is missing before the installation starts.

./build.sh domino -verse -borg
./build.sh traveler
./build.sh volt


To update your containers, you just need to switch to the new image and replace the container.
The logic inside the image will take care of updating the templates and you are ready to go.
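
A minimal sketch of such a container replacement (container name, volume, ports and image tag are just examples from my setup):

docker stop domino
docker rm domino
docker run -d --name domino -v domino_data:/local/notesdata -p 1352:1352 -p 443:443 hclcom/domino:latest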



Domino V12 -- A security release

Daniel Nashed  30 May 2021 09:07:08

The countdown for the official Domino and Sametime launch event on June 7, 2021 is ticking louder --> https://www.hcltechsw.com/products/domino/launch
But as most of you already know, Notes & Domino V12 is already available for download, and I have moved most of my production environment to Notes & Domino V12 already.

I have blogged about a lot of the new Domino functionality, but I would like to highlight two other interesting blog posts in addition.

https://blog.martdj.nl/2021/05/27/domino-v12-is-here/
https://blog.hcltechsw.com/domino/hcl-domino-v12-the-4-new-security-features-youve-been-waiting-for/

Domino V12 really is a security release. There is a lot of cool and useful functionality, as outlined in the posts referenced above.

The Domino native Let's Encrypt (ACME) implementation is the easiest to use and most complete implementation on the market.
And you can leverage the new CertMgr and certstore.nsf for any other CA as well, and distribute TLS Credentials (the new Domino term for private key + certificate + chain (intermediates) + trusted root).

IMHO working with web server certificates has never been easier in any other enterprise product. From the proprietary *.kyr file format, Domino moved to the standard PEM format.
Let me highlight two of my favorite security features, which are more hidden gems.


Auto magical certificate import

Importing certificates was always difficult. You had to find the right certificates and add them in the right order inside a PEM file.
Now it magically works in any order, or even with duplicate and mismatching certificate chains.

CertMgr will just build the certificate chain from the private key and matching leaf certificate up to the root certificate, and auto-complete certificate chains across multiple levels from its own trust store.
This option came in late in beta 3, when the trusted root functionality was added.

And this is one of my favorite details, which makes certificate management a lot easier for admins.
This is all included in one domain-wide, easy-to-deploy database.


New TLS Cache

The old and limited KYR file cache in the SSL layer has been rewritten to take full advantage of the new certstore.nsf.
As soon as you deploy the certstore.nsf database, it is automatically used and allows auto-reloading of TLS Credentials without a restart.

The new TLS Cache also works per internet process, and each cache instance has a dedicated maintenance thread,
which monitors certstore.nsf and will reload and swap the cache just in time once it is updated.

So this means new and updated certificates will be available without any delay or administrator action.
This is also a feature which was added in the final beta 3 and is one of the more hidden features without any UI representation.

The old KYR cache and the new TLS Cache are designed to work in parallel for full compatibility with existing releases.
But you should really switch to the new functionality and have your existing kyr files automatically imported leveraging "load certmgr --importkyr all".
This command will check the server doc and all configured internet sites for *.kyr files to import into certstore.nsf for your convenience.


I will go into a lot of detail in the upcoming OpenNTF webinar in June. And you can expect more blog posts about the new certificate features.

OpenNTF Certificate Manager Webinar, 17 June 2021:

https://register.gotowebinar.com/register/6157540408516926219


