Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...

 

Daniel Nashed

 

Introducing Domino Borg Backup Integration V2

Daniel Nashed  17 March 2024 11:44:52

Borg Backup is an interesting backup option for Linux (https://www.borgbackup.org/) and also works inside a Domino container with a local or remote repository.
The first integration with Domino Backup used bash scripts and Borg commands. But this had limitations due to the way Borg handles backups.

Each database was stored in a separate repository. I have been looking for a direct integration to avoid this overhead and store all backup data in a single backup.
There is a newer option to import tar data directly into Borg as a stream -> https://borgbackup.readthedocs.io/en/stable/usage/tar.html.

Tar is a quite old format from the early days, originally intended to store data on tape. It still plays a major role in today's Linux environments. The tar format contains the file meta data, including file permissions, which can be queried.
The Borg Backup option allows piping multiple tar streams into a single backup. That made me come up with a new approach, which led to this GitHub repository providing a new Domino Backup integration
-> https://github.com/nashcom/domino-borg.
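The underlying Borg tar import can be sketched with plain commands. This is a simplified illustration, not what nshborg does internally; the repository path and archive name are examples:

```shell
# One-time: create a Borg repository (path is an example)
borg init --encryption=repokey /local/borg

# Stream a single database as a tar archive into Borg.
# "-" tells borg import-tar to read the tar stream from stdin.
tar cf - /local/notesdata/names.nsf | \
  borg import-tar /local/borg::domino-20240130124242 -
```

Run like this, each import-tar invocation creates one archive per call. nshborg avoids that by keeping a single borg process running and feeding all databases into one tar stream.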

More detailed documentation is still work in progress. Especially the options for providing the encryption password need some review. I added multiple ways to provide the Borg repository password so it is not stored unprotected on disk.
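For reference, Borg itself reads the repository passphrase from standard environment variables; the exact mechanisms nshborg adds on top are described in the GitHub repository. The secret file path below is an example:

```shell
# Provide the passphrase directly (ends up in the process environment)
export BORG_PASSPHRASE='my-secret-passphrase'

# Or let Borg fetch it at run-time via a command,
# so the secret is not stored in a shell profile
export BORG_PASSCOMMAND='cat /local/secure/borg.passphrase'
```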

But let me explain the basic idea. See the GitHub repository for details.


-- Daniel


Idea:


A small Linux program (nshborg) is started and controls the borg backup process by writing data over stdin into the Borg Backup process.

Another instance of the same program is invoked for each database and communicates with the running instance controlling the backup process.

Flow:
  • The pre-backup script starts an nshborg "server instance", which starts the borg backup with the specified target repository
  • Domino backup brings one database after another into backup mode and invokes nshborg (client instance) to interact with the nshborg server instance
  • The nshborg server instance invokes tar to get the database in tar format on stdout and pipes it to stdin of the borg process
  • The nshborg client instance waits until the nshborg server instance has completely sent the tar stream
  • Domino backup gets control back and brings the database online
  • If delta data occurred during backup of the database, the delta is written in the same way
  • After all databases are processed, the post-backup script invokes the nshborg client instance to signal the nshborg server instance to complete the backup.
     
Example for commands invoked:

Pre-backup command:


/usr/bin/nshborg -b '/local/borg::domino-20240130124242'


Starts the backup and specifies the backup target.



Post-backup command:


/usr/bin/nshborg -q


Signals the nshborg server instance to complete the backup.



Backup DB command:


/usr/bin/nshborg '/local/notesdata/names.nsf'

Tells the nshborg server instance to push the file to the running borg instance.



Restore DB command:


/usr/bin/nshborg -a '/local/borg::domino-20240130124242' -r '/local/notesdata/names.nsf' -t '/local/notesdata/restore/names.nsf.dad'


Restores a single database


High Domino Backup performance with native ZFS storage on Proxmox

Daniel Nashed  17 March 2024 10:50:56

Introduction


Domino 12+ native backup with its default configuration is a very easy-to-use option, which also works in Docker containers.

The resulting backup to a file target is always consistent, because delta information is always applied to the backup file.


But a file target raises the challenge that the whole NSF data will be copied to the target file-share or disk. Therefore a de-duplicating target is highly recommended.

I took a detailed look at ZFS in my new local setup to test performance.



Protect your target file copy data


In addition to a file copy operation, the resulting target should always be protected against ransomware attacks.

There are multiple ways to protect the resulting file copy data, which is not in scope for this performance write-up.
Valid approaches include a snapshot of the resulting ZFS data or copying the resulting consistent NSF data to different backup media.

Any kind of secondary backup would work, because the data is consistent and does not need recovery operations on restore.



ZFS File System


ZFS is a quite special enterprise-grade file system offering a couple of very interesting options.


It comes with its own very flexible pool management of native disks and also provides enterprise-grade software RAID.

Besides snapshots, it also supports compression and de-duplication.


In addition to saving space, compression reduces the I/O load at minimal CPU overhead, which works perfectly with Domino NSF data.


De-duplication in contrast isn't a good idea for active Domino NSF data. But it is a perfect match for a backup target with Domino backup.


My ZFS backup performance on my Hetzner server isn't great. With a native setup of ZFS directly on the Proxmox hypervisor, the performance looks dramatically better.



Test Setup


Hardware


Intel NUC Intel(R) Core(TM) i3-8109U CPU @ 3.00GHz (NUC8i3BEH)

Samsung 980 PRO NVMe M.2 SSD 2TB



Software


Proxmox  8.1.4

LXC container with Ubuntu 22.04.4 LTS (Jammy Jellyfish)

Domino 14.0 container


File System


With an LXC container, the file system is a ZFS file system directly mounted from the host. I added a root data disk and a backup volume.



Backup Setup and Test


Domino backup comes with a standard configuration. The default target is /local/backup.
The directory inside the container points to the ZFS sub-volume tuned as a backup target.


I increased the backup file copy buffer from 128 KB to 1 MB via a special notes.ini parameter --> notes.ini FILE_COPY_BUFFER_SIZE=1048576.

It turned out that for ZFS with a 128 KB record size, this didn't make a big performance difference.
But it is a recommended parameter to push file copy operations to a bigger buffer size and optimize file I/O operations on the Linux side.
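As a sketch, the parameter can be added to notes.ini like this. The default path below is only an illustration; point NOTESINI at your server's notes.ini (e.g. /local/notesdata/notes.ini):

```shell
# Append FILE_COPY_BUFFER_SIZE if it is not set yet (1 MB = 1048576 bytes)
NOTESINI="${NOTESINI:-notes.ini}"

grep -q '^FILE_COPY_BUFFER_SIZE=' "$NOTESINI" 2>/dev/null || \
  echo 'FILE_COPY_BUFFER_SIZE=1048576' >> "$NOTESINI"
```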


I copied my 4.6 GB production mail file to a fresh server for testing.

And enabled de-duplication on the ZFS target volume.


zfs set atime=off tank/subvol-100-disk-3

zfs set dedup=on  tank/subvol-100-disk-3
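Compression can be enabled the same way, and the resulting ratios can be checked afterwards. This is a sketch using the dataset name from above; lz4 is only one possible compression choice:

```shell
# Enable lz4 compression on the backup dataset
zfs set compression=lz4 tank/subvol-100-disk-3

# Check the achieved compression and de-duplication ratios
zfs get compressratio tank/subvol-100-disk-3
zpool get dedupratio tank
```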



Basic backup performance is up to 500 MB/sec with compression and de-duplication.


The first backup already showed great performance that I didn't expect.

Performance varies a bit. But even 20% less performance would already be beyond anything I have seen in most corporate environments.


load backup

Backup: Domino Database Backup

Backup: Started

Backup: Pruning backups

Backup: BackupNode: [my-domino-server], BackupName: [default], Translog Mode: [CIRCULAR],  Backup Mode: [FULL]

Backup: LastBackupTime: 03/17/2024 09:28:13

Backup: Starting backup for 123 database(s)

Backup:

Backup: --- Backup Summary ---

Backup: Previous Backup  : 03/17/2024 09:28:13

Backup: Start Time       : 03/17/2024 09:29:02

Backup: End Time         : 03/17/2024 09:29:18

Backup: Runtime          : 00:00:15.47

Backup:

Backup: All              :   123

Backup: Processed        :   123

Backup: Excluded         :     0

Backup: Pending Compact  :     0

Backup: Compact Retries  :     0

Backup: Backup Errors    :     0

Backup: Not Modified     :     0

Backup: Delta Files      :     0

Backup: Delta applied    :     0

Backup:

Backup: Total DB Size    :     7.3 GB

Backup: Total DeltaSize  :     0.0 Bytes

Backup: Data Rate        :   496.9 MB/sec

Backup: --- Backup Summary ---

Backup:

Backup: Finished



More test results


Another first backup resulted in 581.0 MB/sec.

The second backup (immediately afterwards) had almost the same speed: 577.3 MB/sec.


A backup after DBMT varied in size and could drop below the 540.3 MB/sec.

This also depends on how much the data changes. I saw performance drop to around 270 MB/sec in some cases for data that changed a lot.



Looking at the de-duplication rates


The first backup resulted in almost zero de-duplication, which was sort of expected with a single mail file.



Second backup


But already the second backup shows the benefit of ZFS de-duplication:


zpool list

NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT

tank  1.81T  90.5G  1.72T        -         -     0%     4%  2.00x    ONLINE  -


Backup after DBMT


DBMT re-writes the whole NSF file and should be done at most once per week.

Maybe even less often when DAOS and NIFNSF are enabled.


But even after a DBMT the de-duplication rate was quite good in my test. This might vary with real world changing data.



zpool list

NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT

tank  1.81T  90.7G  1.72T        -         -     0%     4%  2.85x    ONLINE  -


Conclusion and additional thoughts


ZFS can be a very efficient backup target from a performance and cost point of view.

In my case the ZFS target was on the same Proxmox server. But remote Linux hosts running ZFS natively, accessed over NFS or CIFS, would work as well.

The performance will very likely not be the same, because of the overhead added by NFS, the network, etc.


Proxmox might become more important for Domino in the future.

Local Proxmox ZFS storage in combination with Domino clustering can be a valid approach, including backup, with the right backup protection strategy.

The ZFS backup volume should be at least a separate ZFS disk pool.


Remote ZFS over NFS


In larger production environments the ZFS pool will probably be on a different box, which makes the network the next bottleneck to look into.

A 1 GBit network card can only handle at most around 112 MB/sec (1 GBit/s is 125 MB/s raw, minus protocol overhead). But in corporate environments server-to-server communication should hopefully be handled by 10 GBit NICs.

For a small server or home-office backup, a local ZFS pool with a periodic backup to an external disk would be a valid approach.
A 112 MB/sec backup over a 1 GBit NIC would probably be more than sufficient in smaller environments.
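OpenZFS can export a dataset over NFS directly via its share property. A minimal sketch, assuming a pool named tank with a backup dataset and an example client subnet:

```shell
# Export the backup dataset read/write to the backup network
zfs set sharenfs='rw=@192.168.1.0/24' tank/backup

# Verify the share configuration
zfs get sharenfs tank/backup
```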


Local native ZFS is the fastest option


The best optimization is probably a local ZFS pool, because compression and de-duplication are handled locally and only the delta has to be written.

That's also why the native ZFS sub-volume mount in my LXC container setup is that important.


For a ZFS zvol in a VM in combination with a local file-system the performance would probably look completely different as well.

The setup used removes all intermediate layers and just uses native ZFS for backup operations - almost comparable to a physical host without any virtualization.


Looking into S3 performance numbers for MinIO -- Is this the right target for backup?

Daniel Nashed  16 March 2024 18:45:33

Introduction


I have known MinIO for a while and have been using it for DAOS T2 testing early on. Years later, they have grown up and play in the cloud-native storage league.

Still, the devil is in the details, and for using it in production environments customers hopefully use the enterprise subscription to get tuning support.
Paying for support means it isn't cheap storage any more if you look at their price tag.

S3 is an interesting technology. But it isn't the solution to all problems. "If you only have a hammer, every problem looks like a nail".
Coming from the cloud, it was designed by AWS as a "Simple Storage Service", as the name indicates. It also has embedded verification and optional encryption.

For sure it isn't useful for all types of operations. There is a certain overhead when you are not accessing the file system directly.
And it also requires quite some additional resources like CPU if extensively used.

I am mainly interested in taking a look at it for Domino Backup.
But understanding the nature of access is also important to understand DAOS T2.
Especially when it comes to listing NLOs for resync operations (which is not part of this test setup).


Test the MinIO server


To scale MinIO, you need quite some hardware resources, as you can see from their report. My first test on a smaller machine failed because I ran out of memory.
When you look at their benchmark, they are running a cluster with a couple of nodes and multiple client drivers to generate the load.
In the use case of a backup, and also for DAOS T2, the performance of individual requests is more relevant.

https://min.io/resources/docs/MinIO-Throughput-Benchmarks-on-HDD-24-Node.pdf

MinIO used a Go-based test program from Wasabi for the benchmark above.


https://github.com/wasabi-tech/s3-benchmark

I took a larger Hetzner cloud server with an Intel CPU and ran some quick tests.
The local disks at Hetzner are always fast SSDs, as you can see from the results.


The machine I used was pretty untuned. Even MinIO lists some interesting tuning parameters in their load test (this reminds me of the old Domino NotesBench results).



Image:Looking into S3 performance numbers for MinIO -- Is this the right target for backup?


The basic command I used is a simple performance test also used in their workload.


Example:


./s3-benchmark -a s3-user -s s3-password -u http://127.0.0.1:9000 -t 1 -z 1M

The MinIO server is the simple MinIO Docker container without extra tuning, using a native disk.
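A minimal, untuned MinIO container of that kind might be started like this; the data path and credentials are examples matching the s3-benchmark call above:

```shell
# Start MinIO with a native disk mounted as /data (paths/credentials are examples)
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -v /local/minio-data:/data \
  -e MINIO_ROOT_USER=s3-user \
  -e MINIO_ROOT_PASSWORD=s3-password \
  minio/minio server /data --console-address ":9001"
```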

I mostly used the 1 MB object size for testing. But I changed it for two tests to see the difference when backing up larger databases.


The result of parallel operations with the client and the server on the same machine was impressive.
But the write performance for a single object wasn't great for smaller files.


Domino Backup uses a single thread to back up databases. You could see that with parallel write operations the box was able to handle up to 580 MB/sec write performance.
So the disk itself wasn't the bottleneck here. It's probably the overhead of starting the operation which causes the slower performance for a single write.


My test was completely local. This is the lowest network latency you can get.
In a modern network environment the LAN latency should not play a big role here.
But usually machines have only a 1 GBit network connection.



Conclusion


~160 MB/sec for a single writer thread is probably the fastest you can get with S3.

I saw similar performance for uploading files to AWS S3 via the AWS CLI from an AWS-hosted machine.


For Domino backup, S3 does not buy you any simplification, and the performance might really depend on your database sizes and the tuning of your MinIO server.


In addition, S3 itself does not de-duplicate data, which is essential for Domino backup to simple storage.
When you look at https://blog.min.io/myths-about-deduplication-and-compression/ it really sounds like they have no interest in de-duplication at all.

But compression alone will not help for daily backups with, say, 14 days of backup retention.

They are probably right about the general use case. But for daily backups, de-duplication is essential.


I don't see the benefit of using S3 if you have to install, support, tune and back it up on your own.
It's a different story in the cloud, where S3 is a native implementation, for example in the AWS S3 infrastructure, and you consume it as a highly optimized service.

For a company, putting Domino backup on a MinIO S3 drive increases the overhead and will potentially cost more than storing it on a simple ZFS de-duplicated share.


Also, when it comes to backup performance of a simple file copy operation without de-duplication, a Hetzner 1 TB Storage Box for around 4 Euro/month can copy at 500 MB/sec without any additional CPU overhead.


-- Daniel



Test results



Threads: 1 / Object Size 1 MB

Loop 1: PUT time 60.0 secs, objects = 3617, speed = 60MB/sec, 60.3 operations/sec. Slowdowns = 0
Loop 1: GET time 60.0 secs, objects = 20158, speed = 336MB/sec, 336.0 operations/sec. Slowdowns = 0

Loop 1: DELETE time 9.0 secs, 402.1 deletes/sec. Slowdowns = 0


---


Threads: 1 / Object Size 100 MB


Loop 1: PUT time 60.0 secs, objects = 97, speed = 161.6MB/sec, 1.6 operations/sec. Slowdowns = 0
Loop 1: GET time 60.1 secs, objects = 589, speed = 980.1MB/sec, 9.8 operations/sec. Slowdowns = 0

Loop 1: DELETE time 0.2 secs, 415.6 deletes/sec. Slowdowns = 0


---


Threads: 10 / Object Size 1 MB

Loop 1: PUT time 60.0 secs, objects = 28594, speed = 476.5MB/sec, 476.5 operations/sec. Slowdowns = 0
Loop 1: GET time 60.0 secs, objects = 264612, speed = 4.3GB/sec, 4410.0 operations/sec. Slowdowns = 0

Loop 1: DELETE time 4.7 secs, 6093.5 deletes/sec. Slowdowns = 0


---


Threads: 10  / Object Size 100 MB


Loop 1: PUT time 60.4 secs, objects = 776, speed = 1.3GB/sec, 12.8 operations/sec. Slowdowns = 0
Loop 1: GET time 60.2 secs, objects = 3696, speed = 6GB/sec, 61.4 operations/sec. Slowdowns = 0

Loop 1: DELETE time 0.8 secs, 977.7 deletes/sec. Slowdowns = 0


---


Threads: 100 / Object Size 1 MB

Loop 1: PUT time 60.0 secs, objects = 34512, speed  = 574MB/sec,  574.8 operations/sec. Slowdowns = 0

Loop 1: GET time 60.0 secs, objects = 308293, speed =   5GB/sec, 5137.5 operations/sec. Slowdowns = 0

Loop 1: DELETE time 20.6 secs, 1677.5 deletes/sec. Slowdowns = 0


---


Threads: 1000 / Object Size 1 MB

Loop 1: PUT time 60.2 secs, objects = 28628, speed  = 475.5MB/sec,  475.5 operations/sec. Slowdowns = 0

Loop 1: GET time 60.1 secs, objects = 406941, speed =   6.6GB/sec, 6774.5 operations/sec. Slowdowns = 0

Loop 1: DELETE time 21.1 secs, 1359.6 deletes/sec. Slowdowns = 0



First look at openSUSE Leap 15.6 Beta with Domino 14

Daniel Nashed  16 March 2024 12:02:33


As some of you know from earlier discussions, the latest currently available SUSE Enterprise and openSUSE Leap 15.5 ships with a glibc that is too old to work out of the box with Domino 14.
You could still run it on a Docker (or Podman) host, because the container image brings the glibc run-time with it and only uses the kernel from the Docker host.

openSUSE Leap and SUSE Enterprise (SLES) share the repositories and are technically more or less the same.


SUSE Linux 15.6 is scheduled for mid 2024


I have been looking into openSUSE Leap earlier with their Alpha version.
Now the official beta is available for download -> https://get.opensuse.org/leap/15.6/
An update of the Alpha version took me straight to the beta version.


SUSE Linux 15.6 comes with a 6.4 kernel - that needs full re-testing


As expected Domino 14 works natively with the updated glibc. The requirement is glibc 2.34+. This Linux version will introduce glibc 2.38.
But SUSE also switched again to a new major kernel version with a Service Pack.


This means HCL will have to re-test SUSE Linux once the final version is released.
It will take some time until SUSE Linux 15.6 is fully tested and supported for Domino 14.0.

But the more interesting question is what will happen with older versions like 12.0.2 which would need to be separately tested.

SUSE 15.6 is a service pack, but with the changes involved, it qualifies as a new major release.
So I would not expect it to be tested for the Domino 12.0.2 or earlier code stream.
It could still fall under the category "not officially tested by HCL but works".


Here are the current versions of important packages.

The kernel is pretty new. So is glibc.

OpenSSL has been bumped up to version 3.1.4, where previous releases shipped a fully patched 1.1.1.
The curl package is not up to date, showing a release date of 3/2023. But hopefully this will change before release.


---

uname -a

Linux localhost 6.4.0-150600.9-default #1 SMP PREEMPT_DYNAMIC Fri Feb 23 21:11:52 UTC 2024 (375d88d) x86_64 x86_64 x86_64 GNU/Linux


---


ldd --version

ldd (GNU libc) 2.38


---


openssl version

OpenSSL 3.1.4 24 Oct 2023 (Library: OpenSSL 3.1.4 24 Oct 2023)


---


curl --version

curl 8.0.1 (x86_64-suse-linux-gnu) libcurl/8.0.1 OpenSSL/3.1.4 zlib/1.2.13 brotli/1.0.7 zstd/1.5.5 libidn2/2.2.0 libpsl/0.20.1 (+libidn2/2.2.0) libssh/0.9.8/openssl/zlib nghttp2/1.40.0

Release-Date: 2023-03-20

Protocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp

Features: alt-svc AsynchDNS brotli GSS-API HSTS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL threadsafe TLS-SRP UnixSockets zstd



Preparing my Engage conference Domino Auto Update Session - Questions and feedback?

Daniel Nashed  16 March 2024 10:44:28

Engage conference in Antwerp end of April gets closer.
My session will cover Domino 14.0 Auto Update including some background information and latest information from the hopefully at that time released 14.0FP1.


Did you look into Auto Notify, Download and Distribute in Domino 14 already? How do you like what is there so far?
Do you have any specific questions? Or feedback that could be interesting for HCL for the next steps in Auto Update in Domino 14.0.1?


My session is only 45 minutes. So it will be a challenge to provide all the information and get all the feedback.
That's why I would like to understand what admins in the field already know about it and what specific details I should cover in particular.


You should try it out and get hands on experience to bring your questions and feedback to Engage.


The functionality is pretty easy to set up. That was one of the design goals.
So far I have only 27 slides, leaving room to also cover questions and a live demo.


-- Daniel


Image:Preparing my Engage conference Domino Auto Update Session - Questions and feedback?

Running Domino on Proxmox in LXC container with Docker

Daniel Nashed  16 March 2024 09:42:41

I am experimenting with different types of Proxmox configurations for Domino.
Proxmox supports LXC containers, which combine the shared kernel of the host with a lightweight Linux container hosting a Linux server.


This combination offers direct access to ZFS sub-volumes for your LXC container.
One of the benefits is lower overhead for kernel scheduling. Your Linux container runs on the kernel of your host using native Linux kernel-level virtualization.
Another benefit is that there is no disk virtualization in between.

For a full VM with its own kernel, a zvol is created, and a separate file system is used to format the zvol device presented to your VM.

In my case I took another step. I am using Alpine Linux in an LXC container to run a Docker host, which then runs a Red Hat UBI based container, which hosts the Domino server.


Alpine might not be the choice for a production environment. I would rather use Redhat/CentOS 9.x clones or Ubuntu to run Domino natively or in a Docker container.
But it nicely shows the different layers. Alpine Linux is not even running on glibc. But Docker is available on Alpine Linux.
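On Proxmox, such an LXC container can be created with the pct command line. This is a sketch; the container ID, storage names and the Alpine template file name are examples, and nesting must be enabled so Docker can run inside the container:

```shell
# Create an unprivileged Alpine LXC container with nesting enabled for Docker
pct create 200 local:vztmpl/alpine-3.19-default_20240207_amd64.tar.xz \
  --hostname docker-host \
  --rootfs local-zfs:8 \
  --memory 2048 --cores 2 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --features nesting=1 \
  --unprivileged 1

pct start 200
```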

---


This setup shows nicely three different virtualization technologies playing hand in hand.
  • Proxmox as a host level hypervisor
  • LXC as a lightweight virtualization option for Linux servers
  • Docker as an application virtualization using containers

The whole stack is based on Linux native virtualization technologies with very low overhead and a lot of tuning options.


-- Daniel


Image:Running Domino on Proxmox in LXC container with Docker


    Important: For Domino SMTP with ECDSA keys for STARTTLS inbound

    Daniel Nashed  16 March 2024 08:45:15

    The short version if you don't want to know all the technical details:

    If you choose an ECDSA key for your web server, make sure you also have an RSA key for SMTP inbound connections.


    In case you are interested in the technical details, read on ...


    Image:Important: For Domino SMTP with ECDSA keys for STARTTLS inbound

    -- Daniel



    What's the big deal running ECDSA keys/certs for SMTP only


    Domino supports modern cryptography with elliptic curve ciphers since version 12.0.

    Web clients/applications usually fully support ECDSA today. But not every SMTP server provider runs their infrastructure ECDSA-key ready.
    Outgoing connections from a Domino server over SMTP with STARTTLS are usually not a problem, because the server side drives what is used during the TLS handshake.

    But for incoming connections the Domino SMTP server will present supported ciphers based on the TLS Credentials (the new name for SSL certificates in certstore.nsf since Domino 12.0).
    If you are running an ECDSA certificate, you would limit the supported ciphers to the following two ECDSA ciphers.

    This might prevent some older servers from delivering messages, or cause a fallback to unencrypted connections.


    With ECDSA Domino by default uses the following two ciphers:


    ./nshciphers blog.nashcom.de


    ------------------------------------------

    C02C, TLSv1.2, ECDHE-ECDSA-AES256-GCM-SHA384 , TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

    C02B, TLSv1.2, ECDHE-ECDSA-AES128-GCM-SHA256 , TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256

    ------------------------------------------



    RSA keys recommended for SMTP


    The new TLS cache which is part of the new functionality in Domino 12+ supports both key types in parallel.
    The TLS cache determines which key to use based on the signature algorithms the client passes to the server.

    So you can add TLS credentials with RSA and ECDSA keys in parallel to certstore.nsf


    What determines which certificate/key is used?


    The signing algorithms requested during the handshake determine the certificate used.
    If both or neither algorithm is requested, Domino prefers ECDSA for HTTPS and RSA for all other protocols by default (this can be flipped per protocol via notes.ini).


    Example requesting ECDSA and RSA with STARTTLS


    Here is an example passing both types of signature algorithms to a STARTTLS connection. You can see that the RSA key is favored.
    An RSA key/certificate has been picked, which results in an RSA cipher being used.


    openssl s_client -sigalgs "RSA+SHA256:ECDSA+SHA256" -connect notes.nashcom.de:25 -starttls smtp


    SSL-Session:

    Protocol  : TLSv1.2

    Cipher    : ECDHE-RSA-AES256-GCM-SHA384


    Example requesting only ECDSA with STARTTLS


    In contrast, when only specifying an ECDSA signing algorithm, the server prefers the ECDSA key/certificate, resulting in an ECDSA cipher being used.


    openssl s_client -sigalgs "ECDSA+SHA256" -connect notes.nashcom.de:25 -starttls smtp


    SSL-Session:

    Protocol  : TLSv1.2

    Cipher    : ECDHE-ECDSA-AES256-GCM-SHA384


    Supported strong RSA cipher list in Domino 14.0 has changed!


    Domino 14 moved more ciphers to the weak list.  Only four ciphers remain on the recommended list.


    ./nshciphers blog.nashcom.de -r


    ------------------------------------------

    C030, TLSv1.2, ECDHE-RSA-AES256-GCM-SHA384   , TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

    009F, TLSv1.2, DHE-RSA-AES256-GCM-SHA384     , TLS_DHE_RSA_WITH_AES_256_GCM_SHA384

    C02F, TLSv1.2, ECDHE-RSA-AES128-GCM-SHA256   , TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

    009E, TLSv1.2, DHE-RSA-AES128-GCM-SHA256     , TLS_DHE_RSA_WITH_AES_128_GCM_SHA256

    ------------------------------------------


    Enabling weak ciphers


    If you pick any other cipher, you have to enable notes.ini USE_WEAK_SSL_CIPHERS=1

    Choosing weaker ciphers for SMTP isn't a general problem.
    Modern SSL/TLS stacks support secure renegotiation, ensuring the strongest cipher the SSL client and server have in common is picked (the order is server-determined unless configured differently).

    So allowing older, potentially weaker ciphers isn't a big deal.

    As long as you are not in a highly regulated environment and have to ensure a trusted channel, a weaker cipher is much better than a fallback to unencrypted SMTP traffic.


    Changing the cipher list for SMTP


    For outgoing connections the server document is used to configure the ciphers used.
    This is even true if you enable internet sites and the cipher list is hidden.

    To look at the cipher list and change it, disable internet sites in the basic tab, change the cipher list and enable internet sites before saving.

    Tests have shown that also for inbound SMTP connections the cipher configuration in the server document is used -- the ciphers in the SMTP internet site are ignored.
    But still, with internet sites you can distinguish between HTTPS and SMTP STARTTLS this way.


    Domino 14.0 Dialog


    Older dialogs have fewer deprecated ciphers (see further down).


    Image:Important: For Domino SMTP with ECDSA keys for STARTTLS inbound

    Without enabling weak ciphers, Domino 12.0.2 FP3 uses the following ciphers.

    The basic RSA non-DHE ciphers have been marked weak for a longer time, because older ciphers don't support Forward Secrecy (FS).

    If you are running Domino 12.0.2 you can just enable those older ciphers listed in red.
    For Domino 14.0 you would need to enable notes.ini USE_WEAK_SSL_CIPHERS=1.


    nshciphers domino.nashcom.de -r


    ------------------------------------------

    C030, TLSv1.2, ECDHE-RSA-AES256-GCM-SHA384   , TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

    009F, TLSv1.2, DHE-RSA-AES256-GCM-SHA384     , TLS_DHE_RSA_WITH_AES_256_GCM_SHA384

    C02F, TLSv1.2, ECDHE-RSA-AES128-GCM-SHA256   , TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

    009E, TLSv1.2, DHE-RSA-AES128-GCM-SHA256     , TLS_DHE_RSA_WITH_AES_128_GCM_SHA256

    C028, TLSv1.2, ECDHE-RSA-AES256-SHA384       , TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384

    006B, TLSv1.2, DHE-RSA-AES256-SHA256         , TLS_DHE_RSA_WITH_AES_256_CBC_SHA256

    C027, TLSv1.2, ECDHE-RSA-AES128-SHA256       , TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256

    0067, TLSv1.2, DHE-RSA-AES128-SHA256         , TLS_DHE_RSA_WITH_AES_128_CBC_SHA256

    ------------------------------------------



    Log of weak ciphers


    You can see here that the RSA-only ciphers on the right-hand side have been listed as weak ciphers when starting HTTP in my case:


    SSLDisableExportCiphers> Disabling weak cipher RSA_WITH_AES_256_GCM_SHA384. Set notes.ini "USE_WEAK_SSL_CIPHERS=1" to re-enable.

    SSLDisableExportCiphers> Disabling weak cipher RSA_WITH_AES_128_GCM_SHA256. Set notes.ini "USE_WEAK_SSL_CIPHERS=1" to re-enable.

    SSLDisableExportCiphers> Disabling weak cipher RSA_WITH_AES_256_CBC_SHA256. Set notes.ini "USE_WEAK_SSL_CIPHERS=1" to re-enable.

    SSLDisableExportCiphers> Disabling weak cipher RSA_WITH_AES_128_CBC_SHA256. Set notes.ini "USE_WEAK_SSL_CIPHERS=1" to re-enable.

    SSLDisableExportCiphers> Disabling weak cipher RSA_WITH_AES_128_CBC_SHA. Set notes.ini "USE_WEAK_SSL_CIPHERS=1" to re-enable.

    SSLDisableExportCiphers> Disabling weak cipher RSA_WITH_3DES_EDE_CBC_SHA. Set notes.ini "USE_WEAK_SSL_CIPHERS=1" to re-enable.

    SSLDisableExportCiphers> Disabling weak cipher RSA_WITH_RC4_128_SHA. Set notes.ini "USE_WEAK_SSL_CIPHERS=1" to re-enable.



    Conclusion


    Running a RSA key/cert for SMTP is an important requirement.


    Depending on your use case, you might want to also enable weaker ciphers in Domino 14.0 for SMTP only.

    This isn't really lowering your security in general, because of secure renegotiation.

    I am personally keeping the stronger cipher list with RSA and ECDSA.
    But now you know what you can do if you have older SMTP servers which can't connect any more.


    ---


    Two test tools that might help you (but you need to compile them on your own)


    https://github.com/nashcom/nsh-tools/tree/main/nshcipher
    https://github.com/nashcom/nsh-tools/tree/main/nshmailx

      End Of General Availability of the Free vSphere Hypervisor (ESXi 7.x and 8.x)

      Daniel Nashed  15 March 2024 22:12:49

      https://kb.vmware.com/s/article/2107518

      Broadcom pulled the plug on VMware's most well-known entry-level product, used by many home office users.
      Admins also used it to learn the basic functionality, working with it at home.

      A 60-day trial license of the full product is not the same experience as using it for your own home office operations.
      Free access to the base technology was a great move by VMware IMHO.

      Beside all the changes Broadcom did in the partner program, OEMs, the portfolio, this sounds like the one that will have the biggest long term impact.
      This might not interest the current steak holders who get their bonus on short term profit increase.

      It could be the final push for many IT professionals and companies to look for alternative solutions.

      Linux provides kernel-level virtualization (KVM), which is used in other great products like the Proxmox server.
      The internet is powered by Linux and native virtualization.
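To see whether a Linux host is ready for KVM-based virtualization, two quick checks are usually enough: the CPU virtualization flags and the kernel's KVM device. A minimal sketch:

```shell
# Count hardware virtualization flags (vmx = Intel VT-x, svm = AMD-V);
# 0 means the CPU (or the hypervisor beneath you) does not expose them
grep -E -c '(vmx|svm)' /proc/cpuinfo || true

# The kernel exposes KVM via /dev/kvm once the kvm module is loaded
if [ -e /dev/kvm ]; then
  echo "KVM device available"
else
  echo "KVM not available"
fi
```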

      Short-term profit alone isn't sustainable for a company. Losing the base in the community will kick back sooner or later.

      You can expect more posts from me about other virtualization technologies.


      ---

      Another interesting blog post came up in private discussions --> https://www.theregister.com/2022/05/30/broadcom_strategy_vmware_customer_impact/

      Maximizing the ROI is understandable. But I am not sure if this strategy will work longer term -- even for those customers who are harder to move.
      Maybe those companies who can't move that easily will still look for alternative solutions, because they can't risk putting all their eggs in one basket.

      -- Daniel


      Image:End Of General Availability of the Free vSphere Hypervisor (ESXi 7.x and 8.x)

      Introducing the Domino native Linux installer and Domino Linux Menu

      Daniel Nashed  15 March 2024 11:02:04

      When I ask a question like "why are admins not moving to Domino on Linux?", I might already have a plan in my head.

      I cannot solve all the challenges for you at once. But I have been helping for years with my Domino Start Script to make Domino on Linux easier to run.


      The start script already helps to perform standard operations.

      Installation and some other operations might still look more complicated at first glance on Linux.

      Picking the right distribution should be covered in this HCL community project -->
      https://opensource.hcltechsw.com/domino-linux/
      And you will see more information being added there over time.



      Domino Server Automatic Installation


      I introduced a build menu into the HCL Domino Community image process recently.

      I took that logic and am making it available for native installations as well.

      This new option also offers automated downloads via the recently released Domino Download script:
      https://nashcom.github.io/domino-startscript/domdownload/

      The installation works on all major Linux distributions and allows to install all components automatically.

      I still prefer the container build process, because it is a much cleaner way to install Domino from scratch for every image build.

      In addition, this generates a well-defined image which can be tested and applied to any server.


      But this installation option might be helpful for many admins on regular/native Domino servers.


      Image:Introducing the Domino native Linux installer and Domino Linux Menu


      Domino Server Configuration


      Once you have the server installed, the next challenge is to get it up and running.

      The HCL open source project for One Touch Setup (OTS) could be a good starting point for you.

      (see
      https://github.com/HCL-TECH-SOFTWARE/domino-one-touch-setup)

      But the Domino start script already comes with pre-defined OTS configuration templates and will prompt for configuration parameters to merge into the configuration.


      A new menu option for the Start Script project can guide you for invoking the configuration.
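A common stumbling block with One Touch Setup is a setup JSON file that doesn't parse. The fragment below is purely illustrative (the field names are a simplified sketch; check the linked OTS repository for the real schema), but the validation step applies to any OTS file:

```shell
# Illustrative OTS-style JSON fragment -- see the linked OTS repository
# for the actual schema; the field names here are a simplified sketch
SETUP_JSON=./setup.json.demo
cat > "$SETUP_JSON" <<'EOF'
{
  "serverSetup": {
    "server": { "type": "first", "name": "demo-server", "domainName": "DemoDomain" }
  }
}
EOF

# A malformed JSON file is a common first-setup failure; verify it parses
# before handing it to the server setup:
python3 -m json.tool "$SETUP_JSON" > /dev/null && echo "setup.json parses OK"
```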



      Image:Introducing the Domino native Linux installer and Domino Linux Menu

      Image:Introducing the Domino native Linux installer and Domino Linux Menu


      Domino Start Script Menu


      Once your server is configured, the start script can start and stop the server among many other options.

      I took the most important options and added them to a start script menu.


      The menu is automatically invoked if you run the "domino" command without any parameters.

      Existing functionality is still available via the command line.

      I am not planning to build cascaded menus to provide every option of the start script.


      This menu is intended to simplify standard Domino operations.
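The "no parameters means menu" behavior described above can be sketched as a simple dispatcher. This is a simplified model, not the actual start script code, and the echoed strings are placeholders:

```shell
# Simplified model of the start script's command dispatch (not the real implementation)
domino_cli() {
  case "${1:-}" in
    "")     echo "showing interactive menu" ;;  # no parameters -> menu
    start)  echo "starting Domino server" ;;
    stop)   echo "stopping Domino server" ;;
    status) echo "showing server status" ;;
    *)      echo "unknown command: $1" ;;
  esac
}

domino_cli          # prints: showing interactive menu
domino_cli start    # prints: starting Domino server
```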



      Image:Introducing the Domino native Linux installer and Domino Linux Menu


      Feedback?


      This will not solve all challenges admins have today running Domino on Linux.

      But I think this is a good next step and will help many of you.


      We have to spend more time on how-to and tutorial material for Domino on Linux.

      And the community needs your help for providing content and feedback.


      https://opensource.hcltechsw.com/domino-linux/

      Domino Backup on Linux is for sure another interesting topic (part of another HCL open source project -->
      https://opensource.hcltechsw.com/domino-backup/)

      What do you think? Is this helpful? Keep me posted on what else is challenging.



      Are you using IPv6? - What about Domino?

      Daniel Nashed  9 March 2024 09:44:36

      IPv4 addresses have been becoming a rare resource for years. But adoption of the next-generation IPv6 protocol (which has been available for ages) still isn't great.
      Most operating systems, routers and other infrastructure components have been IPv6-ready for years.

      I looked into Domino IPv6 support quite a while ago, and some of my servers are dual-homed, but mostly with separate DNS names, mainly for testing.


      There is still only one customer I know of that made the switch for their Domino servers.

      I looked into all the different aspects, including logging in domlog.nsf and even SMTP extension managers, a while ago, and it just works the same way.


      The basic setting you need to set is: notes.ini TCP_EnableIPv6=1


      Domino IPv6 documentation


      There are some other more specific settings available to configure IPv6 addresses in different places of Domino.

      In some places IPv6 addresses need to be specified with square brackets.


      https://help.hcltechsw.com/domino/14.0.0/admin/plan_ipv6andlotusdomino_c.html
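Putting the two points together, here is a minimal notes.ini sketch. The bind address line is illustrative only (it uses the 2001:db8:: documentation prefix and shows the square-bracket notation); verify the exact parameter names and values for your setup against the documentation linked above:

```shell
# Demo against a scratch file; in practice edit your server's notes.ini.
# TCP_EnableIPv6=1 is the basic switch; the address line is an illustrative
# example of the square-bracket notation for IPv6 literals.
NOTES_INI=./notes.ini.ipv6demo
cat > "$NOTES_INI" <<'EOF'
TCP_EnableIPv6=1
TCPIP_TcpIpAddress=0,[2001:db8::10]:1352
EOF

grep '^TCP_EnableIPv6=' "$NOTES_INI"
```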


      How far is IPv6 enablement completed?


      One statistic (
      https://pulse.internetsociety.org/technologies) says that only around 50% of 1,000 websites supported IPv6 at the beginning of 2024.
      Google's statistics show a similar adoption rate in general:
      https://www.google.com/intl/en/ipv6/statistics.html


      Additional resources


      The Internet Society website has an interesting IPv6 section with more details.


      https://www.internetsociety.org/deploy360/ipv6/


      How are you using IPv6 today?


      Some providers have increased their prices for IPv4 addresses. And for some test servers I might switch completely to IPv6.


      I would be interested to hear from you how much you have adopted IPv6 on the client and server side.

      Did you face any specific challenges when trying to make the move?

      Especially with Domino and related products.



      -- Daniel



      Image:Are you using IPv6?  - What about Domino?

