Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...

 

Daniel Nashed

 

    Passing values back from Notes DialogBox with OK button.

    Daniel Nashed  2 August 2021 07:24:53

    This sounds like nothing complicated, but it turned out the devil is in the detail.
    When you want to create a dialog box and need special logic in your OK button, you have to build your own OK and Cancel buttons and hide the original ones.
    This gives you full control over the logic in the subform presented by the dialog box.

    The dialog box itself is straightforward: I create a TempDoc, pass it to the dialog box, and use a table for the layout.

    flag = workspace.Dialogbox(FORM,True, True, True, False, False, False, DialogTitle, TempDoc, True, True, False)
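    For reference, the positional arguments map roughly to the standard NotesUIWorkspace.DialogBox signature. This is a sketch from memory, so verify the parameter names against the Designer help:

    ' Positions: form, autoHorzFit, autoVertFit, noCancel, noNewFields, noFieldUpdate,
    '            readOnly, title, notesDocument, sizeToTable, noOkCancel, okCancelAtBottom
    ' The two settings that matter here: sizeToTable=True (layout follows the table on
    ' the subform) and noOkCancel=True (hide the system OK and Cancel buttons).
    flag = workspace.Dialogbox(FORM, True, True, True, False, False, False, _
        DialogTitle, TempDoc, True, True, False)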

    The Cancel button is also straightforward:

    @PostedCommand([FileCloseWindow])

    If you don't do anything special, closing the window discards any values you set -- which is exactly what I want for Cancel.
    The tricky part was the OK button. A button can set any field, and I am also using a field on the subform to display error messages in the table.

    But the dialog box did not return any values.

    It turned out there are two parts needed:

    • @PostedCommand([RefreshParentNote])

      This refreshes the back-end document passed via TempDoc.
    • Both commands (see example below) have to be @PostedCommand instead of @Command, so that they are executed after all other operations, like setting your field values!


    Here is the code in my button. The OK button only closes the window when there are no validation errors; otherwise the error is displayed.
    I tried validation formulas on the subform with the standard OK button earlier, but that turned out to be not a good idea.
    This looks like the most elegant approach, giving me a very clean dialog box to select values and report errors back.


    FIELD ErrorMessage := ERR_ALL;
    FIELD SelectedAction := Action;

    @If (ERR_ALL = ""; @Do(@PostedCommand([RefreshParentNote]); @PostedCommand([FileCloseWindow])); "")


    RefreshParentNote is documented for LotusScript and for @formulas -- I just overlooked it in the documentation.
    And it only works with @PostedCommand if you have other logic around it in your @formula code.


    -- Daniel


    Using ACME HTTP-01 Challenges redirected to other servers

    Daniel Nashed  30 July 2021 21:13:31
    The ACME protocol and Let's Encrypt are pretty flexible and follow the rules of standard HTTP requests.

    The following helps if you are running a Domino V12 server that is not connected to the internet itself,
    or if you are running on an OS where the DSAPI filter to confirm the HTTP-01 challenge is not available, like AIX or OS400.

    You can let another server -- like the CertMgr server -- confirm the challenge for you.

    CertMgr stores the challenge inside certstore.nsf. By design, all servers receiving the challenge look it up in certstore.nsf on the CertMgr server.
    This design ensures that all servers the requested HTTP-01 challenge can point to (DNS entries or load balancers pointing to multiple servers) are able to confirm it.

    Building on this concept, you can let any server reply to the challenge by creating a redirect for the /.well-known/acme-challenge/* URL on your server.

    If you are using internet sites, this works with a simple redirect rule as shown below.
    I have just tested it with an HTTP-01 challenge from Let's Encrypt and it works like a charm.

    Let's Encrypt will just follow the redirect and ask for the challenge in the new location.
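    A quick way to check the rule is to request a test URL with curl. The hostnames below are placeholders: yourserver.acme.com is the server with the redirect rule, validation.acme.com is the server confirming the challenge.

    # show the redirect status and Location header
    curl -I http://yourserver.acme.com/.well-known/acme-challenge/test

    # follow the redirect to the target server
    curl -IL http://yourserver.acme.com/.well-known/acme-challenge/test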

    -- Daniel

     
      Web Site Rule
     

    Basics
    Description: ACME Challenge redirect
    Type of rule: Redirection
    Incoming URL pattern: /.well-known/acme-challenge/*
    Redirect to this URL: http://validation.acme.com/.well-known/acme-challenge/*
    Send 301 Redirect:  



    Administration
    Owners: Daniel Nashed/NashCom/DE
    Administrators: Daniel Nashed/NashCom/DE
    Last updated: 30.07.2021 23:13:11 Daniel Nashed/NashCom/DE





    TrueNAS Scale Beta 1 -- this is freaking awesome!

    Daniel Nashed  28 July 2021 17:56:18
    My friend and fellow HCL Ambassador Daniele Vistalli pointed out this new platform, which is built by the company that originally built FreeNAS.
    When quickly checking the website, I got so excited that I dropped everything for two hours and looked into it right away.
    I didn't even check the diagram below first -- I looked under the covers after installing it in a VM, which took about 10 minutes.

    Here is the link to check all the details --> https://www.truenas.com/truenas-scale

    This combines all the stuff I have been looking into over the last weeks!
    The original FreeNAS always supported OpenZFS, which is an interesting file system on its own.

    Scalability, flexibility, reliability, native/transparent compression & deduplication and snapshots!


    They always offered great connectivity like SMB, NFS and iSCSI. But what they added in TrueNAS Scale is truly amazing.


    When I looked into how it is built, I noticed a lot of technology I already know and love. And this platform brings it all together into one seamlessly integrated platform -- installed in 10 minutes.

    -------
    1. OpenZFS fully integrated as the foundation
    2. Docker in its current release
    3. k3s - the very efficient but enterprise-ready Kubernetes - I blogged about it before, and this is the platform I have been using for a while already. -> https://k3s.io/
    4. The OpenEBS storage driver for ZFS, which allows k3s to use ZFS storage natively --> https://openebs.io/
    5. Helm and an application platform to run containers on top of k3s
    6. You can just define Docker containers, or use their application catalog on GitHub with Helm charts to define your applications. -> https://github.com/truecharts/apps
    -------

    Especially the ZFS storage driver is a very interesting step. It offers a K8s-native interface to ZFS volumes in a ZFS pool.
    And it also implements a native snapshot class to snapshot PVC storage.
    I have already played around with using the K8s API to snapshot Domino instances directly from Domino Backup, to build a native Kubernetes integration for Domino.

    Setting up this combination on my own wasn't that easy: I had to compile the OpenZFS driver, deploy the OpenEBS storage and snapshot driver into k3s, and glue all the components together.
    This platform takes 10 minutes to install and offers all of this as the standard storage, with all options available out of the box.
    In addition they have a simple-to-use UI with many additional storage options and integrations.

    For example, MinIO (S3) can be installed as one of the pre-defined applications. I also looked at an example of how to set up Pi-hole in 5 minutes, configured through a Docker image.

    This platform has a lot of potential, and I am already excited after only looking into it briefly. They also have a REST API and a lot of other functionality that makes it enterprise ready.
    OpenZFS also has some other interesting options to mirror and back up storage. And the hyper-converged environment sounds like an interesting scale-out option as well.
    And I didn't even mention the more standard functionality like running Windows/Linux VMs and Linux containers ...

    -- Daniel

    This comes out of the box automatically, without anything to configure:

    kubectl get storageclasses
    NAME                              PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    openebs-zfspv-default (default)   zfs.csi.openebs.io   Delete          Immediate           true                   6h
    ix-storage-class-minio            zfs.csi.openebs.io   Retain          Immediate           true                   5h58m

    kubectl get VolumeSnapshotClass
    NAME                           DRIVER               DELETIONPOLICY   AGE
    zfspv-default-snapshot-class   zfs.csi.openebs.io   Delete           5h59m
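
    As a small illustration of what the zfspv-default-snapshot-class shown above enables, here is a hedged sketch of snapshotting a PVC. The PVC name is made up, and depending on your Kubernetes version the API group may still be snapshot.storage.k8s.io/v1beta1:

    # save as domino-snapshot.yaml and apply with: kubectl apply -f domino-snapshot.yaml
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: domino-data-snap-1
    spec:
      volumeSnapshotClassName: zfspv-default-snapshot-class
      source:
        persistentVolumeClaimName: domino-data    # hypothetical PVC name

    # check the result
    kubectl get volumesnapshot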



    https://www.truenas.com/truenas-scale



    Image:TrueNAS Scale Beta 1 -- this is freaking awesome!



      Ansible for automating everything

      Daniel Nashed  28 July 2021 05:18:51

      Ansible is very, very cool. You don't have to install any software on the target machine, because it just runs over SSH.
      And it has so many built-in modules that almost everything can be done without writing any bash scripts.

      The key element is that you describe what your result should look like and Ansible makes all the changes for you.

      For our lab environment for Domino on Docker and K8s, I had already prepared our lab servers with Ansible.
      Now I just added some details like creating the notes:notes user and group and downloading the software.

      With 20 planned participants, the software download could take some time.

      I am also patching the existing lab example scripts with each server's IP address and hostname to avoid typos during setup.

      Let me share one of the latest changes I made to show the principle.

      I have defined my lab hosts in a configuration (inventory) file, which I can already generate from my Notes database, where I registered my hosts via REST API calls to the Hetzner API (my German provider, where I host all my servers).
      During setup I already registered an SSH key for authentication, which I use for all my scripts.

      After defining variables for the software download, I make sure the /local/software directory exists and describe the software to download, including the SHA256 hashes.

      If you make changes to the definition -- for example, if you forgot to specify the owner of the downloaded files -- you just change the definition and run the playbook again.

      So you don't need to think about how to do it.
      You just need to describe your environment.

      The following playbook downloads the Domino for Linux and Domino for Docker web-kit files.
      For the download I just start our software repository NGINX Docker container, which serves files from the host machine's /local/software directory and is stopped once the installation is done.
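
      The repository container itself is nothing special. A minimal sketch of the idea (container name and image tag are just examples, not our actual container):

      # Serve /local/software on port 7777 (matches the DOWNLOAD variable below)
      docker run -d --name software-repo -p 7777:80 -v /local/software:/usr/share/nginx/html:ro nginx:alpine

      # Remove it again once the installation is done
      docker rm -f software-repo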

      -- Daniel


      #!/bin/ansible-playbook

      - hosts: lab
        remote_user: root

        vars:
          ansible_ssh_private_key_file: /local/ansible/lab_ec_key.pem

          DOWNLOAD: http://software.acme.com:7777
          SOFTWARE_DIR: /local/software
          DOMINO_USER: notes

          DOMINO_LINUX_TAR: Domino_12.0_Linux_English.tar
          DOMINO_DOCKER_TAR: Domino_12.0_DockerImage.tgz
          DOMINO_LINUX_SHA256: f8a8618c20717a04826344ca4fec808351c523d754737a688d86edd4f445ff40
          DOMINO_DOCKER_SHA256: 4db06a78b5cabcc5cdf11a71ae949d72c99b0b62dcc375a37af7e2ccdeaa3667

        gather_facts: False

        tasks:

        - name: Create software directory

          file:
            path: /local/software
            state: directory
            owner: "{{ DOMINO_USER }}"
            group: "{{ DOMINO_USER }}"
            mode: 0775
            recurse: yes

        - name: Download Domino Linux

          get_url:
            url: "{{ DOWNLOAD }}/{{ DOMINO_LINUX_TAR }}"
            dest: "{{ SOFTWARE_DIR }}/{{ DOMINO_LINUX_TAR }}"
            checksum: "sha256:{{ DOMINO_LINUX_SHA256 }}"
            owner: "{{ DOMINO_USER }}"

        - name: Download Domino Docker

          get_url:
            url: "{{ DOWNLOAD }}/{{ DOMINO_DOCKER_TAR }}"
            dest: "{{ SOFTWARE_DIR }}/{{ DOMINO_DOCKER_TAR }}"
            checksum: "sha256:{{ DOMINO_DOCKER_SHA256 }}"
            owner: "{{ DOMINO_USER }}"
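
      To run the playbook, point ansible-playbook at your inventory. The file names are just examples; the inventory has to contain the [lab] group used above:

      ansible-playbook -i lab-hosts.ini lab-software.yml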


      C-API - NSFItemGetText() returns multiple entries in a single text string

      Daniel Nashed  27 July 2021 20:07:12


      I learned something new today. After working with the Notes C-API for half of my life, I found out how NSFItemGetText() handles text lists -- OMG.

      My impression was always that it would just return the first entry...

      But no -- it implodes the whole text list for you and creates a single string with the elements separated by "; ".

      So for example when your application reads host entries from the HTTP tab the result could look like this:

      Domino FQDN : [pluto.csi-domino.com; two.csi-domino.com]

      This can be exactly the display behavior you want. But in other cases you might only want the first entry.
      In my case it broke the application logic: the resulting string wasn't a proper IDN string, which caused a failure while initializing a cache -- not what was intended at all ...

      So if you want to make sure you always get just one entry, independent of the field type (TEXT or TEXT_LIST), you can use NSFItemGetTextListEntry().

      But you have to be aware of important differences between the two functions.
      NSFItemGetTextListEntry() returns the first entry if you use index 0, and it works on both TEXT and TEXT_LIST items.
      In contrast to NSFItemGetText(), the returned string is NOT null-terminated, and you need to keep room for an additional null terminator byte if you care about null-terminated strings.

      With that difference in mind, you see how important it is to use the fields in the Domino directory carefully.
      For example, the InternetAddress field in the person document is a TEXT field, and it isn't guaranteed that everyone who reads it will always use a function that can handle a TEXT_LIST properly.

      So you could run into unexpected results with existing applications today, or with applications in the future.

      For me the behavior of NSFItemGetText() wasn't expected at all. The C-API reference guide does not document this behavior.
      And it is really strange that I never ran into it in my whole C-API developer life.

      No, this isn't new behavior! I just never ran into it and only found out today...

      -- Daniel



      Reference information:


      WORD LNPUBLIC NSFItemGetText(
             NOTEHANDLE  note_handle,
             const char far *item_name,
             char far *item_text,
             WORD  text_len);



      WORD LNPUBLIC NSFItemGetTextListEntry(
             NOTEHANDLE  note_handle,
             const char far *item_name,
             WORD  entry_position,
             char far *entry_text,
             WORD  text_len);
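
      Based on the prototypes above, here is a minimal, untested sketch of reading just the first entry and terminating it yourself (buffer handling simplified):

      #include <global.h>
      #include <nsfnote.h>

      /* Read the first entry of a TEXT or TEXT_LIST item.
         NSFItemGetTextListEntry() does NOT null-terminate the returned text,
         so reserve one byte in the buffer and terminate it yourself. */
      WORD GetFirstEntry (NOTEHANDLE hNote, const char far *pszItemName, char far *retBuffer, WORD wBufferSize)
      {
          /* entry_position 0 returns the first (and for TEXT the only) entry */
          WORD wLen = NSFItemGetTextListEntry (hNote, pszItemName, 0, retBuffer, (WORD) (wBufferSize - 1));

          retBuffer[wLen] = '\0';
          return wLen;
      }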


      -----


      Field Name: InternetAddress
      Data Type: Text
      Data Length: 24 bytes
      Seq Num: 1
      Dup Item ID: 0
      Field Flags: SUMMARY PROTECTED




      Important: In Domino V12 certstore.nsf is the recommended way for TLS/SSL server certificates

      Daniel Nashed  22 July 2021 05:41:53

      Domino 12 introduces a new architecture for certificate management that provides improved flexibility, functionality, and security.

      Due to those enhancements and details in the SNI handling, the older kyr cache can't handle lookups for DNS names.

      The improved mapping can only work with the new TLS Cache which has been introduced in Domino V12 with many improvements -- the main features are listed below.


      As a consequence, SNI (Server Name Indication) and client certificate authentication will only work using the new functionality described below.

      So for proper SNI handling and for client certificate authentication, you have to reconfigure your server to use the new functionality.


      HCL strongly recommends moving to the new functionality in Domino V12 in general!
      Once you have updated to Domino V12, you will see the following log message when starting HTTP:


      TLSCache-HTTP: The Certificate Store database (certstore.nsf) is not available on this server. Consider running the CertMgr task to create this database to enable enhanced TLS certificate management.
      Cert Manager is not loaded or configured


      It really makes a lot of sense to move to the new functionality also for many other reasons.

      Let me outline again the main improvements and also show how to import kyr files
      ...
      • Securely storing TLS Credentials (key + leaf cert + intermediates + root cert) in certstore.nsf
      • Domain wide easy to use, modern UI database
      • It's not just for ACME/Let's Encrypt operations! It also supports manual certificate import and relieves you from using command-line tools like OpenSSL and kyrtool.
      • Easier central management of trusted roots without the need to look into kyr files. There is a separate view and form for trusted roots in certstore.nsf
      • Support for ECDSA and RSA keys in parallel
      • Full support for SAN name lookups
      • Full support for wildcard certificate lookups
      • On the fly update of TLS Credentials when the database changes

      Here are the steps needed to get started with certstore.nsf and to import your existing kyr files automatically.

      The new task has really been designed to make certificate operations easier. This includes the migration to the new functionality.


      For more details check the OpenNTF session and slides I blogged about earlier, available as a joint session from OpenNTF and the HCL Software Academy.


      Here is a screenshot showing part of the new UI.
      I have been using CertMgr with ECDSA and RSA keys on my production servers since Domino V12 was released,
      along with the Let's Encrypt integration through my DNS provider Hetzner.

      When using Let's Encrypt, the certificates auto-renew after 60 days and are available immediately thanks to the new TLS cache.


      Image:Important: In Domino V12 certstore.nsf is the recommended way for TLS/SSL server certificates


      1. Configure CertMgr


      To configure CertMgr, choose one Domino 12 or higher server (Win64 or Linux64) as the CertMgr server. This server is often the domain administration server.

      Run  the following command on that server to create the certstore.nsf database and initialize the CertMgr task:


      load certmgr


      Add CertMgr to the servertasks notes.ini parameter or schedule it to run in a program document to ensure that it always runs.
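
      For example, the ServerTasks line in notes.ini could then look like this (your existing task list will differ):

      ServerTasks=Replica,Router,Update,AMgr,Adminp,HTTP,CertMgr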



      2. Configure certstore.nsf on Web servers


      Run load certmgr on Domino 12 Web servers in the domain.

      The certmgr task automatically connects to the CertMgr server and creates a replica of the certstore.nsf database on the Web servers.

      The CertMgr on these servers operates as a CertMgr client and replicates certstore.nsf automatically with the CertMgr server.


      After certstore.nsf is present on a server, the TLS cache is loaded automatically when you start any internet server task like HTTP.
      Any update to the certstore.nsf database on a server dynamically reloads the TLS cache.


      3. Import existing kyr files


      Use CertMgr to import TLS Credentials for existing kyr files.

      To import all kyr files for a server run:


      load certmgr -importkyr all


      This command creates a TLS credentials document for each configured kyr file (server doc and internet sites if configured).


      You can also import individual kyr files:


      load certmgr -importkyr my-server.kyr



      Support for trusted roots


      The import functionality is only intended for the TLS Credentials (key, leaf certificate, intermediate certificates and the matching trusted roots).


      Client certificate authentication requires the trusted root of the issuing CA for every client certificate that is intended to be authenticated.

      Importing selected trusted roots is intended as a manual, one-time operation to review which trusted roots are still required.


      You can export trusted roots with the kyrtool command line tool in the following way:



      Windows example:


      cd /d d:/domino/data

      c:/domino/bin/kyrtool.exe show roots -v -k keyfile.kyr


      Linux/AIX example:


      cd /local/notedata

      /opt/hcl/domino/bin/kyrtool show roots -v -k keyfile.kyr


      For details on how to import trusted roots and assign them to TLS Credentials, check the following help topic:


      https://help.hcltechsw.com/domino/12.0.0/admin/secu_addingtrustedroots.html


      Here is an example of an imported trusted root, which you can just assign to a TLS Credentials document.

      Image:Important: In Domino V12 certstore.nsf is the recommended way for TLS/SSL server certificates



      K3s - an efficient and well done Kubernetes distribution, ready for test and production

      Daniel Nashed  21 July 2021 22:14:57

      Last week I looked into different types of Kubernetes (K8s) environments.
      And I found out that the one I have been using is already the best for my use cases!

      So I went back to my favorite K3s distribution (https://k3s.io/), where everything worked out of the box.

      K3s is a great Kubernetes distribution. On the one hand it is very lightweight, implemented in a single binary.
      On the other hand you can really scale it out to a cluster, given the right storage infrastructure to start with.

      It's extremely well done and we have been using it for all our workshops.
      You can work with it even in a small VM with just one CPU core and 2 GB of RAM for a smaller Domino server!

      And I just didn't realize how lucky my choice was a while ago.

      It doesn't need an installed Docker environment and installs and starts in under 1 minute.
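
      The installation is really just one command (this is the quick start from https://k3s.io/):

      # installs K3s as a service and starts it
      curl -sfL https://get.k3s.io | sh -

      # check that the node is up
      k3s kubectl get node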

      My tests with storage snapshots and ZFS have all been done with K3s as well.
      I also submitted the step-by-step training instructions for a workshop we have run a couple of times.

      Link to the lab instructions -->
      https://github.com/IBM/domino-docker/tree/develop/lab/setup

      The examples we used were already there. Now I have added the step-by-step instructions.

      We are also working on releasing the full material, probably jointly with the HCL Software Academy.

      If you never looked into K8s, it might be time to start.
      And if you are using K8s I would be interested to hear which distribution you are using and why.

      Our community image should run the same way on different distributions, so trying other ones is a good way to learn about new functionality and get new ideas.

      But I have to say I have been most impressed by K3s so far.


      -- Daniel

      Kubernetes Snapshot Support -- Is anyone using it in production today?

      Daniel Nashed  19 July 2021 09:43:00

      I am currently looking into the best way to back up applications on Kubernetes and ran into kasten.io (short name: K10), which is now owned by Veeam (https://www.kasten.io/).

      K10 is well done and easy to install. But you will need storage supporting snapshots.

      There are a couple of providers supported natively, and they also support CSI drivers with snapshot functionality.

      K10 is free for up to 10 applications. So I am currently using the free version to explore integration options.



      ZFS Snapshots


      OpenZFS natively supports snapshots, so this was the first logical step for me to look into.


      On CentOS 8.4 I had to compile the OpenZFS kernel driver from the sources.

      Once I found out what the latest version was, it was straightforward (https://openzfs.org).
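
      On the plain ZFS level, snapshots are a one-liner. The pool and dataset names below are just examples:

      # create a snapshot of a dataset
      zfs snapshot tank/domino@before-upgrade

      # list snapshots and roll back if needed
      zfs list -t snapshot
      zfs rollback tank/domino@before-upgrade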


      CSI Driver for ZFS


      On top of that I needed the OpenEBS ZFS driver for K8s (https://openebs.io/blog/snapshot-and-clone-for-zfs-localpv/).

      Now I have CentOS 8.4 + ZFS + OpenEBS + K3s (https://k3s.io/) with snapshot storage + K10 from Kasten for testing container snapshots.

      In parallel I am looking into native container snapshots on the command line and via the K8s API.
      This looks like an interesting combination to back up applications either way.
      K10 really hides the complexity and makes it easy with a graphical interface.
      But the underlying snapshot technology is available natively in K8s in the current versions -->
      https://kubernetes.io/docs/concepts/storage/volume-snapshots/
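
      To check whether a cluster already has the snapshot CRDs and a snapshot class in place, two quick commands are enough:

      # are the VolumeSnapshot resources available?
      kubectl api-resources | grep volumesnapshot

      # which snapshot classes does the installed CSI driver provide?
      kubectl get volumesnapshotclass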

      [ Disclaimer about ZFS ]

      You should note that ZFS has not been tested by HCL for Domino.
      I posted before about ZFS and it looks like some customers are using it.
      The only known issue is that you cannot use the translog optimized block size alignment enabled by create_r85_log=1, which still causes Domino V12 to crash.


      Other CSI Storage drivers


      With k3s, the Longhorn storage (https://longhorn.io/) could also be a good choice.
      I saw installation instructions for k3s, but I don't want more moving parts for now.
      The beauty is that once you have a CSI storage driver, all snapshot operations are transparent and the details are hidden behind the APIs.


      Your Feedback?


      I would be very interested to hear if anyone has looked into this before or is even using it in production.

      Of course my background is always Domino. So my ultimate goal would be to have snapshot integration for Domino as an application.

      I have all the building blocks ready to go including invoking snapshots from the K8s API.


      • Is there anyone out there using snapshots?
      • What type of storage do you use?
      • What type of environment are you using?

      Daniel

      Domino listed as Let’s Encrypt ACME V2 provider

      Daniel Nashed  22 June 2021 20:44:03


      Wow! HCL Domino is now officially listed as a fully supported Let's Encrypt ACME 2.0 provider.
      Looking around the list, Domino doesn't even need to hide behind reference implementations like Certbot -- and on top of that it is fully integrated into Domino V12.
      It's a completely independent implementation of the ACME protocol (RFC 8555).
      Image:Domino listed as Let’s Encrypt ACME V2 provider

      https://letsencrypt.org/docs/client-options/


      Image:Domino listed as Let’s Encrypt ACME V2 provider


      I blogged about it in detail, and there was the OpenNTF webcast. I also had another session about it at AdminCamp this week.

      But yes, Let's Encrypt support is just one of the components of the new CertMgr. And there are many other security enhancements in Domino V12.
      You should really check out what has been added.

      -- Daniel

      Rocky Linux available

      Daniel Nashed  22 June 2021 20:30:29
      Rocky Linux has been available since yesterday, and I took a quick look.
      It is available as a minimal ISO like CentOS, and there is also a Docker image.

      I have installed it and it really looks & feels like CentOS 8.
      And for the Domino Docker Community Image I just cloned the CentOS 8 dockerfile.
      So now you can also build your Domino container based on Rocky Linux.
      https://rockylinux.org/


      At first glance it just works, and if someone really wants to move to Rocky Linux, the support statement for Rocky Linux and CentOS 8 is the same:
      it has not been tested by HCL, but it meets the support requirements and should work.

      If you want to build your Domino image with Rocky Linux, just switch to the development branch of our community image on GitHub and use the following command line:

      ./build.sh domino dockerfile_rocky

      The docker image they provide is properly labeled and the /etc/os-release file looks like this:

      NAME="Rocky Linux"
      VERSION="8.4 (Green Obsidian)"
      ID="rocky"
      ID_LIKE="rhel fedora"
      VERSION_ID="8.4"
      PLATFORM_ID="platform:el8"
      PRETTY_NAME="Rocky Linux 8.4 (Green Obsidian)"
      ANSI_COLOR="0;32"
      CPE_NAME="cpe:/o:rocky:rocky:8.4:GA"
      HOME_URL="https://rockylinux.org/"
      BUG_REPORT_URL="https://bugs.rockylinux.org/"
      ROCKY_SUPPORT_PRODUCT="Rocky Linux"
      ROCKY_SUPPORT_PRODUCT_VERSION="8"
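
      If you just want to peek at the image yourself, something like the following works. The image name and tag are an assumption based on what Rocky Linux published on Docker Hub at the time, so adjust as needed:

      docker run --rm rockylinux/rockylinux:8.4 cat /etc/os-release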


