Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...

Setting TXT records via AWS CLI for ACME DNS-01 challenges

Daniel Nashed  13 May 2021 08:57:00

Most of the Domino V12 CertMgr integrations for DNS-01 challenges I have built so far leverage REST APIs.

That isn't an option for AWS DNS. But we finally found a straightforward way using the AWS CLI.

All the AWS Route 53 implementations I have seen in other ACME implementations looked pretty complicated.


We simply authorized the machine to modify the DNS sub-domain. The request looks like the example below. You can just specify a JSON file.

In my final script I only pass the zone ID and replace variables I added in place of the sample values below.


This might also be useful for others integrating with AWS for TXT record updates -- or even other DNS automation.


You will find the full integration script later in the planned HCL open-source GitHub repo for Domino V12 CertMgr, along with more DNS-01 API integration configurations.

If you need this for CertMgr today, just ping me.


-- Daniel



Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/simple-resource-record-route53-cli/

Example command and JSON file:


aws route53 change-resource-record-sets --hosted-zone-id Z012345671ABCD123L42 --change-batch
file://txt_create.json

The inner quotes in the value are important. AWS expects an inner quoted string for TXT records.



txt_create.json

---------------


{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "_acme-challenge.newton.acme.com",
        "Type": "TXT",
        "TTL": 30,
        "ResourceRecords": [
          {
            "Value": "\"duakBxodnTeocISUOQr1vnQfQ09Axv0Sihk0GrHSevI\""
          }
        ]
      }
    }
  ]
}
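My final script works along these lines. Here is a minimal sketch (the variable values below are the sample values from this post, not my real zone): generate the JSON from variables, then hand it to the AWS CLI.

```shell
# Sketch: build the change-batch JSON from variables and submit it via the AWS CLI.
# ZONE_ID, RECORD and VALUE are sample/placeholder values -- substitute your own.

ZONE_ID="Z012345671ABCD123L42"
RECORD="_acme-challenge.newton.acme.com"
VALUE="duakBxodnTeocISUOQr1vnQfQ09Axv0Sihk0GrHSevI"

# Note: \" stays literal inside an unquoted heredoc, producing the inner
# quoted string AWS expects for TXT records.
cat > txt_create.json <<EOF
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "$RECORD",
        "Type": "TXT",
        "TTL": 30,
        "ResourceRecords": [
          { "Value": "\"$VALUE\"" }
        ]
      }
    }
  ]
}
EOF

# Remove the leading "echo" to actually submit the change:
echo aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" --change-batch file://txt_create.json
```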


Looking into first SPF record analysis

Daniel Nashed  11 May 2021 19:01:55

Before adding SPF record checking to SpamGeek, I wanted to find out how much SPF checking would actually help me.
And also what the best strategy to configure it would be.

It wasn't a surprise that most e-mail I want passes the SPF check.
But there is also spam, or at least unwanted e-mail, that has a proper SPF configuration.

Spammers mostly have no SPF configuration. And they mostly don't misuse systems that have proper SPF configurations.
This makes sense, because most servers probably give very negative ratings for failing SPF checks.

So without SPF records being used in many environments, there would be more misuse of domains.
Now that many environments use them, this has made it safer for everyone else as well.

But for my environment, added SPF checks will not reduce spam dramatically.
Adding SPF checks still makes sense, and it is even more important to have a proper SPF configuration of your own.

It's not a nice-to-have. It's a must-have today for your domains.
And your domain just needs proper SPF TXT records in DNS.
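An SPF record is just a TXT record on your domain. A quick shell sketch (the record string below is a made-up example; in practice you would fetch it with dig) shows how the trailing all-mechanism encodes the policy:

```shell
# Hypothetical SPF record -- in practice fetch it with: dig +short TXT yourdomain.com
spf='v=spf1 mx a ip4:192.0.2.0/24 -all'

# The final "all" mechanism defines the default result for senders not listed:
#   -all = fail (hard), ~all = softfail, ?all = neutral, +all = pass (don't do this)
policy=$(printf '%s\n' "$spf" | grep -o '[-~?+]all$')
echo "default policy mechanism: $policy"
```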

-- Daniel

SpamGeek database 2021 -- Good and Possible Spam categories only -- % of the total sum of data per sub-category

1. Very Good                    48,4%
   -- No SPF Status --           0,0%
   0 - invalid                   0,2%
   1 - neutral                   0,1%
   2 - pass                     44,0%   --> Most of the really good mail has a proper SPF record
   3 - fail                      0,0%
   4 - softfail                  2,0%
   5 - none                      1,8%
   6 - temperror                 0,0%
   7 - permerror                 0,1%   --> Some people I know got their SPF configuration wrong -- I already sent mails ;-)

2. Good                          7,6%
   -- No SPF Status --           0,0%
   0 - invalid                   0,0%
   1 - neutral                   0,1%
   2 - pass                      6,8%   --> Most of the good mail has a proper SPF record
   3 - fail                      0,0%
   4 - softfail                  0,1%
   5 - none                      0,4%
   6 - temperror                 0,0%
   7 - permerror                 0,0%

3. Possible Spam                11,1%
   -- No SPF Status --           0,5%
   0 - invalid                   0,0%
   1 - neutral                   4,1%
   2 - pass                      3,1%   --> Some possible spam has proper SPF records! Many don't see themselves as spammers ..
   3 - fail                      0,2%
   4 - softfail                  0,6%
   5 - none                      2,4%
   6 - temperror                 0,1%

4. Spam                         32,9%
   -- No SPF Status --           0,7%
   0 - invalid                   0,8%
   1 - neutral                   3,3%
   2 - pass                      7,2%
   3 - fail                      0,2%
   4 - softfail                  0,5%
   5 - none                     20,3%   --> Real spam mostly has no SPF record
   6 - temperror                 0,0%
   7 - permerror                 0,0%




Can I assume an SMTP sender has an SPF record?

Daniel Nashed  11 May 2021 10:05:50

Just checked all the mail SpamGeek received this year for DNS TXT records.
I added all DNS TXT records to the existing SpamGeek logs.

Now I am looking into the results..

I am not yet checking SPF records automatically -- this is the step before, to decide whether I want SPF checks at all ...
  • Almost all my customers have SPF records
  • Many spammers or newsletter senders have SPF records

So what would be the strategy?
  • Give someone without an SPF record a small negative reputation?
  • Give anyone with a non-matching SPF record a higher negative reputation?
  • Do nothing for someone with a good SPF record? --> Because that is what I would expect?

Not sure you can modify the behavior of your anti-spam solution in that detail.
But if you could, what would be your strategy?

The next step is really to add libspf to the Linux version of SpamGeek.
The code is ready to go. I just need to plug it in --> https://www.libspf2.org/
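As a sketch of the strategy above (these weights are my own hypothetical examples, not SpamGeek's actual configuration), the SPF result could be mapped to a reputation delta like this:

```shell
# Hypothetical reputation weights for SPF results -- tune to your own environment.
spf_score()
{
  case "$1" in
    pass)      echo 0   ;;  # expected behavior -> no bonus
    none)      echo -2  ;;  # no SPF record -> small negative reputation
    softfail)  echo -5  ;;
    fail)      echo -10 ;;  # non-matching SPF -> higher negative reputation
    *)         echo 0   ;;  # neutral/temperror/permerror -> no change here
  esac
}

echo "fail scores: $(spf_score fail)"
```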


-- Daniel



    Getting DNS TXT records in C

    Daniel Nashed  10 May 2021 11:37:10


    Getting the TXT records for a DNS entry is important today (checking SPF records, ACME challenges, DMARC records .. ).

    But this isn't part of the standard resolver calls you would use for normal DNS queries.


    I had been looking for TXT record resolution for a while and finally found it in one of my oldest IT books, "DNS & BIND".
    Even there, there wasn't a complete example, and I needed to do a lot of research this morning.
    The result doesn't look that complicated. But the devil is still in the details.
    This functionality has always been there, and you would also need it to query MX records, for example.

    Windows implements its own interface, DnsQuery(). The call also looks simple, but I didn't find any example or full documentation for it.
    My new functionality in SpamGeek will be Linux-only for now -- because "libspf" is also only available for Linux ..

    In case you need a routine to get DNS TXT records, here is an example that uses the newer thread-safe interface.


    It needs to initialize a resolver state per thread -- or, as in my example code, per request.
    The routine doesn't have full error logging and is more of an example.


    -- Daniel


    /* Compile with: gcc gettxt.c -o gettxt -lresolv */

    #include <stdio.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <arpa/nameser.h>
    #include <resolv.h>

    int GetTxtRecord (char *pszDomain)
    {
      unsigned char Buffer[8000]   = {0};
      unsigned char Result[2048]   = {0};
      const unsigned char *pResult = NULL;
      struct __res_state ResState  = {0};
      ns_msg nsMsg = {0};
      ns_rr  rr;

      int ret      = 0;
      int size     = 0;
      int len      = 0;
      int count    = 0;
      int i        = 0;
      int res_init = 0;

      ret = res_ninit (&ResState);

      if (ret)
      {
        printf ("error initializing resolver\n");
        goto close;
      }

      res_init = 1;

      /* res_nquery() returns the response size, or -1 on error */
      size = res_nquery (&ResState, pszDomain, C_IN, T_TXT, Buffer, sizeof(Buffer)-1);

      if (size < 0)
        goto close;

      ret = ns_initparse (Buffer, size, &nsMsg);

      if (ret)
        goto close;

      count = ns_msg_count (nsMsg, ns_s_an);

      for (i=0; i < count; i++)
      {
        ret = ns_parserr (&nsMsg, ns_s_an, i, &rr);

        if (ret)
          goto close;

        len     = ns_rr_rdlen (rr);
        pResult = ns_rr_rdata (rr);

        /* TXT rdata starts with a one-byte length prefix for each character string */
        if ( (len > 1) && (len < sizeof(Result)) )
        {
          len--;
          memcpy (Result, pResult+1, len);
          Result[len] = '\0';
          printf ("#%d [%s]\n", i, Result);
        }

      } /* for */

    close:

      if (res_init)
        res_nclose (&ResState);

      return ret;
    }


    Domino backup -- increase timeout for large databases

    Daniel Nashed  9 May 2021 19:42:30

    The timeout per database has always been 15 minutes.
    For a large database the file copy operation can take longer.
    We have been testing with larger physical databases and wondered where the timeout error came from when getting the change info for the database.

    Especially for a large database, the chance of non-zero change info is quite high.
    If your backup application doesn't log those errors, your backup might be incomplete without the backup application letting you know.

    So you should increase the timeout to at least 60 minutes. With a transfer rate of 60 MB/sec this should work for a ~210 GB database.
    Hopefully your backup is even faster when taking a backup of a large database...

    notes.ini BACKUP_TIMEOUT=60
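The math behind the 60-minute suggestion can be checked quickly (sample numbers from above; integer shell arithmetic):

```shell
# Minutes needed to copy a database at a given transfer rate (rough sizing sketch).
size_gb=210
rate_mb_per_sec=60

seconds=$(( size_gb * 1024 / rate_mb_per_sec ))
minutes=$(( seconds / 60 ))

echo "copying ${size_gb} GB at ${rate_mb_per_sec} MB/sec takes about ${minutes} minutes"
```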


    -- Daniel

      Domino Diagnostics and Crash detection + Fault Recovery

      Daniel Nashed  5 May 2021 11:35:16

      Based on an AHA idea that I don't really agree with, I want to explain the background of why Domino is implemented the way it is in regard to fault recovery.
      A long time ago at Lotusphere, in the developer labs, I had discussions with the developers responsible for NSD, fault recovery & co about why Domino restarts the server if one task fails.
      They spent a lot of effort making Domino reliable and available, and making diagnostics easy to run.
      Domino has many such features, earlier called RAS (Reliability, Availability and Serviceability).

      Let me explain some of the aspects and I will link this blog post to the AHA idea.

      https://domino-ideas.hcltechsw.com/ideas/DOMINO-I-1682

      -- Daniel


      NSD/Fault Recovery and Diagnostic Collection

      Domino fault recovery, NSD, memcheck, trapleak debugging and other features were developed long ago and are still outstanding in the industry for collecting diagnostic data that helps HCL support and development pin-point issues.

      Some of it is really geeky and not documented in detail. Long ago I did two-day customer workshops, including hands-on sessions, on this topic.
      But clearly the information is intended for developers to look at. Still, I got many questions about the LND tool, which was able to extract some details from NSDs.
      That tool was developed by someone in support and wasn't transferred from IBM to HCL. But there are many out-of-the-box features that already help you.

      Domino itself has a fault analyzer, which is already a great way to correlate call stacks and crash information.

      Why does Domino crash if one servertask fails?

      A kill -9 is a hard kill (SIGKILL), which is always the last resort. Other applications use SIGHUP to notify a task to reload its configuration.
      Many configuration changes are applied without a restart. For example, in Domino V12 certificate changes don't require a restart when certstore.nsf is used -- other applications, even on Linux, still need a trigger to reload.


      Any servertask using the C-API initializes the Domino run-time environment.
      So it becomes part of the Domino environment, leveraging all kinds of resources.
      Those resources include shared memory, semaphores and other resources shared among the processes.

      If one process crashes, there are resources which are not cleaned up.
      The process could also have overwritten shared memory used by other processes.
      In addition, the process could have locked a semaphore that might never get released if the process is gone unexpectedly.
      This can lead to more damage and also to server hangs later on.

      That's why Domino has an internal monitor to check that Domino processes shut down cleanly.
      On Linux/Unix, SIGCHLD is also checked for process terminations.
      On Windows there is no such signal, so the process monitor panics the server if a process terminates unexpectedly.

      This is all designed to protect the remaining processes and the data in memory.
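The monitoring idea can be illustrated with a tiny shell sketch (not Domino code, just the general principle): a parent process notices a child's abnormal exit status and reacts.

```shell
# Illustration only: a supervisor reacting to a child process that dies unexpectedly.
# Domino's internal process monitor works on a similar principle (SIGCHLD on Linux/Unix).

( exit 42 ) &          # simulate a servertask terminating abnormally
CHILD_PID=$!

STATUS=0
wait "$CHILD_PID" || STATUS=$?

if [ "$STATUS" -ne 0 ]; then
  echo "child $CHILD_PID terminated unexpectedly (status $STATUS) -- cleanup required"
fi
```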


      Fault Recovery / Transaction Logging & Co
      There are a couple of features playing together hand in hand.
      Here is a very brief overview of the most important components -- but there is much more than those main functions.
      Each of them is something I could fill pages of blog posts explaining in detail.

      No panic -- you don't need to know all those technical details.
      It is important to use those functions, and HCL support or an HCL business partner can use the data to help you.


      1. Fault Recovery (server doc)
      Detects a crash and restarts the server.

      2. Diagnostic Collection/NSD (server doc)
      Collects an NSD and memcheck at crash time and plays hand in hand with fault recovery.

      3. Transaction Log (server doc)

      Ensures databases are always consistent. Improves run-time performance by writing changes into the translog first to let the process continue its work.
      Writes changed data asynchronously into the databases.
      And most important: ensures that at server restart the transaction log applies changed information to make the databases consistent without the need for a fixup.
      So the server will be up and running with all databases quickly.

      4. Automatic Data Collection (config doc)
      Collects NSDs and other diagnostic information and sends them into a central database.

      5. Fault Analyzer (config doc)
      Runs on the central mail-in database to annotate and correlate NSDs.
      This works for servers and clients, and you have all information in one place.
      There is no need to request an NSD from a Notes client or log into a Domino server.

      There is much more. But those are the most important parts to configure to leverage Domino diagnostics.
      Of course there are many notes.ini settings for debug information, which is all written into the console.log.
      And there is also specific tracing for HTTP, SMTP and other tasks.



      New Notes Online Meeting integration

      Daniel Nashed  5 May 2021 07:51:02
      This is awesome news announced at NCUG.
      This new integration will be available for Notes 11.0.1 FP3 and higher, implemented in your Notes mail template.
      I haven't seen it myself, but two fellow HCL Ambassadors have already blogged about it.

      Changes were required in the Notes back-end code to make it work.
      But beside that, it's implemented at template level.
      From what I understand so far, the installer/config tool will change the template.

      This will be available soon on GitHub, as Cormac wrote in his blog post.
      I had no chance to look at it on my own. But from what I read and heard, this will be easy to integrate.
      Have a look at Cormac's and Roberto's posts for details.

      https://dominopeople.ie/domino-online-meeting-integration-will-be-here-in-days/
      https://www.robertoboccadoro.com/2021/05/04/domino-online-meeting-integration/

      Huge thanks to the team at HCL who implemented it!!
      This is very important functionality -- especially those days with all the on-line meetings.

      -- Daniel


      Domino V12 using CertMgr for certificates used outside Domino

      Daniel Nashed  3 May 2021 12:03:08

      Domino V12 introduces the new CertMgr, as most of you already know.
      The focus was to implement all the functionality needed inside Domino and to secure the private keys.
      There is currently no way to export private keys. They are generated and encrypted for the CertMgr server and all servers which are selected to have access.

      Exporting the keys would also let you use the kyrtool to create kyr files for older servers.
      You would just need to run "kyrtool import all -k keyfile.kyr -i all.pem".

      What works today

      So you can't export keys today. But you can import keys, and this can help you use the certificates and keys outside Domino.
      Generating the keys with CertMgr would be more comfortable, but you only have to create each key once.
      All the certificate operations can be automated today -- including picking up the certificate and chain.

      Step by Step

      The -importpem functionality expects at least a key and a leaf certificate.
      So we create a dummy self-signed cert for the key.


      1. Create a private key and a self-signed certificate

      openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out cert.pem
      cat key.pem > all.pem
      cat cert.pem >> all.pem


      2. Import the PEM file into certstore.nsf

      load certmgr -importpem all.pem


      3. Use CertMgr to get a certificate

      Now that you have the key imported and a dummy certificate, you can request a certificate the standard CertMgr way.
      (Note: the CN is currently not added to the document as a host name -- fixed in GA; today only SANs are added as host names.)

      Make sure your CertMgr server is listed in "Servers with access" so that the new certificate is automatically used.


      4. Pickup the certificate chain

      a.) You can copy the certificate chain from the TLS Credentials document.

      b.) And you can even use an automated way, leveraging CertMgr to pick up the certificate and chain.
      If SNI is enabled, you can even pick up the cert if the CertMgr server doesn't have a DNS entry for this certificate.

      The following command line can be used to get the full cert chain over HTTPS.
      The private key you already have, plus the certificate chain, will work perfectly on NGINX and can be converted to P12 for ST.

      openssl s_client -servername blog.nashcom.de -showcerts -connect 1.2.3.4:443 < /dev/null 2> /dev/null | sed -ne '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' > chain.txt

      The cool part is that as long as the key stays the same, you can just repeat step 4 to pick up the renewed cert  ;-)
      So once you have copied the key you created to the remote machine, the certificate rollout and update can be automated..
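The P12 conversion mentioned above can also be done with OpenSSL. A sketch with hypothetical file names (key.pem from step 1, chain.txt from step 4; here a throwaway key/cert pair is generated first so the example is self-contained):

```shell
# Create a throwaway key + self-signed cert, standing in for key.pem (step 1)
# and chain.txt (step 4). In practice, reuse those files instead.
openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 1 \
        -subj "/CN=dummy.example.com" -out chain.txt 2>/dev/null

# Bundle key + chain into a PKCS#12 file (e.g. for import into ST):
openssl pkcs12 -export -inkey key.pem -in chain.txt \
        -out bundle.p12 -passout pass:changeme

ls -l bundle.p12
```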

      I have my own OpenSSL-based command-line tool written in C to do all kinds of certificate operations (more my internal-use Nash!Com tool for testing etc.).
      But this can also be done via a shell script invoking the openssl command line as shown above.

      -- Daniel


        How to pass a Domino V12 OneTouch JSON to your pod or container

        Daniel Nashed  3 May 2021 08:16:12
        For Docker it is quite easy to pass the JSON file.
        You just add another volume into a known location:


        Environment variables:

        SetupAutoConfigure=1

        SetupAutoConfigureParams=/etc/domino-cfg/auto_config.json


        When using Kubernetes you can either pass the data via secrets or, better, use a ConfigMap for your JSON file.
        Using a ConfigMap already prepares you for Helm charts later, where you can work with placeholders.
        In your volume definition you just specify the configMap, and in your pod you just mount this volume at the right place.


        We started to add new material into our Docker Domino repository.
        This is still work in progress and we will have more over time.
        But here is a start link for now:
        https://github.com/IBM/domino-docker/tree/develop/lab/kubernetes/domino

        Special THANKS to Daniele Vistalli for bringing us on to the right track for Helm over the weekend ;-)

        -- Daniel


        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: domino12-cfg
          namespace: default
        data:
          auto_config.json: |
            {
              "serverSetup": {
                "server": {
                  "type": "first",
                  "name": "master.domino.lab",
            ...
            }

        -----

        volumes:
        - name: domino-data
          persistentVolumeClaim:
            claimName: local-path-pvc

        - name: domino-cfg
          configMap:
            name: domino12-cfg
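To complete the picture from the volume definitions above, the pod spec then mounts the ConfigMap at the expected location. A sketch (the container name and mount paths are examples matching the environment variables above, not required values):

```yaml
containers:
- name: domino12
  # ... image, ports, etc. ...
  volumeMounts:
  - name: domino-data
    mountPath: /local/notesdata

  - name: domino-cfg
    mountPath: /etc/domino-cfg
```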


          VMware Harbor Registry for Images and Helm Charts

          Daniel Nashed  1 May 2021 10:31:30

          Now you see me impressed, and I have to give huge credit and respect to VMware.
          All the material VMware has in their GitHub repository really looks great and has great documentation.
          They use their own software for everything. All the underlying software like NGINX is built by them on their Photon OS.
          All managed in open source projects ...

          References and entry points

          https://goharbor.io/
          https://github.com/goharbor/harbor

          I set up the registry on Docker with a very simple downloaded script, which checked all the version info for Docker and Docker Compose. And there is one simple YAML file to configure.
          I only had to add a proper certificate for the NGINX they also take care of (I took my Let's Encrypt wildcard created with Domino V12 CertMgr).

          After 5 minutes I had a working server. But they also offer a fully functional demo account.

          Proxy Registry

          The registry also provides easy-to-set-up proxy functionality.
          Some of the configurations are already built in. I tested the connection to the Docker Hub registry.
          In earlier days I had to set this up by hand once for our first hands-on workshop. Now it is just a couple of clicks away.


          If you are looking for a free registry for images and Helm charts, you should seriously consider it.
          I will probably replace my Docker registry:2 setup (with my own NGINX configuration) very soon.

          Demo server --> https://demo.goharbor.io/

          Info for the demo server --> https://goharbor.io/docs/1.10/install-config/demo-server/


          [Image: VMware Harbor Registry for Images and Helm Charts]


