Daniel Nashed 21 May 2022 15:33:44

Engage was the last conference I attended, and it will be the first one after two years. I am really looking forward to meeting many of you again next week!

Even though it was a very difficult time, it was an opportunity for me to work on exciting new projects. Meanwhile Domino 12.0 and Domino 12.0.1 shipped, and I was never more deeply involved than in the last two years. I also spent a lot of time on other related projects. My blog was never more active, with many different topics. It has been quite a ride, and I presented virtually and gave workshops about Domino 12.0.x. Still, there will be brand new functionality shown at Engage.

There are also new GitHub projects I am involved with:

HCL Domino Docker Community Container Image build
https://github.com/HCL-TECH-SOFTWARE/domino-container

Nash!Com Domino Start Script
https://github.com/nashcom/domino-startscript

HCL Domino V12 Backup Integrations
https://github.com/HCL-TECH-SOFTWARE/domino-backup

HCL Domino V12 Certificate Manager Integrations
https://github.com/HCL-TECH-SOFTWARE/domino-cert-manager

I can't present about all my current projects. But Bill and Martijn -- two fellow HCL Ambassadors -- will present material related to my work. And at the end of the conference there is also a round table for Linux, Docker and other related topics. If you can't make it into those sessions, you should still stop by for questions and to say hello.

I am looking forward to finally meeting many of you again next week! Have a safe trip to Bruges!!

Daniel

Related sessions

What's new in Domino 12.0.1 Certificate management
Ad06 / Tuesday, May 24 | 11:30 - 12:30 | C. Room 6
Daniel Nashed

What's new in Domino 12.0.1 Backup and coming in 12.0.2 -- Practical jump start and deep dive
Ad05 / Tuesday, May 24 | 15:45 - 16:30 | A. Room 1-2
Daniel Nashed

The Domino 12 on Linux Technical Deep Dive
Ad17 / Tuesday, May 24 | 16:45 - 17:30 | E. Room 12
Bill Malchisky

Why it's a good time to use Domino on Docker in production and how to start
Ad11 / Wednesday, May 25 | 09:00 - 10:00 | C. Room 6
Martijn de Jong
(See Martijn's blog post: https://blog.martdj.nl/2022/05/16/im-presenting-at-engage-in-bruges/)

The HCL on Linux Round Table
Ro09 / Wednesday, May 25 | 16:00 - 16:45 | R. Room 10-11
Bill Malchisky, Thomas Hampel, Tim Clark, Daniel Nashed


Daniel Nashed 21 May 2022 12:58:43

On Linux the notes.ini is in the data directory by default. On Windows it is in the binary directory by default. You could move it to the data directory, which would make sense from a backup point of view in many cases anyway.

But what if you have it in the program directory and install a new major version where you get rid of all your binaries as a best practice? I always forget to save the notes.ini...

Before running a restore -- often you might not have a backup from a test server anyway -- there are faster ways to recover. Domino captures the last server doc and config doc in DXL files. But it also creates sysinfo NSDs in the IBM_TECHNICAL_SUPPORT directory. Those sysinfo NSDs are created when the configuration changes and are also very helpful to check which changes occurred in the meantime when you run into a crash situation.

Here is the trick: extract the notes.ini information from the corresponding section in a sysinfo NSD and create a new notes.ini file -- like I did just before writing this blog post ;-) A small extraction sketch follows below.

I hope this helps others as well.

-- Daniel
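A minimal extraction sketch for Linux. The file name and the section marker patterns are assumptions -- open your sysinfo file in IBM_TECHNICAL_SUPPORT once, check how the notes.ini section is delimited, and adjust the patterns before using it:

#!/bin/bash
# Sketch only: copy the notes.ini section out of a sysinfo NSD.
# NSD_FILE and the marker patterns below are assumptions -- verify them against your file.
NSD_FILE=/local/notesdata/IBM_TECHNICAL_SUPPORT/sysinfo_myserver.log   # placeholder name

# Print everything between the section header mentioning notes.ini and the next
# "<@@" section header, skipping the two marker lines themselves.
awk '/^<@@/ && /[Nn]otes.?ini/ {grab=1; next} /^<@@/ {grab=0} grab' "$NSD_FILE" > notes.ini.recovered

echo "Review notes.ini.recovered before copying it to the Domino program or data directory."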
Daniel Nashed 21 May 2022 12:00:59

There are not that many AIX Domino customers around any more. But everyone using Domino on AIX should know about this critical issue.

Domino uses the IOCP API for all native TCP/IP connections (it's used on the NTI layer, which is the abstraction layer between Domino and the OS-level TCP/IP interfaces). According to the APAR, only the 7.2.5.1 level of bos.iocp.rte is affected.

https://www.ibm.com/support/pages/apar/IJ33605

I am running into a weird NSFSearch (SEARCH transaction) performance issue with Domino on AIX on larger databases, and it isn't clear whether this might also be network related. But you should definitely be aware of this IOCP AIX issue. And if you have performance or network issues on AIX, open a ticket at IBM first, before looking into low-level Domino traces.

I ran into this some months ago, but I have a current case where this issue or a similar one could be the root cause. That's why I am posting for awareness.

-- Daniel


Daniel Nashed 13 May 2022 11:07:05

Quite interesting results... I have been looking into different hash algorithms today to see the overhead. It turns out that SHA256 is the slowest option and SHA1 is the winner. But it is interesting that SHA384/SHA512 are also faster than SHA256.

My main interest right now is in SHA algorithms for checking large files. But SHA algorithms come into play in many other scenarios. In general I would avoid SHA256 if possible. For security-related operations SHA384 looks like the better option. For file operations where you calculate a file hash, SHA1 is probably the best choice. You can also combine the file hash with the size of the file, if you are really paranoid. Actually this is what the Domino DAOS NLO hash uses for objects representing the files off-loaded from the NSF databases.

I wrote a test tool building a hash from an in-memory block of 64 KB in a long loop to get relevant data:

MD5    : 278 MB/sec
SHA1   : 388 MB/sec
SHA256 : 179 MB/sec
SHA384 : 276 MB/sec
SHA512 : 274 MB/sec

This matches the command-line Linux tool performance with a file that was already in cache. In normal cases your file I/O is probably the limiting factor. But if hash performance is important in large-scale operations -- like DAOS hashes for example -- SHA1 is still a really good choice, especially in combination with the file size. If you really care about a very unique file hash on its own, SHA384 might be the best choice today. But for standard file hashes, I would still go with SHA1.

Test on Linux with a file in cache:

time sha1sum Domino_12.0.1_Linux_English.tar
8eb2ff4d480c001866db3dcd90b8cd3fcb372d37  Domino_12.0.1_Linux_English.tar
real 0m1.434s
user 0m1.177s

time sha256sum Domino_12.0.1_Linux_English.tar
a9d561b05f7b6850ed1230efa68ea1931cfdbb44685aad159613c29fa15e5eea  Domino_12.0.1_Linux_English.tar
real 0m2.706s
user 0m2.505s

time sha384sum Domino_12.0.1_Linux_English.tar
7b321875a5a0f5f48701fb9760a6558f80a04ca606a32e5747369f67045cf0a9cd44242ff5efb9f39c02fd6c59a20002  Domino_12.0.1_Linux_English.tar
real 0m1.985s
user 0m1.827s

time sha512sum Domino_12.0.1_Linux_English.tar
3152bc50b94392942525a18ae14c9568bb6ea0977f6661a81fd83eff04c5e13d3cc6187dc44648d9bb57cdc362d3810c5cfe4663f8918d8618255a458ec1119a  Domino_12.0.1_Linux_English.tar
real 0m1.908s
user 0m1.744s
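The MB/sec numbers above came from my test tool hashing an in-memory buffer. If you want to reproduce the comparison with the standard command-line tools, a simple loop like the following works -- the tar file name is just the example from above, use any large local file. Reading it once first keeps disk I/O out of the measurement:

#!/bin/bash
# Compare hash throughput of the standard command-line tools on a file in cache.
FILE=Domino_12.0.1_Linux_English.tar   # replace with any large local file

# Warm the page cache so the timings measure hashing, not disk reads.
cat "$FILE" > /dev/null

for CMD in md5sum sha1sum sha256sum sha384sum sha512sum
do
  echo "--- $CMD"
  time "$CMD" "$FILE"
done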
Daniel Nashed 12 May 2022 23:59:53

Domino waits for 10 seconds after shutdown before a restart, for some legacy reason. There is a notes.ini variable to reduce the number of seconds. I tested with Domino 12.0.x that it can be reduced to 1 second:

set config SERVER_RESTART_DELAY=1

So a "restart server" will restart the server directly after shutdown. Especially on test servers this is very useful for impatient admins and developers -- like me.

-- Daniel


Daniel Nashed 8 May 2022 16:26:56

HTTP/HTTPS based services

When adding TLS to a web-based/REST service like Apache Tika, it is very straightforward. That's a typical HTTP/HTTPS configuration you can run with any load balancer -- in case of K8s this would be the perfect example for an Ingress. Actually Tika is a very good example of a scalable container. It has no data and just provides an attachment parsing service via REST.

But what if your service isn't HTTP based, like the ClamAV protocol?

NGINX stream config with TLS off-loading

You can create a simple stream configuration with a TLS key/cert. The example config below just uses one back-end. It could run in an extra container, or you could add it into the same ClamAV container.

I looked into ICAP gateways from Trend Micro and McAfee. Both support TLS, but for security-oriented companies they have a very lousy TLS implementation with older TLS versions... They also only work well with their self-signed certs. If you use a certificate from a CA, you will notice that they only send the certificate without the intermediates. Remote clients usually expect the certificate chain and only have the CA root in their trust store. And some don't run really up-to-date Linux versions. So you might want to put something more up to date in front of your ICAP servers -- even if they support TLS.

Here is the simple sample configuration.

-- Daniel

/etc/nginx/nginx.conf

events {}

stream {

  upstream backend {
    server 172.21.163.102:3310;
  }

  server {
    listen 3311 ssl;
    proxy_pass backend;

    ssl_certificate     /local/cert.pem;
    ssl_certificate_key /local/key.pem;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    ssl_protocols       TLSv1.2 TLSv1.3;

    proxy_ssl_session_reuse on;
  }
}


Daniel Nashed 5 May 2022 10:38:40

Domino Certificate Manager works like a charm and is the best option for native Domino 12 certificate management. But in a K8s environment you might rather have certificates deployed outside Domino, in front of your Domino K8s service. Mostly you will use a so-called Ingress controller, which offloads your TLS traffic.

I took a look into https://cert-manager.io/docs/concepts/certificate last night. It turned out the issues I ran into only occurred because of a messed-up k3s installation. After I re-created my server, I was ready to go in minutes.

K3s uses Traefik instead of NGINX

All the examples in the Certificate Manager documentation use NGINX as the ingress controller. K3s uses Traefik as the default ingress controller. When troubleshooting the configuration I read about a lot of settings admins had to apply to get it working. But it turned out it is pretty simple and you just have to specify the right annotation. See the example below. I am also posting this to make it easier for others. You don't need all the fancy options admins used to get it working on K3s. It's very straightforward today -- it might have needed tweaks earlier.

Installation of Certificate Manager is pretty straightforward and well documented --> https://cert-manager.io/docs/
I just installed it via Helm and checked the verification steps.
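For reference, a minimal Helm install sketch roughly following the cert-manager documentation. The chart repository and the installCRDs value are taken from the official docs; chart values can change between releases, so check the current instructions:

# Add the Jetstack chart repository and install cert-manager including its CRDs.
helm repo add jetstack https://charts.jetstack.io
helm repo update

helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true

# Verify the cert-manager pods come up before creating issuers.
kubectl -n cert-manager get pods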
Let's Encrypt configuration

Once it is installed, you can just create a configuration. In my case I am using Let's Encrypt staging for testing:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: nsh@acme.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: le-staging-account-key
    solvers:
    - http01:
        ingress:
          class: traefik

Once the configuration is in place, you can add a cert-manager.io annotation to your Ingress. That's really all you need to get a certificate from Let's Encrypt.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: domino-http
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
  - secretName: domino-tls
    hosts:
    - k3s.acme.com
  rules:
  - host: k3s.acme.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: domino-http
            port:
              number: 80

TLS certificate and key stored in a K8s secret

As you know me, I am always interested to understand how it works and how the certificate and key are stored. You can get the secret in JSON format:

k get secret/domino-tls -o json

Get the certificate from the secret

The certificate is stored base64 encoded. The following command extracts the certificate chain:

k get secret/domino-tls -o json | jq -r ".data.\"tls.crt\"" | openssl base64 -A -d

Get the key from the secret

The key is also stored base64 encoded:

k get secret/domino-tls -o json | jq -r ".data.\"tls.key\"" | openssl base64 -A -d

The result of both commands is the decoded cert chain and key in PEM format.

Conclusion

I think this is good to know and can help if you want to reuse certificates or want to import your own certificates. The K8s Certificate Manager is quite powerful and has many different options -- not just Let's Encrypt. But Let's Encrypt is widely used, free and works like a charm.


Daniel Nashed 28 April 2022 23:19:16

This is quite cool! Docker client and server can be on different machines. There was recently an exploit when you use TCP/IP connections. But there are a lot of other options. In my case an SSH connection made sense.

My old MacBook can't run a current Docker host. But I wanted to use it at least to test all my scripts. I downloaded the latest Docker client and configured a remote connection. This allows me to test all my scripts on my old MacBook while running the build process on a remote machine.

You just need to configure an SSH key, add your remote user to the "docker" group, and with a simple export I tell my MacBook where my Docker host is located:

export DOCKER_HOST=ssh://notes@192.168.99.100

This is very helpful in my case and can also be helpful if you want to run Docker commands from your local machine against a remotely hosted server.

See the Docker documentation for details --> https://docs.docker.com/engine/security/protect-access/

-- Daniel
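Instead of exporting DOCKER_HOST in every shell, the same remote engine can also be registered as a named Docker context -- a small sketch reusing the host and user from the export above:

# Create and activate a context that points to the remote engine over SSH.
# Prerequisites as described above: SSH key configured, remote user in the "docker" group.
docker context create remote-build --docker "host=ssh://notes@192.168.99.100"
docker context use remote-build

# All docker commands now run against the remote engine.
docker info
docker ps

# Switch back to the local engine when needed.
docker context use default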
Daniel Nashed 26 April 2022 12:04:57

Again a German-speaking workshop, organized by DNUG. If there is high demand for English, I would do it again in English.

This is a unique workshop, where participants get a ready-to-go Windows server configured at Hetzner with Domino 12.0.1 and Veeam Backup & Replication already pre-installed. The installation is the boring "next, next... wait..." part, and in the practical part we jump directly into the configuration, starting with Domino. I found a way to have pre-installed Windows servers clonable at Hetzner, which gives us an easy-to-use lab environment via RDP.

But we will also look into all the details of the Domino backup application, with information about how to integrate with other backup applications. Veeam is one of the applications used as an example integration. But the workshop is also interesting if you are not using Veeam.

Agenda and a link to register are on the DNUG website: https://dnug.de/event/dnug-deep-dive-administration-domino-12-backup/

As always I am having a lot of fun preparing the lab. It will be prepared similarly to the Linux labs we had at earlier workshops. If you have questions, drop me a mail or a comment.

-- Daniel


Daniel Nashed 24 April 2022 11:25:58

Ubuntu released their new LTS release. I am not the biggest fan of Ubuntu on servers, but on desktops it is my favorite distribution. And my dad is also happy with his Ubuntu desktop.

https://ubuntu.com/blog/ubuntu-22-04-lts-released

I checked the basic version information. There are a couple of version updates I waited for. Especially OpenSSL 3.0.2 and ZFS 2.1.2 are important additions!

- Kernel 5.15.0-25
- OpenSSL 3.0.2
- ZFS 2.1.2
- Docker 20.10.12

Hetzner already provides a cloud image "ubuntu-22.04". I still prefer CentOS Stream 9, but I am seriously considering Ubuntu for servers, because it already contains ZFS. SUSE Leap is sadly not available as a cloud image on Hetzner.

Update an existing machine

To start with, I updated my local test environment. It took quite a while, but I finally had a fully migrated 22.04 LTS machine.

apt-get update
apt-get dist-upgrade
do-release-upgrade -d

Ubuntu 22.04 LTS on Docker Hub

The container image is also available on Docker Hub --> https://hub.docker.com/_/ubuntu. I tested a new build of the Docker community image via

./build.sh domino -from=ubuntu

The "ubuntu" tag now points to 22.04 LTS -- which is what I expected. This command took me to the current LTS release, and it allows the full stack to be Ubuntu 22.04 LTS. I still have to note that 5.x kernels are not yet supported by Domino, but there are no known issues.

Conclusion

Ubuntu 22.04 LTS is a welcome addition to my Linux zoo. And my Docker-based Pi-hole continues to work well after the upgrade.
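A quick sanity check of the versions listed above after an upgrade -- all standard commands; the ZFS line assumes the ZFS utilities are installed:

# Confirm release and component versions after the upgrade.
lsb_release -d        # should report Ubuntu 22.04 LTS
uname -r              # kernel 5.15.x
openssl version       # OpenSSL 3.0.x
docker --version      # Docker 20.10.x
zfs version           # ZFS 2.1.x (only if the ZFS utilities are installed)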