Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...

    Domino Linux Start Script Feedback Request

    Daniel Nashed  23 May 2019 17:41:53
    This week I had a couple of discussions about start script management and deployment.
    I have been at a customer who wanted to keep their existing non-standard locations for the Domino data directory and also the binary location.

    Now with systemd environments we have another file that contains the path to the main script logic, which is located by default in /opt/ibm/domino/rc_domino_script.
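
    For readers who haven't looked at the systemd side yet, here is a minimal sketch of what such a unit file can look like. The paths and values below are illustrative assumptions, not the shipped unit file:

```ini
# /etc/systemd/system/domino.service -- illustrative sketch only
[Unit]
Description=Domino Server
After=syslog.target network.target

[Service]
Type=forking
User=notes
# This is the line that hard-codes the script location discussed above
ExecStart=/opt/ibm/domino/rc_domino_script start
ExecStop=/opt/ibm/domino/rc_domino_script stop
TimeoutStopSec=300

[Install]
WantedBy=multi-user.target
```

    Because the script path is baked into ExecStart, moving the main script means touching this file too, which is part of the motivation for a path that doesn't contain the vendor name.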

    New Start Script Location (/opt/nashcom, /usr/bin)?

    I have introduced a simple install script which installs and updates the start script -- if you are using the default locations.

    But the location might change again with Domino 11 because the binary path contains "ibm".

    I thought about how to make the Domino binary directory and the data directory configurable in an easier way, without a complicated install routine that replaces text in configuration files.

    Taking this all together, it sounds like I should move the start script's main script "rc_domino_script" into a location which doesn't depend on the Domino binary directory name.

    This would allow me to implement automatic configuration more easily and be more flexible for the future.

    Of course the install script would take care of the current location for your rc_domino_script. It would read your current configuration and create a link to the new location to ensure existing configurations would continue to work.
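
    A minimal sketch of how such a compatibility link could work. The function name and both paths are illustrative assumptions, not the actual install routine:

```shell
#!/bin/sh
# Sketch: move the main script to a new central home and leave a
# symbolic link at the old location so existing configurations keep
# working. All names here are illustrative assumptions.

install_compat_link ()
{
  # $1 = old script location, $2 = new central location
  if [ -e "$1" ] && [ ! -L "$1" ]; then
    mkdir -p "$(dirname "$2")"
    mv "$1" "$2"
  fi
  ln -sf "$2" "$1"
}

# Example with the current default and a possible new home:
# install_compat_link /opt/ibm/domino/rc_domino_script /opt/nashcom/startscript/rc_domino_script
```

    Existing systemd units and init configurations would keep pointing at the old path and simply follow the link.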

    It's still quite a change which could impact the way you install and run the start script.

    Start Script New Home on Git?

    In addition I am thinking about other ways to maintain the start script. What would you think about having the start script and the documentation located on github?

    I would need to move the documentation to MD format, and it would look good online. But I would keep a tar file in the project so that anyone not using git software today can still download it as a single tar.

    Those changes would be quite big. So I thought about asking for feedback before implementing them.

    Your feedback via simple 10 question survey

    I have created a simple survey which doesn't ask for very specific information like number of servers and users etc. It also will not ask for a name. It's only for me to understand where you are and what you need.

    You can comment here or fill out the survey which I have created. This is the first survey I have ever done, and I am really curious about the results.

    Of course you can also comment here or send me private emails. But I would really appreciate your feedback on the survey.

    Some of the questions are needed for new functionality. For example, I have been discussing with another BP how many admins are using "sudo".

    Here is the link -->

    Update: I just noticed the website was in German when I started.
    I thought they would translate the text "Sonstiges (bitte angeben)" automatically into other languages.
    But it kept my language setting on the website... I changed the text, but the change did not show for me.
    It means "Other (please specify)". Next time I will switch the website to English first.

    And of course I will share the results.



    Domino Server Install on Linux Fails with Timing Issues

    Daniel Nashed  22 May 2019 17:21:03
    We ran into an interesting issue yesterday which almost made our Domino 10.0.1 installation on Linux on brand-new RHEL 7.6 machines fail.

    The installer always ended like this:

    Initializing Wizard.....    
    Extracting Bundled JRE.    
    Verifying JVM      

    No Java Runtime Environment (JRE) was found on this system.

    Deeper tracing with debug settings led us to these errors:

    Verifying... /tmp/istemp30481141104256/_bundledJRE_/bin/java  -cp  /tmp/istemp30481141104256/Verify.jar Verify  java.vendor java.version
    Verification failed for /tmp/istemp30481141104256/_bundledJRE_ using the JVM file /tmp/istemp30481141104256/_bundledJRE_/jvm

    When it stops, the temp files remain on disk. Interestingly, running the check manually afterwards works.
    I checked everything in the script, which isn't just a normal script but also contains the installer code that is executed from there.

    [root@host /]# /tmp/istemp30481141104256/_bundledJRE_/bin/java  -cp  /tmp/istemp30481141104256/Verify.jar Verify  java.vendor java.version

    IBM Corporation


    After testing all the debug options listed somewhere in the install scripts, I found one parameter that gave me one final idea.
    "-is:jvmtimer" sounded like a delay that you can add during install.

    After I found this parameter I checked for references. The only reference I found was an older IBM installer issue for another product on another platform where this parameter is documented.

    It is needed for slower platforms where the extract and start of the JVM takes too long.
    It isn't entirely clear, but maybe in this case the machine was too fast and the check routine did not wait long enough. The machine had 4 cores with 8 threads each.

    So the solution was to give the JVM extract and Verify.jar a couple of seconds more time. It worked with 2 seconds, but 4 seconds might be a better bet.

    I never ran into this before on any machine over the years. But I am documenting it not just for curiosity but also if someone might run into it.

    Here is how you specify the parameter.

     ./install -is:jvmtimer 4

    I think I will add the parameter to the Domino Docker install script as well, because it could hit us there too.

    InstallShield MultiPlatform is quite an old installer, which will be replaced by InstallAnywhere in Domino 11.

    -- Daniel

    Notes 9/10 Standard Client issues in combination with the "NOTO" fonts installed by LibreOffice 6

    Daniel Nashed  16 May 2019 09:00:05

    This issue came up some days ago already, but I wasn't able to reproduce it. The first reports suggested that it occurs when too many fonts are installed.
    But I had more fonts installed and it did not happen. It is a specific problem with some fonts that LibreOffice 6 installs. LibreOffice 5 doesn't have the "NOTO" fonts and works fine.

    My friends at the University of Zürich (thanks guys for narrowing the issue down for us!) told me at Engage that they have this issue and have narrowed it down to the names of the fonts beginning with "NOTO".
    So this even happens when you rename other fonts to the same names.

    Development is aware of the issue (SPR #SMOYBC7HJD) and support says this impacts not only Notes 10 but also Notes 9 clients.

    What happens when those fonts are present is that after a while the client, ST functionality and also the designer does not work properly any more:

    - Some dialogs don't show up any more
    - Some text is bold
    - ST client cannot accept chats
    - The client might hang

    The work-around is to delete those "NOTO" fonts and ensure the RecentFontList is cleared as well.

    Below is the original statement from support.

    -- Daniel

    -- snip --

    When I installed LibreOffice it added a number of NOTO MONO fonts to my Windows profile, which is available under the folder C:\Windows\Fonts.
    The Notes client notes.ini even picks up this font name automatically:

    RecentFontList=Palatino Linotype, Noto Mono, Default Sans Serif, Niagara Engraved, Nirmala UI Semilight

    Work around to fix the issue for now is to follow the below steps:

    1) Go to C drive Windows\Fonts directory or folder and delete all the NOTO related fonts from it.

    2) Remove the Noto Mono font from Notes.ini as well.

    We have even received a few other cases related to this, where not only Notes 10.0.1 is impacted but also the Notes 9.0.1 Notes & Designer clients have display issues.

    -- snip --

      Linux History & Domino on Docker at Engage Conference

      Daniel Nashed  12 May 2019 22:09:54

      Image:Linux History & Domino on Docker at Engage Conference

      Yes, I have a long history with Notes & Domino, but I have an even longer history with Linux, as I figured out from the book I still have.

      This wasn’t my first version... but I think it was the first SuSE version.

      The book was made with LaTeX, which I haven't used in a long time...

      At that time Domino had no Linux support, which was introduced later in Domino 5.0.3.

      My Domino start script was first developed for our Domino on HP-UX servers, later ported first to AIX and Solaris, and finally to Linux when Domino on Linux came out.

      HP-UX and Solaris are no longer supported, and AIX isn't used by many companies any more.
      But Linux became the OS that runs the internet. And it is a great platform for Domino.

      In the last weeks I did a lot of work in the Domino on Docker area, which is a lot of fun, too -->

      Next week at Engage conference in Brussels we will have a Domino on Docker session and again a Domino & Docker on Linux round table.

      I think this year it’s going to be really interesting. There is a lot of new stuff to discuss.

      If you are at Engage and you are interested in Domino on Linux or Docker, you should really come to our round-table session.

      Thomas Hampel (IBM) and I also have a Domino on Docker session on Wednesday afternoon.

      -- Daniel

        Docker Host to Container Secure File-Transfer - For example needed for AppDevPack Deployment

        Daniel Nashed  11 May 2019 17:14:04

        I already had a lot of fun working on the Domino on Docker project in the last couple of months.
        It was a great learning curve because a lot of the stuff was new for me. But this was just the beginning, and I am now hitting even more interesting topics.

        Right now I am working on an automated Domino AppDevPack deployment implementation.

        I am not sure if I will have it completely implemented before Engage, but I can already show it.

        The basic components, like the proton servertask and the DSAPI filter, are already installed. But this is the simpler part.

        Automatic configuration for the AppDevPack is tricky and I might not get all pieces automatically configured -- but hopefully it will be very close.

        Yesterday I looked into the sample files shipped with the AppDevPack 1.0.1 to create certificates from a simple CA.

        Certificate Management Script for Domino on Docker

        Because that wasn't sufficiently flexible for Docker, I wrote a management script for creating certificates yesterday.

        That script will help to create certificates with the simple, local CA and will also help with requesting and deploying certificates issued by your corporate CA.

        This might help you even outside Docker deployments. The script is just one of the pieces we have to glue together and isn't a final version yet.

        If you are interested: I have documented it inside the script, because you have to change variables to adapt it to your environment.

        But all of the variables are on top of the script.  For the final version of the script we might move the configuration into a separate file -- like I have implemented for the Linux start script and the Docker management script.


        Deploying other components

        I also wrote a small servertask yesterday which can create a person document and upload an x.509 certificate into it.
        This helper servertask will be part of the AppDevPack Docker implementation and can be used to write the person document with the client x.509 certificate needed for Proton authentication.

        Deploy certificates

        But the central part for a successful AppDevPack deployment is a common CA infrastructure and certificate deployment.

        The idea is that a central script on the Docker host can handle creation & deployment for all components of the AppDevPack.

        IMHO it's not the best solution to have the CA located on the Docker container running the Domino server with Proton.
        For Docker there should be central administration and auto deployment.

        Multiple components of the AppDevPack will need certificates and they should all come from the same CA.

        For example for authentication on Proton (user authentication via client cert or the IAM part doing the same) the keyring file needs to include the Root CA and intermediate certificates for the certificate issuer of the client certificate used.
        That means I will have a central place on the Docker host where those keys and certificates are created, managed and distributed.

        Challenge - Secure File Transfer

        The key challenge here is the "file transfer" between the different Docker containers.

        When searching for a solution, I did not find any out-of-the-box solution on the Docker side.

        So I had to implement my own "secure" mechanism to transfer files to the Docker containers.

        Here is a short write up that might end up later in a readme on the Domino on Docker project.


        Of course it would be much nicer if we could just leverage technologies like Let's Encrypt.

        But most of the servers are going to be located inside a corporate network.

        So at least the public Let's Encrypt solution isn't an option for many customers.

        There are projects out there to implement Let's Encrypt inside a company -->
        But knowing many customer environments, this would be quite a difficult discussion with the responsible infrastructure admins.

        By the way, while writing this document I wasn't sure the blog is the right location for these types of posts.

        I might end up adding those kinds of documents to a git-based wiki or readme for the components of the Docker project, and just write a short note on my blog.

        Let me know what you think ...

        -- Daniel

        Implementing a secure file transfer between the Docker host and a Docker container

        The direct communication between the Docker host and the Docker container is quite limited.

        During start of the container you can pass environment variables.

        Those variables can be accessed in the Docker container, and once they are set, you cannot remove them from the outer layer of your Docker container.

        That means every new process that is started via exec, or during start (for example the entry point), still has access to all variables.

        From security point of view this can be problematic if you want to transfer sensitive data like passwords.

        But besides that, there is no mechanism at all to transfer files directly - not even in an insecure way.

        So you have to build your own routines.

        File Transfer from outside the container

        If you want to transfer files, you have to store them outside the container and access them from inside the container, for example via curl or wget.

        Once I had the idea, I found multiple posts dealing with this. But most of the posts deal with just transferring the data, not with security.

        If you are transferring from outside your Docker host you have to take special care securing the channel.
        You don't always have HTTPS available or can leverage ssh file-transfer.

        In theory the local network on your Docker host between the Docker containers should be quite safe.

        But if you build a professional solution you might want to add extra security.

        Basic Container Security

        If you provide a local NGINX container which hosts your files, you are quite safe, because the network is quite restricted.

        Only your own containers have access to the Docker internal network.
        But there is still the chance that someone could intercept the files that you are transferring -- even if they only exist for a very short time and the transfer takes only a couple of milliseconds.

        Encrypted Transfer and Hidden Download Locations

        The first idea is a hidden download location. Besides having a temporary NGINX container on the local Docker host serving files from a local directory, the html root could contain a "secret" directory.

        You can pass the name with the exec statement when running a script inside the container.

        But still this is security by obscurity and also does not provide any channel encryption.

        Introducing Encryption

        You can encrypt the files you want to transfer using a password-protected zip, or even better, with more control, you can encrypt them with openssl, specifying the security algorithm of your choice.

        In the next step you can pass this password along with the exec command and the download URL.

        This provides additional security by encrypting the files. The password chosen in my case is a 32-byte hex string.
        One additional idea is to add to the password some information that only the Docker host and the Docker container have available.

        In my case I am using the Docker container ID, which the Docker host knows and which the container itself can also query.
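
        As a small standalone sketch of this idea (the function names are mine, not from the project): encrypt with a one-time password concatenated with the container ID, and decrypt with the same combined secret.

```shell
#!/bin/sh
# Illustrative sketch: combine a random one-time password with the
# container ID to form the encryption secret. Function names are
# assumptions, not part of the actual project scripts.

encrypt_for_container ()
{
  # $1 = input file, $2 = output file, $3 = one-time password, $4 = container ID
  openssl aes-256-cbc -e -k "$3$4" -salt -in "$1" -out "$2" -md sha256
}

decrypt_from_host ()
{
  # $1 = input file, $2 = output file, $3 = one-time password, $4 = container ID
  openssl aes-256-cbc -d -k "$3$4" -salt -in "$1" -out "$2" -md sha256
}
```

        Only a party that knows both the password passed via docker exec and the container ID can decrypt the transferred file.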

        This is still not 100% secure, but it already provides quite good protection. You would have to get access to multiple factors at the same time to be able to decrypt the container file that is transferred.

        The container file only exists for less than a second, and the one-time password is only passed over the container exec command.
        From a security point of view, the Docker admin has access to the local file system anyway.
        So we are mostly protecting against other machines in the same local network.

        To summarize here are the layers of security

        Restricted network

        The data will be transferred over the Docker internal network.

        We could probably further limit the network communication to only those two Docker guests.

        Hidden URL

        Create a hidden URL on the download server (NGINX)  which we pass with the Docker exec command.

        Encrypting the container data

        The container data is encrypted via openssl using AES-256.

        Split password information for decrypting the container file:

        One piece (the 32-byte hex string) is transferred along with the URL.
        The other piece is derived from the container ID of the Docker container we are transferring the data to.

        IMHO this should be sufficiently secure for most environments.

        -- Daniel

        Preview implementation with some additional comments

        -- Sending side on the Docker host --


        log ()
        {
          echo "$1" "$2" "$3" "$4"
          return 0
        }


        nginx_transfer_stop ()
        {
          docker stop "$TRANSFER_CONTAINER"
          docker container rm "$TRANSFER_CONTAINER"
          echo "Stopped & Removed Transfer Container"
        }



        nginx_transfer_start ()
        {
          # Create a nginx container hosting the transfer directory for local download

          # Stop and remove an existing container if needed
          STATUS="$(docker inspect --format "{{ .State.Status }}" $TRANSFER_CONTAINER 2>/dev/null)"

          if [ ! -z "$STATUS" ]; then
            nginx_transfer_stop
          fi

          echo "Starting Docker container [$TRANSFER_CONTAINER]"

          docker run --name $TRANSFER_CONTAINER --network="bridge" -v $TRANSFER_DIR:/usr/share/nginx/html:ro -d nginx 2>/dev/null

          TRANSFER_CONTAINER_IP="$(docker inspect --format "{{ .NetworkSettings.IPAddress }}" $TRANSFER_CONTAINER 2>/dev/null)"

          if [ -z "$TRANSFER_CONTAINER_IP" ]; then
            echo "Unable to locate transfer container IP"
            return 1
          fi

          echo "Hosting Transfer Container on [$TRANSFER_CONTAINER_IP]"
          return 0
        }



        TransferToContainer ()
        {
          # $1 = file to transfer, $2 = target directory, $3 = transfer log

          TRANSFER_FILE="$1"
          TRANSFER_LOG="$3"

          TRANSFER_RAND=`openssl rand -hex 32`

          if [ -z "$TRANSFER_LOG" ]; then
            TRANSFER_LOG="transfer.log"
          fi

          PW=`openssl rand -base64 32`
          CONTAINER_ID="$(docker inspect --format "{{ .Id }}" $DOCKER_CONTAINER 2>/dev/null)"

          # Create transfer dir including random sub dir
          mkdir -p "$TRANSFER_DIR/$TRANSFER_RAND"

          TRANSFER_OUTFILE="$TRANSFER_RAND/container.bin"

          openssl aes-256-cbc -e -k $PW$CONTAINER_ID -salt -in "$TRANSFER_FILE" -out "$TRANSFER_DIR/$TRANSFER_OUTFILE" -md sha256

          # (elided in this preview: start the nginx transfer container and invoke
          # the receiving script in the target container via docker exec, passing
          # the download URL and the one-time password)

          rm -rf "$TRANSFER_DIR"

          return 0
        }






        # -- Main logic on the sending side --

        tar -cf "$CERT_FILE" *.txt

        TransferToContainer "$CERT_FILE" / transfer.log

        rm -r "$CERT_FILE"

        echo "------------------------------"
        cat transfer.log
        echo "------------------------------"

        exit 0

        -- Receiving side inside the Docker container, invoked via docker exec --

        ReceiveFromDockerHost ()
        {
          if [ -z "$1" ]; then
            echo "Error - No Download URL specified"
            exit 1
          fi

          if [ -z "$2" ]; then
            echo "Error - No Download PW specified"
            exit 1
          fi

          # Determine our own container ID from the cgroup information
          CONTAINER_ID=`grep "memory:/" < /proc/self/cgroup | sed 's|.*/||'`

          wget "$1" 2>/dev/null

          openssl aes-256-cbc -d -k $2$CONTAINER_ID -salt -in container.bin -out container.tar -md sha256 2>/dev/null

          if [ -e container.tar ]; then
            echo "DOWNLOAD Successful"
          else
            echo "Error - Downloading file"
            exit 1
          fi
        }

        ReceiveFromDockerHost "$1" "$2"

        Domino Docker Update

        Daniel Nashed  11 May 2019 05:23:56
        We had a great full-day DNUG Docker workshop, and I learned something for next time:
        Not every Domino admin is very familiar with virtualization software on their notebook.
        Next time I will come up with a checklist and recommendations.

        Besides that we got great feedback, and I think it also makes sense to have dedicated workshops for ISVs and other business partners.

        We are planning to support more add-on products on the IBM/HCL side.
        Beside the Domino and Domino Community Image, we have a Traveler image.
        And we are currently working on the AppDevPack image.

        Those images can be customized with a management script that we also added to the project.
        It's a script that gives you a way to customize existing images (derive your own image) and also manage containers on your local Docker host.
        This helped the participants in the workshop to get things up and running.

        I also started to prepare scripts for automatic certificate installation, including importing the Proton/IAM certificate into a person document and also creating a keyring file.

        One key principle for Docker is that everything should be automated -- from installation to updating and monitoring. A while ago we added a basic health monitor, which can be extended by modifying the script that comes with the image.
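
        For Docker images in general, such a health monitor is typically wired up via a Dockerfile HEALTHCHECK. A minimal sketch -- the script path here is an assumption, not necessarily what our image uses:

```dockerfile
# Illustrative only -- the script path is an assumption
HEALTHCHECK --interval=60s --timeout=10s --retries=3 \
  CMD /healthcheck.sh || exit 1
```

        Docker then reports the container as healthy or unhealthy based on the exit code of the script.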

        The AppDevPack documentation is more like a cookbook. If you mess up some ingredients, the result might not be what you expect and you don't know why.
        Putting the security components together isn't that intuitive. And I think we will need documentation for the security part which goes thru all aspects in one document instead of splitting up all the components.

        Building routines to automate the configuration for Docker is quite helpful. And my current plan is to write up some documentation about how this all fits together -- which will be more than a cookbook.

        -- Daniel

        Docker Workshop preparation done

        Daniel Nashed  6 May 2019 23:42:43
        There is a Docker workshop from DNUG in Frankfurt this week on Thursday -->
        Thomas Hampel and myself are doing the hands-on part: installing CentOS, Docker CE and also Domino on Docker from scratch.

        Between working on the IBM Domino Docker script, my backup solution and customer projects, I set up a lab environment.
        I learned a lot of specific details about CentOS software mirrors, Docker installations, Git repositories, some fancy DNS and DHCP stuff etc.

        For this workshop I got a new Intel NUC, which is very small but quite beautiful hardware.
        I got the right new model working with an ESXi server and installed the current version 6.7.

        On that box I added a couple of VMs to host a complete local environment in a local LAN and WLAN which does not need any internet connection during the workshop.

        CentOS Server with NGINX for
        • CentOS software mirror for CentOS package install & updates
        • Download Server for CentOS ISO image and all other software

        Docker Host for three images (2 ip addresses)
        • Local Docker Registry acting as a proxy to host all required images for the workshop
        • Local Git Server to mirror the official IBM Domino Docker repository
        • Local documentation mirror for the Docker documentation

        Another CentOS Server to provide general services
        • Local DNS Server for the local domain and as a DNS forwarder
        • Dedicated DHCP Server, because the DHCP server provided by the WLAN router wasn't flexible enough

        This preparation will hopefully make the hands-on part of the workshop quite interesting.
        We will go thru all the details installing CentOS, Docker and Domino on Docker and will have time to look into administration, maintenance and other details.

        Having prepared this environment will allow similar workshops in the future, for example on customer sites or at other events ;-)

        For everyone who is interested, see the detailed technical information below.

        -- Daniel

        Technical details key components for the LAB environment

        Here are the technical details about what I used for the lab. There is a lot more, like the DNS server and the DHCP server.
        But most of it is CentOS out of the box. I played with newer Git clients compiled from the sources, but finally used all software in the versions shipped with CentOS 7.6.
        In addition I am using the current Docker CE edition. Its repository is also added to the local CentOS mirror.

        Intel NUC -- A small server with almost everything needed.

        The only missing part is a second LAN card. But there are USB LAN cards.

        ESXi 6.7 Server

        Works well on the small NUC, which runs an Intel® Core™ i3.

        USB Network Card

        ESXi does not support USB network cards out of the box, and it also does not support the WLAN interface included in the NUC.

        But there is a quite new project with a new driver that was released this year.

        CentOS 7.6 installed from Minimum ISO

        All VMs are running CentOS 7.6

        Software Server and Mirror

        Runs on NGINX with some special scripts to sync and build the CentOS repository files

        Docker Registry Proxy

        A Docker image that runs a Docker registry, which can be used in proxy mode and "caches" downloaded images
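
        For reference, the official "registry" image can be run as a pull-through cache with a small configuration like this (shown here as a sketch of the documented proxy mode):

```yaml
# config.yml snippet for the official "registry" image (proxy mode)
proxy:
  remoteurl: https://registry-1.docker.io
```

        The configuration file is then mounted into the container, for example: docker run -d -p 5000:5000 -v $(pwd)/config.yml:/etc/docker/registry/config.yml registry:2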

        Docker Documentation

        A Docker image which runs a documentation server with the full Docker documentation.
        The whole documentation you find on the Docker website is really inside this Docker container.

        Git Server

        A small Git server with a nice web front end. It also runs inside a Docker container!

        OpenWRT Project

        The router I am using for optional internet connectivity via a LAN to WLAN bridge doesn't have great software.

        But it can be updated to OpenWRT.

        This update allows any type of flexible configuration. I can connect the LAN port to the NUC and use the WLAN to bridge to a hotspot if really needed.

        I had a lot of fun over the last weekends building this and looking into technical details.


        WOW - IBM Domino Mobile Apps released

        Daniel Nashed  24 April 2019 01:41:00
        IBM Domino Mobile Apps (IDMA) has been released today!!
        It has been available in public beta for quite a while. Now it is finally officially available in the App Store!

        That it is released into the normal App Store is a nice surprise, because the first plans were to release it thru the B2B App Store only.

        But this is making deployments so much easier!

        Image:WOW - IBM Domino Mobile Apps released

        The app is downloadable for free but you need a license to run it in production connected to your Domino server.

        From a license point of view it's the same as using a Notes client. And you have to be on an active subscription to use it with your Client Access License (CAL).

        To get started, you just install and configure it very similarly to a Notes desktop client. That also means the IDMA client requires a Notes ID.
        The best way to supply it is to have it downloaded via ID Vault during configuration, like on the Notes client.

        You will need a direct NRPC (port 1352) connection to your Domino server (either directly or thru VPN).

        We have been using IDMA in beta for quite a while. The first beta versions were already impressive, and it evolved quickly.
        The IDMA client is the first offering in this area. IBM and HCL are planning an Android version, and there are also plans for an iPhone version.

        Below are the links to the app store (iPad only and you need at least iOS 11.4).

        Take a look on your own. This is kind of a late Easter present :-)


        Here is the link to the official documentation:

        -- Daniel

        US App Store

        German App Store

        Docker Timezone Challenges

        Daniel Nashed  23 April 2019 21:02:12
        When working on the Traveler for Docker support I ran into an issue.
        The Traveler server complained about a timezone difference between the Domino server and the JVM used by Traveler.

        I assumed that the timezone recommendations in the Docker technote would be sufficient.

        But it turned out that they are not working correctly -- at least in the current Docker and CentOS releases.

        The idea was to map the Docker host /etc/localtime to the Docker container.

        -v /etc/localtime:/etc/localtime

        But it turned out that the mount does not work and the Docker container was still pointing to /usr/share/zoneinfo/UTC.

        The solution is to pass the wanted timezone to the Docker image during build as a build argument.

        The Domino on Docker build now reads the host setting by default and passes it to the build process.
        And there is also a build script option to specify it manually.

        There will also be an option to pass the timezone in the run statement and have it changed by the entry point script.
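
        A minimal sketch of the build-argument approach (variable and path names are assumptions):

```dockerfile
# In the dockerfile: accept the timezone as a build argument
ARG DOCKER_TZ=Etc/UTC
RUN ln -sf /usr/share/zoneinfo/$DOCKER_TZ /etc/localtime
```

        The build script can then pass the host timezone, for example: docker build --build-arg DOCKER_TZ=Europe/Berlin .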

        -- Daniel

        Reading the timezone could be implemented like this

        DOCKER_TZ=$(readlink /etc/localtime | awk -F'/usr/share/zoneinfo/' '{print $2}')
        echo "[$DOCKER_TZ]"

        And here is the logic that can be used to check and update the timezone.
        This might also be useful for other Docker projects.

        set_timezone ()
        {
          if [ -z "$DOCKER_TZ" ]; then
            return 0
          fi

          # Full path of the requested timezone file
          SET_TZ="/usr/share/zoneinfo/$DOCKER_TZ"
          CURRENT_TZ=$(readlink /etc/localtime)

          if [ "$CURRENT_TZ" = "$SET_TZ" ]; then
            echo "Timezone [$DOCKER_TZ] already set"
            return 0
          fi

          if [ ! -e "$SET_TZ" ]; then
            echo "Cannot read timezone [$SET_TZ] -- Timezone not changed"
            return 1
          fi

          echo "Timezone set to [$DOCKER_TZ]"
          ln -sf "$SET_TZ" /etc/localtime

          return 0
        }


        Weekend Project "Domino on Docker Update"

        Daniel Nashed  13 April 2019 19:09:53
        There are a couple of updates pending for the official IBM Docker project -->
        • Support for 10.0.1 FP1
        • Support for Domino Community Edition
        • Preparation for supporting add-on products like "Traveler", "VOP", "AppDevPack" and later maybe "Sametime"
        • Making software downloads more reliable and provide better error checking
        • Install Log checking for installed software
        • Official implementation of the Domino on Docker management script

        The main challenge is proper version support without having multiple dockerfiles.
        So I am working on a way to keep the version tags and the installation files outside the dockerfile.

        Labels and variables can be passed to the docker build command to overwrite default settings.

        Every FP will be its own full image installed on top of centos:latest.
        Add-on products or customized versions are always installed as their own layer.

        We discussed this week having multiple layers like 10.0.1 --> 10.0.1 FP1 --> 10.0.1 FP1 IF1, but that would make image management more difficult for admins.

        The build script could take care of building a "10.0.1" image before building a 10.0.1 FP1 image that is based on the other image.

        But there isn't much benefit, besides a bit of space reduction.

        The add-on products and customization will have a separate layer and will use the current Domino image. For example Domino --> Traveler.

        Does this make sense to you? At first I also thought that having the different versions build on each other would make sense. But we don't see the benefits. What do you think?

        There will be a new file "software.txt" containing version numbers, download file names and hashes.

        Filenames of the downloads are the biggest challenge. The Community Edition, for example, has completely different filenames...

        This map and download file will allow specifying versions without adding the download filename in the dockerfile or in the build file.
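
        As a sketch of how such a map file could be queried (the actual file format in the project may differ; the pipe-separated layout and function name here are assumptions):

```shell
#!/bin/sh
# Hypothetical software.txt layout: product|version|filename|sha256
# A lookup like this keeps the download filename out of the dockerfile.

get_download_file ()
{
  # $1 = software file, $2 = product, $3 = version
  grep "^$2|$3|" "$1" | cut -d'|' -f3
}
```

        The build script can then resolve product and version to a download filename at build time instead of hard-coding it.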

        The Community Edition will be installed as a different product "DOMINO_CE" instead of "DOMINO", because the FPs also have different names and hashes.
        And of course, the directories inside the extracted software tar have a different structure (e.g. linux64/DominoEval/..).
        The guys building it have no idea how we are using it.

        But I think I figured out a good way to organize versioning :-)

        The new version will also be prepared for upcoming Interim Fixes and Hotfixes.

        And I will probably also add JVM patches in the next step.

        There is also a management and customization script to configure, build, run, manage and update Domino Docker containers.

        -- Daniel


        • [IBM Lotus Domino]
        • [Domino on Linux]
        • [Nash!Com]
        • [Daniel Nashed]