Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed


    Docker Host to Container Secure File-Transfer - For example needed for AppDevPack Deployment

    Daniel Nashed  11 May 2019 17:14:04

I already had a lot of fun working on the Domino on Docker project over the last couple of months.
It was a great learning curve because a lot of the stuff was new to me. But this was just the beginning, and I am now hitting even more interesting topics.

    Right now I am working on an automated Domino AppDevPack deployment implementation.

I am not sure if I will have it completely implemented before Engage, but I can already show it.

The basic components, like the Proton servertask and the DSAPI filter, are already installed automatically. But that is the simpler part.

Automatic configuration for the AppDevPack is tricky, and I might not get all pieces configured automatically -- but hopefully it will be very close.

    Yesterday I looked into the sample files shipped with the AppDevPack 1.0.1 to create certificates from a simple CA.

    Certificate Management Script for Domino on Docker

Because that wasn't sufficiently flexible for Docker, I wrote a management script for creating certificates yesterday.

That script helps to create certificates with the simple, local CA and also helps with requesting and deploying certificates issued by your corporate CA.

    This might help you even outside Docker deployments. The script is just one of the pieces we have to glue together and isn't a final version yet.

If you are interested: I have documented it inside the script, because you have to change variables to adapt it to your environment.

But all of the variables are at the top of the script. For the final version of the script we might move the configuration into a separate file -- like I have implemented for the Linux start script and the Docker management script.
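A minimal sketch of what sourcing such a separate configuration file could look like -- the file name and variable names here are examples, not the final ones:

```shell
#!/bin/sh
# Sketch only: configuration moved into a separate file that the script sources.
# File name and variable names are examples, not the final ones.

CONFIG_FILE=./cert_mgmt.cfg

# In the real setup the config file would already exist; created here for the demo
cat > "$CONFIG_FILE" <<'EOF'
TRANSFER_CONTAINER=nashcom-transfer
TRANSFER_DIR=/tmp/transfer
EOF

# Source the configuration if present, keeping defaults otherwise
if [ -e "$CONFIG_FILE" ]; then
  . "$CONFIG_FILE"
fi

echo "Using transfer container [$TRANSFER_CONTAINER] with directory [$TRANSFER_DIR]"
```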


    Deploying other components

I also wrote a small servertask yesterday which can create a person document and upload an X.509 certificate into it.
This helper servertask will be part of the AppDevPack Docker implementation and can be used to write the person document with the client X.509 certificate needed for Proton authentication.

    Deploy certificates

    But the central part for a successful AppDevPack deployment is a common CA infrastructure and certificate deployment.

    The idea is that a central script on the Docker host can handle creation & deployment for all components of the AppDevPack.

    IMHO it's not the best solution to have the CA located on the Docker container running the Domino server with Proton.
    For Docker there should be central administration and auto deployment.

    Multiple components of the AppDevPack will need certificates and they should all come from the same CA.

For example, for authentication on Proton (user authentication via client certificate, or the IAM part doing the same), the keyring file needs to include the root CA and intermediate certificates of the issuer of the client certificate used.
That means I will have a central place on the Docker host where those keys and certificates are created, managed and distributed.

    Challenge - Secure File Transfer

    The key challenge here is the "file transfer" between the different Docker containers.

When searching for a solution I did not find anything out of the box on the Docker side.

So I had to implement my own "secure" mechanism to transfer files to the Docker containers.

Here is a short write-up that might later end up in a readme of the Domino on Docker project.


    Of course it would be much nicer if we could just leverage technologies like Let's Encrypt.

    But most of the servers are going to be located inside a corporate network.

    So at least the public Let's Encrypt solution isn't an option for many customers.

There are projects out there to implement Let's Encrypt inside a company.
But knowing many customer environments, this would be quite a difficult discussion with the responsible infrastructure admins.

By the way, while writing this document I wasn't sure the blog is the right location for this type of post.

I might end up adding those kinds of documents to a Git-based wiki or readme for the components of the Docker project, and just writing a short note on my blog.

    Let me know what you think ...

    -- Daniel

Implementing a secure file transfer between Docker host and Docker container

The direct communication between the Docker host and a Docker container is quite limited.

    During start of the container you can pass environment variables.

Those variables can be accessed inside the Docker container, and once they are set, you cannot remove them from the outer layer of your Docker container.

That means every new process that is started via exec, or during container start (for example the entry point), still has access to all variables.

From a security point of view this can be problematic if you want to transfer sensitive data like passwords.

But besides that, there is no mechanism at all to transfer files directly - not even in an insecure way.

    So you have to build your own routines.

    File Transfer from outside the container

If you want to transfer files, you have to store them outside the container and access them from inside the container, for example via curl or wget.

Once I had the idea, I found multiple posts dealing with this approach. But most of the posts deal only with transferring the data, not with security.

If you are transferring from outside your Docker host, you have to take special care to secure the channel.
You don't always have HTTPS available or the option to leverage SSH file transfer.

In theory, the Docker-internal network between the containers on your Docker host should be quite safe.

    But if you build a professional solution you might want to add extra security.

    Basic Container Security

If you provide a local NGINX container which hosts your files, you are quite safe because the network is quite restricted.

    Only your own containers have access to the Docker internal network.
But there is still the chance that someone could intercept the files you are transferring -- even if they only exist for a very short time and the transfer takes only a couple of milliseconds.

    Encrypted Transfer and Hidden Download Locations

The first idea is a hidden download location. Besides running a temporary NGINX container on the Docker host that serves files from a local directory, the HTML root could contain a "secret" directory.

    You can pass the name with the exec statement when running a script inside the container.

But this is still security by obscurity and also does not provide any channel encryption.
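The hidden-directory idea can be sketched in a few lines -- the base path below is an example, and `openssl rand` provides the hard-to-guess name:

```shell
#!/bin/sh
# Sketch: create a random, hard-to-guess sub directory below the NGINX html root.
# HTML_ROOT is an example path, not the one used by the real script.

HTML_ROOT=/tmp/nginx_html

# 16 random bytes -> 32 hex characters as the "secret" directory name
SECRET_DIR=$(openssl rand -hex 16)

mkdir -p "$HTML_ROOT/$SECRET_DIR"

# This path would be passed to the container via docker exec
echo "Download URL path: /$SECRET_DIR/container.bin"
```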

    Introducing Encryption

You can encrypt the files you want to transfer using a password-protected zip, or better, with more control, encrypt them with OpenSSL, specifying the cipher of your choice.

    In the next step you can pass this password along with the exec command and the download URL.

This provides additional security by encrypting the files. The password chosen in my case is a 32-byte hex string.
One additional idea is to extend the password with information that only the Docker host and the Docker container have available.

In my case I am using the Docker container ID, which the Docker host knows and which the container itself can also query.
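Inside the container, the ID can be derived from the cgroup information. The pipeline below shows the idea on a sample cgroup line; the real script reads /proc/self/cgroup instead:

```shell
#!/bin/sh
# Sketch: extract the container ID from a cgroup v1 style "memory" line.
# A real container would read /proc/self/cgroup; this sample line stands in for it.

CGROUP_LINE="9:memory:/docker/4f1b2c8d9e0a1b2c3d4e5f60718293a4b5c6d7e8f901234567890abcdef12345"

# Everything after the last "/" is the 64-character container ID
CONTAINER_ID=$(echo "$CGROUP_LINE" | grep "memory:/" | sed 's|.*/||')

echo "$CONTAINER_ID"
```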

This is still not 100% security, but it already provides quite good protection. You would have to get access to multiple factors at the same time to be able to decrypt the container file that is transferred.

The container file only exists for less than a second, and the one-time password is only passed via the container exec command.
From a security point of view, the Docker admin has access to the local file system anyway.
So we are mainly protecting against other machines in the same local network.
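The combined-key idea can be sketched as a round trip. In the real scripts PW comes from `openssl rand -base64 32` and the container ID from `docker inspect` resp. /proc/self/cgroup; both are fixed here for illustration only:

```shell
#!/bin/sh
# Sketch of the encrypt/decrypt round trip with the split key material.
# PW and CONTAINER_ID are placeholder values for the generated/queried ones.

PW="0123456789abcdef0123456789abcdef"
CONTAINER_ID="4f1b2c8d9e0a1b2c"

echo "secret payload" > payload.txt

# Host side: encrypt with password + container ID as combined key material
openssl aes-256-cbc -e -k "$PW$CONTAINER_ID" -salt -in payload.txt -out container.bin -md sha256

# Container side: only a process knowing both pieces can decrypt
openssl aes-256-cbc -d -k "$PW$CONTAINER_ID" -salt -in container.bin -out payload.dec -md sha256

cat payload.dec
```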

To summarize, here are the layers of security:

    Restricted network

    The data will be transferred over the Docker internal network.

Probably we could further limit the network communication to only those two Docker guests.

    Hidden URL

    Create a hidden URL on the download server (NGINX)  which we pass with the Docker exec command.

    Encrypting the container data

The container data is encrypted via OpenSSL using AES-256.

Split the password information for decrypting the container file:

One piece (the 32-byte hex string) is transferred along with the URL.
The other piece is derived from the container ID of the Docker container we are transferring the data to.

IMHO this should be sufficiently secure for most environments.

    -- Daniel

    Preview implementation with some additional comments

    -- Sending side on the Docker host --


log ()
{
  echo "$1" "$2" "$3" "$4"
  return 0
}


nginx_transfer_stop ()
{
  docker stop "$TRANSFER_CONTAINER"
  docker container rm "$TRANSFER_CONTAINER"

  echo "Stopped & Removed Transfer Container"
  return 0
}



nginx_transfer_start ()
{
  # Create a NGINX container hosting the transfer directory for local download

  # Stop and remove an existing container if needed
  STATUS="$(docker inspect --format "{{ .State.Status }}" $TRANSFER_CONTAINER 2>/dev/null)"

  if [ ! -z "$STATUS" ]; then
    nginx_transfer_stop
  fi

  echo "Starting Docker container [$TRANSFER_CONTAINER]"

  docker run --name $TRANSFER_CONTAINER --network="bridge" -v $TRANSFER_DIR:/usr/share/nginx/html:ro -d nginx 2>/dev/null

  TRANSFER_CONTAINER_IP="$(docker inspect --format "{{ .NetworkSettings.IPAddress }}" $TRANSFER_CONTAINER 2>/dev/null)"

  if [ -z "$TRANSFER_CONTAINER_IP" ]; then
    echo "Unable to locate transfer container IP"
    return 1
  fi

  echo "Hosting Transfer Container on [$TRANSFER_CONTAINER_IP]"
  return 0
}



TransferToContainer ()
{
  # Usage: TransferToContainer <file> <target-dir> [logfile]
  # Note: parts of the original body were lost in this preview; the missing
  # lines (parameter handling, exec invocation) are reconstructed here and
  # the receiving script name is an example.

  TRANSFER_FILE="$1"
  TRANSFER_TARGET_DIR="$2"
  TRANSFER_LOG="$3"

  TRANSFER_RAND=`openssl rand -hex 32`

  if [ -z "$TRANSFER_LOG" ]; then
    TRANSFER_LOG="transfer.log"
  fi

  PW=`openssl rand -base64 32`
  CONTAINER_ID="$(docker inspect --format "{{ .Id }}" $DOCKER_CONTAINER 2>/dev/null)"

  # Create transfer dir including random sub dir
  TRANSFER_OUTFILE="$TRANSFER_RAND/container.bin"
  mkdir -p "$TRANSFER_DIR/$TRANSFER_RAND"

  openssl aes-256-cbc -e -k $PW$CONTAINER_ID -salt -in "$TRANSFER_FILE" -out $TRANSFER_DIR/$TRANSFER_OUTFILE -md sha256

  # Start the transfer container, let the receiving script download & decrypt,
  # then stop the transfer container again
  nginx_transfer_start
  docker exec $DOCKER_CONTAINER /receive_file.sh "http://$TRANSFER_CONTAINER_IP/$TRANSFER_OUTFILE" "$PW" > "$TRANSFER_LOG"
  nginx_transfer_stop

  rm -rf "$TRANSFER_DIR"
  return 0
}






# Main logic: pack the certificates and transfer them
# (CERT_FILE assignment reconstructed; the archive name is an example)

CERT_FILE=certs.tar

tar -cf "$CERT_FILE" *.txt

TransferToContainer $CERT_FILE / transfer.log

rm -r "$CERT_FILE"

echo "------------------------------"
cat transfer.log
echo "------------------------------"

exit 0

    -- Receiving side inside the Docker container invoked via docker exec --




ReceiveFromDockerHost ()
{
  if [ -z "$1" ]; then
    echo "Error - No Download URL specified"
    exit 1
  fi

  if [ -z "$2" ]; then
    echo "Error - No Download PW specified"
    exit 1
  fi

  # Query our own container ID from the cgroup information
  CONTAINER_ID=`grep "memory:/" < /proc/self/cgroup | sed 's|.*/||'`

  wget "$1" 2>/dev/null

  openssl aes-256-cbc -d -k $2$CONTAINER_ID -salt -in container.bin -out container.tar -md sha256 2>/dev/null

  if [ -e container.tar ]; then
    echo "DOWNLOAD Successful"
  else
    echo "Error - Downloading file"
    exit 1
  fi

  return 0
}

ReceiveFromDockerHost "$1" "$2"


1. Lars Berntrop-Bos  11.05.2019 23:48:01  Docker Host to Container Secure File-Transfer - For example needed for AppDevPack Deployment

At the risk of being a noob: couldn't you generate a storage container and then attach it to the instance that needs the data? If you encrypt the data with the SSH key already in the instance, it should be safe...

2. Daniel Nashed  12.05.2019 12:26:00  Docker Host to Container Secure File-Transfer - For example needed for AppDevPack Deployment

    @Lars, of course you could use the already available ssh key.

But then you first need to get the public key of that user from the Docker server to your machine.

    That's something that could be done. But also needs automation.

    Attaching something to a Docker container would be a volume.

But then we would have to get rid of the volume afterwards, because we only need it once.

Removing a volume from a container would mean creating another image. Only the run statement allows assigning volumes.

Or, depending on the configuration, the volume might be local. Then you could just copy the file to that volume.

    But you don't have always access to that volume. It really depends on the configuration.

In any case this is just one deployment method. It will not work in a managed Docker environment, and we would need other ways to distribute security-relevant files.

When you use the SSH public key, you should first create a symmetric "session" key and then encrypt that key with the public key of the user.

    This would be comparable to what we use for email.
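A hedged sketch of that hybrid approach -- the key pair is generated here for the demo, and all file names are examples:

```shell
#!/bin/sh
# Sketch: encrypt a random session key with an RSA public key (hybrid scheme),
# comparable to mail encryption. All file names are examples.

# Key pair -- in the suggested scenario the key pair would already exist
openssl genrsa -out key.pem 2048 2>/dev/null
openssl rsa -in key.pem -pubout -out pub.pem 2>/dev/null

# Sender: random session key, encrypted with the recipient's public key
openssl rand -hex 32 > session.key
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in session.key -out session.key.enc

# Sender: encrypt the payload with the session key
echo "payload" > data.txt
openssl aes-256-cbc -e -k "$(cat session.key)" -salt -in data.txt -out data.enc -md sha256

# Receiver: recover the session key with the private key, then decrypt the payload
openssl pkeyutl -decrypt -inkey key.pem -in session.key.enc -out session.key.dec
openssl aes-256-cbc -d -k "$(cat session.key.dec)" -salt -in data.enc -out data.dec -md sha256

cat data.dec
```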

Depending on the environment, this would not be the first server in the Domino domain, and we would have a way to use databases to distribute this kind of information.

But we are trying to build this in a way that is flexible.

    The described method would also work at build time of the image.

For example, if you specify passwords, they currently remain in the environment of the root user.

What I wrote is a script that can be used for different scenarios, and it doesn't have to be Domino on Docker specific.

    Does that make sense for you?

    -- Daniel



      • [IBM Lotus Domino]
      • [Domino on Linux]
      • [Nash!Com]
      • [Daniel Nashed]