Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...

 

Why run Domino in a container today

Daniel Nashed  17 July 2022 08:57:40

As many of you know, I am a big fan of running Domino and other applications in containers.

This can be a classical Docker/Podman deployment or K8s.


Containers might not be right for everyone. But a lot of software is available as a "Docker image", which can run in multiple environments.

Domino's main deployment model will not change to Docker.

Some HCL applications like Sametime, however, have moved to a container deployment model.


Domino is one big monolithic application, running in one container.

But there can still be benefits to running Domino containerized.


Containerization does not mean it has to be in the cloud or that you can't use native resources like standard disk volumes or direct host network access.

It's more about running the Domino binaries from a standardized, automatically deployable image.
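For illustration, here is a minimal sketch of what that can look like with plain Docker. The image tag and the data path are examples based on the community image defaults, not fixed names:

    # Run a Domino container with a standard disk volume and direct host networking
    docker run -d --name domino \
      --network host \
      -v /local/notesdata:/local/notesdata \
      hclcom/domino:latest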



Most Domino container deployments are on Docker, not Kubernetes (K8s)


So far I see more demand for Docker than for real-world K8s deployments.

Running Domino on K8s usually makes sense if you have a K8s strategy in-house.


Domino isn't the platform where you would start if you are looking into K8s first, unless you are a large provider running many containers at scale for many customers.

The principles of containers are very much the same for Docker and K8s.

And the container images themselves are usually the same.



The benefit of running in a container


A container is built from a blueprint and the installation is always the same.

So you always get the same installation, already tested in a QA environment or on a test server.

Also, many others are running the same installation from a consistent image.


Updating


The Domino community container image comes with updated templates and binaries and logic inside the image takes care of deploying and updating templates.

So an update boils down to two steps (see the sketch after this list):
  • Shut down and remove the old container
  • Create a new container based on the new image, reusing the data you already had
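A hedged example of this flow with plain Docker commands. The container name, volume name and image tag are assumptions for illustration:

    # stop and remove the old container (the data volume stays untouched)
    docker stop domino
    docker rm domino

    # recreate the container from the new image, reusing the existing data volume
    docker run -d --name domino \
      -v domino_data:/local/notesdata \
      hclcom/domino:latest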

Bringing up new servers


Configuring a new server does not require any new software installation.

Instead, you just need some configuration steps:
  • Define your container run-time environment (image to use, volumes to mount, network to use ..)
  • Define environment variables to configure your container
  • For Domino, you also need a OneTouch Setup JSON file to configure your first or additional servers in your Domain

This is a pretty easy and consistent way to bring up new servers.

Updates also become much easier. For setting up Domino, I added a "setup" option that generates a standard JSON-based Domino OneTouch Setup for first and additional servers.

This makes it easy to get your server up and running in minutes, including a standardized, best-practices configuration.
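To make this concrete, here is a sketch of a first-server setup using the Domino OneTouch environment variables. The container name, ports and file locations are examples, not a definitive recipe:

    # Start a new first server, configured by a OneTouch setup JSON file
    docker run -d --name domino \
      -v domino_data:/local/notesdata \
      -v $(pwd)/first_server.json:/local/notesdata/auto_config.json \
      -e SetupAutoConfigure=1 \
      -e SetupAutoConfigureParams=/local/notesdata/auto_config.json \
      -p 1352:1352 -p 443:443 \
      hclcom/domino:latest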



Additional software for a container


Additional software that is not part of your data directory has to be added to the image.

If it is an in-house application, or you just have to copy some files and make them executable, this is a very simple add-on image based on a standard Domino container image (see the sketch below).
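Such an add-on image can be just a few lines. This is a hypothetical sketch; the base image tag and file names are placeholders:

    # Dockerfile for a simple add-on image on top of a standard Domino container image
    FROM hclcom/domino:latest

    # copy an in-house tool into the image and make it executable
    COPY mytool /opt/mytool/mytool
    RUN chmod 755 /opt/mytool/mytool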

But commercial add-on applications, which also need configuration and have their own update mechanisms, can be more complicated.


Add-on software like virus scanners and backup agents for Domino are probably the best examples.



Domino 12 features helping in the container world


On a local single Docker instance, deploying native tools like backup could still make sense.

But moving to K8s is usually quite challenging.

One reason Domino 12 offers native backup integration for different kinds of backup back-ends, and native anti-virus support in Domino 12.0.2, is to support containerized environments and to make deployments easier and more standardized.

For the new content-scan anti-virus integration, for example, you just need a configuration pointing your Domino server to an ICAP server, which could even run in another container or on an appliance.



Applications composed of multiple containers


I created a new SafeLinx Nomad Web container image recently.

The image contains SafeLinx and also Nomad Web and optionally also drivers for MySQL.

In smaller environments, you don't need a relational state database. But in larger deployments you need an external database.


The beauty of containerization is that databases like Microsoft SQL Server or MySQL are also available as a container.

So you only have to add a MySQL container and configure it to be used by your SafeLinx server.

There is no installation needed. The image is just "pulled" from Docker Hub.


Bringing up two applications together becomes just a configuration step of gluing the components together.

So in my example, I created a docker-compose setup and added a container for MySQL.
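This is roughly what such a docker-compose.yml can look like. The SafeLinx image tag, credentials and database name are placeholders, not the exact values from my setup:

    version: "3"
    services:
      safelinx:
        image: nashcom/safelinx:latest   # placeholder tag for the SafeLinx/Nomad Web image
        ports:
          - "443:443"
        depends_on:
          - mysql
      mysql:
        image: mysql:8
        environment:
          MYSQL_ROOT_PASSWORD: change-me
          MYSQL_DATABASE: wgdata         # placeholder database name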



QA environments & Test automation


Bringing up a consistent, auto-deployed environment of multiple containers can also be very interesting for QA and test automation.

Let's say you are a software vendor and want to run a certain test against your nightly builds; a Docker environment can be very helpful here.

You could use a docker-compose.yml to define your environment and, after bringing up the servers in a defined way, run commands inside the containers for automated testing.
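A minimal sketch of such a test run. The container name and the test script path are hypothetical:

    # bring up the defined environment, run a test inside a container, tear down
    docker compose up -d
    docker exec domino /opt/tests/run_nightly.sh
    docker compose down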


There are many ways to leverage this type of environment. I have built automation scripts for different use cases to test complex applications, including injecting commands at different stages of the test flow.

The key component here is a well-defined, always consistent test environment, which can be recreated at any time with updated application or Linux base-level software.



Container environments are usually Linux based


There are also containers on Windows and you can run Docker images on Docker Desktop on Windows.

In the end, it is always Linux-based in the back-end in one way or another.

Docker Desktop is leveraging WSL2 containers today.

Depending on how you run your Docker environment, there is not much you see of WSL2 directly.

But in general, in a production environment, the full infrastructure is a Linux stack.


So you need some basic Linux knowledge to get containers up and running. And because the base OS for containers is Linux, you should build up some basic Linux skills.

On the other hand, my Domino container script (dominoctl) and my Domino start script running inside the container make it very easy to support and maintain Docker/Podman-based Domino container environments.
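A few typical commands, assuming the current dominoctl syntax (check the project documentation for the exact sub-commands):

    dominoctl start      # start the Domino container
    dominoctl console    # attach to the live server console
    dominoctl stop       # shut down the container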


I don't see Linux as a huge challenge. And any IT professional should build up some basic Linux skills today.

There are so many benefits to running Linux-based servers. And Docker or Podman makes running applications on Linux easier.



Testing with different Domino and base operating system versions


The Domino community container image allows you to build the container image on various Linux base images, including Red Hat, CentOS-based distributions, SUSE, Debian, Ubuntu and a couple more.

The images can be used to quickly switch between different base operating systems and Domino versions.

This helped me to narrow down issues between different versions many times.
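For example, builds are driven by the project's build script. This is a hedged sketch; the exact options, including the way to select a base image, may differ in the current version of the project:

    # build the image for a given Domino version
    ./build.sh domino 14.0

    # select a different Linux base image (option syntax is an assumption)
    ./build.sh domino 14.0 -from=ubuntu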



My personal conclusion


I have many Domino servers running on Docker and Podman for various reasons. And I use them not only for production but also for development purposes.
This helps me dramatically in my work for different customers and environments. And I am enabling customers and partners to leverage containers for their requirements.


Containers are not solving all your problems automatically.

But in today's IT infrastructure containers play an important role.

And even though Domino isn't a microservice-based architecture, you can benefit from many container functionalities.
