Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed

 NVIDIA 

Monitoring NVIDIA GPUs

Daniel Nashed – 1 December 2024 09:23:02

NVIDIA tools

NVIDIA has great tools and toolkits. My focus right now is mostly Linux, but some of the tools are also available for Windows.
Some useful tools already come with Ubuntu. For example, nvtop is a great and simple-to-use ad-hoc monitoring tool.


NVIDIA card under load

The NVIDIA RTX 4000 SFF Ada Generation is a small server grade card with 20 GB of RAM.

You can see the fan is already around 50% and the temperature on the card goes up. Power consumption is at 65 W of the 70 W maximum.
My load test in this example is a simple multi-threaded server task performing LLaMA requests.

You can see all the relevant parameters monitored here, including the processes using the GPU. In my case that's the llama-server, which you can see at the very bottom of the screen print.


Image:Monitoring NVIDIA GPUs


nvtop is a great tool, but not really helpful for long-term monitoring or for seeing those stats in combination with other stats.
There are some Grafana integrations, which either use the NVIDIA toolkit to read the stats at a low level or rely on command-line tools.


Some projects use nvidia-smi to query information about the card. It also allows printing stats in the style of other Linux tools (vmstat, iostat).
Besides the stats it can also print the timestamp and the card model.

Here is a sample command line and output:

nvidia-smi -l 1 --format=csv --query-gpu=timestamp,name,pci.bus_id,driver_version,pstate,pcie.link.gen.max,pcie.link.gen.current,temperature.gpu,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used
2024/12/01 09:59:36.259, NVIDIA RTX 4000 SFF Ada Generation, 00000000:01:00.0, 565.57.01, P0, 4, 4, 57, 84 %, 67 %, 20475 MiB, 6514 MiB, 13532 MiB
2024/12/01 09:59:37.264, NVIDIA RTX 4000 SFF Ada Generation, 00000000:01:00.0, 565.57.01, P0, 4, 4, 57, 68 %, 39 %, 20475 MiB, 6514 MiB, 13532 MiB
2024/12/01 09:59:38.265, NVIDIA RTX 4000 SFF Ada Generation, 00000000:01:00.0, 565.57.01, P0, 4, 4, 57, 66 %, 39 %, 20475 MiB, 6514 MiB, 13532 MiB
2024/12/01 09:59:39.266, NVIDIA RTX 4000 SFF Ada Generation, 00000000:01:00.0, 565.57.01, P0, 4, 4, 58, 65 %, 41 %, 20475 MiB, 6514 MiB, 13532 MiB
2024/12/01 09:59:40.267, NVIDIA RTX 4000 SFF Ada Generation, 00000000:01:00.0, 565.57.01, P0, 4, 4, 58, 68 %, 40 %, 20475 MiB, 6514 MiB, 13532 MiB
2024/12/01 09:59:41.268, NVIDIA RTX 4000 SFF Ada Generation, 00000000:01:00.0, 565.57.01, P0, 4, 4, 58, 66 %, 43 %, 20475 MiB, 6514 MiB, 13532 MiB

The example above shows the values which can be printed. Not all of those parameters make sense to combine.
For example, the card model might only need to be queried once.

There are also options to leave out the headers and units: --format=csv,noheader,nounits. Those help to integrate the results into your own applications.
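
As a small sketch of such an integration (the field selection, log file path and interval are just assumptions for illustration), the noheader/nounits output can be parsed directly in a shell loop:

#!/bin/bash
# Append a timestamped GPU stats line to a log file every 10 seconds.
# noheader/nounits makes the CSV output easy to parse; the sed removes the space after each comma.
LOG=/tmp/gpu-stats.log
while true
do
  nvidia-smi --format=csv,noheader,nounits --query-gpu=timestamp,utilization.gpu,utilization.memory,memory.used | \
    sed 's/, /,/g' | \
    while IFS=',' read -r TIMESTAMP GPU_UTIL MEM_UTIL MEM_USED
    do
      echo "$TIMESTAMP gpu=$GPU_UTIL% mem=$MEM_UTIL% used=${MEM_USED}MiB" >> "$LOG"
    done
  sleep 10
done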


Next Steps

Those tools open the door for all kinds of statistics integrations.
I could, for example, add those stats to Domino server stats and leverage existing integrations or the out-of-the-box Domino statistics collection.

For now the simple graphic and the command line are already very helpful to monitor the card.

The nvidia-smi command line is also available on Windows. nvtop is Linux only, but NVIDIA also has Windows-based applications to show the status of your cards.



 llama  NVIDIA 

Black Friday - Getting some new AI ready hardware

Daniel Nashed – 30 November 2024 00:24:19

AI functionality requires specific hardware resources. It's pretty clear that you can't avoid looking into NVIDIA GPUs.
But there is new Intel hardware which helps with AI workloads as well. My tests on my new 12th Gen Intel(R) Core(TM) i9-12900HK based machine show good performance with its 20 cores.
It's always a combination of GPUs and CPUs. So the new test notebook will be interesting to test.

My Thinkpad has a NVIDIA T1000 card, which is already quite OK for local tests.

The smaller Hetzner GPU server has a Nvidia RTX™ 4000 SFF Ada Generation with 20 GB RAM.


But I need some local test hardware to compile and run AI projects locally on Windows and Linux.
The main project which many other projects, including Ollama, are built on is llama.cpp:
https://github.com/ggerganov/llama.cpp

But also the run-time aspect is interesting. On Docker and Kubernetes you need drivers to support GPUs inside the container.
Also hypervisors like Proxmox support GPU mapping into VMs.

A gaming notebook looks like the most reasonable hardware for a local AI lab environment.
Black Friday is a great opportunity to get some cool new hardware.

The Nvidia RTX™ 4060 comes with 8 GB RAM and decent performance.
The latest Intel CPUs support modern instruction sets.
Ordered and looking forward to getting my hands on it ..


https://www.asus.com/laptops/for-gaming/tuf-gaming/asus-tuf-gaming-f17-2023/

Image:Black Friday - Getting some new AI ready hardware


https://github.com/ollama/ollama/blob/main/docs/gpu.md

Image:Black Friday - Getting some new AI ready hardware

 LLM 

Getting started with LLMs - Ollama is a good starting point

Daniel Nashed – 24 November 2024 19:49:40

llama.cpp
is one of the most popular GitHub projects and is used for many integrations:
https://github.com/ggerganov/llama.cpp
But it is also quite complex to run, because you need to download and convert LLMs yourself.
Once you have looked into other projects, you can really appreciate how much easier it can be to use Ollama.


https://ollama.com/

It's available for Linux, Mac and Windows. Ollama takes away all the complexity of compiling optimized binaries for the target CPU and GPU.
And even more important: it automatically downloads and runs LLMs for you, without any conversion.


Ollama provides a registry for downloading LLMs.

For Windows and Mac there is a simple download. For Linux there is a convenience script.
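
For reference, the Linux convenience script is a one-liner (taken from the ollama.com download page; verify the current command there before running it):

curl -fsSL https://ollama.com/install.sh | sh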



Image:Getting started with LLMs - Ollama is a good starting point

Once installed you can download and start the LLM via


ollama run llama3.2



That's really all you need to get started. Anyone who has downloaded and converted an LLM manually before will appreciate this. You can start querying the LLM right away.

The server also listens on 127.0.0.1:11434 with an OpenAI-compatible endpoint.
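
A minimal sketch of querying that endpoint with curl (assuming the llama3.2 model from above is already pulled; Ollama exposes an OpenAI-style /v1/chat/completions route):

curl http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [ { "role": "user", "content": "what is the meaning of 42?" } ]
  }'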



>>>
what is the meaning of 42?
The number 42 has become a cultural phenomenon due to its appearance in Douglas Adams' science fiction series "The Hitchhiker's Guide to the Galaxy." In the book, a supercomputer named Deep Thought is asked to find the "Answer to the Ultimate Question of Life, the Universe, and Everything." After 7.5 million years of computation, Deep Thought finally reveals that the answer is... 42.


However, the book also notes that the characters don't actually know what the ultimate question is, making the answer essentially meaningless without context. This paradoxical situation has led to various interpretations and memes surrounding the number 42.


Some possible meanings or interpretations of 42 include:


1. **The Answer**: In a humorous sense, 42 can be seen as the ultimate answer to life's mysteries.

2. **Absurdity**: The number 42 represents the absurdity and randomness of the universe, highlighting the limitations of human understanding.

3. **Symbolism**: Some see 42 as a symbolic representation of the impermanence and fleeting nature of existence.

4. **Mathematical curiosity**: For mathematicians, 42 is an interesting number that has no special properties or significance beyond its numerical value.


Ultimately, the meaning of 42 is subjective and open to individual interpretation. It has become a cultural icon, symbolizing the search for answers to life's big questions, and often used humorously to represent "the answer" to any question.



List LLMs


ollama list

NAME               ID              SIZE      MODIFIED

llama3.2:latest    a80c4f17acd5    2.0 GB    44 hours ago



Help command-line


Here are the command line options. It's pretty simple.


ollama

Usage:

ollama [flags]

ollama [command]


Available Commands:

serve       Start ollama

create      Create a model from a Modelfile

show        Show information for a model

run         Run a model

stop        Stop a running model

pull        Pull a model from a registry

push        Push a model to a registry

list        List models

ps          List running models

cp          Copy a model

rm          Remove a model

help        Help about any command



Infos about models


To get details about a model, there is a useful command which also shows the quantization.
That's especially important for lower-performing GPUs.


ollama show llama3.2

Model

  architecture        llama

  parameters          3.2B

  context length      131072

  embedding length    3072

  quantization        Q4_K_M


Parameters

  stop    "<|start_header_id|>"

  stop    "<|end_header_id|>"

  stop    "<|eot_id|>"


License

  LLAMA 3.2 COMMUNITY LICENSE AGREEMENT

  Llama 3.2 Version Release Date: September 25, 2024



Grafana & Prometheus meets the Domino Container Image

Daniel Nashed – 24 November 2024 18:55:01

A while ago I started a new Grafana for Domino project ->
https://github.com/nashcom/domino-grafana
It's a separate repository designed to work for Linux, containers and Windows in combination with the Prometheus Node Exporter.


A /metrics end-point was a missing component for the container image.

On Docker you usually install the Node Exporter natively and read the domino.prom file from the container.
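
As a sketch of that setup (the directory path is just an example; use wherever your domino.prom is written):

# Point the Node Exporter textfile collector at the directory containing domino.prom
node_exporter --collector.textfile.directory=/local/metrics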

But in some scenarios the native /metrics endpoint exposed by the container makes sense.
To simplify installations, the Nash!Com Domino Grafana project now contains a pre-compiled Linux binary added to software.txt.

The Node Exporter is also added to software.txt.
At container image build time a single new option allows installing both components automatically (see the build options and the example below).


-- Daniel


-node_exporter   Installs Prometheus node_exporter into the container

-domprom         Installs Domino Prometheus statistics exporter

-prometheus/prom Installs Domino Prometheus statistics exporter & Node Exporter
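
A hypothetical build invocation could look like this, assuming the project's build.sh script (check the container project documentation for the exact syntax):

# Build a Domino image including the Domino Prometheus exporter and the Node Exporter
./build.sh domino -prometheus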



Image:Grafana & Prometheus meets the Domino Container Image

 Domino  Tika 

Apache Tika 3.0.0 released - Available in the Domino Container

Daniel Nashed – 24 November 2024 14:01:58

Apache Tika is a Java-based project leveraged in Domino to parse text from attachments when full-text indexing, using the search filters.
It's a single JAR running as a separate process, listening on the loopback interface to perform attachment parsing.

Tika could actually also be used for your own applications, if you start another instance.
I blogged about it some time ago --> https://blog.nashcom.de/nashcomblog.nsf/dx/tika-in-notesdomino.htm.

Domino 14.5 EA1 and 14.0 FP2 contain the latest stable Tika Server 2.9.2 release.
Now that Tika 3.0.0 is finally released, you can expect Domino 14.5 to also switch to the new major version.

The container project provides a build option to replace the Tika version.
I have just updated Tika to 3.0.0 in the container build and did a quick test.



Image:Apache Tika 3.0.0 released - Available in the Domino Container


Benefits using the Domino community container project to build your container images

Daniel Nashed – 12 November 2024 08:27:24

Domino has been running in a (Docker) container for quite some time as a community image.
It all started when Thomas Hampel initiated the Domino on Docker project for Domino 9 at IBM.


Meanwhile the container image moved to the HCL Open Source repositories and HCL is shipping a ready-to-go image to customers based on this image.
The project provides its own GitHub Pages based documentation (
https://opensource.hcltechsw.com/domino-container/).

You can download the ready-to-go image from My HCLSoftware aka MHS (
https://my.hcltechsw.com/) or the HCL Harbor registry (https://hclcr.io).
But it is only available as the standard build for Domino including Verse, Nomad server and OnTime, plus a separate image for Traveler.


The ready-to-use container image is built from the Domino and Traveler web-kits using the same community build script by HCL.

But there are a couple of benefits of using the open source container image build script on your own.


The build script in the project is easy to use, provides MHS software download automation and comes with a build menu.


Below are some differences and benefits when building the container image on your own.
A standard vendor-built image can't provide the same flexibility and has to focus on the functionality of the product itself.

When running the open source container image you are running the same HCL-provided software with the same level of support.
It's just built on your own in your own environment with software packages downloaded from MHS.


Container environments especially require flexibility when building and enhancing images.
I would be interested to get your feedback. And I especially want to understand which reasons you might have to use the HCL pre-built images.


Would you want to move to the community image based on the additional benefits highlighted below?
What is missing or more difficult when using the community image? What can we improve?

The container project wants to offer full flexibility without making things more complicated.
If you have feedback, we want to hear from you, either here, by mail, or as an issue in the GitHub project (
https://github.com/HCL-TECH-SOFTWARE/domino-container).


-- Daniel



Main differences and benefits using the community image


  • Building the image on your own ensures you have the latest Redhat UBI 9.x image version included. HCL only updates the image at release time
  • The container image supports Domino add-on packages like Domino Leap, the REST API and the language pack
  • It allows installing the latest version of all HCL add-on products (Verse, Nomad Server, Traveler, REST API, Domino Leap) as soon as they are available
    You can build an all-in-one image or separate images for different server types

  • The HCL container image only supports the English locale. The community image allows building with any locale support and adds your build machine's locale as the default
  • The community image comes with full timezone support. The HCL container image is intended to run with the UTC locale.
  • A shipping container image can only include the bare minimum software needed to run the application.
    To install additional software you would need to create your own container build environment and build a derived image.
    The community project supports defining your own add-on packages, which can be installed during the build process

  • By default the community image is built on the latest Redhat UBI 9.x minimal image, only selecting the packages needed for Domino, and adds a couple of additional useful packages
    The HCL image is built on the bigger Redhat UBI 9.1 standard image (see details here:
    https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image)
  • The community image also supports building on other Domino-supported base images.
    This allows full flexibility and also helps with software testing if you want to run Domino on a different Linux flavor
    (see
    https://opensource.hcltechsw.com/domino-container/concept_environments/#supported-base-images).
  • If you are a C-API developer you can create a build container which allows you to build for different Domino versions using different versions of the C-API SDK.
    When selecting the C-API option the container provides a ready-to-use build environment.
  • In case you need additional Linux packages, the container build script allows you to specify those packages when building the image.


Image:Benefits using the Domino community container project to build your container images

First look -- Windows 2025 Server

Daniel Nashed – 10 November 2024 15:05:05

Windows 2025 Server is available. It very much looks like Windows 11 with the Windows Server admin interface we all know and "love".


What's new?


Here is a list of new features for you to get your own impression ->
https://learn.microsoft.com/en-us/windows-server/get-started/whats-new-windows-server-2025

There are no big surprises for a Domino admin.
But Windows admins will have to look carefully into the new features,
especially to check for applications installed by default.



Disabling Windows Defender Antivirus


The first service I disabled was Windows Defender. It took a lot of system resources on my small NUC server.
Even after disabling all options, the service still took quite some CPU.


It's not that straightforward to uninstall unless you have this PowerShell command at hand:


PowerShell:


Uninstall-WindowsFeature -Name Windows-Defender


After a reboot your machine runs much more smoothly. This is especially important for test environments with small CPU capacity and machines on your desk which are expected to stay silent.
I also noticed that the virus scanner had sending samples to the internet enabled.



OpenSSH Server installed by default


A positive surprise for me was that OpenSSH server is installed by default but set to manual.

This can be convenient for secure remote access if you don't need a GUI.



Winget is installed by default


It's a tool to install and manage software. It's quite convenient. But your Windows admin might not like it.


In addition to installing software, Winget is also an easy way to list installed software packages.
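
For example, installing a package by its Id (Notepad++ is just used as an illustration here; the Ids show up in the winget list output further below):

winget install --id Notepad++.Notepad++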



Windows 2025 works on Proxmox


It is already listed as a supported platform when you create a machine.


I have installed Windows 2025 on my Proxmox server using Virtio drivers and SCSI.

That's probably how a Proxmox admin would expect it.
To get it working you have to add the Virtio driver ISO during installation so that Windows detects the disk.

Afterwards you just install the driver package from the ISO as well to get network access etc.



Windows container image ltsc2025


I also quickly looked at the new container image. It's good for testing and I am using Windows containers for testing Domino.

The new image also works in the same way as the previous 2022 image did.


docker pull mcr.microsoft.com/windows/servercore:ltsc2025
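
A quick interactive smoke test could look like this (just an illustration; Windows containers may additionally require the right isolation mode for your host):

docker run -it --rm mcr.microsoft.com/windows/servercore:ltsc2025 cmd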



There is no official support for Domino on Windows 2025 yet.

But we also can't expect Windows admins to introduce this brand-new version in production soon.


There is still some homework for a Windows admin to find out about newly enabled standards and functionality and to secure the machine.



My first conclusion


All my favorite add-on tools still work and my Domino container image also still works with some minor changes (but this isn't for production use, just for local testing).


The new package manager is a welcome tool in my environment. But maybe corporate admins don't like it.

Ubuntu LTS on WSL and containers continue to work in the same way they did.


Not that this update made me a huge Windows fan. But this is an improvement.

This was a first look only! Not a detailed walk-through. Just to show you what I have seen so far.


All in all I like the new Windows 2025 and prefer it over previous versions.

I just ordered new hardware for a new Proxmox server. So I can do more testing.


Feedback


Is anyone already planning to run Windows 2025 in production?

Which type of applications would you update first? Which applications already support Windows 2025?


Usually Domino should get support for a new major OS version with its next feature release.

This would be Domino 14.5. So I raised this question in the Domino 14.5 EA forum today.



Example Winget list



As you can see, I have already installed my favorite Windows helper tools.

Some of them are already managed by winget by default, from what it looks like.



Name                                                               Id                                                                       Version           Available     Source

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

7-Zip 24.08 (x64)                                                  7zip.7zip                                                                24.08                           winget

Git                                                                ARP\Machine\X64\Git_is1                                                  2.47.0.2

HCL Domino                                                         ARP\Machine\X64\HCL Domino                                               14.5.0.0

Mozilla Firefox (x64 en-US)                                        Mozilla.Firefox                                                          132.0.1                         winget

Mozilla Maintenance Service                                        ARP\Machine\X64\MozillaMaintenanceService                                132.0.1

Notepad++ (64-bit x64)                                             Notepad++.Notepad++                                                      8.7.1                           winget

QEMU guest agent                                                   SoftwareFreedomConservancy.QEMUGuestAgent                                108.0.2                         winget

Virtio-win-driver-installer                                        ARP\Machine\X64\{ECC9556E-D54A-457E-86FE-4D555DA605DF}                   0.1.262

MobaXterm                                                          Mobatek.MobaXterm                                                        24.3.0.5248                     winget

Microsoft Visual C++ 2015-2022 Redistributable (x64) - 14.38.33135 Microsoft.VCRedist.2015+.x64                                             14.38.33135.0     14.40.33816.0 winget

Ubuntu 24.04.1 LTS                                                 Canonical.Ubuntu.2404                                                    2404.1.24.0                     winget

App Installer                                                      Microsoft.AppInstaller                                                   1.24.25180.0                    winget

English (United States) Local Experience Pack                      MSIX\Microsoft.LanguageExperiencePacken-US_26100.20.23.0_neutral__8weky… 26100.20.23.0

Windows Security                                                   MSIX\Microsoft.SecHealthUI_1000.26100.1.0_x64__8wekyb3d8bbwe             1000.26100.1.0

Microsoft.UI.Xaml.2.8                                              Microsoft.UI.Xaml.2.8                                                    8.2310.30001.0                  winget

Microsoft.UI.Xaml.2.8                                              Microsoft.UI.Xaml.2.8                                                    8.2310.30001.0                  winget

Microsoft Visual C++ 2015 UWP Desktop Runtime Package              Microsoft.VCLibs.Desktop.14                                              14.0.33728.0                    winget

Microsoft Visual C++ 2015 UWP Desktop Runtime Package              Microsoft.VCLibs.Desktop.14                                              14.0.33728.0                    winget

Microsoft Visual C++ 2015 UWP Runtime Package                      MSIX\Microsoft.VCLibs.140.00_14.0.33519.0_x64__8wekyb3d8bbwe             14.0.33519.0

Microsoft Visual C++ 2015 UWP Runtime Package                      MSIX\Microsoft.VCLibs.140.00_14.0.33519.0_x86__8wekyb3d8bbwe             14.0.33519.0

Windows Terminal                                                   Microsoft.WindowsTerminal                                                1.21.2911.0                     winget

Windows Package Manager Source (winget) V2                         MSIX\Microsoft.Winget.Source_2024.1110.1517.48_neutral__8wekyb3d8bbwe    2024.1110.1517.48

Windows Subsystem for Linux                                        MSIX\MicrosoftCorporationII.WindowsSubsystemForLinux_2.3.24.0_x64__8wek… 2.3.24.0

Notepad++                                                          MSIX\NotepadPlusPlus_1.0.0.0_neutral__7njy0v32s6xk6                      1.0.0.0



 Alpine  Redhat 

Building NGINX on Redhat UBI minimal vs. Alpine

Daniel Nashed – 8 November 2024 22:51:29

Redhat UBI minimal image

The Redhat UBI minimal image is a small image with all the packages you need to run Domino.
It is smaller than the standard image, and you can install additional packages while keeping the image small.

The minimal image uses the micro-dnf stack, which is smaller than the yum/dnf stack used by the standard image.
Here is a blog describing the differences:

https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image
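
For example, installing additional packages in a UBI minimal based image uses microdnf instead of dnf (the package names here are just an illustration):

# Inside a ubi9-minimal based build, install extra packages and clean up afterwards
microdnf install -y procps-ng hostname && microdnf clean all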

Alpine Linux

But when building other applications like NGINX you have another choice. Alpine is an amazing Linux distribution and a great base image to use.
It does not fully support glibc applications like Domino (Alpine is based on musl libc). But it's great for NGINX and other Linux applications.

I have written an NGINX stream module a while ago which needs a matching container image with the same NGINX version.
It's a multi-stage Docker build where the first stage builds NGINX and the stream module (see the sketch after the size comparison below).
The second build stage copies the result into a base image without the build environment.

When you build it on Redhat UBI minimal, the image is almost 150 MB. On Alpine Linux it's around 30 MB.
It's not just a size issue. The Alpine image also contains fewer packages with fewer dependencies and is also available for the ARM architecture.

docker images
REPOSITORY         TAG      IMAGE ID       CREATED          SIZE
domino-nrpc-sni    latest   cc7fb31ff24a   10 seconds ago   29.7MB
domino-nrpc-sni    latest   faee6825892d   30 seconds ago   146MB
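
A minimal sketch of such a multi-stage build on Alpine could look like this (the NGINX version, module directory and configure options are assumptions for illustration, not the exact build used for the image above):

# Stage 1: build NGINX plus the stream module (version and paths are examples)
FROM alpine:3.20 AS build
RUN apk add --no-cache build-base pcre2-dev zlib-dev openssl-dev curl
WORKDIR /build
RUN curl -sL https://nginx.org/download/nginx-1.26.2.tar.gz | tar xz
COPY nrpc-sni-module /build/nrpc-sni-module
RUN cd nginx-1.26.2 && \
    ./configure --with-stream --add-module=/build/nrpc-sni-module && \
    make -j && make install

# Stage 2: copy only the resulting binaries into a clean Alpine base image
FROM alpine:3.20
RUN apk add --no-cache pcre2 zlib openssl
COPY --from=build /usr/local/nginx /usr/local/nginx
ENTRYPOINT ["/usr/local/nginx/sbin/nginx", "-g", "daemon off;"]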

Running the Domino Download script on Windows

Daniel Nashed – 3 November 2024 14:39:02
The Domino download script is written in bash.
It is mainly intended for Linux, with special options for automating downloads, for example for the Domino container image.

But did you know it can also run in Git Bash on Windows?
And it also runs in any WSL-based Linux on your Windows machine.

On Windows I am mostly using it in combination with WSL. But Git Bash, which comes with the Git client, also works well.

https://nashcom.github.io/domino-startscript/domdownload/

Of course it also works on macOS, besides most Linux distributions.

Tip: If you know what to download, you can specify the file name directly.
The info command and the -curl command also work this way.
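
A hypothetical call with an explicit file name could look like this, assuming the domdownload wrapper from the start script project is installed (the web-kit file name is just a placeholder, use the real name from MHS):

domdownload <webkit-file-name>.tar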



Git Bash example

Image:Running the Domino Download script on Windows


Specifying a webkit file name directly


Image:Running the Domino Download script on Windows





 GitHub 

Consuming GitHub projects - Don’t use Download ZIP

Daniel Nashed – 31 October 2024 07:58:16

Git and GitHub are among the most common ways to consume software today.

The best way to consume it is to use the Git client on Linux.
If you don't have Linux or at least a WSL-based Linux on your Windows machine, you should install the Windows Git client.
The Windows Git client also brings many helpful Linux tools and a very current OpenSSL command-line version to your Windows machine.

Usually GitHub projects provide releases which can be downloaded directly.
But not all software puts the latest changes directly into a release.

For example, the HCL Domino container project constantly changes to add new functionality and also to update software information for Domino and companion products.
Changes go first into the develop branch and are merged into the main branch when fully tested.



Cloning a Git repository is the preferred way

With the Git client you just clone the Git repository to a local directory.
The big benefit is that you can switch between branches of the repository (like main and develop).
But the even bigger benefit is that you can pull changes into your local instance of the repository.
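
For the HCL Domino container project mentioned above, that workflow looks like this:

# Clone the repository, switch to the develop branch and pull later updates
git clone https://github.com/HCL-TECH-SOFTWARE/domino-container.git
cd domino-container
git checkout develop
git pull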


Git works with a proxy

In corporate environments you can't connect to the internet directly.
But the Git client supports proxies. If you can connect to GitHub directly, you should always use "git clone".
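
Configuring a proxy for Git is a one-time setting (the proxy host and port are placeholders):

git config --global http.proxy http://proxy.example.com:8080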


Git GUI clients

Personally I am not a big fan of Git clients with a UI, and they are not needed when consuming Git projects.
But you could also use a graphical Git client. This might help you understand changes and look into details of the repository.

Usually the command line makes the most sense.


Download ZIP option isn't the right way

Besides the clone button, GitHub shows a "Download ZIP" button.
This option provides the latest version of the selected branch as a ZIP file.

Sadly, especially for Linux software, a ZIP isn't the best format, because it does not preserve file permissions.
When using the Nash!Com start script project or the HCL container project it is essential that the scripts keep their execute permissions.


Get a GitHub project as a tarball

There is another option you don't find in the web GUI as a button.
You can download a branch as a tarball instead. This preserves the file permissions and is the better way to download a GitHub project if you really need to and can't clone it.

The general format looks like this:

curl -L https://github.com/{username}/{repository}/tarball/{branch_name} -o repository.tar.gz

For the HCL Domino container project it would look like this, downloaded via the curl command line:

curl -sL https://github.com/HCL-TECH-SOFTWARE/domino-container/tarball/main -o domino-container.tar.gz
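
Extracting the tarball keeps the execute permissions intact (GitHub puts the content into a top-level directory that includes the commit hash in its name):

tar -xzf domino-container.tar.gz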


If downloading via browser, GitHub generates a file name for you. The URL would look like this:

https://github.com/HCL-TECH-SOFTWARE/domino-container/tarball/main

