Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...

Domino Database Maintenance and Fragmentation

Daniel Nashed  2 November 2018 14:19:30


I am currently looking into this for a customer, and I think most customers are in the same situation.
I assume most of you are already using storage optimizations like compression, DAOS and NIFNSF.
Sorry for the long post, but I think it might be helpful, including the details about file fragmentation at the end of the blog post.
Also my older but still relevant compact presentation ( dnug2016_compact.pdf
 ) might be helpful. Let me know what you think...

-- Daniel



Current situation in most Domino environments


The current situation in most Domino environments is that the classical compact and maintenance operations are used.
Classically you have configured something like compact -B -S10:
either compacting databases with more than 10% free space nightly, or just before your backup in case you use archive-style transaction logging.
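
Such a classic setup is typically a nightly Program document or console command along these lines (illustrative only; -B frees unused space, -S10 restricts the run to databases with more than 10% free space, as described above):

```
load compact -B -S10
```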


This recovers free space and reorganizes your databases to some extent.
But even an in-place compact, which also reduces file sizes, isn't the perfect solution.

Also, if you run compact -B on system databases or other highly used databases, those databases are a lot slower during the compact because of extensive locking.


What Compact -B does


Compact -B is an in-place compact that allows users to continue working with the database. It also frees up remaining space by truncating free space at the end of the database file.
If the database is in use, this compact can be quite slow and will slow down user/server operations in this database while it runs.
The compact generates a new DBIID, so the database needs a new backup. This is why these compact operations are usually scheduled before a full backup (in case archive-style transaction logging is used).


What Compact -B doesn't do


The existing file is optimized, but only to a certain extent. And it does not change the file fragmentation of the database file on disk.

It would make more sense to use a copy-style compact to reorganize the database completely.
A copy-style compact is also what the database team recommends and what IBM uses in the cloud.


Why Compact -C isn't the right solution


The standard copy-style compact does reorganize the database from an NSF point of view.
A copy-style compact takes the database off-line, creates a new temporary file, and copies notes, objects etc. into the new file, which is finally renamed to the original database filename.


In this type of compact, the new file starts out small at OS level and is grown step by step until the copy is complete; then the .tmp file is renamed to the NSF.

This usually leads to a fragmented NSF file at OS level after the compact, especially on NTFS/Windows.
The OS tries to optimize those small allocations and ends up writing blocks into smaller free spots in the file system.


Extra Tip: Make sure .tmp files are added to your anti-virus solution's exclusion list.


DBMT -- Database Maintenance Tool


The DBMT server task combines compact, updall and other maintenance operations and can be used in a variety of ways.
For example, you can configure it to run without compact on weekdays and with compact on the weekend, before the backup begins.


It can also be scheduled so that it only compacts databases every n days, and it lets you specify a time window for your compact operations.


In contrast to the standard compact task, DBMT determines the size of the new compacted database and will allocate space in one large chunk!
This allows the OS to optimize where the database is allocated and you end up with a very small number of fragments on disk.

Via notes.ini, e.g. DBMT_PREFORMAT_PERCENT=120, you can increase this allocation to keep 10-20% more free space in the database.

This ensures that creating a new note or object does not immediately require new, small allocations in the database.


The extension granularity of the database is currently still quite small, so you otherwise end up with a new allocation for most document updates.

Having free space in the database allows faster allocation and less fragmentation.
Furthermore, if you have DAOS enabled, 10-20% additional NSF disk space isn't a large overhead in comparison to the benefit gained in optimized file allocation.


I have added some OS specific information about fragmentation for Windows and Linux at the end for details.


DBMT also has some limitations. For example, system databases are not compacted, and you cannot use the free space option.
But it offers other flexible options that make sense. You can specify the number of days between compacts (databases are skipped if they have been compacted recently).

And with the pre-allocation you are specifying the free space in the database anyway.
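
As an illustration, a DBMT invocation could look like the following. The option names stem from the Domino 9 DBMT documentation; please verify them against your version, and treat the thread counts, day values and time window as example figures only:

```
load dbmt -compactThreads 4 -updallThreads 4 -compactNdays 7 -range 1:00AM 5:00AM
```

Combined with a notes.ini entry such as DBMT_PREFORMAT_PERCENT=120, this compacts only databases that haven't been compacted within the last 7 days, stays inside the given time window, and pre-allocates the new files with extra free space.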

Separate Maintenance for system databases (on Linux optimized at startup)


System databases are always in use, but it is important to run a copy-style compact for those databases as well.
This can only be done when the server isn't running. That's why I added special pre-startup compact operations to my Linux start script.


I have separate options for log.nsf and for other system databases in the start script. The current configuration has new examples leveraging DBMT.


But this wasn't completely flexible, because it was always executed at startup, manually or with a "restartcompact" command.
So I added another option today for one of my customers, which might also be useful for your environments.
The customer does regular Linux patching and reboots the Linux machines afterwards.

I added a new one-time start compact option for system databases. It uses the already available compact options and is triggered by an extra file in the data directory.
It can be enabled and disabled via a start script command.


New Feature in the next Domino Start Script


Here is what I have planned for the next version. The idea is to have a flexible way to control when a start should compact databases.

It can also be automated if you have central patch management. It's just a file that needs to be created to trigger the compact at start-up.

This provides more flexible control over system database compacts and ensures the OS admin can run those compacts without knowing the exact syntax.

-- snip --

compactnextstart on|off|status

------------------------------


Allows you to configure a one-time compact of databases at the next startup.

This functionality controls a text file 'domino_nextstartcompact' in your data directory.

If this file is present, the compact operations specified via

DOMINO_COMPACT_TASK, DOMINO_COMPACT_OPTIONS, DOMINO_LOG_COMPACT_OPTIONS are executed at next start.

The 'domino_nextstartcompact' will be deleted at next startup.


This is intended to be used, for example, after a planned OS reboot or OS patch.

It avoids separate steps that would otherwise have to be executed by the OS-level admin.

compactnextstart on  --> enables the compact at next startup

compactnextstart off --> disables the compact at next startup


Specifying no option, or any other value, shows the current setting.


-- snip --
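
Because this is just a marker file, central patch management can also trigger it directly. A minimal sketch of the mechanism, assuming the common /local/notesdata data directory (the path and the DOMINO_DATA_PATH variable are illustrative):

```shell
# Enable the one-time compact for the next server start
# (what "compactnextstart on" does behind the scenes)
DATA_DIR="${DOMINO_DATA_PATH:-/local/notesdata}"
touch "$DATA_DIR/domino_nextstartcompact"

# At the next start, the script sees the file, runs the compact configured
# via DOMINO_COMPACT_TASK / DOMINO_COMPACT_OPTIONS, and deletes the file.

# Disable again (what "compactnextstart off" does)
rm -f "$DATA_DIR/domino_nextstartcompact"
```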



Summary and Recommendations


You should really look into DBMT for normal operations and also system databases!


The command lines for DBMT are quite flexible. I have presented multiple times on these features, which were introduced with Domino 9.

But they are still not widely used. I would really recommend that you have a closer look at DBMT.


I have had my own tool "nshrun" for years, which does what DBMT does plus a couple of other, more flexible options.

But in general DBMT is a great out of the box optimization for your database maintenance.


I have added extracts from an older presentation below as an attachment, with details about all compact and other maintenance options.

There are some parameters to set, and there are switches to specify the number of threads for compact, updall and other DBMT options.


If you are interested in fragmentation details, check the following section as well.


Appendix - File Fragmentation


File-system fragmentation has different effects on Windows and Linux.

Many customers use tools on Windows to reorganize their files.


But as discussed above, it makes more sense to use a Domino compact with pre-allocation to create the files with low fragmentation and to keep fragmentation low.

The performance impact is hard to measure, but your backup operations and also some Domino operations will certainly be faster with a well-maintained NSF file.

We are looking at maintaining the data inside the NSF file and the file itself at the same time.


So with DBMT available, I would not use extra tools to defragment your Domino files today.


But the following section gives you an idea of how to analyze file fragmentation on Windows and Linux.


The well-known Contig tool from Sysinternals allows you to analyze and defrag files.

I am using it just to analyze the fragmentation level.


Windows


See the Contig reference for more information and download:

https://docs.microsoft.com/en-us/sysinternals/downloads/contig

You can see that even my local log.nsf on my client is quite fragmented.


D:\notesdata>n:\tools\Contig64.exe log.nsf


Contig v1.8 - Contig

Copyright (C) 2001-2016 Mark Russinovich

Sysinternals


Summary:

  Number of files processed:      1

  Number of files defragmented:   1

  Number unsuccessfully procesed: 0

  Average fragmentation before: 10592 frags/file

  Average fragmentation after : 10592 frags/file


For a Notes client, no DBMT is available, and once databases are at the right ODS level there is no automatic copy-style compact trigger anyway.


But in this blog post my main focus is on the server side where you could leverage DBMT.



Linux


On Linux I had never looked into fragmentation before. But there are tools available to analyze fragmentation levels on Linux as well.


filefrag shows you the number of fragments (extents). If you are interested in the details, run filefrag -v on a single NSF.


But I was more interested in seeing the fragmentation of my databases.


The following command line gets the fragment count for all NSF files and lists the 40 most fragmented files.

On my secondary server it looks quite OK. But I ran the same today on a customer server and got databases with thousands of fragments.


I tested on my own mail file, and the DBMT compact did reduce the number of fragments. (The command only works if you have no blanks in your file names.)



find /local/notesdata -type f -name "*.nsf" -exec filefrag {} \; | cut -d " " -f 1-2 | sort -t" " -rnk2 | head -40


find -type f -name "*.nsf" -exec filefrag {} \; | cut -d " " -f 1-2 | sort -t" " -rnk2 | head -20

./domlog.nsf: 594

./nshtraceobj.nsf: 84

./log.nsf: 71

./big/mail10.nsf: 31

./nshmailmon.nsf: 26

./statrep.nsf: 23

./dbdirman.nsf: 17

...
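
For data directories that do contain blanks in file names, a NUL-delimited variant of the listing above avoids that limitation. This is a sketch: it parses filefrag's "N extents found" summary line and assumes filefrag is available, i.e. an extent-based Linux file system such as ext4 or xfs.

```shell
# List the 40 most fragmented NSF files, blank-safe:
# find emits NUL-separated paths, xargs -0 keeps names intact,
# awk extracts the extent count from "path: N extents found"
find /local/notesdata -type f -name "*.nsf" -print0 \
  | xargs -0 filefrag \
  | awk -F': ' '{print $2+0, $1}' \
  | sort -rn \
  | head -40
```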
