Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...

Domino Database Maintenance and Fragmentation

Daniel Nashed  2 November 2018 14:19:30

I am currently looking into this for a customer, and I think most customers are in the same situation.
I would assume that most of you are already using storage optimizations like compression, DAOS and NIFNSF.
Sorry for the long post, but I think it might be helpful, including the details about file fragmentation at the end of the blog post.
Also my older but still relevant compact presentation ( dnug2016_compact.pdf ) might be helpful. Let me know what you think...

-- Daniel

Current situation in most Domino environments

The current situation in most Domino environments is that the classical compact and maintenance operations are used.
Classically you have configured something like compact -B -S10,
either compacting databases with more than 10% free space nightly, or just before your backup in case you use archive-style transaction logging.
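On the server console, such a classical run looks like this (in practice it is usually scheduled via a Program document):

```
load compact -B -S10
```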

This recovers free space and reorganizes your databases to some extent.
But even an in-place compact, which also reduces file sizes, isn't the perfect solution.

Also, if you run a compact -B on system databases or other highly used databases, those databases are a lot slower during the compact because of extensive locking.

What Compact -B does

Compact -B is an in-place compact that allows users to continue to work with the database and also frees up remaining space in the database by truncating free space at the end of the database.
If the database is in use, this compact can be quite slow and will slow down user/server operations in the database while it runs.
The compact generates a new DBIID, so the database will need a new backup. This is why these compact operations are usually scheduled before a full backup (in case archive-style transaction logging is used).

What Compact -B doesn't do

The existing file is optimized, but only to a certain extent. And this does not change the file fragmentation of the database file on disk.

It would make more sense to use a copy-style compact to reorganize the database completely.
A copy-style compact is also what the database team recommends and what IBM uses in the cloud.

Why isn't Compact -C the right solution

The standard copy-style compact does reorganize the database from an NSF point of view.
A copy-style compact takes the database off-line, creates a new temporary file and copies notes, objects etc. into the new file, which is finally renamed to the original database filename.

In this type of compact, the new file starts small at OS level and is increased step by step until compact has completed the copy operation and renames the .tmp file to the NSF.

This usually leads to a fragmented NSF file at OS level after the compact - especially on NTFS/Windows.
The OS tries to optimize those small allocations and ends up writing blocks into smaller free spots in the file system.

Extra Tip: Make sure .tmp files are added to the exclusion list of your anti-virus solution.

DBMT -- Database Maintenance Tool

The DBMT server task replaces separate compact, updall and other maintenance operations and can be used in a variety of ways.
You can configure it to run without compact every day, and with compact every weekend before the backup begins.

It can also be scheduled in a way that it only compacts databases every n days, and it also lets you specify a time window for your compact operations.
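As a sketch, a scheduled DBMT run could be invoked like this (switch names as documented for Domino 9 and later; verify them against your release before use):

```
load dbmt -compactThreads 4 -updallThreads 4 -compactNdays 7 -range 1:00AM 5:00AM
```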

In contrast to the standard compact task, DBMT determines the size of the new compacted database and allocates the space in one large chunk!
This allows the OS to optimize where the database is placed, and you end up with a very small number of fragments on disk.

Via notes.ini, e.g. DBMT_PREFORMAT_PERCENT=120, you can increase this allocation to keep 10-20% more free space in the database.

This ensures that you don't need new small allocations in the database when creating a new note or object.
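For example, the parameter mentioned above can also be set on the server console without editing notes.ini directly (a sketch; a value of 120 means pre-allocating 20% extra space):

```
set config DBMT_PREFORMAT_PERCENT=120
```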

The extension granularity of the database is currently still quite small, so without pre-allocated free space you end up with a new allocation for most document updates.

Having free space in the database allows faster allocation and less fragmentation.
Furthermore, if you have DAOS enabled, a 10-20% additional NSF disk requirement isn't a large overhead compared to the benefit gained in optimized file allocation.

I have added some OS specific information about fragmentation for Windows and Linux at the end for details.

DBMT also has some limitations. For example, system databases are not compacted, and you cannot use the free space option.
But it offers other flexible options that make sense. You can specify the number of days between compacts (databases are skipped if they have been compacted recently).

And with the pre-allocation you specify the free space in the database anyway.

Separate Maintenance for system databases (on Linux optimized at startup)

System databases are always in use. But it is important to run a copy-style compact on those databases as well.
This can only be done while the server is not running. That's why I added special pre-startup compact operations to my Linux start script.

I have separate options for log.nsf and for other system databases in the start script. The current configuration has new examples leveraging DBMT.

But this wasn't completely flexible, because it was always executed at startup, either manually or with a "restartcompact" command.
So I added another option today for one of my customers, which might also be useful for your environments.
The customer is doing regular Linux patching, and they reboot the Linux machines afterwards.

I added a new one-time start compact option for system databases. It uses the already available compact options and is triggered by an extra file in the data directory.
It can be enabled and disabled via a start script command.

New Feature in the next Domino Start Script

Here is what I planned for the next version. The idea is to have a flexible way to control when a start should compact databases.

It can also be automated if you have central patch management. It's just a file that needs to be created to trigger the compact at start-up.
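A minimal sketch of such an automated trigger from patch management, assuming only that the marker file is named 'domino_nextstartcompact' and lives in the data directory as described below:

```shell
# Arm the one-time compact for the next server start by creating the
# marker file the start script checks for. The data directory path is
# passed in as an argument, since it differs per installation.
arm_nextstart_compact() {
    touch "$1/domino_nextstartcompact"
}

# Example: arm_nextstart_compact /local/notesdata
```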

So this provides more flexible control over system database compacts, and it ensures the OS admin can run those compacts without knowing the exact syntax.

-- snip --

compactnextstart on|off|status


Allows you to configure a one-time compact of databases at next startup.

This functionality controls a text file 'domino_nextstartcompact' in your data directory.

If this file is present, the compact operations configured in the start script are executed at next startup.

The 'domino_nextstartcompact' file is deleted again at next startup.

This is intended, for example, to be used after a planned OS reboot or OS patch.

And it avoids separate steps executed by the OS-level admin.

compactnextstart on  --> enables the compact at next startup

compactnextstart off --> disables the compact at next startup

Specifying no option or any other option shows the current setting.

-- snip --

Summary and Recommendations

You should really look into DBMT for normal operations and also for system databases!

The command lines for DBMT are quite flexible. I have presented multiple times about those features, which were introduced with Domino 9.

But they are still not widely used. I would really recommend you have a closer look at DBMT.

I have had my own tool "nshrun" for years, which does what DBMT does along with a couple of other, more flexible options.

But in general DBMT is a great out of the box optimization for your database maintenance.

I have added extracts from an old presentation below as an attachment for details about all compact and other maintenance options.

There are some parameters to set, and there are specific switches to enable threads for compact, update and other options for DBMT.

If you are interested in fragmentation details, check the following appendix as well.

Appendix - File Fragmentation

Fragmentation of file systems has different effects on Windows and Linux.

Many customers use tools on Windows to reorganize their files.

But as discussed above, it makes more sense to use Domino compact with pre-allocation to create the files with low fragmentation and keep the fragmentation low.

The performance impact is hard to measure. But your backup operations and also some Domino operations will certainly be faster with a well-maintained NSF file.

We are looking at maintaining the data in the NSF file and also the file itself at the same time.

So with DBMT available, I would not use tools to defrag your Domino files today.

But the following gives you an idea how to analyze file fragmentation on Windows and Linux.

The well-known Contig tool from Sysinternals allows you to analyze and defragment files.

I am using it just to analyze the fragmentation level.


[See Contig reference for more information and download]

You can see that even my local log.nsf on my client is quite fragmented.

D:\notesdata>n:\tools\Contig64.exe log.nsf

Contig v1.8 - Contig
Copyright (C) 2001-2016 Mark Russinovich

  Number of files processed:      1
  Number of files defragmented:   1
  Number unsuccessfully procesed: 0
  Average fragmentation before: 10592 frags/file
  Average fragmentation after : 10592 frags/file

For a Notes client no DBMT is available, and once databases have the right ODS level, there is no automatic copy-style compact trigger anyway.

But in this blog post my main focus is on the server side where you could leverage DBMT.


On Linux I had never looked into fragmentation before. But there are tools available to analyze fragmentation levels on Linux as well.

filefrag allows you to see the number of fragments. If you are interested in details, run filefrag -v on a single NSF.

But I was more interested in seeing the fragmentation of my databases.

The following command line gets the fragment count for all NSF files and lists the 40 most fragmented files.

On my secondary server it looks quite OK. But I did the same today on a customer server and found databases with thousands of fragments.

I tested on my own mail file, and the DBMT compact did reduce the number of fragments. (The one-liner only works if you have no blanks in your file names.)

find /local/notesdata -type f -name "*.nsf" -exec filefrag {} \; | cut -d " " -f 1-2 | sort -t" " -rnk2 | head -40

find -type f -name "*.nsf" -exec filefrag {} \; | cut -d " " -f 1-2 | sort -t" " -rnk2 | head -20

./domlog.nsf: 594
./nshtraceobj.nsf: 84
./log.nsf: 71
./big/mail10.nsf: 31
./nshmailmon.nsf: 26
./statrep.nsf: 23
./dbdirman.nsf: 17
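The one-liner above breaks on file names containing blanks because of the space-based cut. A variant that tolerates blanks, assuming filefrag's usual "&lt;name&gt;: &lt;n&gt; extents found" output format, could look like this:

```shell
# frag_top: rank filefrag output by extent count, highest first.
# Splits on ": " instead of the first blank, so names with spaces survive.
frag_top() {
    awk -F': ' '{ split($NF, a, " "); print a[1], $1 }' | sort -rn | head -n "${1:-40}"
}

# On a server you would pipe the real data in, e.g.:
#   find /local/notesdata -type f -name "*.nsf" -exec filefrag {} + | frag_top 40
```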



1. Christian Henseler  02.11.2018 21:28:34  Domino Database Maintenance and Fragmentation

To compact system databases in a convenient way, I am using a compact -replica -restart program document with an ind file for the system databases.

It's timed to finish right before our weekly maintenance slot, so if something goes wrong, we are dealing with it during maintenance.

Advantage is that we don't have to wait for the offline compact of system databases.

The only drawback is that in 95% of cases, the Agent Manager does not initialize properly and we have to reboot the machine completely. A Domino service stop/start does not help on a Windows machine.

2. Daniel Nashed  04.11.2018 7:42:56  Domino Database Maintenance and Fragmentation

@Christian, I am personally not a big fan of compact -replica.

It's more a work-around for large ID tables, where you have to pull a new replica to get the Note IDs aligned to fit.

Even this is hopefully not needed any more with Domino 10 and the increased ID-Table limits.

The compact -replica also generates a fragmented file on disk. Only DBMT with pre-allocation allocates the database in one chunk.

My idea is to add the DBMT compact operations to the maintenance window for an OS update.

This is usually during off-peak times and if you have a cluster it should not make a big difference if the server isn't available for another 5 minutes.

-- Daniel

3. Adam Osborne  05.11.2018 6:03:38  Domino Database Maintenance and Fragmentation

Thanks for taking the time to create your posts.

In this comment I'd like to present some information that should change your point of view about Domino specific defragmentation products:

1. It's great that current versions of DBMT can create contiguous databases (IBM added this feature after we blogged about fragmentation issues in 9.0; John Paganetti even used our research at Connect 2014).

9.0.1 attempted to address this by using pre-allocation, but the critical point is that DBMT can only create contiguous files if free space isn't badly fragmented, and the chances of that happening on a Windows Domino server that hasn't had active defragmentation running on it are two-fifths of not much. It's basically zero. We blogged about this in April last year, see --> { Link }

2. Although on small system the effects of low level fragmentation can be minor, they are still there and can be measured. Things however change dramatically when you involve lots of users, large databases, full-text indexes and time. This describes the majority of the Domino servers we see.

Invariably high levels of fragmentation take hold, and their effects are very, very measurable. We've blogged about this a lot, but here is an example at an Australian coal mining organisation (where DBMT actually made this worse) with before and after information -->

The only difference here was defragmentation.

We've put thousands of man-hours into understanding and developing solutions in this space. I assure you the effects are real and the benefits are measurable and bankable.

4. Daniel Nashed  06.11.2018 18:26:51  Domino Database Maintenance and Fragmentation

Thanks Adam for your reply!

There isn't a one-size-fits-all solution, and it is complex depending on the platform and the configuration.

I looked into Windows and Linux in my post and I am not aware of any defrag tool for Linux file systems. Still you can see that Linux is also affected by fragmentation.

Also, the file systems work differently, and they will only work well when sufficient space is free!

Of course you need sufficient contiguous free space for one large allocation to be in one chunk. But even if it were a small number of chunks, that would already be great.

Pre-allocation with a size larger than the database itself can help, so that the database is extended less often and stays in one piece until the next compact. You will have less movement of data on the disk that way.

So I really have to recommend leveraging a DBMT compact. How much defragmentation is still needed with a pre-allocation of 110-120% really depends on the environment.

Over time if all databases operate that way, the file-system overall will be less fragmented and new pre-allocs will result in less fragments.

In addition I would distinguish between .nsf files, .ndx files, .nlo files and especially the .ft index. In a perfect world I would put all of them into separate file systems. And there are settings for all of them. Maybe I should really write another blog entry about storage optimization in general.

.nlo files are always allocated in one step. .ndx files grow over time, and rebuilding them completely can make sense. But hopefully they are much smaller than the .nsf in most cases.

.ft indexes should be rebuilt once per month because they degenerate over time. DBMT has extra options for rebuilding them after a certain number of days.

So the fragmentation also depends on those kind of configuration options.

Your application also offers analysis and a more granular approach, optimizing the files on disk directly!

My approach is to have Domino behave well in regard to file-system allocations: avoid small allocations and let Domino ask for larger chunks of disk space, giving the OS the opportunity to behave well.

I am currently planning the reorganization of the compact operations with a customer, and we will do an analysis before and after.

The Linux script shows us the most fragmented databases. And I expect a dramatic decrease in .nsf file fragmentation.

Once I have finished it, I will write another blog post with our results. But that might take some time until we implement it on a production server. The QA environment doesn't have the same amount of data or traffic and isn't a real-world example.

-- Daniel

5. Thorsten  21.11.2018 10:39:43  Domino Database Maintenance and Fragmentation

Hi Daniel,

many thanks for this posting on DBMT. In earlier releases, not all databases which should be handled by DBMT were really compacted, even if there was time left before the stop or end time. This is fixed in Domino 10, and there is a backport available for Windows 64-bit for Domino 9.0.1 FP10 (HF361). If IF4 is installed on Domino 9.0.1 FP10, it has to be uninstalled first.

-- Thorsten


  • [IBM Lotus Domino]
  • [Domino on Linux]
  • [Nash!Com]
  • [Daniel Nashed]