Daniel Nashed, 1 April 2017 00:42:31
Domino 9.0.1 Feature Pack 8 introduced "NIFNSF", which allows separating the view/folder index into a separate file.
Let me try to summarize my current experience from my tests and from the field.
There are multiple benefits to moving the index into a separate file.
1. Backup Storage Reduction
First of all, having the index in a separate file reduces the amount of data you need to back up.
For mail databases the index is around 10% of the database size. With DAOS enabled, it is about 30% of the remaining NSF data.
So total backup time and backup storage are reduced.
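As a rough illustration using the percentages above (the database sizes here are made up for the example, not measurements):

  50 GB mail database without DAOS     -> roughly 5 GB of index leaves the NSF backup
  same database with DAOS, 15 GB of
  remaining NSF data                   -> roughly 4.5 GB of index leaves the NSF backup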
2. NSF Data beyond the 64 GB Size Limit
The maximum size of an NSF is 64 GB. With DAOS enabled you can increase the logical size of a server-based database by moving attachments into the DAOS store.
With DAOS, external attachments can add up to 1 TB; beyond that size the internal counters might overflow.
But in some cases you still need more than 64 GB for NSF data plus the view/folder indexes. With NIFNSF, the 64 GB limit applies only to the data in the NSF, without the view/folder index.
3. Performance
NIFNSF is intended to deliver better performance than keeping all data in a single NSF file.
However, there is a current performance issue. For mail databases there should not be a big difference, but for more complex views in applications the performance with NIFNSF might not be as good as without it. Tests have shown that some operations can take double the time.
There is a pending fix that might be delivered with an IF for FP8, which should bring performance back to almost the same level as without NIFNSF.
And for FP9 an optimization is planned for better performance with concurrent operations. Those changes did not make it into FP8.
So for now you might want to wait at least for an IF before enabling NIFNSF for complex applications.
-- Storage Location for NIFNSF --
There are multiple options for configuring where to store the .NDX files which hold the NIF data.
What you choose depends on your environment, platform and your requirements.
a.) Have NDX files stored next to your NSF files
b.) Have NDX files stored in a separate folder in the data directory
c.) Have NDX outside the data directory on the same disk
d.) Have NDX stored on a separate disk
There is no one-size-fits-all recommendation. It really depends on the storage situation and platform you are running on.
If you can, for example on Windows, I would store NDX files at least outside the data directory.
On Linux you often cannot move the NDX files outside the data directory without a new mount point, because the data directory is often a mount itself.
If you need to increase your storage anyway because the NSF disk is full, a separate disk (most of the time a virtual disk) makes sense.
This is a good way to get a clean new allocation, and it will separate the I/O operations.
-- Enabling NIFNSF on your Server --
The first requirement is that you are using transaction logging. Circular translog is perfectly OK for this.
And translog is a general recommendation for Domino anyway: for stability, fault recovery and also for performance!
ODS 51 or higher
You will need at least ODS 51 for NIFNSF. But I would recommend using ODS 52 for all databases on your server.
The notes.ini setting Create_R9_Databases=1 will ensure the ODS is updated the next time you run a copy-style compact.
There are a couple of notes.ini settings. The most important, NIFNSFEnable=1, enables NIFNSF on your server.
To store the NDX files in different locations (see the options above) you can leverage NIFBasePath=path, depending on your preferences.
In addition, if you want all new databases to be NIFNSF enabled, the setting CREATE_NIFNSF_DATABASES=1 ensures that all new databases are automatically created NIFNSF enabled.
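Putting the settings above together, a notes.ini configuration could look like this (the NIFBasePath value is only an illustrative path, not a recommendation):

NIFNSFEnable=1
NIFBasePath=E:\nif
Create_R9_Databases=1
CREATE_NIFNSF_DATABASES=1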
-- Enabling NIFNSF on a Database --
Once your server is NIFNSF enabled you can start enabling NIFNSF on your databases via compact.
Please take care not to run the compact operation on all databases blindly. We have seen customers who enabled NIFNSF even on the DAOS catalog, although this special database has no views.
I would currently start with mail databases only! You just specify the mail directory.
The normal recommendation is to use
compact -c -NIFNSF ON mail/
This will enable the feature and also move existing indexes out of the NSF.
But if a database is in use, a copy-style compact is not possible.
Instead you could enable NIFNSF on databases without a copy-style compact and run a copy-style compact later on, with either compact -c or the DBMT tool, which you might have configured anyway.
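For databases that are in use, the two-step approach described above could look like this (a sketch; the mail/ directory is just an example):

compact -NIFNSF ON mail/
(later, during a maintenance window)
compact -c mail/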
Once the database is on ODS 51 or higher and NIFNSF is enabled, new indexes are created in the NDX file.
But only a copy-style compact will move existing views to the NDX file.
-- Checking NIFNSF --
You can check which databases are already NIFNSF enabled, and there is also a way to see the size of the NDX files. Two console commands are useful:
show dir -nifnsfonly
shows only NIFNSF-enabled databases
show dir -nifnsf
shows all databases, including their NDX files
-- Maintaining Databases with NIFNSF enabled --
I have done some tests. Only a copy-style compact will compact the NDX file.
Many customers are still using compact -B for an in-place, space-reduction compact.
There are also other reasons to leverage DBMT, which uses copy-style compacts and does space pre-allocation to ensure the NSF is not allocated fragmented.
The copy-style compact will also shrink the NDX if needed. A compact -B did not free any space from the NDX file in my tests.
However, free space in an NDX file should still be reused when it is released from a purged view/folder index during normal runtime.
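If you use DBMT as mentioned above, a console command along the following lines runs copy-style compacts in a time window (the thread count and time range are example values only; check the DBMT documentation for the full parameter list):

load dbmt -compactThreads 2 -range 1:00AM 5:00AM mail/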
-- Tuning for NIFNSF --
An NDX file is an NSF file: the index data needs a container. Therefore, if you are running a large server, you have to make sure you have sufficient dbcache entries, because each NDX file will also need a cache entry.
By default the number of dbcache handles depends on the size of the NSF buffer pool (which is 1024 MB for 64-bit). The number of cache entries is around 3 times the buffer pool size in MB.
3000 dbcache entries should be OK for most servers. But if your server is already at the limit, you have to increase it.
Here are the relevant server statistics from a current customer example:
Database.DbCache.CurrentEntries = 4498
Database.DbCache.HighWaterMark = 4500
Database.DbCache.MaxEntries = 3000
Database.DbCache.OvercrowdingRejections = 15220
Your CurrentEntries and HighWaterMark should always be below MaxEntries.
And OvercrowdingRejections should always be zero!
So in this case it would make sense to increase the number of cache entries to 6000.
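Assuming the standard notes.ini parameter for the database cache limit (verify the name for your Domino version), the increase would look like this:

NSF_DbCache_Maxentries=6000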