Current Information about NIFNSF
Daniel Nashed – 1 April 2017 22:42:31
Domino 9.0.1 Feature Pack 8 introduced "NIFNSF", which allows the view/folder index to be moved into a separate file. Let me try to summarize my current experience from my tests and from the field.
There are multiple benefits to moving the index to a separate file.
1. Backup Storage Reduction
First of all, having the index in a separate file reduces the amount of data that you need to back up.
For mail databases the index is around 10% of the database size. With DAOS enabled it is about 30% of the remaining NSF data.
So backup time and total backup storage are reduced.
2. Data Size beyond the 64 GB NSF Limit
The maximum size of an NSF is 64 GB. With DAOS enabled you can increase the logical size of a server-based database by moving attachments to the DAOS store.
With DAOS you can have external attachments up to 1 TB; beyond that size the internal counters might overflow.
But in some cases you still need more than 64 GB for NSF data and the view/folder indexes. With NIFNSF the 64 GB limit applies only to the data in the NSF, without the view/folder index.
3. Performance
NIFNSF is intended to deliver better performance than having all data in a single NSF file.
There is a current performance issue. For mail databases there should not be a big difference.
But for more complex views in applications the performance with NIFNSF might not be as good as without it.
Tests have shown that it can take double the time.
There is a pending fix that might be delivered with an IF for FP8, which should bring performance back to almost the same level as without NIFNSF.
And for FP9 there are optimizations planned for better performance with concurrent operations. Those changes did not make it into FP8.
So for now you might want to wait at least for an IF before enabling NIFNSF for complex applications.
-- Storage Location for NIFNSF --
There are multiple options to configure where to store the .NDX files which hold the NIF data.
What you choose depends on your environment, platform and your requirements.
a.) Have NDX files stored next to your NSF files
b.) Have NDX files stored in a separate folder in the data directory
c.) Have NDX outside the data directory on the same disk
d.) Have NDX stored on a separate disk
There is no one-size-fits-all recommendation. It really depends on the storage situation and platform you are running on.
On Windows, for example, I would store the NDX files at least outside the data directory if you can.
On Linux you often cannot move the NDX files outside the data directory without a new mount point, because the data directory is often a mount point itself.
If you need to increase your storage anyway because the NSF disk is full, having a separate disk (most of the time a virtual disk) makes sense.
This is a good way to get a clean new allocation and it will separate the I/O operations.
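To make the options more concrete, here is a sketch of how the NIFBasePath notes.ini setting (described below) could map to options b) to d). The paths are purely hypothetical examples; option a) is simply the default behavior when NIFBasePath is not set.
NIFBasePath=C:\Domino\Data\nif (b: a separate folder inside the data directory)
NIFBasePath=C:\Domino\nif (c: outside the data directory, on the same disk)
NIFBasePath=E:\nif (d: on a separate disk or mount point)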
-- Enabling NIFNSF on your Server --
Translog
The first requirement is that you are using transaction logging. Circular translog is perfectly OK for that.
And translog is a general recommendation for Domino anyway - for stability, fault recovery and also for performance!
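Transaction logging is normally enabled in the server document (Transactional Logging tab) rather than by editing notes.ini by hand. Just as a hint for checking an existing server, the resulting notes.ini parameters look roughly like this; the path is a hypothetical example:
TRANSLOG_Status=1
TRANSLOG_Style=0 (0 = circular)
TRANSLOG_Path=E:\translog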
ODS 51 or higher
You will need at least ODS 51 for NIFNSF. But I would recommend using ODS 52 for all databases on your server.
The notes.ini setting Create_R9_Databases=1 will ensure the ODS is updated the next time you run a copy-style compact.
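A minimal sketch of that ODS upgrade, assuming your mail databases are in the mail subdirectory (the directory is just an example):
notes.ini Create_R9_Databases=1
load compact -c mail/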
Notes.ini Settings
There are a couple of notes.ini settings. The most important setting, NIFNSFEnable=1, enables NIFNSF on your server.
To store the NDX files in different locations (see options above) you can leverage NIFBasePath=path depending on your preferences.
In addition there is another notes.ini setting, Create_NIFNSF_Databases=1, which ensures that all new databases are automatically NIFNSF enabled.
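Putting the settings together, a server-side enablement could look like the following notes.ini block. The NIFBasePath value is just an example path and the setting is optional (without it the NDX files are created next to the NSF files):
NIFNSFEnable=1
NIFBasePath=E:\nif
Create_NIFNSF_Databases=1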
-- Enabling NIFNSF on a Database --
Once your server is NIFNSF enabled you can start enabling NIFNSF on your databases via compact.
Please take care not to run the compact operation on all databases. We have seen customers who enabled NIFNSF also on the DAOS catalog -- even though this special database has no views.
I would currently start with mail databases only! You just specify the right mail directory.
The normal recommendation is to use
compact -c -NIFNSF ON mail/
This will enable the feature and also move existing indexes out of the NSF.
But if the database is in use, the copy-style compact will not be possible.
Instead you could enable NIFNSF on databases without a copy-style compact and run a copy-style compact later on, either with compact -c or by leveraging the DBMT tool, which you might have configured anyway.
Once the database is on ODS 51 or higher and NIFNSF is enabled, new indexes are created in the NDX file.
But only a copy-style compact will move the existing views to the NDX file.
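A possible two-step sequence for a busy server could look like this, assuming mail databases in the mail subdirectory and assuming the in-place form simply omits -c as described above:
load compact -nifnsf on mail/ (enable NIFNSF in place; new indexes go to the NDX file)
load compact -c mail/ (later: the copy-style compact moves the existing indexes to the NDX file)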
-- Checking NIFNSF --
You can check which databases are already NIFNSF enabled, and there is also a way to see the size of the NDX files.
The most useful command shows only the NIFNSF enabled databases:
show dir -nifnsfonly
shows only the NIFNSF enabled databases
show dir -nifnsf
shows all databases, including the NDX file information
-- Maintaining Databases with NIFNSF enabled --
I have done some tests. Only with a copy-style compact will the NDX file be compacted.
Many customers are still using compact -B for an in-place, space-reducing compact.
There are also other reasons to leverage DBMT, which uses copy-style compacts and space pre-allocation to ensure the NSF is not allocated in a fragmented way.
The copy-style compact will also shrink the NDX if needed. A compact -B did not free any space from the NDX file in my tests.
However, free space in an NDX file should still be reused if it is released from a purged view/folder index during normal runtime.
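For reference, a DBMT run on the mail directory could look like the sketch below. The thread counts and the time range are just example values, not a recommendation:
load dbmt -compactThreads 4 -updallThreads 4 -range 1:00AM 5:00AM mail/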
-- Tuning for NIFNSF --
An NDX file is an NSF file. The index data needs a container. Therefore, if you are running a large server, you have to make sure you have sufficient dbcache entries, because each NDX file will also need a cache entry.
By default the number of dbcache handles depends on the size of the NSF Buffer Pool (which is 1024 MB for 64-bit). The number of cache entries is around 3 times the buffer pool size in MB.
3000 DbCache entries should be OK for most servers. But if your server is already at the limit, you have to increase it.
Here are the relevant server statistics from a current customer example:
Database.DbCache.CurrentEntries = 4498
Database.DbCache.HighWaterMark = 4500
Database.DbCache.MaxEntries = 3000
Database.DbCache.OvercrowdingRejections = 15220
Your CurrentEntries and HighWaterMark should always be below MaxEntries.
And the OvercrowdingRejections should always be zero!
So in this case it would make sense to increase the number of cache entries to 6000 via:
notes.ini NSF_DbCache_Maxentries=6000
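You can check the current values on the server console before and after the change; the statistic names are the ones shown above, and the new limit should take effect after a server restart:
show stat Database.DbCache*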
- Comments [21]
1. Chris Whisonant 03.04.2017 12:30:06 Current Information about NIFNSF
Thanks for the write-up - always helpful information.
You stated this about DAOS and 1TB "Beyond that size the internal counters might overflow."
Will this cause problems with databases accessing the attachments or with corruption of some sort? We have a customer with a database nearing 1TB in size.
2. Vladimir Kulakov 03.04.2017 14:17:16 Current Information about NIFNSF
Great job! Thank you Daniel!
3. Jacek Sz. 04.04.2017 8:51:59 Current Information about NIFNSF
"An NDX file is an NSF file."
A word of advice to everybody who feels tempted by the above words: do not open it via the client on a production server. At least my test server crashes every time I try to open any NDX file.
4. Daniel Nashed 04.04.2017 11:05:52 Current Information about NIFNSF
@Jacek, it is a kind of NSF file and there is no need to look into it!
Also you should get an error message that you are not authorized to access it.
The crash should only happen if you are using full access admin and we have already reported that crash.
-- Daniel
5. Lars Berntrop-Bos 05.04.2017 12:17:29 Current Information about NIFNSF
Another issue: I have tried to stop using the NIFNSF feature by setting
NIFNSFEnable=0
Create_NIFNSF_Databases=0
rebooting and running
load compact -C -nifnsf off
the log states that NIFNSF is turned off for the databases processed.
When I enabled NIFNSF, I used the variable NIFBasePath to specify a path for the ndx files.
Lo and behold, the ndx files are still updated...
So how do I truly stop using NIFNSF?
6. Lars Berntrop-Bos 06.04.2017 8:30:50 Current Information about NIFNSF
Note: reverting the ODS twice with compact -R (capital R needed) put the ODS at 48.
I hid the notes.ini NIFNSF variables by commenting them out.
When upgrading to ODS 52 again, NIFNSF stayed on for the autosave file as_(username).nsf and my local mail file (managed replica). I deleted those, since the first is easily recreated and the second has an up-to-date server replica.
Now, no more ndx files are generated. I'll wait for FP9 before using NIFNSF again. It IS a cool feature!
7. Fredrik Norling 10.04.2017 8:05:46 Current Information about NIFNSF
I understand that moving the index out of the database is good for smaller backups and to get more data into a single database.
But has anybody checked this in a virtual environment? I talked to a VMware guy and he said that a virtualized Domino on VMware would probably not perform better with the indexes outside of the database.
What are everybody's thoughts about this?
8. Michael Bourak 10.04.2017 11:29:34 Current Information about NIFNSF
FP8 IF1 fixes the problem; performance is now "on par" with classic NSF.
Would be interesting to do real benchmarks with NDX files on a separate high-speed disk...
9. Lars Berntrop-Bos 10.04.2017 11:38:44 Current Information about NIFNSF
Has anyone checked whether FixPack 8 IF1 for the Notes client has the same performance fixes for NIFNSF as the Domino FixPack 8 IF1?
10. Martijn de Jong 10.04.2017 16:24:36 Current Information about NIFNSF
I checked your statement that indexes take about 10% of the database size on an average mail server here. The result was that the indexes take up about 1.8% of the disk space of the database.
11. Lars Berntrop-Bos 10.04.2017 20:27:05 Current Information about NIFNSF
@Martijn:
This will vary according to the number of users using their mail file on the server (like users without a managed replica, or iNotes users) versus offline, like Traveler users, managed replica users and local replica laptop users. In the latter case it is not uncommon to gradually see the mail files end up with no indexes at all. In the online case, the indexes used will be refreshed often enough to never be discarded.
12. Martijn de Jong 11.04.2017 9:09:17 Current Information about NIFNSF
@Lars:
Good one. I think many of the users here indeed use a managed replica and iNotes is not really used here. It's good to know though that you can't give a default percentage for the index sizes as it really depends on how mailfiles are used within your company.
13. Daniel Nashed 11.04.2017 21:55:45 Current Information about NIFNSF
@Fredrik, once the final performance fix ships with FP9, the performance will be better than before in any environment.
There is nothing special about virtual environments that would keep NIFNSF from improving performance.
Usually even in virtual environments the VMware admins tell you that you should put everything on one file system, because in the backend it is on the same RAID 10 anyway.
But in larger environments it makes sense to split data into separate VMDKs for translog and NSF, and maybe put NIF and FT together in one VMDK. Depending on your environment, even DAOS on a separate VMDK can make sense.
Some customers put DAOS on a NetApp "share" for optimization.
Number of concurrent I/Os and size can have an effect. So if you have a large server I would tend to put data on separate VMDKs. And in that case NIFNSF can help you to split data for size and concurrent I/Os.
-- Daniel
14. Matt 14.04.2017 14:41:17 Current Information about NIFNSF
If the NDX is a Notes database, does that mean it is limited to 64 GB too?
15. Daniel Nashed 18.04.2017 21:05:31 Current Information about NIFNSF
The current limit of the NDX file is not yet documented.
My understanding is that the limit is beyond 64 GB. But we need an official support statement from IBM.
The internal counters and the structure should be able to handle more than 64 GB because NIF works differently than NSF.
Does anyone have a database that could benefit from an NDX limit above 64 GB?
16. Alexander Kuntsman 25.04.2017 13:02:00 Current Information about NIFNSF
I made the following test:
1) Create a mail database
2) Set the path to c:\domino\data\nif
3) Enable NIFNSF (3 index files were created in c:\Domino\data\nif\mail)
4) Delete the database
The FT index has its own folder for every database, but for NIFNSF all index files are kept in the same folder. It is impossible to identify the relation between a database and its index files. Only one has the same name as the database; the two additional files have numeric names. So we may end up with a lot of junk if a large file is deleted.
Any idea? Maybe I did something wrong?
17. Pierre Lundqvist 22.03.2019 13:38:48 Current Information about NIFNSF
I would revisit breaking out the view indexes. The NDX files correlate perfectly with the names of the applications. When deleting applications, the NDX files are deleted as well. Opening applications with external view indexes works great performance-wise for users, depending on how many views there are in the applications. The disk savings are not great, especially now that hard drives and disk space are cheap compared to half a decade ago.
18. Alexander Novak 29.01.2020 17:29:32 Current Information about NIFNSF
Hi Daniel, did you ever move a database with NIFNSF enabled to a new Domino server via file copy (e.g. a switch to new hardware)? And what happens if you restore a database without the NDX file? Will the database create a new NDX file itself?
19. Daniel Nashed 30.01.2020 11:27:13 Current Information about NIFNSF
Hi Alex,
if you do not have the NDX file, it will be recreated if the server has NIFNSF on. If not, the server will create the NIF in the NSF again.
You don't need to copy the files when you move to a new server.
When you restore a database you are in the same situation. The restore will write a new database.
And if you replaced a database, you hopefully deleted it previously using Domino and not a file-system delete. This should also remove the NDX.
I have not done that with a larger server for a customer yet. But this is how it is designed to work.
-- Daniel
20. Massimo Nadalin 19.11.2021 14:09:25 Current Information about NIFNSF
Hi
I recently tested it out on Domino V12 on a CentOS system, to compare what the difference with or without NIFNSF is.
It turns out that there is no HUGE difference.
I measured:
- First view index time
- GetDocumentByKey on about 100K look-ups
- GetNextDocument on about 20K documents
After cleaning the O/S cache and restarting the Domino server, NIFNSF looks a bit faster (about 5% faster than NSF).
The GetNextDocument loop also took about 8-9% less time with NIFNSF.
The 100K look-ups took about the same time with each method.
When the Domino server has been running for a while, so that the O/S and Domino caches are there to help, the scenario changes in favor of NSF: view build time is now 4% faster and the GetDocumentByKey loop 7%. The last test took almost the same time with both methods.
I was the only user playing on the server, so there might be some (relevant?) difference with concurrent users on it and/or different databases (nr. of documents, nr. of views...).
As a result I would not choose NIFNSF for performance reasons, not because it is slower (it's not, though not really faster either), but for the other advantages depending on the systems and the needs (backup, file size, view indexes on different partitions/drives, etc.).
21. Fredrik Norling 17.05.2024 14:30:08 Current Information about NIFNSF
Can you ignore the NIFNSF NDX files completely in the backup, or does the daoscat_nsf.ndx file need to be backed up?