nshDelLog - Domino Deletion Log Annotation and Backup V 0.9.0
Daniel Nashed – 16 October 2018 17:34:57
Finally I have a first version of my deletion log application. It took a while because I wanted to add some flexibility and make it configurable.
I am not officially releasing it to everyone yet because I want some feedback and some beta testing first.
If you want a pre-release version let me know.
Here is the short documentation that I wrote.
This version only works with the Domino V10 GA release and no longer with the beta builds, because the length of each custom value has been added as an extra column.
So make sure you remove your deletion logs from beta testing :-)
-- Daniel
nshDelLog - Domino Deletion Log Annotation and Backup V 0.9.0
Copyright 2018, Nash!Com - Daniel Nashed Communication Systems
Short Documentation Nash!Com Deletion Log Application
Quick Start
- Copy the template to your server and ensure a proper template ACL (the default uses LocalDomainAdmins)
- Sign the template --> there is a Sign DB action for the current user.id or the server.id via AdminP (needs unrestricted agent permissions)
- Create a database from the template --> suggested default name: nshdellog.nsf, but you can choose any name
- Enable the agent via the Config Profile (the Status field enables the agent)
- Review Standard Settings
Introduction
Deletion Logging is a new feature introduced in Domino 10 to track deletions of documents and design elements.
All types of deletions are stored in a central log file "delete.log" in the IBM_TECHNICAL_SUPPORT directory of your server.
This logging is implemented at a low database level and is activated per database. The settings are stored inside the database and do replicate.
Enable Deletion Logging per Database
Right now the only official way to enable deletion logging is to use the compact servertask with the -dl option (see examples below).
You can add up to 4 additional fields per database that you want to log. Those fields could differ depending on the type of database and the logging purpose.
The log distinguishes between HARD deletes and SOFT deletes and also allows you to trace the deletion of design elements.
The compact operation looks like this (example):
load compact mail/nsh.nsf -dl on "$TITLE, Form, Subject, DeliveredDate, PostedDate, FullName"
Tip: You can specify more than 4 fields but only the first 4 items found are logged.
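To turn deletion logging off again for a database, the same option should work with "off" (based on the on/off syntax of the -dl option; verify on a test database first):
load compact mail/nsh.nsf -dl off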
Log file Location and Content
After you have enabled deletion logging, each kind of delete is recorded in a central text file on the server. IBM has chosen a text file for performance reasons.
Here is what the standard log looks like, and how it looks with the custom log fields.
In my example I am interested in the Subject and DeliveredDate, and also the Form of the document.
In addition, for design elements I am interested in the design element name stored in $TITLE.
Those are the kinds of fields I would add for a mail database, for example. The choice of $TITLE for design document deletes can be quite helpful, because the note class alone might not be sufficient to identify the deleted design element.
The resulting log files are stored in the IBM_TECHNICAL_SUPPORT directory and look like this:
"20181016T140343,47+02","del.nsf","C125831B:004BC903","nserver","CN=Daniel Nashed/O=NashCom/C=DE","HARD","0001","08B31722:E41F1971C1258323:0068EF49","Form","4","Memo","Subject","19","Test UNDO Updated2"
The name of the file has the following syntax:
delete_<servername>_<date>@<time>.log
Like other log files, the currently active log file has the default name delete.log, and its first line contains the name the file will be renamed to when the server is restarted (similar to console.log).
Here is a short description of the columns (a small parsing sketch follows the list). The last columns depend on your configuration. The list is comma-separated, the values are quoted, and quotes inside values are escaped accordingly.
- Time and date in the server's time zone, with the server's time zone offset at the end
- Database Name
- Replica ID
- Process which deleted the note
- User who deleted the note
- Delete type: HARD or SOFT, or RESTORE for a restored soft delete
- NoteClass of the document (for example 1 for a document, 8 for a view/folder)
- UNID of the document
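For illustration, here is a minimal LotusScript parsing sketch. This is not code from the application; it simply splits one log line into its columns and assumes the quote-comma-quote sequence only occurs as the column separator:

Function SplitLogLine(logLine As String) As Variant
    Dim quote As String
    Dim fields As Variant
    Dim i As Integer
    quote = Chr$(34)
    ' strip the outer quotes, then split on the quote-comma-quote separator
    fields = Split(Mid$(logLine, 2, Len(logLine) - 2), quote + "," + quote)
    ' undo the escaping of embedded quotes ("" -> ")
    For i = 0 To UBound(fields)
        fields(i) = Replace(fields(i), quote + quote, quote)
    Next
    SplitLogLine = fields
End Function

The first 8 entries of the returned array are the standard columns listed above; any remaining entries are the name/length/value triplets of the custom fields described below.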
Custom Log Fields
After those standard fields you see the up to 4 custom fields that you optionally specified with compact.
For each field, the first column gives you the name of the field, the second column the length of the value, and the following column the value itself.
The text elements and the total log line are limited in size. The current limits are 400 bytes per item and 4K for the whole log line.
This should be sufficient in most cases because we only need it to find the document.
The field types that can be used are Text, Text_List, RFC822_Text, or Time.
And those fields have to be present in the database when logging is enabled!
The current log file encoding is LMBCS (Lotus Multi Byte Character Set), which is the internal representation that Notes/Domino has used since day one to store text.
But it would be difficult to read this charset encoding outside Notes. There are plans to support other formats in a future update. But for now it is LMBCS.
Delete Log Application
Functionality
This deletion log application has the following main functionality:
- Deletion Log Annotation
Periodically read the deletion logs on a server and annotate them into a Notes database on the server
This import collects information from the current delete.log file and also recent delete log files renamed after a server restart
- Manual Import of Delete Log Files
You can also manually import log files into a local or server based log database for annotation by simply selecting the log files.
- Central Backup of finalized Delete Log Files
Collect completed delete log files and store them in a centrally located Notes database as attachments for archiving those log files.
Once a deletion log file is saved to the database, it is cleaned up on disk.
Installation Instructions
- Copy the template to your server
- The Config Profile also contains an action menu to sign the database with your current user.id or with the right server.id via AdminP.
In addition there is another button to check the status of the AdminP request. It opens the first response document of the AdminP request once the request has been executed.
- Create a new database from template.
- Ensure the agent can be executed by properly signing the application with an ID that has unrestricted agent permissions.
- By default the periodic agent is scheduled every 5 minutes on all servers (Run on *) and does not need to be configured.
- The agent is disabled by default. It will be enabled when you set the status in the Config Profile to "on". You can verify the schedule with the console command shown below.
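Tip: After enabling the agent you can check on the server console that the agent manager has picked it up:
tell amgr schedule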
Deployment Recommendations
You should install separate copies, with different replica IDs, on your different servers.
In addition you could have a central log database for archived delete log files.
The default location for those backup log files is the current database.
But you can change the location in the configuration profile depending on your infrastructure requirements.
Housekeeping for the Delete Logging Database
Deletion logging can generate a lot of log documents. You should think about how to remove those documents after a while.
This can be implemented by setting a cut-off delete interval for the database (e.g. 30 days).
You could still manually import backed-up log files later on in case you need to analyze older log data.
The config profile contains settings to specify the cut-off interval and also the cut-off delete flag.
Implementation
The application is written in LotusScript and consists of the following main components:
- One script lib with the main logic
- One agent which runs periodically on a server to annotate and collect the log files
- One agent which allows manual log file annotation
- Configuration Profile
- Forms and Views to show the log data
There is intentionally no navigator for the views, so that you can easily add new views as needed for your evaluations without having to deal with a navigator.
The agent runs periodically on the server to annotate the current delete.log file and also to annotate and backup older log files.
For the current log file "delete.log" the annotation is incremental: the file is read and annotated, and the last position is stored in notes.ini.
The notes.ini setting has the following format: "$NSHDELLOG_DELETE_" + log file name and stores the last position.
Example: $NSHDELLOG_DELETE_DOM-ONE_2018_10_06@12_58_30.LOG=18977
The name is taken from the first log line, which already contains the final file name while the log is still named delete.log; this allows one last processing pass after the log file has been renamed on restart.
After reading a renamed log file, the log is centrally archived, deleted from disk, and the notes.ini entry is removed.
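As a sketch of what such an incremental read could look like in LotusScript (an assumption for illustration, not the application's actual code):

Sub AnnotateIncremental(filePath As String, logName As String)
    Dim session As New NotesSession
    Dim iniKey As String
    Dim pos As Long
    Dim txt As String
    Dim fileNum As Integer

    ' GetEnvironmentString/SetEnvironmentVar prepend the leading "$";
    ' logName is the final file name taken from the first log line
    iniKey = "NSHDELLOG_" + logName
    pos = CLng("0" + session.GetEnvironmentString(iniKey))

    fileNum = FreeFile
    Open filePath For Input As fileNum
    If pos > 0 Then Seek fileNum, pos
    Do While Not EOF(fileNum)
        Line Input #fileNum, txt
        ' ... annotate txt into a log document here ...
        ' (a real implementation would also handle a partially written last line)
    Loop
    pos = Seek(fileNum) ' remember where we stopped
    Close fileNum
    Call session.SetEnvironmentVar(iniKey, CStr(pos))
End Sub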
Entries that generate an error during parsing are stored in separate documents listed in a separate view.
The application also has separate views for run-time error log information.
And also for run-time statistics like bytes and lines read (mainly to see if the application also works with larger amounts of data).
Configuration
You find the configuration in the Config Profile with the following configuration options.
- Status : On|Off
Enables the annotation and collection of logs on the server.
When you save the profile it will automatically enable/disable the scheduled server agent (see the sketch after this list).
- Log Backup: On|Off
Enables collection of finished delete log files (all files matching delete_*.log)
- Log Level: Disable|Enable|Verbose|Debug
Enables logging
- Import Charset: (default LMBCS)
The currently used charset is LMBCS. It might change in the future. This setting changes the charset used for reading the import file.
- Log Backup Server:
Server location for log backup files
- Log Backup Database:
Database location for log backup files
- Remove Log after Backup: On|Off
Determines if finished delete log files are removed from the server once they have been stored in the backup database
- CutOff Interval: (default: 30)
Cutoff interval set for the standard log database (the current database, not the databases specified dynamically by formula in the Advanced Configuration)
- CutOff Delete: On|Off (default: On)
Enables cutoff delete for the standard log database
Those settings are automatically updated when the Config Profile is saved.
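A sketch of how the profile save could toggle the agent (profile and agent names here are hypothetical placeholders, not the actual design element names):

Dim session As New NotesSession
Dim db As NotesDatabase
Dim profile As NotesDocument
Dim enable As Boolean

Set db = session.CurrentDatabase
Set profile = db.GetProfileDocument("Config") ' hypothetical profile name
enable = (LCase(profile.GetItemValue("Status")(0)) = "on")

ForAll agent In db.Agents
    If agent.Name = "PeriodicLogAnnotation" Then ' hypothetical agent name
        agent.IsEnabled = enable
        Call agent.Save
    End If
End ForAll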
Advanced Configuration
If you have special logging requirements you can use the advanced configuration and specify a formula for the log database.
The result of the formula will be used as database name for dynamic logging.
If the database does not exist, it will be dynamically generated from template. You can specify the location of the template.
You could use different databases for different processes or create a new log database every month (see example below).
Each database can have a different cutoff-interval. The replica cutoff-settings are checked and set every time the agent starts.
Using multiple databases should not lead to a performance impact because the database handles are cached inside the application.
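A minimal sketch of that cutoff handling via the NotesReplication properties (an assumption for illustration, using the database name and values from the example below):

Dim session As New NotesSession
Dim db As NotesDatabase
Dim rep As NotesReplication

Set db = session.GetDatabase("", "nshdellog_2018_10.nsf", False)
If Not (db Is Nothing) Then
    Set rep = db.ReplicationInfo
    rep.CutoffInterval = 30 ' days, from the CutOffInterval part of the formula result
    rep.CutoffDelete = True ' from the trailing CutOffDelete flag ("1")
    Call rep.Save
End If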
The formula is evaluated on the log document before it is saved, and the result is used as follows:
"" (empty string)
Log to the current database.
This is also the default if the log database cannot be created.
"DISCARD"
Log entry will not be written.
Local database name string
This string is used as the database name. You can specify the name, title, cutoff interval, and also whether documents should be cut off after a certain time.
The format uses "|" as the delimiter. You can use any type of formula, checking any field of the log document:
DatabaseName|Title|CutOffInterval|CutOffDelete
Example:
datestr := @Text(@Year(@Today)) + "_" + @Text(@Month(@Today));
"nshdellog_" + datestr + ".nsf" + "|Delete Log " + @Text(@Year(@Today)) + " / " + @Text(@Month(@Today)) + "|30|1";
Result:
nshdellog_2018_10.nsf|Delete Log 2018 / 10|30|1
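If you wanted one log database per deleting process instead, a formula along these lines could work (the item name Process is only a hypothetical placeholder here; check whatever field your log documents actually store the process name in):
@If(Process = "nrouter"; "nshdellog_router.nsf|Delete Log Router|30|1"; "");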
Change History
V 0.9.0 / 16.10.2018
Initial Beta Version