Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...

    Domino Start Script 3.2.0 with Docker Support

    Daniel Nashed  2 December 2018 21:31:29
    I finally released start script version 3.2.0. Just as it was almost done, I got a request to add Docker support :-)

    Domino on Docker works a bit differently than other Linux implementations.
    The current versions sort of use systemd, but it is not completely implemented.

    So the right way to start and stop Domino is to use the entry point of the Docker container.
    The Start Script contains an entry point script that handles the start and stop of the server.
    It works in combination with the rc_domino_script.

    After the server is started, the entry point script waits for the shutdown in a loop.
    When a container is shut down, a SIGTERM signal is sent before a SIGKILL is sent.
    For a proper shutdown, the default shutdown time needs to be extended from 10 seconds to a longer time like 60 seconds with the --time option of the "docker stop" command.
    The entry point catches those signals and does a proper shutdown.
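    The signal handling used by such an entry point can be sketched like this (a minimal, runnable sketch, not the shipped script: the real entry point calls the start script's stop logic instead of the echo, and the self-sent SIGTERM here only simulates what "docker stop" sends):

```shell
#!/bin/sh
# Sketch of the entry point signal pattern: trap the container stop
# signal, run a clean shutdown, then exit.

shutdown_domino()
{
  # In the real entry point this would invoke the start script's stop logic
  echo "clean shutdown done"
  exit 0
}

trap shutdown_domino TERM INT

echo "server started"

# Simulate "docker stop": send ourselves the SIGTERM Docker would send
kill -TERM $$ &

# Wait in a loop until the signal arrives
while :
do
  sleep 1
done
```

    Remember to combine this with "docker stop --time=60 <container>" so the shutdown is not killed after the default 10 seconds.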

    In addition to this the new install_script routine is Docker aware and installs the entry point along with the start script.

    All components and the new functionality are documented in the readme, which contains all the details.

    The start script is now distributed under the Apache 2.0 License to allow including it in other projects, like distributing it with a Docker script ;-)

    You can request the new version on the start script here ->

    -- Daniel

    Domino 9.0.1 FP10 IF5 released

    Daniel Nashed  2 December 2018 15:44:55
    There is a new IF5 which has just been released.
    The fixlist shows 5 fixes and some of them could be important for you.
    There are two NIFNSF fixes which appear to be critical.
    Two others are about issues in special conditions when replicating databases.
    And one important fix for compact.

    In contrast to the previous IF4 which I did not blog about, this one makes sense to be installed even if you are not using NIFNSF.

    -- Daniel

    Fix introduced in release / Additional Information

    Interim Fix 5 for Domino 9.0.1 Feature Pack 10
    MSKAB56HC9 Fix frequent crashes in GsKit on 9.01FP10 Windows Only
    WSPRAWTJ5N Fix an issue with multiple replicas processing the same connection doc at the same time  
    ROBEAXPFDP Fix an issue with Pull Replication Failing When Run At The Same Time As A Pull/Push  
    OSAMAXAGGJ Fix a server crash when rmflush access the .ndx file while compact was finishing up  
    PMGYAKWECA Fix crashes affected by the use of NIF/NSF in databases  
    OSAMAVSQQL Fix a long held lock when compact is running  
    Interim Fix 4 for Domino 9.0.1 Feature Pack 10
    DWONB2HL4Q DST 2018 - Brazil - Fix Calendar entries which are off by one hour for two weeks
    MKENB3MPFY Memory overwrite found in SGML parser causing HTTP/Notes client to crash when trying to render a document
    MKENB3PEBF Backend lsxbe open stream should check for null file name before passing it to OS routines to prevent crash  

    DNUG IBM Domino Mobile Apps Webcast and Slides

    Daniel Nashed  5 November 2018 13:29:41
    There was a very interesting webcast hosted by DNUG last week where IBM presented the new iPad client.

    The number of participants was limited by the webcast platform that was used.

    But the slides and also the webcast replay have been made public.

    This is a must see presentation and webcast and shows a lot of examples and many details!

    The webcast is in German but the slides are in English.

    Huge thanks to Erik Schwalb!!

    And also thanks to the DNUG Mobile Group (Detlev Poettgen) for organizing this webcast.

    -- Daniel


    Webcast Replay:

      Official IBM Domino Mobile Apps Beta finally started

      Daniel Nashed  4 November 2018 14:45:39
      Now that the beta is public, we can finally show it and you can have a look on your own.
      I wasn't sure I was allowed to speak about it in public. That's why I did not post anything yet.

      Here is the link to the beta registration -->
      And a link to the beta documentation -->

      When you apply for the beta, you get a TestFlight invitation from IBM which allows you to download and install it.

      Update: The registration changed a bit after the beta went live.
      The pre-registered users have been queued and we got email notifications.

      When you register today you should follow the link directly from your iPad. Once you have filled out the form you are directly redirected to the TestFlight app.

      In the app you don't have to specify a redeem code. You just close the keyboard and scroll down to continue.

      Thomas has a screen print in his blog post showing how it looks ->

      The client for the iPad is a native application that is intended to run basic client applications which are written in LotusScript and @Formulas.

      Java code isn't supported, because Apple does not allow Java code on iOS devices.

      The application is a separate port based on the client binaries and IBM/HCL are also intending to work on an Android version with the same functionality.

      The app is intended to run all your existing apps and brings a rapid application development option for new apps to your iPad!

      Two Apps - Don't be confused!

      There will be two different offerings that will be technically the same app.

      1. IBM Domino Mobile Apps

      This app is intended for customers in maintenance with entitlement to the new client

      2. HCL Nomad

      This app is a separate paid offering from HCL

      Both applications are technically the same. But they will be offered to the market in different ways.

      So the one that is available for beta testing is IBM Domino Mobile Apps.

      Bringing your Notes.ID to the app

      The best way is to leverage the ID Vault. The client is ID Vault enabled and you can deploy your ID by specifying your name and the correct password.

      If you don't have an ID Vault, you have to upload your ID via iTunes, which is quite complicated and might only work for admins having a first look.

      So you should really use ID Vault. In fact I configured ID Vault just for that in my one person production environment (managing the download count manually in my case).

      How does it look & feel

      From the first builds we have seen, this looks great already! There is some functionality still pending, like the action/agent menu. But besides that it's already awesome!

      You can replicate applications, and you see all the "hidden" features work that have been built into @Formulas from the beginning.

      I have played with it, and I am accessing some of my more complex applications; it already works very well! Also the newer Notes port encryption has been added, which is enabled on my external server.

      You should have a look into it on your own and provide feedback to IBM through the app.

      There is a menu entry "Report a Defect" which will generate an e-mail.

      Results from some simple Formula-Tests

      Here are some interesting results from @formulas. This really shows that this is a real client, not just a simple app.

      Almost all the @functions I tested worked.


      /var/mobile/Containers/Data/Application/86839C84-8B1A-49D7-ACE9-17EC63747EE6/Library/Preferences/Notes Preferences

      @GetMachineInfo ([EnvVariable]; "Directory")

      /private/var/mobile/Containers/Data/Application/2EC1C196-0F72-4546-ABAA-08120167278C/Library/Application Support

      @GetMachineInfo ([MachineName])


      And you can also try the following two.

      The workspace made it into the app but is hidden and not intended to be used right now.
      There are discussions whether the new recent applications page or the workspace is better to use. I would say this depends on the user's preferences.

      @Command( [WindowWorkspace] )


      Works as expected and returns a database.

      @Prompt([LocalBrowse]; "Select a file"; "1");

      File operations like this ask to take a photo or to select an existing photo.

      In both cases a filename is returned. So there is no direct way to select files on your device, which looks like an Apple restriction.


      You really should have a look into it on your own and get your own impression!

      Everyone has their own view of what to test. I am usually trying to figure out how it works in the back-end.
      But in this case I am also interested in how it looks and feels.

      I am interested in your first feedback as well!

      IMHO this is a great new way for application development on an iPad!

      -- Daniel

        Domino Start Script V3.2.0

        Daniel Nashed  4 November 2018 08:38:18

        There hasn't been a new release of my Linux/Unix Start Script for a year.
        I still get many download requests, but there is not much feedback and there are few feature requests.

        Most of the new functionality added comes from my own requirements (for example to simplify operations) or direct customer requests.
        I am always trying to understand the underlying requirement and add it in a flexible way, so that many admins can benefit from it.

        The new version I am working on simplifies configuration with systemd. You don't have to change the rc_domino file any more.
        And it comes with a simple installation script that copies the files for you and sets the right permissions.

        Sadly systemd does not allow using variables in the way I would need to read the configuration from a config file.
        So the install script will only work for default configurations where you use the notes user and group with a single partition.
        I wanted to avoid a complex configuration script that replaces variables in the start script files.

        Besides that, I also added a couple of commands that make your life easier. You can now list or edit all the files involved, including the systemd service file.

        And you can also show the log of the server. That's one command that was missing for me.
        Earlier I added lastlog, which shows the last n log lines and lets you redirect the output, for example, to a grep command.
        Now you can do whatever you like with the log by specifying an additional parameter behind log (without a parameter vi is used).
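        Based on the description above, usage could look like this (a sketch; the line-count argument for lastlog is an assumption on my part):

```
rc_domino log                           # opens the output log (vi by default)
rc_domino log more                      # opens the output log with 'more'
rc_domino lastlog 100 | grep -i error   # filter the last 100 log lines
```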

        And last but not least I added another way to schedule compact operations.
        You can now configure a one time compact for example after a maintenance boot.

        See detailed feature list below for more information.
        If you have other ideas and requirements that should be in the standard, let me know.

        I am currently testing the new version and will update the start script page soon.

        -- Daniel


        New Features

        New command 'log' -- displays the start script output log.
        This command can be used with additional options like specifying the command to open the log (e.g. 'log more').
        See log command description for details.

        New command 'systemdcfg' to edit the systemd configuration file

        New command 'compactnextstart' which allows you to configure a one time compact at next startup.
        For example after an OS patch day. The new command allows you to enable/disable/display the settings.
        Check the command documentation for details.

        New config variables DOMINO_DOMLOG_DB_DAYS, DOMINO_DOMLOG_DB_BACKUP_DIR which can move domlog.nsf to a backup directory.
        This works like the log.nsf backup/rename introduced earlier.

        New config variable DOMINO_UMASK, which allows you to set the umask used when starting the server.

        Show the umask used in the startup log of the server (along with environment and ulimits etc).
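        Put together, the new settings could look like this in the start script config (a sketch; the values are examples, not defaults):

```
DOMINO_UMASK=0077
DOMINO_DOMLOG_DB_DAYS=7
DOMINO_DOMLOG_DB_BACKUP_DIR=/local/backup/log
```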

        Separate, simple script 'install_script' that will allow you to install the start script for default configurations.


        Start script is now prepared for systemd without changing rc_domino script (adding the service name).

        Enable the configuration for systemd in the rc_domino script by default and check if systemd is used on the platform.
        This allows installing it on servers with systemd or init.d without changing the rc_domino script.

        Changing deployment from zip to tar format. So you can unpack the files directly on the target machine.
        This also means you can run the install_script without changing the permissions.

        Changed the default location for the Domino PID file for systemd from /local/notesdata/ to /tmp/
        This allows you to change the data directory location without changing the pid file name in domino.service and config.
        But this means for multiple partitions you have to change the name for each of the services.

        I tried to dynamically read parameters from the config file in domino.service.
        There is an EnvironmentFile= statement in systemd services to read configuration files.
        But using variables only works for parameters passed to an ExecStart/ExecStop command, not for the names of
        the scripts invoked. Also it is not sourcing the parameters but reading them directly.
        So there seems to be no way to read the config of domino.service from the config file,
        and I had to "hardcode" the filenames.
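        For illustration, a hardcoded unit file along those lines could look like this (a sketch, not the shipped file; paths, user name and service type are assumptions):

```ini
[Unit]
Description=IBM Domino Server
After=network.target

[Service]
Type=forking
User=notes
PIDFile=/tmp/domino.pid
ExecStart=/opt/nashcom/startscript/rc_domino_script start
ExecStop=/opt/nashcom/startscript/rc_domino_script stop
TimeoutSec=300

[Install]
WantedBy=multi-user.target
```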

        Domino Database Maintenance and Fragmentation

        Daniel Nashed  2 November 2018 14:19:30

        I am currently looking into this for a customer. And I think most customers are in the same situation.
        What I would assume is that most of you are using storage optimization like compression, DAOS and NIFNSF already.
        Sorry for the long post, but I think this might be helpful, including the details about file fragmentation at the end of the blog post.
        Also my older but still relevant compact presentation (dnug2016_compact.pdf) might be helpful. Let me know what you think...

        -- Daniel

        Current situation in most Domino environments

        The current situation in most Domino environments is that the classical compact and maintenance operations are used.
        Classically you have configured something like compact -B -S10.
        Either nightly compacting databases with more than 10% space or just before your backup in case you have archive style translog.

        This should recover free space and reorganize your databases to some extent.
        But even in-place compact, which also reduces file sizes, isn't the perfect solution.

        Also, if you run a compact -B on system databases or other highly used databases, those databases are a lot slower during the compact because of extensive locking.

        What Compact -B does

        Compact -B is an in-place compact that allows users to continue to work with the database and also frees up remaining space in the database by truncating free space at the end of the database.
        If the database is in use this compact can be quite slow and will slow down your user/server operations in this database during the compact.
        The compact generates a new DBIID. And the database will need a new backup. This is why these compact operations are usually scheduled before a full backup (in case archive style translog is used).

        What Compact -B doesn't do

        The existing file is optimized, but only to a certain extent. Also, this does not change the file fragmentation of the database file.

        It would make more sense to use a copy style compact to reorganize the database completely.
        A copy style compact is also what the database team is recommending and what IBM is using in the cloud.

        Why isn't Compact -C the right solution

        The standard copy-style compact does reorganize the database from NSF point of view.
        A copy style compact takes a database off-line, generates a new temporary file and copies notes, objects etc into the new file which will be finally renamed into the original database filename.

        In this type of compact the new file is allocated small at OS level and is increased step by step until compact has completed the copy action and renames the .tmp file to NSF.

        And this usually leads to a fragmented NSF file on OS level after the compact - especially on NTFS/Windows.
        The OS tries to optimize those small allocations and will end up writing blocks into smaller free spots in the file-system.

        Extra Tip: Make sure .tmp files are on the exclusion list of your Anti-Virus solution.

        DBMT -- Database Maintenance Tool

        The DBMT servertask replaces operations for compact, updall and other tasks and can be used in a variety of ways.
        You can configure it to run without compact every day and every weekend with compact before the backup begins.

        It can be also scheduled in a way that it only compacts databases after n-days and also lets you specify a time window for your compact operations.

        In contrast to the standard compact task, DBMT determines the size of the new compacted database and will allocate space in one large chunk!
        This allows the OS to optimize where the database is allocated and you end up with a very small number of fragments on disk.

        Via notes.ini, e.g. DBMT_PREFORMAT_PERCENT=120, you can increase this allocation to have 10-20% more free space in the database.

        This would ensure that you don't need new small allocations in the database for creating a new note or object.

        The extension granularity of the database currently is still quite small. So additionally you end up with a new allocation for most of the document updates.

        Having free space in the database allows faster allocation and less fragmentation.
        Furthermore, if you have DAOS enabled, 10-20% of additional NSF disk requirement isn't a large overhead in comparison to the benefit gained in optimized file allocation.

        I have added some OS specific information about fragmentation for Windows and Linux at the end for details.

        DBMT also has some limitations. For example, system databases are not compacted and you cannot use the free space option.
        But it offers other flexible options that make sense. You can specify the number of days between compacts (databases are skipped if they have been compacted recently).

        And with the pre-alloc you are specifying the free space in the database anyway.
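        A DBMT invocation combining these options could look like this (a sketch; the thread counts and time window are example values, the switches are the documented DBMT options):

```
load dbmt -compactThreads 4 -updallThreads 4 -compactNdays 7 -range 1:00AM 5:00AM
```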

        Separate Maintenance for system databases (on Linux optimized at startup)

        System databases are always in use. But it is important to have a copy-style compact for those databases as well.
        This can only be done when the server isn't started. That's why I added special pre-startup compact operations to my Linux start script.

        I have separate options for log.nsf and for other system databases in the start script. The current configuration has new examples leveraging DBMT.

        But this wasn't completely flexible, because it was always executed at startup, manually or with a "restartcompact" command.
        So I added another option today for one of my customers which might be also useful for your environments.
        The customer is doing regular Linux patching and they are rebooting the Linux machines afterwards.

        I added a new one-time start compact option for system databases. It uses the already available compact options and is triggered by an extra file in the data directory.
        It can be enabled and disabled via a start script command.

        New Feature in the next Domino Start Script

        Here is what I planned for the next version. The idea is to have a flexible way to control when a start should compact databases.

        It can also be automated if you have central patch management. It's just a file that needs to be created to trigger the compact at start-up.

        So this provides more flexible control over system database compacts, and it ensures the OS admin can run those compacts without knowing the exact syntax.

        -- snip --

        compactnextstart on|off|status


        Allows you to configure a one-time compact of databases at next startup.

        This functionality controls a text file 'domino_nextstartcompact' in your data directory.

        If this file is present, the compact operations specified via


        The 'domino_nextstartcompact' will be deleted at next startup.

        This is intended, for example, to be used after a planned OS reboot or OS patch.

        And it avoids separate steps executed by the OS level admin.

        compactnextstart on  --> enables the compact at next startup

        compactnextstart off --> disables the compact at next startup

        Specifying no or any other option will show the current settings.

        -- snip --

        Summary and Recommendations

        You should really look into DBMT for normal operations and also system databases!

        The command lines for DBMT are quite flexible. I have presented multiple times about those features which have been introduced with Domino 9.

        But they are still not used widely. I would really recommend you have a closer look into DBMT.

        I have had my own tool "nshrun" for years, which does what DBMT does plus a couple of other more flexible options.

        But in general DBMT is a great out of the box optimization for your database maintenance.

        I have added extracts from an old presentation below as an attachment for details about all compact and other maintenance options.

        There are some parameters to set, and there are specific switches to enable threads for compact, updall and other options for DBMT.

        If you are interested in fragmentation details, check the following abstract as well.

        Appendix - File Fragmentation

        Fragmentation of file-systems has different effects on Windows and Linux.

        Many customers use tools on Windows to reorganize their files.

        But as discussed above, it makes more sense to use Domino compact with pre-allocation to create the files with low fragmentation and keep the fragmentation low.

        The performance impact is hard to measure. But for sure your backup operations and also some Domino operations will be faster with a well maintained NSF file.

        We are looking at maintenance of the data in the NSF file and also the file itself at the same time.

        So I would not use tools to defrag your Domino files today with DBMT available.

        But the following abstract gives you an idea how to analyze file fragmentation for Windows and Linux.

        The well-known Contig tool from Sysinternals allows you to analyze and defrag files.

        I am using it to just analyze the fragmentation level.


        [See Contig reference for more information and download]

        You can see that even my local log.nsf on my client is quite fragmented.

        D:\notesdata>n:\tools\Contig64.exe log.nsf

        Contig v1.8 - Contig

        Copyright (C) 2001-2016 Mark Russinovich



          Number of files processed:      1

          Number of files defragmented:   1

          Number unsuccessfully procesed: 0

          Average fragmentation before: 10592 frags/file

          Average fragmentation after : 10592 frags/file

        For a Notes client no DBMT is available, and once databases have the right ODS level, there is no automatic copy-style compact trigger anyway.

        But in this blog post my main focus is on the server side where you could leverage DBMT.


        On Linux I never looked into fragmentation before. But there are tools available to analyze fragment levels on Linux as well.

        filefrag allows you to see the number of fragments. If you are interested in details, run filefrag -v on a single NSF.

        But I was more interested in seeing the fragmentation of my databases.

        The following command line gets the fragment counts for all NSF files and lists the 40 files with the most fragmentation.

        On my secondary server it looks quite OK. But I did the same today on a customer server and got databases with thousands of fragments.

        I tested on my own mail file and the DBMT compact did reduce the number of fragments (the parsing only works if you have no blanks in your file names).

        find /local/notesdata -type f -name "*.nsf" -exec filefrag {} \; | cut -d " " -f 1-2 | sort -t" " -rnk2 | head -40

        find -type f -name "*.nsf" -exec filefrag {} \; | cut -d " " -f 1-2 | sort -t" " -rnk2 | head -20

        ./domlog.nsf: 594

        ./nshtraceobj.nsf: 84

        ./log.nsf: 71

        ./big/mail10.nsf: 31

        ./nshmailmon.nsf: 26

        ./statrep.nsf: 23

        ./dbdirman.nsf: 17
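        Since the cut on the first blank breaks for file names containing spaces, the trailing extent count that filefrag prints can be parsed instead. A sketch (the printf lines are sample filefrag output standing in for the find pipeline above):

```shell
# Sort filefrag output by extent count, tolerating blanks in file names.
# Real input would come from:
#   find /local/notesdata -type f -name "*.nsf" -exec filefrag {} \;
printf '%s\n' \
  './domlog.nsf: 594 extents found' \
  './mail/jane doe.nsf: 120 extents found' \
  './log.nsf: 71 extents found' |
awk -F': ' '{split($2, a, " "); print $1": "a[1]}' |
sort -t: -k2 -rn | head -40
```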


        New DBMT Switches in Domino 10

        Daniel Nashed  26 October 2018 17:44:55
        There are new abbreviated command-line switches in Domino V10.
        They are helpful if you have to type them in manually, and because the dbmt compact command line has a limit of 128 bytes.

        This was discussed earlier and finally made it into Domino V10.

        Besides those abbreviated commands there are also 3 new command-line switches, which are documented below.
        -ods is not yet documented in the dbmt -help output, but the other two have been added.

        Additional info:

        -ods only compacts databases which are not on the current ODS; databases already on the current ODS are skipped.

        But databases with the wrong database class (which have been created with an older extension like .ns7, as with some help databases) still need a compact -upgrade.

        DBMT has one big advantage: databases get their space pre-allocated.

        See this post for details -->

        -- Daniel

        Abbreviated DBMT switches

        -nocompactlimit (-ncl)
        -blacklist (-bl)
        -force (-f)

        -range (-r)
        -ftiNdays (-fnd)
        -ftiThreads (-ft)
        -compactNdays (-cnd)
        -compactThreads (-ct)
        -timeLimit (-tl)
        -updallThreads (-ut)
        -stopTime (-st)

        New Command-Line switches

        -ods -- causes compacted dbs to be upgraded to the current ODS level

        -blackList  specify a .ind file containing databases not to be compacted

        -nounread        do not update unread tables

        DNUG Domino Day 2018 in Düsseldorf November 15

        Daniel Nashed  26 October 2018 13:06:05
        Like every year we have our DNUG Domino Day in Düsseldorf in November.
        This year we are not only having a "Domino 11 JAM", but we will also have all the details about Domino V10.

        Now that Domino V10 is available, we will not only speak about the features but also about how they really work and what we have found out so far.

        I have already started to blog about current features.
        On that day you can expect much more information for all parts of the products.
        There will be also a session from IBM/HCL about what is coming next.
        And we can finally show the Notes client application on iPad!

        IMHO we put together an interesting agenda again. But most of the sessions are in German.
        The day is free for DNUG members, but we also have some seats left for non-members for a moderate conference fee.

        We have already increased the number of seats, and it sounds like it is going to be a busy day.

        See the agenda and other details here -->

        I am looking forward to seeing many of you in mid-November!

        -- Daniel

        Domino 10 flexible and easy Statistic Collection

        Daniel Nashed  26 October 2018 10:53:33
        Today I had another look into the statistic collection option that is available in Domino 10.
        The idea of my post is to give you an idea what this can do for you.
        Even if you have no central system yet, or if it was too complicated to integrate, this sample might be helpful to see how it works out of the box with a very simple configuration.

        This new functionality pushes the server statistics per minute to a defined target via HTTP POST.

        By default this is prepared to work with NewRelic. But you can configure it to use any kind of target!

        There are notes.ini settings to change the format depending on your target, and there are placeholders that are filled in (like $Name and $Value).

        If the target needs JSON or similar formats, you can change it accordingly and there are examples in admin help.

        There are separate settings for the normal stats and the new per minute delta stats which have been added.

        See this link for details:

        To see how the statistics are published, I have created a simple Notes database with a simple agent that my server can post the data to.

        This allows you to see and test different formats in a very easy way. Of course you can push it to any other application. This was just a very simple and easy way for me.

        I added the following notes.ini settings to my Domino 10 server:

        STATPUB_METRIC_FORMAT=Domino.myserver.$Name$ $Value$
        STATPUB_DELTA_METRIC_FORMAT=Domino.myserver.PerMinuteStats.$Name$ $Value$
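        For completeness, publishing also has to be enabled and pointed at a target. A sketch of the full set of settings (the URL is a placeholder for my stats database; STATPUB_ENABLE and STATPUB_URI are the settings from the admin help):

```
STATPUB_ENABLE=1
STATPUB_URI=http://myserver.example.com/stats.nsf/collect?openagent
STATPUB_METRIC_FORMAT=Domino.myserver.$Name$ $Value$
STATPUB_DELTA_METRIC_FORMAT=Domino.myserver.PerMinuteStats.$Name$ $Value$
```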

        The result is a single post to the target containing all the statistics.

        My agent that consumes the post data just dumps everything into a document. Most of it goes into text fields, but the request data is too big to fit into a text field, so I converted it to rich text.

        See my very basic sample database which can be used on any Domino HTTP server to collect the stats.

        -- Daniel

          nshdellog -- Domino Deletion Log Annotation and Backup

          Daniel Nashed  23 October 2018 18:55:11
          Let me introduce my Domino 10 Deletion Log Annotation and Backup Application.

          Here is the first version with a short documentation.

          It's available for free and is a full solution to analyze and back up Domino 10 Deletion Logs.

          To get a copy of the database send a mail to dominodeletelog at with the Subject line "nshdellog". You will receive the template in a zip file.
          If you cannot receive emails with databases attached let me know.

          Let me copy the about document of the database which explains what it does and how it works.

          Enjoy and please provide feedback! It's a first version.

          -- Daniel

          nshDelLog - Domino Deletion Log Annotation and Backup
          Copyright 2018, Nash!Com - Daniel Nashed Communication Systems

          Short Documentation Nash!Com Deletion Log Application

          Quick Start
          • Copy template to server ensure proper template ACL (default uses LocalDomainAdmins)
          • Sign Template --> there is a Sign DB Action for current or via AdminP (needs unrestricted agent permissions)
          • Create database from template --> suggested default name: nshdellog.nsf, but you can choose any name
          • Enable Agent via Config Profile (Status enables the agent)
          • Review Standard Settings


          Deletion Logging is a new feature introduced in Domino 10 to track deletion of documents and design elements.
          All different types of deletions are stored in a central log file "delete.log" in the IBM_TECHNICAL_SUPPORT directory of your server.

          This logging is implemented at a lower database level and is activated per database. The settings are stored inside the database and do replicate.

          Enable Deletion Logging per Database

          Right now the only official way to enable deletion logging is to use the compact server task with the -dl option (see examples below).
          You can add up to 4 additional fields per database that you want to log. Those fields could differ depending on the type of database and the logging purpose.
          The log distinguishes between HARD deletes and SOFT deletes and also allows you to trace the deletion of design elements.

          The compact operation looks like this (example):
          load compact mail/nsh.nsf -dl on "$TITLE, Form, Subject, DeliveredDate, PostedDate, FullName"

          Tip: You can specify more than 4 fields but only the first 4 items found are logged.

          Log file Location and Content

          After you enabled the deletion logging, each kind of delete is recorded in a central text file on the server.  IBM has chosen a text file for performance reasons.
          Here is what the standard log looks like, and how it looks with the custom log fields.

          In my example I am interested in the Subject, the DeliveredDate, and also the Form of the document.
          In addition, for design elements I am interested in the design element name stored in $TITLE.

          Those would be the type of fields I would add, for example, for a mail database. The choice of $TITLE for design document deletes can be quite helpful: the note class alone might not be sufficient to identify the deleted design element.

          The resulting log files are stored in IBM_TECHNICAL_SUPPORT directory and look like this:

          "20181016T140343,47+02","del.nsf","C125831B:004BC903","nserver","CN=Daniel Nashed/O=NashCom/C=DE","HARD","0001","08B31722:E41F1971C1258323:0068EF49","Form","4","Memo","Subject","19","Test UNDO  Updated2"

          The name of the file has the following syntax:

          delete_<servername>_<date>@<time>.log

          Like other log files, the currently active log file has the default name delete.log, and its first line contains the name the file is renamed to when the server is restarted (similar to console.log).

          Here is a short description of the columns. The last columns depend on your configuration. The list is comma separated, the values are quoted, and quotes are escaped accordingly.

          • Date and time in the timezone of the server, with the server's timezone offset at the end
          • Database Name
          • Replica ID
          • Process which deleted the note
          • User who deleted the note
          • HARD/SOFT delete, or RESTORE for a restored soft delete
          • NoteClass of the document (for example 1 for a document, 8 for a view/folder)
          • UNID of the document

          Custom Log Fields

          After those standard fields you see the up to 4 custom fields that you have optionally specified with compact.
          The first column always gives you the name of the field, the second column the length of the value, and the following column the value itself.
          The text elements and the total log line are limited: the current limits are 400 bytes per item and 4 KB for the whole log line.
          This should be sufficient in most cases because we only need it to find the document.
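          Since each log entry is a quoted, comma-separated line, it can be parsed with a standard CSV reader. Here is a small Python sketch (illustrative only, not part of the application, which is written in LotusScript) that splits the fixed columns from the trailing name/length/value triplets, assuming quotes are escaped by doubling as in standard CSV:

          ```python
          import csv
          import io

          # Sample line based on the delete.log example above.
          line = ('"20181016T140343,47+02","del.nsf","C125831B:004BC903","nserver",'
                  '"CN=Daniel Nashed/O=NashCom/C=DE","HARD","0001",'
                  '"08B31722:E41F1971C1258323:0068EF49",'
                  '"Form","4","Memo","Subject","19","Test UNDO  Updated2"')

          fields = next(csv.reader(io.StringIO(line)))

          # The first 8 columns are fixed; the rest are (name, length, value) triplets.
          fixed = fields[:8]
          custom = {fields[i]: fields[i + 2] for i in range(8, len(fields), 3)}

          print(fixed[5])        # HARD
          print(custom["Form"])  # Memo
          ```

          Note that the real file is encoded in LMBCS (see below), so a conversion step would be needed before parsing outside of Notes.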

          The field types that can be used are Text, Text_List, RFC822_Text, or Time.
          Those fields have to be present in the database when logging is enabled!

          The current log file encoding is LMBCS (Lotus Multi Byte Character Set), which is the internal representation that Notes/Domino has used since day one to store text.
          But it would be difficult to read this charset encoding outside Notes. There are plans for the next update to support other formats. But for now it is LMBCS.

          Delete Log Application


          This deletion log application has the following main functionality:
          • Deletion Log Annotation
            Periodically read the deletion logs on a server and annotate them into a Notes database on the server
            This import collects information from the current delete.log file and also recent delete log files renamed after a server restart
          • Manual Import of Delete Log Files
            You can also manually import log files into a local or server based log database for annotation by simply selecting the log files.
          • Central Backup of finalized Delete Log Files
            Collect completed delete log files and store them in a centrally located Notes database as attachments for archiving those log files.
            Once a deletion log file is saved to the database, it is cleaned up on disk.

          Installation Instructions
          • Copy the template to your server
          • The Config Profile also contains an action menu to sign the database with your current ID or via AdminP.
            In addition there is another button to check the status of the AdminP request. It will open the first response document of the AdminP request once the request has been executed.
          • Create a new database from template.
          • Ensure the agent can be executed by properly signing the application with an ID that has unrestricted agent permissions.
          • By default the periodical agent is scheduled every 5 minutes on all servers ( Run on * ) and does not need to be configured.
          • The agent is disabled by default. It will be enabled when you set the status in the Config Profile to "on".

          Deployment Recommendations

          You should install separate copies (with different replica IDs) on different servers.
          In addition you could have a central log database for archived delete log files.
          The default location for those backup log files is the current database.
          But you can change the location in the configuration profile depending on your infrastructure requirements.

          Housekeeping for the Delete Logging Database

          Deletion logging can generate a lot of log documents, so you should think about how to remove those documents after a while.
          This can be implemented by setting a cut-off delete interval for the database (e.g. 30 days).
          You could still manually import backup log files later on in case you need to analyze older log data.
          The config profile contains settings to specify the cut-off interval and also the cut-off delete flag.


          The application is written in LotusScript and consists of the following main components:
          • One script lib with the main logic
          • One agent which runs periodically on a server to annotate and collect the log files
          • One agent which allows manual log file annotation
          • Configuration Profile
          • Forms and Views to show the log data

          There is intentionally no navigator for the views, so that you can easily add new views as needed for your evaluations without dealing with the navigator.
          The agent runs periodically on the server to annotate the current delete.log file and also to annotate and backup older log files.

          For the current log file "delete.log" the annotation is incremental: the file is read and annotated, and the last position is stored in notes.ini.
          The notes.ini setting has the following format: "$NSHDELLOG_DELETE_" + log file name, and stores the last position.

          Example: $NSHDELLOG_DELETE_DOM-ONE_2018_10_06@12_58_30.LOG=18977

          The name is taken from the first log line, which already contains the final name inside the delete.log file; this allows one last processing pass when the log file has been renamed after a restart.
          After reading a renamed log file, the log is archived centrally, deleted from disk, and the notes.ini entry is removed.
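          As a hypothetical illustration (in Python rather than the application's LotusScript), the incremental annotation described above could look like this: remember the last read offset per log file, and on each run read only what was appended since. The `positions` dictionary stands in for the notes.ini entries:

          ```python
          import os

          # Stand-in for the notes.ini entries ($NSHDELLOG_DELETE_<logname>=<position>).
          positions = {}

          def annotate_incrementally(path):
              """Return only the lines appended to the log file since the last run."""
              key = "$NSHDELLOG_DELETE_" + os.path.basename(path).upper()
              last_pos = positions.get(key, 0)
              with open(path, "r") as f:
                  f.seek(last_pos)              # resume at the stored position
                  new_lines = f.read().splitlines()
                  positions[key] = f.tell()     # remember the new position
              return new_lines
          ```

          Calling the function twice on a growing file returns first the initial lines, then only the newly appended ones, which mirrors the incremental behavior of the scheduled agent.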

          Entries that generate an error during parsing are stored in separate documents, listed in a separate view.
          The application also has separate views for run-time error information,
          and for run-time statistics like bytes and lines read (mainly to see whether the application also works with larger amounts of data).


          You find the configuration in the Config Profile, which has the following options:

          • Status: On|Off
            Enables the annotation and collection of logs on the server.
            When you save the profile it will automatically enable/disable the scheduled server agent.
          • Log Backup: On|Off
            Enables collection of finished delete log files (all files looking like delete_*.log).
          • Log Level: Disable|Enable|Verbose|Debug
            Enables logging.
          • Import Charset: (default: LMBCS)
            The currently used charset is LMBCS. It might change in the future. This setting changes the charset for the import file.
          • Log Backup Server:
            Server location for log backup files.
          • Log Backup Database:
            Database location for log backup files.
          • Remove Log after Backup: On|Off
            Determines if finished delete log files are removed from the server once they have been stored in the backup database.
          • CutOff Interval: (default: 30)
            Cutoff interval set for the standard log database (the current database, not the databases specified dynamically via formula in the Advanced Configuration).
          • CutOff Delete: On|Off (default: On)
            Enables cutoff delete for the standard log database.

          Those settings are automatically updated when the Config Profile is saved.

          Advanced Configuration

          If you have special logging requirements you can use the advanced configuration and specify a formula for the log database.
          The result of the formula will be used as database name for dynamic logging.

          If the database does not exist, it will be dynamically generated from template. You can specify the location of the template.
          You could use different databases for different processes or create a new log database every month (see example below).
          Each database can have a different cutoff-interval. The replica cutoff-settings are checked and set every time the agent starts.

          Using multiple databases should not lead to a performance impact because the database handles are cached inside the application.

          The result of the formula is computed on the log document before it is saved, and is used as follows:

          ""  (empty string)
          Log to the current database.
          This is also the fallback if the log database cannot be created.


          Log entry will not be written.

          Local Database Name string

          This string is used as the database name. You can specify the name, title, cutoff interval and also whether documents should be cut off after a certain time.
          The format uses "|" as the delimiter. You can use any type of formula, which could check any type of field.


          Example:

          datestr := @text(@year(@today))+"_"+@text(@month(@today));  "nshdellog_"+ datestr+ ".nsf" + "|Delete Log " + @text(@year(@today)) +" / " + @text(@month(@today)) + "|30|1";

          Result (for October 2018):

          nshdellog_2018_10.nsf|Delete Log 2018 / 10|30|1
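          A hedged sketch in Python (the application itself is LotusScript) of how such a "name|title|cutoff|flag" formula result can be unpacked into its four parts:

          ```python
          # The formula result from the example above.
          result = "nshdellog_2018_10.nsf|Delete Log 2018 / 10|30|1"

          parts = result.split("|")
          db_name = parts[0]                 # database file name
          title = parts[1]                   # database title
          cutoff_days = int(parts[2])        # cutoff interval in days
          cutoff_delete = (parts[3] == "1")  # cutoff-delete flag

          print(db_name)      # nshdellog_2018_10.nsf
          print(cutoff_days)  # 30
          ```

          How missing parts default (e.g. when the formula returns only a database name) is an assumption left to the application; the sketch only shows the fully specified case.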

          Change History

          V 0.9.0 / 16.10.2018

          Initial Beta Version

          V 0.9.1 / 21.10.2018

          Added more configuration options

          - Result Database
          - Separate Log Database

          New: log check for duplicate lines.
          In general the logging should not cause duplicate lines, because files are read incrementally.
          But for double checking, a view lookup can be enabled.

          Potentially skipped duplicate lines are shown in the log results view.


          • [IBM Lotus Domino]
          • [Domino on Linux]
          • [Nash!Com]
          • [Daniel Nashed]