Daniel Nashed 28 May 2014 10:18:57
We have been asking for this functionality since DAOS was released, and now there is finally a solution. In some cases customers have to either switch off DAOS NLO encryption for a server or enable it later on, or they even want to move from one server.id to another server.id. There are two SPRs (#PMAO9C6R9G / #GFAL9AKKJZ) described in the following technote --> http://www.ibm.com/support/docview.wss?uid=swg21673931. The TN also describes how to use this new functionality.

There are a couple of details that you should be aware of. First of all, the two SPRs are not included in shipping code and are also not yet listed in the fixlist database, but as far as I understood they have been submitted to the 9.0.1 code stream. The output of the commands is printed to the console (using xprintf, which is the equivalent of the internal console write call). I have asked if the output can be written to a file via a -o option in the future, but for now you have to use a redirect when invoking the daosmgr command. The TN also mentions the fix numbers, so if you need this functionality urgently you can try to request a hotfix from IBM.

As described in the TN, you should perform the migration to either encrypted or unencrypted offline. The move is a major migration: in most cases all NLOs will be rewritten. This should be planned for a weekend and should be a one-time action only.

What are the scenarios and reasons to change the encryption of the NLOs? In many cases NLOs are encrypted because, when DAOS was introduced to an environment, someone forgot to set the notes.ini parameter DAOS_ENCRYPT_NLO=0 to disable encryption. But most customers don't require encryption of NLOs. If the NSF files on your Domino server are not encrypted and the server.id is not protected by a password, it does not make much sense to have the NLOs encrypted. It is even harder to find the right information in an NLO than in an NSF file.
And if you copy the NLOs to a different machine along with the server.id, and the server.id has no password, you can read the NLOs anyway. So in most cases not having NLO encryption enabled is a best practice for a couple of reasons, and the encryption only adds security when the server.id is protected as well. Encryption does not add that much overhead at runtime, but there are a couple of other reasons. First of all, if you want to use another cluster member to copy missing NLOs as a simpler restore scenario when an NLO is missing, this is only possible if NLOs are not encrypted. Second, if you have storage like a NetApp where you have enabled block-level deduplication and you point multiple DAOS stores to the same NetApp volume, you can save a lot of disk storage because identical NLOs will have identical blocks. This only works if the NLO is not encrypted, because the same NLO on different servers will be encrypted with a different key (actually, even on the same server the file could differ when encrypted later, because of a different "session key"). On top of that, some backup solutions support block-level deduplication, and that could save space on the backup side as well if encryption is disabled. With encryption enabled there is almost no block-level deduplication. In addition, moving DAOS stores among servers when you switch the server.id is much simpler without encryption. But with the new options in daosmgr you could now re-encrypt NLO files with a new server.id. I would only do this if you really, really need it. In a normal migration scenario I would use the new functionality to disable NLO encryption, for the reasons above. IMHO it is still good to have NLO encryption enabled by default, to avoid discussions about DAOS security. But in reality, in at least 80% of customer environments NLO encryption is unneeded overhead and complexity. I know others think differently about it, and that's just my humble opinion...
On the other side, we also have customers who started without encryption and now need to encrypt all databases and NLOs and also protect the server.id with a password (including the need for a solution to apply the password on server start in a secure way). Thanks to IBM for making this change and for implementing it in a flexible way that works in both directions, including an option to verify the encryption status of all NLOs. -- Daniel
Daniel Nashed 29 April 2014 07:09:14
I got a couple of questions from multiple customers about ODS 52, which was introduced in 9.0.1.
There is a bit of confusion about the new ODS and there is not much public available information.
First of all the new ODS 52 is optional and you only need it in some special cases.
It is not enabled by default; as with earlier ODS versions, you have to explicitly enable the new ODS via a notes.ini setting in 9.0.1.
How to migrate to the new ODS?
You will need to set notes.ini CREATE_R9_DATABASES=1.
And the new ODS is available and important for clients and servers.
There are different ways to move databases to the new ODS on servers and clients.
For clients you will need to set NSF_UpdateODS=1 in combination with CREATE_R9_DATABASES=1 which lets the client convert to the new ODS.
On the server side you will need to set CREATE_R9_DATABASES=1 and use a copy-style compact.
You can either use compact directly, or -- the preferred method -- leverage DBMT, which also generates an unfragmented new NSF file by default.
e.g. DBMT -compactThreads 6 -updallThreads 0
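As a minimal sketch of the server-side preparation (a helper of my own, not part of Domino -- the notes.ini path is passed in and must match your install):

```shell
#!/bin/sh
# Sketch: enable creation of ODS 52 databases on a Domino server by
# adding CREATE_R9_DATABASES=1 to notes.ini if it is not already there.
# The notes.ini path is an assumption -- adjust for your environment.

enable_ods52() {
  NOTESINI="$1"
  if grep -q '^CREATE_R9_DATABASES=1' "$NOTESINI"; then
    echo "already enabled"
  else
    echo "CREATE_R9_DATABASES=1" >> "$NOTESINI"
    echo "enabled"
  fi
}
```

After the parameter is set and the server restarted, a copy-style compact (or DBMT as mentioned above) moves existing databases to the new ODS.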
Why to migrate to the new ODS?
There are multiple reasons to migrate to the new ODS.
a.) Issue with encrypted databases
The best public available information about it is from John Paganetti's IBM Connect 2014 presentation. Thanks John for sharing those details!
Everything else I found is either not detailed or not public.
Issue 1: Medium and Strong Encrypted Databases
- Problem – Rare note corruption when updating a note, only occurs with Medium or Strong encrypted databases
- Has existed since Notes/Domino began using Medium and Strong encryption
- Not noticed because vast majority of databases have replicas and fixup would discard the corrupted note and next replication the note would come back in just fine
- Resolution – Best way to maintain backward compatibility and interoperability was to address with a change to the on-disk-structure (ODS)
Issue 2: Medium Encrypted Databases
- Problem – Rare note corruption when updating a note, only occurs with Medium encrypted databases
- Has existed since Notes/Domino began using Medium encryption
- Not noticed because vast majority of databases have replicas and fixup would discard the corrupted note and next replication the note would come back in just fine
- Resolution – The fix for this issue would affect the vast majority of the data and hence there were security concerns it could potentially weaken the current Medium encryption strength.
  As a workaround, the Security team recommends customers go to ODS 52 and upgrade existing Medium Encrypted databases to Strong
If you are using encrypted databases either on Notes client or on Domino server you should update to the new ODS!
But this requires being on 9.0.1 code -- also on the client.
You will have more likely encrypted databases on a client than on a server.
IMHO On the server -- unless you have a password on your server.id (and a tool to manage that server.id on server startup) -- you should disable encryption.
Without a password on the server.id there is not much sense encrypting databases (and NLOs).
But in case you need encryption you should update to ODS52 and switch to strong encryption.
There is also another detail that John shows in his presentation.
I have not seen any public information for the overhead that encryption has on CPU utilization. And this information is quite useful.
NRPC run of Win2008 R2 Server 64-Bit @ 4000 Users, mail9 template
| Not Encrypted || 35% CPU |
| Medium Encrypted || 39% CPU |
| Strong Encrypted || 48% CPU |
On a client this is not really much overhead -- unless you are on a Citrix server.
But for a server this can be quite some overhead.
If you don't want that additional overhead there is a fix that helps also with medium encrypted databases.
But you will need to compact the database to the "new" medium encryption with ODS52 as well.
This is clearly more of a workaround, and the security team recommends upgrading to strong encryption if you can.
Here is the way to enable the fix:
notes.ini ENABLE_MEDIUM_ENCRYPTION_FIX=FFFFFFFB
- Next copy style compact of existing Medium Encrypted databases will be ODS52 with new Medium Encryption which has fix applied
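Purely as an observation on my side (an assumption, not documented anywhere I know of): the value FFFFFFFB looks like a 32-bit mask with the single bit of value 4 cleared. A quick shell check of the arithmetic:

```shell
# Assumption (my own reading, not documented): 0xFFFFFFFB is simply
# 0xFFFFFFFF with the 0x4 bit cleared.
mask=$(( 0xFFFFFFFF & ~0x4 ))
printf 'ENABLE_MEDIUM_ENCRYPTION_FIX=%X\n' "$mask"   # prints ...=FFFFFFFB
```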
You can update all your medium encrypted databases to strong encryption leveraging copy style compact.
The notes.ini setting you need for that is COMPACT_UPGRADE_MEDIUM_ENCRYPTION_TO_STRONG=1.
This parameter can be quite helpful because it would be a manual step to migrate to strong encryption without it.
And you should disable the parameter when you are done with upgrading all databases to strong encryption.
On Notes clients databases are usually encrypted by default. The notes.ini setting LOCAL_DB_ENCRYPT_DEFAULT determines which encryption strength to use
(0 = No Encryption, 1 = Simple Encryption, 2 = Medium Encryption, 3 = Strong Encryption)
So for new databases that should use strong encryption, set LOCAL_DB_ENCRYPT_DEFAULT=3.
Note: In case your workstation uses local disk encryption and/or you are using shared login there is also not much sense in encrypting databases.
b.) Issue with large attachments
There is an issue with attachments larger than 2 GB, which is fixed with ODS 52 in 9.0.1.
Fix for ZXZG85KJRK: Large attachments above 2 GB fail
You need Notes 9.0.1 clients and Domino 9.0.1 servers in combination with ODS 52 to get this completely addressed.
Details are available in the following technote:
This issue is another reason to upgrade to the new ODS, even though it might only hit you in very rare conditions.
There are also settings to log the database encryption used. They will report the current encryption level, based on the settings, the first time a database is opened.
Administrators may now easily identify which databases are currently encrypted, and at which level, by setting the following notes.ini variable.
The setting uses a bit mask:
- 1 is "Show Simple"
- 2 is "Show Medium"
- 4 is "Show Strong"
To see all encrypted databases -- Simple, Medium and Strong (1+2+4 = 7) -- set SHOW_ENCRYPTED_DATABASES=7 in notes.ini.
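The mask values combine by simple addition (or bitwise OR); a small sketch of how the combinations work out:

```shell
# SHOW_ENCRYPTED_DATABASES bit mask: 1 = Simple, 2 = Medium, 4 = Strong
SIMPLE=1
MEDIUM=2
STRONG=4

# All encrypted databases, regardless of strength:
ALL=$(( SIMPLE | MEDIUM | STRONG ))       # 7

# Only Medium and Strong (skip Simple):
MED_STRONG=$(( MEDIUM | STRONG ))         # 6

echo "SHOW_ENCRYPTED_DATABASES=$ALL"
echo "SHOW_ENCRYPTED_DATABASES=$MED_STRONG"
```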
When encrypted databases are opened for the first time - 0 to 1 transition, one of the following messages will be logged
“Current encryption strength: SIMPLE - < absolute file path >”
“Current encryption strength: STRONG - < absolute file path >”
Legacy Medium encrypted database
“Current encryption strength: MEDIUM - < absolute file path >”
New Medium encrypted database with fix (+)
“Current encryption strength: MEDIUM+ - < absolute file path >”
As long as you are running Release 9.0.1, SHOW_ENCRYPTED_DATABASES works for all database ODS levels.
It makes sense to switch to the new ODS in some cases but you don't need to necessarily put it directly into your upgrade path -- at least on server side.
This can be done afterwards with a copy-style compact that you should run once in a while on any database.
DBMT in 9.0.1 helps you to keep databases defragmented -- check one of my recent blog entries for details.
And in the same step you can upgrade the ODS if needed.
On the server side, most of the time there is really no reason to use encrypted databases in the first place.
So -- a point not mentioned in other postings about the new ODS 52 -- the most important step is to migrate to the new ODS on the client side.
Unless you have users storing 2 GB attachments in their mailfiles...
Daniel Nashed 9 April 2014 21:41:51
In case you are wondering: IBM Domino is not affected by the OpenSSL "Heartbleed" issue. Neither Traveler (leveraging the Domino HTTP stack) nor the IBM HTTP stack in Domino 9 on Windows uses OpenSSL, so they are not affected either. You still have to update your machines to a current OpenSSL package if you are running a 1.0.1 OpenSSL package. Here is the technote from IBM --> http://www.ibm.com/support/docview.wss?uid=swg21669782 And here is some additional information I got from my ISP --> http://faq.hosteurope.de/index.php?cpid=19463 You have to install a current version; on RHEL/CentOS, for example, 1.0.1e-16 is not affected any more. After updating the package you have to restart the applications using it. -- Daniel
Daniel Nashed 6 April 2014 13:43:43
How cool is this new functionality introduced in 8.5.2! A simple but important addition. It looks like this has been implemented for XPages, but you can also use it in normal Java and LotusScript. Before, you had to save a document before passing the document context to an agent. Now you can just pass a new in-memory document and you don't need to save it at all. This is really useful when passing parameters to and from agents that you invoke -- for example when you want output from a Java agent that you need to call, like in my case right now. Thanks to Michael Gollmick who pointed me to this documentation! This really made my day. I wasn't aware of this new functionality! -- Daniel

Introduction
Release 8.5.2 introduces a new API for Agents to allow them run with a Document context that can be set by the caller, either an outer Agent or an XPage.
The Agent.runWithDocumentContext() API runs an agent and passes a saved or unsaved in-memory document to the DocumentContext property of the called agent:
New Agent.run APIs
The new APIs are:
|XPages ||Agent.runWithDocumentContext(doc:NotesDocument, noteID:string) : void |
|Java ||public void Agent.runWithDocumentContext(Document doc) |
| ||public void Agent.runWithDocumentContext(Document doc, String noteID) |
|LotusScript ||NotesAgent.RunWithDocumentContext(doc As NotesDocument, noteID As String) As Integer|
Getting the In-Memory Document
The called agent can access the in-memory document via the existing API for accessing an in-memory document context. For example:
public Document AgentContext.getDocumentContext()
Dim session As New NotesSession
Dim doc As NotesDocument
Set doc = session.DocumentContext
The document can be updated within the agent and when control returns to the XPage the updated values can be read from the document.
Run as Web user
Note: Domino Server-based Agent code must run in an Agent with "Run as Web user" selected on the Security tab under Properties.
Daniel Nashed 13 March 2014 12:48:59
A long time ago I already blogged about the changes IBM introduced for the file-system cache, and I have run into this in customer situations many times. I described it in my IBM Connect session, but because I got questions about it again, I think it makes sense to mention it once more.

The default settings might impact you when you add a lot of RAM to your Domino server. We have seen a dramatic reduction of read I/O when adding a lot of RAM to the Windows machine, because 64-bit Windows can leverage the 64-bit address space to use all the remaining memory for the file-system cache. But by default there is a very high physical memory limit for the file-system cache: it will try to use all memory, which can cause Domino memory to be swapped out.

On startup of the Domino server the Win64 call "SetSystemFileCacheSize()" is used to limit the cache. Since Domino 8, a 64-bit helper binary "cacheset.exe" ships to set the cache size for Domino 32-bit; Domino 64-bit has this call integrated into the core code. When the code is executed, the system privilege "SE_INCREASE_QUOTA_NAME" is needed (see TN #1391477 for details).

By default the value is set to 30% of memory. That would only work well on a machine with around 8 GB of memory, and even there some tuning might make sense because Domino will usually allocate less than 4 GB of memory. So you can tune the percentage used via notes.ini MEM_FSCachePercentMem=n. The setting depends on the RAM and the memory that Domino needs in your environment.
Example: 16 GB RAM, 6 GB reserved for Domino/OS = MEM_FSCachePercentMem=65
You can check the current settings with "cacheset.exe -g". Here is the output from a machine with 8 GB RAM without any settings, after the Domino server has been started once.
cacheset.exe -g
Existing file system cache values are minSizeRead 824488 kb, maxSizeRead 2473264 kb, flags 5
This is really a parameter that you have to look at when you run Domino 32-bit/64-bit on Windows! -- Daniel
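The tuning rule above is plain arithmetic: the percentage of RAM left after reserving memory for Domino and the OS. A small sketch (a helper of my own, not part of Domino; note the 16 GB / 6 GB example in the post rounds the result up to 65 for headroom):

```shell
# Derive a MEM_FSCachePercentMem candidate from total RAM and the
# memory reserved for Domino/OS (both in GB, integer arithmetic).
fs_cache_percent() {
  total_gb="$1"
  reserved_gb="$2"
  echo $(( (total_gb - reserved_gb) * 100 / total_gb ))
}

fs_cache_percent 16 6   # 62 -- the example in the post uses 65
```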
Daniel Nashed 2 February 2014 00:15:28
Still on the way back from IBM Connect, but I want to give you a quick info... There are important fixes for Blackberry 10 -- especially when you are using the new To Dos in version 10.2.1. But there are more fixes included. Thanks to the Traveler team for all the new information during IBM Connect and for the short cycle of fixpacks responding to customer issues so quickly! -- Daniel
APAR List for 9.0.1 IF3:
|APAR # ||Component ||Abstract |
|LO77998 ||Server ||Read mark for Calendar invitation not synced from Mobile device to Notes Client. |
|LO78245 ||Server ||Temporary loss of event description on Mobile device when event modified on the device. |
|LO78248 ||Server ||Unnecessary error message displayed "Attempt to perform folder operation on non-folder item". |
|LO78299 ||Android ||Unable to reply to mail on Android device if recipient has Apostrophe in name. |
|LO78328 ||Server ||Renamed user may have device records left under old name in admin app. |
|LO78380 ||Server ||User sync gets stuck on mail with large embedded attachment, such as delivery failure or phone message. |
|LO78386 ||Server ||Unable to delete contact e-mail address from Notes or iNotes, value repopulated by mobile device. |
|LO78404 ||Server ||Timing window where Notes Traveler may not detect primary mail server marked as unavailable. |
|LO78416 ||Server ||BB devices may resync all data when not necessary. |
|LO78465 ||Server ||User may get incorrect error message when over quota. |
|LO78474 ||Server ||Security status update may be lost if the user is in process of being load balanced. |
|LO78503 ||Server ||Return receipt document may appear in the users sent folder. |
|LO78524 ||Server ||Modify repeating To Do on device and it may show as over due on the server. |
|LO78577 ||Server ||BB10 removes quotes from display name when replying to e-mail. |
|LO78628 ||Server ||Ensure plain text included when sending mail from mobile device. |
|LO78636 ||Server ||Attachments may be lost when reply to mail from a Windows device. |
|LO78667 ||Android ||Android vibrates on new mail when set for audio alert. |
|LO78692 ||Server ||Maintain time zone name if the offset is the same. |
|LO78700 ||Server ||Notes Traveler cleanup tell command may not complete. |
|LO78728 ||Server ||Unexpected draft document may appear after processing event on mobile device. |
|LO78734 ||Server ||To Do item may be archived sooner than expected. |
|LO78787 ||Server ||Session update or does not exist error in the console when syncing BB or Apple To Dos.|
Daniel Nashed 20 January 2014 07:14:48
The question came up a couple of times in the last few days... Mat posted today --> http://www.matnewman.com/webs/personal/matblog.nsf/dx/and-were-back-the-totally-unofficial-totally-unsupported-ibm-connect-notes-session-database And here is the download link --> http://www.matnewman.com/webs/personal/matblog.nsf/sphere2014.zip Hope to see many of you soon in Orlando! Huge thanks to the team who did the database again this year!! -- Daniel
Daniel Nashed 17 January 2014 11:00:38
Most of the new functionality in my start scripts is based on my own ideas and requests I get from customer projects.
For each request for new functionality I try to find a way to make it as customizable as possible, so that it fits different customer environments. On the other side, there are still requests which are very customer specific and which I cannot build into a standard script.
But I would also like to keep the script maintainable, so that you only have to switch to a newer script (rc_domino_script) without re-adding customization to the code. The first step I took was a call-back functionality where you can add your own scripts before or after a certain event (server start, server stop, ...).
So all kinds of customization can be done in your own extension scripts. But there are still cases where customers need their own "commands" added to the start script. So while driving back in the car last night from a customer, I had a new idea how this could be made more flexible. In the next version of the start script I am planning to have a way to plug in your own custom commands using separate shell scripts for each of the commands.
So you would just configure a directory where my start script should look for your own commands and if they are executable the start script will run the command.
The scripts would inherit all variables from the start script. Right now, in my first test version, the script first checks for built-in commands, and if the command is not known by the script it checks the extension directory for a script name matching the command.
In theory I could check the directory first, and it would then be possible to override standard functionality. But I am not sure if I want to go that far. What do you think? In general this new functionality would encapsulate all changes and extensions in separate scripts, but you would still have the flexibility.
I will build this into the next version anyway. But I am interested in your feedback about details of the implementation and if you want also be able to override standard commands. Here is how I currently have implemented it in my first version. If you have your own customization in the start script and want to still participate in regular updates, I think the plug-in functionality I added earlier and this new functionality might be helpful.
I am really interested in feedback, either here or by email. -- Daniel

  *)
    if [ -z "$DOMINO_CUSTOM_COMMAND_BASEPATH" ]; then
      DebugText "Invalid PARAM1:" [$PARAM1]
      echo
      echo "Invalid command:" [$PARAM1]
      usage
      exit 1
    fi

    DOMINO_CUSTOM_COMMAND_SCRIPT="$DOMINO_CUSTOM_COMMAND_BASEPATH/$PARAM1"
    DebugText "DOMINO_CUSTOM_COMMAND_SCRIPT:" [$DOMINO_CUSTOM_COMMAND_SCRIPT]

    if [ -x "$DOMINO_CUSTOM_COMMAND_SCRIPT" ]; then
      # execute custom command
      DebugText "-- before executing custom command"
      $DOMINO_CUSTOM_COMMAND_SCRIPT "$PARAM2" "$PARAM3" "$PARAM4" "$PARAM5" "$PARAM6"
      DebugText "-- after executing custom command"
    else
      DebugText "Invalid PARAM1:" [$PARAM1]
      echo
      echo "Invalid command:" [$PARAM1]
      usage
      exit 1
    fi
    ;;
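To illustrate the idea in isolation (function and variable names here are illustrative only, not taken from the actual rc_domino_script): a dispatcher that falls back to an extension directory when a command is not built in.

```shell
#!/bin/sh
# Sketch of the plug-in idea: if a command is not built in, look for an
# executable script of the same name in an extension directory and run
# it with the remaining arguments. Names are illustrative, not the real
# start-script code.

run_custom_command() {
  basepath="$1"
  cmd="$2"
  shift 2
  script="$basepath/$cmd"
  if [ -x "$script" ]; then
    # custom scripts inherit the caller's environment variables
    "$script" "$@"
  else
    echo "Invalid command: $cmd"
    return 1
  fi
}
```

A custom command would then just be an executable file in that directory; e.g. with an "archive" script in /opt/domino/custom, `run_custom_command /opt/domino/custom archive mail` would invoke it with "mail" as its first argument.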
Daniel Nashed 18 December 2013 00:16:55
We got questions about this from many customers and there is a technote on the way. The following link already provides the good news we are waiting for. http://www.lotus.com/ldd/fixlist.nsf/8d1c0550e6242b69852570c900549a74/de0329821264ceff85257c130056adda?OpenDocument The same is also supported in 8.5.3 FP6 -- Wow, I did not expect that! That's good news! http://www.lotus.com/ldd/fixlist.nsf/8d1c0550e6242b69852570c900549a74/2ca7aa993e50ba8285257c1d006472bd?OpenDocument Thanks IBM!!!