Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed


Why I think we don’t need to switch to 64bit now

Daniel Nashed  23 April 2009 15:17:00

A couple of customers and partners are asking me every week about native Domino 64bit.
Here is a post I wrote in a non-public discussion database. To avoid answering the question over and over again, here is my current take on native Domino 64bit.

--- in short ---

Using a 64bit OS with Domino 32bit already gives you most of the benefits in current releases.
Hopefully we will get a 64bit-optimized native Domino version, but right now there is not much benefit for 90% of Domino environments.

--- long version ---

It is important for all ISVs to start looking into 64bit native Domino to be prepared for the future.
IMHO, right now, especially on Win64, using the native version does not give you much benefit in standard environments. 64bit native Domino only helps if you need a LOT of memory.

We have seen large Domino HTTP servers running out of memory that benefit from native 64bit.
Also LEI servers on native 64bit gain performance. But for most other servers the gain over 32bit Domino is quite small.

The future is in the 64bit native application space, and Domino on AIX and Windows is ready as a platform. I am currently working with multiple ISVs to help port their applications to Win64, and I have ported some of my applications to 64bit already.

It sounds harder than it is for a plain Domino C-API application, if you applied best practices when you wrote it. The devil is in the details when you use external libs or other software that is invoked.

The real benefit you gain comes through using a 64bit OS. Especially on Windows, you move up from a total limit of 2GB for shared and local process memory to a 4GB address limit.

By default, even 8.5 only allows 2GB of shared memory with Domino 32bit on Win64.
You have to set ConstrainedSHMSizeMB=3072 to allow more memory.
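For reference, the notes.ini change might look like this (a minimal fragment; ConstrainedSHMSizeMB=3072 is the setting named above):

```
; notes.ini fragment for Domino 32bit on a 64bit Windows OS
; Raise the shared-memory cap above the 2GB default
ConstrainedSHMSizeMB=3072
```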

IMPORTANT: All binaries need to be linked with the /LARGEADDRESSAWARE flag to support more than 2GB of memory.
I blogged about this a couple of days ago.

So once you are running a 64bit OS, you already have most of the benefits, because the OS allows the application to use more memory.
There is a 64bit/32bit test result which shows there is not much difference in memory usage, CPU and I/O, at least on Windows.

The memory that Domino will utilize, even for larger servers, will most of the time not exceed 3GB.
So there is not much you currently gain from moving to native 64bit.

Compared to other applications that just throw in more memory to reduce the I/O load, Domino 8.0.x and 8.5.x introduce dramatic I/O reduction through optimization in many areas of the server, without increasing the need for more memory. Most effort is currently put into this area, which IMHO is the right way to go.

Just to name some of the features that reduce I/O:
Design note & data compression, SCR, OOO integrated into the mail router, on-demand collations, streamlined design access, translog optimization, mail router optimization, scan-for-changed-folder optimization, ... and last but most important, DAOS.

This provides much more TCO reduction than just allowing customers to throw more memory into a 64bit environment.

I have a detailed Speed Geeking session about 64bit available for download on my homepage, with some more details.

IMHO it is important to be prepared for 64bit and to start looking into it right now.
But most customers moving to native 64bit right now expect a benefit that is not yet there.

That's especially true for Win64; on AIX it looks a bit different.
There is a performance improvement, as you see in the performance test I referenced, and also memory on AIX is segmented into 256MB blocks, which causes some limitations for a 32bit application.
That goes away with native 64bit. So IMHO it is more important to look into native 64bit on AIX than on Windows.

But yes, there are customers asking for 64bit, and most of this is driven by other vendors using 64bit as a selling argument.
In fact, if you read the details, they are just throwing more memory into their Jet engine to compensate for poor application design. OK, now I am a bit off topic, but let me paste an interesting article from a Microsoft blog dealing with their approach to I/O, to show the difference from the Domino approach. I marked the important sections (I did this a while ago when looking into 64bit and put some resources together to be prepared for when customers ask).

Side note: There is one more interesting 64bit platform. zLinux switched completely to 64bit, and there are very promising and interesting performance gains on that platform through this move.

I am waiting to hear what IBM is planning in the area of "64bit native Domino application feature exploitation" once they are done with all the great optimization they have done so far in 8.0.x and 8.5.x.

-- Daniel

Here is what M$ does to improve I/O.
IMHO the Domino way is the better approach.
But at some point Domino should start benefiting from native 64bit.
The good news is that we get better I/O and scalability improvements without the need to push customers to 64bit.

Understanding Exchange Server 2007 I/O improvements from 64 bit

-- extract --
Exchange 2007

A major motivation for Exchange to use 64-bit is not the ability to crunch bigger numbers, but to get more memory. In fact, we can access a lot more. Most 64-bit computers on the market can address a few hundred GBs of RAM. As mentioned before, more RAM means we can keep data in memory longer and save repeated trips to disk. But doesn't RAM cost money? Yes it does, but it's much cheaper than disk up to about 32 GB. Based on this, to optimize for IO reduction we recommend about 5MB of Jet database buffer cache for each user plus 2GB. So for 4000 users, you'd want 20GB + 2GB or about 22GB. This would mean 20GB of Jet cache vs. 1GB in Exchange 2000/2003. For our lab tests, we started at 1.0 IOPS and went to .54, entirely in reduction of reads; a MAJOR savings.

Our next bit of magic was to increase the number of storage groups. Moving from having 1 storage group (logs) for 5 databases to having a 1:1 relationship means more transaction logs (but not more files). Overall, there's no net change in bytes (same number of users). In Exchange 2000/2003, large servers typically deployed with 1000 users per storage group and the checkpoint depth was 20MB. This corresponds to 20KB of checkpoint per user. This limited the number of pages that could be delayed. By deploying more storage groups, we can delay more pages and get more batching and optimization. Also, the parts of the database that store views can store more messages on a single page. In our lab test (as listed above) this moved our I/O from .54 IOPS to .43 IOPS, stemming from a drop in write I/Os.

We didn't stop there. Now that the cache was bigger, we also increased the page size from 4KB to 8KB. The page size is the size of 'packets' of data that Jet stores on disk. It is the minimum size Exchange will fetch from the disk. The problem with this is that in some cases we might need all 8K (a message body) and other times we might not (a simple message with no body). Overall each page has twice as much data, but we can only have 1/2 as many pages in the Jet cache. Because of this, 8K pages *could possibly* hurt instead of help. Having a larger cache decreases the chances of this significantly by helping keep useful pages longer (minimizing the risk that we don't have the useful page in memory). The huge positive of 8K pages is that our internal structures in the database (trees) can be shorter. Shorter trees mean less I/Os to get to the pages that store actual user data. We also get the added benefit of storing more in the same place. In Exchange 2000/2003, we stored messages and messages bodies in separate locations, meaning at least 2 disk I/Os. Now, if the message and the body is less than 8K (our data indicates around 75% of messages are less than 8K) we store them in 1 location. This means savings on writes and savings on reads. In our lab tests, this change took us from .43 to .27 IOPS!

1. Nathan T. Freeman  24.04.2009 10:05:00  Why I think we don’t need to switch to 64bit now

Daniel, the quoted article blew my mind when I read it in the non-public forum. So I blogged about it myself and ended up with a reply from the original author.

See here: { Link }

2. DrAPI  24.04.2009 11:45:34  Why I think we don’t need to switch to 64bit now

The major problem in migrating C API programs is that they usually depend on other libraries that have not yet been migrated to x64, so you need to apply tricky workarounds. Even IBM didn't migrate all of their own Domino apps to x64. If I were an admin and I needed to migrate to x64, and in this environment I had an antivirus, right now I would not do it, because one single line of code not properly migrated would crash my Domino. So I would wait for this AV to be tested by other people.

3. Daniel Nashed  24.04.2009 11:51:05  Why I think we don’t need to switch to 64bit now

@Nathan, thanks for the info. Some clarification. My main point about the I/O optimization was that IBM and Microsoft are going a different direction. IBM is using a holistic approach trying to reduce the I/O load in all parts of the product. Microsoft seems to tune the back-end engine and puts more memory into it to benefit from the faster memory performance.

The right thing to do is to take both approaches. But you cannot do both properly at the same time.

IBM decided to invest in the backend which brings many more advantages than just reducing the I/O load but with the same hardware.

They are preparing for the 64bit move by laying out the foundation without pushing customers to move now, while still already giving them a lot of TCO benefits -- maybe even more than on the other side, but that is hard to measure.

The next logical step is to benefit from the additional hardware resources available. I am not too worried about RAM prices if this reduces I/O in a way that lets us get cheaper SAN storage. This should most of the time be a good investment. But again, you cannot do everything at the same time. IBM can learn from some of the low-level optimization Microsoft did with Exchange when it comes to pages etc. On the other side, we have to keep in mind that Domino is a cross-platform product, and changes have to be made in a way that works well for all platforms, or at least does not cause issues for other platforms.

You also have to keep in mind that you get 80% of the benefits from 20% of the resources in most cases. In the case of memory and disk this ratio is not always true. Caching works in an 80/20 way, but some other optimizations might give better results with large memory.

Personally I think IBM is going the right way reducing the I/O in the way they are moving right now but I hope we get more optimization in Domino 9 for 64bit and larger available physical memory.

-- Daniel

4. Bruce Lill  13.07.2009 18:57:14  It may be perception

I have 3 customers that have migrated their mail servers to Win2003 64bit and Domino 64bit. No change in hardware was made, as each server had 8GB RAM and quad CPUs. These run mail only, in a clustered environment with 1000-5000 users.

The migration was from Win2003 to Win2003 64bit; then 2 weeks later Domino was upgraded to 64bit. The users were told it was happening, and all were happy with the performance after the upgrade.

iNotes users were the happiest.

Was it just that they thought it should be faster, or was it real? I don't know. We weren't able to do performance testing before and after, as management said to just do it since it was a free upgrade. But they are happy now, and that is what really counts.



