Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...

Daniel Nashed

Is anyone using Grafana Loki for Domino Logs

Daniel Nashed – 6 February 2026 01:53:52

Grafana Loki can be a helpful tool to collect, search and visualize logs.
Has anyone looked into it already, either in general or for Notes logs?


I have added an Alloy collector to collect Notes console logs, and I am considering whether to annotate the log lines in some way.
If you are using it, I would be very interested to hear from you.
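
For reference, a minimal Alloy pipeline for tailing a console.log could look like the sketch below. The data directory path and the Loki URL are assumptions for illustration, not taken from my actual setup:

```alloy
// Sketch only -- adjust the path and Loki endpoint to your environment.
local.file_match "domino_console" {
  path_targets = [{"__path__" = "/local/notesdata/console.log"}]
}

loki.source.file "domino_console" {
  targets    = local.file_match.domino_console.targets
  forward_to = [loki.write.default.receiver]
}

loki.write "default" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"
  }
}
```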

Besides Domino logs, I have looked into writing NGINX logs in JSON format, so they get pushed into Loki in a structured way.


Here is an NGINX configuration example:


log_format loki_json escape=json
  '{'
    '"time":"$time_iso8601",'
    '"remote_addr":"$remote_addr",'
    '"request":"$request",'
    '"status":$status,'
    '"method":"$request_method",'
    '"uri":"$uri",'
    '"bytes_sent":$bytes_sent,'
    '"request_time":$request_time,'
    '"upstream_time":"$upstream_response_time"'
  '}';

access_log /nginx-logs/access.json loki_json;
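
With escape=json and this format, each access-log line is one JSON object per request, something like the following (illustrative values):

```json
{"time":"2026-02-06T01:53:52+01:00","remote_addr":"203.0.113.10","request":"GET /names.nsf HTTP/1.1","status":200,"method":"GET","uri":"/names.nsf","bytes_sent":1024,"request_time":0.012,"upstream_time":"0.010"}
```

Note that $upstream_response_time is kept quoted, because NGINX emits "-" when no upstream was involved, which would not be valid as a bare JSON number.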


And I have uploaded an Alloy collection configuration here -->
https://github.com/nashcom/domino-grafana/blob/main/loki/alloy/nginx_config.alloy
It uses environment variables that are set by my exporter at startup.

I have played with the NGINX logs, and I could imagine getting HTTP request logs from Domino as well.


Domino log.nsf with event metadata?


But discussing this with an admin buddy today, we had another idea which could be interesting.
Instead of reading console.log, we could read from log.nsf and get the event types, severity etc. from the log document.


Additional logs?

We could do the same with mail routing logs, replication logs and security logs.
Would it make more sense to get structured data with event type and severity this way?

So far I am just collecting console.log. But we could write the other log output out to JSON files and collect those, eventually with one log file to scrape per log type.
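
As a rough sketch of what "writing log output to JSON files" could mean, the snippet below turns a console line into a JSON record. The line format in the regex is an assumption for illustration (bracketed PID/TID prefix, timestamp, then an optional task name), not a validated parser for real Domino output:

```python
import json
import re

# Assumed console line shape: "[PID:TID-NNNN] DD.MM.YYYY HH:MM:SS   Task: message"
LINE_RE = re.compile(
    r"^\[(?P<pid>[0-9A-F]+):(?P<tid>[0-9A-F]+)-[0-9A-F]+\]\s+"
    r"(?P<ts>\d{2}\.\d{2}\.\d{4} \d{2}:\d{2}:\d{2})\s+"
    r"(?:(?P<task>[A-Za-z ]+):\s+)?(?P<msg>.*)$"
)

def console_line_to_json(line: str) -> str:
    """Return a one-line JSON record for a console line.

    Lines that do not match the assumed shape are kept as a plain message,
    so nothing is lost when the format differs.
    """
    m = LINE_RE.match(line)
    if not m:
        return json.dumps({"msg": line.rstrip()})
    # Drop groups that did not match (e.g. lines without a task prefix).
    rec = {k: v for k, v in m.groupdict().items() if v}
    return json.dumps(rec)
```

A scraper like Alloy could then tail one such JSON file per log type and map fields like "task" to labels.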

In contrast to the Splunk Universal Forwarder, which provides a way to push data to the forwarder directly, with Loki we need a file to scrape.
But the same kind of interface could later be used for other integrations.


There is also a C-API way to register for and retrieve events. I would need to look into whether this might be the better integration.
But first I am looking for feedback on what type of logging admins would be interested in pushing to Loki, Splunk or other tools.

I looked into Splunk earlier. It has a simple-to-use interface to talk to its universal forwarder.


But I want to establish purpose before action.

a.) Would you be OK with just a log file with every console message?
b.) Or would you want more granular, categorized logs filtered by the Domino event generation, captured either via the C-API or read from log.nsf?


Right now I am just using simple log forwarding.


Image:Is anyone using Grafana Loki for Domino Logs

In addition we could turn message logs and replication logs into Loki data.




Image:Is anyone using Grafana Loki for Domino Logs
Comments

1) Detlev Pöttgen – 06.02.2026 8:06:26 – Is anyone using Grafana Loki for Domino Logs

I would definitely like to use Grafana Loki.

From my perspective, I see the data source more at the file level (Console Log, HTTP Access Logs, NSDs, Traveler Logs, etc.).

Using the Log.nsf would certainly be 'nice,' but as long as not everything is written to it—such as Java stack traces or specific messages—the console log file remains the better source.

Customers who use Splunk typically collect and evaluate all available log sources, not just the items found in the Log.nsf.

It would be possible to categorize / label individual log entries based on the task that generated them and define specific processing rules accordingly. You can use Promtail or the Grafana Agent to extract those task names from the log line and turn them into labels, which makes filtering extremely fast.
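
The task-name labelling Detlev describes could look roughly like this in an Alloy processing stage. The regex assumes the same bracketed console line shape as above and is illustrative, not validated against real output:

```alloy
loki.process "domino_tasks" {
  // Extract the task name (e.g. "Router", "HTTP Server") from a console line
  // of the assumed shape "[pid:tid-nn] date time  Task: message".
  stage.regex {
    expression = "^\\[[^\\]]+\\]\\s+\\S+\\s+\\S+\\s+(?P<task>[A-Za-z ]+):"
  }

  // Promote the extracted field to a Loki label, which makes filtering fast.
  stage.labels {
    values = { task = "" }
  }

  forward_to = [loki.write.default.receiver]
}
```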
