Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed


Sub domain DNS at Digital Ocean for Domino CertMgr DNS-01 requests

Daniel Nashed  6 December 2021 10:59:15
What can you do if your DNS provider does not support a DNS API?
There are a couple of options. And there is one I have been using for a while for testing the DNS TXT API at Digital Ocean.

You can delegate a sub domain to Digital Ocean and use DNS challenges for the sub domain.
And you can even redirect ACME requests for the main domain via CNAME records to that validation domain.

Digital Ocean is the only provider I found that allows sub domain DNS.

Here is my DNS configuration. I just took the sub domain and delegated it.
This is done with name server (NS) records for the sub domain, as you can see below:

$TTL 600

; SOA Records
@                IN        SOA 2021111801 86400 10800 3600000 3600

; NS Records
@                IN        NS
@                IN        NS
@                IN        NS

acmedns        60        IN        NS

digitalocean                IN        NS
digitalocean                IN        NS
digitalocean                IN        NS

On the Digital Ocean side you set up a free account and add a sub-domain configuration.

You can now create any type of DNS record for the sub domain, including DNS TXT records for ACME DNS-01 challenges.

Once you have created the account, you just need to get an API token to leverage the DNS API integration:

You can download and import the DXL file and start right away.

And you can also use the sub domain for validating any other domain, by creating CNAME records pointing to a delegation DNS record in your new sub domain delegated to Digital Ocean.
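For example, a CNAME record along these lines (host and domain names are placeholders) redirects the ACME DNS-01 lookup for the main domain into the delegated zone, where CertMgr can create the TXT record via the API:

```
; In the main domain's zone -- created once, no DNS API needed:
_acme-challenge.www              IN  CNAME  www.acmedns.example.com.

; In the delegated zone at Digital Ocean, CertMgr creates the TXT record:
www.acmedns.example.com.     60  IN  TXT    "<ACME DNS-01 validation token>"
```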

Here is the official Let's Encrypt documentation -->

I have ready to go configurations for Digital Ocean and Cloudflare to work with delegated DNS-01 challenges.
If you think it makes sense to add them to the GitHub repository, I would add them to a completely separate section to not confuse admins with those special configurations.


There is another way: you can set up a special DNS server just for delegated DNS challenges.
There is a ready-to-go configuration. And here is the documentation.
You can see I also have a DNS server sub domain delegation for ACME DNS in this lab domain.

CertMgr works with these types of configurations.

There are a couple of other integrations. I wrote one using the AWS CLI command-line interface.
And there is one for native Bind 9 integration.

So far I have not heard requests for those or other DNS providers.

Using DNS-01 challenges is much more convenient than HTTP-01 challenges.
And it provides a lot of flexibility, including wildcard certificates.

Did you know that CertMgr also supports SANs with different domains at different providers?
And you could even mix DNS-01 and HTTP-01 challenges.

-- Daniel

Image:Sub domain DNS at Digital Ocean for Domino CertMgr DNS-01 requests

One-Touch Domino JSON file validation in Domino 12.0.1

Daniel Nashed  5 December 2021 12:51:05

One-Touch Domino is one of my favorites features in Domino 12 when it comes to automated deployments.

With Domino 12.0.1 there is a new validation tool for JSON files. It makes a lot of sense to validate before starting your server setup.

There are two options

1. Check if the JSON format is valid --> that's something jq could also do for you, and I am using this validation in my "domino setup json" start script option
2. Check if the JSON file is valid based on the current schema! --> That is really helpful when writing JSON configuration files.
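For option 1, a quick syntax-only check works without Domino at all. A minimal sketch (jq, or python3 as a fallback, are assumptions about the installed tools; setup.json is just a demo file created on the fly):

```shell
# Create a trivial JSON file just for this demonstration
echo '{ "serverSetup": { } }' > setup.json

# jq parses the file and signals validity via its exit code;
# python3 -m json.tool is a stdlib alternative if jq is not installed
if command -v jq > /dev/null; then
  jq empty setup.json && echo "setup.json: valid JSON"
else
  python3 -m json.tool setup.json > /dev/null && echo "setup.json: valid JSON"
fi
```

Note this only checks syntax; the schema validation of option 2 needs the Domino tool.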

On Windows you can run the binary directly from the program directory. And most of the time the notes.ini is in the server binary directory.
The validation tool searches for the schema in the current directory.

On Linux this needs two minor tweaks in beta 2.

1. First of all you need to add a startup link for the servertask, in the same way as for other servertasks invoked from the command line.

cd /opt/hcl/domino/bin
ln -s tools/startup validjson

2. Next you have to copy the schema JSON file to your Notes data directory like this:

cp /opt/hcl/domino/notes/latest/linux/dominoOneTouchSetup.schema.json /local/notesdata

Once done, you can run the validation tool.

Because it still needs those two changes, I have not added this check to my "domino setup" start script command yet.
It would need root permissions to create the startup link.
But if it is only used in my program or a similarly controlled environment, I could also set the path accordingly to run the program and just copy the schema JSON.

So instead of the symbolic link you could also export the lib path and run the binary from the binary directory.

export LD_LIBRARY_PATH=/opt/hcl/domino/notes/latest/linux
/opt/hcl/domino/notes/latest/linux/validjson setup.json

-- Daniel

--- Validation output --

[002434:000002-00007FEC04510740] validates input json file and optionally validates the input file against a JSON schema:
[002434:000002-00007FEC04510740] Usages:
[002434:000002-00007FEC04510740]     validjson fileToValidate.json                  -> validates json without a schema
[002434:000002-00007FEC04510740]     validjson fileToValidate.json -default         -> validates json against default dominoOneTouchSetup.schema.json
[002434:000002-00007FEC04510740]     validjson fileToValidate.json schemaFile.json  -> validates json against the specified schema

/opt/hcl/domino/bin/validjson setup.json
[002849:000002-00007F2B5E92F740] Success - setup.json is valid

/opt/hcl/domino/bin/validjson setup.json  -default
[002795:000002-00007FE83E78F740] Success - setup.json is valid with respect to schema dominoOneTouchSetup.schema.json

Proxmox Virtualization -- Automatically deploying new LXC containers

Daniel Nashed  5 December 2021 11:21:27

Proxmox is a very interesting hypervisor based on open source technology.

It comes with ZFS support out of the box and provides VM virtualization and also LXC containers.
LXC containers are a lightweight way to run Linux servers -- but you should be aware that the kernel from the Proxmox host is used in this case!
Domino currently does not support the 5.x kernel that comes with the underlying Debian 11 used by Proxmox!
But for a lab environment this is still a great choice and it works.

I looked into the Proxmox APIs and ended up with the command-line options instead.
It is a wild mix of different command-line operations.
Sadly the hook scripts inside the container did not work. So I run commands inside the container to install the OpenSSH server (sshd).

The following script creates new LXC instances with Rocky Linux.

The only command that let me specify the disk size was "pct".
And it took a while to get this complete command line working.
So if you are looking into Proxmox and want automation, this script can be a good starting point for your own ideas...

On my Proxmox 7.1.7 server it takes less than 15 seconds until I can log into the server using an existing SSH key!

-- Daniel


# -------------------------------------------------------
# Example defaults -- adjust for your environment

OS_IMAGE=local:vztmpl/rockylinux-8-default_20210929_amd64.tar.xz
OS_TYPE=centos
DISK_POOL=local-zfs
DISK_SIZE=10
CPU_CORES=2
MEMORY=2048
SWAP=512
DNS_SERVER=192.168.1.1
DOMAIN=nashcom.lab
GATEWAY=192.168.1.1

SSH_PUB_KEY="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILkOzdFq8JOtYINEfBa+TFYTMZu0AmR1201uElLDominoaN nsh@nashcom.lab"

# -------------------------------------------------------

# Get next free LXC ID
LXC_ID=$(pvesh get /cluster/nextid)

# First parameter: host name, second parameter: IP address

if [ -n "$1" ]; then
  HOSTNAME=$1
fi

if [ -z "$HOSTNAME" ]; then
  echo "No host name specified!"
  exit 1
fi

if [ -n "$2" ]; then
  IP_ADDR=$2
fi

if [ -z "$IP_ADDR" ]; then
  echo "No IP address specified!"
  exit 1
fi

# Optional: pass a root password for the container via environment
if [ -n "$ROOT_PASSWORD" ]; then
  PASSWORD_OPTION="--password $ROOT_PASSWORD"
fi

print_delim ()
{
  echo "--------------------------------------------------------------------------------"
}

wait_for_ping ()
{
  local count=0

  while [ $count -le 10 ]; do
    ping -c 1 $IP_ADDR > /dev/null 2>&1

    if [ "$?" = "0" ]; then
      return 0
    fi

    sleep 1
    count=`expr $count + 1`
  done

  return 1
}

# Create server with predefined specifications and requested IP
pct create $LXC_ID $OS_IMAGE \
  --rootfs $DISK_POOL:$DISK_SIZE \
  --cores $CPU_CORES --memory $MEMORY --swap $SWAP \
  --hostname $HOSTNAME --nameserver $DNS_SERVER --searchdomain $DOMAIN \
  --net0 name=eth0,bridge=vmbr0,firewall=0,gw=$GATEWAY,ip=$IP_ADDR/24,type=veth \
  --ostype $OS_TYPE --start 1 --onboot 1 --description "auto generated" $PASSWORD_OPTION

print_delim

# Wait until the container answers ping, then configure SSH access
wait_for_ping

lxc-attach -n $LXC_ID  -- yum install -y openssh-server
lxc-attach -n $LXC_ID  -- systemctl enable --now sshd
lxc-attach -n $LXC_ID  -- ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
lxc-attach -n $LXC_ID  -- bash -c "echo $SSH_PUB_KEY > ~/.ssh/authorized_keys"

print_delim

echo "LXC Container created"
echo "LXC ID:     $LXC_ID"
echo "Host name:  $HOSTNAME"
echo "IP address: $IP_ADDR"

OpenSSL versions software vendors are using

Daniel Nashed  5 December 2021 10:02:08

When looking into Splunk over the weekend, I realized again that software vendors are often using quite old versions of important security-related software like OpenSSL.
So I looked into the OpenSSL history and found out that OpenSSL 1.0.2 was the last version with FIPS 140 support.

After the removal of FIPS 140 support in 1.1.0, it is back in OpenSSL 3.0.

So this hopefully opens the door for many software vendors to switch to a newer OpenSSL version.
My current Splunk server on Docker uses: "OpenSSL 1.0.2y-fips  16 Feb 2021".

This is quite up to date from a security patch point of view.
But when you look at the major improvements in more recent OpenSSL versions, it is really time to move.

I have already built my nshcertool on OpenSSL 3.0, and there have been only some minor changes I had to take care of.
It now compiles and works with both versions.

Notes/Domino is currently using OpenSSL 1.1.1 and now that OpenSSL 3.0 is available switching to OpenSSL 3.0 is the next logical move.

Red Hat switched to OpenSSL 3.0 with RHEL 9/CentOS Stream 9, as I mentioned in an earlier post.
So this might be another reason for software vendors to look into OpenSSL 3.0 soon.

On the other hand, relying on the OpenSSL version installed with the operating system might not be the best strategy in most cases.
But shipping your own OpenSSL version requires building it in a way that it can run on the oldest OS version you support.

-- Daniel

Major version releases
Version  Released           Last minor version
1.0.2    22 January 2015    1.0.2u (20 December 2019)
1.1.0    25 August 2016     1.1.0l (10 September 2019)
1.1.1    11 September 2018  ongoing development
3.0.0     7 September 2021  ongoing development

Additional information:

Notes/Domino 12.0.1 arriving before x-mas

Daniel Nashed  3 December 2021 11:40:38
If you wonder when the new Notes/Domino 12.0.1 release will ship ...  And also what is new in Notes/Domino 12.0.1 ...

Here is the launch event -->

Like Domino 12.0 the new 12.0.1 version is a security focused release.

You can expect new and improved features introduced in 12.0.

And I also like the new look of the client with the new icons!

I have done presentations about CertMgr and Domino backup for 12.0.1 already... And also posted a lot during beta time.

My servers are running beta 2 in production and I have customers waiting for 12.0.1 GA to start their updates.

In contrast to some other software, Domino allows in-place updates and you can expect everything that worked before to continue to work!

I have worked with business partners to get their lab environments updated during the beta phase.
And especially the Docker based environments took just minutes.

Yes, add-on software has to be taken care of, especially backup and anti-virus tools.

All of my tools are already supported on 12.0.1. And they all worked without any change required.

-- Daniel

Image:Notes/Domino 12.0.1 arriving before x-mas

How to create exportable TLS Credentials with Domino 12.0.1

Daniel Nashed  25 November 2021 07:59:57

CertMgr in Domino 12.0.1 introduces export/import functionality. You can import existing PEM, PKCS12 and kyr files.

If you mark them as exportable during import, or create an exportable key, you can export them later.

This works with manual and also ACME (Let's Encrypt) flows.

How does this work?

In Domino 12.0 the certstore.nsf introduced secure encryption of private keys for the CertMgr server and all servers listed in "Servers with access".

The internal format is PEM. But it can only be decrypted by the configured servers -- the private key cannot be exported!

In Domino 12.0.1 an exportable key can be created or imported.

This private key is stored separately in encrypted PEM format (exportable key field in Security/Keys tab).

The PEM private key is encrypted with AES 256 using a pass phrase.
A secure password/pass phrase is required for all create and export operations.

The export functionality supports PEM and PKCS12. Both are using up to date AES 256 encryption.

Some older applications (especially Java) do not support those more modern formats.
There is a notes.ini parameter to switch back to the older, less secure encryption: PKCS12_EXPORT_LEGACY=1.
This notes.ini parameter would be needed on the client for a legacy export.

By the way: the OpenSSL 3.0 command line also uses the same new encryption by default. And if you try to export/convert with OpenSSL 3.0, you need the -legacy option.
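A sketch of what that looks like on the OpenSSL 3.0 command line (file names and the passphrase are just examples; a throwaway key/cert is generated only for the demo):

```shell
# Create a throwaway key and self-signed cert just for this demo
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo" -keyout key.pem -out cert.pem 2> /dev/null

# OpenSSL 3.0 encrypts the PKCS12 export with AES 256 by default
openssl pkcs12 -export -in cert.pem -inkey key.pem \
    -passout pass:demo-passphrase -out keyfile.p12

# Older applications (especially Java) may need the weaker legacy
# encryption -- the -legacy switch only exists in OpenSSL 3.0 and later
if openssl version | grep -q "^OpenSSL 3"; then
  openssl pkcs12 -export -legacy -in cert.pem -inkey key.pem \
      -passout pass:demo-passphrase -out keyfile-legacy.p12
fi
```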


The new functionality is only available if you have a Notes 12.0.1 client installed. The functionality is implemented in the template in combination with a new C-API call in the Notes client.

Let's create a private key and use it in a manual flow.

1. The private key is created and encrypted in the internal format to be used by the CertMgr server and the "Servers with access", and also as an encrypted, exportable PEM formatted key.

You can create RSA or ECDSA keys in the same way.

Image:How to create exportable TLS Credentials with Domino 12.0.1

2. Now that the key is created in an exportable format, you can either use it in an ACME flow.

Or you can use it in a manual flow as shown below.

When you switch to manual, CertMgr creates a CSR when you submit the request.

Image:How to create exportable TLS Credentials with Domino 12.0.1

3. Once the operation completes, you find the CSR in the TLS Credentials document.

A "Copy CSR" action shows up, allowing you to copy the CSR to the CA of your choice.

4. When your CA has signed the request, you can paste the certificate using the "Paste Certificate" action and "submit" the request again.

Before submitting the request make sure to select the servers which should be able to use the new TLS Credentials document.


The pasted certificate should contain the leaf certificate and all intermediate or root certificates, which are not in the certstore.nsf trust roots list.

In case the root and intermediate certificates are among the trusted roots, the chain is automatically completed!

This also works if your certificate list is not sorted in "leaf" to "root" order!
Duplicate or non-matching intermediate certs are also filtered out automatically.

The logic finds the leaf certificate for your private key and can handle the certificate chain for you.

This auto chain completion and sorting functionality is already available in Domino 12.0 and now also works with the new export/import functionality.

Image:How to create exportable TLS Credentials with Domino 12.0.1

After submitting the request, your ready-to-use TLS Credentials document should look like this.

Image:How to create exportable TLS Credentials with Domino 12.0.1

The TLS cache introduced in Domino 12.0 automatically detects new or updated TLS Credentials and reloads the cache.

Additional tip: In Domino 12.0.1 you can check loaded TLS Credentials via

load certmgr -showcerts


The export/import functionality completes the existing functionality and is a great improvement in Domino 12.0.1.

The C-API call used in the template is in a script library and could be used for your own integration projects.

The call also uses the auto chain completion and auto chain sorting!

Revisiting Anti-Virus for Domino - Do you have feedback?

Daniel Nashed  21 November 2021 16:50:47

ICAP interface -- but I have not found the right vendor to use yet

I have been looking for a good anti-virus solution for my own environment running on Linux.

There is an ICAP interface used by many big vendors. But most of them offer it for OEM solutions only.
And it would take some effort to implement the ICAP interface. There is an open source project which includes a client component I could use.
But even then, the challenge remains to get an anti-virus solution for a smaller environment running on Linux.

ClamAV integration
So I came back and looked again into the ClamAV interface I wrote some time ago, to integrate directly with clamd over TCP/IP.
It turned out that ClamAV is now owned by Cisco -- I wasn't aware of that!

And it is still the open source anti-virus implementation out there -->

But I still don't know how good the detection rate is compared to others.
When I first looked at it, the detection rate wasn't what I expected.
As an additional component to scan databases, ClamAV would certainly be an option!
Maybe they have improved in the last years with help from Cisco?

Scanning databases periodically

I added some scan options, for example scanning documents modified in the last n days, to permanently check current attachments.
This could run incrementally during the week, with a full scan -- or a scan covering a year -- over the weekend.
This would ensure you also catch viruses which were not detected when they came in.

Running ClamAV in a Docker image

When I first used it, I had to install it on my own on Windows and Linux.

It can run on the same or another machine and communicates over TCP/IP.

I looked into the Docker image, and it is very easy to install and use.
You just run it and can connect to it from your Domino server running on the same machine.
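A minimal sketch of that setup, assuming Docker is installed (clamav/clamav is the official image; clamd listens on TCP port 3310 by default, and nc is used here just for a quick protocol check):

```shell
# Start clamd in a container; the signature database is downloaded
# on the first start, which can take a few minutes
docker run -d --name clamav -p 3310:3310 clamav/clamav

# clamd speaks a simple text protocol: once the signatures are
# loaded, a PING command is answered with PONG
printf 'PING\n' | nc 127.0.0.1 3310
```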

Feedback on ClamAV?

I would be very interested in your current experience with ClamAV, or what you would be interested in.
And I would personally be interested in having someone test ClamAV against an existing quarantine database, to see how good the ClamAV detection rate is today.

This would be a local scan. No data would leave the network.

VirusTotal integration

Another component which could be interesting is VirusTotal and other cloud based services.
What I have already implemented is adding a SHA1 hash for all attachments, and there is an @Formula to allow direct lookups from every mail to VirusTotal by querying the hash of the file.
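Such a hash lookup only needs the file's digest, which can be computed locally, for example with sha1sum from GNU coreutils (attachment.bin is just a placeholder created for the demo):

```shell
# Create a demo file standing in for an exported attachment
echo "demo attachment content" > attachment.bin

# Compute the SHA1; only the hash -- not the file itself --
# is then looked up on VirusTotal
sha1sum attachment.bin
```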

I also looked into their API and other vendors' APIs. They are all REST based and easy to integrate leveraging libCurl. In fact, I already have code from another project (the Tika benchmarking tool already sends attachments from Notes to Tika).

But I am not sure I would like to send attachments to VirusTotal or other services. There might be other local applications from different vendors supporting similar lookups on premises.

The hash lookup could be an interesting option and it would not expose much data. If the hash is not known on the other side, VirusTotal cannot do much with your hash.

VirusTotal is only free for a very low number of requests per day. And the license terms of the free offering have to be respected.

That's why I only have a user-driven @Formula integration so far.

Feedback on VirusTotal and other services?

What do you think about it? Do you use VirusTotal or similar tools?

What about ClamAV? Does anyone want to benchmark it against the scanner you use?

-- Daniel

Image:Revisiting Anti-Virus for Domino - Do you have feedback?

Get peer IP address and DNS name from OpenSSL BIO

Daniel Nashed  21 November 2021 15:29:19

This took me a while to figure out today.

I needed the IP address from an OpenSSL BIO holding the TLS session for an incoming server connection.

Finding the right function was already an adventure. Stack Overflow and friends have not been a big help here.

But once you know that the file descriptor connected to the BIO is used to get information about the peer's (client) IP address, the next challenge is how to get the IP address from it.

The function getpeername() is the key, but it always requires a buffer of sufficient size for an IPv6 address -- even if you only want to support IPv4 addresses.

Once you know how it works, adding full IPv6 support isn't much extra effort, knowing which structures to fill and read.
It's not obvious, and maybe this can save the next developer googling it some time.

Combining it with a dual IP stack aware DNS lookup routine adds support for IPv4, IPv6 and hybrid mode to my current project.

See the code below for Linux and Windows.

-- Daniel

/*
  Required headers (Linux): openssl/bio.h, sys/socket.h, netinet/in.h,
  arpa/inet.h, netdb.h, string.h -- LogError() is an application-specific
  logging routine.
*/

int GetHostFromBio (BIO *pBio, char *pszHostName, int MaxLenHostName)
{
    int ret       = 0;
    int sock_fd   = -1;
    const char *p = NULL;

    /* Ensure size is sufficient also for IPv6 */
    struct sockaddr_in6 addr_in6 = {0};

    /* Use a generic sockaddr pointer to access the structure */
    struct sockaddr *pAddr = (struct sockaddr *)&addr_in6;

    /* addr_len is input and output parameter for getpeername() */
    socklen_t addr_len = sizeof (addr_in6);

    ret = BIO_get_fd (pBio, &sock_fd);

    if (-1 == ret)
    {
        LogError ("Cannot get hostname - Invalid FD from BIO\n");
        goto Done;
    }

    ret = getpeername (sock_fd, pAddr, &addr_len);

    if (0 != ret)
    {
        LogError ("Cannot get hostname - Invalid peer name returned\n");
        goto Done;
    }

    if (AF_INET == pAddr->sa_family)
    {
        p = inet_ntop (AF_INET, &((struct sockaddr_in *)pAddr)->sin_addr, pszHostName, MaxLenHostName);
    }
    else if (AF_INET6 == pAddr->sa_family)
    {
        p = inet_ntop (AF_INET6, &((struct sockaddr_in6 *)pAddr)->sin6_addr, pszHostName, MaxLenHostName);
    }
    else
    {
        LogError ("Cannot get hostname - Unknown address family\n");
        ret = 1;
        goto Done;
    }

    if (NULL == p)
    {
        LogError ("Cannot get hostname - Error converting address\n");
        ret = 1;
    }

Done:

    return ret;
}


int GetDnsName (char *IpAddress, char *HostName, int HostNameLen)
{
    int socket_error = 0;
    struct sockaddr_in  sin  = {0};
    struct sockaddr_in6 sin6 = {0};

    strcpy (HostName, "");

    if (strstr (IpAddress, ":"))
    {
        /* IPv6 */
        sin6.sin6_family = AF_INET6;
        inet_pton (AF_INET6, IpAddress, &sin6.sin6_addr);
        sin6.sin6_port = htons (0);

        socket_error = getnameinfo ((struct sockaddr *) &sin6, sizeof (sin6), HostName, HostNameLen, NULL, 0, NI_NAMEREQD);
    }
    else
    {
        /* IPv4 */
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = inet_addr (IpAddress);
        sin.sin_port = htons (0);

        socket_error = getnameinfo ((struct sockaddr *) &sin, sizeof (sin), HostName, HostNameLen, NULL, 0, NI_NAMEREQD);
    }

    if (socket_error)
    {
        /* No DNS name found -- return an empty host name */
        strcpy (HostName, "");
    }

    return socket_error;
}

Weekend fun project - OpenSSL based MiniCA in C

Daniel Nashed  20 November 2021 22:57:25

Finally I found the time to look into adding a simple web server component to my OpenSSL based tool written in C.

The tool is my personal Swiss army knife for certificate conversion and many other options.
This includes a MicroCA I am using to generate RSA and ECDSA based certs.

The missing component was a request option. Now I can post a CSR and get a certificate, including the intermediate, back.
And because all the other components are already written with OpenSSL and the C interface, I added a very basic web server component with TLS and client certificate authentication.

After testing locally, I put it up on the web and checked it with SSL Labs.
It is compiled and running on CentOS 8 Stream and OpenSSL 3.0, with a wildcard ECDSA key/cert created via Domino CertMgr with Let's Encrypt.

nshcertool isn't really available; it is more my test tool that I use in different projects.
I also wrote it as a sandbox for all kinds of OpenSSL functionality.
It's less cryptic than the OpenSSL command line, but still a complex command-line tool.

It took a while to find out all the different options and functions needed in OpenSSL for a mini web server.
But once I figured it out, it is kind of cool to also understand how web servers use TLS.

-- Daniel

Image:Weekend fun project - OpenSSL based MiniCA in C

Updating OpenSSH client and server on Windows

Daniel Nashed  20 November 2021 11:00:20

I ran into this when working on a project integrating Domino and Veeam.

The restore operation needs to issue a mount command from the Domino server OS to the Veeam server, invoking a PowerShell script.

In error situations the PowerShell commands could not write their error messages to STDERR -- no matter how much I tried to redirect the output via 2>&1 or similar methods.

STDERR output worked well on my Win2022 machine, but failed on Win2019.

The limitation is fixed in newer OpenSSH versions.

It turned out that Microsoft does not automatically update the OpenSSH server installed with Windows to later versions.

You have to download and install/update it manually to get a current version of SSH and the OpenSSH server.

By the way, a newer version will also allow you to use more modern key types like ED25519.

And it is really advisable to use current OpenSSH and OpenSSL versions in general -- also for other security fixes and new features improving your security.

Here are the versions installed by default in Windows (with a current patch level).

And I have a link for you to update those versions with a PowerShell based installer shipped with the package.

The installer will also install the OpenSSH service automatically if it is not yet installed.

Both the SSH client and server are included in one package -- in contrast, Windows splits it into client and server -- the SSH client is installed by default.

Windows 2019

OpenSSH_for_Windows_7.7p1, LibreSSL 2.6.5 (05.04.2018)

Windows 10 / Windows 11 / Windows 2022

OpenSSH_for_Windows_8.1p1, LibreSSL 3.0.2 (18.12.2019)

Current version

OpenSSH_for_Windows_8.6p1, LibreSSL 3.3.3 (26.05.2021)
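To check which version a given machine actually has installed, the OpenSSH client reports its version string on both Windows and Linux (note that the banner goes to STDERR):

```shell
# Print the OpenSSH and SSL library version of the installed client
ssh -V 2>&1
```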

You can see that besides Windows 2019, all other Windows versions have a newer OpenSSH and SSL version.

And there is a more up to date version provided by Microsoft in their PowerShell/Win32-OpenSSH project.

The download page has all the information and details:


By the way, Microsoft's OpenSSH implementation is not based on OpenSSL.

They are using a project which was forked a while ago -->

Download of a more current version

The download comes with an install PowerShell script which can create the OpenSSH server service.
But it only works if no OpenSSH is installed yet.

The version shipping with more current Windows versions is perfectly OK to use, and on a level most other Linux distributions are using.

You can see below that CentOS 7 ships an even older version than Windows 2019, with a quite old OpenSSL version.

On Linux, switching to a later OpenSSL version isn't that simple. The distributions update their OpenSSL major releases only with major releases of their OS.

So CentOS Stream 9 and RHEL 9 are the first Linux distributions to make the switch to OpenSSL 3.0.

And even though Linux versions like CentOS 7 are still supported and maintained, you cannot expect the latest packages for important security components like OpenSSL and OpenSSH.

Those older versions still receive security patches, but they don't provide all the features you might want, like more modern key types etc.

-- Daniel

Linux version list OpenSSH

CentOS 7

OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017

CentOS Stream 8

OpenSSH_8.0p1, OpenSSL 1.1.1k  FIPS 25 Mar 2021

SUSE Leap 15.2

OpenSSH_8.1p1, OpenSSL 1.1.1d  10 Sep 2019

SUSE Leap 15.3

OpenSSH_8.4p1, OpenSSL 1.1.1d  10 Sep 2019

CentOS Stream 9

OpenSSH_8.7p1, OpenSSL 3.0.0 7 sep 2021


Official Microsoft documentation

Official Microsoft project

Project documentation


