Symas OpenLDAP Knowledge Base

Delta-Syncrepl MMR Configuration Example

This article walks through a complete OpenLDAP Multi-Master Replication (MMR) installation using delta-syncrepl. It is a prototype for a configuration suitable for production use: a two-way MMR cluster is often an adequate high-availability LDAP service. These servers are designed to be used for login authentication by other hosts, so you should be able to log in over SSH using accounts held on these servers.

We use TLS for the replication connections. We use the Password Policy overlay and the Monitor overlay, both generally used on production servers.

Target

Here is what we want to have at the end:

  • 2 servers
  • Multi-Provider replication
  • Delta-Syncrepl
  • Replication done through a secure channel (TLS)
  • Monitor and PasswordPolicy overlays installed and configured
  • A minimum level of protection using ACLs
  • LDAP as the only protocol (no LDAPS; startTLS will be used instead) TODO REVIEW
  • MDB as the database backend for the LDAP data and accessLog
  • Logs being written in syslog
  • LogLevel set to stats and sync
  • ‘ldapRoot’ user for running the server

Step-by-step guide

Here are the basic steps we will follow

  1. Install the server software
  2. Create your certificates
  3. Configure the slapd.conf file
  4. Create the database directories
  5. Add the required entries
  6. Platform(OS) Side Considerations
  7. Start the servers
  8. Resulting directory layout
  9. Test the servers

Install OpenLDAP

You can install OpenLDAP by going to Symas’s Software Repository web site and following the installation instructions for your choice of OpenLDAP release and Linux distribution (and version). The site contains advice on upgrading and lists the available packages. Generally, only “symas-openldap-server” and “symas-openldap-client” are required. The client package isn’t mandatory, but we recommend installing it anyway. TODO TRUTH-CHECK
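
As an illustration only (the exact repository setup for your distribution is documented on that site), the installation typically boils down to something like:

sudo yum install symas-openldap-server symas-openldap-client      # RPM-based distributions
sudo apt-get install symas-openldap-server symas-openldap-client  # Debian/Ubuntu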

Full config file

For the record, the full slapd.conf file used on this page can be downloaded as install.conf (see Attachments below).

Configuring the server

To configure the server we create one file, /opt/symas/etc/openldap/slapd.conf. A default file is provided as a suggested starting point, slapd.conf.default. We copy it to create our working copy.

# sudo cp /opt/symas/etc/openldap/slapd.conf.default /opt/symas/etc/openldap/slapd.conf

We will walk through the slapd.conf file, block by block.

slapd.conf order

It’s critical to respect the order of the parameters in this file. Moving blocks around, or even moving single lines, is likely to make the server fail to start.

Server ID

The first block contains the identification of the server. Since we may have many servers in a replicated architecture, we have to identify each of them. Here we have 2 servers, so we assign a different number to each: 1 and 2.

#-----------------------------------------------------------------------
# Main configuration
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
#-----------------------------------------------------------------------
# The other server will have a serverid set to 2
serverid        1

 

Schemas

We use several schemas, so we need to select which ones to load from the list found in the default configuration file. Here is the selected configuration:

Schemas

#-----------------------------------------------------------------------
# SCHEMA INCLUDES
# Use only those you need and leave the rest commented out.
include         /opt/symas/etc/openldap/schema/core.schema
include         /opt/symas/etc/openldap/schema/ppolicy.schema
include         /opt/symas/etc/openldap/schema/cosine.schema
include         /opt/symas/etc/openldap/schema/inetorgperson.schema
include         /opt/symas/etc/openldap/schema/krb5-kdc.schema
include         /opt/symas/etc/openldap/schema/rfc2307bis.schema

The core, cosine, and inetorgperson schemas are the base schema files generally used. The ppolicy schema contains the definitions needed to configure the Password Policy overlay. We add the rfc2307bis schema to manage unix users.

PID declaration

We define the place on disk where the process ID (PID) file and the file containing the startup arguments, if any, will be stored.

#-----------------------------------------------------------------------
# PID / STARTUP ARGS
# Files in which to store the process id and startup arguments.
# These files are needed by the init scripts, so only change
# these if you are prepared to edit those scripts as well.
pidfile                 /var/symas/run/slapd.pid
argsfile                /var/symas/run/slapd.args

TLS configuration

This part is really important. Be sure it’s correct, i.e., check that the files are present, in the right place, and with the correct permissions; otherwise it will not be possible to connect to the servers using TLS, and replication will not work.

#-----------------------------------------------------------------------
# TLS Setup Section
#
TLSCACertificateFile            /opt/symas/ssl/cacert.pem
TLSCertificateFile              /opt/symas/etc/openldap/slapdcert.pem
TLSCertificateKeyFile           /opt/symas/etc/openldap/slapdkey.pem
TLSCipherSuite HIGH:MEDIUM
TLSVerifyClient try
TLSProtocolMin 3.1

# This is the user that does the replication.

authz-regexp "email=XXXXX,cn=([^,]*),YYYYYY" "cn=replicator,dc=my-domain,dc=com"


Besides the definition of the CA certificate file, and the server certificate and key, we have three important parameters:

-   TLSVerifyClient try: this is used for replication; it tells the server to check the incoming client certificate. If the certificate is incorrect, the session will be closed.
-   TLSProtocolMin: defines the minimal TLS version we support.
-   authz-regexp: it associates the name provided by the SASL bind with an existing entry in the DIT. This is again used for replication, as replication uses the SASL EXTERNAL mechanism for the Bind operation. Here, we need to map the certificate's CN.
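
A quick way to verify the files before starting the server (paths as configured above; the `ldap` owner below is an assumption, adjust to whatever user slapd runs as):

ls -l /opt/symas/ssl/cacert.pem /opt/symas/etc/openldap/slapdcert.pem /opt/symas/etc/openldap/slapdkey.pem
sudo chown ldap:ldap /opt/symas/etc/openldap/slapdkey.pem
sudo chmod 600 /opt/symas/etc/openldap/slapdkey.pem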

     

Replicator CN

The consumer's syncrepl configuration does not define a user DN for replication. That means the provider has to find a way to authenticate the incoming requests from the consumers. It uses the certificate's `CommonName` for that purpose, but this `CommonName` is usually not a full DN. This is the reason we have the `authz-regexp` line in the TLS setup part: we convert the `CommonName` to a DN that will be the replication user.

One more thing (à la Steve Jobs) : the associated entry (ie `cn=replicator,dc=my-domain,dc=com` in this example) must exist beforehand.

Also note that the certificate has been created with `cn=elecharny` as its common name. That is not necessarily the best idea, as there is another `cn=elecharny` elsewhere... A better idea is to set the `CommonName` to the consumer server's name.

 

That means we will have to create the `cn=replicator,dc=my-domain,dc=com` entry later on, on each server.
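
Creating the certificates themselves (step 2 of the guide) is outside the scope of this page, but as a rough sketch, assuming an already existing CA (the `cacert.pem` referenced above, with a hypothetical `cakey.pem` key), a server certificate whose `CommonName` is the server's FQDN could be produced along these lines:

# Generate a key and a CSR whose CN is the server's FQDN
openssl req -new -newkey rsa:2048 -nodes -keyout slapdkey.pem -out slapd.csr -subj "/CN=brie.rb.symas.net"
# Sign the CSR with the CA (illustrative; your CA workflow may differ)
openssl x509 -req -in slapd.csr -CA cacert.pem -CAkey cakey.pem -CAcreateserial -out slapdcert.pem -days 365

The resulting slapdcert.pem and slapdkey.pem are then copied to /opt/symas/etc/openldap/ on the server.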

### Modules

The list of loaded modules is given in the next block:

`Modules`

#-----------------------------------------------------------------------
# OpenLDAP supports threaded slapadd. This is only useful if running
# slapadd on a multi-cpu box. Generally, assign 1 thread per
# cpu, so if it is a 4 cpu box, use tool-threads 4. This
# specifically affects the creation of index databases, so if
# your database has fewer indices than CPUs, set it to the
# number of indices.
#tool-threads 2

#-----------------------------------------------------------------------
# MODULE PATH
# Choose the directory for loadable modules.
modulepath /opt/symas/lib64/openldap

# Uncomment the moduleloads as needed to enable additional
# functionality when configured. NOTE: We package many
# more module options than those found below.
moduleload back_mdb.la
moduleload back_monitor.la
moduleload ppolicy.la
moduleload syncprov.la
moduleload accesslog.la
moduleload pw-sha2


 

We load `mdb` as the database backend, `monitor` for statistics, `ppolicy` for the Password Policy, `syncprov` and `accesslog` for replication, and `pw-sha2` to be able to use stronger hash mechanisms (`SSHA-256`, for instance).

### ACLs

We don't do much beyond the default protection. More will come later...

`ACLs`

#-----------------------------------------------------------------------
# Sample access control policy:
#-----------------------------------------------------------------------

# Grant read access to the RootDSE to everyone
access to dn="" by * read

# The replicator user has write access to the whole DIT.
# Users authenticated through a secured connection can read the DIT.
access to dn.subtree="dc=my-domain,dc=com"
        by dn.exact="cn=replicator,dc=my-domain,dc=com" write
        by * tls_ssf=128 read

# Allow self write access.
# Allow authenticated users read access.
# Allow anonymous users to authenticate.
access to *
        by self write
        by users read
        by anonymous auth


### Log

Straightforward: we set the log level to `stats` and `sync`, to have information about the basic operations and the replication operations. At the moment we are not interested in more; that would be too verbose. We store the logs in a dedicated file (through syslog).

 

`logs`

#-----------------------------------------------------------------------
# LOGGING
loglevel stats sync
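
slapd sends its messages to syslog using the `LOCAL4` facility by default, so storing them in a dedicated file is a matter of syslog configuration. For example, with rsyslog (the file names below are just examples):

# /etc/rsyslog.d/slapd.conf
local4.*        /var/log/slapd.log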


### Config database

This defines the `cn=config` backend. (Note that I'm not sure it's mandatory when using slapd.conf: to be double-checked...)

`config backend`

# Config database
database config
rootdn "cn=admin,cn=config"
rootpw {SSHA256}nB3qRLx5bz2X4FJNUvF2/9toLiVufv4vScQG2t+85sIES6WywCuVFw==


 

We simply define a `DN` that has access to this backend, with a hashed password.

Hashing the password is done using this command:

`Password hash`

/opt/symas/bin/slappasswd -h {SSHA256} -o module-load=pw-sha2 -s
{SSHA256}gWPaanFlpoKboFcqYGXJcdo8zMCqGPJ7KbEy67tLjlKdLfZcztq9xQ==


 

Note that with the current OpenLDAP version it's not possible to use a stronger mechanism, like `SHA512`; the OpenLDAP 2.4.43 build will solve this issue.

In the current configuration, we use one single password, which is hashed using the salted mechanism `SSHA256`.

### LDAP Database

This is where we define the database that will contain the application data. It is defined in several segments, which we will describe one by one. There are 7 segments:

1.  global definition
2.  indexes
3.  backend configuration
4.  syncrepl definitions
5.  password policy overlay definition
6.  syncprov overlay
7.  accesslog overlay

 

#### Symas global definition

This is the very base definition, where we give a name to the database and associate a root user:

`Symas base`

#-----------------------------------------------------------------------
# LMDB database definitions
database mdb
suffix "dc=my-domain,dc=com"
rootdn "dc=my-domain,dc=com"
rootpw {SSHA256}nB3qRLx5bz2X4FJNUvF2/9toLiVufv4vScQG2t+85sIES6WywCuVFw==

limits dn.exact="cn=replicator,dc=my-domain,dc=com"
        time.soft=unlimited time.hard=unlimited
        size.soft=unlimited size.hard=unlimited


The suffix will exist, but not be visible on the server until we create the context entry, which will be done later on.

The `rootdn` is the special user that will have full access to the Symas database. It `MUST` be defined `AFTER` the suffix.

We set the limits to unlimited for the replicator user.

#### Symas indexes

At this point, we don't know much about the needed indexes. We just add a few to the default list: since we are going to use the server as a central repository for unix users, we add the `uid`, `mail`, `memberUID`, and `uniqueMember` indexes.

`Indexes`

# Indices to maintain
index default eq
index objectClass
index cn eq,sub
index memberUID
index givenName eq,sub
index uniqueMember
index mail eq,sub
index entryUUID eq
index entryCSN eq
index uid eq,sub


Nothing special, most of the indexes are defined using the `eq` value, some also use the `sub` value.

This list might be tuned later on.

 

#### Backend configuration

Here, we define the maximum size that the backend can reach, and the directory that will contain the data:

`Backend configuration`

directory /var/symas/openldap-data/symas
maxsize 1073741824


We set the max size to 1 GB (1073741824 bytes) for now.

The database directory `must` exist before the server is launched, otherwise a failure will ensue. The following command will create it:

# sudo mkdir /var/symas/openldap-data/symas

 

#### Syncrepl definitions

This is also a critical part. Any error in this segment and the replication is likely not to work...

Let's first see what we have set:

`syncrepl`

#-----------------------------------------------------------------------
# SYNCREPL [LDAP1, brie]
syncrepl rid=1
        provider=ldap://brie.rb.symas.net
        bindmethod=sasl
        saslmech=external
        starttls=yes
        tls_cacert=/opt/symas/ssl/cacert.pem
        tls_cert=/opt/symas/etc/openldap/slapdcert.pem
        tls_key=/opt/symas/etc/openldap/slapdkey.pem
        tls_reqcert=demand
        type=refreshAndPersist
        searchbase="dc=my-domain,dc=com"
        filter="(objectclass=*)"
        scope=sub
        schemachecking=on
        retry="5 10 60 +"
        logbase="cn=accesslog"
        logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
        syncdata=accesslog
        sizeLimit=unlimited
        timelimit=unlimited

# SYNCREPL [LDAP2, cantal]
syncrepl rid=2
        provider=ldap://cantal.rb.symas.net
        bindmethod=sasl
        saslmech=external
        starttls=yes
        tls_cacert=/opt/symas/ssl/cacert.pem
        tls_cert=/opt/symas/etc/openldap/slapdcert.pem
        tls_key=/opt/symas/etc/openldap/slapdkey.pem
        tls_reqcert=demand
        type=refreshAndPersist
        searchbase="dc=my-domain,dc=com"
        filter="(objectclass=*)"
        scope=sub
        schemachecking=on
        retry="5 10 60 +"
        logbase="cn=accesslog"
        logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
        syncdata=accesslog
        sizeLimit=unlimited
        timelimit=unlimited

# ENABLE MIRROR MODE
mirrormode TRUE


 

So we have 2 `syncrepl` directives defined, including one that points at the current server. This is just to ease the creation of the second server: we will simply copy the full `slapd.conf` file, changing only the `ServerID` parameter, to get a working replica. This is specifically useful if one is going to use something like `Puppet` to provision new servers in a Multi-Provider replication topology.

Each `syncrepl` directive is for one of the two servers (`brie.rb.symas.net`, whose IP is `10.2.0.47`, and `cantal.rb.symas.net`, whose IP is `10.2.0.48`).

FQDN

When using TLS, it's critical to use the `FQDN` for the `provider` parameter in the `syncrepl` definition. Using the IP address will not work, as the name has to match the common name used in the certificate.

 

As we have two `syncrepl` directives, we have to distinguish one from the other. We use the `rid` parameter for that: each `syncrepl` directive has a different `rid`.

The `TLS` part is where we define the way each server is going to talk to the remote peer. Here, we use the `SASL` `EXTERNAL` mechanism, using the certificate to authenticate the remote peer.

We set the `sizelimit` and `timelimit` to unlimited so that the number of entries that can be exchanged aren't limited by the default 500 limit.

The `mirrorMode` flag is necessary for multi-provider replication, and defining the `logbase` and `logfilter` parameters activates `Delta-Syncrepl` replication (updates will be read from the `cn=accesslog` database).

 

Otherwise, we blindly replicate the whole content of the `symas` database (this may be tuned later).

 

#### Password policy overlay

Nothing fancy at the moment. We just point to the entry that will contain the `PPolicy` rules (see later for the injection of this entry).

 

`PPolicy`

#-----------------------------------------------------------------------
# Load an instance of the ppolicy overlay for the current database:
overlay ppolicy

# Specify the default password policy subentry to use when none is
# specified in an account's entry
ppolicy_default "cn=default,ou=Policies,dc=my-domain,dc=com"


 

#### syncprov overlay

This overlay manages the provider side of the replication.

`syncprov`

#-----------------------------------------------------------------------
# Symas database overlays (syncprov and accesslog)
# OVERLAY [SYNCPROV]
overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 10000
syncprov-reloadhint TRUE


We just tell the overlay to update the `contextCSN` attribute every 100 updates or every 10 minutes (this attribute is stored at the top level of the database, as an operational attribute, so in the `dc=my-domain,dc=com` entry).

The next parameter, `syncprov-sessionlog`, is the maximum number of updates we keep in the session log. Here, we set it to 10,000.

Last but not least, we make the server accept requests from a client to do a full refresh on demand, by setting the `syncprov-reloadhint` flag.
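
Once the servers are running, an easy way to look at the `contextCSN` (and to compare the two providers) is a base-scope search on the suffix. A sketch, assuming the client tools live in `/opt/symas/bin` and binding as the rootdn:

/opt/symas/bin/ldapsearch -ZZ -H ldap://brie.rb.symas.net -D "dc=my-domain,dc=com" -W -s base -b "dc=my-domain,dc=com" contextCSN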

 

#### accesslog overlay

The last thing we must define in the application database is the `accesslog` overlay. It stores the updates so that the replicas can catch up when they reconnect.

`accesslog`

#-----------------------------------------------------------------------
# OVERLAY [ACCESSLOG]
overlay accesslog
logdb cn=accesslog
logops writes
logsuccess TRUE
logpurge 24:00 01+00:00

limits dn.exact="cn=replicator,dc=my-domain,dc=com"
        time.soft=unlimited time.hard=unlimited
        size.soft=unlimited size.hard=unlimited


We only store the `write` updates; we don't need to keep track of search or bind operations. We also keep only the successful writes. In any case, we don't keep updates older than one day, and we purge old updates once a day.

### AccessLog database

This database stores updates done on the server. It's used by the replication mechanism.

#### Base definition

Nothing special in this definition.

`AccessLog database`

# AccessLog database
database mdb
directory /var/symas/openldap-data/accesslog
maxsize 5120000
suffix "cn=accesslog"
index default eq
index objectClass,entryCSN,entryUUID,reqEnd,reqResult,reqStart

limits dn.exact="cn=replicator,dc=my-domain,dc=com"
        time.soft=unlimited time.hard=unlimited
        size.soft=unlimited size.hard=unlimited


We have set a small size for this database, as we will keep only 10,000 updates max, so 5 MB should do it.

The defined indexes are necessary to speed up searches in this database.

 

We also set the limits to unlimited for the replicator user.

#### Syncprov overlay

We also have to add a `syncprov` overlay to this database, with some specific configuration:

`AccessLog syncprov`

#-----------------------------------------------------------------------
# AccessLog overlay (syncprov)
overlay syncprov
syncprov-nopresent TRUE
syncprov-reloadhint TRUE
syncprov-checkpoint 100 10
syncprov-sessionlog 10000


Here, we have an additional flag, `syncprov-nopresent`, which tells the `accesslog` database to skip the `Syncrepl` present phase (which is not needed on a log database used for delta-syncrepl).

### Monitor database

There is nothing to configure here, the declaration is enough to set the monitoring.

`Monitor`

# Monitor database
database monitor


It is most likely necessary to add some `ACL`s to protect access to this database (see later).
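
As a sketch of what such protection could look like (the admin DN below is hypothetical and is not created anywhere on this page), the ACL would go right after the `database monitor` line:

database monitor
# Example only: restrict cn=Monitor to a dedicated admin entry
access to dn.subtree="cn=Monitor"
        by dn.exact="cn=admin,dc=my-domain,dc=com" read
        by * none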

 

### Checking the configuration

Before going any further, it's necessary to check that the configuration is correct. This is done using the `slaptest` command:

`slaptest`

/opt/symas/bin/slaptest -f /opt/symas/etc/openldap/slapd.conf -u


The `-u` flag is necessary at this point because we haven't yet created the database.

If everything is fine, we can go on, otherwise we need to fix what's wrong in the `slapd.conf` file...

Creating the database directories
---------------------------------

At this point, we need to create the directories that will contain the databases (`symas` and `accesslog`), otherwise the server won't start. It's as simple as:

`Database Directory Creation`

sudo mkdir /var/symas/openldap-data/symas

sudo mkdir /var/symas/openldap-data/accesslog


We are all set! We can also restrict their permissions so that no other user can read or write them, although the database files themselves will already be protected against access.
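
If you do want to lock the directories down, assuming slapd runs as the `ldap` user (as in the start command later on):

sudo chown -R ldap:ldap /var/symas/openldap-data/symas /var/symas/openldap-data/accesslog
sudo chmod 700 /var/symas/openldap-data/symas /var/symas/openldap-data/accesslog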

Adding the required entries
---------------------------

Now that we have configured the servers, it's necessary to add a few entries:

-   the `symas` database context entry
-   the `replicator` entry
-   the `password policy` entries

Those entries can be added after the server has started, but it is better to do it right away, using the `slapadd` command-line tool.

Here is the `LDIF` file that we will use (name it `init.ldif`):

`Entry addition`

dn: dc=my-domain,dc=com
objectClass: domain
objectClass: top
dc: my-domain

dn: cn=replicator,dc=my-domain,dc=com
objectClass: person
objectClass: top
cn: replicator
sn: Replicator user

dn: ou=policies,dc=my-domain,dc=com
objectClass: organizationalUnit
ou: policies
description: The Password Policies branch

dn: cn=default,ou=policies,dc=my-domain,dc=com
objectClass: pwdPolicy
objectClass: person
cn: default
pwdAttribute: userPassword
sn: password policy
pwdAllowUserChange: TRUE
pwdCheckQuality: 2
pwdFailureCountInterval: 30
pwdLockout: TRUE
pwdLockoutDuration: 300
pwdMaxFailure: 6
pwdMinLength: 8
pwdMustChange: TRUE
pwdSafeModify: FALSE


LDIF

Copying this text into an init.ldif file might well be problematic, with some hidden characters being added (new lines or hidden spaces), resulting in a failure to inject it into the server. Double-check the content of your file!
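
One simple way to spot such characters (on GNU systems) is to display the non-printing characters and look for anything unexpected at the ends of lines:

cat -A init.ldif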

 

 

As we can see, we declare the `symas` database context entry, the `replicator` user and the `password policy` entries.

To inject those entries, we use this command:

`LDIF injection`

sudo /opt/symas/bin/slapadd -b "dc=my-domain,dc=com" -f /opt/symas/etc/openldap/slapd.conf -w -v -l init.ldif
added: "dc=my-domain,dc=com" (00000001)
added: "cn=replicator,dc=my-domain,dc=com" (00000002)
added: "ou=policies,dc=my-domain,dc=com" (00000003)
added: "cn=default,ou=policies,dc=my-domain,dc=com" (00000004)
_#################### 100.00% eta none elapsed none fast!
modified: "(null)" (00000001)
Closing DB...


Your OpenLDAP server is now ready to be started.

Starting the servers
--------------------

Straightforward: launch the service.

`Starting the server`

sudo su ldap /etc/init.d/solserver start


You should get back an `[OK]`.

You can also check that the server is running from the `syslog` file content. You should see lines like these in it:

`ldap start`

Nov 30 01:31:27 cantal slapd[26021]: @(#) $OpenLDAP: slapd 2.4.42 (Oct 28 2015 17:21:35) $#012#011matth@dhimay.rb.symas.net:/home/build/sold-2.4.42.2/build/openldap.x86_64/servers/slapd
Nov 30 01:31:27 cantal slapd[26022]: slapd starting
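
Another quick sanity check is to read the rootDSE anonymously, which the first ACL above allows (again assuming the client tools are in `/opt/symas/bin`):

/opt/symas/bin/ldapsearch -x -H ldap://localhost -s base -b "" +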

The resulting directories layout

Here is the resulting layout once the server has been started (see directories-full.png in the attachments):

Testing the server

Now that both servers are configured and started, you can test that they are up and running, using LDAP Studio for instance. Create a new connection to each server, using startTLS; you should be able to see the dc=my-domain,dc=com data, and if you create an entry under this suffix on one of the servers, it should immediately be replicated to the other.
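
The same test can be run from the command line. A sketch, binding as the rootdn and using a throw-away `ou=people` entry (any test entry will do): add it on one server, then read it back from the other.

/opt/symas/bin/ldapadd -ZZ -H ldap://brie.rb.symas.net -D "dc=my-domain,dc=com" -W <<EOF
dn: ou=people,dc=my-domain,dc=com
objectClass: organizationalUnit
ou: people
EOF

/opt/symas/bin/ldapsearch -ZZ -H ldap://cantal.rb.symas.net -D "dc=my-domain,dc=com" -W -b "dc=my-domain,dc=com" "(ou=people)"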

 

Attachments:

directories-full.graphml (application/x-upload-data)
directories-full.png (image/png)
directories-install.graphml (application/x-upload-data)
directories-install.png (image/png)
install.conf (application/octet-stream)