Symas OpenLDAP Knowledge Base

Provider/Consumer Syncrepl Configuration Example

This page describes a complete OpenLDAP installation. The example uses 2 servers, replicated using syncrepl in a Provider-Consumer topology. We use TLS for the communication between the servers during replication, and we add the Password Policy and Monitor overlays.

These servers are also intended to be used as the main source of login information for other machines: we should be able to log in and use SSH, all based on these two LDAP servers.

The idea is to expose what could be a standard customer setup.

Here is the architecture schema for this configuration:

             m1.ldap.example.com
   +------------+
   | Master     |
   | (Producer) |
   +------------+
         |
         V   r1.ldap.example.com
   +------------+
   | Replica    |
   | (Consumer) |
   +------------+

Master servers (Producers of replication streams) perform all updates to their copies of the LDAP database and send them to replica servers (Consumers of replication streams) to bring them into sync with the latest changes. In our simple example, there is only one Master server. When such a simple cluster is enhanced, the next step is usually to add an additional Master server to provide higher availability. At that point, the distinction between Master/Producer and Replica becomes ambiguous.

Often, administrators think in terms of servers that are configured to be Masters, capable of producing replication streams, and servers that are configured to be Consumers only (replicas). However, in an optimal configuration, only one of the Producer-capable servers should be functioning in true “Master mode” (taking updates and replicating them out to the other servers in the cluster). This is all done with load balancer settings.

Even though we are setting up a Single-Master Replication (SMR) cluster, we will do some harmless configuration in preparation for the future high-availability Multi-Master Replication cluster.  

Target

Here is what we want to have at the end:

  • 2 servers
  • Provider-Consumer replication
  • Syncrepl replication in refreshAndPersist mode
  • Replication done through a secure channel (TLS)
  • Monitor and PasswordPolicy overlays installed and configured
  • A minimum level of protection using ACLs
  • LDAP as the only protocol (no LDAPS; startTLS will be used instead)
  • MDB as a default backend for the data
  • Logs being written in syslog
  • LogLevel set to stats and syncrepl
  • ‘ldap’ user for running the server

Step-by-step guide

Here are the basic steps we will follow:

  1. Install the server
  2. Create certificates
  3. Configure the slapd.conf file
  4. Create the database directories
  5. Add the required entries
  6. OS Side Considerations
  7. Start the servers
  8. Resulting directory layout
  9. Test the servers
  10. Optionally convert from static configuration (slapd.conf) to dynamic (cn=config or slapd.d).

Full config file

For the record, the full slapd.conf files used in this page can be downloaded from Provider slapd.conf and Consumer slapd.conf.

Configuring the server

First we copy the default configuration file to a working copy:

# cd /opt/example/etc/openldap
# sudo cp slapd.conf.default slapd.conf

Next, let’s review the content of the newly created file, block by block.

slapd.conf order

It’s critical to respect the order of the parameters in this file. Moving a block around, or even a single line, is likely to make the server fail to start.

Schemas

We use many schemas. We need to list them in the configuration file.

#-----------------------------------------------------------------------
# SCHEMA INCLUDES
# Use only those you need and leave the rest commented out.
include         /opt/example/etc/openldap/schema/core.schema
include         /opt/example/etc/openldap/schema/ppolicy.schema
include         /opt/example/etc/openldap/schema/cosine.schema
include         /opt/example/etc/openldap/schema/inetorgperson.schema
include         /opt/example/etc/openldap/schema/rfc2307bis.schema

We need the core, cosine, and inetorgperson schemas as a base list. We also add the ppolicy schema to be able to configure the Password Policy overlay, and the rfc2307bis schema to be able to manage Linux/BSD/UNIX users.

PID declaration

Nothing special here. We just define the place on disk where the PID file will be stored, and the file containing the startup arguments, if any.

#-----------------------------------------------------------------------
# PID / STARTUP ARGS
# Files in which to store the process id and startup arguments.
# These files are needed by the init scripts, so only change
# these if you are prepared to edit those scripts as well.
pidfile                 /var/example/run/slapd.pid
argsfile                /var/example/run/slapd.args

TLS configuration

This part is really important. Be sure it’s correct, i.e., check that the files are present, at the right place, and with the correct rights; otherwise it will not be possible to connect to the servers using TLS, and replication will not work.

#-----------------------------------------------------------------------
# TLS Setup Section
#
TLSCACertificateFile            /opt/example/ssl/cacert.pem
TLSCertificateFile              /opt/example/etc/openldap/slapdcert.pem
TLSCertificateKeyFile           /opt/example/etc/openldap/slapdkey.pem
TLSCipherSuite HIGH:MEDIUM
TLSVerifyClient try
TLSProtocolMin 3.1
security ssf=128 tls=128

# This is the user that does the replication
authz-regexp "email=XXXXX,cn=([^,]*),YYYYYY" "cn=replicator,dc=example,dc=com"

Beside the definition of the CA certificate file, and the server certificate and key, we have four important parameters:

  • TLSVerifyClient try : this is used for replication; it tells the server to ask for and check the incoming client certificate. If a certificate is presented and is invalid, the session will be closed.

  • TLSProtocolMin : defines the minimal TLS version we support (3.1 corresponds to TLS 1.0, so SSLv3 and below are refused).

  • authz-regexp : associates the name provided by the SASL bind with an existing entry in the DIT. This is again used for replication, as replication uses the SASL EXTERNAL mechanism for the Bind operation. Here, the mapping is based on the certificate’s CN.

  • security : defines the strength factor for SASL and the transport layer. With a value of 128, we require that the encryption key is at least 128 bits long.

Replicator CN

THIS IS IMPORTANT !!!

The consumer’s syncrepl configuration will not define any user DN for replication. That means the provider has to find a way to authenticate the incoming requests from the consumers. It uses the certificate’s CommonName for that purpose, but this CommonName is usually not a full DN. This is the reason we have the authz-regexp line in the TLS setup part: we convert the CommonName to a DN that will be the replication user.

Also note that the certificate has been created with cn=elecharny as a common name. This is not necessarily the best idea, as there is another cn=elecharny elsewhere… A better idea is to set the CommonName to the consumer server’s name.

We don’t need a replicator entry on any server, as we use TLS: the certificate is used as a means to manage access to the data.
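
The rewrite performed by authz-regexp can be pictured with a plain sed substitution. This is an illustration only: the SASL identity and the pattern below are simplified, hypothetical stand-ins for the redacted expression in the configuration above.

```shell
# Illustration only: map a certificate-derived SASL identity onto the
# fixed replication DN, the way authz-regexp does server-side.
# The identity below is a hypothetical example, not the real one.
sasl_id='cn=r1.ldap.example.com,ou=servers,dc=example,dc=com'
replicator_dn=$(printf '%s\n' "$sasl_id" \
    | sed -E 's/^cn=[^,]*,.*$/cn=replicator,dc=example,dc=com/')
echo "$replicator_dn"
```

Any identity whose CN matches the pattern is rewritten to the single replicator DN; identities that don’t match are left untouched.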

Modules

The list of loaded modules is given in the next block. It differs between the Provider and the Consumer:

Modules for the Provider (Master)

#-----------------------------------------------------------------------
# Example OpenLDAP supports threaded slapadd.  This is only useful if running
# slapadd on a multi-cpu box.  Generally, assign 1 thread per
# cpu, so if it is a 4 cpu box, use tool-threads 4.  This
# specifically affects the creation of index databases, so if
# your database has fewer indices than CPUs, set it to the
# number of indices.
#tool-threads 2


#-----------------------------------------------------------------------
# MODULE PATH
# Choose the directory for loadable modules.
modulepath      /opt/example/lib64/openldap

# Uncomment the moduleloads as needed to enable additional
# functionality when configured. NOTE: We package many
# more module options than those found below.
moduleload      back_mdb.la
moduleload      back_monitor.la
moduleload      ppolicy.la
moduleload      pw-sha2
## Provider only...
moduleload      syncprov.la

Modules for the Consumer (Replica)

#-----------------------------------------------------------------------
# Example OpenLDAP supports threaded slapadd.  This is only useful if running
# slapadd on a multi-cpu box.  Generally, assign 1 thread per
# cpu, so if it is a 4 cpu box, use tool-threads 4.  This
# specifically affects the creation of index databases, so if
# your database has fewer indices than CPUs, set it to the
# number of indices.
#tool-threads 2


#-----------------------------------------------------------------------
# MODULE PATH
# Choose the directory for loadable modules.
modulepath      /opt/example/lib64/openldap

# Uncomment the moduleloads as needed to enable additional
# functionality when configured. NOTE: We package many
# more module options than those found below.
moduleload      back_mdb.la
moduleload      back_monitor.la
moduleload      ppolicy.la
moduleload      pw-sha2
## Provider only...
# moduleload      syncprov.la

back_mdb is the database backend for LMDB, OpenLDAP’s memory-mapped database engine. back_monitor provides a database of statistics on what’s going on in the server. ppolicy implements the Password Policy function. syncprov implements the Provider-side replication function. pw-sha2 enables the use of stronger hash mechanisms (SSHA256, for instance; the default configuration only supports SHA-1, which is considered weak today).

ACLs

We will define a few global ACLs. Typically, we want to give access to the RootDSE and to the cn=Subschema content (the latter only over a TLS-established connection, and read-only). These operational data contain no security-sensitive information and let users find out about capabilities that applications may need to verify for proper operation.

#-----------------------------------------------------------------------
# Sample access control policy:
#-----------------------------------------------------------------------
#       Allow read access of root DSE
access to dn="" 
    by * read

# Access to the subschema
access to dn.subtree="cn=subschema"
    by * tls_ssf=128 read

Log

Set the log level to stats and sync, to log information about basic and replication operations. Using more than that level of logging makes the logs much larger. Logging is costly and can impact performance, so the level of detail logged should be minimized, but these two levels are the minimum that allows diagnosis of performance and other operational problems. It is the best balance for production.

#-----------------------------------------------------------------------
# LOGGING
loglevel     stats sync

We recommend use of the names as shown in the example and not any of the numeric forms. The next person responsible for the cluster may have no idea what the numbers mean.

Config database

This defines the dynamic configuration (cn=config) backend. Even though we’re not converting our configuration permanently to dynamic, OpenLDAP (slapd) ALWAYS converts our static configuration into that more efficient internal format. You probably realized we didn’t define a load module for cn=config above. It is hard-coded internally.

Here we establish an administrative user and password for cn=config. This will let us change logging levels or other configuration settings during development and testing. Such changes are lost when we restart but it is nice to have the option to save time bringing the server down, making a trivial change, and bringing it back up.

#######################################################################
# config database
#######################################################################
database     config
rootdn       "cn=admin,cn=config"
rootpw       {SSHA256}nB3qRLx5bz2X4FJNUvF2/9toLiVufv4vScQG2t+85sIES6WywCuVFw==

We simply define a DN that has access to this backend, with a hashed password.

Hash the password with this command :

# /opt/example/bin/slappasswd -h {SSHA256} -o module-load=pw-sha2 -s <the password to hash>
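
For the curious, the {SSHA256} value produced by slappasswd is base64(SHA-256(password + salt) + salt). A minimal sketch of that scheme with openssl, using a fixed salt for reproducibility (slappasswd generates random salt bytes):

```shell
# Sketch of the {SSHA256} scheme: base64( SHA-256(password + salt) + salt ).
# Fixed salt for the demo only; slappasswd uses random salt bytes.
pass='secret'
salt='12345678'
hash=$( { printf '%s%s' "$pass" "$salt" | openssl dgst -sha256 -binary
          printf '%s' "$salt"; } | base64 )
echo "{SSHA256}$hash"
```

To verify a password, a server decodes the value, takes the trailing salt, and recomputes the digest over password + salt.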

We also set some ACL on this database, to allow users to read it :

access to dn.subtree="cn=config"
        by * tls_ssf=128 read

  This lets any client connected with TLS encryption of at least 128 bits read anything in cn=config.
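
Once the server is running, such a temporary change is made over LDAP with ldapmodify, binding as the cn=config rootdn. A sketch, assuming the hostname from this page’s example and a server that is already up:

```shell
# Change the log level at runtime (lost at restart with a static slapd.conf).
# Binds as the cn=config rootdn; you will be prompted for its password.
ldapmodify -ZZ -x -H ldap://m1.ldap.example.com \
    -D "cn=admin,cn=config" -W <<'EOF'
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats sync
EOF
```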

The LDAP Database

This is where we define the database that will contain the application data. It is defined in several segments, which we will describe one by one. We have 6 segments to describe:

  1. global definition
  2. indexes
  3. backend configuration
  4. syncrepl definitions
  5. password policy overlay definition
  6. syncprov overlay

 

Global definition

This is the very base definition where we give a name to the database, and associate a root user :

#######################################################################
# Example LMDB database definitions
#######################################################################
database        mdb
suffix          "dc=example,dc=com"
rootdn          "cn=superadmin,dc=example,dc=com"
rootpw          {SSHA256}nB3qRLx5bz2X4FJNUvF2/9toLiVufv4vScQG2t+85sIES6WywCuVFw==

limits dn.exact="cn=replicator,dc=example,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited

The suffix will exist, but not be visible on the server until we create the context entry, which will be done later on.

The rootdn is the special user that will have full access to the Example database. It MUST be defined AFTER the suffix.

We set the limit to unlimited for the replicator user. This allows the replicator user to handle heavy replication loads without bumping into the limits.

ACLs

We want to protect the database against unwanted access. The replicator user will be allowed to update the database, any authenticated user will be able to read data, an entry’s owner will be able to update it, unauthenticated users will have to authenticate, and all other access is denied. Note that the order of the by clauses matters: slapd stops at the first matching clause, so by self write must appear before the generic read clause. Also note that we have a different configuration on the Provider and on the Consumer, because the Consumer is read-only: on the Consumer, the by self write directive has been removed.

access to dn.subtree="dc=example,dc=com"
    by dn.exact="cn=replicator,dc=example,dc=com" write
    by self write
    by * tls_ssf=128 read
    by anonymous auth
    by * none

and on the Consumer :

access to dn.subtree="dc=example,dc=com"
    by dn.exact="cn=replicator,dc=example,dc=com" write
    by * tls_ssf=128 read
    by anonymous auth
    by * none

It may seem a bit tight; it’s up to the administrator to add the rules that fit the need.

Example indexes

At this point, we don’t know much about the needed indexes. We just add a few to the default list: as we know we are going to use the server as a central repository for Unix users, we add the uid, mail, memberUID, and uniqueMember indexes.

Indexes

# Indices to maintain
index       default    eq
index       objectClass
index       cn eq,sub
index       memberUID
index       givenName eq,sub
index       uniqueMember
index       mail eq,sub
index       entryUUID eq
index       entryCSN eq
index       uid eq,sub

Nothing special: most of the indexes are defined using the eq value, and some also use the sub value.

This list should be tuned later on. When a search uses an unindexed attribute in its filter, an entry is written to the stats-level log. That makes it easy to figure out which additional indexes are needed.
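
The log lines produced for unindexed searches look like `<= mdb_equality_candidates: (mail) not indexed`. A small helper (written for this page, not part of OpenLDAP) can summarize which attributes come up most often:

```shell
# Count how often each attribute is reported as unindexed in slapd logs.
# Reads log lines on stdin; emits "count attribute" pairs, most frequent first.
summarize_unindexed() {
    grep 'not indexed' \
        | sed -E 's/.*\(([^)]+)\) not indexed.*/\1/' \
        | sort | uniq -c | sort -rn
}

# Typical use (the log path depends on your syslog setup):
#   summarize_unindexed < /var/log/syslog
```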

Backend configuration

Here, we define the maximum size that the backend can reach, and the directory that will contain the data:

Backend configuration

directory       /var/example/openldap-data/example
maxsize 1073741824

We set the max size to 1 GiB here (the maxsize value is expressed in bytes).

The database directory must exist before the server is launched, otherwise a failure will ensue. The following command will create it:

# sudo mkdir /var/example/openldap-data/example

Provider specific configuration

The Provider will have some configuration added for the syncprov overlay :

Provider Syncprov Overlay

#-----------------------------------------------------------------------
# Example database overlay (syncprov)
# OVERLAY [SYNCPROV]
overlay             syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 10000

Here, we configure the overlay to write the contextCSN attribute to disk every 100 updates or every 10 minutes. We also keep an in-memory session log of the last 10,000 updates.

There is no syncrepl definition on the Provider.

Consumer specific configuration

The Consumer does not have a syncprov overlay (because it is not a provider), but has a syncrepl directive configured :

Consumer syncrepl definition

#-----------------------------------------------------------------------
# SYNCREPL [LDAP1, brie]
syncrepl
        rid=1
        provider=ldap://brie.rb.example.net:4389
        bindmethod=sasl
        saslmech=external
        starttls=yes
        tls_cacert=/opt/example/ssl/ms-delta/cacert.pem
        tls_cert=/opt/example/ssl/ms-delta/slapdcert.pem
        tls_key=/opt/example/ssl/ms-delta/slapdkey.pem
        tls_reqcert=demand
        type=refreshAndPersist
        searchbase="dc=example,dc=com"
        filter="(objectclass=*)"
        scope=sub
        schemachecking=on
        retry="5 10 60 +"
        sizeLimit=unlimited
        timelimit=unlimited

# Redirect updates to the Master server
updateref    ldap://brie.rb.example.net:4389

FQDN

When using TLS, it’s critical to use the FQDN for the provider parameter in the syncrepl definition. Using the IP address will not work, as the name must match the common name used in the server’s certificate.

 

The TLS part is where we define the way each server is going to talk to the remote peer. Here, we use the SASL EXTERNAL mechanism, using the certificate to authenticate the remote peer.

We set the sizelimit and timelimit to unlimited so that the number of entries that can be exchanged isn’t capped by the default limit of 500. The retry parameter ("5 10 60 +") tells the consumer to retry a lost connection every 5 seconds 10 times, then every 60 seconds indefinitely.

All the updates are redirected to the Provider server (brie) by the updateref directive, because the Consumer is read-only.

Password policy overlay

Nothing fancy at the moment. We just point to the entry that will contain the Password Policy rules (see later for the injection of this entry):

#-----------------------------------------------------------------------
# Load an instance of the ppolicy overlay for the current database:
overlay ppolicy

# Specify the default password policy subentry to use when none is
# specified in an account's entry
ppolicy_default "cn=default,ou=Policies,dc=example,dc=com"

Monitor database

There is nothing to configure here, the declaration is enough to set the monitoring.

Monitor

#######################################################################
# Monitor database
#######################################################################
database        monitor

access to dn.subtree="cn=monitor"
        by * tls_ssf=128 read

It is necessary to add some ACLs to protect access to this database. We have added a minimal one, which allows any client connected with TLS encryption of at least 128 bits to read its content.

Checking the configuration

Before going any further, it’s necessary to check that the configuration is correct. This is done using the slaptest command:


# /opt/example/bin/slaptest -f /opt/example/etc/openldap/slapd.conf -u

The -u flag is necessary at this point because we haven’t yet created the database.

If everything is fine, we can go on, otherwise we need to fix what’s wrong in the slapd.conf file…

Creating the database directories

At this point, we need to create the directory that will contain the database (example), otherwise the server won’t start. It’s as simple as:

Database Directory Creation

# sudo mkdir /var/example/openldap-data/example

We are all set! We can also restrict the directory permissions so that no other user can read or write it, but the database files themselves will already be protected against access.

Adding the required entries

Now that we have configured the servers, it’s necessary to add a few entries :

  • the example database context entry
  • the password policy entries

Those entries could be added after starting the server, but it’s better to do it right away, using the slapadd command-line tool.

Here is the LDIF file that we use (name it init.ldif):

Entry addition

dn: dc=example,dc=com
objectClass: domain
objectClass: top
dc: example

dn: ou=policies,dc=example,dc=com
objectClass: organizationalUnit
ou: policies
description: The Password Policies branch

dn: cn=default,ou=policies,dc=example,dc=com
objectClass: pwdPolicy
objectClass: person
cn: default
pwdAttribute: userPassword
sn: password policy
pwdAllowUserChange: TRUE
pwdCheckQuality: 2
pwdFailureCountInterval: 30
pwdLockout: TRUE
pwdLockoutDuration: 300
pwdMaxFailure: 6
pwdMinLength: 8
pwdMustChange: TRUE
pwdSafeModify: FALSE

LDIF

Copying this text into an init.ldif file might well be problematic, with some hidden characters being added (new lines or hidden spaces), resulting in a failure to inject it into the server. Double-check the content of your file!
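
A quick way to catch such stray characters before feeding the file to slapadd is a small grep-based check (written for this page; adjust to taste):

```shell
# Fails when the given file contains carriage returns or trailing blanks,
# which commonly break LDIF parsing.
check_ldif() {
    ! grep -Eq "$(printf '\r')|[[:blank:]]+$" "$1"
}

# Example: check_ldif init.ldif && echo "looks clean"
```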

As we can see, we declare the example database context entry, and the password policy entries.

To inject those entries, we use this command :

# sudo /opt/example/bin/slapadd -b "dc=example,dc=com" -f /opt/example/etc/openldap/slapd.conf -w -v -l init.ldif
added: "dc=example,dc=com" (00000001)
added: "ou=policies,dc=example,dc=com" (00000002)
added: "cn=default,ou=policies,dc=example,dc=com" (00000003)
_#################### 100.00% eta   none elapsed            none fast!         
modified: "(null)" (00000001)
Closing DB...

Your OpenLDAP server is now ready to be started.

Starting the servers

Straightforward: launch the service.

Starting the server

# sudo su ldap -c "/etc/init.d/solserver start"

You should get back an [OK].

You can also check that the server is running from the syslog file content. You should see lines like these:

ldap start

Nov 30 01:31:27 cantal slapd[26021]: @(#) $OpenLDAP: slapd 2.4.42 (Oct 28 2015 17:21:35) $#012#011matth@dhimay.rb.example.net:/home/build/sold-2.4.42.2/build/openldap.x86_64/servers/slapd
Nov 30 01:31:27 cantal slapd[26022]: slapd starting
...

The resulting directories layout

Here is the resulting layout once the server has been started (see the directories-full-no-accesslog.png attachment):

Testing the server

Now that both servers are configured and started, you can test that they are up and running, using Ldap Studio, for instance. Create a new connection to each server, using startTLS; you should be able to see the dc=example,dc=com data, and if you create an entry under this suffix on one of the servers, it should be immediately replicated to the other.
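
Replication state can also be checked from the command line by comparing the contextCSN values that the provider and consumer return; when they match, the servers are in sync. A small helper written for this page (the ldapsearch invocations in the comments use this page’s hostnames):

```shell
# Compare the contextCSN values from two ldapsearch outputs; succeeds when
# both servers report identical CSNs, i.e. they are in sync.
same_csn() {
    a=$(grep '^contextCSN:' "$1" | sort)
    b=$(grep '^contextCSN:' "$2" | sort)
    [ -n "$a" ] && [ "$a" = "$b" ]
}

# Typical use against the two servers of this page:
#   ldapsearch -ZZ -x -H ldap://m1.ldap.example.com \
#       -b dc=example,dc=com -s base contextCSN > /tmp/provider.out
#   ldapsearch -ZZ -x -H ldap://r1.ldap.example.com \
#       -b dc=example,dc=com -s base contextCSN > /tmp/consumer.out
#   same_csn /tmp/provider.out /tmp/consumer.out && echo "in sync"
```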

Attachments:

Provider-Consumer-syncrepl.png (image/png) Provider-Consumer-syncrepl.graphml (application/x-upload-data) directories-full-no-accesslog.png (image/png) directories-full-no-accesslog.graphml (application/x-upload-data)