Symas OpenLDAP Knowledge Base

Multimaster Setup with Delta-Syncrepl

(slapd.conf inserts are shown as indented blocks)

Configure the consumer part and the producer part on each server

  1. Give each Master/Producer server a unique ID
# ServerID parameter 
        serverid        1 
# The first server will be 1 and the second will be 2 
  2. Enable the syncprov and accesslog modules on Masters/Producers
# These two modules are mandatory for delta-syncrepl 
        moduleload      syncprov.la 
        moduleload      accesslog.la 
  3. Add the chaining module on Replicas/Consumers and demoted Providers only
# The chain overlay is provided by the ldap backend 
        moduleload      back_ldap.la 
  4. Add a loglevel entry to all servers
        logfile /tmp/slapd_log.log 
        loglevel       stats, sync 

View Debug log levels

        ~/symas/lib64/slapd -d?  

Once the server is up, review the log output. The easiest way is to take a slice off the tail (the latest records) of the log:

       tail -n 10000 /tmp/slapd_log.log > log_slice.log

For information about the contents of the log, see How To Read a slapd Log.

  5. Create the primary database entry on Producers and Consumers

Application database

        database    mdb 
        suffix      "dc=symas,dc=com" 
        rootdn      "dc=symas,dc=com" 
        rootpw      {SSHA256}nB3qRLx5bz2X4FJNUvF2/9toLiVufv4vScQG2t+85sIES6WywCuVFw== 

Indices to maintain

        index       default    eq 
        index       objectClass 
        index       cn eq,sub 
        index       givenName eq,sub 
        index       uniqueMember 
        index       mail eq,sub 
        index       entryUUID eq 
        index       entryCSN eq 
        index       uid eq,sub 
        directory   /var/symas/openldap-data/symas 
        maxsize     1073741824 

Create (mkdir) the directories /var/symas/openldap-data/symas (for the primary database) and /var/symas/openldap-data/accesslog (for the accesslog database configured below):

       mkdir -p /var/symas/openldap-data/symas /var/symas/openldap-data/accesslog

The -p option creates any missing parent directories.

  6. Add the chaining overlay (Replicas/Consumers and demoted Providers only)
        overlay    chain 
        chain-uri  "ldaps://fwapldap01" 
        chain-return-error      TRUE 

The chain-uri should point to “the master” in a single-master configuration. In a multi-master cluster, we recommend that applications known to do “writes” (additions, modifications, and deletions) be pointed at one of the masters. The best way to do that is to set up the load-balancer with (at least) two virtual IP addresses (“VIPs”): one for write-heavy traffic and the other(s) to distribute the more query-heavy traffic. The first should generally “prefer” one of the masters but direct traffic to another if that server is offline. This minimizes replication activity, improving efficiency and overall performance.

  7. Define the server(s) from which consumers will get replication updates. In a multi-master configuration, the masters should all be configured to take replication traffic from each other.

Use syncrepl.

For delta-syncrepl, set syncrepl to use the accesslog database.

Filter the accesslog database so that only write operations (i.e., updates) that were successful (i.e., reqResult=0) are replicated.

Define as many syncrepl stanzas as there are Master/Producer servers in the replication topology, including the server where the application database itself is stored (the syncrepl sections and their RID order must match on all Masters).

Example: 3 servers A, B, and C.

A could define sync(B) and sync(C), B could define sync(A) and sync(C), and C could define sync(A) and sync(B). To keep the RID order identical everywhere, simply configure sync(A), sync(B), and sync(C) on all the servers.

  8. Set up the syncrepl parameters

Each syncrepl stanza instructs the server where to send requests for updates. The retry parameter takes pairs of <interval> <# of retries> values, where "+" means retry indefinitely. For example:

    retry="5 10 6 +" 
    retry="60 +" 

NOTE: The credentials must always be plaintext. The servers will hash them automatically when connecting, but if the password is already hashed, syncrepl will hash the hash and the connection will fail unless TLS certificates are used in place of binddn/credentials.
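As a sketch, a complete delta-syncrepl stanza for one provider might look like the following; the rid, provider hostname, binddn, and credentials are placeholders for your environment:

```
    syncrepl  rid=001
              provider=ldaps://server-a.example.com
              bindmethod=simple
              binddn="cn=replicator,dc=symas,dc=com"
              credentials=secret
              searchbase="dc=symas,dc=com"
              logbase="cn=accesslog"
              logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
              schemachecking=on
              type=refreshAndPersist
              retry="60 +"
              syncdata=accesslog
```

Repeat the stanza once per Master/Producer, incrementing rid and changing the provider URI, keeping the stanzas in the same order on every server.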

  9. For two Masters, add the MirrorMode flag set to on (restricted to 2 Masters in the same location):

    mirrormode on 

  10. If TLS is not used, add the binddn and credentials to the syncrepl entry

  11. Set UpdateRef (enable only on Replicas/Consumers and demoted Providers)

Updateref: Forwards all writes (ADD/MOD/DEL) to the specified provider

    updateref  "ldaps://desired Master/Provider server" 

  12. Define the accesslog and syncprov overlays on Masters/Producers

    overlay       syncprov 
    syncprov-checkpoint    100 10 
    syncprov-sessionlog    10000 
    syncprov-reloadhint    TRUE 
    overlay accesslog 
    logdb cn=accesslog 
    logops writes 
    logsuccess TRUE 
    logpurge 24:00 01+00:00 

  13. Create the AccessLog database entry (Masters/Producers only)

Accesslog database

    database     mdb 
    rootdn   "cn=config" 
    directory    /var/symas/openldap-data/accesslog 
    maxsize      5120000 
    suffix       "cn=accesslog" 
    index        default eq 
    index        objectClass,entryCSN,entryUUID,reqEnd,reqResult,reqStart 

Define only the syncprov overlay 

    overlay             syncprov 
    syncprov-nopresent  TRUE 
    syncprov-reloadhint TRUE 
    syncprov-checkpoint 100 10 
    syncprov-sessionlog 10000 

On the Producer, create the directory /var/symas/openldap-data/accesslog:

       mkdir -p /var/symas/openldap-data/accesslog

  14. Load the servers with all data before starting slapd

Use slapcat to back up the database from the first master, then use slapadd to load that backup onto the second master. For example:

       slapcat -b "dc=symas,dc=com" -l backup.ldif

Copy backup.ldif to the second master, then:

       slapadd -q -b "dc=symas,dc=com" -l backup.ldif


In Multi-Master Replication, each server is a consumer of the other masters and each server is a producer for the other masters.

Various topologies:

  • With only 2 servers, this is really A <-> B
  • Both are masters, both are producers, both are consumers
  • With 3-4 servers, this is A <-> B <-> C (<-> D)
  • A and B are both masters, producers and consumers
  • B and C (and D) are masters, producers and consumers
  • Updating A will update B which will update C (which will update D)
  • A and C are only connected via B unless A is specifically connected to C
  • If every server is also connected to every other (a full mesh), updating A updates B and C directly:
    • 2 servers : 1 connection (A <-> B)
    • 3 servers : 3 connections (A <-> B, A <-> C, B <-> C)
    • 4 servers : 6 connections (A <-> B, A <-> C, A <-> D, B <-> C, B <-> D, C <-> D)

NOTE: With more than 4 servers this becomes problematic, since the number of connections grows very fast:

  • N servers : (N-1) + (N-2) + ... + 2 + 1 = N x (N-1)/2. For N=10, this is 45 connections.

That also means an entry being modified is going to be transmitted as many times as you have connections.
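The growth formula above can be checked with a quick shell calculation; the loop simply evaluates N x (N-1)/2 for a few cluster sizes:

```shell
# Full-mesh replication connections for N masters: N*(N-1)/2
for N in 2 3 4 10; do
  echo "$N servers: $(( N * (N - 1) / 2 )) connections"
done
```

This reproduces the counts listed above: 1, 3, and 6 connections for 2, 3, and 4 servers, and 45 for a 10-server mesh.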