new blog 2.0


0x07. [LPIC-301] LDAP - Directory replication

Our directory is used all the time for one specific purpose, namely authentication. For that reason security had to be taken into account; nevertheless, its stability is now of no less importance. Also, decreasing the load on the master and moving it over to slaves increases the performance of the whole directory instance. It is also important that you come up with a good backup strategy. For all that, OpenLDAP 2.3 and below provide the slurpd daemon, which is responsible for directory replication. In OpenLDAP 2.4 slurpd was completely replaced with syncrepl for a number of reasons.

slurpd logic
slapd and slurpd run on the same server, which from now on will be called the master. Alongside it there are several slave servers running only the slapd daemon. DNS for our LDAP service should be configured with round-robin (in my case it's simply porta.tux, but ldap.porta.tux would be more appropriate), so that when clients connect to the directory the load is distributed among the slave servers. Clients simply read the directory from a slave server. Now let's assume that a client wants to change a password entry in the directory. This process is more complicated and, according to the OpenLDAP admin guide, includes the following steps:
  1. The LDAP client submits an LDAP modify operation to the slave slapd.
  2. The slave slapd returns a referral to the LDAP client referring the client to the master slapd.
  3. The LDAP client submits the LDAP modify operation to the master slapd.
  4. The master slapd performs the modify operation, writes out the change to its replication log file and returns a success code to the client.
  5. The slurpd process notices that a new entry has been appended to the replication log file, reads the replication log entry, and sends the change to the slave slapd via LDAP.
  6. The slave slapd performs the modify operation and returns a success code to the slurpd process.
Say I have 3 slave slapd servers and one master, and a client wants to perform a password change. The following happens: 1) the client sends an LDAP modify request; 2) the slave slapd points the client to the master server; 3) the client resends the LDAP modify request to the master; 4) the master confirms success; 5) the master informs slurpd about the change in the directory via the replog; 6) slurpd replicates the change to all slave servers.
A slave server to which the LDAP data is pushed is called a replica server.
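Step 2 can be observed from the client side. A sketch of what a write attempt against a slave looks like (hostnames and the LDIF file are examples from my setup; result code 10 is LDAP's referral code, and the exact output depends on the client tool):

```
$ ldapmodify -H ldap://princess.tux -D "uid=spitfire,ou=People,dc=porta,dc=tux" -W -f passwd.ldif
ldap_modify: Referral (10)
        referrals:
                ldaps://porta.tux
```

A well-behaved client then chases the referral and repeats the modify against porta.tux itself.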

How to configure a replica?

Kick off with building a copy of your master LDAP server on another computer. It is important to secure it as much as you do the master server, since slurpd on the master will update the slaves over the LDAP protocol, and you don't want sensitive data wandering over the network unencrypted. The second step is migrating the data from the master server. Theoretically you can copy the database files from the master to the slave, but make sure that the database software is compatible on both machines and that the CPU architecture does not prevent compatibility. If you want to avoid these and possibly other factors, there is a universal solution: slapcat. It will dump the database content in LDIF format. Move this file over to the replica and slapadd it into the new directory. Once that is done we can get down to the configuration files... I presume that slapd.conf on both servers is fully functional.
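The dump-and-load can be sketched like this (the hostname and paths are example values; slapd on the replica should be stopped before slapadd runs):

```
# on the master: dump the whole directory to LDIF
slapcat -l master.ldif
scp master.ldif princess.tux:/tmp/

# on the replica, with slapd stopped: load the dump
slapadd -l /tmp/master.ldif
```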

Replica slapd.conf
We need to set the rootdn/rootpw on the replica server; the master's slurpd will use these credentials for modifying entries. If you decide not to use the rootdn as the updating user, it is good to double-check that this particular user has sufficient access.

# New rootdn/rootpw on the slave
# -- this is only an example, in real life rootpw should be hashed
rootdn "cn=replica,dc=porta,dc=tux"
rootpw "replicaPasswd"

# -- Now the most important part. updatedn (user responsible
# for updates on the slave) is associated with rootdn entry.
# slave's slapd will point users willing to write to updateref
updatedn "cn=replica,dc=porta,dc=tux"
updateref "ldaps://porta.tux"
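If the updatedn is not the rootdn, an ACL along these lines (a sketch for this example tree) grants the update user the write access it needs:

```
# grant the update user full write access, everyone else read
access to *
        by dn="cn=replica,dc=porta,dc=tux" write
        by * read
```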

Master slapd.conf
Analogously, the master should know the credentials for the slave and the hostname to push to. The following directives accomplish that:

# to add on the master slapd.conf
replogfile /var/lib/openldap-slurp/replica/slapd.replog
replica uri=ldaps://princess.tux
        binddn="cn=replica,dc=porta,dc=tux"
        bindmethod=simple credentials=replicaPasswd

In the lines above we defined the replog file; slapd communicates with slurpd via the replog. So what's its content?
#- replog
replica: princess.tux:636
time: 1198985069
dn: uid=spitfire,ou=People,dc=porta,dc=tux
changetype: modify
replace: userPassword
userPassword:: cXdTcffzZEZ6eG2N
-
replace: entryCSN
entryCSN: 20071230032429Z#000000#00#000000
-
replace: modifiersName
modifiersName: uid=spitfire,ou=People,dc=porta,dc=tux
-
replace: modifyTimestamp
modifyTimestamp: 20071230032429Z
#- end of replog

The snippet from the logfile shows that spitfire was changing his password. Slurpd reads it and pushes the changes on to the slave servers. If it is unsuccessful, it writes the entry to a rejection file (a .rej file) with exactly the same syntax as the replog, but with an ERROR line added at the beginning. (Example taken from the OpenLDAP 2.3 admin guide:)
       ERROR: No such attribute
time: 809618633
dn: uid=bjensen,dc=example,dc=com
changetype: modify
replace: description
description: A dreamer...
replace: modifiersName
modifiersName: uid=bjensen,dc=example,dc=com
replace: modifyTimestamp
modifyTimestamp: 20000805073308Z

slurpd in One-shot mode and reject files
If you want to reprocess the entries from a rejection file, you needn't do it by hand. You can run slurpd in one-shot mode (-o for one-shot mode, -r for the file name of the replog/rejection file to be processed):
slurpd -r <rejection file> -o
After successful processing of the file slurpd exits instead of going into daemon mode.

slurpd operates in push mode only (master pushes the changes to the slaves)

syncrepl logic

A preferred alternative to slurpd is syncrepl. It is a consumer-side replication engine. Syncrepl uses the LDAP Content Synchronization Protocol to keep data up to date, and it supports both push- and pull-based replication. It synchronizes automatically with the provider database.

The LDAP Content Synchronization Protocol supports two types of operation: refreshOnly (polling) and refreshAndPersist (listening).

With refreshOnly, the consumer server is synchronized during the poll and disconnects when finished.
When refreshAndPersist is selected, the consumer server remains connected and receives all newly changed entries on the fly.

refreshOnly and the "refresh" part of refreshAndPersist can be performed in one of two phases: present or delete.

In the present phase the provider slapd sends the following information to the consumer slapd:
a) all the entries that have been changed since the last synchronization
b) the changed attributes in these entries, with their new values
c) unchanged entries without any attribute values, but marked as "present" on the server
d) entries that haven't been mentioned at all no longer exist, and are thus removed from the consumer slapd

In the delete phase the information sent by the provider slapd is as follows:
a) all the entries that have been changed since the last synchronization
b) the changed attributes in these entries, with their new values
c) unchanged entries are not mentioned at all
d) removed entries without any attribute values, but marked as "deleted" on the server

The refreshOnly operation finishes with the LDAP Sync server sending a cookie to the LDAP Sync client. The client will present this cookie to the server prior to the next synchronization, so the server can tell whether something has changed since the last sync.

The difference between refreshOnly and refreshAndPersist is that in the latter the connection between sync servers is not terminated and the state cookie can be updated anytime the servers request it.
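The cookie itself is an opaque string; in OpenLDAP it typically carries the replica id and a contentCSN, along these lines (the values here are purely illustrative):

```
rid=001,csn=20071230032429.000000Z#000000#000#000000
```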

You can find out more in RFC 4533, which specifies the LDAP Content Synchronization Operation, and in the OpenLDAP Admin Guide.

configuring syncrepl
You can start with undoing the changes we made during the slurpd configuration :)
As mentioned above, syncrepl is an LDAP Sync client (consumer) based type of replication, which is why there is not too much to do on the LDAP Sync server's (provider's) side.
# --
overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100
# --
The first directive enables the syncprov overlay.
The second line is the provider's checkpoint limit: in the example above a checkpoint will be performed every 100 LDAP operations or every 10 minutes, whichever comes first. The third line keeps a session log of the 100 most recent write operations, which lets the provider use the cheaper delete phase instead of the present phase.

On the LDAP Sync client side things get more interesting:
# -- snippet from slapd.conf on princess-pc
syncrepl rid=1
# --

So our princess-pc will connect to porta.tux every hour and perform a refreshOnly operation. It uses the rootdn of the master provider server (which, from the security point of view, is not the smartest thing to do).
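The snippet above is abbreviated; a fuller consumer stanza matching that description might look like this (binddn and credentials are example values, and interval uses the dd:hh:mm:ss format, so 00:01:00:00 means hourly):

```
# -- slapd.conf on princess-pc, consumer side
syncrepl rid=1
        provider=ldaps://porta.tux
        type=refreshOnly
        interval=00:01:00:00
        searchbase="dc=porta,dc=tux"
        bindmethod=simple
        binddn="cn=Manager,dc=porta,dc=tux"
        credentials=secret
# --
```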

ldbm backend does not support refreshAndPersist
