CTDB (resource agent)

Introduction
This resource agent manages CTDB, allowing one to use Clustered Samba in a Linux-HA/Pacemaker cluster.

Those familiar with CTDB will be aware that it can handle node failover itself, and that it includes event scripts for managing services other than Samba (e.g. NFS, HTTPD, etc.). This is fine if you want CTDB to manage the cluster, but these features are not appropriate in a Linux-HA/Pacemaker cluster, because the same abilities are already provided by Pacemaker and the underlying messaging layer. Configuring a system such that some HA resources are managed by CTDB and some by Pacemaker quickly becomes confusing, to say the least.

This CTDB RA will, by default, start and stop CTDBD only. Samba and Winbind should be configured as separate resources, colocated with and ordered after the CTDB resource (or put in a cloned group). Currently (2010-11-30) we don't have Samba or Winbind OCF resource agents, so it is still possible to configure the CTDB RA such that CTDB will in turn manage Samba and Winbind. This mode will be deprecated in future.
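As a sketch of the recommended separation, Samba and Winbind could be run alongside CTDB in a cloned group. Since no Samba or Winbind OCF resource agents exist yet, this sketch assumes LSB init-script agents (the lsb:smb and lsb:winbind names are assumptions and depend on your distribution's init scripts); the ctdb primitive is the one configured in the Usage section below:

```shell
# crm configure
# Samba and Winbind as separate resources (assumed LSB agent names):
crm(live)configure# primitive smb lsb:smb op monitor interval=30
crm(live)configure# primitive winbind lsb:winbind op monitor interval=30
# Group them with CTDB so they start after it, then clone the group:
crm(live)configure# group ctdb-group ctdb smb winbind
crm(live)configure# clone ctdb-group-clone ctdb-group \
    meta globally-unique="false" interleave="true"
crm(live)configure# commit
```

The group implicitly provides the "colocated with and ordered after CTDB" behaviour; equivalent explicit colocation and order constraints could be used instead of a group.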

Any other resources you need (IP addresses, clustered filesystems, NFS serving, HTTPD, etc.) must be configured within Pacemaker as usual, with appropriate ordering and colocation constraints.

Usage
 * Configure a shared filesystem (e.g. OCFS2). For the purposes of these instructions, we assume it is mounted at /shared-fs.
 * Ensure the ctdb, smb, nmb and winbind services are disabled (chkconfig <service> off or similar).
 * Make a directory for the CTDB lock on the shared filesystem:

      # mkdir -p /shared-fs/samba

 * Create /etc/ctdb/nodes on all nodes, containing a list of the private IP addresses of each node in the cluster, e.g.:

      # cat /etc/ctdb/nodes
      192.168.101.14
      192.168.101.15

 * Add a share (or shares) to /etc/samba/smb.conf on all nodes:

      [myshare]
      path = /shared-fs/myshare
      ...other options here, e.g.: read only = no etc...

 * Add a CTDB resource to the cluster (assuming the clustered filesystem clone is named fs-clone):

      # crm configure
      crm(live)configure# primitive ctdb ocf:heartbeat:CTDB params \
          ctdb_recovery_lock="/shared-fs/samba/ctdb.lock" \
          ctdb_manages_samba="yes" \
          ctdb_manages_winbind="yes" \
          op monitor timeout=20 interval=10
      crm(live)configure# clone ctdb-clone ctdb \
          meta globally-unique="false" interleave="true"
      crm(live)configure# colocation ctdb-with-fs inf: ctdb-clone fs-clone
      crm(live)configure# order start-ctdb-after-fs inf: fs-clone ctdb-clone
      crm(live)configure# commit

IP Addresses
There are at least two ways to do this:

Clustered IP Address
 * Add a clustered IP address:

      # crm configure
      crm(live)configure# primitive ip ocf:heartbeat:IPaddr2 params ip=192.168.100.222 \
          clusterip_hash="sourceip-sourceport" op monitor interval=60s
      crm(live)configure# clone ip-clone ip meta globally-unique="true"
      crm(live)configure# colocation ip-with-ctdb inf: ip-clone ctdb-clone
      crm(live)configure# order start-ip-after-ctdb inf: ctdb-clone ip-clone
      crm(live)configure# commit

 * The result should be something like:

      # crm status
      ...
      Clone Set: dlm-clone
          Started: [ node-0 node-1 ]
      Clone Set: o2cb-clone
          Started: [ node-0 node-1 ]
      Clone Set: fs-clone
          Started: [ node-0 node-1 ]
      Clone Set: ctdb-clone
          Started: [ node-0 node-1 ]
      Clone Set: ip-clone (unique)
          ip:0       (ocf::heartbeat:IPaddr2):       Started node-0
          ip:1       (ocf::heartbeat:IPaddr2):       Started node-1

This will give a single IP address, connections to which will be handled by one of the nodes in the cluster.

One or More Distinct IP Addresses
Configure multiple separate IPaddr2 resources (non-cloned). It should be possible to combine these with the Tickle ACK feature newly added to the portblock RA. Primitives should be similar to:

      # primitive block-1 ocf:heartbeat:portblock \
          params ip="192.168.100.201" protocol="tcp" \
          portno="137,138,139,445" action="block"
      # primitive ip-1 ocf:heartbeat:IPaddr2 \
          params ip="192.168.100.201" \
          op monitor interval="60s"
      # primitive unblock-1 ocf:heartbeat:portblock \
          params ip="192.168.100.201" protocol="tcp" \
          portno="137,138,139,445" action="unblock" \
          tickle_dir="/shared-fs/tickle" \        <--- ideally on shared storage
          op monitor interval="10s"

Then, constraints need to be configured such that the resources start in the following order:

block-1 -> ip-1 -> ctdb -> unblock-1
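The chain above could be expressed with constraints along these lines (a sketch only; the resource names follow the primitives above, ctdb-clone is the cloned CTDB resource from the Usage section, and the colocations assume all three IP-related resources must run on the same node):

```shell
# crm configure
# Keep the IP and the unblock resource with the port-blocker:
crm(live)configure# colocation ip-with-block inf: ip-1 block-1
crm(live)configure# colocation unblock-with-ip inf: unblock-1 ip-1
# Enforce the start order block-1 -> ip-1 -> ctdb -> unblock-1:
crm(live)configure# order block-then-ip inf: block-1 ip-1
crm(live)configure# order ip-then-ctdb inf: ip-1 ctdb-clone
crm(live)configure# order ctdb-then-unblock inf: ctdb-clone unblock-1
crm(live)configure# commit
```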

(Obviously, this still requires some more fleshing-out)

Available RA Instance Parameters
 * OCF_RESKEY_ctdb_recovery_lock: required; location of the lock file on shared storage
 * OCF_RESKEY_ctdb_manages_samba: optional; default=no (will be deprecated in future)
 * OCF_RESKEY_ctdb_manages_winbind: optional; default=no (will be deprecated in future)
 * OCF_RESKEY_ctdb_service_smb: optional; will usually be auto-detected; only necessary if CTDB is managing Samba
 * OCF_RESKEY_ctdb_service_nmb: optional; will usually be auto-detected; only necessary if CTDB is managing Samba
 * OCF_RESKEY_ctdb_service_winbind: optional; will usually be auto-detected; only necessary if CTDB is managing Winbind
 * OCF_RESKEY_ctdb_samba_skip_share_check: optional; default=yes
 * OCF_RESKEY_ctdb_monitor_free_memory: optional; default=100
 * OCF_RESKEY_ctdb_start_as_disabled: optional; default=yes
 * OCF_RESKEY_ctdb_config_dir: optional; default=/etc/ctdb
 * OCF_RESKEY_ctdb_binary: optional; default=/usr/bin/ctdb
 * OCF_RESKEY_ctdbd_binary: optional; default=/usr/sbin/ctdbd
 * OCF_RESKEY_ctdb_socket: optional; default=/var/lib/ctdb/ctdb.socket
 * OCF_RESKEY_ctdb_dbdir: optional; default=/var/lib/ctdb
 * OCF_RESKEY_ctdb_logfile: optional; default=/var/log/ctdb/log.ctdb
 * OCF_RESKEY_ctdb_debuglevel: optional; default=2
 * OCF_RESKEY_smb_conf: optional; default=/etc/samba/smb.conf
 * OCF_RESKEY_smb_private_dir: optional; directory for smbpasswd, secrets.tdb, etc. (deprecated; do not use with CTDB > 1.0.50)
 * OCF_RESKEY_smb_passdb_backend: optional; default=tdbsam; only used if CTDB is managing Samba
 * OCF_RESKEY_smb_idmap_backend: optional; default=tdb2; only used if CTDB is managing Samba
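Since ctdb_recovery_lock is the only required parameter, a minimal primitive that relies on the defaults above could look like this sketch (the lock-file path follows the Usage example; monitor values are illustrative):

```shell
# crm configure
crm(live)configure# primitive ctdb ocf:heartbeat:CTDB \
    params ctdb_recovery_lock="/shared-fs/samba/ctdb.lock" \
    op monitor timeout=20 interval=10
crm(live)configure# commit
```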