Hey folks,
I'm building a highly available file server cluster out of a pair of CentOS 6.5 servers and a slice of SAN storage. I used LINBIT's guide for highly available NFS on RHEL 6 as a blueprint and ended up with a configuration that can fail over, gracefully or non-gracefully, but in-flight file transfers break during a graceful failover. Below is the Pacemaker configuration I'm using (and, after it, how I'm triggering the graceful failover). Any pointers or guidance on a better way to approach this?
Thanks!
Cluster Name: fs00

Corosync Nodes:

Pacemaker Nodes:
 fs00a fs00b

Resources:
 Clone: clvmd-clone
  Meta Attrs: interleave=false
  Resource: clvmd (class=lsb type=clvmd)
   Operations: start interval=0s timeout=60s (clvmd-start-interval-0s)
               stop interval=0s timeout=120s (clvmd-stop-interval-0s)
               monitor interval=60s timeout=45s (clvmd-monitor-interval-60s)
 Clone: fs-clone
  Group: fs
   Meta Attrs: target-role=Stopped
   Resource: nfs_fs (class=ocf provider=heartbeat type=Filesystem)
    Attributes: device=/dev/vg_data/lv_nfs directory=/mnt/nfs fstype=gfs2 options=defaults
    Operations: start interval=0s timeout=30s (nfs_fs-start-interval-0s)
                stop interval=0s timeout=60s (nfs_fs-stop-interval-0s)
                monitor interval=30s timeout=10s (nfs_fs-monitor-interval-30s)
   Resource: exports_fs (class=ocf provider=heartbeat type=Filesystem)
    Attributes: device=/dev/vg_data/lv_exports directory=/exports fstype=gfs2 options=defaults
    Operations: start interval=0s timeout=30s (exports_fs-start-interval-0s)
                stop interval=0s timeout=60s (exports_fs-stop-interval-0s)
                monitor interval=30s timeout=10s (exports_fs-monitor-interval-30s)
 Group: nfs_server
  Meta Attrs: target-role=Stopped
  Resource: vip (class=ocf provider=heartbeat type=IPaddr)
   Attributes: ip=16.245.9.226
   Operations: monitor interval=30 timeout=10 (vip-monitor-interval-30)
  Resource: nfs (class=ocf provider=heartbeat type=nfsserver)
   Attributes: nfs_init_script=/etc/init.d/nfs nfs_notify_foreground=true nfs_shared_infodir=/mnt/nfs nfs_ip=16.245.9.226
   Operations: monitor interval=30 timeout=10 (nfs-monitor-interval-30)
               start interval=0 timeout=60 (nfs-start-interval-0)
               stop interval=0 timeout=120 (nfs-stop-interval-0)
  Resource: exportfs_root (class=ocf provider=heartbeat type=exportfs)
   Attributes: fsid=0 directory=/exports options=rw,sync,crossmnt clientspec=16.245.0.0/17 wait_for_leasetime_on_stop=false unlock_on_stop=true rmtab_backup=none
   Operations: monitor interval=30 timeout=10 (exportfs_root-monitor-interval-30)
               start interval=0 timeout=60 (exportfs_root-start-interval-0)
               stop interval=0 timeout=120 (exportfs_root-stop-interval-0)
  Resource: exportfs_test (class=ocf provider=heartbeat type=exportfs)
   Attributes: fsid=1 directory=/exports/testdir options=rw,sync,no_root_squash,mountpoint clientspec=16.245.0.0/17 wait_for_leasetime_on_stop=false unlock_on_stop=true rmtab_backup=none
   Operations: monitor interval=30 timeout=10 (exportfs_test-monitor-interval-30)
               start interval=0 timeout=60 (exportfs_test-start-interval-0)
               stop interval=0 timeout=120 (exportfs_test-stop-interval-0)

Stonith Devices:
 Resource: fence_fs00a (class=stonith type=fence_hpblade)
  Attributes: pcmk_host_list=fs00a action=reboot ipaddr=enc00 login=fencer passwd=password cmd_prompt=enc00> secure=true port=11 power_wait=20
  Operations: monitor interval=2h (fence_fs00a-monitor-interval-2h)
 Resource: fence_fs00b (class=stonith type=fence_hpblade)
  Attributes: pcmk_host_list=fs00b action=reboot ipaddr=enc01 login=fencer passwd=password cmd_prompt=enc01> secure=true port=11 power_wait=20
  Operations: monitor interval=2h (fence_fs00b-monitor-interval-2h)

Fencing Levels:

Location Constraints:
  Resource: fence_fs00a
    Disabled on: fs00a (score:-INFINITY) (id:location-fence_fs00a-fs00a--INFINITY)
  Resource: fence_fs00b
    Disabled on: fs00b (score:-INFINITY) (id:location-fence_fs00b-fs00b--INFINITY)
Ordering Constraints:
  start clvmd-clone then start fs-clone (Mandatory) (id:order-clvmd-clone-fs-clone-mandatory)
Colocation Constraints:

Cluster Properties:
 cluster-infrastructure: cman
 dc-version: 1.1.10-14.el6_5.3-368c726
 last-lrm-refresh: 1412622584
 no-quorum-policy: ignore
 stonith-enabled: true
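
For completeness, here's roughly how I trigger the graceful failover while a client transfer to the VIP is running. I'm assuming standby/unstandby is the right way to drain a node for this kind of test; the non-graceful test is just cutting power to the active blade:

    # Put the currently active node (fs00a in this example) into standby
    # so all resources migrate to fs00b:
    pcs cluster standby fs00a

    # This is the point where the client's in-flight transfer to the VIP
    # (16.245.9.226) stalls and dies instead of resuming on fs00b.

    # Bring the node back into the cluster afterwards:
    pcs cluster unstandby fs00a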