How To Set Up Fibre Channel Multipath in Linux

Introduction

When using Linux as an initiator for Fibre Channel access to a VPSA, this article explains the multipath.conf settings required to ensure IO is sent to the correct paths and failovers are handled properly.

Zadara VPSAs have two controllers: an active and a standby (both visible in the "Controllers" section of the GUI).  Each controller has two Fibre Channel paths to the Fibre Channel switches, one to each switch in the redundant pair.  This means every initiator to the Zadara VPSA will have four paths: two to the active controller and two to the standby.

All IO must be sent only to the active controller, as it is the only controller that can service IO requests.  Paths to the standby controller should be connected, but remain in a standby state until a failover occurs.


Multipath Configuration

The following configuration should be included in your system's multipath configuration file, generally found at /etc/multipath.conf.


Ubuntu/Red Hat


# Use the blacklist section to exclude local disks from being handled
# by multipath-tools. It is possible to blacklist by vendor/product
# (with regular expressions), devnode (with regular expressions), or WWID.
# Below is an example; for more info see "man multipath.conf"
blacklist {
    device {
        vendor  "QEMU"
        product "*"
    }
}

# The below section will handle Zadara volumes
devices {
    device {
        vendor               "Zadara"
        product              "VPSA"
        path_grouping_policy group_by_prio
        path_checker         tur
        features             "0"
        hardware_handler     "1 alua"
        prio                 alua
        failback             immediate
        no_path_retry        0
        rr_min_io            1
    }
}
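After saving the file, the configuration needs to be applied to the running multipath daemon. The exact commands vary by distribution and multipath-tools version; the following is a sketch for a typical systemd-based system:

```shell
# Reload the multipath daemon so it re-reads /etc/multipath.conf
# (on older init systems, use "service multipathd reload" instead).
sudo systemctl reload multipathd

# Re-scan devices and rebuild the multipath maps.
sudo multipath -r

# Verify the resulting topology (see Sample Output below).
sudo multipath -ll
```

These commands require root privileges and attached Fibre Channel storage, so they are shown here for reference rather than as a copy-paste script.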


SuSE

# Use the blacklist section to exclude local disks from being handled
# by multipath-tools. It is possible to blacklist by vendor/product
# (with regular expressions), devnode (with regular expressions), or WWID.
# Below is an example; for more info see "man multipath.conf"
blacklist {
    device {
        vendor  "QEMU"
        product "*"
    }
}

# The below section will handle Zadara volumes
devices {
    device {
        vendor               "Zadara"
        product              "VPSA"
        path_grouping_policy group_by_prio
        path_checker         tur
        features             "0"
        hardware_handler     "1 alua"
        prio                 alua
        failback             immediate
        no_path_retry        10
        rr_min_io            1
    }
}

This is identical to the Ubuntu/Red Hat configuration except that no_path_retry is set to 10 rather than 0.


Sample Output

After applying the above configuration to the multipath.conf file and reloading the multipath service, running the command "multipath -ll" should show output similar to the following:

multipath -ll
2346631646263392d dm-4 Zadara,VPSA
size=4.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:4 sdf 8:80  active ready running
| `- 2:0:0:4 sdp 8:240 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 1:0:1:4 sdk 8:160 active ready running
  `- 2:0:1:4 sdu 65:64 active ready running
26164303262306538 dm-2 Zadara,VPSA
size=6.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:2 sdd 8:48  active ready running
| `- 2:0:0:2 sdn 8:208 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 1:0:1:2 sdi 8:128 active ready running
  `- 2:0:1:2 sds 65:32 active ready running
26265613032383966 dm-1 Zadara,VPSA
size=6.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:1 sdc 8:32  active ready running
| `- 2:0:0:1 sdm 8:192 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 1:0:1:1 sdh 8:112 active ready running
  `- 2:0:1:1 sdr 65:16 active ready running
26630326134393433 dm-3 Zadara,VPSA
size=2.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:3 sde 8:64  active ready running
| `- 2:0:0:3 sdo 8:224 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 1:0:1:3 sdj 8:144 active ready running
  `- 2:0:1:3 sdt 65:48 active ready running
23861343738303036 dm-0 Zadara,VPSA
size=20G features='0' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:0 sdb 8:16  active ready running
| `- 2:0:0:0 sdl 8:176 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 1:0:1:0 sdg 8:96  active ready running
  `- 2:0:1:0 sdq 65:0  active ready running


Please note that for each mounted multipath volume there should be two paths with
"status=active" and two with "status=enabled".  The "status=active" paths lead to
the active virtual controller, and the "status=enabled" paths lead to the standby.  IO
will only go to the "active" paths; should a failover occur, the ALUA mechanism will
handle the switch to the other controller automatically.
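If you want to confirm the ALUA state of an individual path, the sg_rtpg utility from the sg3_utils package can query the target port group state directly. This is an optional sketch, assuming sg3_utils is installed; /dev/sdf here is just one of the example path devices from the output above, so substitute a path device from your own system:

```shell
# Query and decode the ALUA target port group states as reported
# by the array through this path device.
sudo sg_rtpg --decode /dev/sdf
```

Paths belonging to the "status=active" (prio=50) group should typically decode as active/optimized, while paths in the "status=enabled" (prio=1) group should report a standby state.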
