How To Set Up Multiple iSCSI Sessions and Multipath on Your Linux Cloud Server

Setting up Multiple iSCSI Sessions with open-iscsi

 

open-iscsi version 2.0-873 or later is required. If your distribution ships an older version, you need to compile and install the newer version manually:

$ wget http://www.open-iscsi.org/bits/open-iscsi-2.0-873.tar.gz

$ tar xzvf open-iscsi-2.0-873.tar.gz

$ cd open-iscsi-2.0-873

$ make

(You may need to install make/gcc first; do this according to your distribution.)

$ make install
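
To confirm that the freshly compiled tools are the ones now in use, you can check the version reported by iscsiadm (it should print 2.0-873 or later; this assumes make install placed the binaries ahead of any distribution copies in your PATH):

$ iscsiadm --version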

At this point, restart the iSCSI service:

$ /etc/init.d/open-iscsi restart

This will log out all existing sessions.

If you had configured your iSCSI nodes in /etc/iscsi for automatic startup (by issuing iscsiadm -m node -T <target name> -p <IP> --op update -n node.startup -v automatic), the sessions will be re-established automatically. Otherwise, you need to log in again manually (iscsiadm -m node -T <target name> -p <IP> --login).
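
For example, using the example target and portal IP that appear in the session listing later in this article (substitute your own values), the two commands would look like this:

$ iscsiadm -m node -T iqn.2011-04.com.zadarastorage:vsa-00000018:1 -p 170.70.2.112 --op update -n node.startup -v automatic

$ iscsiadm -m node -T iqn.2011-04.com.zadarastorage:vsa-00000018:1 -p 170.70.2.112 --login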

In any case, after logging in again, check that your applications use the correct block devices, because the previous block devices disappear on logout.

Note: if the re-login fails, you need to delete the iSCSI configuration and re-create it. Use

$ iscsiadm --mode node --op delete

to delete the existing iSCSI nodes, and then re-create the nodes and log in again as described in "Connecting Linux servers to VPSA using CHAP auth".
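
As a rough sketch (CHAP settings omitted; follow the article referenced above for the full procedure), re-creating a node typically amounts to a discovery followed by a login:

$ iscsiadm -m discovery -t sendtargets -p <IP>

$ iscsiadm -m node -T <target name> -p <IP> --login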

At this point, you can add additional iSCSI sessions to existing ones. Issue:

$ iscsiadm --mode session

You will see the list of sessions, for example:

tcp: [1] 170.70.2.112:3260,1 iqn.2011-04.com.zadarastorage:vsa-00000018:1

tcp: [2] 170.70.2.113:3260,1 iqn.2011-04.com.zadarastorage:vsa-00000019:1

Select the VPSA target you want to establish additional sessions to, and note the appropriate session id: [1] or [2] in the above example. Issue:

$ iscsiadm --mode session -r 1 --op new

This will create an additional session to the same target. Repeat for each additional session you need to open. If you want to create these multiple sessions automatically on machine reboot or iSCSI daemon restart, issue:

$ iscsiadm -m node -T <target name> -p <IP> --op update -n node.session.nr_sessions -v <num sessions>
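
For example, to keep 4 sessions to the first target from the listing above (the IQN, IP and session count here are illustrative), issue the following; the new count takes effect the next time the node logs in (e.g. after an iSCSI service restart), and the resulting sessions can be verified with the second command:

$ iscsiadm -m node -T iqn.2011-04.com.zadarastorage:vsa-00000018:1 -p 170.70.2.112 --op update -n node.session.nr_sessions -v 4

$ iscsiadm --mode session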

 

Setting up multipath-tools for Multiple iSCSI Sessions

Without multipath-tools, each new session to the same target will spawn a new block device for each volume on the target. So if we have N sessions and M volumes, we will see NxM block devices on the host. multipath-tools is used to create virtual block devices that consolidate the underlying block devices for the same volume.
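
You can observe this duplication before configuring multipath: each iSCSI session exposes its own /dev/sd* device per volume. The exact output will differ on your host, but something like the following will show the per-session devices:

$ ls -l /dev/disk/by-path/ | grep iscsi

$ lsblk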

To install multipath-tools:

$ apt-get install multipath-tools

At this point, create the configuration file /etc/multipath.conf and edit it to look like the following:

# Use the blacklist section to exclude local disks from being handled by multipath-tools.
# It is possible to blacklist by vendor/product (with regular expressions), devnode (with regular expressions), WWID.
# Below is an example, for more info see "man multipath.conf"

blacklist {
    device {
        vendor "QEMU"
        product "*"
    }
}

# The below section will handle Zadara volumes

devices {
    device {
        vendor "Zadara"
        # pre 14.11 VPSAs used 'zdr*'
        # product "zdr*"
        # post 14.11 VPSAs use 'VPSA'
        product "VPSA"
        path_grouping_policy multibus
        # You can try different path selectors
        # path_selector "round-robin 0"
        path_selector "queue-length 0"
        # path_selector "service-time 0"
        failback manual
        rr_min_io 1
    }
}

After editing the file, issue:

$ /etc/init.d/multipath-tools reload

This will reload the configuration.
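
If your distribution does not provide this init script, a similar effect can usually be achieved with the multipath tool itself (a hedged alternative; not needed if the reload above succeeded). The first command flushes unused multipath maps, the second forces them to be re-created from the current configuration:

$ multipath -F

$ multipath -r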

To check the current multipath setup, issue:

$ multipath -ll

The output will look like the following:

23766623835303731 dm-1 Zadara,zdr-18-2
size=6.0G features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 28:0:0:1 sdv 65:80 active ready running
  |- 30:0:0:1 sdt 65:48 active ready running
  |- 31:0:0:1 sdu 65:64 active ready running
  `- 29:0:0:1 sdw 65:96 active ready running
23462356666386538 dm-2 Zadara,zdr-18-1
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 30:0:0:0 sdm 8:192 active ready running
  |- 31:0:0:0 sdr 65:16 active ready running
  |- 28:0:0:0 sdq 65:0 active ready running
  `- 29:0:0:0 sds 65:32 active ready running

For each volume there will be a single group, with a number of paths equal to the number of iSCSI sessions.

Each Zadara volume is identified by its unique WWID (persistent for as long as the volume exists). In the above example, the WWIDs are 23766623835303731 and 23462356666386538. The corresponding block devices created by multipath will be:

/dev/mapper/23766623835303731

/dev/mapper/23462356666386538

These are the devices your applications should use. If needed, it is also possible to assign more user-friendly names by using the "user_friendly_names" and/or "alias" keywords in multipath.conf (see the man page for more details).
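
For instance, a multipaths section like the following in /etc/multipath.conf (the alias name is only an illustration; the WWID is the first one from the output above) would expose the 6.0G volume as /dev/mapper/zadara-vol1 after a configuration reload:

multipaths {
    multipath {
        wwid  23766623835303731
        alias zadara-vol1
    }
}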

Once you are happy with the multipath setup, issue:

$ update-initramfs -u

This will update the multipath configuration in your initramfs.

Notes:

  • Adding volumes/sessions is handled automatically by multipath-tools

     

  • If you delete/detach a volume from the server, the corresponding iSCSI block devices on the server still exist (but return I/O errors). If you want to remove one, locate the block device (following the link in /dev/disk/by-path) and issue:
         $ echo 1 > /sys/block/<name>/device/delete
    Be careful not to delete the wrong device! A short sketch of this procedure follows after this list.
  • When the last block device for the volume is deleted, multipath will remove the virtual block device.
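
As referenced in the note above, a minimal sketch of removing a stale device (the device name sdx is hypothetical; double-check the /dev/disk/by-path link before deleting anything):

$ ls -l /dev/disk/by-path/ | grep iscsi

$ echo 1 > /sys/block/sdx/device/delete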

Comments

  • Vladimir:

    Analogue for RedHat EL 6.3:

    1. Installing iSCSI Initiator and setting number of sessions to 4:

    yum install iscsi-initiator-utils

    iscsiadm -m node --op update -n node.session.nr_sessions -v 4

    service iscsi start

     

    2. Installing multipath and auto-starting it on boot

    yum install device-mapper-multipath

    edit /etc/multipath.conf (use the config file example from above)

    service multipathd start

    chkconfig multipathd on
