iSCSI Install and Configuration from Scratch
Alternatively, the VPSA offers an automatic configuration script for download. The script covers steps 2-3 and 5-8; you would still need to perform steps 4 and 9 manually.
1) Install the iSCSI utilities that provide iscsiadm
## Debian/Ubuntu
# apt-get install open-iscsi
## CentOS/RedHat
# yum install iscsi-initiator-utils
2) Obtain the host's IQN
# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:4aa7d2f25ea5
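If the file does not exist, or you want to set a host-specific IQN, the iscsi-iname tool shipped with the packages above can generate one; write the result back as the InitiatorName value and restart the initiator service so it takes effect (service names vary by distribution, e.g. iscsid or open-iscsi).
# iscsi-iname
# vi /etc/iscsi/initiatorname.iscsi    ## set InitiatorName=<generated IQN>
# systemctl restart iscsid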
3) VPSA GUI: Create the appropriate server record within the VPSA, acquire CHAP credentials
4) VPSA GUI: "Attach" the server record to an iSCSI Volume
5) Create an iscsiadm "iface" for the VPSA
# iscsiadm -m iface -I zadara_<VPSA_IP> --op new
New interface zadara_<VPSA_IP> added
6) Map the iSCSI "iface" to the network interface that can reach the VPSA
# iscsiadm -m iface -I zadara_<VPSA_IP> --op update -n iface.net_ifacename -v eth0
zadara_<VPSA_IP> updated.
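To confirm the binding before discovery, you can display the iface record; it should show iface.net_ifacename = eth0 (or whichever NIC you mapped).
# iscsiadm -m iface -I zadara_<VPSA_IP>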
7) Discover the VPSA via iSCSI. This returns the VPSA IQN, which is also available in the VPSA GUI.
# iscsiadm -m discovery -t sendtargets -p <VPSA_IP>:3260 -I zadara_<VPSA_IP>
<VPSA_IP>:3260,1 iqn.2011-04.com.zadarastorage:vsa-00009628:1
8) Login to the discovered iSCSI target using CHAP
# iscsiadm -m node -T <VPSA_IQN> -p <VPSA_IP> --op update -n node.session.auth.authmethod -v CHAP
# iscsiadm -m node -T <VPSA_IQN> -p <VPSA_IP> --op update -n node.session.auth.username -v <SERVER_RECORD_CHAP_USER>
# iscsiadm -m node -T <VPSA_IQN> -p <VPSA_IP> --op update -n node.session.auth.password -v <SERVER_RECORD_CHAP_SECRET>
# iscsiadm -m node -T <VPSA_IQN> -p <VPSA_IP> --op update -n node.startup -v automatic
# iscsiadm -m node -T <VPSA_IQN> -p <VPSA_IP> -l -I zadara_<VPSA_IP>
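At this point the session should be established and the volume should appear as a new block device; a quick sanity check (device names will differ per host):
# iscsiadm -m session
# lsblk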
9) If the LUN configuration has been changed in the VPSA GUI after the host has been configured, you may need to "Rescan your LUNs".
# iscsiadm -m session --rescan
If you stop here, you have successfully configured a single session iSCSI connection to your VPSA. You should now be able to access the LUN via your favorite partitioning or filesystem formatting utility.
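As a minimal sketch of that last step, assuming the LUN appeared as /dev/sdb and /mnt/vpsa is the desired mount point (both are assumptions; confirm the device with lsblk or /dev/disk/by-path first):
# mkfs.ext4 /dev/sdb        ## destroys any data already on the device
# mkdir -p /mnt/vpsa
# mount /dev/sdb /mnt/vpsa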
iSCSI MultiSession Setup
1) Obtain the current iSCSI sessions
# iscsiadm -m session
tcp: [1] <VPSA_IP>:3260,1 iqn.2011-04.com.zadarastorage:vsa-00006804:1 (non-flash)
tcp: [2] <VPSA_IP>:3260,1 iqn.2011-04.com.zadarastorage:vsa-00009628:1 (non-flash)
The [X] part of the results is the session ID number. In this case there are two VPSAs connected, each with 1 session.
2) To immediately add an additional session (does not persist across reboots)
# iscsiadm --mode session -r <SESSION_ID> --op new
3) To configure the total number of sessions to initiate on reboot
# iscsiadm -m node -T <VPSA_IQN> -p <VPSA_IP> --op update -n node.session.nr_sessions -v <TOTAL_SESSIONS>
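One way to establish the new session count without a reboot is to log the node out and back in; note this briefly drops all paths to the volume, so avoid doing it while the LUN is in active use.
# iscsiadm -m node -T <VPSA_IQN> -p <VPSA_IP> -u
# iscsiadm -m node -T <VPSA_IQN> -p <VPSA_IP> -l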
Unfortunately, in most cases multiple sessions alone are not enough. Each additional session exposes another block device for every LUN, so 2 LUNs on the VPSA with 8 sessions each will produce 16 block devices. The system still operates normally, but you are not yet taking advantage of the parallel sessions.
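The duplicated devices are easy to see before multipath is configured; lsblk shows one sdX per session per LUN, and the by-path symlinks encode the target IQN and LUN behind each one.
# lsblk
# ls -l /dev/disk/by-path/ | grep iscsi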
iSCSI Multipath Setup
dm-multipath finds block devices that share the same disk identifier (WWID) and builds a single converged block device on top of them. This is the piece that permits IO through multiple paths over the multiple sessions.
1) Install dm-multipath tools
# apt-get install multipath-tools
# yum install device-mapper-multipath
2) Configure dm-multipath by creating and populating /etc/multipath.conf
defaults {
    checker_timeout 600
}
blacklist {
    device {
        vendor "QEMU"
        product "*"
    }
}
# The below section will handle Zadara volumes
devices {
    device {
        vendor "Zadara"
        product "VPSA"
        path_grouping_policy multibus
        path_checker tur
        # You can try different path selectors
        path_selector "round-robin 0"
        # path_selector "queue-length 0"
        # path_selector "service-time 0"
        failback manual
        rr_min_io 1
        no_path_retry 20
    }
}
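Before starting or reloading the daemon, it can be worth confirming the file parses as expected; multipath -t prints the effective configuration (built-in defaults merged with /etc/multipath.conf).
# multipath -t | less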
3) Enable and Start/Reload multipathd
## Systemd environments
# systemctl enable multipathd # Enable on boot
# systemctl start multipathd # Start immediately
# systemctl reload multipathd # Reload the config file and activate changes if any
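On older, non-systemd distributions the equivalent is typically the SysV tooling; the exact commands are distribution-dependent (e.g. update-rc.d on Debian), RedHat-style shown here.
## SysV init environments
# chkconfig multipathd on # Enable on boot
# service multipathd start # Start immediately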
4) Verify dm-multipath has identified and merged the iSCSI sessions
# multipath -ll
23462306361363362 dm-0 Zadara ,VPSA
size=100T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
|- 2:0:0:0 sdb 8:16 active ready running
|- 6:0:0:0 sda 8:0 active ready running
|- 3:0:0:0 sde 8:64 active ready running
|- 4:0:0:0 sdd 8:48 active ready running
|- 5:0:0:0 sdc 8:32 active ready running
|- 8:0:0:0 sdf 8:80 active ready running
|- 9:0:0:0 sdh 8:112 active ready running
`- 7:0:0:0 sdg 8:96 active ready running
5) Update mounts and /etc/fstab from the original /dev/sdX[0-9] paths to the corresponding /dev/mapper/XXXXXXp[0-9] device
# umount /dev/sdh1
# mount /dev/mapper/23462306361363362p1 /mnt
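If the multipath device does not have a partition yet, the p1 suffix will not exist until one is created; a minimal sketch, assuming the WWID shown above and that parted and kpartx are installed (partition device naming can vary slightly between versions):
# parted -s /dev/mapper/23462306361363362 mklabel gpt mkpart primary ext4 0% 100%
# kpartx -a /dev/mapper/23462306361363362    ## refresh partition mappings
# mkfs.ext4 /dev/mapper/23462306361363362p1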
6) For systemd environments, the boot process is a bit trickier because certain services must start in a specific order. Adding the iSCSI or mapper device to /etc/fstab before confirming that all services start in the proper order can leave the host stuck at boot, which is a serious problem if you do not have KVM/console access to it.
This can be eased by adding "nofail" to the /etc/fstab entry until you are certain your systemd start order is configured appropriately.
/dev/mapper/23462306361363362p1 /mnt ext4 defaults,nofail 0 0
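On many distributions it also helps to flag the entry with _netdev so the mount is deferred until network storage services are up; verify the exact behavior on your distribution before relying on it.
/dev/mapper/23462306361363362p1 /mnt ext4 defaults,nofail,_netdev 0 0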
7) Many guides indicate it is necessary to rebuild the initramfs; this ensures any additional kernel modules are present early enough in the start sequence. It may not apply to all environments, but some notes are here for reference.
## Debian/Ubuntu
# update-initramfs -u
## CentOS/RedHat
# dracut -f