As part of our series covering Windows 2003 High Availability Solutions, we most recently focused on storage techniques that can be incorporated into Server Clustering. So far, we discussed the two most common choices — direct-attached SCSI (whose popularity results from its long-standing, widespread commercial presence and low pricing) and Fibre Channel storage-area networks (FC SANs), which are frequently chosen for their superior performance and reliability.
Unfortunately, the cost associated with FC SAN deployments is prohibitive for most smaller or less-critical environments, whose requirements cannot be satisfied with parallel SCSI because of its performance and scalability limitations. The introduction of iSCSI resolves this dilemma by combining the benefits of both technologies while avoiding their biggest drawbacks. This article provides an overview of the general principles of iSCSI storage and describes its clustering characteristics on the Windows 2003 Server platform.
iSCSI is an acronym derived from the term Internet SCSI, which succinctly summarizes its basic premise. iSCSI uses IP packets to carry SCSI commands, status signals, and data between storage devices and hosts over standard networks. This approach offers a tremendous advantage by leveraging existing hardware and cabling (as well as expertise). Although iSCSI frequently uses Gigabit Ethernet, with enterprise-class switches and specialized network adapters (containing firmware that processes iSCSI-related traffic, offloading it from host CPUs), its overall cost is lower than that of equivalent Fibre Channel deployments. At the same time, however, features such as addressing and automatic device discovery, which are built into FC SAN infrastructure, must be incorporated into the iSCSI specification and implemented in its components.
iSCSI communication is carried over a TCP session between an iSCSI initiator (for which functionality is provided in Windows 2003 in the form of software or a mix of HBA firmware and Storport miniport driver) and an iSCSI target (such as a storage device), established following a logon sequence, during which session security and transport parameters are negotiated. These sessions can be made persistent so they are automatically restored after host reboots.
On the network level, both initiator and target are assigned unique IP addresses, which allow for node identification. The target is actually accessed by a combination of IP address and port number, which is referred to as a portal. In the iSCSI protocol, addressing is typically handled with the iSCSI Qualified Name (IQN) convention. Its format consists of the type identifier (i.e., “iqn.”), the registration date field (in year-month notation) followed by a period and the domain in which the name is registered (in reversed sequence), then a colon, and finally the host (or device) name, which can be either autogenerated (as is the case with the Microsoft implementation, where it is derived from the computer name), preassigned, or chosen arbitrarily, serving as a descriptor providing such information as device model, location, purpose, or LUN.
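The naming convention described above can be illustrated with a short sketch. This is not Microsoft's name generator — just an example of assembling and sanity-checking a name in the IQN format (type identifier, year-month date field, reversed domain, colon, chosen device name):

```python
import re
from datetime import date

def make_iqn(domain: str, registered: date, name: str) -> str:
    """Build an iSCSI Qualified Name: 'iqn.' + year-month of domain
    registration + the domain in reversed sequence + ':' + a device name."""
    reversed_domain = ".".join(reversed(domain.split(".")))
    return f"iqn.{registered:%Y-%m}.{reversed_domain}:{name}"

# A loose structural check (IQN fields are lowercase by convention).
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+:.+$")

def looks_like_iqn(candidate: str) -> bool:
    return IQN_RE.match(candidate) is not None
```

For example, `make_iqn("corelan.be", date(2001, 4, 1), "san1")` yields `iqn.2001-04.be.corelan:san1`, matching the target name visible in the Linux console output later in this document.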
Targets are located either statically — by configuring the software initiator with target portal parameters (and corresponding logon credentials) or by leveraging functionality built into HBAs on the host — or discovered automatically, using information stored on an Internet Storage Name Server (iSNS). This server offers a centralized database of iSCSI resources, where iSCSI storage devices are able to register parameters and status, which subsequently can be referenced by initiators. Access to individual records can be restricted based on discovery domains, serving a purpose similar to FC SAN zoning.
In a typical Microsoft iSCSI implementation, the initiator software (currently in version 2.02, downloadable from the Microsoft Web site and supported on Windows 2000 SP3 or later, Windows XP Professional SP1 or later, and Windows 2003 Server) running on a Windows host server (with a compatible NIC or an HBA that supports the Microsoft iSCSI driver interface) is used to mount storage volumes located on iSCSI targets and registered with an iSNS server (Microsoft's implementation of which — currently in version 3.0, supported on Windows 2000 Server SP4 and Windows 2003 Server — is also available as a free download).
Installation of the initiator includes the iSNS client and administrative features, in the form of the iSCSI Initiator applet in the Control Panel, Windows Management Instrumentation, and the iSCSI Command Line Interface (iSCSICLI). The software-based initiator lacks some of the functionality that might be available with hardware-based solutions (such as support for dynamic volumes or booting from iSCSI disks).
To provide a sufficient level of security and segregation, consider isolating iSCSI infrastructure to a dedicated storage network (or separating the shared environment with VLANs), as well as applying authentication and encryption methods. With Microsoft implementation, authentication (as well as segregation of storage) is handled with Challenge Handshake Authentication Protocol (CHAP), relying on a password shared between an initiator and a target, providing that the latter supports it. Communication can be encrypted directly on end devices, using built-in features of high-end iSCSI HBAs, third-party encryption methods, or Microsoft’s version of IPSec.
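CHAP, mentioned above, is a challenge-response scheme: the target never sends the shared secret over the wire; instead it issues a random challenge and the initiator replies with an MD5 digest computed over the message identifier, the secret, and the challenge (per RFC 1994). A minimal sketch of that computation — illustrative only, not the Microsoft initiator's internal code:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5 over the one-byte identifier,
    the shared secret, and the challenge, concatenated in that order."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Target side: issue a random challenge, then verify the initiator's reply
# by recomputing the digest with the locally stored secret.
challenge = os.urandom(16)
reply = chap_response(0x27, b"s3cretP@ss", challenge)
assert reply == chap_response(0x27, b"s3cretP@ss", challenge)
```

Because only the digest travels on the network, an eavesdropper cannot recover the password, though the secret itself must still be provisioned on both the initiator and the target, as described above.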
Although network teaming is not supported on iSCSI interfaces, it is possible to enable communication between an initiator and a target via redundant network paths, accommodating setups with multiple local NICs or HBAs and separate interconnects for each. This can be done by implementing multiple connections per session (MCS), which leverages a single iSCSI session, or with Microsoft Multipath I/O (MPIO), which creates multiple sessions. The distribution of I/O across connections (applied to all LUNs involved in the same session) or sessions (referencing individual LUNs), for MCS and MPIO respectively, depends on Load Balance Policies configured by assigning an Active or Passive type to each network path. This results in one of the following arrangements:
- Fail Over Only uses a single active path as the primary and treats all others as secondaries, which are attempted in round-robin fashion in case the primary fails. The first available one found becomes the primary.
- Round Robin distributes iSCSI communication evenly to all paths in round-robin fashion.
- Round Robin with Subset functions with one set of paths in Active mode while the others remain Passive. Traffic is distributed according to the round-robin algorithm across all active paths.
- Weighted Path selects a single active path by picking the lowest value of arbitrarily assigned weight parameter.
- Least Queue Depth, available only with MCS, sends traffic to the path with the fewest number of requests.
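The first two policies above can be sketched in a few lines. This is a behavioral model of the path-selection logic only (path names and the `pick` interface are invented for illustration), not the actual MPIO/MCS driver code:

```python
import itertools

class FailOverOnly:
    """One active primary path; when it fails, the secondaries are tried
    in order and the first available one becomes the new primary."""
    def __init__(self, primary, secondaries):
        self.primary = primary
        self.secondaries = list(secondaries)

    def pick(self, alive):
        if self.primary in alive:
            return self.primary
        for path in list(self.secondaries):
            if path in alive:
                self.secondaries.remove(path)
                self.secondaries.append(self.primary)  # demote the failed primary
                self.primary = path                    # first available becomes primary
                return path
        raise RuntimeError("no path available")

class RoundRobin:
    """Distribute requests evenly across every available path."""
    def __init__(self, paths):
        self.paths = list(paths)
        self._cycle = itertools.cycle(self.paths)

    def pick(self, alive):
        for _ in range(len(self.paths)):
            path = next(self._cycle)
            if path in alive:
                return path
        raise RuntimeError("no path available")
```

Note how Fail Over Only keeps all traffic on one path until a failure occurs, while Round Robin alternates on every request — the same distinction that drives the policy choice discussed below.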
The multipathing solution selected depends on a number of factors, such as support on the target side, required level of granularity of Load Balance Policy (individual LUN or session level), and hardware components (MCS is recommended in cases where a software-based initiator — without presence of specialized HBAs on the host side — is used). Regardless of your decision, take advantage of this functionality as part of your clustering deployment to increase the level of redundancy.
When incorporating iSCSI storage into your Windows 2003 Server cluster implementation (note that Microsoft does not support it on Windows 2000), also ensure that components on the host side fully comply with iSCSI device logo program specifications and basic clustering principles. Take into account information presented in earlier articles of this series as well as domain and network dependencies. Also bear in mind that besides SCSI RESERVE and RELEASE commands (which provide basic functionality), iSCSI targets must support SCSI PERSISTENT RESERVE and PERSISTENT RELEASE to allow for all of the Load Balance policies and persistent logons.
The latter requires that a persistent reservation key be configured on all cluster nodes. This is done by choosing an arbitrary 8-byte value, with the first 6 bytes unique to each cluster and the remaining 2 bytes varying between its nodes. The data is entered in the PersistentReservationKey REG_BINARY entry of the HKLM\System\CurrentControlSet\Services\MSiSCDSM\PersistentReservation registry key on each cluster member. In addition, the UsePersistentReservation entry of REG_DWORD type is set to 1 in the same registry location. You should also enable the Bind Volumes Initiator Setting (in the Properties dialog box of the iSCSI Initiator Control Panel applet), which ensures all iSCSI-hosted volumes are mounted before the Cluster Service attempts to bring them online.
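The key layout described above (6 cluster-unique bytes plus 2 node-specific bytes) can be sketched as follows. The cluster identifier and node numbering here are illustrative; on an actual node the resulting 8 bytes would be entered by hand (or via winreg) as the REG_BINARY value named above:

```python
import struct

def reservation_key(cluster_id: bytes, node_index: int) -> bytes:
    """Build an 8-byte PersistentReservationKey value: the first 6 bytes
    are shared by every node of one cluster, the last 2 bytes differ
    between its nodes."""
    if len(cluster_id) != 6:
        raise ValueError("cluster id must be exactly 6 bytes")
    return cluster_id + struct.pack(">H", node_index)

# Hypothetical two-node cluster: same 6-byte prefix, distinct suffixes.
node1_key = reservation_key(b"CLST01", 1)
node2_key = reservation_key(b"CLST01", 2)
```

Keeping the 6-byte prefix constant is what lets the storage target recognize that the two distinct keys belong to nodes of the same cluster.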
To avoid network congestion-related issues, consider setting up a dedicated Gigabit Ethernet network or implementing VLANs with non-blocking switches that support Quality of Service. Optimize bandwidth utilization by implementing jumbo frames and increasing the Maximum Transmission Unit value.
In a previous post, I’ve shown how to set up a Linux-based SAN solution, based upon Fedora Core 6 and the free iSCSI Enterprise Target tool. In this post, I’ll show how to connect to the SAN from Windows.
First, make sure you have the Microsoft iSCSI initiator software loaded onto your system. The components are already available in Longhorn and Vista; Windows 2003 Server and XP will require some packages to be installed (which can be found on the Microsoft website). In my example, I’m using Windows Longhorn beta3, but the procedure should be more or less the same on Vista, Windows 2003 Server, or XP.
Open the Microsoft iSCSI Initiator tool and go to the «discovery» tabsheet. Click «Add portal» and enter the IP address of the Linux box running the SAN emulator.
After clicking OK, you should see the target under the «Targets» tabsheet.
In the «targets» tabsheet, select the target and click «Log on». Make sure the «Automatically restore this connection when the computer starts» checkbox is enabled. This will ensure that the server will connect to the target at system boot time. If the server has a dedicated NIC to the SAN network, you may want to click the «Advanced» button and specify the correct interfaces for this connection.
(Note: this page would also be the place where you can specify CHAP authentication parameters and IPSec tunnel parameters. Be aware, though, that I have not tested those against the iSCSI Enterprise Target, and that using IPSec will put additional load on the network (iSCSI SAN) traffic.)
When closing the window, you should see that the status of the connection has now changed to «Connected». Now you are ready to create a volume and mount it as a disk under Windows.
Click «OK» to save and close the Microsoft iSCSI initiator configuration tool.
Go to the Disk Management MMC. You should be prompted about a new Logical Disk that was found on the system.
Choose «MBR» if you’re not sure and press OK. Now you can create a volume, format the disk, and assign a drive letter. Depending on the speed of your network, formatting might take a while. Once the drive is formatted, you can use it as a regular disk in the server.
On the Linux SAN, you should see that it is connected :
[user@machine ~]# cat /proc/net/iet/volume
tid:1 name:iqn.2001-04.be.corelan:san1
        lun:0 state:0 iotype:fileio iomode:wt path:/dev/hda3
[user@machine ~]# cat /proc/net/iet/session
tid:1 name:iqn.2001-04.be.corelan:san1
        sid:281475899523136 initiator:iqn.1991-05.com.microsoft:longhorn.corelan.be
                cid:1 ip:192.168.0.3 state:active hd:none dd:none
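The output above is a sequence of space-separated key:value tokens, one record per line, which makes it easy to consume from a script. A small sketch of a parser (assuming only this simple token format; note that only the first colon in a token separates key from value, so IQNs containing colons survive intact):

```python
def parse_iet(text: str) -> list:
    """Parse ietd's /proc/net/iet output into a list of dicts,
    one per line, splitting each token at its first colon."""
    records = []
    for line in text.strip().splitlines():
        fields = {}
        for token in line.split():
            key, _, value = token.partition(":")
            fields[key] = value
        records.append(fields)
    return records
```

Running it over the volume listing above would recover, for example, the target name `iqn.2001-04.be.corelan:san1` and the backing path `/dev/hda3` as ordinary dictionary values.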
© 2007 – 2021, Peter Van Eeckhoutte (corelanc0d3r). All rights reserved.
Launch the Microsoft iSCSI Initiator.
Switch to the Discovery tab.
Press the Add button; the Add Target Portal dialog shows.
Specify the server IP address and press the OK button to continue.
Switch to the Targets tab.
Select a target from the Targets list box and press the Log On… button. The Log On to Target dialog shows.
If you use CHAP, press the Advanced… button; otherwise press the OK button to continue.
Select CHAP logon information.
Type the user name and the target secret, then press the OK button to continue.
Then press the OK button in the Log On to Target dialog.
Now the client is connected to the target.
We have to initialize the disk before we can use it.
Click Start > Computer > Manage > Disk Management; the Initialize and Convert Disk Wizard shows.
Press the Next button to continue.
On the Select Disks to Initialize page, select Disk 1 and press the Next button to continue.
On the Select Disks to Convert to Dynamic Disk page, do not select the iSCSI disk; press the Next button to continue.
If all parameters are correct, press the Finish button to continue; otherwise, press the Back button to modify settings.
Partition the disk.
Right-click the disk and select the New Partition… menu item. The New Partition Wizard shows; press the Next button to continue.
For the partition type, choose Primary partition and press the Next button to continue.
Specify the partition size in MB and press the Next button to continue.
Assign a drive letter and press the Next button to continue.
Format the partition: select NTFS as the file system, select the allocation unit size, and specify the volume label. Choose Perform a quick format to save time, then press the Next button to continue.
If all parameters are correct, press the Finish button to format the disk; otherwise, press the Back button to modify settings.
Right-click the disk and select Mark Partition as Active to make the disk active. (Note: an inactive disk can’t be used as a boot device.)
In this document, we’ll see how to create an iSCSI Target with our Windows operating system, with no third-party apps. We can create or assign virtual disks to the iSCSI Target and later use them as shared storage, for example to set up a cluster… What we need is a compatible operating system, such as Windows Unified Data Storage Server. It’s an OEM OS, that is, one that comes pre-installed on the hardware when we purchase it from our manufacturer. If we have Windows Server 2003 or Windows Server 2008, we would have to upgrade to one of these editions; I will explain in another document how to run OEM operating systems on virtual machines, for example.
Let me briefly describe my network layout. I have a WUDSS server (Windows Unified Data Storage Server 2003) with two network cards, one for the LAN (192.168.2.0/24) and the other for the iSCSI network (192.168.4.0/24), keeping the traffic separate, since LAN traffic should not interfere with the data traveling between the computers that connect to the iSCSI Target. The iSCSI Target computer is called CervezaDuff, and the iSCSI initiators will be Patty and Selma. The document is split into two parts: the configuration on the iSCSI storage server (Configuring the iSCSI Target) and the configuration we will perform on the servers that will connect to that storage (Configuring the iSCSI Initiator):
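The two-subnet layout above is worth checking programmatically when scripting initiator configuration, since binding the initiator to an address on the wrong network is an easy mistake. A tiny sketch using the subnets from my layout (the addresses are this document's example values):

```python
import ipaddress

LAN   = ipaddress.ip_network("192.168.2.0/24")  # general LAN traffic
ISCSI = ipaddress.ip_network("192.168.4.0/24")  # dedicated storage network

def on_iscsi_network(addr: str) -> bool:
    """True when the given interface address belongs to the storage subnet."""
    return ipaddress.ip_address(addr) in ISCSI
```

An initiator's source IP should satisfy `on_iscsi_network(...)` before being used for the target portal connection configured later in this document.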
Configuring the iSCSI Target
We open the WUDSS administration console (Windows Unified Data Storage Server 2003), go to “Microsoft iSCSI Software Target” > “iSCSI Targets” and right-click to create a new one, “Create iSCSI Target”,
A short setup wizard appears, “Next”,
We give the iSCSI Target a name and a description, “Next”,
Now we need to indicate the iSCSI initiators, that is, the computers that will access this machine to use its shared storage. Click on “Browse…”,
And we select our iSCSI initiators from our iSCSI network. If none appear, it may be because we have not yet configured the iSCSI initiator on the nodes, so this step doesn’t need to be completed right away, but eventually we must indicate who will be allowed to connect. We could also add ourselves by indicating the local IQN, “OK”,
“Next”,
And “Finish” to create the iSCSI Target; we will continue with the rest of its configuration later.
The first thing that comes to mind is to create or present a disk to this iSCSI Target. It uses VHD-format virtual disks (Virtual Hard Disk), so if we don’t have one, we create it: on “Devices” > right-click > “Create Virtual Disk”,
We run the wizard to create a virtual disk, “Next”,
We select the location for our virtual hard disk and continue,
We must set the amount of space we want to give the virtual hard drive; in this example I will give it 200 MB. Keep in mind that the space we indicate is reserved immediately, and that this will be the disk we present to the iSCSI Target for the initiators to connect to.
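As an aside on the VHD format mentioned above: a fixed VHD is simply the raw disk data followed by a 512-byte footer. The sketch below builds such a footer from the published footer layout (big-endian fields, “conectix” cookie, timestamp counted from 2000-01-01, disk type 2 for fixed, and a one’s-complement checksum). The creator-application tag and geometry are placeholder values of mine, not what the Microsoft wizard writes:

```python
import struct, time, uuid

def vhd_fixed_footer(size_bytes: int) -> bytes:
    """Build the 512-byte footer that marks a raw file as a fixed VHD
    (a sketch of the published footer layout, not Microsoft's tool)."""
    geometry = struct.pack(">HBB", 0, 16, 63)  # simplified CHS geometry
    footer = struct.pack(
        ">8sIIQI4sI4sQQ4sI",
        b"conectix",                    # cookie
        0x00000002,                     # features (reserved bit always set)
        0x00010000,                     # file format version 1.0
        0xFFFFFFFFFFFFFFFF,             # data offset: none for a fixed disk
        int(time.time()) - 946684800,   # seconds since 2000-01-01 UTC
        b"sktc",                        # creator application (arbitrary tag)
        0x00010000,                     # creator version
        b"Wi2k",                        # creator host OS
        size_bytes,                     # original size
        size_bytes,                     # current size
        geometry,                       # disk geometry (cyl/heads/sectors)
        2,                              # disk type: 2 = fixed
    )
    footer += b"\x00\x00\x00\x00"       # checksum placeholder (offset 64)
    footer += uuid.uuid4().bytes        # unique id
    footer += b"\x00"                   # saved state
    footer += b"\x00" * 427             # reserved padding to 512 bytes
    checksum = (~sum(footer)) & 0xFFFFFFFF  # one's complement of byte sum
    return footer[:64] + struct.pack(">I", checksum) + footer[68:]
```

This explains why the wizard reserves the full space immediately: a fixed VHD of 200 MB is a 200 MB data region plus this single trailing sector.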
We enter a description that will help us remember what it will contain, “Next”,
We could skip this step if we had not yet created the iSCSI Target; in our case we already have it, so we add it, “Add…”
All the iSCSI Targets are listed; we select the one we want to present the disk to & “OK”,
“Next”,
and “Finish”; with that we have created the iSCSI Target and presented a disk to it.
One thing I recommend doing is mounting the virtual disk and formatting it the way we want, so that it shows up ready to use in the iSCSI initiator. To do this, on the virtual disk, right-click > “Disk Access” > “Mount ReadWrite”,
We get a message indicating that the virtual disk has been mounted on the computer, “OK”,
So, still on this server, we initialize and partition the disk: right-click on Computer > “Manage”,
We go to “Storage” > “Disk Management” and the wizard to initialize the disks or convert them to dynamic appears automatically, “Next”,
Select the disk to initialize & “Next”,
We don’t mark the disk, since we have no need to make it dynamic & “Next”,
“Finish”,
And once we have it, we must create a partition on it. It will be an MBR partition and not GPT since, as far as I remember, GPT is not supported for clusters… So right-click on the disk & “New Partition…”
We follow the partitioning wizard to the end… we format the partition as NTFS (not required, but recommended),
Okay, done; there we have our virtual disk ready.
Now we dismount it. Just like before, on the virtual disk from the WUDSS console > “Microsoft iSCSI Software Target” > “iSCSI Targets” > “Devices” > right-click on the VHD > “Disk Access” > “Dismount”.
It will ask us if we are sure we want to dismount it; we answer “Yes”. We make sure all the files on the disk are closed, for a clean dismount.
“OK”,
Now, in the properties of the iSCSI Target, we can see some settings that were not shown during the creation wizard,
We have several tabs; in principle they are the ones we went through in the creation wizard, plus the option to configure authentication on our iSCSI Target if we are interested (whether we want it or not depends on our network; it would be under “Authentication”). In this case I must add everyone I want to allow to connect to my iSCSI Target, so under “iSCSI Initiators” I add them (as I said, if they do not appear in this list, it’s because we haven’t set up the initiator on each of them; this is covered below). Click on “Add…”,
Select the IQN identifier type and click on “Browse…” to add all of our iSCSI initiators,
We select as many as we have & “OK”,
“OK”,
Good; in principle those computers will now be able to connect to my iSCSI Target. “OK”,
Configuring the iSCSI Initiator
We will do it on Windows Server 2008, which already has the initiator installed by default; if we have Windows Server 2003, we need to download the iSCSI initiator from the Microsoft download website (HTTP://www.microsoft.com/downloads).
This process must be repeated on as many servers as we want connected to the storage; in my case I will have to do it twice, first on ‘Patty’ and then on ‘Selma’. We go to “Control Panel” > “iSCSI Initiator”. By the way, if we have Windows Firewall enabled, we will have to allow traffic through port 3260/TCP.
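Before fighting with the initiator GUI, it can save time to verify that the firewall actually lets us reach the target portal on 3260/TCP. A quick reachability sketch (the function name and interface are mine, just a plain TCP connect, no iSCSI login):

```python
import socket

ISCSI_PORT = 3260  # default iSCSI target portal port

def portal_reachable(host: str, port: int = ISCSI_PORT, timeout: float = 2.0) -> bool:
    """Return True when a TCP connection to the target portal succeeds —
    a quick way to tell firewall problems from initiator misconfiguration."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False for the target’s iSCSI-network address, check the Windows Firewall rule (or the switch/VLAN path) before touching any initiator settings.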
On the “Discovery” tab, click on “Add Portal…” to add the iSCSI Target.
Enter the IP address of the iSCSI Target; it will be the address on the private network these computers share, the iSCSI network, which should be Gigabit to be supported. We leave the port at its default, 3260, and click on “Advanced…”,
Under “Local adapter” we select “Microsoft iSCSI Initiator”, and as the source IP the address our machine has on the iSCSI network (it must be in the same range). Since I didn’t set up authentication, we leave the rest as it is and click on “OK”,
“OK”,
And now we go to the “Targets” tab and, if everything is correct, we should see that the iSCSI Target is already listed; we connect to it with “Log On…”,
We tick the “Automatically restore this connection when the computer starts” checkbox and click on “Advanced…”
Same as before: under “Local adapter” we select “Microsoft iSCSI Initiator”, and as the source IP the address our machine has on the iSCSI network (it must be in the same range); since I didn’t set up authentication we leave that as it is. Under “Target Portal” we select our iSCSI Target, and click on “OK”,
“OK”,
And we can check that the status now says ‘Connected’, which means everything has gone well; we accept and verify that this is the case.
We go to Computer and right-click “Manage”,
Under “Storage” > “Disk Management” we refresh the screen or rescan the disks; we’ll see that we have a new hard disk. We just bring it online and it’s ready to work with.
Done!