
How to Set Up a Virtualization Lab (III)

Failover Cluster Networking



The first step in the setup of a failover cluster is the creation of an AD domain, because all the cluster nodes have to belong to the same domain. But before doing so, I changed the network settings again in order to adjust them for this purpose.

LAB-DC:
IP: 192.168.1.10
Gateway: 192.168.1.1 (Physical Router)
DNS: 127.0.0.1
Alternate DNS: 192.168.1.1

LAB-NODE1:
IP: 192.168.1.11
Gateway: 192.168.1.1
DNS: 192.168.1.10 (DC)
Alternate DNS: 192.168.1.1 (Physical Router)

LAB-NODE2:
IP: 192.168.1.12
Gateway: 192.168.1.1
DNS: 192.168.1.10
Alternate DNS: 192.168.1.1

LAB-NODE3:
IP: 192.168.1.13
Gateway: 192.168.1.1
DNS: 192.168.1.10
Alternate DNS: 192.168.1.1

LAB-STORAGE:
IP: 192.168.1.14
Gateway: 192.168.1.1
DNS: 192.168.1.10
Alternate DNS: 192.168.1.1
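
These settings can of course be applied through the GUI, but the equivalent configuration can also be scripted. Below is a minimal sketch for LAB-NODE1 using netsh from an elevated PowerShell prompt; the interface name "Local Area Connection" is the Windows Server 2008 default and may be different on your VMs:

  # Static IP, subnet mask and default gateway for LAB-NODE1 (adjust per machine)
  netsh interface ipv4 set address name="Local Area Connection" static 192.168.1.11 255.255.255.0 192.168.1.1
  # Primary DNS points to the DC, the physical router is the alternate
  netsh interface ipv4 set dnsservers name="Local Area Connection" static 192.168.1.10 primary
  netsh interface ipv4 add dnsservers name="Local Area Connection" 192.168.1.1 index=2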

I then created a domain comprising five machines: a DC and two member servers running as Hyper-V VMs, a member server running as a VMware Workstation VM, and another member server running as a VirtualBox VM.
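
I promoted LAB-DC through the dcpromo wizard, but the promotion can also be done unattended. The following is only a sketch of what such a command might look like on Windows Server 2008; the domain name lab.local and the NetBIOS name LAB are placeholders, not the names actually used in this lab:

  # Promote LAB-DC to the first domain controller of a new forest (placeholder names)
  dcpromo /unattend /ReplicaOrNewDomain:Domain /NewDomain:Forest /NewDomainDNSName:lab.local /DomainNetBiosName:LAB /InstallDNS:Yes /SafeModeAdminPassword:"DSRM-password-here" /RebootOnCompletion:Yes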

So far I have demonstrated that virtualized servers running on different platforms, using different virtualization techniques, can be integrated into the same logical infrastructure; in this case we have VMs running on a Type 1 hypervisor (Hyper-V) and on two distinct Type 2 hypervisors (VMware Workstation and VirtualBox).

The option to build the network with static IP addresses is as valid as the alternative of using DHCP. Later on I plan to explore the several options provided by cluster networking in Windows Server 2008, but for the time being I kept my network in a simple, basic configuration in order to proceed with the lab installation.


Windows Server 2008 Failover Cluster networking features

Windows Server 2008 Failover Clustering introduces new networking capabilities that are a major shift away from the way things have been done in legacy clusters. The new features include:

New cluster network driver architecture


The legacy cluster network driver (clusnet.sys) has been replaced with a new NDIS level driver called the Microsoft Failover Cluster Virtual Adapter (netft.sys). Whereas the legacy cluster network driver was listed as a Non-Plug and Play Driver, the new fault tolerant adapter actually appears as a network adapter when hidden devices are displayed in the Device Manager snap-in.

Cluster Driver

Ability to locate cluster nodes on different, routed networks in support of multi-site clusters


Beginning with Windows Server 2008 failover clustering, individual cluster nodes can be located on separate, routed networks.

Support for DHCP-assigned IP addresses


Beginning with Windows Server 2008 Failover Clustering, cluster IP address resources can obtain their addressing from DHCP servers as well as via static entries. If the cluster nodes themselves have at least one NIC that is configured to obtain an IP address from a DHCP server, then the default behavior will be to obtain an IP address automatically for all cluster IP address resources.

Improvements to the cluster health monitoring (heartbeat) mechanism


The cluster ‘heartbeat’, or health checking mechanism, has changed in Windows Server 2008. While still using port 3343, it is no longer a broadcast communication. It is now unicast in nature and uses a Request-Reply type process. This provides for higher security and more reliable packet accountability.

Support for IPv6


Since the Windows Server 2008 OS supports IPv6, the cluster service needs to support this functionality as well. This includes being able to support IPv6 and IPv4 IP Address resources, either alone or in combination, in a cluster. Intra-node cluster communications use IPv6 by default.

Failover cluster network design


Failover Clustering reliability and stability depend strongly on the underlying network design. Windows Server 2008 no longer has the hard requirement, as in Windows Server 2003 based clusters (MSCS), for a dedicated heartbeat network, but it has other particular needs.

Cluster intra-communication (heartbeat traffic) now goes over every cluster network by default, unless a network is disabled for cluster use, as is typically done for iSCSI. It is a well-known best practice to disable cluster communication on iSCSI networks; they should be dedicated to iSCSI traffic only.

The golden rule for a general failover cluster is to have at minimum two redundant network paths between the cluster nodes. But often you will want more than the recommended minimum, either because you want additional redundancy (and/or performance) in your network connectivity (e.g. NIC teaming) or because you will use features like Hyper-V (CSV, Live Migration) which bring their own network requirements.

Depending on the workloads used on top of Failover Clustering, the number of required physical NICs can grow fast. In a Hyper-V failover cluster using Live Migration and iSCSI for the VM guests, the recommended minimum is roughly four physical NICs, and of course more are required when NIC teaming technologies are used for redundancy and/or performance. Generally, the cluster service, through NETFT (the network fault-tolerant driver), will automatically discover each network based on its subnet and add it to the cluster as a cluster network.
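
Once the cluster exists, the networks NETFT discovered can be listed and, if needed, an iSCSI network can be excluded from cluster communication. A minimal sketch with the Windows Server 2008 R2 FailoverClusters PowerShell module; the network name "iSCSI" is an example, use whatever name your cluster shows:

  Import-Module FailoverClusters
  # List the networks the cluster service discovered, one per subnet
  Get-ClusterNetwork | Format-Table Name, Address, Role
  # Role 0 = no cluster communication, 1 = cluster only, 3 = cluster and client
  (Get-ClusterNetwork "iSCSI").Role = 0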

Obviously, even in a lab scenario, we can simulate the existence of several networks by creating multiple VLANs dedicated to the different kinds of traffic involved. So far my option has been to keep things simple, trying only to interconnect the VMs running on distinct platforms. Once I achieve this, I will add some additional complexity.

Failover Cluster Storage


The next step in the failover cluster setup process is the creation of the shared storage to be used by the clustered services and as witness in the quorum mechanism. For this purpose I used the Storage Server to provide the needed iSCSI storage.

Creating iSCSI targets with Starwind


I used Starwind on the Windows Storage Server VM following these steps:

Starwind01

After the installation, I added the local machine LAB-STORAGE as a host:

Starwind02

Then connected Starwind to the newly created host:
 Starwind03
And started the creation of the first iSCSI Target:

Starwind04

I named it and chose the following options:
 Starwind05

I chose to create an Image File as a virtual disk to export as an iSCSI Target:
 Starwind06

Now I selected the location for my virtual disk and its size. I had previously added a 20 GB .vhd file as the E: drive of the LAB-STORAGE VM, and that was the location where I placed all my iSCSI targets. The witness disk doesn't have to be big, so I created it with only 1 GB. However, the wrong choice in the following options will prevent these virtual disks from being used in a cluster.

Starwind07
Finally, the caching options:
 Starwind08
After repeating the same basic procedure twice more, I ended up with three virtual disks to be used as iSCSI targets in my cluster. The last two were created to be used as shared cluster storage, so I sized them at 5 GB each.

Starwind09

Installing the shared storage in the cluster nodes


Keep in mind that you have to open the firewalls for iSCSI traffic. Apart from that, I had to make sure that Starwind was on the firewall exception list.
 Firewall
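
iSCSI uses TCP port 3260, so an inbound rule for that port on the storage server (or an exception for the StarWind service executable) is what matters; outbound traffic from the nodes is normally allowed by default. A hedged example with netsh advfirewall:

  # On LAB-STORAGE: allow incoming iSCSI connections from the initiators
  netsh advfirewall firewall add rule name="iSCSI Target (TCP 3260)" dir=in action=allow protocol=TCP localport=3260
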
I selected iSCSI Initiator from Administrative Tools. Since this was the first time the iSCSI Initiator was used on the machine, Windows asked to start the iSCSI service.

ISCSI Service
 iSCSI01

Now on each cluster node, I connected to the iSCSI software Target on the storage server, using iSCSI Initiator.
 iSCSI02

Choosing the option to add the connection to the Favorite Targets list ensures that the node will try to reconnect every time it starts.

iSCSI03
iSCSI04
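
The same discovery and login can be done from the command line with the built-in iscsicli tool. A sketch, assuming the target portal is LAB-STORAGE at 192.168.1.14; the IQN below is a placeholder, use the names returned by ListTargets:

  # Register the StarWind box as a target portal and list the targets it exposes
  iscsicli QAddTargetPortal 192.168.1.14
  iscsicli ListTargets
  # Log in to one of the listed targets (replace with the real IQN)
  iscsicli QLoginTarget iqn.2008-08.com.starwindsoftware:lab-storage-witness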

After this, all those iSCSI disks will show up in each node's Disk Management console (diskmgmt.msc):

iSCSI05

While in there, we need to right-click each disk and bring it online, then initialize all the disks and assign them drive letters. This has to be done on all nodes that are going to be joined to the cluster, but the formatting has to be done only once.

iSCSI06
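
Bringing a disk online, clearing its read-only attribute and formatting it can also be scripted with diskpart (Windows Server 2008 R2 syntax). A minimal sketch for the 1 GB witness disk; the disk number and the drive letter are assumptions, check them first with "list disk":

  # Write the diskpart steps to a script file and run it (format only on one node)
  Set-Content witness.txt "select disk 1", "online disk", "attributes disk clear readonly", "create partition primary", "format fs=ntfs quick label=Witness", "assign letter=Q"
  diskpart /s witness.txt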

Now Starwind has two connected sessions to each disk, one from each node. I did not add the third node running in VMware because, although it can communicate with the Hyper-V VMs, it cannot communicate with the VirtualBox VM. The reason is that two simultaneous bridged connections are being used over the same physical connection, and the drivers don't get along very well. Maybe I will install a second NIC on the desktop and show that this is possible too, but for now I simply created a two-node cluster with the Hyper-V VMs.

iSCSI07

Cluster Setup


Now that all the nodes are joined to the domain and can see all the iSCSI shared disks, we need to install the Failover Clustering feature (from the Server Manager console) on each node; no reboot is required:

Clustering Feature
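
The feature can also be installed from the command line; a sketch for both OS versions (the ServerManager PowerShell module on 2008 R2, the older servermanagercmd tool on 2008 RTM):

  # Windows Server 2008 R2
  Import-Module ServerManager
  Add-WindowsFeature Failover-Clustering
  # Windows Server 2008 RTM alternative
  servermanagercmd -install Failover-Clustering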

After installing the feature, I used what is arguably the most awesome addition to failover clustering in Windows Server 2008 and Windows Server 2008 R2: the Cluster Validation Wizard. I just went to the Failover Cluster Manager console from Administrative Tools and launched the Validate a Configuration Wizard:

Cluster1
Then I selected the nodes to include:
Cluster2

All the tests were executed:

Cluster3
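
On Windows Server 2008 R2 the same validation can be started from PowerShell; a minimal sketch:

  Import-Module FailoverClusters
  # Run the full validation test suite against both nodes and produce a validation report
  Test-Cluster -Node LAB-NODE1,LAB-NODE2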

The network warnings were related to the fact that only one network is used, therefore lacking the desired redundancy:

Cluster4
I named the cluster and gave it an IP address:

Cluster5
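
On 2008 R2 the cluster creation itself can also be scripted. A sketch, where the cluster name LAB-CLUSTER and the address 192.168.1.20 are placeholders for the name and IP chosen in the wizard:

  # Create the cluster from the two validated nodes with a static administration IP
  New-Cluster -Name LAB-CLUSTER -Node LAB-NODE1,LAB-NODE2 -StaticAddress 192.168.1.20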

The cluster was created in the Node and Disk Majority mode, the most suitable for a two-node cluster:

Cluster6
The witness disk being the smallest:

Cluster7
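
The quorum configuration chosen by the wizard can be reviewed or changed later. A hedged example with the R2 cmdlets; the witness disk resource name "Cluster Disk 1" is an assumption, check the real name with Get-ClusterResource:

  # Show the current quorum model and witness resource
  Get-ClusterQuorum
  # Set (or reassert) Node and Disk Majority using the 1 GB witness disk
  Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"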

And then I added two services: MS Distributed Transaction Coordinator and SQL Server 2008 R2:

Cluster8

Now I had everything I needed to test the cluster's potential, see the failover mechanism in action, configure the failback options, change the quorum mode, etc.
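
A quick way to see the failover mechanism in action is to move a clustered service between the nodes. A sketch with the R2 cmdlets; the group name "LAB-DTC" is an assumption, use whatever name was given in the High Availability wizard:

  # List the clustered services and their current owner nodes
  Get-ClusterGroup
  # Manually fail the DTC group over to the second node and watch it come online there
  Move-ClusterGroup "LAB-DTC" -Node LAB-NODE2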

In the next posts I will cover some of the possible variations on this basic design.
