Notice: This website is an unofficial Microsoft Knowledge Base (hereinafter KB) archive and is intended to provide reliable access to deleted content from the Microsoft KB. All KB articles are owned by Microsoft Corporation. Read the full disclaimer for more details.

How to configure Windows clustering groups for hot spare support


Summary

This article describes how to configure Windows Clustering for hot spare support on Windows Server 2003 and Windows Server 2008.



More information

With the arrival of larger cluster sizes (four and eight nodes), "A+Hs" topologies become important: the cluster has a set of "A" nodes that are currently active and a set of "Hs" nodes that are currently passive, or in hot standby mode. Larger clusters make such active/passive configurations more economical, because the cost of one or more standby nodes is spread across a larger set of active nodes.

For example, with a two-node cluster, an active/passive configuration requires twice the hardware for the same capacity. With eight nodes running as seven active and one passive, the additional hardware increases the cost by only about fifteen percent.
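The cost comparison above is simple arithmetic: with N active nodes and one spare, the hardware overhead is 1/N. A minimal sketch (the function name is invented for illustration):

```python
def standby_overhead(active_nodes, spare_nodes=1):
    """Extra hardware required for the spares, as a fraction of active capacity."""
    return spare_nodes / active_nodes

# Two-node cluster (1 active + 1 passive): the spare doubles the hardware (100%).
two_node = standby_overhead(1)
# Eight-node cluster (7 active + 1 passive): roughly 14-15 percent extra.
eight_node = standby_overhead(7)
```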

By default, Windows Clustering makes no distinction between nodes during a failover, and it does not base the failover policy on load or on which programs are running (or where those programs are running). This behavior can make it very difficult to maintain a hot spare node that takes up the load when a failure occurs. The only way to influence the failover policy is by changing the possible node lists. Because this change is made outside the Cluster service, asynchronously to other cluster events (such as node failures), a program cannot guarantee that the spare node will be chosen in the event of a failover.

There are programs for which spare-aware failover is essential. For example, in Microsoft Exchange 2000 the back-end Exchange database can be partitioned and spread across a number of cluster nodes. Exchange can place such a load on a node that, in the event of a node failure, it is not recommended to fail over one partition to a node that is already hosting a different partition of the database. The purpose of this enhancement is to let the Cluster service modify its failover policy so that, if there is a node that does not currently host a partition, that passive node is considered before any active nodes that already host a partition. To ensure high availability of the service, if there are no spares, or if for some other reason (such as multiple failures) no spare node remains, the failover policy reverts to the default. In other words, the failover policy does not leave services offline just because no spare node is available.

The AntiAffinityClassNames Property

Windows Clustering groups have a new public property: AntiAffinityClassNames. This property can contain an arbitrary string of characters. When a group that has a non-empty AntiAffinityClassNames value fails over, the Failover Manager checks all other nodes. Any node that is in the possible owners list for the resource and is not hosting a group with the same AntiAffinityClassNames value is considered a preferred failover target. This value takes higher priority than the Preferred Owners list.
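The selection rule described above can be modeled roughly as follows. This is an illustrative sketch only; the function and data shapes are invented for the example and do not correspond to a real cluster API:

```python
def choose_failover_node(group, possible_owners, hosted_groups):
    """Model of anti-affinity failover-target selection.

    group: dict with an 'AntiAffinityClassNames' set of strings (may be empty).
    possible_owners: ordered list of candidate nodes for the group.
    hosted_groups: dict mapping node name -> list of groups hosted there.
    Returns the chosen node, or None if there are no candidates.
    """
    names = group.get("AntiAffinityClassNames", set())
    if names:
        # Prefer nodes that host no group sharing an anti-affinity class name.
        spares = [
            node for node in possible_owners
            if not any(names & g.get("AntiAffinityClassNames", set())
                       for g in hosted_groups.get(node, []))
        ]
        if spares:
            return spares[0]
    # No class names set, or no spare left (for example, after multiple
    # failures): revert to the default policy, modeled here as simply
    # taking the first possible owner.
    return possible_owners[0] if possible_owners else None
```

In this model, a group tagged "Exchange" fails over to a node hosting no "Exchange" group when one exists, and otherwise falls back to the default choice rather than staying offline.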

The following two scenarios demonstrate how this property can be used:
  • In an "A+Hs" cluster that is running a single program. For example, a cluster that is running Exchange. In this case, Exchange should set up each group that is supporting a partition with the AntiAffinityClassNames property set to some Exchange-specific value (the same value for each group), for example, "Exchange". In the event of a failure, the failover manager can attempt to keep the partitions apart by selecting nodes that are not hosting groups with the same AntiAffinityClassNames value of "Exchange."
  • In a server consolidation where there are multiple programs that should be kept apart, if possible. In these cases, the groups that are representing the various programs should be manually modified with the same value in the AntiAffinityClassNames property.
Group anti-affinity can only be configured by using the command-line tool, Cluster.exe. For the first scenario above, the proper syntax is:
cluster . group "Cluster Group" /prop AntiAffinityClassNames="Microsoft Exchange Virtual Server"
This command creates the following REG_MULTI_SZ registry value:
HKEY_LOCAL_MACHINE\Cluster\Groups\Guid\AntiAffinityClassNames
Note You can use the following example Cluster.exe command to clear the AntiAffinityClassNames value and revert to the default behavior:
cluster . group "Cluster Group" /prop AntiAffinityClassNames=""
For additional information, click the following article number to view the article in the Microsoft Knowledge Base:
299631 Failover behavior on clusters of three or more nodes
Or, search for "Server Cluster" in the Windows Help file.



Keywords: KB296799, kbinfo, kbenv


Article Info
Article ID : 296799
Revision : 8
Created on : 6/10/2009
Published on : 6/10/2009
Exists online : False
Views : 800