Notice: This website is an unofficial Microsoft Knowledge Base (hereinafter KB) archive and is intended to provide a reliable access to deleted content from Microsoft KB. All KB articles are owned by Microsoft Corporation. Read full disclaimer for more details.

Description of SP2 for HPC Pack 2008 R2



Introduction

Microsoft HPC Pack 2008 R2 Service Pack 2 (SP2) improves the performance and stability of HPC Pack 2008 R2-based clusters, and adds the following functionality:

Microsoft Azure integration
  • Add Microsoft Azure Virtual Machine roles to the cluster. Just as Microsoft HPC Pack 2008 R2 Service Pack 1 (SP1) introduced the ability to add Azure Worker nodes to the cluster, SP2 introduces the ability to add Azure Virtual Machine nodes. Azure Virtual Machine nodes support a wider range of applications and runtimes than Azure Worker nodes do. For example, applications that require a long-running or complex installation, are large, have many dependencies, or require manual interaction during installation might not be suitable for worker nodes. With Azure Virtual Machine nodes, you can build a virtual hard disk (VHD) that includes an operating system and installed applications, save the VHD to the cloud, and then use the VHD to deploy Azure Virtual Machine nodes to the cluster.
  • Run Message Passing Interface (MPI) jobs on Azure nodes. SP2 includes support for running MPI jobs on Azure nodes. This lets you provision computing resources on demand for MPI jobs. The MPI features are installed on both worker and virtual machine Azure nodes. A job submission sketch appears after this list.
  • Run Excel workbook offloading jobs on Azure nodes. Just as SP1 introduced the ability to run user-defined function offloading jobs on Azure nodes, SP2 introduces the ability to run Excel workbook offloading jobs on Azure nodes. This enables you to provision computing resources on demand for Excel jobs. The HPC Services for Excel features for user-defined function and workbook offloading are included in Azure nodes that you deploy as virtual machine nodes. Workbook offloading is not supported on Azure nodes that you deploy as worker nodes.
  • Automatically run configuration scripts on new Azure nodes. In SP2, you can create a script that includes configuration commands that you want to run on new Azure node instances. For example, you can include commands to create firewall exceptions for applications, set environment variables, create shared folders, or run installers. You upload the script to Azure Storage, and then specify the name of the script in the Azure node template. The script runs automatically as part of the provisioning process, both when you deploy a set of Azure nodes and when a node is reprovisioned automatically by the Microsoft Azure system. If you want to configure a subset of the nodes in a deployment, you can create a custom node group to define the subset, and then use the %HPC_NODE_GROUPS% environment variable in your script to check for inclusion in the group before you run the command. A sketch of such a script appears after this list.
  • Connect to Azure nodes with Remote Desktop. In SP2, you can use Remote Desktop to help monitor and manage Azure nodes that are added to the HPC cluster. As with on-premises nodes, you can select one or more nodes in HPC Cluster Manager and then click Remote Desktop in the Actions pane to start a connection with the nodes. By default, this action is available with Azure Virtual Machine roles, and can be enabled for Azure Worker roles if remote access credentials are supplied in the node template.
  • Enable Azure Connect on Azure nodes. In SP2, you can enable Azure Connect with your Azure nodes. With Azure Connect, you can enable connectivity between Azure nodes and on-premises endpoints that have the Azure Connect agent installed. This can help provide access from Azure nodes to UNC file shares and license servers on-premises. For the Beta, a limited preview of this feature is enabled. You can use the Remote Desktop functionality to install the Azure Connect agent on your Azure nodes, and associate the Azure nodes with an on-premises group through the Azure portal in order to experiment with this feature.
  • New diagnostic tests for Azure nodes. SP2 includes three new diagnostic tests in the Microsoft Azure test suite. The Microsoft Azure Firewall Ports Test verifies connectivity between the head node and Microsoft Azure. You can run this test before you deploy Azure nodes to make sure that any existing firewall is configured to allow for deployment, scheduler, and broker communication between the head node and Microsoft Azure. The Microsoft Azure Services Connection Test verifies that the services that are running on the head node can connect to Microsoft Azure by using the subscription information and certificates that are specified in an Azure node template. The Template test parameter lets you specify which node template to test. The Microsoft Azure MPI Communication Test runs a simple MPI ping-pong test between pairs of Azure nodes to verify that MPI communication is working.
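
For example, the following HPC PowerShell sketch submits an MPI job that runs only on Azure nodes (the job submission sketch referenced in the MPI item above). The node group name (AzureNodes), the core counts, and the application (myapp.exe) are assumptions for illustration; adjust them to match your deployment.

    # Create a job that is restricted to nodes in the AzureNodes node group (assumed group name)
    $job = New-HpcJob -Name "MPI on Azure nodes" -NodeGroups "AzureNodes" -NumCores 16

    # Add an MPI task; mpiexec starts the (hypothetical) application across the allocated cores
    Add-HpcTask -Job $job -NumCores 16 -CommandLine "mpiexec myapp.exe"

    # Submit the job to the HPC Job Scheduler Service
    Submit-HpcJob -Job $job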
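
The configuration script described in the "Automatically run configuration scripts on new Azure nodes" item could contain commands like the following. This is a minimal PowerShell sketch: the firewall rule, shared folder, application path, and node group name (AppNodes) are placeholders, and the same commands can equally be written as a plain command script if that is what your node template expects.

    # Create a firewall exception for a (hypothetical) application
    netsh advfirewall firewall add rule name="MyApp" dir=in action=allow program="C:\approot\myapp.exe"

    # Create and share a scratch folder on the node
    New-Item -Path C:\Scratch -ItemType Directory -Force | Out-Null
    net share Scratch=C:\Scratch /grant:Everyone,FULL

    # Run an additional installer only on nodes that belong to the (hypothetical) AppNodes custom node group
    if ($env:HPC_NODE_GROUPS -match "AppNodes") {
        & C:\approot\setup.exe /quiet
    }
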
Job scheduling

The following features are new in job scheduling:
  • Guarantee availability of computing resources for different user groups. In SP2, you can configure the HPC Job Scheduler Service to allocate resources based on Resource Pools. Resource Pools help you define what proportion of the cluster cores must be guaranteed for specific user groups or job types. If a user group is not using all the guaranteed cores, those cores can be used by other groups. You must use job templates to associate a user group with a Resource Pool. Jobs that use the job template will collectively be guaranteed the proportion of cluster cores that is defined for the Resource Pool, and will be scheduled within the pool according to job priority, submit time, and scheduling mode (Queued or Balanced). Resource Pool scheduling works best on clusters with homogeneous resources. You can compare actual and guaranteed allocations for each resource pool with the Pool Usage report in Charts and Diagnostics. A configuration sketch appears after this list.
  • Enable or require users to log on by using soft card authentication when submitting jobs to the cluster. In SP2, you can enable soft card authentication on the cluster, which allows smart card users to run jobs. To set this up, you must work with your certification authority (CA) or public key infrastructure (PKI) administrator to select or create a certificate template that must be used when generating a soft card for the cluster. The certificate template must allow the private key to be exported, and can also have an associated access control list that defines who can use the certificate template. You can then specify the name of the template in the HpcSoftCardTemplate cluster property (set cluster properties by using cluscfg setparams or Set-HpcClusterProperty). When users want to access the cluster, they can generate a soft card credential that is based on this template by running hpccred createcert or New-HpcSoftcard. By default, the HpcSoftCard cluster property is set to Disabled. If you want users to always use soft card authentication, set the property to Required. If you want users to select between a password and a soft card to log on, set the property to Allowed. A scripted example appears after this list.
  • Submit jobs to the cluster from a web portal. In SP2, a cluster administrator can install the HPC Web Services Suite to set up a web portal that enables cluster users to submit and monitor jobs without installing the HPC Pack client utilities. A cluster administrator can create and customize Job Submission Pages in the portal. Additionally, administrators can provide default values for application-specific command lines and parameters. Application command information can be defined and saved as an Application Profile and can then be associated with one or more job submission pages. When you start the portal, it automatically includes one submission page that is based on the Default job template. 
  • Use an HTTP web service to submit jobs across platforms or across domains. SP2 provides access to the HPC Job Scheduler Service through an HTTP web service that is based on the representational state transfer (REST) model. With a suitable client, users can define, submit, modify, list, view, re-queue, and cancel jobs from other programming languages and operating systems. The full range of job description options is available through this service, including task dependencies. The service is included in the HPC Pack web features and can be installed by using HpcWebFeatures.msi. An example client is included in the SDK code samples for SP2, and a minimal client sketch appears after this list.
  • Specify different submission or activation filters for different kinds of jobs. In SP2, you can add multiple custom filters to the cluster and use job templates to define which filters should run for a particular kind of job. For example, you can make sure that an activation filter that checks for license availability only runs on jobs that require a license. Unlike the cluster-wide filters, which are executables that run in a separate process, this kind of job-specific filter must be defined as a DLL and runs in the same process as the HPC Job Scheduler Service. When a job is submitted or ready for activation, any job-specific filters will run before the cluster-wide filter.
  • Over-subscribe or under-subscribe cores or sockets on individual cluster nodes. In SP2, cluster administrators can fine-tune cluster performance by controlling how many HPC tasks should run on a particular node. Over-subscription lets you schedule more processes on a node than there are physical cores or sockets. Generally, if a node has eight cores, then eight processes could potentially run on that node. With over-subscription, you can set the subscribedCores node property to a larger number, for example 16, and the HPC Job Scheduler Service could potentially start 16 processes on that node. For example, this can be useful if part of the cluster workload consists of coordinator tasks that use very few compute cycles. Conversely, under-subscription lets you schedule fewer tasks on a node than there are physical cores or sockets. This can be useful if you only want to use a subset of cores or sockets on a particular node for cluster jobs. A configuration sketch appears after this list.
  • Give more resources to higher priority jobs by pre-empting lower priority jobs. SP2 includes a new job scheduler configuration option to enable a "Grow by pre-emption" policy. When this policy is enabled, the HPC Job Scheduler uses pre-emption to increase the allocated resources ("grow") of a higher priority job toward its maximum. By default, pre-emption only occurs to start a job with its minimum requested resources (“graceful pre-emption” option enabled), and the job increases toward its maximum resources as other jobs complete ("increase resources automatically (grow)" option enabled). Enabling the "Grow by pre-emption" policy helps ensure that high priority work can complete more quickly.
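
The Resource Pool feature described at the start of this list can be managed from HPC PowerShell, as in the following sketch. The pool name and weight are placeholders, and the step that associates the pool with a user group is performed through a job template (for example, in the job template editor), which is not shown here.

    # Create a resource pool; its weight determines the guaranteed share of the cluster cores
    # (the pool name and weight are hypothetical examples)
    New-HpcPool -Name "EngineeringPool" -Weight 40

    # List the resource pools that are defined on the cluster, including the built-in Default pool
    Get-HpcPool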
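
The soft card authentication settings described earlier in this list can be scripted as follows. The certificate template name is a placeholder; the cluster properties and cmdlets are the ones named in this article.

    # Specify the certificate template that soft cards for this cluster must be based on
    # ("HpcSoftCard1" is a placeholder template name)
    Set-HpcClusterProperty -HpcSoftCardTemplate "HpcSoftCard1"

    # Let users choose between password and soft card logon (use Required to enforce soft cards)
    Set-HpcClusterProperty -HpcSoftCard Allowed

    # A cluster user can then generate a soft card credential that is based on the cluster template
    New-HpcSoftcard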
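
As a rough illustration of the HTTP web service described earlier in this list, the following PowerShell sketch requests the list of jobs on the cluster. The URI format is an assumption about a typical installation of the HPC Pack web features; see the REST API documentation and the SDK code samples for the exact resource paths, versioning parameters, and the full job submission workflow.

    # Hypothetical head node name and resource path for the REST-style job scheduler service
    $headNode = "headnode.contoso.com"
    $uri = "https://$headNode/WindowsHPC/$headNode/Jobs"

    # Request the job list; the service responds with XML that describes the jobs
    Invoke-RestMethod -Uri $uri -UseDefaultCredentials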
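
The over-subscription and under-subscription settings described earlier in this list can be changed per node from HPC PowerShell. The node names below are placeholders; the parameter corresponds to the subscribedCores node property, and a comparable property exists for sockets.

    # Allow the HPC Job Scheduler Service to start up to 16 processes on a node that has 8 physical cores
    # (NODE01 and NODE02 are placeholder node names)
    Set-HpcNode -Name "NODE01" -SubscribedCores 16

    # Under-subscribe a node so that only 4 of its cores are used for HPC jobs
    Set-HpcNode -Name "NODE02" -SubscribedCores 4
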
Cluster management

The following features are new in cluster management:

  • Add workstation nodes that are in a separate domain. SP2 supports adding workstation nodes to the cluster that belong to a different domain than the head node. To join nodes from a different domain, you must specify the Fully Qualified Domain Name (FQDN) of the head node when you install HPC Pack on the workstations.
  • Automatically stop jobs on workstation nodes if the CPU becomes busy with non-HPC work. Administrators can configure workstation nodes to become available based on user activity detection. Workstations can automatically become available for jobs (come Online) if a specified time period has elapsed without keyboard or mouse input and if the CPU usage drops lower than a specified threshold. In SP1, HPC jobs are automatically stopped when keyboard or mouse input is detected. In SP2, HPC jobs are also stopped when the CPU usage for non-HPC work increases above the specified threshold. This helps ensure that if workstation users start or schedule work on their computer before leaving for the night, the HPC jobs will not interfere.
  • Validate environment configurations before you create a new cluster. SP2 provides a stand-alone tool, the Microsoft HPC Pack 2008 R2 Installation Preparation Wizard. This tool helps you check for operating system and environment configurations that can cause issues when you create a new cluster. You can run the wizard on the server that will act as the head node (before you install HPC Pack), or on another computer that is connected to the Enterprise network. In the tool, you answer questions about your intended configurations. The tool performs checks based on your answers, and then generates a report that lists results, installation warnings, best practices, and checklists. The preinstallation wizard is available on the HPC tool pack download page.
  • Export and import cluster configurations as part of a failure recovery plan. SP2 includes utilities that help export and import cluster configurations such as HPC user and administrator groups, node groups, node templates, job templates, job scheduler configuration settings, service-oriented architecture (SOA) service configuration files, and custom diagnostic tests. Export-HpcConfiguration and Import-HpcConfiguration are implemented as .ps1 scripts (located in the %CCP_HOME%bin folder). You can import the saved settings onto a new cluster that is running the same version of HPC Pack. To continue submitting jobs on the new cluster, users only have to change the name of the cluster in their applications or in the HPC client utilities. To export cluster configurations to a folder named C:\HpcConfig, run HPC PowerShell as an Administrator and type export-HpcConfiguration -path c:\hpcconfig. An export and import sketch appears after this list.
  • Start the HPC Job Scheduler Service in Restore Mode by using an HPC PowerShell cmdlet. SP2 includes a new cluster property named RestoreMode that you can set when you have to start the HPC Job Scheduler Service in restore mode. Previously, you could enable restore mode only by setting a registry key; now you can run HPC PowerShell as an Administrator and use Set-HpcClusterProperty -RestoreMode:$true. When restore operations are complete, the property is automatically set back to False. The HPC Job Scheduler Service restore mode helps bring the cluster to a consistent state when you are performing a full-system or database restore. For more information, see Steps to Perform Before and After Restoring the HPC Databases from a Backup.
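
For example, the following HPC PowerShell commands export the configuration of an existing head node and then import it on a new head node that is running the same version of HPC Pack. The folder path is an example; run the commands in an elevated HPC PowerShell window on each head node.

    # On the existing head node: save the cluster configuration to a folder
    Export-HpcConfiguration -Path C:\HpcConfig

    # Copy the folder to the new head node, and then import the saved settings there
    Import-HpcConfiguration -Path C:\HpcConfig
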
Runtime and development

The following features are new for runtime and development:
  • Common-data APIs for SOA workloads. SP2 includes new APIs that support staging and accessing common data that is required by all calculation requests within one or more sessions. You can create a new kind of client called a DataClient. The data client includes methods to upload data to the cluster (to the runtime user data shared folder) and to read and write data. If you want the data to be available to other cluster users, you can specify the list of users when you call DataClient.Create(). Optionally, you can associate the data with the session life cycle so that when the session ends, the data is automatically deleted from the share. Code samples are available in the SDK code sample download. Common data features are not supported on Azure nodes.
  • Runtime user data share created automatically to support SOA common data jobs. When you install SP2, the installation wizard includes a step to configure a shared folder for runtime user data. This share is used by the SOA common data runtime. For a production cluster, you can create a shared folder for the runtime data on a separate file server and then specify the path of that share in the SP2 installation wizard. If you are evaluating the common data features in a test cluster, or if you are setting up a small cluster, you can accept the default runtime data configuration during setup. The default configuration creates a hidden share on the head node to provide out-of-the-box functionality for the common data workloads.
  • In-process broker APIs available to help reduce communication overhead for SOA sessions. The SP2 APIs include an option to enable an in-process broker. The in-process broker runs in the client process, and thereby eliminates the need for a broker node, reduces session creation time, and reduces the number of hops for each message. For example, one usage pattern for the in-process broker is as follows: Instead of running the client application on a client computer, you submit the client application to the cluster as a single-task job. The client application creates a session on the cluster, and instead of passing messages through a broker node, the client sends requests and receives responses directly from the service hosts (compute nodes). Code samples are available in the SDK code sample download. The in-process broker supports interactive sessions only, and is not supported on Azure nodes.



Resolution

Update information

How to obtain this update

The update is available for download from the following Microsoft Download Center website:
Download the update package now.
For more information about how to download Microsoft support files, click the following article number to view the article in the Microsoft Knowledge Base:
119591 How to obtain Microsoft support files from online services
Microsoft scanned this file for viruses. Microsoft used the most current virus-detection software that was available on the date that the file was posted. The file is stored on security-enhanced servers that help prevent any unauthorized changes to the file.

Prerequisites

To apply this update, you must be running Windows HPC Server 2008 R2. Additionally, HPC Pack 2008 R2 SP1 must be installed.

For more information about HPC Pack 2008 R2 SP1, visit the following Microsoft website:

Installation instructions

To install this update, run it on the head node.

Note If you have a pair of high-availability head nodes, run this update on the active node, and then run this update on the passive node.

Restart requirement

You must restart the computer after you install this update.

Update replacement information

This update does not replace a previously released update.



Status

Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.



Keywords: kbqfe, kbfix, kbsurveynew, kbexpertiseinter, atdownload, kb


Article Info
Article ID : 2565784
Revision : 1
Created on : 1/7/2017
Published on : 6/21/2014