Archive for the ‘HPC’ Category

Virtualization for HPC: The Heterogeneity Issue

January 15, 2010

I’ve been advocating for a while now that virtualization has much to offer HPC customers (see here). In this blog entry I’d like to focus on one specific use case: heterogeneity. It’s an interesting case because, depending on your viewpoint, heterogeneity is either desirable or something to be avoided, and virtualization can help in either case.

The diagram above depicts a typical HPC cluster installation, with each compute node running whichever distro was chosen as that site’s standard OS. Homogeneity like this eases the administrative burden, but it does so at the cost of flexibility for end-users. Consider, for example, a shared compute resource like a national supercomputing center or a centralized cluster serving multiple departments within a company or other organization. Homogeneity can be a real problem for end-users whose applications run only on other versions of the chosen cluster OS or, worse, on completely different operating systems. These users are generally not able to use these centralized facilities unless they can port their application to the appropriate OS or convince their application provider to do so.

The situation with respect to heterogeneity for software providers, or ISVs (independent software vendors), is quite different. These providers have been wrestling with expenses and other difficulties related to heterogeneity for years. For example, while ISVs typically develop their applications on a single platform (OS 0 above), they must often port and support their application on several operating systems in order to address the needs of their customer base. Assuming the ISV decides correctly which operating systems should be supported to maximize revenue, they must still incur considerable expense to continually qualify and re-qualify their application on each supported operating system version. They must also maintain a complex, multi-platform testing infrastructure and the in-house expertise to support these efforts.

Imagine instead a virtualized world, as shown above. In such a world, cluster nodes run hypervisors on which pre-built and pre-configured software environments (virtual machines) are run. These virtual machines include the end-user’s application and the operating system required to run that application. So far as I can see, everyone wins. Let’s look at each constituency in turn:

  • End-users — End-users have complete freedom to run any application using any operating system because all of that software is wrapped inside a virtual machine whose internal details are hidden. The VM could be supplied by an ISV, built by an open-source application’s community, or created by the end-user. Because the VM is a black box from the cluster’s perspective, the choice of application and operating system need no longer be restricted by cluster administrators.
  • Cluster admins — In a virtualized world, cluster administrators are in the business of launching and managing the lifecycle of virtual machines on cluster nodes and no longer need deal with the complexities of OS upgrades, configuring software stacks, handling end-user special software requests, etc. Of course, a site might still opt to provide a set of pre-configured “standard” VMs for end-users who do not have a need for the flexibility of providing their own VMs. (If this all sounds familiar — it should. Running a shared, virtualized HPC infrastructure would be very much like running a public cloud infrastructure like EC2. But that is a topic for another day.)
  • ISVs — ISVs can now significantly reduce the complexity and cost of their business. Since ISV applications would be delivered wrapped within a virtual machine that also includes an operating system and other required software, ISVs would be free to select a single OS environment for developing, testing, AND deploying their application. Rather than basing the operating system choice on market-share considerations, ISVs could make the decision based on the quality of the development environment, on the stability or performance levels achievable with a particular OS, or perhaps on the ability to partner closely with an OS vendor to jointly deliver a highly-optimized, robust, and completely supported experience for end-customers.


Sun Grid Engine: Still Firing on All Cylinders

January 14, 2010

The Sun Grid Engine team has just released the latest version of SGE, humbly called Sun Grid Engine 6.2 update 5. It’s a yawner of a name for a release that actually contains some substantial new features and improvements to Sun’s distributed resource management software, among them Hadoop integration, topology-aware scheduling at the node level (think NUMA), and improved cloud integration and power management capabilities.

You can get the bits directly here. Or you can visit Dan’s blog for more details first. And then get the bits.

Sun HPC Consortium Videos Now Available

December 21, 2009

Thanks to Rich Brueckner and Deirdré Straughan, videos and PDFs are now available from the Sun HPC Consortium meeting held just prior to Supercomputing ’09 in Portland, Oregon. Go here to see a variety of talks from Sun, Sun partners, and Sun customers on all things HPC. Highlights for me included Dr. Happy Sithole’s presentation on Africa’s largest HPC cluster (PDF|video), Marc Parizeau’s talk about CLUMEQ’s Colossus system and its unique datacenter design (PDF|video), and Tom Verbiscer’s talk describing Univa UD’s approach to HPC and virtualization, including some real application benchmark numbers illustrating the viability of the approach (PDF|video).

My talk, HPC Trends, Challenges, and Virtualization (PDF|video), is an evolution of a talk I gave earlier this year in Germany. The primary purposes of the talk were to illustrate the increasing number of common challenges faced by enterprise, cloud, and HPC users and to highlight some of the potential benefits of this convergence to the HPC community. Virtualization is specifically discussed as one such opportunity.

You Put Your HPC Cluster in a…WHAT??

November 19, 2009

Judging from a quick look at the survey results from this weekend’s Sun HPC Consortium meeting in Portland, Oregon, Marc Parizeau’s talk was a favorite with both customers and Sun employees.

Marc is Deputy Director of CLUMEQ and a professor at Université Laval in Québec City. His talk, Colossus: A cool HPC tower! [PDF, 10MB], describes with many photos how a 1960s era Van de Graaff generator facility was turned into an innovative, state-of-the-art supercomputing installation featuring Sun Constellation hardware. Very much worth a look.

A nicely-produced CLUMEQ / Constellation video that describes the creation of this computing facility is also available on YouTube.

Climate Modeling: How much computing is required to run a Century Experiment?

November 18, 2009

Henry Tufo from NCAR and CU-Boulder spoke this weekend at the Sun HPC Consortium meeting here in Portland, OR. As part of his talk, More than a Big Machine: Why We Need Multiple Breakthroughs to Tackle Cloud Resolving Climate [PDF], he estimated the number of floating-point operations (FLOPs) needed to compute a climate model over a one-century time scale with a 1 km atmosphere model.

His answer was the highlight of the Consortium for me: A Century Experiment requires about a mole of FLOPs. 🙂
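
For a rough sense of scale, here is a back-of-the-envelope calculation (my own arithmetic, not Henry’s): a mole is about 6.022 × 10^23, so burning through a mole of FLOPs in a single year of wall-clock time would require a sustained rate of roughly 19 PFLOPS.

```python
# Back-of-the-envelope check (my own numbers, not from the talk):
# what sustained rate is needed to finish ~1 mole of FLOPs?
MOLE_OF_FLOPS = 6.022e23
SECONDS_PER_YEAR = 3.156e7

for years in (1, 10):
    rate = MOLE_OF_FLOPS / (years * SECONDS_PER_YEAR)
    print(f"{years:>2} year(s) of wall-clock time -> {rate:.1e} FLOP/s sustained")

# Output:
#  1 year(s) of wall-clock time -> 1.9e+16 FLOP/s sustained
# 10 year(s) of wall-clock time -> 1.9e+15 FLOP/s sustained
```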


Workshop on High Performance Virtualization

October 19, 2009

As discussed in several earlier posts (here and here), virtualization technology is destined to play an important role in HPC. If you have an opinion about that or are interested in learning more, consider attending or submitting a technical paper or position paper to the Workshop on High Performance Virtualization: Architecture, Systems, Software, and Analysis, which is being held in conjunction with HPCA-16, the 16th International Symposium on High Performance Computer Architecture, and also in conjunction with PPoPP 2010, the 15th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. Apparently, Bangalore is the place to be in early January 2010!

Time is short: Workshop submissions are due November 10, 2009.

Fortress: Parallel by Default

October 16, 2009

I gave a short talk about Fortress, a new parallel language, at the Sun HPC Workshop in Regensburg, Germany and thought I’d post the slides here with commentary. Since I’m certainly not a Fortress expert by any stretch of the imagination, my intent was to give the audience a feel for the language and its origins rather than attempt a deep dive in any particular area. Christine Flood from the SunLabs Programming Languages Research Group helped me with the slides. I also stole liberally from presentations and other materials created by other Fortress team members.

This slide shows the unofficial Fortress tag line, inspired by Fortress’s emphasis on programmer productivity. With Fortress, programmer/scientists express their algorithms in a mathematical notation that is much closer to their domain of expertise than the syntax of the typical programming language. We’ll see numerous examples in the following slides.

At the highest level, there are two things to know about Fortress: first, that it started as a SunLabs research project, and, second, that the work is being done in the open as the Project Fortress Community, whose website is here. Source code downloads, documentation, code samples, etc., are all available on the site.

Fortress was conceived as part of Sun’s involvement in a DARPA program called High Productivity Computing Systems (HPCS), which was designed to encourage the development of hardware and software approaches that would significantly increase the productivity of the application developers and users of High Performance Computing systems. Each of the three companies selected to continue past the introductory phase of the program proposed a language designed to meet these requirements. IBM chose essentially to extend Java for HPC, while both Cray and Sun proposed new object-oriented languages. Michèle Weiland at the University of Edinburgh has written a short technical report comparing the three language approaches; it is available in PDF format here.

I’ve mentioned productivity, but not defined it. I recommend visiting Michael Van De Vanter’s publications page for more insight. Michael was a member of the Sun HPCS team who, with several colleagues, focused on the issue of productivity in an HPCS context. His considerable publication list is here.

Because I don’t believe Sun’s HPCS proposal has ever been made public, I won’t comment further on the specific scalability goals set for Fortress other than to say they were chosen to complement the proposed hardware approach. Because Sun was not selected to proceed to the final phase of the HPCS program, we have not built the proposed system. We have, however, continued the Fortress project and several other initiatives that we believe are of continuing value.

Growability was a philosophical decision made by Fortress designers and we’ll talk about that later. For now, note that Fortress is implemented as a small core with an extensive and growing set of capabilities provided by libraries.

As mentioned earlier, Fortress is designed to accommodate the programmer/scientist by allowing algorithms to be expressed directly in familiar mathematical notation. It is also important to note that Fortress constructs are parallel by default, unlike many other languages, which require an explicit declaration to create parallelism. Actually, to be more precise, Fortress is “potentially parallel” by default: if parallelism can be found, it will be exploited.

Finally, some code. We will look at several versions of a factorial function over the next several slides to illustrate some features of Fortress. (For additional illustrative Fibonacci examples, go here.) The first version of the function is shown here beneath a concise, mathematical definition of factorial for reference.

The red underlines highlight two Fortressisms. First, the condition in the first conditional is written naturally as a single range rather than as the more conventional (and convoluted) two-clause condition. And, second, the actual recursion shows that juxtaposition can be used to imply multiplication as is common when writing mathematical statements.

This version defines a new operator, the “!” factorial operator, and then uses that operator in the recursive step. The code has also been run through the Fortress pretty printer that converts it from ASCII form to a more mathematically formatted representation. As you can see, the core logic of the code now closely mimics the mathematical definition of factorial.

This non-recursive version of the operator definition uses a loop to compute the factorial.

Since Fortress is parallel by default, all iterations of this loop could theoretically be executed in parallel, depending on the underlying platform. The “atomic” keyword guarantees that the update of the variable result happens atomically, so the loop still computes the correct answer even when iterations run concurrently.
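
To make the idea concrete, here is a rough Python analogy (my own sketch, not Fortress code): the loop iterations may run concurrently, and the shared accumulator is updated under a lock, which plays the role the atomic block plays in the Fortress version.

```python
# Rough Python analogy of a "parallel loop with an atomic update" (not Fortress).
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

def factorial(n):
    result = 1
    lock = Lock()

    def step(i):
        nonlocal result
        with lock:              # stands in for Fortress's atomic block
            result *= i

    with ThreadPoolExecutor() as pool:
        pool.map(step, range(1, n + 1))   # iterations may run concurrently
    return result

print(factorial(10))   # 3628800
```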

This slide shows an example of how Fortress code is written with a standard keyboard and what the code looks like after it is formatted with Fortify, the Fortress pretty printer. Several common mathematical operators are shown at the bottom of the slide along with their ASCII equivalents.

A few examples of Fortress operator precedence. Perhaps the most interesting point is the fact that white space matters to the Fortress parser. Since the spacing in the 2nd negative example implies a precedence different than the actual precedence, this statement would be rejected by Fortress on the theory that its execution would not compute the result intended by the programmer.

Don’t go overboard with juxtaposition as a multiplication operator — there is clearly still a role for parentheses in Fortress, especially when dealing with complex expressions. While these two statements are supposed to be equivalent, I should point out that the first statement actually has a typo and will be rejected by Fortress. Can you spot the error? It’s the “3n” that’s the problem because it isn’t a valid Fortress number, illustrating one case in which a juxtaposition in everyday math isn’t accepted by the language. Put a space between the “3” and the “n” to fix the problem.

Here is a larger example of Fortress code. On the left is the algorithmic description of the conjugate gradient (CG) component of the NAS parallel benchmarks, taken directly from the original 1994 technical report. On the right is the Fortress code. Or do I have that backwards? 🙂

More Fortress code is available on the Fortress by Example page at the community site.

Several ways to express ranges in Fortress.

The first static array definition creates a one-dimensional array of 1000 32-bit integers. The second definition creates a one-dimensional array of length size, initialized to zero. I know it looks like a 2D array, but the 2nd instance of ZZ32 in the Array construct refers to the type of the index rather than specifying a 2nd array dimension.

The last array subexpression is interesting, since it is only partially specified. It extracts a 20×10 subarray from array b starting at its origin.
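
For readers who think in more conventional tools, here is a rough NumPy analogy (my own sketch, not Fortress syntax) of the array ideas above:

```python
# Rough NumPy analogy (not Fortress syntax) of the arrays discussed above.
import numpy as np

a = np.zeros(1000, dtype=np.int32)    # 1-D array of 1000 32-bit integers
size = 500
c = np.zeros(size, dtype=np.int32)    # 1-D array of length `size`, zero-initialized

b = np.arange(200 * 100).reshape(200, 100)
sub = b[:20, :10]                     # 20x10 subarray taken from b's origin
print(sub.shape)                      # (20, 10)
```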

Tuple components can be evaluated in parallel, including arguments to functions. As with for loops, do clauses execute in parallel.

In Fortress, generators control how loops are run and generators generally run computations in any order, often in parallel. As an example, the sum reduction over X and Y is controlled by a generator that will cause the summation of the products to occur in parallel or at the least in a non-deterministic order if running on a single-processor machine.
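
As a loose analogy (again in Python, not Fortress), think of the generator as the thing that decides how the index space is traversed and in what order the partial results are combined; the reduction itself does not care:

```python
# Toy sketch: the "generator" decides how the index space is traversed;
# the sum-of-products reduction is order-independent. (Analogy only.)
from concurrent.futures import ThreadPoolExecutor

X = [1.0, 2.0, 3.0, 4.0]
Y = [5.0, 6.0, 7.0, 8.0]

def serial_indices(n):
    yield from range(n)                          # deterministic, in-order traversal

def parallel_dot(xs, ys, workers=4):
    def partial(chunk):
        return sum(xs[i] * ys[i] for i in chunk)
    chunks = [range(w, len(xs), workers) for w in range(workers)]
    with ThreadPoolExecutor(workers) as pool:
        return sum(pool.map(partial, chunks))    # chunks complete in arbitrary order

print(sum(X[i] * Y[i] for i in serial_indices(len(X))))   # 70.0
print(parallel_dot(X, Y))                                  # 70.0
```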

In Fortress, when parallelism is generated, the execution of that work is handled using a work-stealing strategy similar to that used by Cilk. Essentially, when a compute resource finishes executing its own tasks, it pulls work items from other processors’ work queues, ensuring that compute resources stay busy by load balancing the available work across all processors.
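
Here is a toy sketch of the work-stealing idea (illustrative only; the Fortress runtime and Cilk use far more sophisticated schedulers, and all the names below are mine):

```python
# Toy work-stealing sketch: each worker prefers its own deque and steals
# from others when idle. (Illustration only, not the Fortress runtime.)
import collections
import random
import threading

WORKERS = 4
queues = [collections.deque() for _ in range(WORKERS)]   # one work queue per worker
results = []
results_lock = threading.Lock()

for task in range(20):
    queues[task % WORKERS].append(task)                   # initial distribution of work

def worker(my_id):
    while True:
        try:
            task = queues[my_id].pop()                    # take from my own queue first
        except IndexError:
            victims = [q for q in queues if q]
            if not victims:
                return                                    # no work left anywhere
            try:
                task = random.choice(victims).popleft()   # steal from the other end
            except IndexError:
                continue                                  # someone beat us to it; retry
        with results_lock:
            results.append(task * task)                   # stand-in for real work

threads = [threading.Thread(target=worker, args=(i,)) for i in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results) == [t * t for t in range(20)])      # True
```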

Essentially, a restatement of an earlier point: In Fortress, generators play the role that iterators play in other languages. By relegating the details of how the index space is processed to the generator, it is natural to then also allow the generator to control how the enclosed processing steps are executed. A generator might execute computations serially or in parallel.

A generator could conceivably also control whether computations are done locally on a single system or distributed across a cluster, though the Fortress interpreter currently only executes within a single node. To me, the generator concept is one of the nicer aspects of Fortress.

Guy Steele, who is Fortress Principal Investigator along with Eric Allen, has been working in the programming languages area long enough to know the wisdom of these statements. Watch him live the reality of growing a language in his keynote at the 1998 ACM OOPSLA conference. Be amazed at the cleverness, but listen to the message as well.

The latest version of the Fortress interpreter (source and binary) is available here. If you would like to browse the source code online, do so here.

Some informational pointers. Christine also tells me that the team is working on an overview talk like this one, except I expect it will be a lot better. 🙂 Though I’ve only scratched the surface, I hope this brief overview has given you at least the flavor of what Project Fortress is about.

HPC Virtual Conference: No Travel Budget? No Problem!

September 4, 2009

Sun is holding a virtual HPC conference on September 17th featuring Andy Bechtolsheim as keynote speaker. Andy will be talking about the challenges around creating Exaflop systems by 2020, after which he will participate in a chat session with attendees. In fact, each of the conference speakers (see agenda) will chat with attendees after their presentations.

There will also be two sets of exhibits to “visit” to find information on HPC solutions for specific industries or to get information on specific HPC technologies. Industries covered include MCAE, EDA, Government/Education/Research, Life Sciences, and Digital Media. There will be technology exhibits on storage software and hardware, integrated software stack for HPC, compute and networking hardware, and HPC services.

This is a free event. Register here.

Parallel Computing: Berkeley’s Summer Bootcamp

August 27, 2009

Two weeks ago the Parallel Computing Laboratory at the University of California Berkeley ran an excellent three-day summer bootcamp on parallel computing. I was one of about 200 people who attended remotely while another large pile of people elected to attend in person on the UCB campus. This was an excellent opportunity to listen to some very well known and talented people in the HPC community. Video and presentation material is available on the web and I would recommend it to anyone interested in parallel computing or HPC. See below for details.

The bootcamp, which was called the 2009 Par Lab Boot Camp – Short Course on Parallel Programming, covered a wide array of useful topics, including introductions to many of the current and emerging HPC parallel computing models (pthreads, OpenMP, MPI, UPC, CUDA, OpenCL, etc.), hands-on labs for in-person attendees, and some nice discussions on parallelism and how to find it, with an emphasis on the motifs (patterns) of parallelism identified in The Landscape of Parallel Computing Research: A View From Berkeley. There was also a presentation on performance analysis tools and several application-level talks. It was an excellent event.

The bootcamp agenda is shown below. Session videos and PDF decks are available here.

  • Introduction and Welcome: Dave Patterson (UCB)
  • Introduction to Parallel Architectures: John Kubiatowicz (UCB)
  • Shared Memory Programming with Pthreads, OpenMP and TBB: Katherine Yelick (UCB & LBNL), Tim Mattson (Intel), Michael Wrinn (Intel)
  • Sources of parallelism and locality in simulation: James Demmel (UCB)
  • Architecting Parallel Software Using Design Patterns: Kurt Keutzer (UCB)
  • Data-Parallel Programming on Manycore Graphics Processors: Bryan Catanzaro (UCB)
  • OpenCL: Tim Mattson (Intel)
  • Computational Patterns of Parallel Programming: James Demmel (UCB)
  • Building Parallel Applications: Ras Bodik (UCB), Nelson Morgan (UCB)
  • Distributed Memory Programming in MPI and UPC: Katherine Yelick (UCB & LBNL)
  • Performance Analysis Tools: Karl Fuerlinger (UCB)
  • Cloud Computing: Matei Zaharia (UCB)

HPC Trends and Virtualization

July 15, 2009

Here are the slides (with associated commentary) that I used for a talk I gave recently in Hamburg at Sun’s HPC Consortium meeting just prior to the International Supercomputing Conference. The topic was HPC trends with a focus on virtualization and the continued convergence of HPC, Enterprise, and Cloud IT. A PDF version of the slides is available here.

Challenges faced by the HPC community arise from several sources. Some are created by the arrival and coming ubiquity of multi-core processors. Others stem from the continuing increases in problem sizes at the high end of HPC and the commensurate need for ever-larger compute and storage clusters. And still others derive from the broadening of HPC to a wider array of users, primarily commercial/industrial users for whom HPC techniques may offer a significant future competitive advantage.

Perhaps chief among these challenges is the increasingly difficult issue of cluster management complexity, which has been identified by IDC as a primary area of concern. This is especially true at the high end due to the sheer scale involved, but problems exist in the low end and midrange as well, since commercial/industrial customers are generally much less interested in, or tolerant of, approaches with a high degree of operational complexity; they expect more than is generally available currently.

Application resilience is also becoming an issue in HPC circles. At the high end there is a recognition that large distributed applications must be able to make meaningful forward progress while running on clusters whose components may be experiencing failures on a near-continuous basis due to the extremely large sizes of these systems. At the midrange and low end, individual nodes will include enough memory and CPU cores that their continued operation in the presence of failures becomes a significant issue.

Gone are the days of macho bragging about the MegaWatt rating of one’s HPC datacenter. Power and cooling must be minimized while still delivering good performance for a site’s HPC workload. How this will be accomplished is an area of significant interest to the HPC community–and their funding agencies.

While the ramifications of multi-core processors for HPC are a critical issue, as are questions related to future programming models and high-productivity computing, these topics are not dealt with in this talk due to time constraints.

Most HPC practitioners are comfortable with the idea that innovations in HPC eventually become useful to the wider IT community. Extreme adherents to this view may point out that the World Wide Web itself is a byproduct of work done within the HPC community. Absent that claim, there are still plenty of examples illustrating the point that HPC technologies and techniques do eventually find broader applicability.

This system was recently announced by Oracle. Under the hood, its architecture should be familiar to any HPC practitioner: it is a DDR InfiniBand, x86-based cluster in a box, expandable in a scalable way to eight cabinets with both compute and storage included.

The value of a high-bandwidth, low-latency interconnect like InfiniBand is a good example of the leverage of HPC technologies in the Enterprise. We’ve also seen significant InfiniBand interest from the Financial Services community for whom extremely high messaging rates are important in real-time trading and other applications.

It is also important to realize that “benefit” flows in both directions in these cases. While the Enterprise may benefit from HPC innovations, the HPC community also benefits any time Enterprise uptake occurs, since adoption by the much larger Enterprise market virtually assures that these technologies will continue to be developed and improved by vendors. Widespread adoption of InfiniBand outside of its core HPC constituency would be a very positive development for the HPC community.

A few months ago, several colleagues and I held a panel session in Second Life to introduce Sun employees to High Performance Computing as part of our “inside outreach” to the broader Sun community.

As I noted at the time, it was a strange experience to be talking about “HPC” while sitting inside what is essentially an HPC application — that is, Second Life itself. SL is a great example of how HPC techniques can be repurposed to deliver business value in areas far from what we would typically label High Performance Computing. In this case, SL executes about 30M concurrent server-side scripts at any given time and uses the Havok physics engine to simulate a virtual world that has been block-decomposed across about 15K processor cores. Storage requirements are about 100 TB in over one billion files. It sure smells like HPC to me.

Summary: HPC advances benefit the Enterprise in numerous ways. Certainly with interconnect technology, as we’ve discussed. With respect to horizontal scaling, HPC has been the poster child for massive horizontal scalability for over a decade, since clusters first made their appearance in HPC (but more about this later).

Parallelization for performance has not been broadly addressed beyond HPC…at least not yet. With the free ride offered by rocketing clock speed increases coming to an end, parallelization for performance is going to rapidly become everyone’s problem and not an HPC-specific issue. The question at hand is whether parallelization techniques developed for HPC can be repurposed for use by the much broader developer community. It is beyond the scope of this talk to discuss this in detail, but I believe the answer is that both the HPC community and the broader community have a shared interest in developing newer and easier to use parallelization techniques.

A word on storage. While Enterprise is the realm of Big Database, it is the HPC community that has been wrestling with both huge data storage requirements and equally huge data transfer requirements. Billions of files and PetaBytes of storage are not uncommon in HPC at this point with aggregate data transfer rates of hundreds of GigaBytes per second.

One can also look at HPC technologies that are of benefit to Cloud Computing. To do that, however, realize that before “Cloud Computing” there was “Grid Computing“, which came directly from work by the HPC community. The idea of allowing remote access to large, scalable compute and storage resources is very familiar to HPC practitioners since that is the model used worldwide to allow a myriad of individual researchers access to HPC resources in an economically feasible way. Handling horizontal scale and the mapping of workload to available resources are core HPC competencies that translate directly to Cloud Computing requirements.

Of course, Clouds are not the same as Grids. Clouds offer advanced APIs and other mechanisms for accessing remote resources. And Clouds generally depend on virtualization as a core technology. But more on that later.

As a community, we tend to think of HPC as being leading and bleeding edge. But is that always the case? Have there been advances in either Enterprise or Cloud that can be used to advantage HPC? There is no question in my mind that the answer to this question is a very strong Yes.

Let’s talk in more detail about how Enterprise and Cloud advances can help address application resilience, cluster management complexity, effective use of resources, and power efficiency for HPC. Specifically, I’d like to discuss how the virtualization technologies used in Enterprise and Cloud can be used to address these current and emerging HPC pain points.

I am going to use this diagram frequently on subsequent slides so it is important to define our terms. When I say “virtualization” I am referring to OS virtualization of the type done with, for example, Xen on x86 systems or LDOMs on SPARC systems. With such approaches, a thin layer of software or firmware (the hypervisor) works in conjunction with a control entity (called DOM0 or the Control Domain) to mediate access to a server’s physical hardware and to allow multiple operating system instances to run concurrently on that hardware. These operating system instances are usually called “guest OS” or virtual machine instances.

This particular diagram illustrates server consolidation, the common Enterprise use-case for virtualization. With server consolidation, workload previously run on physically separate machines is aggregated onto a single system, usually to achieve savings on either capital or operational expense or both. This virtualization is essentially transparent to an application running within a guest OS instance, which is part of the power of the approach since applications need not be modified to run in a virtualized environment. Note that while there are cases in which consolidating multiple guest OSes onto a single node as above would be useful in an HPC context, the more common HPC scenario involves running a single guest OS instance per node.

While server consolidation is important in a Cloud context to reduce operational costs for the Cloud provider, the encapsulation of pre-integrated and pre-tested software in a portable virtual machine is perhaps the most important aspect of virtualization for Cloud Computing. Cloud users can create an entirely customized software environment that supports their application, create a virtual machine file that includes this software, and then upload and run this software on a Cloud’s virtualized infrastructure. As we will see, this encapsulation can be used to advantage in certain HPC scenarios as well.

Before discussing specific HPC use-cases for virtualization, we must first address the issue of performance since any significant reduction in application performance would not be acceptable to HPC users, rendering virtualization uninteresting to the community.

Yes, I know. You can’t read the graphics because they are too small. That was actually intentional even for the slides that were projected at the conference. The graphs show comparisons of performance in virtualized and non-virtualized (native) environments for several aspects of large, linear algebra computations. The full paper, The Impact of Paravirtualized Memory Hierarchy on Linear Algebra Computational Kernels and Software by Youseff, Seymour, You, Dongarra, and Wolski, is available here.

The tiny graphs show that there was essentially no performance difference found between native and virtualized environments. The curves in the two lower graphs are basically identical, showing essentially the same performance levels for virtual and native. The top-left histogram shows the same performance across all virtual and native test cases. In the top-right graph each of the “fat” histogram bars represents a different test with separate virtual and native test results shown within each fat bar. The fat bars are flat because there was little or no difference between virtual and native performance results.

These results, while comforting, should not be surprising. HPC codes are generally compute intensive so one would expect such straight-line code to execute at full speed in a virtualized environment. These tests also confirm, however, that aspects of memory performance are essentially unaffected by virtualization as well. These results are a tribute primarily to the maturity of virtualization support included in current processor architectures.

Note, however, that the explorations described in this paper focused solely on the performance of computational kernels running within a single node. For virtualization to be useful for HPC, it must also offer good performance for distributed, parallel applications as well. And there, dear reader, is the problem.

Current virtualization approaches typically use what is called a split driver model to handle IO operations: a device driver within the guest OS instance communicates with the “bottom” half of the driver, which runs in DOM0. The DOM0 driver has direct control of the real hardware, which it accesses on behalf of any guest OS instances that make IO requests. While this correctly virtualizes the IO hardware, it does so with a significant performance penalty. That extra hop (in both directions) plays havoc with achievable IO bandwidths and latencies, which is clearly not appropriate for either high-performance MPI communications or high-speed access to storage.

Far preferable would be to allow each guest OS instance direct access to real hardware resources. This is precisely the purpose of PCI-IOV (IO Virtualization), a part of the PCI specification that defines how PCI devices should behave in a virtualized environment. Simply put, PCI-IOV allows a single physical device to masquerade as several separate physical devices, each with its own hardware resources. Each of these pseudo-physical devices can then be assigned directly to a guest OS instance for its own use, avoiding the proxied IO situation shown on the previous slide. Such an approach should greatly improve the current IO performance situation.

PCI-IOV requires hardware support from the IO device and such support is beginning to appear. It also requires software support at the OS and hypervisor level and that is beginning to appear as well.

While not based on PCI-IOV, the work done by Liu, Huang, Abali, and Panda and reported in their paper, High Performance VMM-Bypass I/O in Virtual Machines, gives an idea as to what is possible when the proxied approach is bypassed and the high-performance aspects of the underlying IO hardware are made available to the guest OS instance. The full paper is available here.

The graph on the top-left shows MPI latency as a function of message size for the virtual and native cases while the top-right graph shows throughput as a function of message size in both cases. You will note that the curves on each graph are essentially identical. These tests used MVAPICH in polling mode to achieve these results.

By contrast, the bottom-left graph reports virtual and native Netperf results shown as transactions per second across a range of message sizes. In this case, the virtualized results are not as good, especially at the smaller message sizes. This is because interrupt processing is still proxied through DOM0, and for small message sizes this has an appreciable effect on throughput, about 25% in the worst case.

The final graph compares the performance of NAS parallel benchmarks in the virtual and native cases and shows little performance impact in these cases for virtualization.

The work makes a plausible case for the feasibility of virtualization for HPC applications, including parallel distributed HPC applications. More exploration is needed, as is PCI-IOV capable InfiniBand hardware.

This and the next few slides outline several of the dozen or so use-cases for virtualization in HPC. The value of these use-cases to an HPC customer needs to be measured against any performance degradations introduced by virtualization to assess the utility of a virtualized HPC approach. I believe that several of these use-cases are compelling enough for virtualization to warrant serious attention from the HPC community as a future path for the entire community.

Heterogeneity. Many HPC sites support end users with a wide array of software requirements. While some may be able to use the default operating system version installed at a site, others may require a different version of the same OS due to application constraints. Yet others may require a completely different OS or the installation of other non-standard software on their compute nodes. With virtualization, the choice of operating system, application, and software stack is left to end users (or ISVs) who package their software into pre-built virtual machines for execution on an HPC hardware resource, much the way cloud computing offerings work today. In such an environment, site administrators manage the life cycle of virtual machines rather than configuring and providing a proliferation of different software stacks to meet their users’ disparate needs. Of course, a site may still elect to make standard software environments available for those users who do not require custom compute environments.

Effective distributed resource management is an important aspect of HPC site administration. Scheduling a mixture of jobs of various sizes and priorities onto shared compute resources is a complex process that can often lead to less than optimal use of available resources. Using Live Migration, a distributed resource management system can make dynamic provisioning decisions to shift running jobs onto different compute nodes to free resources for an arriving high-priority job, or to consolidate running workloads onto fewer resources for power management purposes.

For those not familiar, Live Migration allows a running guest OS instance to be shifted from one physical machine to another without shutting it down. More precisely, the OS instance continues to run as its memory pages are migrated and then at some point, it is actually stopped so the remaining pages can be migrated to the new machine, at which point its execution can be resumed. The technique is described in detail in this paper.
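
In outline, the pre-copy scheme described there works roughly as in the toy sketch below (my own illustration; real hypervisors track dirty pages with hardware assistance and move gigabytes of memory, not a handful of integers):

```python
# Toy simulation of iterative "pre-copy" live migration: copy pages while the
# VM runs, repeat for the pages it re-dirties, then pause briefly for the rest.
import random

class ToyVM:
    def __init__(self, pages=1000):
        self.dirty = set(range(pages))        # every page starts out needing a copy

    def keep_running(self):
        # While migration proceeds, the running VM re-dirties a shrinking set of pages.
        self.dirty = set(random.sample(range(1000), k=max(1, len(self.dirty) // 4)))

def live_migrate(vm, threshold=16, max_rounds=30):
    copied_while_running = 0
    for round_no in range(max_rounds):
        if len(vm.dirty) <= threshold:
            break                             # dirty set is small: switch to stop-and-copy
        copied_while_running += len(vm.dirty) # copy this round's pages while the VM runs
        vm.keep_running()                     # the VM dirties more pages in the meantime
    # Stop-and-copy: pause the VM, move the last small dirty set, resume on the target host.
    print(f"{round_no} pre-copy rounds, {copied_while_running} pages copied while running, "
          f"brief pause to copy the final {len(vm.dirty)} pages")

live_migrate(ToyVM())
```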

Live migration can be significantly accelerated by bringing the speed and efficiency of InfiniBand RDMA to bear on the movement of data between systems as described in High Performance Virtual Machine Migration with RDMA over Modern Interconnects by Huang, Gao, Liu, and Panda. See the full paper for details on yet another example of how the Enterprise can benefit from advances made by the HPC community.

The ability to checkpoint long-running jobs has long been an HPC requirement. Checkpoint-restart (CPR) can be used to protect jobs from underlying system failures, to deal with scheduled maintenance events, and to allow jobs to be temporarily preempted by higher-priority jobs and later restarted. While this is an important requirement, HPC vendors have generally been unable to deliver adequate CPR functionality, requiring application writers to code customized checkpointing capabilities within their applications. Using virtualization to save the state of virtual machines, both single-VM and multi-VM (e.g. MPI) jobs can be checkpointed more easily and more completely than was achievable with traditional, process-based CPR schemes.

As HPC cluster sizes continue to grow and with them the sizes of distributed parallel applications, it becomes increasingly important to protect application state from underlying hardware failures. Checkpointing is one method for offering this protection, but it is expensive in time and resources since the state of the entire application must be written to disk at each checkpoint interval.

Using Live Migration, it will be possible to dynamically relocate individual ranks of a running MPI application from failing nodes to other healthy nodes. In such a scenario, applications will pause briefly and then continue as affected MPI ranks are migrated and required MPI connections are re-established to the new nodes. Coupled with advanced fault management capabilities, this becomes a fast and incremental method of maintaining application forward progress in the presence of underlying system failures. Both multi-node and single-node applications can be protected with this mechanism.

In summary, virtualization holds much promise for HPC as a technology that can be used to mitigate a number of significant pain points for the HPC community. With appropriate hardware support, both compute and IO performance should be acceptable in a virtualized environment especially when judged against the benefits to be accrued from a virtualized approach.

Virtualization for HPC is mostly a research topic currently. While incremental steps are feasible now, full support for all high-impact use cases will require significant engineering work to achieve.

This talk focused on virtualization and the benefits it can bring to the HPC community. This is one example of a much larger trend towards convergence between HPC, Enterprise, and Cloud. As this convergence continues, we will see many additional opportunities to leverage advances in one sphere to the future benefit of all. I see it as fortuitous that this convergence is underway since it allows significant amounts of product development effort to be leveraged across multiple markets, an approach that is particularly compelling in a time of reduced resources and economic uncertainty.