Ready, set, go

DOD supercomputer program looks for speed and power

"Typically, the kind of equations we're trying to solve require from dozens to thousands of differential calculations." ? Cray Henry, Defense Department


Twice a year, the world's fastest supercomputers come to a screeching halt so the systems can run a benchmark test called Linpack to determine how fast they are, at least in comparison to one another.

Linpack measures how many trillions of floating-point operations per second (teraflops) the machine is capable of executing. It is the benchmark used to rank the fastest supercomputers in the world in the Top 500 List.
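
As a rough illustration of what a Linpack-style figure means, the benchmark times the solution of a large dense linear system, which costs about (2/3)n³ floating-point operations, and reports operations per second. The sketch below is a hedged back-of-envelope calculation; the problem size and run time are illustrative assumptions, not real machine results.

```python
# Hedged sketch of how a Linpack-style teraflops score is derived.
# Solving a dense n x n linear system takes roughly (2/3) * n**3
# floating-point operations; dividing by elapsed time gives flop/s.

def linpack_teraflops(n, seconds):
    """Achieved teraflops for solving an n x n dense system in `seconds`."""
    flops = (2.0 / 3.0) * n ** 3   # dominant term of the LU factorization cost
    return flops / seconds / 1e12  # convert flop/s to teraflops

# A hypothetical run: a 1,000,000-unknown system solved in 2,100 seconds.
print(round(linpack_teraflops(1_000_000, 2_100), 1))  # about 317.5 teraflops
```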

As an exercise in flexing muscle, Linpack is about as useful as any other benchmark. But as a tool for judging supercomputing systems in a procurement process, it is limited at best. The Defense Department, through its High Performance Computing Modernization Program, or HPCMP, is shaking up the supercomputing world by applying a more disciplined approach to purchasing big iron.

Instead of using a generic benchmark to compare models, the program issues a set of metrics that carefully codifies its workload. For vendors, that means DOD program leaders want the best, yet most cost-effective, systems companies can provide to handle a defined workload.

"We don't specify how big the machine is," said Cray Henry, head of the program. "We will run a sample problem of a fixed size and call the result our target time. We then put a bid on the street and say, 'We want you to build a machine that will run this twice as fast.' " It is up to the vendor to figure out how to achieve those results.

Sounds simple, but in the field of supercomputers, this common-sense approach is rather radical.

"It's a well-oiled process," said Alison Ryan, vice president of business development at SGI. She said that for vendors, "this kind of procurement is actually difficult. It takes a lot of nontrivial work. It's easier to do a procurement based on Linpack." But in the end, the work is worthwhile for both DOD and the vendor because "it's actually getting the right equipment for your users."
"They've done a great job on the program in institutionalizing the [request for proposal] process," said Peter Ungaro, chief executive officer at supercomputer company Cray Inc.

DOD created HPCMP in 1994 as a way to pool resources for supercomputing power. Instead of having each service buy supercomputers for its big jobs, the services could collectively buy an array of machines that could handle a wider variety of tasks, including the big ones.

Today, the program has an annual budget of about $250 million, including $50 million for two new supercomputers. Eight shared-resource centers tackle about 600 projects each year submitted by 4,600 users from the military services, academia and industry.

As of December 2006, the program had control of machines that could do a total of 315.5 teraflops, and that number grows by one quarter each year as the oldest machines are replaced or augmented by newer technologies.

The program has also developed a painstakingly thorough process for specifying what kind of systems it needs.

Getting specific

What is so different about HPCMP? It defines its users' workload rather than a set of generic performance goals.

Henry said most workloads on the systems fall into one of about 10 categories, such as computational fluid dynamics, structural mechanics, chemistry and materials science, climate modeling and simulation, and electromagnetics.

Each job has a unique performance characteristic and can best be run on a unique combination of processors, memory, interconnects and software. "This is better because it gauges true workload," Ryan said.

To quantify those types of jobs, HPCMP crafted a program called the linear optimizer, which calculates the overall system performance for handling each job. It weights each according to how often it is executed and factors in the price of each system in addition to existing systems that can already execute those tasks.

Once numbers have been generated for each proposed system, the program takes usability into consideration. Henry said that is hard to quantify, but it includes factors such as third-party software available for the platform and what compilers, debuggers and other development tools are available.
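
A frequency-weighted scoring step like the one the linear optimizer performs can be sketched as follows. The workload categories, weights, performance figures and prices below are illustrative assumptions, not HPCMP's actual model or data.

```python
# Hedged sketch of a weighted scoring step resembling the "linear
# optimizer" described above: each workload category's measured
# performance is weighted by how often that job type runs, and the
# total is divided by system price to favor cost-effective bids.

def score(perf_by_job, weight_by_job, price_millions):
    """Frequency-weighted performance per dollar for one proposed system."""
    weighted = sum(perf_by_job[j] * weight_by_job[j] for j in perf_by_job)
    return weighted / price_millions

# Illustrative workload mix and two hypothetical vendor bids.
weights = {"fluid_dynamics": 0.4, "structural": 0.35, "climate": 0.25}
bid_a = {"fluid_dynamics": 90.0, "structural": 70.0, "climate": 60.0}
bid_b = {"fluid_dynamics": 60.0, "structural": 95.0, "climate": 80.0}

print(round(score(bid_a, weights, 25.0), 2))  # 3.02
print(round(score(bid_b, weights, 25.0), 2))  # 3.09 -> bid B wins this mix
```

The point of the weighting is that a machine that dominates on Linpack can still lose to one better matched to the jobs users actually run.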

These performance and usability numbers are then weighted against the past performance of the vendors. From there, the right system may be obvious, or it may come down to a narrow choice among a handful of systems.

"It's not often they need the same type of system year after year," Ungaro said.

Bottom line

DOD generally is well-represented on the twice-annual list of the world's fastest computers (it had 11 in the June Top 100 ranking), but the true beneficiaries are the researchers who use the machines. The biggest benefit? "Time to solution," Henry said.

DOD might need to know the performance characteristics of an airplane fuselage. Using a very accurate simulation rather than testing actual fuselages saves money and time.

"Typically, the kind of equations we're trying to solve require from dozens to thousands of differential calculations," Henry said. And each equation "can require a tremendous number of iterations."

Imagine executing a single problem a million or even tens of millions of times at once, with each execution involving thousands of calculations. That's the size of the job these systems usually handle.
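
The scale Henry describes can be made concrete with simple arithmetic. The instance count and per-instance figure below are illustrative assumptions chosen to match the "tens of millions of executions, thousands of calculations each" description, not measured workload data.

```python
# Back-of-envelope sketch of the job scale described above.
instances = 10_000_000      # concurrent executions of one problem (assumed)
calcs_per_instance = 5_000  # calculations per execution (assumed)

total = instances * calcs_per_instance
print(f"{total:.0e} calculations per pass")  # prints "5e+10 calculations per pass"
```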

DOD has many problems to test against. Programs model how toxic gas releases spread across an environment. They help develop better algorithms for tracking ground targets from moving radar platforms. They speed missile development. In one example, supercomputing shortened the development time of the Hellfire missile to just 13 months, allowing it to be deployed in Iraq two years earlier than would otherwise have been possible.

By providing the fastest computing power available, the program in its modest way helps ensure DOD stays ahead of the enemy.

Joab Jackson is chief technology editor at Government Computer News magazine.

