Speed really matters

Supercomputing flies to the rescue of the F-22.

Aerospace engineers have long used wind tunnels
to test aircraft designs, but the tunnels
have limits.

Although wind tunnel tests can provide
thousands of data points, researchers need
more than that to design complex, modern
aircraft, said Cray Henry, director of the
Defense Department's High Performance
Computing Modernization program. That's
why researchers are turning to simulations
using supercomputers to study full aircraft
maneuvers.

"You can figure out a much wider envelope
of information about these aircraft," Henry
said. "That provides a much better prediction
of where an aircraft might have problems. You
can create significantly more information with
a richer understanding of performance characteristics
through simulation than you can do
with most physical tests."

CRITICAL REQUIREMENTS

Understanding the maneuverability and stress
limitations of the F-22 Raptor fighter was critical
to Air Force officials when designing the
aircraft. The high-G maneuvers the F-22 performs
not only tax the pilot's body but also put loads
and stresses on the plane's airframe.

Designers needed to know if the loads
would exceed the limits of the aircraft. They
used supercomputers from DOD's High
Performance Computing Modernization program
to run the loads analysis. Only after the
tests showed that the aircraft would be safe did
the F-22 go into service, in 2005.

The F-22A loads program began with computer
models that identified the anticipated
maneuver loads based on flight simulation,
aerodynamic distributions and the associated
structural responses. Wind tunnel models
were used to validate the aerodynamic distributions.
Real-world flight test loads were used
to adjust, validate and correlate the model.
Now that the fighters are in service, the loads
model can also be used to assess new configurations
of the aircraft or to identify areas of
potential flight and service-life problems arising
from expanded missions. That could help catch
problems before they become issues in the field.

The model required more than 10,000
hours of computer time on the SGI Origin
3900 supercomputer at the program's supercomputing
center at the Aeronautical Systems
Center at Wright-Patterson Air Force Base in
Ohio.
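
The full loads model is far too large to reproduce
here, but the short Python sketch below gives a
rough, hypothetical flavor of the kind of check
such an analysis performs: estimating the load
factor of a few sample maneuvers and comparing it
with an assumed structural limit. The function
names and every number in it are placeholders,
not F-22 data.

    # Illustrative only: a toy maneuver-loads check, not the F-22 program's actual model.
    # Every number and limit below is a hypothetical placeholder.

    def load_factor(airspeed_ms, air_density_kgm3, wing_area_m2, lift_coeff, weight_n):
        """Load factor n = lift / weight for a steady symmetric pull-up."""
        dynamic_pressure = 0.5 * air_density_kgm3 * airspeed_ms ** 2    # Pa
        lift = dynamic_pressure * wing_area_m2 * lift_coeff             # N
        return lift / weight_n

    LIMIT_LOAD_FACTOR = 9.0    # assumed structural design limit for this sketch

    for speed in (150.0, 220.0, 300.0):    # m/s, sample flight conditions
        n = load_factor(speed, air_density_kgm3=0.9, wing_area_m2=78.0,
                        lift_coeff=1.2, weight_n=250_000.0)
        verdict = "within limit" if n <= LIMIT_LOAD_FACTOR else "exceeds limit"
        print(f"{speed:5.0f} m/s -> load factor {n:4.1f} ({verdict})")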

"For the most part, the flight test program
gave us confidence in our loads model," said
Brian Bohl, an aeronautical engineer with
Lockheed Martin Corp.'s F-22A Structures
Certification IPT-Loads and Criteria team.
"There were some surprises along the way, but
that's the reason why you flight test."

PROGRAM USERS

The DOD High Performance Computing
Modernization program provides supercomputing
services to scientists and engineers in
DOD's civilian and military organizations and
to defense contractors. All the work is research
aimed at developing new products and processes
for DOD.

Developing better fuels or building new
armored vehicles are typical projects under
way at the center. Researchers often have
enough understanding of projects' fundamental
physics and chemistry to build mathematical
models for the supercomputer. The models
show how the systems work and allow
researchers to run many virtual experiments.

That helps reduce design time because
researchers can run many tests quickly
before actually building a promising design.

The defense laboratories from the military
services are among the biggest users of the
program. Several dozen universities also work
with the laboratories and have access to the
high-performance systems.

One project studied jet engines and ways to
make them more efficient. Researchers wanted
to design an engine that would let aircraft fly
farther or faster on the same amount of fuel as
today's engines.

The research focused on coatings used on
the internal parts of jet engines. Engines burn
hot enough to melt metal, so the metal components
are coated with ceramics and other
materials that shield them from that heat.

"We've had a couple of projects over the
years that look at, are there different chemical
properties that can be applied to metal components,"
Henry said. "They looked at building
different bonding layers and how to get different
materials to adhere to the metal and stay."

The work focused on understanding the
thermal cycles, the combustion cycles and how
materials break down in that ultrahot environment.
Simulations were conducted to see what
breaks down coatings and what can be done to
make them stronger and lighter.
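
As a rough, hypothetical illustration of why
those coatings matter, the short sketch below
uses a simple one-dimensional heat-conduction
estimate, assuming generic property values, to
compare the metal temperature of a bare engine
wall with one protected by a thin ceramic layer.
It is not drawn from the DOD simulations
described above.

    # Illustrative only: a one-dimensional, steady-state heat-conduction sketch showing
    # why a thin ceramic layer lowers the metal temperature. The property values are
    # rough placeholders, not data from the projects Henry describes.

    def metal_surface_temp(t_gas, t_cool, h_gas, h_cool,
                           coat_thick, coat_k, metal_thick, metal_k):
        """Temperature (C) at the outer metal surface of a coated engine wall."""
        # Series thermal resistances per unit area: gas film, coating, metal, coolant film.
        resistance = 1.0 / h_gas + coat_thick / coat_k + metal_thick / metal_k + 1.0 / h_cool
        heat_flux = (t_gas - t_cool) / resistance                  # W/m^2 through the wall
        return t_gas - heat_flux * (1.0 / h_gas + coat_thick / coat_k)

    common = dict(t_gas=1600.0, t_cool=650.0, h_gas=2000.0, h_cool=3000.0,
                  metal_thick=0.002, metal_k=20.0)                 # placeholder superalloy wall

    bare = metal_surface_temp(coat_thick=0.0, coat_k=1.0, **common)
    coated = metal_surface_temp(coat_thick=0.0003, coat_k=1.0, **common)   # 0.3 mm ceramic layer
    print(f"bare metal surface:      {bare:5.0f} C")
    print(f"under a ceramic coating: {coated:5.0f} C")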

"They've come up with some good ideas on
how to change the coatings in engines," Henry
said. "I would say look at the commercial airline
industry over the next decade and see how
much engines change internally. Using supercomputers,
we were able to look at what the
possible compounds in engines could be."

THE PARALLEL APPROACH

One thing delaying wider use of supercomputers
is the complexity of writing the applications.
Traditional computer coding is done in
a linear manner: Step 1, Step 2, Step 3 and so
on. But supercomputers require a different
kind of thinking because processes occur
simultaneously.

"You have to decompose your math into a
way that you can solve both sides of an equation
at the same time," Henry said.
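
To make that concrete, the short sketch below
shows the same set of independent simulation
cases computed the traditional way, one after
another, and then decomposed across worker
processes so they run at the same time. The
workload and names are hypothetical stand-ins,
not code from the modernization program.

    # A minimal sketch of that decomposition idea: the same independent cases run one
    # after another (serial) and then simultaneously across worker processes (parallel).
    # The workload is a stand-in, not an actual DOD application.
    from multiprocessing import Pool

    def simulate_case(angle_of_attack_deg):
        """Stand-in for one expensive simulation; returns a fake result for that case."""
        total = 0.0
        for step in range(200_000):                  # pretend time-stepping loop
            total += (angle_of_attack_deg + step % 7) * 1e-6
        return angle_of_attack_deg, total

    cases = list(range(32))                          # 32 independent flight conditions

    if __name__ == "__main__":
        # Serial: Step 1, Step 2, Step 3 ... one case at a time.
        serial_results = [simulate_case(c) for c in cases]

        # Parallel: the problem is decomposed so the cases are solved at the same time.
        with Pool(processes=4) as pool:
            parallel_results = pool.map(simulate_case, cases)

        assert serial_results == parallel_results    # same answers, produced concurrently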

The need for people and techniques to develop
those high-performance applications will
continue to grow.

Doug Beizer (dbeizer@1105govinfo.com) is a staff
writer at Washington Technology.
