FE Updated: Supercomputing When It’s Needed

Aug. 7, 2008
The steady increase in desktop computing power always seems to be overtaken by larger CAD models that need more complex and lengthy analyses.

Ronnie Hoogerwerf
Application Engineer
Interactive Supercomputing Inc.
Waltham, Mass.

Edited by Paul Dvorak

And with simulation increasingly replacing physical testing, it’s not uncommon for a design team to tie up a company’s most capable computer for days.

High-performance computing (HPC) systems, one solution to insufficient compute power, come in the form of traditional supercomputers, multiprocessor servers, or parallel clusters. But these have long been the domain of elite technologists, and their expense and programming complexity send most would-be users back to their desktops.

Recent on-demand supercomputing services, however, can make up for a research effort's occasional compute shortfall. Such services suit applications that benefit from parallel processing, and teams that prefer to develop their own applications or work directly with the equations and matrices that describe the physics, chemistry, or biology of their work.

On-demand supercomputing services are offered by national supercomputing research labs and the private sector at what most consider affordable costs. (In some cases, they are free.) One example of such "bridge" software and hardware is the Star-P On-Demand service, which combines high-performance, multiprocessor hardware and software in an Internet-based offering.

Several recent developments make on-demand supercomputing effective in a production environment. For example, many research engineers work with familiar desktop applications such as Matlab from The MathWorks Inc., Natick, Mass. (mathworks.com), or the open-source Python language. Python is a high-level programming tool favored by scientists, engineers, and analysts. It's significant in the HPC world because, paired with Star-P, it lets researchers tackle bigger problems, with larger data sets, than in-house computers would allow. Users can write Python programs that handle large matrices and array objects from their desktop PCs without running out of steam.
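As a rough illustration (this is plain numpy, not Star-P code), the kind of array-heavy Python program that outgrows a desktop looks like the sketch below; the matrix size is kept small here so it runs anywhere:

import numpy as np

n = 2000                         # push toward 100,000+ and a desktop runs out of RAM
rng = np.random.default_rng(0)

A = rng.standard_normal((n, n))  # dense system matrix
b = rng.standard_normal(n)       # right-hand side

x = np.linalg.solve(A, b)        # O(n^3) solve: the step worth offloading
print("residual:", np.linalg.norm(A @ x - b))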

In some cases, users of packaged applications from independent software vendors (ISVs), such as Ansys Inc., have in-house pre- or postprocessing routines that may be Matlab or Python based. In those cases, Star-P can accelerate the in-house portion of the processing.

A seamless integration between a user's desktop client and the HPC server software lets users export the computationally intensive portions of their desktop software to a physically remote machine without having to choreograph the complex parallelism and interprocessor communications themselves.
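The pattern is easiest to see in miniature. In the sketch below, Python's standard ProcessPoolExecutor stands in for the remote HPC server (the actual Star-P client API is not shown), and heavy_kernel is a hypothetical stand-in for the compute-intensive portion of a desktop program:

from concurrent.futures import ProcessPoolExecutor
import numpy as np

def heavy_kernel(seed, n=500):
    # Hypothetical stand-in for a compute-intensive routine:
    # solve a random n-by-n linear system and return the solution norm.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    return float(np.linalg.norm(np.linalg.solve(A, b)))

if __name__ == "__main__":
    # A local process pool stands in for the remote cluster;
    # the calling script reads the same either way.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(heavy_kernel, range(8)))
    print(results)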

Take finite-element analysis, for example. Many dedicated FEA packages exist, but Matlab is widely used by those who need more control over the underlying algorithms and the equations that govern relationships between nodes in the mesh. Typical applications include structural analysis, fluid flow, high-temperature plasma flow, airframe optimization, and grain-boundary effects in crystals.

HPC work typically flows this way:
1. Export the 3D geometry of the object under study from a computer-aided design program.
2. Import the geometry into Matlab.
3. Assemble the matrices that define the set of equations to be solved (stiffness and force matrices, for instance).
4. Solve the resulting system of equations (for example, [F] = [K][x], where [F] = the applied-force matrix, [K] = the stiffness matrix, and [x] = the displacement matrix for which the equation is solved). A sketch of steps 3 and 4 follows this list.
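Here is a minimal sketch of steps 3 and 4 for a toy 1-D bar model, using Python's scipy sparse routines in place of the Matlab workflow described above (the element count and load are made up for illustration):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n_el, EA, L = 100, 1.0, 1.0        # elements, axial stiffness, bar length (made up)
h = L / n_el
n_nodes = n_el + 1

# Step 3: assemble the global stiffness matrix from 2x2 element matrices.
k = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
rows, cols, vals = [], [], []
for e in range(n_el):
    dofs = [e, e + 1]
    for i in range(2):
        for j in range(2):
            rows.append(dofs[i])
            cols.append(dofs[j])
            vals.append(k[i, j])
K = sp.csr_matrix((vals, (rows, cols)), shape=(n_nodes, n_nodes))  # duplicates sum

F = np.zeros(n_nodes)
F[-1] = 1.0                        # unit load at the free end

# Step 4: fix node 0 and solve [K][x] = [F] for the displacements.
x = np.zeros(n_nodes)
x[1:] = spla.spsolve(K[1:, 1:].tocsc(), F[1:])
print("tip displacement:", x[-1])  # analytic answer: F*L/(EA) = 1.0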

HPC on demand makes computationally intensive workflows more tractable because it provides parallel execution for familiar desktop software, and it eliminates having to rewrite code in C, C++, or Fortran with MPI to run on parallel computers.

For the FEA example, features in the software provide task- and data-parallel modes. The task-parallel mode is best suited to executing operations that do not depend on each other, such as generating sparse matrices. With the matrices assembled, the equations can be solved several ways, using either standard Matlab functions or solvers plugged in from the open-source community or numerical-library vendors. For example, through an OpenConnect feature, users can plug in solvers from Sandia National Labs' Trilinos library.
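The solver-swapping idea can be sketched without Star-P or Trilinos. Below, the same sparse system (a made-up symmetric positive-definite matrix) is solved with a direct factorization and then with an iterative conjugate-gradient method, with scipy standing in for the parallel solver libraries the article describes:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A simple symmetric positive-definite sparse system, made up for illustration.
n = 2000
main = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
F = np.ones(n)

x_direct = spla.spsolve(K.tocsc(), F)   # direct sparse factorization
x_cg, info = spla.cg(K, F)              # iterative conjugate-gradient solver

print("cg converged:", info == 0)
print("solvers agree:", np.allclose(x_direct, x_cg, atol=1e-3))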

The Trilinos framework and library (trilinos.sandia.gov/index.html) provides a range of high-performance capabilities for solving the numerical systems at the heart of many complex multiphysics applications. Trilinos facilitates the design, integration, and support of mathematical software libraries within an object-oriented framework for the solution of largescale, complex multiphysics engineering and scientific problems.

When users are ready to run an application, they submit it to the cluster over the Internet, and it executes on the next available nodes.

To estimate costs, users can try Star-P On-Demand with a 20 CPU-hour trial account. For further design development, they can use the service in pay-as-you-go mode for less than $3/CPU-hour or purchase monthly packages of core hours at lower hourly rates.
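As a back-of-envelope check, only the $3/CPU-hour rate comes from the text; the job size below is hypothetical:

cores = 32          # processors requested (hypothetical job)
wall_hours = 2.5    # wall-clock run time (hypothetical)
rate = 3.00         # $/CPU-hour, the upper bound quoted above

cpu_hours = cores * wall_hours
print(f"{cpu_hours:.0f} CPU-hours -> at most ${cpu_hours * rate:.2f}")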

Where HPC works well

HPC on demand, usually a pay-per-use resource, delivers interactive parallel computing power with a modicum of training. Used this way, HPC shortens prototyping and problem solving across a range of complex engineering applications while making supercomputing easy and instantly interactive. The service, often running on multiprocessor Linux clusters, is accessible online from anywhere at any time.

HPC resources also give organizations a lower-cost "testing ground" for refining models and algorithms before putting them into production on in-house hardware clusters. The desktop client runs the same whether it works in remote on-demand mode or in a local client-server configuration, so organizations can simply purchase software licenses to run jobs locally when they are ready.

Event-driven engineering

San Diego Supercomputer Center (SDSC) is making its HPC resources publicly available to support event-driven engineering. Qualifying applications are urgent or event driven; examples have included simulating earthquakes, warning of tornadoes, predicting hurricane paths, and forecasting the drift of a toxic plume.

On-demand supercomputing can play a critical role in these events because, when calamity strikes, emergency responders and the public need detailed information immediately. Without fast and easy access to powerful HPC resources, the additional critical analysis and modeling can take hours, days, or even weeks.

SDSC's OnDemand service uses a Dell cluster with 64 Intel dual-socket, dual-core compute nodes, for a total of 256 processors. Users access the system remotely from their desktops using a Star-P client.

 

The software list on the left indicates the types of software and languages users can work with. Tasks on the right are handled by the HPC provider.

 

The heart-and-aorta model was generated in Matlab and run on a Star-P cluster using a handful of data tags and commands. Such a simulation might model tissue to examine surface stresses or blood flow in aging bodies.

About the Author

Paul Dvorak - Senior Editor
21 years of service. BS Mechanical Engineering, BS Secondary Education, Cleveland State University. Work experience: high-school mathematics and physics teacher; design engineer, U.S. Air Force. Primary editor for CAD/CAM technology. He is no longer with Machine Design.
