What’s the Difference Between Single- and Multiple-Core Solvers?
When engineering analysts solve models using finite-element-analysis (FEA) and computational-fluid-dynamics (CFD) simulation software, they can often choose single- or multiple-core solvers. Although single core has historically been the default choice—and the easier one to deal with—in most cases, multicore has become the better option.
A multicore approach does require a higher initial investment than a single-core one, and it is tougher to implement. But investing time and resources into multicore processing can pay off sooner than you might think: multicore delivers increased throughput, capability, efficiency, and speed.
Computing is inherently sequential; a single core processes instructions in logical, step-by-step order. When an engineer is working with only one core, new instruction sets or instruction packing can yield some speed and performance improvements. Some cores are faster than others, of course, but any single core can be improved only so far. Speedups top out, and the benefits gained don't extend much beyond raw speed. A single piece of hardware can only do so much, no matter what software is running on it.
Enter Multicore Processing
More and more analysts are turning to multicore processing (sometimes called high-performance computing, or HPC) for their systems. This often implies multiple computers (clusters) working together on a single problem, but not necessarily on a single solution. Multicore can also refer to multiple processes independently working on a set of problems. For example, several computers could work on different load cases simultaneously, but independently.
To maximize parallel computation, analysts can choose from a variety of multicore-processing techniques. Each is tailored to different classes of problems, or hardware configurations, which gives analysts and engineers the flexibility to apply a technique that best fits their situation. Examples of techniques employed by analysis software include:
- Shared-memory parallel
- Distributed-memory parallel with message passing between tasks
- Task-level parallelization
Further, problems can be broken up multiple ways to be parallelized:
- Spatial-domain decomposition (distributed memory)
- Frequency-domain decomposition (quasi task-level parallel)
- Parametric studies (fully task-level parallel)
Some of these techniques scale better than others, but the better-scaling ones may consume more of other resources, such as RAM.
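To make spatial-domain decomposition concrete, here is a minimal, hypothetical Python sketch (not taken from any ANSYS product): one Jacobi smoothing step on a 1-D grid, split into subdomains that each carry one "ghost" cell of overlap per side. In a real distributed-memory solver, message passing would carry those boundary values between processes; here a process pool stands in for the cluster.

```python
from multiprocessing import Pool

import numpy as np

def jacobi_chunk(chunk):
    # Each worker updates the interior of its subdomain using the ghost
    # cells on either end (the data a real solver would receive from
    # neighboring subdomains via message passing).
    return 0.5 * (chunk[:-2] + chunk[2:])

def jacobi_step_decomposed(u, n_parts):
    # Spatial-domain decomposition: split the grid into n_parts
    # subdomains, each padded with one ghost cell per side.
    bounds = np.linspace(1, len(u) - 1, n_parts + 1).astype(int)
    chunks = [u[lo - 1 : hi + 1] for lo, hi in zip(bounds[:-1], bounds[1:])]
    with Pool(n_parts) as pool:           # one process per subdomain
        updated = pool.map(jacobi_chunk, chunks)
    out = u.copy()                        # endpoints: fixed boundary values
    out[1:-1] = np.concatenate(updated)   # stitch the subdomains back together
    return out
```

Each subdomain holds its own copy of its slice (the extra RAM cost of distributed memory), and the decomposed result matches a serial Jacobi sweep exactly.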
The number of solves per hour increases with the number of cores employed using ANSYS LS-DYNA. The graph shows results from a three-car collision simulation. Data prepared by SimuTech Group Inc.
Speed
The most obvious benefit of multicore processing is increased speed. Different solution techniques scale differently as more cores are used: iterative solvers (loosely coupled) generally scale better than direct solvers (highly coupled). Either way, you can explore multiple design iterations and examine a problem more thoroughly while still meeting tough deadlines.
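Why speedup tops out at different points for different solvers is usually explained with Amdahl's law: the fraction of the solve that must run serially caps the achievable speedup no matter how many cores are added. A quick illustration (the 10% serial fraction is just an example figure, not a measured solver property):

```python
def amdahl_speedup(serial_fraction: float, n_cores: int) -> float:
    """Theoretical speedup on n_cores when serial_fraction of the
    work cannot be parallelized (0.0 <= serial_fraction <= 1.0)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# With a 10% serial portion, speedup can never exceed 10x,
# which is why highly coupled solution steps limit scaling:
print(amdahl_speedup(0.10, 8))     # ~4.7x on 8 cores
print(amdahl_speedup(0.10, 1024))  # ~9.9x, already near the 10x ceiling
```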
Efficiency
Multicore processing improves the efficiency of both your licenses and your time. You can explore the work you're already doing more thoroughly, and do it more efficiently. It also helps extend the useful life of your licenses, which saves money in the long term.
Performance, measured in iterations per minute, increases as more CPU cores are added in a fluid-pump model using ANSYS CFX. Data prepared by SimuTech Group Inc.
Capability
Perhaps the most overlooked benefit of multicore processing is how it increases capability. Since engineers can solve problems an order of magnitude faster, they can open up new physics, tasks, or problems they wouldn’t have considered before.
For example, an engineer may have the technical expertise to solve a project, but not the computing capacity to perform the work. Additional cores can unlock the possibilities of software already being used by the company. This, in turn, opens the door to new insights.
An engineer can use multicore processing to examine plasticity or creep in a model that, with single-core processing, would have been limited to linear materials only. Less time has to be spent simplifying the model during preprocessing, so more detail can be carried into the final analysis, sooner.
Less-risky simplifications and system-level analyses also become possible with multicore technology. Furthermore, with HPC, a powerful optimization tool is at your disposal: parametric analysis. Because each run is independent, parametric studies can theoretically scale perfectly, letting you run more simulations in the same time. They can also be used for simple sensitivity studies or "what if?" checks; optimization isn't required, though it is possible.
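A sketch of why parametric studies scale so well: each case is an independent solver run, so cases can be farmed out across cores with no communication between them (fully task-level parallel). The `run_case` function below is a hypothetical stand-in for launching one solver job; a real study would invoke the FEA/CFD solver with that load case.

```python
from concurrent.futures import ProcessPoolExecutor

def run_case(load_kN):
    # Hypothetical stand-in for one independent solver run; a real
    # parametric study would launch the solver with this load case.
    peak_stress = load_kN ** 2 / 100.0   # toy response, for illustration
    return load_kN, peak_stress

def parametric_sweep(loads, max_workers=4):
    # Cases share nothing, so the sweep scales (near-)linearly with
    # the number of cores available.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(run_case, loads))
```

The same pattern works for sensitivity studies: vary one input per case and compare the collected responses.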
Example of Multicore Processing
Here’s an example of how one program, ANSYS Fluent, scales with multicore processing. Note the significant increase in iterations per minute as more solver cores are used. Data prepared by SimuTech Group Inc.
ANSYS multicore technology, like most multicore systems, comes in two forms: shared-memory processing (SMP) and distributed-memory processing (DMP). With SMP, a single computer stores the problem in one pool of RAM, and all cores work on that pool. This uses RAM efficiently on a single system, but it is less CPU-efficient.
DMP breaks the problem into sections, and each “chunk” is worked on by one core. Communication links transfer information across boundaries. DMP uses more RAM, but it’s more CPU-efficient when enough RAM is available. In addition, DMP can be run on a single machine or a cluster of discrete machines. In general, DMP is faster, and is considered the default method for ANSYS Mechanical.
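The SMP/DMP trade-off can be illustrated with a toy reduction in Python (a generic sketch, not how the ANSYS solvers are implemented): the SMP version uses threads that all read one shared array, while the DMP version splits the data into chunks, each handled by a separate process that holds its own copy and sends its result back.

```python
import threading
from multiprocessing import Pool

import numpy as np

def partial_sum(chunk):
    # DMP worker: runs in its own process with its own copy of the chunk,
    # and returns (message-passes) its partial result to the parent.
    return float(chunk.sum())

def smp_sum(data, n_workers=4):
    # SMP-style: worker threads share ONE array in a single pool of RAM.
    # Memory-efficient, but every core contends for the same memory.
    partials = [0.0] * n_workers
    def worker(i):
        partials[i] = float(data[i::n_workers].sum())  # strided view, no copy
    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partials)

def dmp_sum(data, n_workers=4):
    # DMP-style: the problem is broken into chunks; each process holds
    # its own copy (more total RAM) but works without memory contention.
    chunks = np.array_split(data, n_workers)
    with Pool(n_workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

Both return the same answer; the difference is where the data lives, which is exactly the RAM-versus-CPU-efficiency trade-off described above.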
It’s prohibitively expensive to ensure every engineer has a high-powered machine, but average machines constrain the simulations and make inefficient use of licenses and engineering time. To ameliorate this, ANSYS also offers Remote Solve Manager (RSM), which allows a team of engineers to share the best hardware resources available. RSM queues jobs on a compute server, ensuring the solves happen on the best hardware. The result is lower total cost, higher throughput, and better simulations.
The Cloud Option: Continuous Improvement and Playing the Long Game
Cloud computing offers another way an engineering company can use multicore architecture to scale up its simulation and realize multicore’s many benefits. It avoids having to buy additional hardware that may not be needed all of the time.
Multicore processing will provide continuous improvement for your company and its projects. As communication overhead drops and interconnects get faster, large simulations can be parallelized across ever-increasing numbers of cores, and smaller simulations can be parallelized as efficiently as large ones.
Computational power in hardware is always on the rise. More desktops and workstations are becoming available with large numbers of cores. As the technology and hardware evolve, using software that efficiently takes advantage of these improvements allows your company to evolve along with it.
Ultimately, multicore processing can be a strong way to position your company for the future. Writing on Carnegie Mellon’s Software Engineering Institute (SEI) blog, Principal Engineer Donald Firesmith argued that single-core processors may be becoming obsolete: “Multicore processing is becoming ubiquitous as the limitations of single core processors become more widely recognized. In many application domains, essentially all processing will be multicore.”