What’s the Difference Between Single- and Multiple-Core Solvers?
When engineering analysts solve models using finite-element-analysis (FEA) and computational-fluid-dynamics (CFD) simulation software, they can often choose single- or multiple-core solvers. Although single core has historically been the default choice—and the easier one to deal with—in most cases, multicore has become the better option.
A multicore processing approach does require a higher initial financial investment than a single-core one, and it is tougher to implement. But investing time and resources into multicore processing can pay off sooner than you might think: multicore systems deliver greater throughput, capability, efficiency, and speed.
Computing is inherently linear; it proceeds through logical, sequential steps. When an engineer is working with only one core, new instruction sets or improved packaging can yield some speed and performance improvements. Some cores are faster than others, of course, but performance on any single core can only be pushed so far. Speedups plateau, and the gains rarely extend beyond raw speed. A single piece of hardware can only do so much, no matter what software is being used.
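To make that sequential constraint concrete, here is a minimal sketch in Python. The `solve_case` function is a hypothetical stand-in for a real solver run, not any particular product's API; the point is simply that on a single core, independent analysis tasks queue up one after another, so total wall time grows with the number of cases.

```python
import time

def solve_case(case_id):
    """Hypothetical stand-in for one solver run; sleep represents compute time."""
    time.sleep(1.0)  # pretend each case takes about 1 second of work
    return f"case {case_id} done"

start = time.perf_counter()
# On a single core, the cases can only be processed one after another.
results = [solve_case(i) for i in range(4)]
elapsed = time.perf_counter() - start

print(results)
print(f"serial wall time: {elapsed:.1f} s")  # roughly 4 x 1 s
```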
Enter Multicore Processing
More and more analysts are turning to multicore processing (sometimes called high-performance computing, or HPC) for their systems. This often implies multiple computers (clusters) working together on a single problem, but not necessarily on a single solution. Multicore can also refer to multiple processes independently working on a set of problems. For example, several computers could work on different load cases simultaneously, but independently.
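As a rough illustration of that "independent load cases" pattern, the sketch below uses Python's standard process pool to farm several cases out to separate worker processes on one machine. The `solve_load_case` function and the case names are assumptions standing in for real solver invocations, not a specific vendor's interface.

```python
from concurrent.futures import ProcessPoolExecutor
import os

def solve_load_case(case):
    """Hypothetical stand-in for launching an FEA/CFD solver on one load case."""
    # In practice this might shell out to a solver binary or call its scripting API.
    return f"{case} solved in process {os.getpid()}"

if __name__ == "__main__":
    load_cases = ["gravity", "thermal", "pressure", "vibration"]  # assumed case names
    # Each worker handles one case independently; the cases share no solution data.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(solve_load_case, load_cases):
            print(result)
```

On a cluster, a job scheduler scales the same idea out across machines, but the principle holds: because the cases exchange no data, they parallelize almost perfectly.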