Graphics hardware solves a range of rendering problems.
By Ian Williams
Senior applied engineer
Santa Clara, Calif.
Edited by Paul Dvorak
The power behind recent graphics hardware, usually a card or board, has been driven in part by computer gamers who want fast, colorful action, and in part by engineering and creative professionals who need to rotate, shade, and slice huge models. Other departments have driven developments as well. For example, rendering images with photographic quality adds pizzazz to sales and marketing presentations. And simplifying complex CAD geometry helps illustrate maintenance and operator manuals. But many engineers and computer users don't really understand how standard graphics hardware and software shade and render images. Let's lift the veil and take a peek at graphics technology and an advance that will change the way we use computer models.
Most designers and engineers recognize smooth or Gouraud shading, the method typically used to render computer images. The gray engine head in an accompanying image shows an example. There isn't a high degree of realism, but the image is recognizable. The rendering method relies on graphics hardware that uses small triangles to approximate an object's shape. The number of triangles the hardware processes is typically the limiting factor in how realistic the image will be. More triangles mean more vertex color samples and greater realism. But more triangles also slow the operation. It's no surprise that large, complex assemblies require high-end graphics hardware if designers expect real-time interaction.
To generate a typical Gouraud-shaded image, the hardware approximates and encodes lighting equations and calculates color values at triangle vertices. Lighting conditions are defined by characteristics such as the number, position, and color of lights, as well as approximations for optical properties of materials. After calculating a color value for each vertex, the color value for pixels within triangles is determined by linearly interpolating between vertex values. This method reasonably approximates the real world and balances the trade-off between performance and realism. But there are limits. They frequently show up as diluted or washed out highlights, and in some cases, the underlying triangle mesh remains visible.
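The two steps above, lighting evaluated at vertices followed by linear interpolation across each triangle, can be sketched in a few lines of Python. This is a minimal illustration, not real graphics-driver code; the single directional light and the Lambertian (diffuse-only) material model are simplifying assumptions, and all names are made up for the example.

```python
# Minimal sketch of Gouraud shading for one triangle, assuming one
# directional light and a purely diffuse (Lambertian) material.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def diffuse(normal, light_dir, base_color):
    # Lambert's law: brightness is the cosine of the angle between
    # the surface normal and the light direction, clamped at zero.
    n, l = normalize(normal), normalize(light_dir)
    intensity = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * intensity for c in base_color)

def gouraud_pixel(vertex_colors, bary):
    # Step 1 happened once per vertex (lighting).  Step 2 happens per
    # pixel: barycentric (linear) interpolation of the vertex colors.
    return tuple(
        sum(w * col[i] for w, col in zip(bary, vertex_colors))
        for i in range(3)
    )

# Per-vertex lighting for a triangle whose normals differ slightly.
light = (0.0, 0.0, 1.0)
base = (0.8, 0.2, 0.2)
normals = [(0.0, 0.0, 1.0), (0.3, 0.0, 0.95), (-0.3, 0.0, 0.95)]
vertex_colors = [diffuse(n, light, base) for n in normals]

# Color of the pixel at the triangle's centroid.
center = gouraud_pixel(vertex_colors, (1 / 3, 1 / 3, 1 / 3))
```

Note that only three lighting calculations happen per triangle; every interior pixel is a cheap weighted average, which is exactly why the method is fast and why it can miss detail between vertices.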
A new approach overcomes these limitations by calculating a color value for every pixel instead of every vertex, a process called per-pixel lighting. Because objects on the computer screen typically contain many more visible pixels than vertices, the per-pixel-lighting approach produces significantly more accurate lighting and visual quality.
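The difference is easiest to see with a specular highlight. The Python sketch below is a contrived, illustrative case (light and viewer on the same axis, a narrow Phong-style highlight centered in the triangle's interior): per-vertex lighting dilutes the highlight because no vertex sees it, while per-pixel lighting, which interpolates normals and then evaluates lighting, captures it.

```python
# Illustrative comparison of per-vertex vs. per-pixel lighting on a
# specular highlight.  Geometry and exponent are made-up test values.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def specular(normal, light_dir, shininess=32):
    # Narrow Phong-style highlight: cosine raised to a high power.
    cos = sum(a * b for a, b in zip(normalize(normal), normalize(light_dir)))
    return max(0.0, cos) ** shininess

light = (0.0, 0.0, 1.0)
# Vertex normals all tilt away from the light; the interpolated normal
# in the interior points nearly straight at it.
normals = [(0.5, 0.0, 0.87), (-0.5, 0.0, 0.87), (0.0, -0.5, 0.87)]

# Per-vertex (Gouraud): light the corners, interpolate the results.
per_vertex = sum(specular(n, light) for n in normals) / 3

# Per-pixel: interpolate the normal first, then evaluate the lighting.
mid_normal = tuple(sum(n[i] for n in normals) / 3 for i in range(3))
per_pixel = specular(mid_normal, light)
# per_pixel comes out far brighter: the vertex-averaged highlight is
# washed out, which is the limitation the article describes.
```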
As computer-graphics hardware becomes more sophisticated, users expect it to do more than simple shading. A process called graphics programmability might fulfill the expectation. The technology is now being offered to CAD software developers who would make it available to their users through simple menu selections. It holds the potential to further change the way engineering departments work.
For example, most large engineering firms have illustrators who produce diagrams for manuals and reference documents. While engineering migrated to 3D CAD to generate accurate geometry, the techniques for producing illustrations did not advance. This is partly because shaded representations become too confusing when reduced to 2D diagrams. They contain too much information and detail, and don't properly represent a component or assembly. Consequently, skilled artists usually simplify objects to only the needed detail.
Graphics programmability offers an alternative to an illustration department through a technique oddly called nonphotorealistic rendering. It turns CAD geometry into cartoonlike drawings that are great for manuals. Hence, it's more often called toon shading.
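The core trick behind toon shading is simple enough to sketch: instead of shading continuously, snap the lighting term to a few flat bands, which reads like a hand-inked illustration. The Python below is a minimal illustration only; the band count and the mapping to output tones are arbitrary choices, not any vendor's standard.

```python
# Toon-shading sketch: quantize a continuous 0..1 lighting value into
# a small number of flat bands.  Three bands is an arbitrary choice.

def toon_shade(intensity, bands=3):
    # Clamp to the valid range, then snap to one of `bands` levels.
    intensity = max(0.0, min(1.0, intensity))
    level = min(int(intensity * bands), bands - 1)
    return level / (bands - 1)

# A smooth ramp of lighting values collapses into three flat tones,
# which is what gives the cartoonlike look.
ramp = [toon_shade(i / 10) for i in range(11)]
```

In a real programmable-shading pipeline this quantization would run per pixel on the graphics hardware; outlines are typically added by darkening pixels where the surface normal turns nearly perpendicular to the view direction.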
In the past, such images were generated by CPU-driven computations. Graphics programmability has the potential to change this workflow from off-line and time consuming to interactive. The change is beneficial because an off-line workflow inherently brings frequent interruptions, while a real-time, interactive workflow promotes experimentation. That means if you don't like the outcome, tweak the model, render it again, and a few seconds later, judge the results.
Drawing transparent objects is another area in which today's computer graphics frequently come up short. The methods and approximations typically used almost always produce incorrect visuals. Because components are represented as triangles, generating proper images of transparent objects requires drawing all triangles in a specific order: the furthest triangle from the eye first, then the next closest, and so on. If they aren't drawn in this order, objects and surfaces disappear. Most designers appreciate that the furthest triangle in the scene changes with every viewing angle, so finding the correct draw order for every frame of even an average-sized model is computationally impractical. The problem worsens in large assemblies that need frequent interaction. These complications have led users to avoid transparency altogether, or to approximate it with techniques that give poor visual quality.
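The ordering requirement just described is the classic painter's algorithm: sort back to front, then alpha-blend each layer over what is already drawn. A minimal Python sketch for a single pixel follows; the triangle depths, colors, and 50% opacities are made up for illustration.

```python
# Painter's-algorithm sketch for one pixel: sort transparent triangles
# back to front by distance from the eye, then alpha-blend in order.

def over(src_rgb, src_alpha, dst_rgb):
    # Standard "over" compositing: src blended on top of dst.
    return tuple(src_alpha * s + (1 - src_alpha) * d
                 for s, d in zip(src_rgb, dst_rgb))

# (depth from eye, color, alpha); listed in arbitrary draw order.
triangles = [
    (2.0, (1.0, 0.0, 0.0), 0.5),   # red, nearest the eye
    (8.0, (0.0, 0.0, 1.0), 0.5),   # blue, farthest
    (5.0, (0.0, 1.0, 0.0), 0.5),   # green, in between
]

background = (0.0, 0.0, 0.0)
pixel = background
# Deepest first; skipping this sort gives a visibly wrong result.
for depth, color, alpha in sorted(triangles, key=lambda t: -t[0]):
    pixel = over(color, alpha, pixel)
```

The sort is the expensive part: it must be redone every time the viewpoint moves, which is exactly why interactive models make correct transparency so hard.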
Adaptable graphics hardware has a solution called Order Independent Transparency. It accurately represents transparent objects, maintaining high visual quality and realism, while still handling dynamic interaction with models and components. It lets engineering software draw objects and assemblies several times to peel away layers of transparency and then blend them back together. The overall effect is more compelling and eye-catching because of the instant information it brings to sales illustrations and design reviews.
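The peel-and-blend idea can be sketched in Python under a simplifying assumption: that each drawing pass can isolate, per pixel, the nearest fragment not yet captured. The function names below are illustrative, not a real hardware API; the point is that no global sort is ever performed, because the layers come out in front-to-back order by construction.

```python
# Layer-peeling sketch of order-independent transparency for one pixel,
# with fragments arriving in arbitrary order.

def peel_layers(fragments):
    # fragments: unsorted (depth, color, alpha) tuples for one pixel.
    remaining = list(fragments)
    layers = []
    while remaining:                        # one drawing pass per layer
        nearest = min(remaining, key=lambda f: f[0])
        layers.append(nearest)              # peel the closest layer away
        remaining.remove(nearest)
    return layers                           # front-to-back by construction

def blend(layers, background):
    pixel = background
    for _depth, color, alpha in reversed(layers):   # blend back to front
        pixel = tuple(alpha * c + (1 - alpha) * p
                      for c, p in zip(color, pixel))
    return pixel

# Same made-up fragments as before, deliberately out of depth order.
fragments = [(2.0, (1.0, 0.0, 0.0), 0.5),
             (8.0, (0.0, 0.0, 1.0), 0.5),
             (5.0, (0.0, 1.0, 0.0), 0.5)]
result = blend(peel_layers(fragments), (0.0, 0.0, 0.0))
```

The result matches what a perfect back-to-front sort would produce, but the cost is a fixed number of passes over the scene rather than a per-frame sort of every triangle.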