Unstructured dynamic meshes accurately capture the complicated motion of separating bombs with less drag on computational resources.
Deryl O. Snyder
Jacobs Sverdrup Inc.
Eglin Air Force Base, Fla.
Designers once relied solely on flight tests to evaluate the separation of stores (bombs, missiles, external fuel tanks, and so forth) from transonic military aircraft. New weapons systems often took years to certify because tests were time consuming, expensive, and occasionally led to the loss of aircraft when events didn't go as expected. Dynamically unstable objects, such as fuel tanks or modern, agile munitions that experience a control failure, are especially challenging.
Wind-tunnel testing and, more recently, computational-fluid-dynamics (CFD) simulations have reduced dependence on flight tests. In fact, wind-tunnel testing has become the design tool of choice. But such tests are relatively expensive and must be planned well in advance. They also have limited accuracy in some cases, as for stores released from weapons bays or the ripple release of multiple objects. The use of small-scale models can further hurt accuracy.
Early CFD modeling of separating stores combined conventional steady-state solutions with empirical or semiempirical approaches. The advent of Chimera overset (overlapping) grids made possible unsteady, full-field simulations with or without viscous effects. But the grids still must be generated, assembled, and recalculated at each time step, which is a drag on computational resources. Demands for accuracy and stability often necessitate fine grids and small time steps, which further lengthen computation time. Such is the case for simulating stores with complex geometries, notably those released from weapons bays, where intricate bay geometry can alter the flow field.
But an unstructured dynamic-mesh technique from engineers at Jacobs Sverdrup Inc. and the U.S. Air Force Munitions Directorate significantly boosts the speed of grid generation because user input goes mostly into building a surface mesh. And because grids don't overlap, fewer grid points are needed.
The approach has three basic components: a flow solver, a six-degree-of-freedom (6DOF) trajectory calculator, and a dynamic-mesh algorithm.
The Fluent flow solver from Fluent Inc., Lebanon, N.H., solves the governing fluid-dynamic equations at each time step. The calculated pressure field is integrated over the store surface to produce aerodynamic forces and moments. The 6DOF trajectory code then computes store movement from these forces and moments by integrating the Newton-Euler equations of motion. It runs within Fluent as a user-defined function that dynamically links with the solver at run time. Finally, the dynamic-mesh algorithm modifies the unstructured mesh to accommodate the moving body in the discretized computational domain. Small-scale body motions are handled by a localized smoothing method, while large-scale motions that produce poor-quality cells (based on volume or skewness criteria) trigger local remeshing.
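The 6DOF update at the heart of this loop can be sketched in a few lines of Python. Everything here is illustrative, not Fluent's actual user-defined-function interface: the function name, the forward-Euler integration, and the numbers in the example (a roughly 2,000-lb store) are assumptions. A production trajectory code would typically use quaternions for orientation and a higher-order integrator.

```python
import numpy as np

def sixdof_step(state, force, moment, mass, inertia, dt):
    """One forward-Euler step of the Newton-Euler equations.

    state:   dict with position x (3,), velocity v (3,),
             Euler angles phi (3,), and body rates w (3,).
    inertia: principal moments of inertia (3,) in body axes.
    Hypothetical sketch only -- not the authors' actual 6DOF code.
    """
    x, v = state["x"], state["v"]
    phi, w = state["phi"], state["w"]
    Ix, Iy, Iz = inertia

    # Translation: F = m * a (inertial frame)
    a = force / mass

    # Rotation: Euler's equations for principal body axes
    wdot = np.array([
        (moment[0] - (Iz - Iy) * w[1] * w[2]) / Ix,
        (moment[1] - (Ix - Iz) * w[2] * w[0]) / Iy,
        (moment[2] - (Iy - Ix) * w[0] * w[1]) / Iz,
    ])

    return {
        "x": x + v * dt,
        "v": v + a * dt,
        "phi": phi + w * dt,   # small-angle approximation
        "w": w + wdot * dt,
    }

# Example: store just released, acted on by its weight and a
# nose-down pitching moment (all values assumed for illustration).
state = {"x": np.zeros(3), "v": np.zeros(3),
         "phi": np.zeros(3), "w": np.zeros(3)}
force = np.array([0.0, 0.0, -907.0 * 9.81])   # weight, N
moment = np.array([0.0, -500.0, 0.0])          # pitch moment, N*m
state = sixdof_step(state, force, moment, 907.0,
                    np.array([27.0, 488.0, 488.0]), dt=0.002)
```

In the coupled scheme described above, the solver would supply `force` and `moment` from the integrated surface pressures each time step, and the returned position and attitude would drive the mesh-motion step.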
A generic wing/pylon/store geometry for which benchmark experimental data are available provided a test case for the method. The 45° clipped delta wing has a 25-ft root-chord length. An ogive-flat pylon extends 2 ft below the wing leading edge. The store consists of a tangent-ogive forebody, a clipped tangent-ogive afterbody, and a cylindrical centerbody. An ogive is a pointed, curved shape most commonly used for rocket and bomb nose cones and bullets.
Fluent's Gambit preprocessor generated triangular surface meshes from CAD geometry files. The meshes were then imported into Fluent's TGrid meshing preprocessor, which produced the tetrahedral volume mesh. With the automated meshing tools in Gambit and TGrid, the complete mesh can be created in just a few hours.
The wing, pylon, and store surfaces all used nonpermeable wall boundary conditions. A pressure far-field condition was set at the upstream domain extent; a symmetry plane was imposed at the wing root; and the downstream boundary was assigned a pressure-outlet condition. A fully converged steady-state solution provided the initial condition for the separation analysis. Time steps of 0.01, 0.002, and 0.0004 sec were evaluated for each of three grid refinements.
The result: CFD did a good job of modeling separation at a speed of Mach 1.2 at an altitude of 38,000 ft. The center-of-gravity location closely matched experimental data for all grid refinements. As expected, the store moved rearward and slightly inboard as it fell, though CFD simulations slightly underpredicted the rearward acceleration.
The simulation also agreed well with experiment for store pitch and yaw angles. The roll angle is tricky to model because the moment of inertia about the roll axis is much smaller than those about the pitch and yaw axes. Consequently, roll is highly sensitive to errors in predicted aerodynamic force. In this case, the store initially pitched nose up in response to the moment produced by the ejectors that held it in place. Once free of the ejectors, the nose-down aerodynamic pitching moment reversed the trend. The store yawed initially outboard until about 0.55 sec, after which it began to turn inboard. The store rolled continuously outboard throughout the first 0.8 sec of the separation. The simulation underpredicted this trend as well, and the curve began to diverge from experimental values after about 0.3 sec.
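A quick back-of-envelope calculation shows why roll diverges first. Angular acceleration is moment divided by inertia, so a given error in the predicted aerodynamic moment produces a far larger acceleration error about the low-inertia roll axis. The inertia values and the moment error below are hypothetical, chosen only to illustrate the scaling:

```python
# Illustrative numbers (assumed, not from the study): a slender
# store's roll inertia is far smaller than its pitch inertia.
I_roll, I_pitch = 27.0, 488.0    # kg*m^2, hypothetical
moment_error = 5.0               # N*m error in predicted aero moment

# alpha = M / I, so the same moment error maps to different
# angular-acceleration errors about each axis.
alpha_err_roll = moment_error / I_roll       # rad/s^2
alpha_err_pitch = moment_error / I_pitch

print(alpha_err_roll / alpha_err_pitch)      # ~18x larger about roll
```

With these assumed inertias, the identical moment error corrupts the roll prediction roughly 18 times faster than the pitch prediction, which is consistent with the roll curve diverging while pitch and yaw stay on track.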
Experimental surface pressure data from wind tunnel tests were compared with the simulation along axial lines of the store body at four circumferential locations and three instants in time. Agreement between the simulation and experiments was exceptional. Of particular interest was the 5° circumferential location line at t = 0.0 because it sits in the small gap between the pylon and store. The simulation faithfully captured the deceleration near the leading edge of the pylon.
In sum, CFD with unstructured dynamic meshing can efficiently model transonic store separation. The use of unstructured tetrahedral meshes and a fully parallelized, accurate, and stable solver permits small grids and relatively large time steps without excessive computational burden. Runs such as the nominal-grid case in this study can finish overnight on a desktop workstation.
Public Release, AAC/PA 03-21-05-102