Imagine this scenario: You are driving a brand-new luxury car. At a toll station, you accidentally open the rear power window instead of the front window and forget to close it. As the car accelerates, an annoying wind throb comes from the back. You try to close the window from the driver’s controls, but the window doesn’t move. You slow the car down, the throb stops, and suddenly you can close the window. At the dealership, they can’t find anything wrong with the car.
This happens to a number of consumers, so the issue eventually makes it to the carmaker’s development department. The engineers reproduce the problem and determine the cause: a safety feature, a touchless sensor, prevents the windows from closing when they are obstructed. The feature is meant to keep kids, who like to put their hands out the window when it’s half open, out of harm’s way. However, no one realized that wind throb could trigger that sensor. Chances are the engineers working out the aerodynamic behavior and running extremely sophisticated simulations had never heard of the sensor, let alone studied exactly how it behaves.
Unexpected behavior like this is mostly annoying, but occasionally it can be dangerous. And as complex devices become more common, the likelihood of danger rises.
For a systems-engineering approach, engineers first need a way to document what the device or system does and why it is designed the way it is. Today’s tools do not capture this. A CAD part model, an electric-circuit schematic, or a source-code file describes an implementation, not the intended functions, behaviors, dependencies, or decisions that led to a particular design. A large part of this knowledge resides in designers’ heads or, at best, is captured in documents. Careful documentation must become central to the development of complex products.
Also, designers must keep control of product variants. The number of variants is exploding as fast as the functional content. High-tech products are (hopefully) built on sophisticated platforms. Their design, sourcing, and, in many cases, their function are adapted for different countries and markets. Engineers cannot do their jobs without a consistent way to manage changes and to design in the context of variants.
In addition, product developers should foster collaboration across all stakeholders. They should participate in a unified change and project-management process where everything is tied together with PLM software.
What’s more, requirements should be embedded in the design process. Requirements-management methods have been around for years. What’s new is treating requirements the same way as any other design deliverable. When hundreds of specialists work on software code, hardware design, and electronic components that together perform a single function, engineers need an easy way to find out which requirements the component they are working on must fulfill. That tells them which simulations or tests to execute to ensure a change did not “break” anything. Each requirement is an object with its own life cycle; it can be revisioned and can be part of workflows and tasks. Most importantly, it can be linked to any other artifact.
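To make the idea concrete, here is a minimal sketch, in Python, of a requirement treated as a managed design object. The class, status values, and link types are illustrative assumptions, not any particular PLM vendor’s data model.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """A requirement managed like any other design deliverable."""
    req_id: str
    text: str
    revision: int = 1
    status: str = "WIP"                          # e.g., WIP -> in-review -> released
    links: list = field(default_factory=list)    # ties to parts, code, tests, tasks

    def revise(self, new_text: str) -> None:
        """Create a new revision instead of silently overwriting."""
        self.text = new_text
        self.revision += 1
        self.status = "WIP"                      # a revision restarts the life cycle

    def link(self, artifact_id: str, relation: str) -> None:
        """Tie the requirement to any other artifact."""
        self.links.append((relation, artifact_id))

# A requirement linked to the component that fulfills it and the test that
# verifies it -- so an engineer changing the component can find both.
r = Requirement("REQ-0042", "Window must stop closing when obstructed.")
r.link("ECU-WINDOW-CTRL", "fulfilled-by")
r.link("TEST-PINCH-001", "verified-by")
```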
Requirements are part of product configurations. Why is that so important? Hardly any high-tech product is built today without some sort of platform concept: a common base plus many configurations and feature variants that ultimately result in the final product. When requirements are built into the development and validation process this way, the approach is commonly referred to as the “150% model.” It comprises a single catalog of all requirements for an entire product platform.
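A rough sketch of how a 150% catalog might work in practice: every requirement carries the variant conditions under which it applies, and configuring a concrete product filters the superset down to that variant’s own list. The tags and options below are invented for illustration.

```python
# A "150% catalog": every requirement for the whole platform, each tagged
# with the variant conditions under which it applies (empty set = always).
catalog = [
    {"id": "REQ-001", "text": "Window anti-pinch sensor",  "applies_if": set()},
    {"id": "REQ-101", "text": "Left-hand-drive dashboard",  "applies_if": {"market:EU"}},
    {"id": "REQ-102", "text": "Right-hand-drive dashboard", "applies_if": {"market:UK"}},
    {"id": "REQ-201", "text": "Tow-hitch wiring harness",   "applies_if": {"option:towing"}},
]

def configure(catalog, selected_options):
    """Filter the 150% superset down to the requirements for one variant."""
    return [r for r in catalog if r["applies_if"] <= selected_options]

# One concrete variant: an EU car with the towing package.
for req in configure(catalog, {"market:EU", "option:towing"}):
    print(req["id"], "-", req["text"])
```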
Also important in a systems-engineering approach is bringing together the mechanical, electrical, and software-development domains. Counting the design, test, and simulation tools used among mechanical designers, software engineers, and electronics designers could easily yield a three-digit number. Some PLM suppliers dream of a single integrated tool, but that seems unlikely just from looking at the increasingly specialized tools for certain industries. Companies such as Siemens PLM Software, Troy, Mich., take a different approach: their PLM software builds data models detailed and powerful enough to represent design data from all the different tools and provide a “single source of truth.”
Centralized or decentralized?
A lot of discussion centers on two opposing philosophies of managing data across different development domains. One calls for a single repository, which lets users impose a single system of workflows and configurations. The other calls for decentralized repositories for CAD data, software, embedded-electronics artifacts, and simulation; the repositories exchange data with each other but operate independently. Decentralized repositories are good for overall performance. They also allow rapid and frequent changes and facilitate collaboration between teams working on similar tasks. The problem is that decentralized systems don’t offer a viable solution for platforms with thousands upon thousands of variants.
Many attempts have been made to connect the options and variant-management system of one application to that of another. So far, it has consistently turned out that the language and semantics of variant-management systems are so complex that it is nearly impossible for separate databases to exchange configuration information consistently.
So, how do you get the best of both worlds? The most successful players develop a strict definition of enterprise data versus application-specific data. In other words, they acknowledge that key deliverables, such as released parts, source code, and released binaries with their loaders, calibration files, and configuration files, must be managed in a single repository. However, there is plenty of transient, application-specific, and domain-specific data that needn’t be managed in a centralized repository. In fact, it should not be, because it would drag down performance and annoy users.
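A hypothetical routing rule makes the split concrete: a short list of deliverable types always lands in the central enterprise repository, while everything else stays in the authoring tool’s local store. The type names are assumptions for illustration.

```python
# Deliverable types that must live in the single enterprise repository.
ENTERPRISE_TYPES = {
    "released-part", "source-code", "released-binary",
    "loader", "calibration-file", "configuration-file",
}

def route(artifact_type: str) -> str:
    """Decide where a piece of data belongs."""
    if artifact_type in ENTERPRISE_TYPES:
        return "enterprise-repository"    # centrally managed and configured
    return "local-tool-store"             # transient, domain-specific data

print(route("released-binary"))    # enterprise-repository
print(route("mesh-scratch-file"))  # local-tool-store
```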
From models to “model-based design”
While models of many different specialties (FEA, thermal flow) have been used for years, the next big barrier is, in simple terms, to connect the models. In the automotive industry, for example, hundreds of sophisticated models are used to analyze and understand different aspects of the vehicle virtually. Models show things like the real-time behavior of software or how the engine fits in the engine compartment when running under full load. However, the industry is still a long way from being able to say that the sum of all models truly “represents” the real vehicle or even a subsystem. The emphasis today, and for years to come, is on developing systemic models that consistently “connect” the specialized models developed and executed by a wide range of software tools.
With such a system of models, the PLM platform can provide the “glue” that lets users track the impact of changes on designs. It can trigger the reexecution of the right set of simulations when a change is made. It can also track results across the development platform, meaning the entire system of options and variants.
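One way to picture that glue, as a simplified sketch: the platform holds a dependency map from simulations to the artifacts they consume, so a change to one artifact identifies exactly which simulations must be rerun. The artifact and simulation names here are invented.

```python
# Which simulations depend on which design artifacts (hypothetical data).
depends_on = {
    "SIM-AERO-FULLCAR":   {"BODY-SHELL", "MIRROR-HOUSING"},
    "SIM-WINDOW-PINCH":   {"WINDOW-CTRL-SW", "WINDOW-SENSOR"},
    "SIM-THERMAL-ENGINE": {"ENGINE-BLOCK"},
}

def simulations_to_rerun(changed_artifact: str) -> list:
    """Return the simulations invalidated by a change to one artifact."""
    return [sim for sim, inputs in depends_on.items()
            if changed_artifact in inputs]

# A change to the window-control software triggers only the pinch simulation.
print(simulations_to_rerun("WINDOW-CTRL-SW"))  # ['SIM-WINDOW-PINCH']
```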
For example, consider the approach in Teamcenter PLM. The data model has continuously expanded to represent and “understand” the data from all the domains. So capabilities of the platform — option and variant management, schedule management, change management — get applied in a common way.
PLM forces a tight connection between program management and the status of development artifacts. For example, completing a design is the same as an artifact’s status changing from “WIP” to “released.” If the status of an artifact is directly connected to a task, a program manager gets accurate, real-time visibility into project status without a manual reporting effort. The Siemens PLM platform has integrated program management that directly connects artifact status to task management.
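A minimal sketch of that coupling, under an assumed status model: a task counts as complete exactly when the artifact it tracks reaches “released,” so project status is derived from the data rather than reported manually.

```python
# Hypothetical artifact statuses, tied one-to-one to schedule tasks.
artifact_status = {
    "brake-controller-sw": "released",
    "brake-caliper-cad":   "WIP",
}

def task_complete(artifact_id: str) -> bool:
    """A task is done exactly when its artifact is released -- there is
    no separate, manually reported status to drift out of date."""
    return artifact_status.get(artifact_id) == "released"

done = sum(task_complete(a) for a in artifact_status)
print(f"{done}/{len(artifact_status)} tasks complete")  # 1/2 tasks complete
```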
Unfortunately, out-of-the-box, all-in-one PLM for large-scale embedded-systems development is not realistic. About 10% of the critical process steps involve either in-house tools or commercial tools that have been tweaked beyond recognition. So, a data-management system must be designed to interface with any tool. A good set of standard, commercialized integrations with the most mainstream tools is important, but from a system perspective, openness matters more than the availability of any particular commercial connector.
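Openness in this sense can be sketched as a small adapter contract that any tool, in-house or commercial, implements to plug into the data-management backbone. The interface below is an assumption, not any real product’s API.

```python
from abc import ABC, abstractmethod

class ToolAdapter(ABC):
    """Contract any authoring tool implements to plug in -- the
    data-management system never depends on one specific tool."""

    @abstractmethod
    def export_artifacts(self) -> list:
        """Return the tool's deliverables in a neutral form."""

    @abstractmethod
    def notify_change(self, artifact_id: str) -> None:
        """Tell the tool that an upstream artifact changed."""

class InHouseSimToolAdapter(ToolAdapter):
    """Adapter for a hypothetical in-house simulation tool."""
    def export_artifacts(self) -> list:
        return [{"id": "SIM-RESULT-007", "type": "simulation-result"}]
    def notify_change(self, artifact_id: str) -> None:
        print(f"re-queueing simulations affected by {artifact_id}")

InHouseSimToolAdapter().notify_change("ECU-WINDOW-CTRL")
```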
Moving from the process most manufacturers currently operate to a systems-driven model is a big transition. It changes how products are defined and how people work and interact. Today, development leaders typically own components or subsystems of the product — such as a brake assembly or a drivetrain. In the systems-driven approach, development leadership must migrate toward the features or functions — for instance, “crash safety” — that are implemented by several electronic and hardware subsystems. That leadership spans many teams and includes suppliers and external partners.
So where to start implementing systems engineering? There is no single answer. In a major transition, it is imperative to quickly show quantifiable benefits from the sometimes painful changes to processes, methods, and tools. Pick a large problem that creates a lot of nonconformance cost and solve it first. One example might be hardware-software traceability. Most manufacturers of high-tech products continue to find glitches in embedded software but don’t have an accurate way to identify, down to the serial number, which shipped products are impacted and need to be recalled or serviced to fix the problem.
Often, the understanding of compatibility and dependency is so limited that the upgrade takes on a much larger scope than necessary. So a lack of traceability translates directly into huge additional cost. Companies that build a release process for an entire mechatronic BOM have a leg up in managing all the revisions and assessing compatibility between hardware and software. They get tangible, measurable benefits from being able to trace the dependencies among all these artifacts, from implemented functions back to requirements, and thereby avoid warranty costs.
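As a sketch of what hardware-software traceability buys: if the as-built record ties each serial number to the exact software version loaded at the factory, identifying the recall population becomes a simple query. The records and version strings below are invented.

```python
# As-built records: which software version shipped in which unit
# (hypothetical serial numbers and versions).
as_built = [
    {"serial": "VIN-1001", "sw": "window-ctrl 2.3"},
    {"serial": "VIN-1002", "sw": "window-ctrl 2.4"},
    {"serial": "VIN-1003", "sw": "window-ctrl 2.3"},
]

def units_to_recall(buggy_version: str) -> list:
    """Identify, down to the serial number, which shipped products
    carry the defective software and need service."""
    return [u["serial"] for u in as_built if u["sw"] == buggy_version]

print(units_to_recall("window-ctrl 2.3"))  # ['VIN-1001', 'VIN-1003']
```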
One must acknowledge that the transition to a systems-driven approach constitutes a change of organization, roles, and culture, along with processes and tools. As such, it doesn’t come shrink-wrapped in a tool, even if tool vendors love to suggest it does. Plan for the implementation of a systems-driven approach to take several years.
How do you tell if a system is “open”?
An open system should: