Cybersecurity

Build Resilience to Hardware Vulnerabilities

Jan. 28, 2020
Balancing security and safety is a delicate task and needs extra awareness from design engineers.

The growing security risks associated with connected devices and machine learning require a level of cybersecurity expertise that hasn’t been needed in the past. Managing cyber risk is a difficult and complex challenge for companies of all sizes—from small firms to multinational corporations.

A pair of recent, high-profile breaches illustrate just how disastrously vulnerabilities can be exploited.

After personal data for thousands of Ring home security camera owners was compromised, the Santa Monica-based company (now owned by Amazon) was hit with a lawsuit for failing to fulfill its promise of providing privacy and security for its customers. Ring claimed the breach was perpetrated by hackers, but security investigators deduced that the leaked data was taken from a company database, as it included usernames, passwords, camera names and time zones.

And when Schneider Electric was attacked in 2017, operations at a critical infrastructure facility in the Middle East were crippled. The attackers used sophisticated malware, known as Triton, to infiltrate one of Schneider’s Triconex safety systems, a last line of defense that kicks in when controllers detect dangerous conditions.

These examples of rogue attacks are a sobering reminder of the vulnerabilities in devices and equipment systems and the disastrous effects they could lead to, says Bahman Sistany, senior security analyst and researcher at Irdeto, a firm specializing in digital platform security and IoT-connected industries.

Machine Design talked to Sistany about confidentiality and privacy of machine learning systems, the growing use of open-source software, and how privacy can be improved with appropriate intrusion detection systems.

Q: The development of machine learning drives many business-model modifications in the manufacturing process. From a security standpoint, what are the most significant changes that design engineers should be aware of now?

Sistany: The promise of smart manufacturing brings what some people call hyper-connectivity along with real-time and constant exchange of massive amounts of data.

Machine learning is an important weapon for detecting anomalies and triggering preventive actions when an attack (or a failure) is underway. And given the degree of connectivity, the rate of data exchange and the resulting attack surface, the likelihood of various cyberattacks has significantly increased.

Outside of using ML for anomaly and intrusion detection, designing robust, safe and secure cyber-physical systems is an important and active area in the new business models. Although most security issues are not new, the sophistication level of attackers has improved and new tools and technologies are readily available. Implementing robust protections against various attacks is more challenging than ever. Any protection method, tool or approach, including the use of ML itself, now becomes a point of attack and needs to be protected in turn.

At a more micro level, design engineers could certainly use some guidelines. Saltzer and Schroeder’s principles for designing secure software systems (circa 1975) are still applicable today, so they should all be considered as good design principles. As far as ML-based systems are concerned, a few of their principles are even more pertinent. (See Sistany’s modified version of Saltzer and Schroeder’s principles in the sidebar below.)

Q: What are some of the big-ticket vulnerabilities and issues that need to be identified and overcome in smart design and smart manufacturing?

Sistany: A common theme in smart manufacturing and its promise of improved manufacturing optimization and flexibility is increased automation, which implies more connectivity of “things.” The high automation and connectedness in turn result in a greater attack surface and more potential vulnerabilities.

The increased automation in smart manufacturing is of course realized using distributed/collaborative machine learning algorithms (such as federated learning), resulting in significant reduction in costs and time of production, primarily by yielding insights to improve product quality and production yield. Many of these benefits rely on having reliable data inputs from a wide variety of process points. As these process points are spread over various stakeholders in the supply chain, the ability to apply ML is going to rely upon having a reliable “digital thread” of product, production and performance.

Security techniques are required to maintain the integrity of the digital thread—a critical requirement for ML to be applied successfully. Another important concern is functional safety for critical components and infrastructure. Balancing security and safety is often a delicate task and needs extra awareness and attention from design engineers.
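
As one simple illustration of an integrity technique for data flowing along a digital thread, the sketch below (Python standard library only, with hypothetical record fields and a placeholder key) tags each process record with an HMAC so that downstream consumers, including ML pipelines, can detect tampering. It is a minimal sketch, not a complete integrity scheme; key management and the record format are simplified assumptions.

```python
import hmac
import hashlib
import json

# Shared secret between the process point producing data and the consumers
# verifying it. Key distribution and rotation are out of scope for this sketch.
SECRET_KEY = b"replace-with-a-managed-key"

def sign_record(record: dict) -> dict:
    """Attach an HMAC tag so tampering with the record can be detected."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "tag": tag}

def verify_record(signed: dict) -> bool:
    """Recompute the tag over the record and compare in constant time."""
    payload = json.dumps(signed["record"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["tag"])

signed = sign_record({"station": "press-07", "cycle_time_s": 12.4})
print(verify_record(signed))             # True: record is intact
signed["record"]["cycle_time_s"] = 9.0   # tampering along the thread
print(verify_record(signed))             # False: integrity check fails
```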

A separate use of machine learning in smart manufacturing stems from the serious risk of cyberattacks, which has escalated dramatically with increased connectivity. Distributed/collaborative machine learning on physical data to detect anomalies and intrusion attempts is a promising emerging technique for discovering cyber-physical attacks in smart manufacturing.

Whenever ML is used, whether for improving product quality and production yield or for intrusion detection, the usual known attacks on ML systems are in scope and countermeasures need to be considered. By usual known attacks I mean the attacks already covered in the literature on confidentiality and privacy of ML systems: model extraction, model inversion and malicious training are some of them. The adversarial goal for each is different: model extraction aims to learn the model itself, model inversion aims to learn the training data or its properties, and malicious training aims to exfiltrate sensitive training data.
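
To make the first of those attacks concrete, here is a toy, NumPy-only sketch of model extraction against a made-up linear classifier: the attacker sees only label responses from a query API, yet fits a surrogate that largely agrees with the hidden model. The victim model, query budget and fitting method are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Parameters of the deployed "victim" classifier; the attacker never sees these.
secret_w = np.array([2.0, -1.0, 0.5])

def query_victim(X):
    """Black-box prediction API: labels only, no parameters or confidence scores."""
    return (X @ secret_w > 0).astype(int)

# Attacker: generate synthetic queries, collect the victim's labels, and fit a
# surrogate (least squares on +/-1 labels as a crude stand-in for real training).
X_queries = rng.normal(size=(2000, 3))
y_queries = query_victim(X_queries) * 2 - 1
surrogate_w, *_ = np.linalg.lstsq(X_queries, y_queries, rcond=None)

# Agreement on fresh inputs approximates how much of the model was "stolen."
X_test = rng.normal(size=(1000, 3))
agreement = np.mean((X_test @ surrogate_w > 0).astype(int) == query_victim(X_test))
print(f"surrogate/victim agreement: {agreement:.2%}")
```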

Q: Open-source is viewed as an important step toward future-proofing production lines, as well as to ensure compatibility across the manufacturing floor. What are the design and deployment vulnerabilities one can expect from moving in this direction?

Sistany: Open-source software (OSS) is a fact of life in today’s software systems, including industrial control systems and basically anywhere software is used. The advantages of using OSS are hard to ignore and the cost of duplicating the functionality of OSS is prohibitive. It is no surprise that most organizations today will have some OSS running on their computer systems.

Comparing open-source to closed-source software, the one major advantage of OSS is the “many-eyes” element that exists in most open-source projects. Typically, not only do many more people contribute to an OSS project than to your average closed-source one, but many more also use it, so bugs are discovered early and often. However, the fact that the software is open to many eyes is also a disadvantage: malicious actors can inspect the source of an OSS package looking for specific vulnerabilities and develop exploits to take advantage of them.

Other ways malicious actors could pose risks to organizations include inserting non-obvious backdoors and Trojans. Organizations that use OSS need first and foremost a solid security and best-practices policy that guides designers on how to select, inspect, test and audit OSS packages. Awareness of common vulnerabilities and frequent, purposeful software updates must be part of the policy, which needs to be implemented and enforced throughout the organization.
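
As a small, hypothetical illustration of one corner of such a policy, the sketch below checks pinned dependency versions against a locally curated advisory list. The package names, versions and version parsing are made up and deliberately simplified; real workflows would normally rely on a dedicated auditing tool fed by public vulnerability databases.

```python
def parse_version(v: str):
    """Very simplified: handles plain dotted versions like '1.4.2' only."""
    return tuple(int(part) for part in v.split("."))

# Dependency pins the organization has approved for a build (hypothetical names).
PINNED = {"examplelib": "1.4.2", "othertool": "0.9.1"}

# Locally curated advisories: package -> first version that fixes a known issue.
ADVISORIES = {"examplelib": "1.4.3"}

for package, pinned in PINNED.items():
    fixed = ADVISORIES.get(package)
    if fixed and parse_version(pinned) < parse_version(fixed):
        print(f"{package} {pinned} is affected by a known issue; upgrade to >= {fixed}")
```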

Q: In your article, “What Would Privacy-Preserving Machine Learning Look Like?” you refer to access control as a way to protect a system. Please explain how privacy concerns can be improved through privacy models and federated learning.

Sistany: In distributed learning environments such as federated learning environments, besides the usual node-level authentication and authorization, finer-grained access control may be needed depending on the attack model. For example, limiting access to the internals of a model is certainly a good design principle (see Saltzer and Schroeder’s principle of “Separation of privilege”). The access could even be policy-based and look something like “only certain queries are allowed by certain nodes,” reminiscent of role-based access control (RBAC) systems.
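
As an illustration of such policy-based control, here is a minimal sketch in the spirit of RBAC. The roles, node names and query types are hypothetical, and a real deployment would tie them to authenticated node identities rather than plain strings.

```python
from typing import Dict, Set

# Which query types each role may issue (hypothetical roles and queries).
POLICY: Dict[str, Set[str]] = {
    "aggregator": {"submit_update", "read_global_model"},
    "edge_node":  {"submit_update"},
    "auditor":    {"read_metrics"},
}

# Role assigned to each participating node (hypothetical node names).
NODE_ROLES: Dict[str, str] = {
    "plant-a-plc-01": "edge_node",
    "cloud-server":   "aggregator",
    "qa-dashboard":   "auditor",
}

def is_allowed(node: str, query: str) -> bool:
    """Deny by default: a query is allowed only if the node's role lists it."""
    role = NODE_ROLES.get(node)
    return role is not None and query in POLICY.get(role, set())

print(is_allowed("plant-a-plc-01", "read_global_model"))  # False: edge nodes cannot inspect internals
print(is_allowed("cloud-server", "read_global_model"))    # True
```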

Let’s be a bit more specific and explore the possible attack contexts, but let’s first quickly define federated learning. Federated learning (FL) is distributed learning where model training no longer takes place at a central cloud server. Model training instead is distributed among many client devices, which use local data to train a local model and send only their model parameter updates. The server aggregates the individual updates from clients to build a shared model. Keeping local data on the devices and not sharing it with a server helps allay privacy concerns to a large degree. In short, bring “code to data” instead of “data to code.”
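
As a rough sketch of that pattern, the toy example below (plain NumPy, with made-up client data and a simple linear-regression step standing in for whatever local training each client actually runs) shows the round structure of federated averaging: each client refines the shared parameters on data that never leaves it, and the server only averages the returned updates.

```python
import numpy as np

rng = np.random.default_rng(42)

def local_update(global_weights, local_X, local_y, lr=0.1, epochs=5):
    """Client side: refine a copy of the global model on data that never leaves the device."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = local_X.T @ (local_X @ w - local_y) / len(local_y)
        w -= lr * grad
    return w

# Three clients, each holding private data the server never sees.
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
global_w = np.zeros(3)

for _ in range(10):
    # Each client returns only its updated parameters, not its data.
    updates = [local_update(global_w, X, y) for X, y in clients]
    # Server side: aggregate by averaging the parameter updates.
    global_w = np.mean(updates, axis=0)

print("shared model after 10 rounds:", global_w)
```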

For a concrete example of where FL is being used, consider intrusion detection systems (IDS). Attackers are getting better at hiding and obfuscating their attacks by blending in any side effects or anomalies that the intrusion causes. To improve the accuracy of intrusion detection and also to shorten the delay in discovering an anomaly, FL and other distributed ML approaches are used to pool together monitoring data gathered by individual edge devices. 

For ML-based systems, we typically talk about two distinct phases: the training phase and the inference phase. Continuing with the FL example, two attack contexts are noteworthy.

  • Model poisoning attacks: The goal of an attacker is to get the model to misclassify a set of chosen inputs with high confidence; that is, to poison the global shared model in a targeted manner. Model poisoning attacks are the counterpart of data poisoning attacks in non-FL contexts. Model poisoning is also a kind of evasion attack, or at least could lead to one. An evasion attack uses an “adversarial example,” an input that is slightly perturbed so that the model responds with the attacker’s desired classification, thereby evading the intended classification. In the federated IDS example, the attacker’s desired classification is “benign.” (A toy sketch of a poisoned aggregation follows this list.)
  • Membership inference attacks by a malicious participant (or malicious aggregator/server): What can be inferred about the local data used for training the local model? Given an exact data point, was it used to train the model for a specific client? The malicious participant in this scenario, knowing his own local updates and having access to snapshots of the shared model, may find out another participant’s local update. In the federated IDS example, imagine a low-security edge device has been breached and an attacker is trying to learn about sensitive information coming from other higher security devices contributing to the shared/federated model.
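
To make the first scenario tangible, the toy calculation below shows how a single malicious participant that knows the averaging rule can scale (“boost”) its update so the shared model lands near parameters of its own choosing, despite the honest majority. The numbers are purely illustrative.

```python
import numpy as np

# Updates from three honest clients and the parameters the attacker wants the
# shared model to adopt (e.g., weights that misclassify its chosen inputs).
honest_updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
attacker_target = np.array([-5.0, -5.0])

n = len(honest_updates) + 1
# Knowing the server averages all n updates, the attacker submits an update
# scaled so the aggregate lands on its target.
malicious_update = n * attacker_target - sum(honest_updates)

aggregate = (sum(honest_updates) + malicious_update) / n
print("poisoned shared model:", aggregate)   # ~[-5, -5], far from the honest consensus
```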

Model updates in an FL context still need to be protected via differential privacy or secure multiparty computation. However, a balancing act is needed, as these approaches often provide privacy at the cost of reduced model performance or system efficiency.
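
A minimal sketch of that kind of protection, clipping each client update and adding Gaussian noise before aggregation in the style of differentially private training, is shown below. The clip bound and noise scale are illustrative and are not calibrated to any formal (epsilon, delta) guarantee; raising the noise improves privacy but degrades model accuracy, which is exactly the balancing act mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(update, clip_norm=1.0, noise_std=0.5):
    """Clip the update's L2 norm, then add noise; more noise = more privacy, less accuracy."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(scale=noise_std, size=update.shape)

raw_update = np.array([0.8, -2.4, 0.3])
print("raw update:      ", raw_update)
print("protected update:", privatize(raw_update))
```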

Editor’s note: Computer scientists Jerome Saltzer and Michael Schroeder in their 1975 article, “The Protection of Information in Computer Systems,” devised principles for the design of secure software systems.

What follows is the list of Saltzer and Schroeder’s principles. Bahman Sistany has made some minor changes; for the purposes of this article, the ones most pertinent to ML are followed by relevant examples.

1. Economy of mechanism: Keep the design as simple and small as possible.

2. Fail-safe defaults: Base access decisions on permission rather than exclusion.

  • For example, in a federated learning context, deny anomalous model updates by default (a brief sketch of this idea appears after the list).

3. Complete mediation: Check every access to every object for authority.

4. Open design: Assume the design and specs are not secret.

  • Always assume a white-box context where attackers have access to training data and model parameters, since even black-box attacks have been shown to be extremely effective against ML models.

5. Separation of privilege: Fine-grained privileges, one per operation, instead of wholesale privileges that allow multiple operations.

  • Federated learning’s approach to preserving privacy is directly based on this principle: federation also implies separation of privilege.

6. Least privilege: Allow only the minimum necessary privilege and no more.

7. Least common mechanism: Minimize the amount of mechanism common to more than one user and depended on by all users.

8. Psychological acceptability, also known as usable security: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly.

  • The formal and accepted definition of privacy in ML systems, differential privacy, is not intuitive for humans and cannot be expected to be applied directly by users, so design systems with privacy in mind with little or no involvement from users.

9. Work factor: Compare the cost of circumventing the mechanism with the resources of a potential attacker.

10. Compromise recording: It is sometimes suggested that mechanisms that reliably record that a compromise of information has occurred can be used in place of more elaborate mechanisms that completely prevent loss.
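
As a brief sketch of principle 2 (fail-safe defaults) in a federated learning context, the example below rejects a client update unless it passes explicit checks, rather than accepting anything that is not flagged. The expected dimensionality and norm threshold are illustrative assumptions, not a recommended detection rule.

```python
import numpy as np

def accept_update(update, expected_dim=3, max_norm=5.0):
    """Deny by default; accept only updates that are well-formed and plausibly sized."""
    if update.shape != (expected_dim,):
        return False                       # malformed update
    if not np.all(np.isfinite(update)):
        return False                       # NaN/inf values
    if np.linalg.norm(update) > max_norm:
        return False                       # anomalously large update
    return True

print(accept_update(np.array([0.2, -0.1, 0.4])))       # True
print(accept_update(np.array([50.0, -80.0, 120.0])))   # False: rejected by default
```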

About the Author

Rehana Begg | Editor-in-Chief, Machine Design

As Machine Design’s content lead, Rehana Begg is tasked with elevating the voice of the design and multi-disciplinary engineer in the face of digital transformation and engineering innovation. Begg has more than 24 years of editorial experience and has spent the past decade in the trenches of industrial manufacturing, focusing on new technologies, manufacturing innovation and business. Her B2B career has taken her from corporate boardrooms to plant floors and underground mining stopes, covering everything from automation & IIoT, robotics, mechanical design and additive manufacturing to plant operations, maintenance, reliability and continuous improvement. Begg holds an MBA, a Master of Journalism degree, and a BA (Hons.) in Political Science. She is committed to lifelong learning and feeds her passion for innovation in publishing, transparent science and clear communication by attending relevant conferences and seminars/workshops. 

Follow Rehana Begg via the following social media handles:

X: @rehanabegg

LinkedIn: @rehanabegg and @MachineDesign
