Design limitations, workforce challenges, high integration costs, slow productivity growth… These are the nemeses for companies grappling with industrial automation.
Ben Armstrong and Julie Shah, researchers and coleaders of MIT Work of the Future, an initiative that supports multidisciplinary research on the ways technology is changing work, offer forthright advice: Avoid zero-sum automation.
The term “zero-sum” comes from game theory, where it describes a situation in which one player’s gain comes only at another player’s expense. In the context of automation, Armstrong and Shah noted, what a company gains in productivity it tends to lose in process flexibility.
A better approach, they argue, is “positive-sum automation,” which they describe as using technology design and strategy to improve both productivity and flexibility.
The researchers offer a three-pronged approach to reducing complexity and advancing widespread adoption of automation that centers on design, integration and measurement:
- Design: Automation tools should incorporate low-code programming interfaces so that employees need minimal technical skills to repair or adjust them in real time.
- Integration: A bottom-up approach to automation is preferred, because line employees—who have the closest perspective on a process—also have the best vantage for recommending and developing how it is automated.
- Measurement: Although productivity emerges as the No. 1 reason for automation, the researchers found the actual reasons behind automation to be more nuanced. Measuring success should take these nuances into consideration, and companies should develop appropriate metrics for their automation projects, ranging from the site/machine level to the system and people levels.
These principles are modeled on solutions designed to work collaboratively with humans and to extend their capabilities. The automation loop the researchers envision treats the KPIs of human teams as the most relevant measure.
Armstrong and Shah’s perspective on successful automation is a reminder of how far we have come from the lights-out, factory-of-the-future experiments envisioned in the 1980s and 1990s. Today, the surge in digital transformation increasingly demands that inventive solutions in robotics, hardware and automation software (the computing intelligence that powers machines) keep pace.
The emergent technologies described below are not presented strictly to meet or fail Armstrong and Shah’s expectations, yet their potential to significantly impact a wide range of applications suggests the same underlying quest: overcoming the bottlenecks to better productivity and flexibility.
Remote Robotics Support Turns Downtime into Uptime
Consider the robotics industry as an example. More than 500,000 new industrial robots were installed worldwide in 2021 alone. According to the International Federation of Robotics, demand is driven by a host of factors, ranging from labor shortages and reshoring initiatives to rising e-commerce demand. Add the fact that robot downtime can cost a plant more than $1 million per hour and the need for more inventive workarounds becomes obvious, said Fredrik Ryden, CEO of Seattle-based Olis Robotics, which specializes in remote monitoring, control and error recovery technology for industrial robots.
“When every minute counts, you need to leverage remote tools to react as quickly as possible, no matter where you are,” he noted.
During a video interview with Machine Design, Ryden demonstrated how remote support can be part of an overall solution to labor challenges and turnover issues, particularly in situations where plants may not have trained staff. “Downtime is a big issue, but the problem is that a lot of these companies can’t even buy automation and robotics in the first place because they don’t feel that they have the skills for them,” he said.
That makes the opportunity to buy a remotely supported robot cell a very appealing proposition. When a plant experiences a downtime event, such as an outage in a robotic cell, the impact can be costly. An ability to provide rapid diagnostics in this situation allows the engineer or technician to make an accurate assessment of what caused the event and what the next move ought to be.
Partnering primarily with robot integrators that serve high volumes and big end-users, Olis Robotics’ latest offering, Olis Connect, can be controlled directly in a web browser. The plug-and-play module is delivered on an edge-hosted PC and is intended for both brand-new and legacy industrial robot arms and robotic cells. Virtual setup can be completed within 30 min.
Typically, remote access can be via corporate VPN or an industrial router. During the interview, Ryden keyed in the IP address and was immediately connected to a robotic cell located 45 min. from his office. He had full control of the demo program set up at a FANUC facility in Auburn, Wash.
Once the module is installed and secure remote access has been configured, users can monitor and manage their automation remotely. Should a problem arise, alerts are sent to the user’s device without connecting to the cloud. Users can perform error recovery actions as needed, such as releasing the grip of the robot’s end-effector or repositioning a part.
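To make the pattern concrete, here is a minimal sketch of the kind of monitoring loop such a system could run: poll a robot controller’s status over the local network and push an alert straight to an operator’s device when a fault appears, with no cloud hop in between. The endpoint URLs, status fields and fault handling are illustrative assumptions, not Olis Connect’s actual API.

```python
# Minimal, hypothetical sketch of a remote-monitoring loop: poll a robot
# controller's status endpoint on the plant LAN and notify an operator's
# device directly when a fault appears (no cloud round trip). The endpoint
# paths and field names are illustrative, not a specific vendor's API.
import json
import time
import urllib.request

CONTROLLER_URL = "http://192.168.1.50/status"   # robot cell on the local network (example address)
OPERATOR_URL = "http://192.168.1.10/alerts"     # technician's device or on-prem alert service (example)

def poll_status() -> dict:
    """Fetch the controller's current status as a JSON document."""
    with urllib.request.urlopen(CONTROLLER_URL, timeout=5) as resp:
        return json.load(resp)

def send_alert(fault: dict) -> None:
    """Post the fault details straight to the operator's device."""
    data = json.dumps(fault).encode()
    req = urllib.request.Request(OPERATOR_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

while True:
    status = poll_status()
    if status.get("state") == "FAULTED":
        # Forward the error code and timestamp so the technician can decide
        # whether a remote recovery action (e.g., opening the gripper) is enough.
        send_alert({"robot": status.get("id"),
                    "error": status.get("error_code"),
                    "time": time.time()})
    time.sleep(2)  # poll every couple of seconds
```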
The objective is not to habitually take over or intervene in the work of the robot. “We’re experts in remote controlling robots, and our job is to make sure that users remote control robots as little as possible,” Ryden explained. “When they do, the solution needs to be secure from a cybersecurity standpoint and needs to be safe from a robot safety standpoint.”
He teased the robot in a way that checked whether the safety systems were engaged. “If I were to violate a safety system by going outside of the zone where this robot is allowed to operate, the robot would shut off and I wouldn’t actually be able to restart it,” Ryden said.
The system works across robot brands. According to Ryden, Olis Robotics currently provides support for robots from Universal Robots and FANUC, addressing about 20% of the operational stock of more than 3.5 million industrial robots deployed worldwide.
Decision-makers need tangible information before making a call on how to best manage the overall situation, argued Ryden: “What they need are facts…They need to know how bad the downtime event actually is so they can determine whether they need to send the whole shift home, or tell their trucks to turn around.”
There are many benefits to giving experts, including the technician who installed the cell, remote access and an ability to view the cell. The plant benefits not only from their ability to assess what went wrong, but also to estimate how long it might take to have the system up and running again, said Ryden. They may even have insights into the lead time on parts needed. “If we can have facts faster, we can react faster,” he emphasized.
Automation Platforms Offer Centralized Communication and Integration
Among machine builders and end-users in manufacturing, the use of remote I/O modules is a commonly adopted approach to connecting sensors and actuators to PLCs. During Festo’s media educational tour back in August, Eric Rice, product market manager, Electric Automation, said the company’s portfolio of automation components was designed for easy integration and robust functionality for the equipment and machinery on the plant floor.
Festo’s distributed I/O solution CPX-AP-A, along with its established CPX-AP-I decentralized I/O, has been in development for over a decade, Rice said.
The backplane communications run between Festo devices. “The way we do that is to physically tie all of our sensors, actuators, pneumatic cylinders, etc. to remote I/O modules and pneumatic valve terminals, and connect those devices to the PLC over an industrial Ethernet connection,” Rice explained. Commonly used Ethernet protocols include EtherCAT, EtherNet/IP, Profinet and Modbus.
The costs associated with integrating devices into the code and assigning IP addresses can be prohibitive. But by introducing a gateway or bus interface, Festo has reduced the number of devices that connect to the industrial Ethernet network without sacrificing functionality. Whether physically attached on-terminal or connected via cable, all components—such as PLCs, valves, motors, drives and I/O—can be incorporated within a smart terminal under a single IP address.
While the volume of data passed back and forth and the performance of the network remain unchanged, Festo is able to decrease the customer’s cost by, as Rice put it, “putting this through a single integration point instead of multiple…The CPX-AP-I and the CPX-AP-A are the two fundamental product lines that make this happen.”
Reducing the number of controls the engineer needs for integrating devices also eliminates the need for additional IP addresses and therefore reduces costs. Rice said that Festo also provides IO-Link support for easy integration of third-party devices.
Rice explained that these advancements make it easy to use sensors and actuators from other vendors. The solution supports all major Ethernet networks, which simplifies implementations for specialty machine builders or OEMs who sell their machines to a variety of different end-users.
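The practical upshot of consolidating I/O behind one gateway is that everything can be reached at a single address. As a rough illustration, the sketch below reads a block of registers from a hypothetical remote-I/O gateway over Modbus TCP, one of the Ethernet protocols mentioned above; the IP address, unit ID and register map are assumptions, and in production the PLC would typically handle this traffic over EtherCAT, EtherNet/IP or Profinet rather than a script.

```python
# Minimal, hypothetical sketch: read a block of data from a remote-I/O gateway
# that exposes an entire terminal behind a single IP address, using Modbus TCP.
# The IP, unit ID and register addresses are examples only; error handling is
# omitted for brevity.
import socket
import struct

GATEWAY_IP = "192.168.0.20"   # single IP for the whole terminal (example)
MODBUS_PORT = 502

def read_holding_registers(ip: str, start: int, count: int, unit: int = 1) -> list[int]:
    # Build a Modbus TCP "read holding registers" (function 0x03) request.
    request = struct.pack(">HHHBBHH",
                          1,      # transaction ID
                          0,      # protocol ID (always 0 for Modbus)
                          6,      # remaining byte count (unit ID + 5-byte PDU)
                          unit,   # unit/slave ID
                          0x03,   # function: read holding registers
                          start,  # first register address
                          count)  # number of registers to read
    with socket.create_connection((ip, MODBUS_PORT), timeout=5) as sock:
        sock.sendall(request)
        header = sock.recv(9)        # MBAP header + function code + byte count
        byte_count = header[8]
        payload = sock.recv(byte_count)
    # Each register is a 16-bit big-endian value.
    return list(struct.unpack(f">{count}H", payload))

# Example: read four registers that might map to valve and drive status words.
print(read_holding_registers(GATEWAY_IP, start=0, count=4))
```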
Next-Level Perception Tools Cross the AI Threshold
No discussion about emergent tech in 2023 is complete without a mention of generative AI.
Simply defined, generative AI describes algorithms that use neural networks to identify patterns and structures within existing data to generate new content. Inputs include text, image, audio, video and code. Generative AI technology is expected to add $10.5 billion in revenue for manufacturing operations worldwide by 2033, according to ABI Research.
As advancements in generative AI take hold, decisions will need to be made regarding whether specific workloads should be located on-premises, at the edge or in the cloud. Over the past 12 months alone, generative AI has transformed text and natural language processing and has brought unprecedented advancements to operational technology across industries, particularly spanning defect detection, real-time asset tracking, autonomous planning and navigation, as well as human-robot interactions.
Generative AI offers zero-shot learning (the ability for the AI model to recognize things it hasn’t seen before in training) with a natural language interface to simplify the development, deployment and management of AI at the edge, explained Deepu Talla, NVIDIA’s vice president and general manager of Embedded & Edge Computing.
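To illustrate what zero-shot perception with a natural-language interface looks like in practice, the sketch below scores a camera image against plain-language labels using an off-the-shelf CLIP vision-language model through Hugging Face’s transformers pipeline. It demonstrates the concept rather than NVIDIA’s Jetson or Metropolis tooling, and the image path and label set are assumptions for the example.

```python
# Minimal sketch of zero-shot visual inspection with a natural-language
# interface: a CLIP-style vision-language model scores an image against
# labels it was never explicitly trained on. Labels and image path are
# illustrative examples.
from transformers import pipeline

classifier = pipeline(
    task="zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",
)

# Describe the conditions of interest in plain language; changing the task
# means changing the labels, not retraining the model.
labels = ["a scratched metal part", "a dented metal part", "a defect-free metal part"]

results = classifier("inspection_photo.jpg", candidate_labels=labels)
for r in results:
    print(f"{r['label']}: {r['score']:.2f}")
```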
Accelerated computing that enables the development of next-generation robotics solutions is a major priority for NVIDIA. In October, the company announced major expansions to the NVIDIA Jetson platform, giving developers access to tools and tutorials for deploying open-source LLMs, diffusion models for interactive image generation, vision language models (VLMs) and vision transformers (ViTs) that combine vision AI and natural language processing to provide a comprehensive understanding of the scene.
Additionally, the launch of NVIDIA Metropolis brings a collection of powerful APIs and microservices for developers to easily develop and deploy applications on the NVIDIA Jetson edge-AI platform. Siemens, for example, is using the application framework for edge AI to connect fleets of robots and IoT devices in the industrial context.
“Imagine the advancements that we’ll see for autonomous machines, robotics—where the camera is one of the most important sensors, of course—and we can adapt it to other sensors, including Lidar, radar and so on,” Talla said during a press briefing. “Computer vision is right at the tipping point.”
During the press call, Talla reinforced the case for generative AI. Getting from an initial design to having AI models and systems in production can take years, he said, because of the slow development cycle and the volume of data needed before a neural network performs well at a specific task. Neural networks do not generalize very well, he noted. In contrast, because generative AI is based on large language models, he described it as “fairly generalizable.”
“That leads to faster development cycles, higher accuracy and also one-shot or even zero-shot learning, in some cases,” Talla said. Additionally, using natural language prompts is democratizing solutions, so anybody can communicate with an AI model, prompt it and get the right output.
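As a simple illustration of that kind of natural-language interaction, the sketch below asks a publicly available visual-question-answering model a plain-English question about an inspection image. The model, image path and question are example choices, not a specific vendor’s deployment.

```python
# Minimal sketch: prompt a vision-language model with a plain-English question
# about a factory image. Model, image path and question are illustrative.
from transformers import pipeline

vqa = pipeline(
    task="visual-question-answering",
    model="dandelin/vilt-b32-finetuned-vqa",
)

answers = vqa(image="inspection_photo.jpg",
              question="Is there a visible defect on this part?")
for a in answers:
    print(f"{a['answer']}: {a['score']:.2f}")
```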