Artificial intelligence (AI) solutions are set to upend a wide variety of industries. By mimicking the predictive, classification and data-intensive problem-solving capabilities of the human mind, these technologies are poised to dramatically increase the degree of automation across fields including medicine, transportation and manufacturing, where machines can complete similar tasks more rapidly. Already, we’re seeing early versions of self-driving cars, automated customer service chatbots, robo-investors and even automated tutoring and writing services.
Basically, AI is positioned to soon take over a large number of tasks that have historically been completed by human workers—a fact that has led to considerable concerns about people being displaced from their jobs in the coming years.
So, it’s more than a little ironic that AI is also creating a need for a new set of leadership capabilities, ones that can’t easily be met by technology or by classical leadership skills. The future will bring AI working collaboratively with humans, enabling the best capabilities of human-machine teams.
In general, leadership requires the sort of “soft” or “people” skills that machines struggle to replicate. It’s unlikely, to cite a few examples, that we’ll see a robotic NBA coach, Fortune 500 CEO or military platoon leader in our lifetimes. Not only are these ideas a bit unsettling, but AI simply isn’t very good at the sort of intuitive, holistic decision-making required to organize people around a common goal and motivate team members to give their best effort. And leading AI teams presents its own set of unique challenges.
Here are four of the biggest hurdles facing AI leaders:
1. Starting in the “Deep End”
Historically, technical professionals could expect to be principal contributors within their organizations for seven to 10 years (or even more) before being promoted to leadership roles.
That’s no longer the case.
For one, Millennials and members of Gen Z aren’t content to sit on the sidelines. If they feel they have more skills or knowledge than the people above them, they’re going to demand more responsibility, and they’ll often leave if they don’t get it. Perhaps even more importantly, these younger workers are often right: they do bring a high level of skill and talent with them.
The fact is, the technical world is simply changing too quickly for many mid- and late-career technical professionals to keep up. While these established employees were working on what are now considered “legacy” technologies, younger workers were developing deep expertise in emerging AI solutions while still in school.
As a result of this dynamic, the people chosen to lead AI teams are very often quite young and inexperienced in the business world. They have the technical knowledge required to succeed, but they lack the benefit of having spent years watching what sorts of leadership tactics work best.
Call it trial by fire, being thrown in the deep end or some other metaphor: This creates a huge challenge for AI leaders. If they don’t find a way to overcome it, it can easily lead to their failure, at significant detriment and cost to their organizations.
2. Multi-Disciplinary Teams
By their very nature, AI solutions tend to require teams with disparate skill sets. It’s one thing for an emerging leader to manage a team of people with technical backgrounds that are similar to their own. It’s quite another to ask someone to lead a team made up of data scientists, machine learning experts, computing specialists, social engineers and others.
Even building something as enormous and complex as a space shuttle requires mostly a mix of electrical, mechanical and control engineers. AI teams tend to be far more multi-disciplinary.
The following 10 guidelines for deploying AI system capabilities at the enterprise level emphasize the importance of having multi-disciplinary teams with complementary skills:
- Understand the user’s AI needs.
- Develop a clear strategic vision and roadmap for the AI system.
- Strengthen the AI team by fostering internal and external relationships.
- Build a multidisciplinary and diverse team with complementary skills.
- Continue to expand AI team skills as the future of work evolves.
- Demonstrate an initial AI capability, then iterate.
- Identify measurable metrics.
- Verify individual subcomponents and validate end-to-end AI systems.
- Secure the AI system both physically and against cyber threats.
- Attend to ethics principles for AI.
The team, collectively, must have a clear understanding of the ultimate user and customer needs. It must start with those requirements to define the types of data, machine learning techniques and computing resources needed to effectively deploy the AI system’s capabilities.
This requires leaders of AI teams to take a systems view of the product or service they’re offering or developing. Rather than focusing on, say, machine learning or data science alone, emerging AI leaders may want to view their work through a more holistic lens. There are plenty of places to learn the ins and outs of algorithms and data science, but opportunities to consider an end-to-end AI system in its entirety, along with the leadership challenges it raises, are far scarcer.
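To make the “end-to-end” idea a little more concrete, here is a minimal sketch in Python. The function names and toy data are hypothetical, not drawn from the article or the course; the sketch simply shows how a systems view connects user requirements, data conditioning, modeling, verification of individual subcomponents and end-to-end validation against measurable metrics.

```python
# Illustrative sketch only: a skeletal end-to-end AI pipeline in which each
# subcomponent is verified on its own and the whole chain is validated
# against measurable, user-facing metrics. All names are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Metrics:
    accuracy: float      # measurable quality metric the user cares about
    latency_ms: float    # measurable computing/deployment metric


def ingest_data(source: str) -> List[Dict]:
    """Data conditioning: collect and clean records from the stated source."""
    return [{"text": f"example record from {source}", "label": 0}]


def verify_subcomponent(records: List[Dict]) -> bool:
    """Verify an individual subcomponent (here, a simple data-quality check)."""
    return all("text" in r and "label" in r for r in records)


def train_model(records: List[Dict]) -> Callable[[Dict], int]:
    """Machine learning step: fit a (stand-in) classifier to the data."""
    return lambda record: 0  # placeholder model for illustration


def validate_end_to_end(model: Callable[[Dict], int],
                        records: List[Dict]) -> Metrics:
    """Validate the full chain against the agreed-upon metrics."""
    correct = sum(model(r) == r["label"] for r in records)
    return Metrics(accuracy=correct / len(records), latency_ms=5.0)


if __name__ == "__main__":
    data = ingest_data("user_requirements")
    assert verify_subcomponent(data), "subcomponent verification failed"
    model = train_model(data)
    print(validate_end_to_end(model, data))
```

The point of the sketch is less the code itself than the shape of the reasoning: a leader thinking at the level of `validate_end_to_end` and its metrics, rather than at the level of any single algorithm, is taking the systems view the guidelines above describe.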
3. Retention of Talent
Remember how I said earlier that younger employees, in particular, will readily jump from a job if they don’t feel as though their skills are being properly utilized?
That’s not an idle threat.
The tech sector has the highest employee turnover rate of any industry, with around 13% of employees leaving their jobs in a given year. And there are even more opportunities available for workers with very specialized and in-demand skills—including the skills necessary to contribute effectively to AI teams.
A high churn rate can really hinder the success of AI teams, especially when valuable contributors leave in the middle of an important project. Motivating these talented individuals to stay in their roles is one of the most pressing challenges facing AI leaders.
4. Ethics
If you’re leading a car company, it’s pretty obvious how your product is going to benefit society: It’s going to help people get from Point A to Point B. Companies are always striving to make their vehicles safer and more stylish, and of course there’s a growing movement toward both electric vehicles and self-driving cars. (There’s AI again—it’s everywhere!) But for the most part, the automobile is a well-established product with well-established benefits.
The same can’t yet be said for AI.
In many ways, the tech sector is still largely figuring out how AI is going to affect society. In addition to the job worries mentioned above, many people are concerned about the problems posed by “deep fake” videos, the use of facial recognition for surveillance and other AI applications.
Many employees want to work on projects that they feel will have a significant social benefit—or, at the very least, want to work on projects that don’t create new problems. It’s up to AI leaders to push the work of their employees in a direction that the entire team can be proud of.
These challenges are considerable, but they’re not insurmountable. In my MIT Professional Education class, students examine these key drivers through the lens of an end-to-end AI system architecture. The course addresses the most significant leadership challenges in building AI products or services, and it culminates with each student developing an AI strategic plan to guide their leadership in building AI systems.
By confronting and finding ways to overcome these obstacles early on, AI leaders can unlock the untold potential that lies ahead for AI.
David R. Martinez, co-instructor of the MIT Professional Education Course Engineering Leadership in the Age of AI, is associate head in the Cyber Security and Information Sciences Division at MIT Lincoln Laboratory. His focus is on the strategic and innovative directions of the division in the areas of artificial intelligence for cybersecurity, cyber-resilient systems, Big Data analytics and secure cloud computing.