As industries surge ahead with generative AI deployments, the effects, both positive and negative, reverberate worldwide.
By one estimate, the market more than doubled in just three years, reaching $240 billion in value and attracting more than a quarter-billion users worldwide. That global reach and impact underscore the need for risk mitigation and effective governance.
The European Union’s comprehensive AI regulations, finalized on Dec. 8, are the first of their kind, setting the pace for policymakers globally. For example, negotiators agreed on clear obligations for AI systems classified as high-risk (“due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law”). Additionally, general-purpose AI systems, and the models they are based on, will have to adhere to transparency requirements.
In the United States, President Joe Biden in October issued an executive order focused on AI’s impact on national security and discrimination, essentially seeking to manage the safe development of AI without preventing companies from profiting.
Efforts to Govern AI
The MIT Schwarzman College of Computing and the MIT Washington Office have prepared policy briefs intended to serve as an overall governance framework for the U.S. and to help guide policymakers’ decisions.
The main paper, “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” noted that “users (and the public at large) should have a clear idea of what they are getting (and not getting) from an AI system, and should be able to feel confident about that understanding through mechanisms such as contractual relationships, disclosures, and audits.”
The paper lays out fundamental principles to guide leaders in approving the deployment of beneficial AI. Such a framework would prioritize security, safety, shared prosperity, and democratic and civic values, the authors note, suggesting in a press note that AI tools can be regulated by existing U.S. government entities that already oversee the relevant domains.
“As a country we’re already regulating a lot of relatively high-risk things and providing governance there,” said Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, who helped steer the project, which stemmed from the work of an ad hoc MIT committee. “We’re not saying that’s sufficient, but let’s start with things where human activity is already being regulated, and which society, over time, has decided are high risk. Looking at AI that way is the practical approach.”
The paper highlighted the importance of having AI providers define the purpose and intent of AI applications in advance, an approach that fosters examination of new technologies within the context of the existing regulations and principles germane to any given AI tool.
MIT’s policy brief was authored by a panel of leaders in AI research, including Dan Huttenlocher, dean of the MIT Schwarzman College of Computing; Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science; and David Goldston, director of the MIT Washington Office, with guidance from an ad hoc committee.
Other topics in the series of policy briefs include:
- Large Language Models
- Can We Have Pro-Worker AI? Choosing a Path of Machines in Service of Minds
- Labeling AI-Generated Content: Promises, Perils, and Future Direction