President Biden's landmark Executive Order on AI technologies sends a strong signal to Big AI Tech and federal agencies that fostering the technology's potential for the greater good, while mitigating its potential for great harm, is a key White House priority.
On the one hand, the EO tackles concerns about the government's ability to keep pace with AI adoption, emphasizing that the US Executive Branch has a role to play in the technology's safe development and deployment.
On the other hand, the EO is a reaction to Congressional stupor, with US lawmakers incapable of agreeing on a budget or international aid packages, let alone on the highly complex and intertwined issues of data privacy, national security, competition and AI safety.
The AI EO establishes a comprehensive set of priorities to drive rulemaking, standards-setting, and public-private investment in responsible AI development and use.
Critically, Biden is using his bully pulpit to urge Congress to pass a national privacy law inclusive of AI and related automated decision-making technologies.
The EO covers, among other topics:
- Concerns about discrimination and other socio-economic harms across all sectors of the economy and governmental services;
- Imperatives to ensure AI safety as a matter of national defense and foreign policy;
- Endorsements of Privacy Enhancing Technologies (PETs) and using AI to check-and-balance other AI; and
- Expectations and timelines for federal agencies responsible for regulating internal and external uses of AI.
Who is it aimed at?
The EO is directed at federal agencies now tasked with developing standards and methods for ensuring AI is a safe and productive technology. Less directly, but more forcefully than with pre-EO hearings and discussions, the EO is aimed at high-stakes AI developers, along the lines of:
- Do you wish to work with the US Government, America's biggest spender, on issues ranging from education to law enforcement? Upcoming rules, standards and independent testing procedures will be for you.
- Is the technology you are developing critical to national interests, or perhaps detrimental to them if mismanaged? Ditto.
The fascinating point here is that the White House invoked the Defense Production Act to ensure the EO is carried out in the void of Congressional lawmaking. This is the same Korean War-era statute the previous administration used to ramp up production of N95 masks and other PPE. Put another way, regulating AI is a national defense and public safety imperative.
Federal responses include:
- NIST creating a consortium to enhance AI system safety standards;
- The FTC marshaling its attention and powers under the FTC Act to tackle AI-related consumer harms;
- The HHS forming a task force for responsible AI in healthcare; and
- The OPM opening federal AI job positions via AI.gov.
Still, in the absence of actual legislation, the EO's effectiveness depends largely on how agencies interpret and then act on its directives within the limits of their Congressional authority. As seen when SCOTUS clipped the EPA's wings, those limits may very well be tested in court.
There's also the matter of the next President nixing the EO...
The EO is a continuation of the Administration’s steering on cybersecurity and critical infrastructure, privacy protections, and consumer rights in the age of AI. Despite innovation and competitive concerns from the tech community, this is undoubtedly a step forward for the US – with Congress or without.
Today's White House has a once-in-a-generation team of Lina Khan, Rebecca Slaughter and Alvaro Bedoya (FTC); Jonathan Kanter (DoJ); Rohit Chopra (CFPB); and others – savvy regulators who have been innovative in wielding the limited powers Congress granted them.
To borrow from Apple's Tim Cook, AI is now a "fundamental" technology, one requiring close coordination among policymakers and enforcers. One way or another, at the federal or state level, AI will be regulated in the U.S.