U.S. President Joe Biden signed an ambitious executive order on artificial intelligence that aims to balance the needs of cutting-edge technology companies with national security and consumer rights, creating an early set of guardrails that could be fortified by legislation and global agreements.
The order is an initial step meant to ensure that AI is trustworthy and helpful, rather than deceptive and destructive. Though it will likely need to be augmented by congressional action, it seeks to steer how AI is developed so that companies can profit without putting public safety in jeopardy.
Using the Defense Production Act, the order requires leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release.
The Commerce Department is to issue guidance to label and watermark AI-generated content to help differentiate between authentic interactions and those generated by software.
The extensive order touches on matters of privacy, civil rights, consumer protections, scientific research, and worker rights.
White House chief of staff Jeff Zients recalled that, as the order was being formulated, Biden directed his staff to move with urgency.
AI has the potential to accelerate cancer research, model the impacts of climate change, boost economic output, and improve government services, among other benefits. But it could also warp basic notions of truth with false images, deepen racial and social inequalities, and provide a tool to scammers and criminals.
The order builds on voluntary commitments already made by technology companies. It’s part of a broader strategy that administration officials say also includes congressional legislation and international diplomacy, a sign of the disruptions already caused by the introduction of new AI tools such as ChatGPT that can generate text, images, and sounds.
Several experts in the field applauded the move. David Brauchler, principal security consultant at NCC Group, said, “I agree with the government that one of the greater risks of generative AI is the ability to create near-instant humanlike content, which opens new doors for threat actors, including spam and social engineering. AI systems that can impact human lives such as safety systems, critical infrastructure, and finance should also implement measures to safeguard the system’s assets, even when the AI model fails. There’s a balance of interests between the government’s goal to prevent abuse of these automated systems and their usefulness in future technologies.”
The long-awaited Biden Administration Executive Order on Artificial Intelligence is multidimensional, building on prior administration actions. Overall, it is the right move to put forward a framework for the responsible use of artificial intelligence in government, said Jordan Burris, vice president and head of public sector strategy at Socure.
“Yet, the directive’s lofty goals—to create standards for the safe, secure, and interoperable usage of AI systems, regulate the use of AI technologies across sectors, increase the number of skilled AI practitioners, evaluate agency usage of commercially available personal identifiable information, and assess agency uses of AI—will result in implementation complexity and paralysis without the right sense of urgency and alignment to execute. Beyond words on paper, there must be a change in budgets, culture, and practices if we are to be successful in turning the corner,” Burris warned.
Arvind Krishna, chairman and CEO of IBM, noted that the executive order sends a critical message: that AI used by the United States government will be responsible AI.
“IBM proudly supports the White House voluntary commitments on AI which align with our own long-standing practices to promote trust in this powerful technology. Through those commitments and our watsonx AI platform that prioritizes scalability and governance, IBM is uniquely positioned to help federal agencies embrace AI in ways that are responsible and trusted,” Krishna said.