US president Joe Biden announced new guidelines for the safe development of AI (Image: AFP via Getty Images)
An executive order on artificial intelligence issued by US president Joe Biden aims to show leadership in regulating AI safety and security – but most of the follow-through will require action from US lawmakers and the voluntary goodwill of tech companies.
Biden’s order directs a wide array of US government agencies to develop guidelines for testing and using AI systems, including having the National Institute of Standards and Technology set benchmarks for “red team testing” to probe for potential AI vulnerabilities prior to public release.
“The language in this executive order and in the White House’s discussion of it suggests an interest in being seen as the most aggressive and proactive in addressing AI regulation,” says Sarah Kreps at Cornell University in New York.
It is probably “no coincidence” that Biden’s executive order came out just before the UK government convened its own AI summit, says Kreps. But she cautioned that the executive order alone will not have much impact unless the US Congress can produce bipartisan legislation and resources to back it up – something that she sees as unlikely during the 2024 US presidential election year.
This follows a trend of non-binding actions by the Biden administration on AI. For example, last year the administration issued a blueprint for an AI bill of rights, and it recently solicited voluntary pledges from major companies developing AI, says Emmie Hine at the University of Bologna, Italy.
One potentially impactful part of Biden’s executive order covers foundation models – large AI models trained on huge datasets – if they pose “a serious risk to national security, national economic security, or national public health and safety”. The order uses another piece of legislation called the Defense Production Act to require companies developing such AIs to notify the federal government about the training process and share the results of all red team safety testing.
Such AIs could include OpenAI’s GPT-3.5 and GPT-4 models, which are behind ChatGPT, Google’s PaLM 2 model, which supports the company’s Bard AI chatbot, and Stability AI’s Stable Diffusion model, which generates images. “It would force companies that have been very closed-off about how their models work to crack open their black boxes,” says Hine.
But “the devil is in the details” when it comes to how the US government defines which foundation models pose a “serious risk”, says Hine. Similarly, Kreps questions the “qualifiers and ambiguities” of the executive order’s wording; the document is unclear about how it defines “foundation model” and who determines what qualifies as a threat.
The US also still lacks the type of strong data protection laws seen in the European Union and China. Similar laws could support AI regulations, says Hine. She pointed out that China has focused on implementing “targeted, vertical laws addressing specific aspects of AI”, such as generative AIs or facial recognition use. The European Union, on the other hand, has been working to create political consensus among its members on a broad horizontal approach covering all aspects of AI.
“[The US] has the [AI] development chops, but it doesn’t have much concrete regulation to stand on,” says Hine. “What it does have is strong statements about ‘AI with democratic values’ and agreements to cooperate with allied countries.”