Biden Takes Action to Regulate AI
President Biden signed a sweeping new executive order on Monday to place guardrails on the use and development of AI, including provisions that will make large upcoming AI models like OpenAI’s GPT-5 subject to oversight before they are released. Speaking to a room of lawmakers, industry leaders and reporters at the White House on Monday, Biden described how the executive order was designed to mitigate the risks from AI while still tapping into its benefits.
As part of the executive order, any company building an AI model that could pose a risk to national security must disclose it to the government and share data about what is being done to secure it, in accordance with federal standards to be developed by the National Institute of Standards and Technology. The decree to share pre-release testing data applies only to models that haven’t been released yet — which would include GPT-5, the much anticipated successor to the hugely popular GPT-4.
The order also aims to kick off a hiring blitz for AI workers in the federal government, with “dozens to hundreds” of AI-focused hires, and seeks to reduce immigration barriers for international workers in the AI sector. It also directs the creation of guidelines and standards for the government’s own use of AI.
Protecting Against Risks and Promoting Responsible Innovation
The executive order is the broadest attempt yet by the Biden administration to create functional guardrails for the development of artificial intelligence while cementing the U.S. as a leader in AI policy. The order explicitly calls on Congress to pass bipartisan data privacy legislation in an acknowledgement that AI heightens the incentives for invasive data collection.
The arrival of the virally popular ChatGPT late last year brought the promise and potential perils of AI into clear focus, and the U.S. government has scrambled to introduce guardrails since. The order addresses fears that AI could be used to discriminate against citizens, target critical infrastructure, or wage war by requiring large AI models and programs to be assessed by federal agencies before being deployed.
The executive order carries broad authority within the federal government to establish standards and guidelines for various agencies. For instance, to combat AI-supported fraud like “deep-fake” videos or AI-generated voice calls, it instructs the Department of Commerce to develop guidelines for federal agencies to use watermarking and content authentication tools to label AI-generated content.
Global Efforts to Regulate AI
Biden’s executive order comes as the European Union inches closer to introducing the world’s first AI laws and other countries are also moving to restrict AI use. Vice President Kamala Harris is planning to represent the administration at a major AI summit in London, where she will outline the administration’s AI policies and call for greater collaboration with both America’s allies and adversaries on regulating AI companies. Senator Chuck Schumer is leading a push in Congress to introduce AI legislation.
Despite widespread acknowledgement in tech circles that AI legislation is needed, some openly oppose any rules that might curtail the explosive growth of the AI industry.