President Joe Biden issued an executive order today outlining the federal government’s first regulations on artificial intelligence (AI) systems.
The directive is presented as a way of protecting Americans from the harmful risks posed by AI, and White House Deputy Chief of Staff Bruce Reed describes it as “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.”
The Department of Commerce has been directed to “develop guidance for content authentication and watermarking to clearly label AI-generated content.” This will protect Americans from “AI-enabled fraud and deception.”
“Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world,” says the order.
A number of departments have been tasked with AI safeguarding duties, including the National Institute of Standards and Technology, which will set “rigorous standards” for testing before an AI system is released to the public.
The Biden administration wants large companies to share safety test results with the U.S. government before the official release of AI systems, but a Biden administration official tells The Verge that “we’re not going to recall publicly available models that are out there.”
“Existing models are still subject to the anti-discrimination rules already in place,” the spokesperson adds.
The Department of Homeland Security will establish an AI Safety and Security Board and work with the National Institute of Standards and Technology to develop a “red team” to safeguard against chemical, biological, radiological, nuclear, and cybersecurity risks.
Agencies have been directed to produce a report on the impact of AI on the labor market and job displacement. The government also wants to hire more AI specialists, with the relevant openings being posted on AI.gov.
An executive order is not a permanent law, and this directive will most likely only last the length of Biden’s administration.
In July, leading AI companies attended a meeting at the White House where they pledged a set of commitments to manage the risks associated with generative AI, including watermarking AI-generated material.
An official says that they want to see some of these directives implemented within 90 days while other aspects of the order could take up to a year to be fulfilled.