Vice President Kamala Harris Announces AI Safety Institute


Just two days after President Joe Biden issued an executive order outlining the federal government’s first regulations concerning artificial intelligence (AI) systems, Vice President Kamala Harris has announced the establishment of the United States AI Safety Institute.

As reported by Engadget, Harris announced “a half dozen more machine learning initiatives that the administration is undertaking” on Wednesday at the AI Safety Summit in England.

Among these initiatives is the United States AI Safety Institute (US AISI). A press release from the White House describes the US AISI as follows:

The Biden-Harris Administration, through the Department of Commerce, is establishing the United States AI Safety Institute (US AISI) inside NIST. The US AISI will operationalize NIST’s AI Risk Management Framework by creating guidelines, tools, benchmarks, and best practices for evaluating and mitigating dangerous capabilities and conducting evaluations including red-teaming to identify and mitigate AI risk. The Institute will develop technical guidance that will be used by regulators considering rulemaking and enforcement on issues such as authenticating content created by humans, watermarking AI-generated content, identifying and mitigating against harmful algorithmic discrimination, ensuring transparency, and enabling adoption of privacy-preserving AI, and would serve as a driver of the future workforce for safe and trusted AI. It will also enable information-sharing and research collaboration with peer institutions internationally, including the UK’s planned AI Safety Institute (UK AISI), and partner with outside experts from civil society, academia, and industry.

“President Biden and I believe that all leaders from government, civil society, and the private sector have a moral, ethical, and societal duty to make sure AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits,” Harris said during her remarks at the summit.

Vice President Harris added, “Just as AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions.”

Harris explained that to understand what makes AI safe — or dangerous — people must consider the full spectrum of AI risks, including threats not only to individuals but to communities, institutions, and humanity at large.

The United States AI Safety Institute will be established through the Department of Commerce. Reuters reports that Secretary of Commerce Gina Raimondo said in a speech at the AI Safety Summit, “I will almost certainly be calling on many of you in the audience who are in academia and industry to be part of this consortium,” emphasizing that the private sector must be involved in any assessment, management, and regulation of AI technologies.

The new institute will fall under the National Institute of Standards and Technology (NIST) and will be responsible for numerous AI-related tasks, including reviewing advanced AI models. That model testing aligns with what President Biden outlined in his executive order earlier this week.

Harris outlined additional initiatives during her speech in England, including policy guidance for the U.S. government’s use of AI, a declaration on the responsible military use of AI and autonomous technologies, a new initiative to advance AI in the public interest, efforts to detect and block fraudulent AI-driven phone calls, a call for international bodies to work with the U.S. on content authentication standards, and a pledge that the U.S. will respect people’s rights as AI technologies become more widespread.

“As history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the wellbeing of their customers, the security of our communities, and the stability of our democracies,” Harris said. “One important way to address these challenges — in addition to the work we have already done — is through legislation — legislation that strengthens AI safety without stifling innovation.”


Image credits: Header photo licensed via Depositphotos.