Vice President Kamala Harris and top government officials will meet with chief executives from Google, Microsoft, OpenAI, and Anthropic at the White House later this week to discuss artificial intelligence (AI) technology.
Reuters reports that amid growing concerns about AI’s rapid progress and the adverse effects it may have on the American people, government officials want to discuss issues surrounding AI products and technologies.
The invitation reportedly noted President Biden’s “expectation that companies like yours must make sure their products are safe before making them available to the public.” Last October, the White House released a blueprint outlining five principles that it believes should guide the design, use, and deployment of automated systems to protect the public from potential harms resulting from AI technology.
More recently, President Biden remarked that he remains unsure about the potential dangers of AI, but that companies, including those invited to meet with the Vice President, have a responsibility to ensure the safety of their AI developments.
AI has been making political waves lately. Republicans launched an AI-generated attack ad against the President last month, and Democrats introduced a bill yesterday that would require political ads to disclose any use of AI, a disclosure the Republican attack ad in question did, in fact, include.
The major players in the tech space aren’t immune to internal concerns about AI, either. Geoffrey Hinton, 75, spent much of his career developing neural networks — the mathematical system underpinning generative AI models — and recently resigned from his position at Google while expressing regrets about his life’s work. While Hinton left Google partly to be able to freely express concerns over AI technology and warn of its dangers, he has claimed that Google itself has acted responsibly concerning AI.
Artificial intelligence technology is shifting faster than the legal system can react. However, the European Union aims to contend with at least the copyright implications of the datasets companies use to develop their generative AI models. With the noteworthy exception of Adobe Firefly, the makers of many generative models, like Midjourney and DALL-E, decline to disclose their training data sources, and their models are reportedly built on copyrighted works used without permission. The EU's revised AI Act aims to force companies to reveal their datasets, which would undoubtedly open a legal can of worms for AI developers.
The White House doesn't appear to be focused on copyright claims at the moment; it is more concerned with AI's potential impact on privacy, the spread of harmful information, and the economy.
Bloomberg reports that the White House is also interested in how companies use AI to manage their workers. In a blog post on the topic, the White House Office of Science and Technology Policy expresses concerns that AI technology can pressure workers to utilize unsafe practices to meet AI-monitored metrics, suppress workers’ rights to free speech and collective bargaining, and result in workers being discriminated against in unfair — and possibly illegal — ways.
The Biden administration has also been seeking public input on how AI technologies and services can and should be held accountable, and on how best to protect people from potential harm caused by AI. Growing concerns about AI's impact on national security and education are likely on the agenda as well.
Image credits: Photos licensed via Depositphotos.