Tech Companies Meta, Adobe, and OpenAI Join Responsible AI Consortium

More than 200 companies and organizations have signed on to the US AI Safety Institute Consortium (AISIC) to advance the responsible use of artificial intelligence.

The consortium is meant to address how to approach artificial intelligence safely and responsibly as the technology quickly proliferates. Members include Adobe, Apple, Canva, Meta, Microsoft, Nvidia, and OpenAI. Notably, many of the participating groups offer AI technology of some kind. Adobe’s generative additions to Photoshop have gained popularity, Apple recently signaled that generative AI features could come later this year, Canva has a suite of AI creator tools, Meta has Meta AI, and Microsoft recently rebranded its Bing Chat offering to Copilot.

Those same companies also appear busy laying down protective tracks even as their own generative trains roll forward. Earlier this week, Meta revealed expanded labeling for photorealistic AI-generated images to help combat deepfakes. Adobe took a different approach in training its generative model, limiting its training data to licensed and public-domain content. Apple’s somewhat late-to-the-party entrance could also reflect an attempt to navigate these issues responsibly and to take extra time developing the technology, though little information is available on that front as of yet.

The AISIC falls under the National Institute of Standards and Technology (NIST) and is tasked with advancing the goals set by President Joe Biden in his recent executive order on AI, according to an NIST release.

“AI is moving the world into very new territory. And like every new technology, or every new application of technology, we need to know how to measure its capabilities, its limitations, its impacts,” NIST Director Laurie E. Locascio said in a press briefing regarding the consortium.

“That is why NIST brings together these incredible collaborations of representatives from industry, academia, civil society and the government, all coming together to tackle challenges that are of national importance.”

Members will contribute to at least one of the AISIC’s focus areas, which include developing guidance and benchmarks for identifying and evaluating AI capabilities, identifying potentially harmful uses of AI, content authentication, and finding safe, secure, and trustworthy ways to deploy AI technology.

Image credits: Header photo licensed via Depositphotos.
