Google Will Legally Protect its Generative AI Users


Google has offered indemnity to its users if they are sued for copyright violation after using its generative AI tools.

It follows similar moves from Shutterstock and Adobe; the companies want to allay customer fears over potential legal wrangling related to artificial intelligence (AI).

It comes after last week’s news that Google is integrating an AI image generator into one of its search bars.

“To our knowledge, Google is the first in the industry to offer a comprehensive, two-pronged approach to indemnity that specifically covers both types of claims,” a company spokesperson tells Reuters.

In a blog post, Google puts it plainly: “If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved,” write Neal Suggs and Phil Venables.

Google names two specific indemnities: training data and generated output. The former means Google will protect users if a third party claims copyright infringement over the training data that Google uses to build its generative AI products.

The generated output indemnity applies when a third party claims that content produced by one of Google’s generative AI tools infringes on their intellectual property; in that case, Google will defend the claim on behalf of the user.

However, there are limits to Google’s protection. Indemnity will not apply if users “intentionally create or use generated output to infringe the rights of others.”

Will the Customer Need This?

There is an ever-growing number of lawsuits relating to AI, but so far all of them are directed at the companies that make the products, not at individual users.

Google itself has already been targeted with a class-action lawsuit, and major AI image generators including Stable Diffusion, Midjourney, and DALL-E are all facing lawsuits in one form or another.

It all stems from the controversial way in which generative AI came about. Machine learning involves massive amounts of data being fed into an algorithm so the computer can learn how to imitate human output.

However, the sheer volume of data involved means that some AI companies took a shortcut, scraping data and using it in ways that the copyright holders never consented to.


Image credits: Header photo licensed via Depositphotos.
