ChatGPT Used to Write Part of Arizona State Law on Deepfakes

Arizona State Representative Alexander Kolodin. | Gage Skidmore

An Arizona state representative turned to ChatGPT to help him write part of a law regulating deepfakes in elections.

Republican Alexander Kolodin asked ChatGPT to define exactly what a deepfake is after getting stuck on the terminology while writing House Bill 2394, which allows Arizona residents to ask a judge to declare whether an alleged deepfake is real or not.

“I am by no means a computer scientist,” Kolodin says per The Guardian. “And so when I was trying to write the technical portion of it, in terms of what sort of technological processing makes something a deepfake, I was kind of struggling with the terminology. So I thought to myself, well, let me just ask the subject matter expert. And so I asked ChatGPT to write a definition of what was a deepfake.”

Political candidates will also be able to ask a judge to declare a deepfake a hoax, and the law is aimed at combating misinformation.

Kolodin asked the large language model chatbot ChatGPT to define what “digital impersonation” is, and he shared a screenshot of ChatGPT’s response. He and his colleagues were satisfied with the AI’s answer, and he notes that that part of the bill “probably got fiddled with the least — people seemed to be pretty cool with that.”

“I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin adds.

Fighting deepfakes is a rare area of bipartisan support, with several states advancing legislation on AI systems impersonating humans in the absence of federal legislation.

By introducing a mechanism that allows a court to decide whether a video or image is truthful or not, Arizona’s bill differs from other states that have either outlawed deepfakes in a political context or require disclosure that a piece of media is AI-generated.

Kolodin believes that forcing media off the internet is futile and also a First Amendment issue.

“Now at least their campaign has a declaration from a court saying, this doesn’t look like it’s you, and they could use that for counternarrative messaging,” he says.

There are exceptions, namely content that depicts someone doing something sexual. Beyond that, if the video is labeled as AI-generated, or any reasonable person can see that it is a deepfake, it won’t be taken down.

Kolodin hopes that Arizona’s more laissez-faire approach to deepfakes will be adopted by other states.

“I think deepfakes have a legitimate role to play in our political discourse,” he says. “And when you have politicians regulating speech, you kind of have the fox guarding the hen house, so they’re gonna say, oh, anything that makes me look silly is a crime. I absolutely hope that other state legislators pick this up.”

Image credits: Photograph by Gage Skidmore.