Tesla Claims Seven-Year-Old Video of Elon Musk May Be a Deepfake

In a wrongful death lawsuit, Tesla sensationally tried to claim that a seven-year-old video of the company’s CEO Elon Musk may be an artificial intelligence (AI) generated deepfake.

Defense lawyers for Tesla made the comments in an ongoing civil case surrounding Tesla driver Walter Huang’s fatal car crash in March 2018.

Huang died while driving a Tesla Model X. His surviving family sued the carmaker a little over a year later, with attorneys arguing that Tesla’s driver-assist software failed and was at fault in Huang’s death.

However, Tesla alleges that Huang was playing a video game on his phone at the time of the crash and disregarded multiple warnings from his vehicle’s software.

In the ongoing case, Huang’s family’s attorneys sought to depose Musk and interview him regarding statements he previously made about the safety of Tesla’s driver-assist software.

“I really would consider autonomous driving to be basically a solved problem,” Musk said in a recorded video interview that was uploaded to YouTube in 2016.

“A Model S and Model X, at this point, can drive autonomously with greater safety than a person. Right now.”

However, Tesla opposed the deposition request in court filings, controversially questioning the authenticity of the public recording and suggesting the video may be an AI-generated deepfake.

“[Musk], like many public figures, is the subject of many ‘deepfake’ videos and audio recordings that purport to show him saying and doing things he never actually said or did,” Tesla lawyers claim.

‘Deeply Troubling’

However, the judge overseeing the wrongful death lawsuit slammed Tesla’s claim that videos of CEO Elon Musk’s public statements might be deepfakes and called the argument “deeply troubling.”

“Their position is that because Mr. Musk is famous and might be more of a target for deepfakes, his public statements are immune,” Santa Clara County Superior Court Judge Evette D. Pennypacker writes.

“In other words, Mr. Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deep fake to avoid taking ownership of what they did actually say and do.”

Pennypacker rejected Tesla’s arguments and tentatively ordered a limited, three-hour deposition in which Musk can be asked whether he actually made the statements in the recordings. The lawsuit is scheduled to go to trial on July 31.

‘The Deepfake Defense’

According to NPR, the Tesla lawsuit is not the first time deepfakes have been invoked in an attempt to rebut evidence. Two of the defendants on trial for the Jan. 6 riots tried to claim a video showing them at the Capitol could have been created or manipulated by AI.

With the rise of AI-generated imagery, lawyers are increasingly using the “deepfake defense” in cases.

“Right now, it’s sort of like the wild, wild west, where lawyers can say, ‘Well, let’s run this up the flagpole and see what we can do with it,'” Loyola Law School professor Rebecca Delfino tells NPR.

And if claims that photographic or video evidence may be deepfaked become more common in legal cases, juries may come to expect further proof that a photograph or video is real and not AI-generated.


Image credits: Header photo licensed via Depositphotos.
