Artificial intelligence (AI) can now create a realistic video of an individual dancing from a single photograph.
According to New Scientist, the AI technology was trained on TikTok dance trends so it can transform a still image of a person into a dancing video.
The model, titled “DisCo: Disentangled Control for Referring Human Dance Generation in Real World,” is the result of a collaboration between researchers at Microsoft and Tan Wang’s team of scientists at Nanyang Technological University in Singapore.
DisCo effectively splits a photograph into three parts: the background, the foreground, and the person’s pose in the shot.
The AI can then morph the person into a series of poses to create individual frames that, when compiled back into a video, produce realistic footage of that person dancing.
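The pipeline described above — decompose a photo into its parts, then re-render the subject in a sequence of new poses — can be sketched in miniature. This is purely illustrative: the function names and string stand-ins below are hypothetical and are not taken from the actual DisCo codebase, which operates on images and learned pose representations.

```python
# Illustrative sketch of the decompose-then-repose pipeline the article
# describes. All names and data structures here are hypothetical; real
# DisCo works on images and learned pose encodings, not strings.

from dataclasses import dataclass


@dataclass
class Decomposition:
    background: str  # stand-in for the background image
    foreground: str  # stand-in for the subject's appearance
    pose: str        # stand-in for the subject's original pose


def decompose(photo: str) -> Decomposition:
    """Split a photo into the three parts the article names."""
    return Decomposition(
        background=f"{photo}:bg",
        foreground=f"{photo}:fg",
        pose=f"{photo}:pose",
    )


def repose(decomp: Decomposition, target_pose: str) -> str:
    """Render one frame: same subject and background, new pose."""
    return f"{decomp.foreground}+{decomp.background}@{target_pose}"


def generate_dance(photo: str, pose_sequence: list) -> list:
    """Produce one frame per target pose; compiling the frames in
    order would yield the output video."""
    decomp = decompose(photo)
    return [repose(decomp, pose) for pose in pose_sequence]


frames = generate_dance("photo.jpg", ["pose_01", "pose_02", "pose_03"])
print(len(frames))  # one frame per target pose
```

The key design point the sketch captures is that the subject's appearance and the background are extracted once and held fixed, while only the pose varies from frame to frame.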
“With these things, you can try to compose anything you want,” Tan Wang, a third-year Ph.D. student at the MReaL Lab of Nanyang Technological University, tells New Scientist.
“If you want Elon Musk to dance, you can just use our [code].”
The team trained DisCo on around 700,000 generic images of people taken from TikTok so that it could learn about poses and how to separate foregrounds from backgrounds.
The researchers then trained it further on a small data set of about 350 dance videos, each 10 to 15 seconds long, to give the AI a deeper knowledge of how people move while dancing.
BGR reports that the DisCo technology could allow users to generate dance videos of themselves on TikTok — without ever needing to learn the choreography in the first place. However, the same capability could put deepfaked content onto social media.
According to BDG, DisCo could also potentially be used in the post-production phase of movies and TV shows, allowing studios to add dance routines to their actors’ performances without ever hiring dancers.
AI technology is being increasingly used in filmmaking. Last year, filmmakers used deepfake technology to visually dub the action-thriller Fall when they were asked to remove the profanities from the film but did not have the budget to reshoot scenes. The software can also be used to change an actor’s spoken language in a movie.
Image credits: All photos via “DisCo: Disentangled Control for Referring Human Dance Generation in Real World” by Tan Wang, Linjie Li, Kevin Lin, Chung-Ching Lin, Zhengyuan Yang, Hanwang Zhang, Zicheng Liu, and Lijuan Wang.