Pika Labs Makes AI-Generated Characters Talk With Lip Sync Feature

Pika Labs Lip Sync
This AI-generated character is talking.

AI video generation platform Pika Labs has added a terrifying new feature — lip syncing audio to AI characters.

With OpenAI recently causing a stir after previewing its AI video generator Sora, Pika Labs, a far smaller company, has unveiled this new feature that is exclusive to its platform — for now.

Pika built the new Lip Sync feature in conjunction with AI audio platform ElevenLabs. It allows creators to give words to people in AI videos and sync their lip movements to the desired speech.

It represents another leap forward in AI video as the technology increasingly matures. It now seems inevitable that AI video will at some point reach a similar fidelity to AI images.

Filmmakers using AI video can now make their characters hold a conversation. Previously, AI video characters couldn’t speak; audio simply had to be dubbed over a shot of them.

Lip Sync is only available to Pika Labs users who are subscribed to the Pro plan, which costs $58 per month.

As demonstrated in the example video, the feature is not yet perfect but it’s another step forward and will satisfy some filmmakers.

To use it, users type in a prompt for the text-to-audio program to generate speech. Alternatively, audio can be uploaded directly if the director already has their own sound — an audio-only podcast, for example, could be brought to life.

Tom’s Guide notes that a similar offering from Synthesia already exists but it generates talking heads only rather than characters.

2024 is looking like it will be a big year for AI video generation with established companies like Runway ML and Pika Labs facing competition from Stable Diffusion and OpenAI’s Sora. Midjourney is also rumored to be working on an AI video platform.

In November, Pika Labs launched version 1.0 after announcing it had raised $55 million.


Image credits: Courtesy of Pika Labs.
