
Google's VLOGGER AI model can generate video avatars from images - what could go wrong?

Mar 23, 2024 | Hi-network.com
VLOGGER can take a single photograph of someone and create high-fidelity clips of varying lengths, with accurate facial expressions and body movements, down to a blink, exceeding previous kinds of "talking head" software. (Image: Google)

The artificial intelligence (AI) community has gotten so good at producing fake moving pictures -- take a look at OpenAI's Sora, introduced last month, with its slick imaginary fly-throughs -- that one has to ask an intellectual and practical question: what should we do with all these videos?

Also: OpenAI unveils text-to-video model and the results are astonishing. Take a look for yourself

This week, Google researcher Enric Corona and his colleagues answered: control them using our VLOGGER tool. VLOGGER can generate a high-resolution video of a person talking based on a single photograph. More importantly, VLOGGER can animate that video according to a speech sample, meaning the technology can drive the footage as a controlled likeness of a person -- a high-fidelity "avatar."

This tool could enable all kinds of creations. On the simplest level, Corona's team suggests VLOGGER could have a big impact on helpdesk avatars because more realistic-looking synthetic talking humans can "develop empathy." They suggest the technology could "enable entirely new use cases, such as enhanced online communication, education, or personalized virtual assistants."

VLOGGER could also conceivably open a new frontier in deepfakes -- real-seeming likenesses that say and do things the actual person never did. Corona's team says it will address the societal implications of VLOGGER in supplementary supporting materials. However, that material is not available on the project's GitHub page. We reached out to Corona to ask about the supporting materials but had not received a reply at publishing time.

Also: As AI agents spread, so do the risks, scholars say

As described in the formal paper, "VLOGGER: Multimodal Diffusion for Embodied Avatar Synthesis", Corona's team aims to move past the inaccuracies of the state of the art in avatars. "The creation of realistic videos of humans is still complex and ripe with artifacts," Corona's team wrote.

The team noted that existing video avatars often crop out the body and hands, showing just the face. VLOGGER can show whole torsos along with hand movements. Other tools usually have limited variations across facial expressions or poses, offering just rudimentary lip-syncing. VLOGGER can generate "high-resolution video of head and upper-body motion [...] featuring considerably diverse facial expressions and gestures" and is "the first approach to generate talking and moving humans given speech inputs."

As the research team explained, "it is precisely automation and behavioral realism that [are] what we aim for in this work: VLOGGER is a multi-modal interface to an embodied conversational agent, equipped with an audio and animated visual representation, featuring complex facial expressions and increasing level of body motion, designed to support natural conversations with a human user."

Based on a single photograph, left, the VLOGGER software predicts the frames of video, right, that should accompany each moment of a sound file of someone speaking, using a process known as "diffusion," and then generates those frames of video in high-definition quality. (Image: Google)

VLOGGER brings together a few recent trends in deep learning.

Multimodality brings together the many modes AI tools can absorb and synthesize, including text, audio, images, and video.

Large language models such as OpenAI's GPT-4 make it possible to use natural language as the input to drive actions of various kinds, be it creating paragraphs of text, a song, or a picture.

Researchers have also found numerous ways to create lifelike images and videos in recent years by refining "diffusion." The term comes from molecular physics and refers to how, as the temperature rises, particles of matter go from being highly concentrated in an area to being more spread out. By analogy, bits of digital information can be seen as increasingly "diffuse" as digital noise makes them more incoherent.

Also: Move over Gemini, open-source AI has video tricks of its own

AI diffusion introduces noise into an image and then trains a neural network to reconstruct the original, so that the network learns the rules by which the image was put together. Diffusion is the root of the impressive image generation in Stability AI's Stable Diffusion and OpenAI's DALL-E. It's also how OpenAI creates Sora's slick videos.
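To make the idea concrete, here is a minimal sketch of that noise-and-reconstruct training loop in PyTorch. It illustrates diffusion in general, not Google's VLOGGER code; the tiny `denoiser` network, the image size, and the noise schedule are all assumptions chosen for brevity.

```python
# Toy diffusion training step: corrupt an image with noise, then train a
# network to predict that noise so the process can later be reversed.
# Illustrative only -- not VLOGGER's implementation; the denoiser, image
# size, and noise schedule are assumptions.
import torch
import torch.nn as nn

T = 1000                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(                   # stand-in for a real U-Net or Transformer
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

def training_step(clean_images):            # clean_images: (batch, 3, H, W)
    b = clean_images.shape[0]
    t = torch.randint(0, T, (b,))            # a random diffusion step per image
    a = alphas_cumprod[t].view(b, 1, 1, 1)   # how much signal survives at step t
    noise = torch.randn_like(clean_images)
    noisy = a.sqrt() * clean_images + (1 - a).sqrt() * noise  # "diffuse" the image
    pred_noise = denoiser(noisy)             # a real model would also be told t
    loss = nn.functional.mse_loss(pred_noise, noise)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

training_step(torch.randn(4, 3, 64, 64))     # fake batch just to exercise the step
```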

For VLOGGER, Corona's team trained a neural network to associate a speaker's audio with individual frames of video of that speaker. The team combined that diffusion process of reconstructing video frames from audio with yet another recent innovation, the Transformer.

The Transformer uses attention to predict video frames based on past frames, in conjunction with the audio. By predicting actions, the neural network learns to render accurate hand and body movements and facial expressions, frame by frame, in sync with the audio.
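To illustrate that first stage, here is a toy PyTorch sketch of a Transformer that turns per-frame audio features into per-frame motion controls, using a causal ("masked") attention pattern so each frame can only look at the past. The feature sizes, the pose parameterization, and the architecture details are assumptions for illustration, not VLOGGER's actual network.

```python
# Toy audio-to-motion Transformer: per-frame audio features in, per-frame
# body/face pose parameters out. Sizes and architecture are illustrative
# assumptions, not VLOGGER's real design.
import torch
import torch.nn as nn

class AudioToMotion(nn.Module):
    def __init__(self, audio_dim=80, pose_dim=120, d_model=256, n_layers=4):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.pose_head = nn.Linear(d_model, pose_dim)    # expression + body pose controls

    def forward(self, audio_feats):                      # (batch, frames, audio_dim)
        frames = audio_feats.shape[1]
        # Causal mask: each frame attends only to itself and earlier frames.
        mask = torch.triu(torch.full((frames, frames), float("-inf")), diagonal=1)
        x = self.audio_proj(audio_feats)
        x = self.encoder(x, mask=mask)
        return self.pose_head(x)                         # (batch, frames, pose_dim)

model = AudioToMotion()
audio = torch.randn(1, 25, 80)       # ~1 second of audio features at 25 frames/sec
poses = model(audio)                 # predicted motion controls, one vector per frame
```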

The final step is to use the predictions from that first neural network to drive the generation of high-resolution video frames with a second neural network that also employs diffusion. That second step also sets a high-water mark in data.
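The hand-off between the two networks can be pictured as conditioning: the second, image-generating network sees not only a noisy frame but also the motion controls predicted by the first. The sketch below shows one simple way such wiring could look; the channel counts and the way the pose vector is injected are assumptions, and a real system would be far more elaborate.

```python
# Toy motion-conditioned denoiser: the network receives the noisy frame plus
# the predicted pose controls, so the pixels it generates follow that motion.
# The conditioning scheme and sizes are assumptions for illustration.
import torch
import torch.nn as nn

class ConditionedDenoiser(nn.Module):
    def __init__(self, pose_dim=120, img_channels=3, hidden=64):
        super().__init__()
        self.pose_to_map = nn.Linear(pose_dim, 64 * 64)   # spread the pose vector over the image grid
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + 1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, img_channels, 3, padding=1),
        )

    def forward(self, noisy_frame, pose):                 # (B, 3, 64, 64), (B, pose_dim)
        b = noisy_frame.shape[0]
        cond = self.pose_to_map(pose).view(b, 1, 64, 64)  # pose as an extra image channel
        return self.net(torch.cat([noisy_frame, cond], dim=1))  # predicted noise

denoiser = ConditionedDenoiser()
noise_pred = denoiser(torch.randn(2, 3, 64, 64), torch.randn(2, 120))
```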

Also: Generative AI fails in this very common ability of human thought

To make the high-resolution images, Corona's team compiled MENTOR, a dataset of videos of people speaking that features 800,000 "identities." MENTOR consists of 2,200 hours of video, which the team claims makes it "the largest dataset used to date in terms of identities and length," 10 times larger than prior comparable datasets.

The authors find they can enhance that process with a follow-on step called "fine-tuning." By submitting a full-length video to VLOGGER, after it's already been "pre-trained" on MENTOR, they can more realistically capture the idiosyncrasies of a person's head movement, such as blinking: "By fine-tuning our diffusion model with more data, on a monocular video of a subject, VLOGGER can learn to capture the identity better, e.g. when the reference image displays the eyes as closed," a process the team refers to as "personalization."
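In code, that personalization step amounts to continuing training on one subject's footage with gentle updates. The sketch below assumes a hypothetical `pretrained_denoiser` that maps a noisy frame to predicted noise, plus a dataloader of that subject's frames; the single fixed noise level and the learning rate are simplifications, not the paper's actual fine-tuning recipe.

```python
# Toy "personalization" pass: keep training an already pre-trained diffusion
# model, but only on frames of one subject, so it absorbs that person's quirks
# (e.g. how they blink). The hyperparameters and fixed noise level are assumptions.
import torch

def personalize(pretrained_denoiser, subject_frames_loader, steps=500):
    optimizer = torch.optim.Adam(pretrained_denoiser.parameters(), lr=1e-5)  # gentle updates
    for step, clean_frames in enumerate(subject_frames_loader):
        if step >= steps:
            break
        noise = torch.randn_like(clean_frames)
        noisy = 0.7 * clean_frames + 0.3 * noise        # one fixed noise level, for brevity
        loss = torch.nn.functional.mse_loss(pretrained_denoiser(noisy), noise)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
    return pretrained_denoiser
```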

VLOGGER's neural net is a combination of two different neural nets. The first one uses "masked attention" via a Transformer to predict what poses should happen in a frame of video based on the sound coming from the recorded audio signal of the speaker. The second neural net uses diffusion to generate a consistent sequence of video frames using the clues of body motion and expression from the first neural net. (Image: Google)

The larger point of this approach -- linking predictions in one neural network with high-resolution imagery -- and what makes VLOGGER provocative, is that the program is not merely generating a video the way Sora does. VLOGGER links that video to actions and expressions that can be controlled. Its lifelike videos can be manipulated as they unfold, like puppets.

Also: Nvidia CEO Jensen Huang unveils next-gen 'Blackwell' chip family at GTC

"Our objective is to bridge the gap between recent video synthesis efforts," Corona's team wrote, "which can generate dynamic videos with no control over identity or pose, and controllable image generation methods."

Not only can VLOGGER be a voice-driven avatar, but it can also enable editing functions, such as altering the mouth or eyes of a speaking subject. For example, a virtual person who blinks a lot in a video could be changed to blink a little or not at all. A wide-mouthed manner of speaking could be narrowed to a more discreet motion of the lips.
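One common way such edits are implemented with diffusion models is inpainting: mask the region to change (say, the mouth), regenerate only that region under new motion controls, and keep every other pixel from the source frame. The sketch below illustrates that generic idea, with a made-up denoiser interface, mask coordinates, and a crude fixed-step update standing in for a real sampler; it is not presented as VLOGGER's own editing code.

```python
# Generic diffusion-style "inpainting" edit: regenerate only a masked region
# (e.g. the mouth) under new pose controls, leaving the rest of the frame
# untouched. The denoiser interface, box coordinates, and the 50-step update
# rule are all illustrative assumptions.
import torch

def edit_region(frame, denoiser, pose, mouth_box=(40, 64, 16, 48)):
    """frame: (1, 3, 64, 64); pose: (1, pose_dim); mouth_box: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = mouth_box
    mask = torch.zeros_like(frame)
    mask[:, :, y0:y1, x0:x1] = 1.0                   # 1 where the edit is allowed
    noisy = torch.randn_like(frame)                  # start the edited area from noise
    for _ in range(50):                              # crude fixed-step loop, not a real sampler
        pred_noise = denoiser(noisy, pose)           # denoiser conditioned on the new motion
        noisy = noisy - 0.02 * pred_noise            # toy denoising update
        noisy = mask * noisy + (1 - mask) * frame    # re-impose the untouched pixels
    return noisy
```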

Having achieved a way to control high-resolution video via voice cues, VLOGGER opens the way to manipulations, such as changing the lip movements of the speaker at each stretch of the video to be different from the original source video. (Image: VLOGGER)

Having achieved a new state of the art in simulating people, Corona's team leaves unaddressed the question of what the world should expect from any misuse of the technology. It's easy to imagine likenesses of a political figure saying something absolutely catastrophic about, say, imminent nuclear war.

Presumably, the next stage in this avatar game will be neural networks that, like the 'Voight-Kampff test' in the movie Blade Runner, can help society detect which speakers are real and which are just deepfakes with remarkably lifelike manners. 

