Real-time manipulation of videos by Stanford researchers

The video up top shows a work-in-progress system called Face2Face (research paper here) being built by researchers at Stanford, the Max Planck Institute and the University of Erlangen-Nuremberg.

The short version: take a YouTube video of someone speaking, say, George W. Bush. Use a standard RGB webcam to capture a video of someone else emoting and saying something entirely different. Throw both videos into the Face2Face system and, bam, you've got a relatively believable video of George W. Bush's face, now almost entirely synthesized, doing whatever the actor in the second video wanted the target's face to do. It even tries to work out what the interior of the target's mouth should look like as they speak.
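Face2Face itself fits a dense parametric 3D face model to both the target footage and the live actor rather than relying on sparse landmarks, but the input side of any such pipeline starts with the same step: tracking a face in an ordinary RGB webcam feed, frame by frame. The sketch below illustrates only that capture stage, using OpenCV and MediaPipe's FaceMesh as stand-ins (a choice made here for illustration, not something the researchers used), and it stops well short of the actual expression transfer and re-rendering.

```python
# Minimal sketch: track a face in a live RGB webcam feed and print a few
# facial landmarks per frame. This covers only the capture/tracking input
# stage of an expression-transfer pipeline, not Face2Face itself, which
# fits a dense parametric 3D face model rather than sparse landmarks.
#
# Requires: pip install opencv-python mediapipe

import cv2
import mediapipe as mp


def main():
    face_mesh = mp.solutions.face_mesh.FaceMesh(
        static_image_mode=False,   # video mode: track the face across frames
        max_num_faces=1,
        refine_landmarks=True,     # extra landmarks around the lips and eyes
        min_detection_confidence=0.5,
    )

    cap = cv2.VideoCapture(0)      # default webcam
    try:
        while cap.isOpened():
            ok, frame_bgr = cap.read()
            if not ok:
                break

            # MediaPipe expects RGB; OpenCV delivers BGR.
            frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
            results = face_mesh.process(frame_rgb)

            if results.multi_face_landmarks:
                landmarks = results.multi_face_landmarks[0].landmark
                # Landmarks are normalized (x, y, z) coordinates; index 13 is
                # roughly the upper inner lip, a rough signal for mouth openness.
                upper_lip = landmarks[13]
                print(f"mouth landmark: x={upper_lip.x:.3f} "
                      f"y={upper_lip.y:.3f} z={upper_lip.z:.3f}")

            cv2.imshow("webcam", frame_bgr)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
        face_mesh.close()


if __name__ == "__main__":
    main()
```

In a full reenactment system, the per-frame expression estimate recovered at this stage would drive the fitted model of the target's face, which is then re-rendered and composited back into the original video, including the synthesized mouth interior the researchers describe.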
