Real-time manipulation of videos by Stanford researchers
The video up top shows a work-in-progress system called Face2Face (research paper here) being built by researchers at Stanford, the Max Planck Institute and the University of Erlangen-Nuremberg.
The short version: take a YouTube video of someone speaking, say, George W. Bush. Use a standard RGB webcam to capture a video of someone else emoting and saying something entirely different. Throw both videos into the Face2Face system and, bam, you’ve now got a relatively believable video of George W. Bush’s face — now almost entirely synthesized — doing whatever the actor in the second video wanted the target’s face to do. It even tries to work out what the interior of the target’s mouth should look like as they’re speaking.
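To give a rough sense of the core trick, here is a toy sketch of expression transfer in a linear blendshape face model, the kind of parametric representation this line of research builds on. This is purely illustrative: every name, dimension, and coefficient below is made up, and the real system involves dense face tracking and photorealistic re-rendering far beyond this.

```python
import numpy as np

np.random.seed(0)

# Toy mesh: 5 vertices (real face models use tens of thousands).
N_VERTS = 5
neutral = np.zeros((N_VERTS, 3))             # neutral face geometry
id_basis = np.random.rand(N_VERTS, 3, 4)     # 4 identity basis shapes (illustrative)
expr_basis = np.random.rand(N_VERTS, 3, 3)   # 3 expression blendshapes (illustrative)

def reconstruct(id_coeffs, expr_coeffs):
    """Linear blendshape model: geometry = neutral + identity part + expression part."""
    return neutral + id_basis @ id_coeffs + expr_basis @ expr_coeffs

# Tracking each video would estimate coefficients per frame (values invented here).
target_identity = np.array([0.9, 0.1, 0.0, 0.2])   # who the target looks like
source_expression = np.array([0.7, 0.0, 0.3])      # what the actor's face is doing

# Reenactment idea: keep the target's identity, swap in the actor's expression.
reenacted_mesh = reconstruct(target_identity, source_expression)
print(reenacted_mesh.shape)  # (5, 3): one 3D position per vertex
```

The separation of identity and expression into independent coefficient vectors is what makes the swap possible: the actor's expression parameters drive the target's face without changing whose face it is.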