The development, which is as impressive as it is disturbing, is the result of a collaboration between researchers from Germany’s University of Erlangen-Nuremberg, the Max Planck Institute for Informatics, and Stanford University in California.
The study, “Face2Face: Real-time Face Capture and Reenactment of RGB Videos,” “animates the facial expressions of the target video by a source actor and re-renders the manipulated output video in a photo-realistic fashion,” according to the researchers.
The project works with a webcam and manipulates videos in real-time.
The research team has showcased the technology in a video featuring some of the public figures one might most want to manipulate, given the chance.
The technology tracks the facial expressions of both source and target video and addresses details such as teeth by finding the mouth interior that best matches the re-targeted expression and warping it to produce the right fit.
Reenactment is achieved by “fast and efficient deformation transfer” between source and target, and the process is completed when the synthesized target face is re-rendered on top of the corresponding video footage.
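The core expression-swap idea can be illustrated with a toy blendshape model: a face mesh is expressed as a neutral mean plus identity and expression components, and reenactment keeps the target’s identity while substituting the source actor’s expression coefficients. This is a deliberate simplification of the paper’s actual deformation-transfer pipeline, and every name and dimension below is illustrative rather than taken from the research:

```python
import numpy as np

# Toy parametric face model (illustrative dimensions, not from the paper):
#   mesh = mean + identity_basis @ id_coeffs + expression_basis @ expr_coeffs
rng = np.random.default_rng(0)
N = 12                                     # number of mesh vertices (toy size)
mean = rng.normal(size=3 * N)              # neutral mean face (x, y, z per vertex)
id_basis = rng.normal(size=(3 * N, 4))     # identity variation directions
expr_basis = rng.normal(size=(3 * N, 5))   # expression blendshape directions

def reconstruct(id_coeffs, expr_coeffs):
    """Rebuild a face mesh from identity and expression coefficients."""
    return mean + id_basis @ id_coeffs + expr_basis @ expr_coeffs

# Source actor: their own identity, wearing some expression.
src_expr = np.array([1.0, 0.0, 0.3, 0.0, 0.0])

# Target actor: a different identity, currently neutral.
tgt_id = rng.normal(size=4)
tgt_expr = np.zeros(5)

# Reenactment: keep the target's identity, swap in the source's expression.
reenacted = reconstruct(tgt_id, src_expr)
```

In the real system this transfer runs per-frame in real time, and the re-rendered mesh is composited back over the target video with matched lighting, which the toy model above does not attempt.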
Concerns have been raised, however, about how the technology could be applied – and if we will be able to trust what we see.
“Transferring facial expressions from live actors to recorded actors. Incredible. Makes the future hard to trust. https://t.co/Vkeglu6OXA” — Jason Fried (@jasonfried), March 19, 2016
“New AI method for transferring facial expressions onto other faces = in future gonna be hard to verify vid clips https://t.co/ikdbJelBqq” — Jack Clark (@jackclarkSF), March 18, 2016