Leonardo da Vinci's famous painting can be animated using facial expressions captured from a real person. According to a paper published by the Samsung AI Centre in Russia, AI can transform still images into moving, talking heads. In a video of the Mona Lisa, she is seen turning her head, moving her lips and even blinking.
The technology works by mapping moving facial features from other people on to the painting, a technique known as puppeteering.
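The puppeteering idea can be sketched in a few lines: landmarks are read off each frame of a driving video and handed, together with the still image, to a generator. Everything below is a hypothetical toy, with NumPy placeholders standing in for the learned landmark detector and convolutional generator described in the paper.

```python
import numpy as np

def extract_landmarks(frame: np.ndarray, n_points: int = 68) -> np.ndarray:
    """Toy stand-in for a facial-landmark detector. A real pipeline would use
    a learned detector returning ~68 (x, y) keypoints per video frame; here we
    just generate deterministic pseudo-landmarks within the frame bounds."""
    h, w = frame.shape[:2]
    rng = np.random.default_rng(int(frame.sum()) % 2**32)
    return rng.uniform([0, 0], [w, h], size=(n_points, 2))

def puppeteer(still_image: np.ndarray, driver_frames: list) -> list:
    """Map driving-video landmarks onto a still image, producing one output
    frame per driver frame. The 'generator' here is a placeholder that pairs
    the still with the landmarks; the paper uses a trained CNN generator."""
    outputs = []
    for frame in driver_frames:
        landmarks = extract_landmarks(frame)
        outputs.append((still_image, landmarks))  # generator(still, landmarks)
    return outputs

# Usage: animate one still with a three-frame driving video.
still = np.zeros((64, 64, 3))
driver = [np.ones((64, 64, 3)) * i for i in range(3)]
frames = puppeteer(still, driver)
```

The key point the sketch illustrates is the data flow: the still image supplies identity, the driving video supplies motion, and only the landmarks cross from one to the other.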
The report said: “Several recent works have shown how highly realistic human head images can be obtained by training convolutional neural networks to generate them.
“In order to create a personalised talking head model, these works require training on a large dataset of images of a single person.”
The Moscow-based researchers collated source images of a real person to animate the picture. The more source images used, the more realistic the resulting movement.
The report explained: “It performs lengthy meta-learning on a large dataset of videos, and after that is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high-capacity generators and discriminators.
“Crucially, the system is able to initialise the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters.”
This meta-learning phase allows the model to work from only a few example images of a new subject.
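The two-phase scheme the quote describes can be sketched as: start from weights produced by meta-learning, then adapt them to a new person with a handful of images and a few gradient steps. The sketch below is a deliberately simplified assumption-laden toy: the "meta-learned" initialisation is just a fixed vector, and the paper's adversarial loss is replaced by plain mean-squared error for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "meta-learned" initialisation: in the paper this comes from
# lengthy meta-learning over a large video corpus; here it is simply a
# vector that already sits close to the space of plausible faces.
meta_init = np.full(16, 0.5)

def fine_tune(weights: np.ndarray, few_shots: np.ndarray,
              lr: float = 0.5, steps: int = 10) -> np.ndarray:
    """Few-shot fine-tuning sketch: adapt meta-learned weights to a new
    person using only the handful of example images in `few_shots` (one
    flattened feature vector per row). Mean-squared error stands in for
    the paper's adversarial generator/discriminator losses."""
    w = weights.copy()
    for _ in range(steps):
        grad = np.mean(w - few_shots, axis=0)  # d/dw of 0.5*||w - x||^2
        w -= lr * grad
    return w

# Two "images" of a previously unseen person, as flat feature vectors.
person = rng.uniform(0.4, 0.6, size=16)
few_shots = np.stack([person + rng.normal(0, 0.01, 16) for _ in range(2)])

adapted = fine_tune(meta_init, few_shots)
```

Because the initialisation is already close to the target, a few steps on two examples suffice to move the weights much nearer to the new person than the shared starting point was, which is the intuition behind the person-specific initialisation in the quote.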
The results are passable when only one picture is available, and become more realistic as more images are added.
The scientists hope the approach can learn highly realistic and personalised talking head models of new people, and even of portrait paintings.
However, the technology cannot move any part of the Mona Lisa’s upper torso.