Who is able to produce:
Professional – Amateur – Anyone.
However, the technology is advancing rapidly, and it is predicted that in the near future anyone will be able to produce deepfakes.
There are applications available that users can download and start experimenting with.
Level of deception:
Low – Average – High – Very high.
Detecting deepfakes is a challenging problem. Amateurish deepfakes can sometimes be detected by the naked eye, but deepfakes are getting better all the time, and soon we will have to rely on digital forensics to detect them, if we can detect them at all.
Deepfake is an AI-based technology used to produce or alter video content by editing faces (swapping faces or synthesising new facial expressions). The first deepfakes were created by replacing people's faces in videos with celebrity faces, particularly in pornographic video clips. This was done in December 2017 by a Reddit user known as "deepfakes" (a portmanteau of "deep learning" and "fake", after whom this type of content was named), who used deep learning to edit the faces. To spread false information, deep learning is also used to synthesise new facial expressions for celebrities, simulating facial muscle movements so that the person appears to say fabricated text they never actually said.
Working principle (what and how does it do):
Deepfake video is created by using two competing AI systems – one is called the generator, and the other is called the discriminator. The generator creates a fake video clip and then asks the discriminator to determine whether the clip is real or fake. Each time the discriminator accurately identifies a video clip as being fake, it gives the generator a clue about what not to do when creating the next clip.
As the generator gets better at creating fake video clips, the discriminator gets better at spotting them. Conversely, as the discriminator gets better at spotting fake video, the generator gets better at creating them.
Together, the generator and discriminator form what is called a generative adversarial network (GAN). The first step in establishing a GAN is to identify the desired output and create a training dataset for the generator. Once the generator begins producing output of an acceptable level, video clips can be fed to the discriminator.
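The generator/discriminator feedback loop described above can be sketched with a toy numeric example. Here the "videos" are single numbers: real data comes from a Gaussian around 4.0, the discriminator flags samples far from its running estimate of the real data, and each time the generator is caught it nudges its parameter toward what the discriminator accepts. This is only a hedged illustration of the adversarial loop, not a real GAN (a real GAN trains two neural networks with gradients); all class names, thresholds, and learning rates below are invented for the sketch.

```python
import random

random.seed(0)

REAL_MEAN = 4.0  # the "real" data distribution the generator tries to imitate

def sample_real():
    """Draw one sample of real data."""
    return random.gauss(REAL_MEAN, 1.0)

class Discriminator:
    """Keeps a running mean of real samples; flags anything far from it as fake."""
    def __init__(self):
        self.mean_estimate = 0.0
        self.n = 0

    def train_on_real(self, x):
        # incremental running-mean update
        self.n += 1
        self.mean_estimate += (x - self.mean_estimate) / self.n

    def is_fake(self, x, threshold=1.0):
        return abs(x - self.mean_estimate) > threshold

class Generator:
    """Produces samples around mu; adjusts mu whenever it gets caught."""
    def __init__(self):
        self.mu = 0.0

    def sample(self):
        return random.gauss(self.mu, 1.0)

    def learn_from_feedback(self, disc):
        # "a clue about what not to do": move toward the accepted region
        self.mu += 0.1 * (disc.mean_estimate - self.mu)

disc, gen = Discriminator(), Generator()
for step in range(2000):
    disc.train_on_real(sample_real())  # discriminator studies real data
    fake = gen.sample()
    if disc.is_fake(fake):             # caught: generator gets feedback
        gen.learn_from_feedback(disc)
```

After the loop, `gen.mu` has drifted close to the real mean of 4.0: as the discriminator's estimate sharpens, the generator's output becomes harder to distinguish from real data, which is the essence of the adversarial dynamic.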
Gizmodo article about deepfake: https://gizmodo.com/insanely-accurate-lip-synching-tech-could-turn-fake-new-1796843610.
Synthesising Obama video:
If a deepfake is not professional, one can spot that shadows do not fall as they should, or that the person is not blinking. But if the deepfake is of higher quality, there is no way to recognise it with the naked eye. Many firms are trying to develop software to help identify deepfakes: https://techcrunch.com/2020/09/14/sentinel-loads-up-with-1-35m-in-the-deepfake-detection-arms-race/.
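One of the naked-eye cues above, an unnaturally rare blink, can be turned into a crude automated check. The sketch below assumes an eye-aspect-ratio (EAR) value has already been extracted for each video frame (e.g. from facial landmarks); the 0.2 closed-eye threshold and the five-blinks-per-minute cutoff are illustrative assumptions, not calibrated values from any real detector.

```python
def count_blinks(ear_series, closed_threshold=0.2):
    """Count blinks as contiguous runs of frames where the
    eye-aspect ratio (EAR) drops below the closed-eye threshold."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_threshold and not closed:
            blinks += 1      # eye just closed: start of a new blink
            closed = True
        elif ear >= closed_threshold:
            closed = False   # eye reopened
    return blinks

def looks_suspicious(ear_series, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate is implausibly low for a real person."""
    minutes = len(ear_series) / (fps * 60)
    if minutes == 0:
        return False
    return count_blinks(ear_series) / minutes < min_blinks_per_minute
```

For example, a one-minute clip at 30 fps with ten blinks would pass, while the same clip with a single blink would be flagged. Real detection systems combine many such signals with learned models; a single heuristic like this is easy for a high-quality deepfake to defeat.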
The US military is also funding an effort to catch deepfakes: https://www.technologyreview.com/s/611146/the-us-military-is-funding-an-effort-to-catch-deepfakes-and-other-ai-trickery/.