Deepfake Artifact
A Deepfake Artifact is an artifact that was produced by a deep learning-based generation system.
- See: Deep Learning, Human Image Synthesis, Superimposition, Generative Adversarial Network, Fake News.
References
2019
- (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/Deepfake Retrieved: 2019-02-04.
- Deepfake, a portmanteau of "deep learning" and "fake", is an artificial intelligence-based human image synthesis technique. It is used to combine and superimpose existing images and videos onto source images or videos using a machine learning technique called a "generative adversarial network" (GAN).[1] The combination of the existing and source videos results in a fake video that shows a person or persons performing an action at an event that never occurred in reality.
Such fake videos can be created to, for example, show a person performing sex acts they never took part in, or can be used to alter the words or gestures a politician uses to make it look like that person said something they never did. Because of these capabilities, deepfakes may be used to create fake celebrity pornographic videos or revenge porn.[2] Deepfakes can also be used to create fake news and malicious hoaxes.
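The excerpt above describes deepfake artifacts as the output of GAN-based human image synthesis. As a rough illustration of the adversarial training idea behind a GAN (a generator learning to produce samples that a discriminator cannot distinguish from real ones), the following is a minimal PyTorch sketch; the layer sizes, optimizer settings, and the random placeholder "real" batch are illustrative assumptions, not details of any particular deepfake pipeline, which would typically use convolutional networks on face images or video frames plus a face-swapping or reenactment stage.
```python
# Minimal GAN training sketch (PyTorch). Illustrative only: dimensions,
# learning rates, and the random "real" batch are assumptions, not taken
# from any specific deepfake system.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # e.g., flattened 28x28 grayscale images

# Generator: maps random noise vectors to synthetic "images" in [-1, 1].
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: estimates the probability that an input image is real.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real_images = torch.rand(32, image_dim) * 2 - 1  # placeholder real batch
    noise = torch.randn(32, latent_dim)
    fake_images = generator(noise)

    # Discriminator update: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    loss_d = (criterion(discriminator(real_images), torch.ones(32, 1)) +
              criterion(discriminator(fake_images.detach()), torch.zeros(32, 1)))
    loss_d.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    opt_g.zero_grad()
    loss_g = criterion(discriminator(fake_images), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()
```
The two alternating updates are the core of the adversarial setup: the discriminator is trained to separate real from generated samples, while the generator is trained against the discriminator's current judgment, so the generated artifacts become progressively harder to distinguish from real data.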