Technology used to manipulate and mislead with video has been in the headlines in recent years. However, audio-visual manipulation can be contextual, rather than technical. (For example, see this story about an ornithologist who helped expose a video falsely attributed to Colombian guerrillas in an attempt to sway an election -- he identified a bird call in the footage that revealed where it was actually filmed.) A relabeled video with an inaccurate description can be even harder to detect than digital trickery -- keep the other tips you've learned via this guide in mind, and verify the accuracy of a video as you would any other informational resource. For instance, you can apply the SIFT framework discussed on this guide's homepage. Explore below for more information about digital manipulation in audio and video.
Video content is facing increased scrutiny as media manipulators take advantage of everyday tools ("cheap fakes") and sophisticated Artificial Intelligence-based software ("deep fakes") to create deceptive content. CNN's interactive "Is that video real?" page helps users learn how to home in on telltale features of manipulated video and audio content.
This infographic (click for a closer look), created by the research institute Data & Society, charts examples of media manipulation according to the level of technical expertise required to create problematic videos. Data & Society's 2019 report, "Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence", provides greater context -- including discussion of the effect of all types of media manipulation on those who are politically, socially, and economically vulnerable, as well as the role of this manipulation in reinforcing certain power structures.
In this video, Purdue University engineers discuss Artificial Intelligence-based tools being developed to detect manipulated media. You can learn more about this initiative in this 2019 news article: "'Deepfakes', manipulated video and audio, pose threat to elections".