I was watching a White House press conference on Nov. 7 when CNN reporter Jim Acosta got into a heated exchange with President Donald Trump over questions relating to the Russian election-meddling investigations. The interaction was memorable because of some name-calling on the part of the president and some verbal pushback on the part of Acosta. During the exchange, a White House intern attempted to retrieve a microphone from Acosta so she could pass it to the next questioner, NBC’s Peter Alexander.
There was a brief struggle for the microphone before it was passed to NBC for the next question. This sort of action is highly unusual in any press conference, especially one at the White House. In my years of covering press conferences, I’d never seen one that contentious. Hours later, the White House announced that Acosta’s press credentials had been revoked.
Shortly after the announcement, White House Press Secretary Sarah Sanders tweeted a video that she said showed the reporter physically pushing away the intern, and she said that this was why Acosta’s pass had been revoked. When I looked at the video released by the White House, it was clear to me that something was wrong. The video did indeed appear to show Acosta striking the intern. So why didn’t I remember that from the time it happened?
Easy in This Case to Determine What Was Real, What Was Not
Because it didn’t. The press conference had been covered extensively by the television media at the time it took place, so original videos of the event in question were readily available. I watched the video from several sources, and my memory of the event agreed with the contemporaneous videos. Clearly, the White House video had been altered.
The specifics of the alteration appeared later in a number of places, including in The Washington Post, where the newspaper was able to show the original video and the altered one side-by-side. In this case, the video alteration was crudely done, making the changes readily apparent.
But with the video-manipulation tools already available, much better alterations are possible. Some, called “deep fakes,” use a combination of video-editing tools and AI to learn a subject’s appearance and movements, then draw on existing photographs and videos to make convincing fakes that can show the subject doing or saying almost anything. In fact, the Washington newspaper The Hill has posted a fake public service announcement starring Barack Obama. That video, despite the fact that it’s voiced by comedian Jordan Peele, is far more convincing than the White House video.
While today such video manipulation provides mostly a new sort of political theater, it poses real risks to the public’s view of reality, and to your company. The first risk is that the public will eventually stop believing video images of anything, and video will no longer be considered evidence of reality. But greater harm can come to your organization from people who want to damage it or simply gain some advantage over you.
How Does an Enterprise Defend Itself Against a Doctored Video?
As an illustration, suppose a video were to surface on YouTube or Twitter appearing to show your CEO making a racist or misogynistic statement in a meeting. Or suppose a video appeared that seemed to show a blatant case of sexual misconduct at a company function. What would you do?
Allegations of such conduct have already been shown to be extremely harmful. When the charges are verbal and there’s no evidence to back them up, a company and its executives may be able to survive them, absent other information such as a pattern of behavior. But what about a video?
There are firms such as Storyful that can examine video and detect fakes. But you can do a lot to make such detection easier. Perhaps the best way is to save video evidence of the public actions of your organization’s senior executives so that there is actual, verifiable video that you can show hasn’t been altered.
For now, at least, it may be best to make and preserve high-resolution digital video of what your company does in public and near-public settings. Those videos should include timing information, full metadata and, where possible, GPS data, and they should then be stored on their original media where they can’t be tampered with. You should also preserve searchable copies of those videos so that an event can be shown in its original form. That way, if someone uses the event to create a fake video, you have the original to prove it.
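One low-effort way to make such an archive verifiable is to record a cryptographic fingerprint of each original file at the time it’s created. The sketch below is a minimal illustration in Python; the manifest file name and the metadata fields are my own assumptions, not a prescribed format.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Stream the file in 1 MB chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_fingerprint(video_path: str, manifest_path: str = "video_manifest.json") -> dict:
    """Append the video's fingerprint and basic capture metadata to a JSON manifest."""
    path = Path(video_path)
    entry = {
        "file": path.name,
        "sha256": sha256_of_file(path),
        "size_bytes": path.stat().st_size,
        "recorded_at": datetime.now(timezone.utc).isoformat(),  # hypothetical field
    }
    manifest = Path(manifest_path)
    entries = json.loads(manifest.read_text()) if manifest.exists() else []
    entries.append(entry)
    manifest.write_text(json.dumps(entries, indent=2))
    return entry
```

If a clip circulating online doesn’t hash to the value recorded in the manifest for the same event, you have concrete evidence that the circulating copy has been altered.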
Still Images Often Used in Deep Fakes
Other types of imagery are a little easier to verify. For example, tracking down the origin of a still image that’s being used in an inauthentic way is now fairly easy using Google’s reverse image search. But detecting fake videos is more complex, and some have suggested cryptographically signing videos as a way to prove that they’re authentic. To do that, you first need the high-quality video I mentioned earlier.
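To give a sense of what that signing step might look like, here is a minimal sketch using the widely available Python cryptography package and an Ed25519 key pair. The in-memory key handling is purely illustrative; a real deployment would keep the private key in a hardware module or key vault.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Generate a signing key pair (in practice, generated once and stored securely).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()


def sign_video(video_bytes: bytes) -> bytes:
    """Sign the raw video bytes; the signature can be published alongside the file."""
    return private_key.sign(video_bytes)


def is_authentic(video_bytes: bytes, signature: bytes, pub: Ed25519PublicKey) -> bool:
    """Return True only if the signature matches this exact sequence of bytes."""
    try:
        pub.verify(signature, video_bytes)
        return True
    except InvalidSignature:
        return False


# Illustration: any single-byte change to the video invalidates the signature.
original = b"...raw video bytes..."  # placeholder for a real file's contents
sig = sign_video(original)
assert is_authentic(original, sig, public_key)
assert not is_authentic(original + b"x", sig, public_key)
```

In practice you would more likely sign the hash recorded in an archive manifest rather than the full file, but the principle is the same: the signature verifies only if the bytes are unchanged.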
But protecting your company may not require cryptographic techniques. Those videos can also be used to show that the executive being targeted was somewhere else at the time, making it obvious that the video was faked.
For most companies, such precautions may never be necessary. But then, there was a time when we didn’t think most companies needed the kind of security protections that are now routine, either. Maybe it’s time to acknowledge the risks and at least start taking steps to manage them.