The last few days saw someone’s career destroyed on the basis of a shaky handheld video, posted by one party to a dispute who clearly had an agenda.
We all know the script. A video (possibly edited, and certainly not showing the entirety of the incident) is posted, revealing some perceived bad behavior. The person in the video is quickly tracked down by an online mob, their boss contacted with demands to fire them.
The boss, waking up to 300 emails and LinkedIn messages, panics and immediately spews out a social media response about how their employee’s actions “don’t represent our values” along with a pink slip in the employee’s inbox.
Jen Monroe suggested in this piece today that we might have reached a turning point in supporting the online outrage mobs. Some people have been lukewarm towards this particular case, possibly because the instigator has a history of “outrage” (in a previous incident, he suggested an AirBnB host was a satanist).
I hope she’s right, but I’m less sanguine.
People obviously didn’t learn from the Covington Catholic kerfuffle that rushing to judgment on the basis of an edited video might not be the wisest approach. So I doubt they’ll adapt fast enough to the next stage, when the problem won’t just be that the video doesn’t show the whole incident.
The issue will be whether it happened at all.
The technology of deepfakes was highlighted earlier this year by some fake Tom Cruise videos on TikTok. Those videos were harmless; they didn’t portray Cruise in a negative light. But it doesn’t take much imagination to see how they could have.
My friend Matthew Ferraro has worked on the issue of deepfakes, particularly in the context of national security. That is obviously a real threat - but anyone who thinks the same threat doesn’t apply elsewhere hasn’t been paying attention.
Right now, producing a good deepfake requires quite a bit of preparation and tech. But it won’t stay that way for long. Remember when effective photo editing required Photoshop, and someone skilled enough to use it well? Now anyone can download an app to filter their pics, change their appearance, or even paste themselves into movie scenes.
Will a basic deepfake app produce a result good enough to pass expert scrutiny? Hell no. But will it likely pass muster as shaky, grainy phone-camera footage of a supposed argument in a parking lot? For sure. And plenty of us have left lots of raw material online to work with - videos and audio all over social media.
If you think some of these grievance entrepreneurs wouldn’t be motivated to do this, look at how richly they’ve been rewarded already. “Twitter, do your thing” is a hell of a drug.
We’re going to end up with some cyber-forensic innocence project, but it won’t be able to move fast enough to vindicate the wrongly accused before the smug (or panicked) boss or publisher has dropped the victim like a hot coal.