The efficacy of deepfakes

Can we really write them off as “not a threat”?

A few days back, NPR put out an article discussing why deepfakes aren’t all that powerful in spreading disinformation. Link to article.

According to the article:

“We’ve already passed the stage at which they would have been most effective,” said Keir Giles, a Russia specialist with the Conflict Studies Research Centre in the United Kingdom. “They’re the dog that never barked.”

I agree, at least when it comes to Russian influence. There are simpler, more cost-effective ways to conduct active measures, like memes. Besides, America already has the infrastructure in place to combat influence ops, and has been doing so for a while now.

However, there are populations whose governments may not have the capability to identify a disinformation campaign and perform damage control when one hits, let alone a deepfake. A case in point: India.

the Indian landscape

The disinformation problem in India is far more sophisticated, and harder to combat, than in the West. There are a few reasons for this:

India has had a long-standing problem with misinformation: the 2019 elections, the recent CAA controversy and, even more recently, the coronavirus. In some cases, it has even led to mob violence.

All of this shows that the populace is easily influenced, and deepfakes are only going to make that easier. What’s worse is explaining to a rural crowd that something like a deepfake can even exist; comprehension and adoption of technology have always been slow in India, which can be attributed to socio-economic factors.

There is also a majority of the population that has already been influenced to a certain degree: the right wing. A deepfake of a Muslim leader trashing Hinduism would be eaten up instantly; by virtue of prior influence, and given the present circumstances, they are already inclined to believe it is true.

countering deepfakes

The thing about deepfakes is that the tech to spot them already exists. In fact, some can even be eyeballed: deepfake imagery tends to have odd artifacting that shows up on closer inspection, and deepfake videos of people tend to blink or move unnaturally. The problem, however, is that the general public cannot be expected to notice these at a quick glance, and the task of proving a fake is left to researchers and fact checkers. A rough sketch of one such check is below.
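To make the blinking point concrete, here’s a sketch of the kind of check a fact checker might run: compute the eye aspect ratio (EAR) from dlib’s 68-point facial landmarks and count blinks across a clip. The landmark model file, the threshold and the “abnormal blink rate means look closer” heuristic are all illustrative assumptions on my part, not a production detector, and not proof of anything on their own.

```python
# Illustrative blink counter: a low eye aspect ratio (EAR) means the eyes are
# closed; a closed-then-open transition counts as one blink. Thresholds, the
# model path and the video path are assumptions for the sake of the example.
import cv2
import dlib
from scipy.spatial import distance

EAR_THRESHOLD = 0.2  # eyes treated as "closed" below this ratio (assumed value)
LANDMARK_MODEL = "shape_predictor_68_face_landmarks.dat"  # standard dlib model

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(LANDMARK_MODEL)

def eye_aspect_ratio(eye):
    """EAR: ratio of vertical to horizontal distances between eye landmarks."""
    a = distance.euclidean(eye[1], eye[5])
    b = distance.euclidean(eye[2], eye[4])
    c = distance.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

def count_blinks(video_path):
    """Count blinks in a clip, assuming a single talking head in frame."""
    cap = cv2.VideoCapture(video_path)
    blinks, closed = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            # indices 36-41 and 42-47 are the two eyes in the 68-point scheme
            ear = (eye_aspect_ratio(points[36:42]) +
                   eye_aspect_ratio(points[42:48])) / 2.0
            if ear < EAR_THRESHOLD:
                closed = True
            elif closed:
                blinks += 1
                closed = False
    cap.release()
    return blinks

# A real talking head blinks roughly 15-20 times a minute; a count far outside
# that range is a reason to look closer, not a verdict. "suspect_clip.mp4" is a
# hypothetical file name.
print(count_blinks("suspect_clip.mp4"))
```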

Further, India does not have the infrastructure to combat deepfakes at scale. By the time a research group or think tank catches wind of one, the damage is likely already done. Besides, disseminating contradictory information, i.e. “this video is fake”, is a task of its own: public opinion has already been swayed, and the brain dislikes contradictions.

why haven’t we seen it yet?

Creating a deepfake isn’t trivial; or rather, creating a convincing one isn’t. I would also assume that most political propaganda outlets are just large social media operations: they lack the technical prowess and/or the funding to produce a deepfake. That doesn’t mean they never can.

It goes without saying, but this post isn’t specific to India. I’d say other countries with a similar socio-economic status are in a similar predicament. Don’t write off deepfakes as a non-issue just because America did.

Questions or comments? Send an email.