The efficacy of deepfakes
Can we really write it off as "not a threat"?
A few days back, NPR put out an article discussing why deepfakes aren’t all that powerful in spreading disinformation. Link to article.
According to the article:
“We’ve already passed the stage at which they would have been most effective,” said Keir Giles, a Russia specialist with the Conflict Studies Research Centre in the United Kingdom. “They’re the dog that never barked.”
I agree. This might be the case when it comes to Russian influence. There are simpler, more cost-effective ways to conduct active measures, like memes. Besides, America already has the infrastructure in place to combat influence ops, and has been doing so for a while now.
However, there are populations whose governments may not have the capability to identify a disinformation campaign and perform damage control when one hits, let alone a deepfake. A case in point: India.
the Indian landscape
The disinformation problem in India is far more sophisticated, and harder to combat, than in the West. There are a couple of reasons for this:
- The infrastructure for fake news already exists: WhatsApp
- Fact-checking media in 22 different languages is non-trivial
India has had a long-standing problem with misinformation: the 2019 elections, the recent CAA controversy and, even more recently, the coronavirus. In some cases, it has even led to mob violence.
All of this shows that the populace is easily influenced, and deepfakes are only going to make this easier. Worse still is the task of explaining to a rural audience that something like a deepfake can even exist; comprehension and adoption of technology have always been slow in India, which can be attributed to socio-economic factors.
There is also a majority of the population that has already been influenced to a certain degree: the right wing. A deepfake of a Muslim leader trashing Hinduism would be eaten up instantly. They are inclined to believe it is true, by virtue of prior influence and given the present circumstances.
The thing about deepfakes is that the tech to spot them already exists. In fact, some can even be eyeballed: deepfake imagery tends to have strange artifacting that shows up on closer inspection, and deepfake videos of people, specifically, blink or move unnaturally. The problem, however, is that the general public cannot be expected to notice these at a quick glance, and the task of proving a fake is left to researchers and fact checkers.
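The blinking cue can even be turned into a crude screening heuristic. Below is a minimal sketch, not a real detector: it assumes per-frame eye-aspect-ratio (EAR) values have already been extracted by some facial-landmark pipeline (the input here is hypothetical), and the blink-rate threshold is illustrative, based on the observation that people blink several times a minute while early deepfakes often barely blinked at all.

```python
# Minimal sketch: flag a clip as suspicious if its blink rate is abnormally low.
# Assumes a list of per-frame eye-aspect-ratio (EAR) values produced by some
# (hypothetical) facial-landmark detector; thresholds here are illustrative.

def count_blinks(ear_values, ear_threshold=0.2, min_frames=2):
    """Count blinks as runs of >= min_frames consecutive frames with EAR below threshold."""
    blinks = 0
    run = 0
    for ear in ear_values:
        if ear < ear_threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # blink still in progress at end of clip
        blinks += 1
    return blinks

def blink_rate_suspicious(ear_values, fps=30, min_blinks_per_min=6):
    """Heuristic: humans blink well over 6 times a minute; far fewer is a red flag."""
    minutes = len(ear_values) / (fps * 60)
    if minutes == 0:
        return False
    return count_blinks(ear_values) / minutes < min_blinks_per_min
```

A clip of a face that never blinks would trip this check, while normal footage would not. Of course, newer generation techniques have learned to blink, which is exactly why the real work still falls to researchers rather than rules of thumb like this one.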
Further, India does not have the infrastructure to combat deepfakes at scale. By the time a research group or think tank catches wind of one, the damage is likely already done. Besides, disseminating contradictory information, i.e. "this video is fake", is a task of its own. Public opinion has already been swayed, and the brain dislikes contradictions.
why haven’t we seen it yet?
Creating a deepfake, or at least a convincing one, isn't trivial. I would also assume that most political propaganda outlets are just large social media operations: they lack the technical prowess and/or the funding to produce a deepfake. This doesn't mean they can't ever.
It goes without saying, but this post isn’t specific to India. I’d say other countries with a similar socio-economic status are in a similar predicament. Don’t write off deepfakes as a non-issue just because America did.