Key elements of the disinformation kill chain
Key elements of the disinformation kill chain include reputational manipulation, automated laser phishing, and computational propaganda. Its most important tools are deepfakes and shallowfakes.
- Diplomacy & reputational manipulation: This entails using advanced digital deception technologies to provoke unfounded diplomatic or military reactions by an adversary, or to impersonate and delegitimize leaders and influencers of opposing groups and states.
- Automated laser phishing: This comprises using malicious AI to hyper-target hitherto trustworthy entities, manipulating these targets into acting or communicating in ways they otherwise would not have.
- Computational propaganda: This mainly relies on social media, exploiting human psychology with rumors and gossip and deploying algorithms to manipulate public opinion.
Social messenger services such as Telegram, Facebook Messenger, and WhatsApp play a crucial role in these tactics. Because these apps encrypt messages, outside observers cannot easily trace where a message originated. Examples of such campaigns include:
- In 2018, false rumors about roving kidnappers spread on WhatsApp, resulting in the killing of more than 24 innocent people in India.
- In 2018, disinformation campaigns in Brazil linking vaccination to deaths spread on WhatsApp and threatened the government’s efforts to vaccinate citizens against yellow fever.
- In 2020, Covid-19-related misinformation, from fake cures to false claims about the virus’s causes and origins, spread across all messenger platforms globally.
Faking Video and Audio
Correctly timing the release and distribution of a controversial video or audio recording can sway the outcome of an election, compromise peace talks, derail trade negotiations, or influence a referendum.
The ability to impersonate politicians on digital platforms poses a reputational risk to these individuals. Advances in deepfake and shallowfake technology are quickly bringing these threats closer to reality.
Deepfakes are hyper-realistic doctored images and videos. Historically, creating realistic fakes required extensive editing expertise and custom tools. Today, generative adversarial networks (GANs), a class of unsupervised learning algorithms, automate this process and can produce increasingly sophisticated fakes with ease. Code for creating convincing deepfakes is open-source and available to anyone in software packages such as DeepFaceLab. This makes deepfakes a viable tool for all kinds of hackers and manipulators.
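The adversarial idea behind GANs can be illustrated with a toy sketch: a one-parameter "generator" learns to produce scalars that a logistic "discriminator" can no longer distinguish from real samples. Everything here (the scalar models, learning rate, and target distribution) is an illustrative assumption, not a real deepfake pipeline, which would use deep networks over images.

```python
import math
import random

random.seed(0)

def sigmoid(x: float) -> float:
    # Numerically clamped logistic function.
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, x))))

# "Real" data: scalar samples centred on 4.0 (a stand-in for genuine images).
def real_sample() -> float:
    return random.gauss(4.0, 0.5)

theta = 0.0        # generator parameter: fake = theta + 0.5 * noise
w, c = 0.0, 0.0    # discriminator: D(x) = sigmoid(w * x + c)
lr = 0.03

for _ in range(3000):
    fake = theta + 0.5 * random.gauss(0.0, 1.0)
    real = real_sample()

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        grad = sigmoid(w * x + c) - label   # d(cross-entropy)/d(logit)
        w -= lr * grad * x
        c -= lr * grad

    # Generator step: adjust theta so the discriminator scores the fake as real.
    p = sigmoid(w * fake + c)
    theta -= lr * (p - 1.0) * w             # chain rule: d(fake)/d(theta) = 1

# After training, generated samples cluster near the real data mean (4.0),
# meaning the discriminator can no longer reliably tell fake from real.
gen_mean = sum(theta + 0.5 * random.gauss(0.0, 1.0) for _ in range(500)) / 500
```

The same two-player dynamic, scaled up to convolutional networks over pixels, is what lets GAN-based tools synthesize faces the human eye cannot flag as fake.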
The number of online deepfakes has exploded recently; according to Sensity, a startup tracking deepfake activity, it doubles roughly every six months. The vast majority of deepfake videos are pornographic, but a small percentage have political aims. And often the fake nature of a video cannot be determined conclusively, casting lasting doubt on the people it targets.
This uncertainty gives rise to an alarming phenomenon known as the “Liar’s Dividend”: anyone can plausibly deflect responsibility by declaring a genuine image or video fake, a tactic aimed at undermining truth itself.
Less sophisticated approaches to video manipulation, known as “shallowfakes,” involve speeding up, slowing down, or otherwise altering a video; these, too, can have devastating effects on a person’s reputation. A shallowfake of Speaker of the House Nancy Pelosi in May 2019 gave the impression that she was drunk and slurring her words. Donald Trump retweeted the video, which received more than two million views within two days.
Malicious AI manipulating people
The number of data points available online about any particular person has risen tremendously in recent years, ranging from 1,900 to 10,000 for the average Western citizen. This information covers personal health, demographic characteristics, political views, and more.
Advertising companies use such data points to target individuals with personalized ads, while brands use them to develop new products and political parties analyze them to target voters.
Disinformation kill chain actors also prize such information, as personal data usually plays a significant role in the early stages of a campaign. First, actors use it to target individuals and groups potentially sympathetic to their message. Second, hackers may use it to craft sophisticated phishing attacks and to collect sensitive information, often by hijacking personal accounts.
Targeting audiences, be it via online ads or editorial content, is a crucial part of the disinformation chain.
While online reconnaissance can identify the social media groups, pages, and forums most hospitable to a divisive or targeted message, buying online ads provides another useful tool for targeting individuals meeting a particular profile.
In the lead-up to the 2020 US presidential election, an unknown entity behind the website “Protect My Vote” purchased hundreds of ads that yielded hundreds of thousands of views on Facebook. Promoting fears of mail-in voter fraud, these ads targeted older voters in specific swing states considered more likely to be sympathetic to the message. The ads made unsubstantiated claims and, in one instance, misconstrued a quote by basketball star LeBron James.
The availability of personal data online also supercharges phishing attacks by enabling greater personalization. While many phishing attempts are unsophisticated and target thousands of individuals with the hope that just a few take the bait, a portion of hyper-targeted attacks seek large payouts in the form of high-profile accounts and confidential data.
Selectively sharing personal or sensitive information provides disinformation campaigns with a sense of authenticity, and leaking the information ahead of significant events increases its impact.