2020 has been a breakout year for information warfare, disinformation, and misinformation. The Covid-19 pandemic has been accompanied by an infodemic: fabricated claims about who was responsible for the outbreak, bogus cures and treatments, and conspiracy theories.
Social and traditional media have been used to sway U.S. voters’ preferences and perspectives ahead of the 2020 US presidential election, to shift U.S. policies, to increase discord, and to undermine confidence in the established democratic process.
Emerging technologies like artificial intelligence accelerate the threats — but are also at the center of tech developments to detect and protect from fake news and misinformation. We take a close look at tactics, strategies, and the evolving technologies behind information warfare and analyze the key elements of AI-based disinformation kill chains.
Information warfare at a glance
Disinformation and propaganda are nothing new — they have simply evolved and adapted to today’s technical and political ecosystem. Because the resources, costs, and technical skills needed to launch an online disinformation campaign are quite low, the number of actors and the amount of malicious content produced have increased tremendously since 2010.
Threats include domestic and international entities – states, pressure groups, and individuals. In fact, domestic players are expected to have a greater impact on the 2020 US election than foreign actors, according to a report by New York University (NYU).
Digital attackers have developed an effective blueprint for information warfare and fake-news campaigns: a 7-step process that the US Department of Homeland Security has labeled the disinformation kill chain.
It usually begins with an objective, such as generating support for a specific goal — removing sanctions, passing a law, and so on. From there, attackers thoroughly analyze the online behavior of their target audience, quickly build an online infrastructure (mainly websites), and begin producing and distributing manipulative content (usually social media posts). They craft a convincing narrative and, to make it more compelling, fake supporting evidence such as news articles, infographics, statistics, and quotes. Finally, they distribute this material at scale through computer-controlled accounts, or bots.
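The sequence just described can be sketched as a simple data model. The stage names below paraphrase the steps in this article — they are illustrative, not the official DHS taxonomy:

```python
from enum import Enum
from typing import Optional

class KillChainStage(Enum):
    """Stages of a disinformation kill chain, paraphrased from the
    7-step process described above (names are illustrative)."""
    OBJECTIVE = 1            # define the campaign goal (e.g. removing sanctions)
    AUDIENCE_ANALYSIS = 2    # study the target audience's online behavior
    INFRASTRUCTURE = 3       # stand up websites and accounts
    CONTENT_PRODUCTION = 4   # produce manipulative content such as social posts
    NARRATIVE = 5            # craft a convincing story line
    FAKE_EVIDENCE = 6        # fabricate articles, infographics, stats, quotes
    AMPLIFICATION = 7        # distribute at scale via bots

def next_stage(stage: KillChainStage) -> Optional[KillChainStage]:
    """Return the following stage, or None once the chain is complete."""
    members = list(KillChainStage)
    idx = members.index(stage)
    return members[idx + 1] if idx + 1 < len(members) else None
```

Modeling the stages explicitly like this is how defensive tooling can tag observed activity (a new website, a burst of bot posts) to a point in the chain and reason about what is likely to come next.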
Over the past few years, this 7-step process has unfolded time and again and digital warfare attackers have tried to perfect it with new technologies and audiences.
According to the “Global Inventory Of Organised Social Media Manipulation,” 70 countries used online platforms to spread disinformation in 2019 — an increase of 150% from 2017. Most of the efforts focused domestically on suppressing dissenting opinions and disparaging competing political parties. Several countries — including the likes of the US, China, Venezuela, Russia, Iran, and Saudi Arabia — tried to influence public opinion and citizens of foreign countries.
In the case of Covid-19, an April 2020 poll in the US revealed that more than 50 percent of Americans had already read or watched news about the virus that had been completely fabricated. This applied to the deadly nature of the virus as well as basic facts about its origin and outbreak.