At its core, political campaigning is about real-life encounters that bring citizens together and mobilise them to vote: fundraising, events, rallies, canvassing, town hall meetings, debates. In recent years, however, many of these activities have moved online, as political forces across the world have realised how powerful the internet can be as a tool for political mobilisation.
But never has online campaigning been so central to the electoral battle as it will be this year, with the outbreak of the novel coronavirus and the imposition of various restrictions on public gatherings.
Indeed, this is the first truly global pandemic of the digital age. Unlike the SARS, H1N1 or Ebola outbreaks, it has had a direct impact on elections worldwide. Some 50 countries have postponed elections, whether local or national, and many more are likely to follow suit in the coming months. Given that a cure or an effective vaccine for the virus may not be developed for a while, countries will be forced to hold elections with various coronavirus-related restrictions still in place.
This means that candidates and parties will not be able to campaign in person as usual. Instead, they will have to invest even more in online campaigning. And this is bad news for the electorate, given the epidemic of misinformation online.
Over the past few years, we have witnessed a growing number of examples of actors using online platforms, particularly social media, to launch vast disinformation campaigns aimed at influencing political events, including elections and referendums.
In the Brexit referendum and the US presidential election in 2016, revelations about the activities of the British firm Cambridge Analytica, which sought to influence the votes through Facebook, shocked the public in the UK and the US. In 2018, during Brazil's presidential elections, the use of mass messaging on WhatsApp to propagate false information also caused much outrage.
And yet big tech companies have been slow to take action and curb the harmful use of their platforms during election season. Our research at Democracy Reporting International identified several instances of manipulation of online public discourse in 12 countries in 2019 ahead of or during elections.
In Tunisia, for example, during its legislative and presidential votes, we uncovered Facebook pages focusing on entertainment with murky affiliation and ownership, which consistently posted and sponsored political messages. These pages operated in networks, sharing each other’s content and reinforcing disinformation narratives.
We found similar strategies in Sri Lanka, where celebrity-focused pages began sharing misleading political content in the run-up to the 2019 presidential elections. Some of these posts contained divisive rhetoric aimed at spreading hatred between the various religious and ethnic groups in the country.
In Libya, Facebook pages with unclear affiliation started touting the candidacy of Muammar Gaddafi’s son, Saif al-Islam, in the presidential elections, which were initially scheduled for 2019 but had to be postponed. One of them, called Mandela Libya, was created shortly after Saif al-Islam’s representatives visited Moscow in December 2019 and gained over 100,000 followers within a month of its creation. The page, which compares Saif al-Islam to South Africa’s Nelson Mandela, was one of the top sources for news on Gaddafi’s son, along with RT and Sputnik.
As online campaigning is slated to dominate electoral campaigns across the world this year and possibly into 2021, the volume of data and information related to elections and political discussions will increase massively, and so will the damage done by the problems tech companies have yet to resolve.
The COVID-19 outbreak has actually demonstrated that tech companies can weed out misinformation. Many platforms have tightened their rules on advertising, prohibiting ads that create a sense of urgency when referring to the coronavirus, such as ads implying a limited supply of medical gear or advertising substances that supposedly cure the infection.
Facebook recently announced it will notify users who have engaged with false content about the virus. Together with Twitter and YouTube, the company is taking down content that could cause harm, including posts from Brazilian President Jair Bolsonaro and Venezuela’s Nicolas Maduro that praised a dubious cure for COVID-19 and encouraged ending social distancing measures.
WhatsApp has limited message forwarding on its platform in an attempt to reduce the spread of misinformation about the virus. Twitter has started labelling tweets that contain deceptive or manipulated content, while Google has begun directing virus-related searches to reliable websites and removing Google Play apps that promise pandemic information not approved by a national government or medical institution.
While the harm caused by disinformation is clear and concrete during a public health crisis, we cannot ignore the long-term harm to democracy caused by inaction when it comes to elections.
Some steps can be taken to remedy this. For example, false information aimed at undermining trust in democratic elections or deceiving voters could easily be debunked and taken down from social media in the same way that tech companies are now removing misleading content on COVID-19. To judge the validity of information, tech companies rely on materials provided by the World Health Organization. In democratic countries, electoral bodies and international watchdogs could play a similar role to ensure the safety of the electoral process.
The measures taken by the big tech companies to reduce disinformation related to COVID-19 show us that they can do so when it comes to election-time disinformation campaigns, too. If they do not take action, free and fair elections could also become a victim of the coronavirus pandemic.