Computational Propaganda in Social Media: A Story from Brazil

Brazil follows the global trend in which traditional media is rapidly losing ground to alternative, internet-based news sources that use social media and messaging applications to spread their information. The large-scale protests of 2013 were a catalyst for the creation of new online, alternative news outlets. Unlike the old media, which tried to depict the riots as the work of a small anarchist minority, these new outlets were more interested in exposing the failings of the state and the true scale of public support for the protestors.

   The dominant platforms used by these alternative news sources are Facebook and YouTube, with the former being the social media site on which Brazilians overwhelmingly spend their time. It is thus no surprise that Brazilian political parties have begun incorporating these technologies into their campaigns in order to broaden their electoral base. As a result, marketing companies have been using big data and automated systems in online propaganda campaigns to promote their clients’ content.

   As Symantec has noted, Brazil ranks eighth in the world in terms of online bot activity. According to the Spamhaus Project, a consortium that monitors networks worldwide, 485,133 bots were detected in Brazil on the 17th of May 2017, with only China, India, Russia and South Africa displaying higher bot activity. As one political scientist put it: “The use of bots is not something that just came about, they have been working for at least six years here in Brazil, and now it is becoming more common. Now the bots are becoming more sophisticated, the technology is becoming more sophisticated, such as cyborgs that are a mixture of human and bot, something more efficient than bots”.

   A prime example is the 2014 presidential election, during which computational propaganda campaigns were used to promote the candidates’ messages and, later on, to destabilise President Rousseff’s government and contribute to her political downfall.

The Pre-Election Day Computational Propaganda Campaign

   During the 2014 presidential elections, both candidates (Dilma Rousseff and Aecio Neves) were supported by bots. The most notable example is the online activity during the presidential debate between the two. During the first 15 minutes of the debate, tweets with hashtags relating to Neves tripled, while tweets regarding Rousseff saw no similar surge, an indication that bots had been deployed to boost Neves’ online presence. This was later supported by Rousseff’s online group, Muda Mais, which detected more than 60 automated accounts whose sole purpose was to boost Neves’ campaign. In addition, her party detected various accounts on Facebook, Twitter and other social media that also appeared to be run by bots. These accounts were linked to a businessman who was paid BRL 130,000 to support Neves’ campaign.
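   The pattern described here, a sudden, one-sided jump in hashtag volume inside a fixed time window, is also the kind of signal analysts look for when hunting for bot amplification. Below is a minimal, illustrative Python sketch of that comparison; the tweet format, field names and thresholds are assumptions for illustration, not the actual method used by Muda Mais.

```python
from collections import Counter
from datetime import timedelta

def tweets_per_minute(tweets, hashtag):
    """Count tweets carrying a given hashtag, bucketed by minute.
    Each tweet is assumed to be a dict: {"created_at": datetime, "hashtags": [str, ...]}."""
    buckets = Counter()
    for t in tweets:
        if hashtag.lower() in (h.lower() for h in t["hashtags"]):
            buckets[t["created_at"].replace(second=0, microsecond=0)] += 1
    return buckets

def surge_ratio(tweets, hashtag, debate_start, window_minutes=15):
    """Compare hashtag volume during the first minutes of the debate with an
    equally long window immediately before it."""
    window = timedelta(minutes=window_minutes)
    buckets = tweets_per_minute(tweets, hashtag)
    during = sum(v for ts, v in buckets.items() if debate_start <= ts < debate_start + window)
    before = sum(v for ts, v in buckets.items() if debate_start - window <= ts < debate_start)
    return during / before if before else float("inf")
```

   A ratio of roughly three for one candidate’s hashtags, with no comparable jump for the other’s, would mirror the asymmetry reported during the 2014 debate.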

   Leaked internal party documents confirmed that Rousseff was also deploying bots during the presidential campaign, albeit to a lesser degree than Neves. According to these documents, Neves’ bot campaign was extensive, encompassing bots on both social networks and private messaging apps such as WhatsApp, with around 10 million Brazilian reais spent on their acquisition and deployment. WhatsApp in particular proved very effective for spreading political content, with fake accounts infiltrating private groups in order to share articles and posts from other social media (Facebook, Twitter). These accounts were also used to measure the public’s reaction to the party’s communication campaign and to detect hot topics that the party could incorporate into its agenda.

The Post-Election Day Computational Propaganda Campaign

   Despite his defeat at the ballot box, Neves’ online campaign spending did not cease after the presidential election; it continued in order to support groups that opposed President Rousseff. Neves maintained his online presence through the Facebook groups “Revoltados ON LINE” and “Vem Pra Rua” and started pushing for impeachment in November 2014, mere weeks after the October election. He was joined by other opposition groups which gathered support against President Rousseff. Later, in March 2015, bot activity was detected aiming to promote protests against the president and even her impeachment.

   Successive protests took place in April, August and December of 2015 and continued into 2016, bringing millions of Brazilians onto the streets both against and in favour of President Rousseff. During that time, significant bot activity was detected on Twitter (both for and against the government), along with extensive organising efforts on private networks. In the end, President Rousseff was suspended on the 12th of May 2016 and removed from office at the end of August that year.

   While a plethora of issues led to that outcome, such as widespread corruption, a struggling economy and a general loss of faith in politicians, the online bot campaign against her acted as a catalyst by speeding up her political downfall. According to Rousseff’s party, Neves never ended his computational propaganda campaign: it gathered millions of members in opposition groups and pages, spread his message to roughly 80 million people and strengthened both the social movements against Rousseff and public support for her impeachment.

   In short, this example shows that social media allows politicians to keep campaigning well after the elections are over and the result has been announced. The online groups they utilise identify their target audience using the data collected about their targets’ preferences, the type of content they enjoy, the demographic group they belong to and their personal contacts. They are also able to infiltrate closed groups (such as those on WhatsApp and Facebook) and spread their message, often with the extra help provided by bots. Such “joint” campaigns of bots and human actors were not limited to the presidential elections but were later used in the Brazilian municipal elections as well.
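   To make that targeting step concrete, here is a deliberately simplified Python sketch of how collected profile data might be segmented into an audience. The Profile fields and the build_audience function are hypothetical, illustrating the general idea rather than any tool known to have been used in Brazil.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    user_id: str
    age_group: str                                        # e.g. "18-24"
    interests: set = field(default_factory=set)           # topics the user engages with
    preferred_content: set = field(default_factory=set)   # e.g. {"video", "news"}

def build_audience(profiles, topic, age_groups, content_type):
    """Select users who match a topic of interest, a demographic bracket and a
    preferred content type -- the segmentation step described above."""
    return [p.user_id for p in profiles
            if topic in p.interests
            and p.age_group in age_groups
            and content_type in p.preferred_content]
```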

Like if you are a cyborg

   The 2016 Rio de Janeiro municipal elections witnessed an extensive cyborg campaign of the kind mentioned above: one that uses a combination of bots and real accounts/human actors to spread its message.

   This was evident in the “doe um like” (“donate a like”) feature, which was prominent on some candidates’ sites. It allowed a willing Facebook user to “donate” their ability to like and share content to their candidate of choice for a period of three months. After the user agreed to make that “donation”, their profile was captured by the program and became part of the candidate’s bot army. The result was armies of real accounts carrying out automated tasks to promote their candidate’s message. Much of this activity was again focused on closed, private groups, such as those on Facebook and WhatsApp, since they are harder to monitor.
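   A scheme like this leaves a distinctive statistical fingerprint: many otherwise ordinary accounts liking or sharing the same item within seconds of one another. The Python sketch below shows one way such coordination could in principle be flagged; the event format, time window and threshold are assumptions for illustration, not a tool actually deployed in Rio.

```python
from collections import defaultdict
from datetime import timedelta

def coordinated_bursts(events, window_seconds=10, min_accounts=50):
    """Flag posts that receive likes/shares from an unusually large number of
    distinct accounts within a short time window -- the kind of synchronised
    behaviour a 'donated like' scheme would produce.
    Each event is assumed to be a dict: {"account": str, "post_id": str, "ts": datetime}."""
    by_post = defaultdict(list)
    for e in events:
        by_post[e["post_id"]].append(e)

    window = timedelta(seconds=window_seconds)
    flagged = {}
    for post_id, evs in by_post.items():
        evs.sort(key=lambda e: e["ts"])
        for i, start in enumerate(evs):
            burst = {e["account"] for e in evs[i:] if e["ts"] - start["ts"] <= window}
            if len(burst) >= min_accounts:
                flagged[post_id] = len(burst)   # size of the synchronised burst
                break
    return flagged
```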

So what if it’s a bot?

   Another example of how bots can be deployed to spread their creator’s message and orchestrate computational propaganda campaigns comes from researchers at the Federal University of Minas Gerais, who created two fake Twitter accounts to understand how bots could infiltrate social networks, gain followers, spread messages and interact with real people.

   The experiment started in 2011 with one account that only followed users and another that tweeted and retweeted. Both accounts managed to gather followers, and the one set to reply to users proved the more successful of the two, retaining around 2,000 followers even after it stopped its activity and declared itself a fake account created for research purposes. The account posed as a young Globo journalist disseminating news articles and tweets, and it managed to draw reactions from Brazilian celebrities, which, according to the researchers, shows how a bot can fool human users and use them to promote its message.

Computational Propaganda and the Way Forward

  What we have learnt from these cases of computational propaganda in Brazil is that governments, legal systems and administrative organisations are woefully unprepared to deal with online computational propaganda campaigns. Whether this is because legislation and administrative practice have not caught up with the digital revolution of the past two decades, or simply because people in positions of power fail to grasp the gravity of the situation, organised online efforts to drive and control public opinion remain hard to detect and even harder to stop.

   With the use of bots, real people or a combination of the two, computational propaganda campaigns can alter political outcomes even after the last vote has been counted and produce real-world consequences. As we saw, Neves, despite losing to Rousseff at the polls, ultimately succeeded in taking his political rival down. Through a computational propaganda campaign that never ceased, he managed to stoke the fires of resentment against Rousseff, gather allies and muster social movements against her, leading to a grand political shift in Brazil through the impeachment of the sitting president.

  It does not take much imagination to see how a similar computational propaganda campaign could be organised and weaponised by foreign or third-party actors to sow discord within another country or to alter its legislation to suit their own interests. The mere thought of a foreign actor running a computational propaganda effort to help elect the politician of their liking is enough to make us think hard about how such attempts can be detected and halted. Commonly proposed solutions, such as banning politicians from directly running such campaigns or introducing stricter social media user verification through greater collection and examination of personal data, are anaemic at best and downright dangerous at worst: politicians can always hire “neutral” parties to run these campaigns for them, and social media sites have proven inept at securing their users’ personal data (e.g. the Yahoo data breach of 2013 or the LinkedIn data breach of 2021).

   Perhaps the solution lies not in knee-jerk reactions to individual cases, or even to the mere possibility of computational propaganda campaigns, but in a new way for states and state organisations to view the online world and to deal with tech companies. All things considered, one thing is certain: whatever legislation is passed, computational propaganda campaigns exist, have real-world consequences and are here to stay. States should acknowledge this fact and work with tech companies to develop realistic legislation and mechanisms to detect and limit the scale of such campaigns, instead of simply “declaring war” on the online world. The new public square is online, and tech companies and the state have an obligation to keep it clean.

Eleftheriou Konstantinos

Political Scientist, Negotiator

Affiliate of HICD Denmark

References

Arnaudo D., “Computational Propaganda in Brazil: Social Bots during Elections.” Samuel Woolley and Philip N. Howard, Eds. Working Paper 2017.8. Oxford, UK: Project on Computational Propaganda.
