Computational Propaganda: Fighting An Invisible Enemy In The Cyber World

We live in a connected world, part of a cyberspace where our lives are chronicled through our online activities, especially on our mobile phones.

According to a report by market research firm techARC, India had 502.2 million smartphone users as of December 2019. According to the report, this means over 77% of Indians now access wireless broadband through smartphones.

While this means people have the ability to connect and access information, it also means that, being human, they will react to it. In the process they are creating a wealth of digital information through millions of likes, clicks, posts, tweets and shopping habits. With the rapid growth of Big Data technologies, this information is not just stored but also analysed in seconds.

Companies regularly scan the internet to evaluate reactions and trends so they can respond in ways that ensure success. Examples include the launch of a new product, a movie release, or even the public response to an announcement by a major company.

One of the most visible uses of this is probably in politics. Politicians, quick to identify an opportunity, jumped onto social media platforms and started using them to their advantage. The ability to reach millions of people without spending huge amounts of effort, time and money is changing the landscape of politics. It also means that content is important and needs to be right the first time, as it moves fast and can neither be controlled nor modified once released. Hence, politicians use this information to analyse and gauge the mood of the public towards any given situation and then try to deliver the right response.

If this data is used for the people’s benefit, it is to be applauded. If, on the other hand, it is used to manipulate people and further political or ideological self-interests, it turns into computational propaganda.

The term computational propaganda refers to the use of algorithms and automation to spread information, especially of a biased or misleading nature, in order to promote or publicize a particular political cause or point of view.

The 2016 US presidential election and the Brexit referendum are among the best-known examples. Cambridge Analytica (CA) harvested the data of millions of Facebook users, profiled them and ran targeted campaigns to influence them to vote a certain way. Although both CA and Facebook were exposed and faced repercussions, the episode showed the risks of sharing data on social media and led to a tightening of data protection rules.

However, it did not stop the march of computational propaganda. Today it is widely acknowledged that individuals, countries and organizations use Artificial Intelligence (AI) bots to influence and drive public opinion towards their agenda, which could be political, racial or even ideological.

Various groups deploy these AI bots to trawl through the internet and push a particular point of view by spreading disinformation, liking and commenting on posts and tweets in the thousands, if not millions. They help create and market “fake news”.

Social media companies like Twitter, Facebook and Google are accused of not doing enough to fight them. Facebook recently lost around $56 billion in market value as large companies pulled their ads, citing the company’s failure to sufficiently identify and label hate speech and fight disinformation across its platforms. In an attempt to balance free speech against hate speech, Mark Zuckerberg has said Facebook will start labelling posts rather than removing them.

So how do we fight an unknown enemy? For now, tech companies are aiming to fight fire with fire: fight AI with more AI. The idea is to use Artificial Intelligence and Machine Learning to flag and fact-check content before it is allowed to be posted. Essentially, we use good AI to identify the rogue bots spreading misinformation and stop their activities.
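At its simplest, this kind of detection can be framed as a text-classification problem. Below is a minimal, purely illustrative Python sketch (the example posts, labels and model choice are hypothetical, not any platform’s actual system) of training a classifier to flag bot-like content; production systems combine many more signals, such as account age, posting frequency and network behaviour.

```python
# A minimal, illustrative sketch of automated detection of bot-driven content:
# a text classifier trained on posts labelled as coming from coordinated
# campaigns (1) or ordinary users (0). The tiny dataset below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "SHOCKING proof the vote was rigged, share before they delete this!!!",
    "Everyone is saying it, the media is hiding the TRUTH, retweet now!!!",
    "Had a great time at the farmers market this morning.",
    "Interesting new report on broadband usage in India, thoughts welcome.",
]
labels = [1, 1, 0, 0]  # 1 = suspected coordinated/bot content, 0 = organic

# TF-IDF features plus logistic regression: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "They don't want you to see this, share everywhere NOW!!!"
probability = model.predict_proba([new_post])[0][1]
print(f"Probability of bot-driven content: {probability:.2f}")
```

A score above some chosen threshold would then trigger review or labelling rather than automatic removal, which is roughly the balance the platforms say they are trying to strike.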

But who defines “good”? The important point is that Artificial Intelligence is just that: artificial. It reflects the views of the humans who create and manage it. Whether Brexit is good or bad, for example, depends on who you ask.

The more pertinent question is whether we should leave this fight to a handful of tech leaders and their teams. Can we depend on them to have the best interests of humanity at heart rather than their share price and returns? Zuckerberg was forced to act only after strong pushback from Facebook employees and a loss of wealth for his company and himself. More importantly, why should they decide what is good or bad for us?

Computational propaganda is probably the biggest threat we face, across the world, to our democracies and our freedom to choose without being influenced. We face psychological warfare, in the cyber world, from an invisible enemy.

On the one hand, we have a handful of powerful people trying to brainwash us into aligning with their way of thinking. On the other, we depend on a few businessmen to prevent it from happening.

Although it could be one of many strategies, maybe that is not enough. Maybe we should look at a coordinated global effort by non-profit organizations to counter it. Maybe there are other approaches we have yet to devise in this ever-changing world. Some serious thought needs to be given to it.

Whatever the future strategy, the onus is ultimately on us to question everything we read, fact-check it and make unbiased decisions from our own perspective. Only this can prevent us from being manipulated and help preserve our freedom of thought and our way of life.
