Are your devices just telling you what you want to hear?
Social media is oddly insidious in its interplay with everyday life. It is seldom possible to pin down how much of our time it has occupied, or what we have gained in that time, but somehow it trickles into the gaps between one activity and the next until it is filling up lunch breaks, the endless slog through supermarket queues and even those brief moments spent waiting for the kettle to boil.
South Africa has long lagged behind other countries in its use of social media and new technologies. Both individuals and businesses are slow to invest in such platforms as a means of marketing and campaigning, and many still view Facebook, Instagram and the like as little more than a way to connect with friends. The ongoing Covid-19 pandemic, and the limitations it has placed on ordinary forms of engagement, may be the push South Africans need to start expanding their social media landscape. Already, many businesses are fast-tracking their shift onto virtual platforms.
It seems obvious that the ever-growing role social media plays in our lives comes with a need to understand how it influences our beliefs and behaviours. Why, then, has our progress towards this understanding been so slow? There are many obstacles to consider. The unprecedented nature of these platforms means that their development is fraught with unforeseen circumstances and lax regulations. Companies such as Twitter and ByteDance are notoriously tight-lipped when it comes to the algorithms underlying their systems. Those systems in turn are prone to manipulation by self-interested entities.
Gradually, however, the social impacts of internet communities are becoming apparent. Social media platforms have become hotbeds of misinformation, conspiracy theories and hyperpartisan views, and people are being pushed to examine what makes them vulnerable to such content.
Biases in Social Media
Fake news pays – whether you are a politician aiming to spread party propaganda, an online fraudster seeking to make an easy buck or simply a social media influencer looking to maximise followers, there are real benefits to producing false material. The question is not how low-credibility content comes about, but rather why it spreads so quickly and easily through the online world.
According to an article published in The Conversation, ‘Misinformation and biases infect social media, both intentionally and accidentally’[i], the biases inherent in people and the algorithms underlying social media platforms are responsible for the high level of intentional and accidental misinformation that occurs online. These can be broken down into three types of bias: bias in the brain, in society and in the machine.
Bias in the brain refers to the techniques our brains use to process the information we encounter in our everyday lives. With a finite capacity for processing input, our brains often resort to learned shortcuts to manage incoming stimuli and avoid information overloads.
Of course, there is no greater source of information overload than social media platforms. Instagram is a prime example – as of 2020, users are said to spend an average of 28 minutes on the platform per day[ii]. According to a PR firm that specialises in Instagram marketing, most users will spend less than 10 seconds on each post[iii]. Even if the time spent on each post were exactly 10 seconds, that would still mean scrolling through 168 posts in a 28-minute period. Most of this content will be forgotten almost immediately.
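The arithmetic behind that estimate is straightforward to check (the 28-minute and 10-second figures come from the sources cited above):

```python
# Back-of-envelope check of the post-count estimate.
daily_minutes = 28       # average daily time on Instagram (2020 figure)
seconds_per_post = 10    # upper-bound dwell time per post

posts_per_day = (daily_minutes * 60) // seconds_per_post
print(posts_per_day)  # → 168
```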
So what are we likely to notice and remember when engaging with social media?
People are far more likely to engage with and share content that is emotive, such as human-interest headlines or videos of morally abhorrent behaviour. Cognitive shortcuts bias us in favour of emotive material regardless of its accuracy. Identifying this bias, and overcoming it by weighing the credibility of a source before sharing its content, is a step towards countering misinformation in social media.
Bias in society has been a driving force of human behaviour since long before the rise of online networks. We are inclined to construct our social circle from those who are like us in their views and beliefs, and we are more likely to believe and propagate information from people in that circle. This pattern holds in the interactions of people on social media.
The danger in an online context is that these groups of like-minded people can become isolated from broader society, unintentionally sealing themselves in ‘echo chambers’ of appealing content through their selection of friends and influencers. These isolated spaces are vulnerable to manipulation and disinformation, as they are often almost completely cut off from those who would contradict or fact-check their content.
Finally, we have bias in the machine.
It is not uncommon to search a location or product online only to have oddly relevant adverts crop up on Facebook and Instagram soon afterwards. Similarly, searching for specific types of content on social media – a political viewpoint or fashion icon, perhaps – will lead to similar posts and posters being recommended to you in future. At times, it can seem almost as if the internet is reading your mind when it comes to your needs and interests.
The reality is hardly less unsettling. Social media platforms are constructed around complex algorithms intended to learn about and cater to each individual user. Every search, every like, every post is fed into a continuous calculation aimed at personalisation. The more each person interacts with a platform, the more effectively that platform can link them to the content they will find most relevant and engaging.
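The feedback loop described above can be sketched in a few lines. This toy model is purely illustrative (real platform algorithms are far more complex and not public): posts are ranked by how often the user has engaged with their topic, and every click feeds back into the next ranking.

```python
from collections import Counter

# Toy model of an engagement-driven feed (illustrative only).
class ToyFeed:
    def __init__(self, items):
        self.items = items           # list of (post_id, topic) pairs
        self.engagement = Counter()  # user's clicks per topic

    def rank(self):
        # Posts on topics the user has engaged with most come first.
        return sorted(self.items,
                      key=lambda post: self.engagement[post[1]],
                      reverse=True)

    def click(self, post):
        # Every interaction tilts future rankings toward that topic.
        self.engagement[post[1]] += 1

feed = ToyFeed([("p1", "politics"), ("p2", "sport"), ("p3", "politics")])
feed.click(("p1", "politics"))
print([pid for pid, _ in feed.rank()])  # → ['p1', 'p3', 'p2']
```

A single click is enough to push both politics posts above the sport post, and each further click widens the gap – the mechanism behind the narrowing of exposure the next paragraph describes.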
The inevitable result of this ‘personalisation’ is that users are increasingly exposed to content that reinforces their social and cognitive biases. Not only do their sources of information become less diverse, but their sense of what is popular and pervasive in society becomes warped. Social media’s tendency to promote ‘trending’ content, regardless of its accuracy, feeds into the misinformation problem.
Considering the enormous role social media plays in our lives, there is disturbingly little insight into how these algorithms work. Their operation and output are difficult to monitor, even for those who run the platforms. In his New York Times article, ‘How to Monitor Fake News’[iv], Tom Wheeler suggests that the solution to this problem is an ‘opening up’ of the results of algorithms. Social media entities should have a mandate to use open application programming interfaces that allow third parties to monitor algorithm effects. Those third parties would build software to evaluate the outputs of the algorithms without infringing on the intellectual property of social media giants or violating the privacy of users.
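What such third-party monitoring could look like in practice might be as simple as computing summary statistics over samples of algorithm output. The sketch below is hypothetical – the data format is invented, and no platform currently exposes such an API – but it shows the kind of audit Wheeler's proposal would enable: measuring what share of recommended posts come from low-credibility sources, without touching individual users' data.

```python
# Hypothetical audit sketch: given a sample of algorithm outputs
# (recommended posts tagged with their sources), measure the share
# coming from low-credibility sources and the source diversity.
# The sample data here is invented for illustration.
recommended = [
    {"post": "a", "source": "siteA", "low_credibility": True},
    {"post": "b", "source": "siteB", "low_credibility": False},
    {"post": "c", "source": "siteA", "low_credibility": True},
    {"post": "d", "source": "siteC", "low_credibility": False},
]

low_cred_share = sum(p["low_credibility"] for p in recommended) / len(recommended)
distinct_sources = len({p["source"] for p in recommended})

print(f"low-credibility share: {low_cred_share:.0%}")  # → 50%
print(f"distinct sources: {distinct_sources}")         # → 3
```

Aggregate measures like these could be published without revealing either the platform's ranking code or any individual user's behaviour.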
This type of ‘watchdog’ activity is already established and effective in the media industry, and could go a long way towards creating a safer social media space. The time for merely acknowledging the dangers in our online world – both those that are overt and those that play out subtly in the subconscious of users – is long past. Now is the time for action. Social media companies need to move to identify and counteract biases in their systems. More importantly, as individual users, we must cultivate awareness of how our biases affect our engagement on social media platforms, and develop strategies to distinguish valuable content from that which simply appeals to our preferred world view.
[i] Ciampaglia, G. L. and Menczer, F. (2018). Misinformation and biases infect social media, both intentionally and accidentally. The Conversation [online], 20 June. Available from: https://theconversation.com/misinformation-and-biases-infect-social-media-both-intentionally-and-accidentally-97148 [accessed 12 November 2020]
[ii] Newberry, C. (2019). 37 Instagram Stats that Matter to Marketers in 2020. Hootsuite [online], 22 October. Available from: https://blog.hootsuite.com/instagram-statistics/ [accessed 18 November 2020]
[iii] Forbes Agency Council. (2016). Eight Content Marketing Trends You Can Use To Help Your Clients. Forbes [online], 16 September. Available from: https://www.forbes.com/sites/forbesagencycouncil/2016/09/16/eight-content-marketing-trends-you-can-use-to-help-your-clients/?sh=2f00ff097ace [accessed 18 November 2020]
[iv] Wheeler, T. (2018). How to Monitor Fake News. The New York Times [online], 20 February. Available from: https://www.nytimes.com/2018/02/20/opinion/monitor-fake-news.html [accessed 22 November 2020]