The Social Media Echo Chamber: Man or Machine?

Published 29 Apr 2021

By Aileen Wang



In the days of snail mail, the idea of knowing what was happening on the other side of the world in anything close to real time was a pipe dream, or the stuff of adventure novels. Ignorance breeds apathy, and thus, for many centuries, few people in Japan knew or cared about the Lancasters and Plantagenets battling it out in faraway England, just as the English neither knew nor cared about Oda Nobunaga’s territorial conquests. This spatial bubble might as well have been made of granite – for a long time, people simply didn’t know enough to have an opinion on anything beyond their immediate neighbours.

With the advent of the Internet, information became available in never-before-known volumes, at never-before-known speeds. Suddenly, it was possible to watch a football match in America from a sofa in an Australian lounge, hear the pronouncements of important figures in cities miles away, and, mind-bogglingly, rewatch from the safety of Earth the first footprint a human left on the moon. Time and space had lost their power, cudgelled by the Internet into docile submission. Information was everywhere, for everyone.

Optimistic thinkers of the previous decade eagerly anticipated a ‘democratization of knowledge’. Surely the new volume of information would lead to a generation that was more informed, more critical, more open to compromise? If nothing else, the sheer amount now accessible implies that people will know more, from more perspectives, and possess a more holistic view of global society than the inhabitants of the isolated islands that constituted the world of the past.

In recent years, however, the ‘social media echo chamber’ has appeared more and more often as a term in discourse. The Internet, argue proponents of this theory, has failed to create an equal platform for sharing knowledge. Rather than throwing open boundaries, we’ve ended up in equally insular communities, simply moved online. Trapped within the echoes of our own opinions, under the dominion of social media algorithms that only reflect what we want to see back at us, we have no more access to the broader world than a medieval peasant who spends his whole life in one valley.

While the last comparison is an obvious exaggeration, the theory itself is compelling. Algorithms and recommendation systems do in fact work by promoting what our friends like and what we like, and therefore very rarely offer opinions or topics different from those we already hold. Driven by these algorithms that run in the back end of the vast majority of our Internet experience, do we not simply end up shouting and echoing back ever more extreme versions of the things we already believe, while increasingly demonizing the things we don’t? How deep are we already in a technological dystopia, almost comic in its endless arguments and clamorous calls for attention?

What is an echo chamber?

Before we can look into the issue properly, we should define exactly what the issue is. Here, there’s a subtle distinction to make. Is it that we don’t have access to information that differs from what we already know? Or is it that this information has been discredited and actively excluded from the bubble we inhabit? The first is an epistemic bubble; the second is a proper echo chamber. The inhabitant of the epistemic bubble might be surprised to find that other information and viewpoints exist outside of their own; the inhabitant of the echo chamber knows well that other information and viewpoints exist, but actively distrusts and refuses to believe them.

Usually, the term ‘echo chamber’ covers both of these cases, as well as what is more common still: a mix of the two. However, I thought it appropriate to separate them. After all, the algorithm is more culpable in the first case, whereas in the second the audience is an active agent as well. When looking at the role AI and machine-learning algorithms have to play, it is an important distinction to make.

The role of algorithms

Major social media platforms such as Facebook, Twitter and Instagram all work such that the things people in your immediate circle share or like are shown to you as well. Inevitably, real-life congregations of opinion end up repeated online. The likelihood that someone in your immediate circle of friends holds a drastically different, polar-opposite opinion to yours is almost nil – so from this source comes very little disagreement.
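To make that mechanism concrete, here is a minimal sketch of friend-driven ranking – a toy model with invented data, not any platform’s actual algorithm – showing how posts your circle engages with crowd out everything else:

```python
from collections import Counter

# Toy model: rank posts purely by how many of *your* friends engaged
# with them. Real feeds blend many more signals, but "people like you
# liked this" is the core one.

def rank_feed(posts, friends, engagements):
    """posts: list of post ids; engagements: {user: set of post ids}."""
    scores = Counter()
    for friend in friends:
        for post in engagements.get(friend, set()):
            scores[post] += 1  # one point per friend engagement
    # Posts nobody in your circle touched score zero and sink.
    return sorted(posts, key=lambda p: scores[p], reverse=True)

friends = {"amy", "ben"}
engagements = {"amy": {"p1", "p3"}, "ben": {"p1"}, "zoe": {"p2"}}
print(rank_feed(["p1", "p2", "p3"], friends, engagements))
# ['p1', 'p3', 'p2'] -- the out-of-circle post p2 ranks last
```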

Moreover, as Eli Pariser details in his book ‘The Filter Bubble’, major search engines, social media and streaming platforms all tailor their results and recommendations to the data we have previously generated. From the innocuous list of recommendations on Netflix to the search results we see after Googling something, everything is personalized based on what has been mined about our interests, tastes, opinions and needs.

In this way, algorithms drive and shape our Internet experience with an invisible hand, exploiting our confirmation bias with news and information that corresponds ever more closely to our own tastes. As our Internet experience continues, the chance of encountering something that disagrees with our views shrinks and shrinks, while our opinions run unbounded and uncontradicted, confirmed on all sides by what seems to be the entire world.
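That feedback loop can itself be sketched in a few lines. The following toy simulation – with made-up probabilities, not measurements of any real system – recommends topics in proportion to past clicks, while the user clicks mostly on what they already agree with; the click history quickly becomes dominated by a single topic:

```python
import random

random.seed(0)
topics = ["politics_left", "politics_right", "sports", "science"]
clicks = {t: 1 for t in topics}    # start from a uniform history
preference = "politics_left"       # what this user tends to click on

for _ in range(500):
    # The recommender serves topics in proportion to past clicks...
    shown = random.choices(topics, weights=[clicks[t] for t in topics])[0]
    # ...and the user reliably clicks preferred content, rarely anything else.
    if shown == preference or random.random() < 0.05:
        clicks[shown] += 1

total = sum(clicks.values())
for t in topics:
    print(f"{t:15} {clicks[t] / total:.0%} of history")
# politics_left ends up with the overwhelming share; the rest barely grow.
```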

Thus far, algorithms seem to have created an epistemic bubble. At a glance, we neither know of, nor can stumble upon by chance, information that contradicts our views. The answer, then, seems very simple. Open all the doors, let some light in, and surely, with the introduction of new viewpoints, the bubble will naturally pop.

But when has anything ever been simple?

The role of the human – ourselves behind the screen

In a complicated world such as our own, issues very rarely go uncontested. Increasingly, studies have found that algorithms in fact play only a limited role in the chambers we construct around ourselves. If we blinker ourselves to only Facebook, only Twitter, only certain channels of social media, we can construct the false hypothesis that people are polarised simply because they live in epistemic bubbles, sadly ignorant of the outside world.

However, people are complex creatures. Very rarely does someone have only one channel of media – and this has a negating effect on the seemingly all-powerful algorithm. Someone who uses Facebook, for example, may also have a Twitter account and an Instagram account, check the local news on their phone, talk to friends, and have something catch their eye on a television in a restaurant. In short, people consume a lot of media, and the sheer volume confounds any straightforward relationship between algorithms and echo chambers.

It’s not that people don’t know about, aren’t aware of, or never interact with people and information that contradict their own views. Very rarely will people actually be caught in a situation where they have no knowledge of arguments opposing their own, simply because of the diversity of media consumption that characterizes the majority of the current, technologically savvy generation. In fact, people are all too often exposed to views that contradict their own.

Rather than inhabiting the comparatively fragile epistemic bubble, people distrust and actively undermine these views, and refuse to engage with them, rather than being ignorant of them. Dr Grant Blank goes so far as to conclude, in a paper published with Elizabeth Dubois in the journal Information, Communication & Society, that ‘social media and [the] Internet [are] not [a] cause of political polarization.’ The same phenomenon that plays out in physical communities repeats itself online: people interpret information in a way that confirms what they already know, and are generally more receptive to what doesn’t contradict their pre-existing beliefs. The first is a psychological mechanism called ‘biased assimilation’; the second is ‘selective retention’. In other words, what drives echo chambers isn’t the machine: it’s the human.
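Biased assimilation, at least, is simple enough to caricature in code. In the toy model below – the discount weight is an illustrative assumption, not an empirical value – an agent is shown a perfectly balanced stream of evidence but discounts whatever disagrees with its current belief, and so drifts to an extreme anyway:

```python
# Toy model of biased assimilation: agreeable evidence is weighted
# fully, disagreeable evidence is discounted. The 0.3 discount is an
# illustrative assumption, not a measured quantity.

def update(belief, evidence, rate=0.05):
    agrees = (evidence > 0) == (belief > 0)
    weight = 1.0 if agrees else 0.3
    return max(-1.0, min(1.0, belief + rate * weight * evidence))

belief = 0.1                           # a slight initial lean
for evidence in [+1.0, -1.0] * 100:    # perfectly balanced input
    belief = update(belief, evidence)

print(f"final belief: {belief:+.2f}")  # ~ +1.00: extreme, despite balance
```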

This shouldn’t come as a surprise if we go back to our definitions. After all, nowhere is it specified that echo chambers necessarily exist online – simply that the online environment has become a breeding ground for them. Echo chambers exist just as easily offline. Any community runs the risk of becoming one. At its heart, this is a human problem, driven by human tendencies.

Conclusion

Of course, let’s not jump the gun. Algorithms aren’t off the hook yet. Filtering and personalization do their part to make it that much easier to trap oneself in an echo chamber. Filter bubbles created by algorithms make it that much harder to find the information needed for an objective viewpoint, or at least to properly inform oneself on all sides of an issue. Certainly, algorithms have complicated an already unappetizing task: checking to see if you’ve got it wrong. However, we can’t say that we’re completely innocent, either.

Research in this area is complicated by the limits of what empirical observation can capture, as well as by the complex ways people interact with media of all types, so no definitive answer can currently be given. Still, I would hazard that the social media echo chamber phenomenon lies somewhere between man and machine. Social media algorithms might strengthen the walls, but the echo chamber itself is built from a very human starting point.

So where does this leave us? First of all, it means we can’t rest easy in either dismissal or despair. The issue is real, and it is something we can act upon. While it is tempting to give up and blame the algorithmic master at work behind our media, or to dismiss the issue as overblown hot air, both would be an easy way out. In a world moving into an increasingly technology-filled era, understanding and navigating the flood of online information is not just a useful skill but a survival trait, lest we unwittingly trap ourselves in a community of mirrors. Checking for flaws in something we are naturally inclined to believe is never a good feeling – but it is all the more necessary for that. Echo chambers have a human hand in their construction: those same hands can pull the walls apart.


Tags: Opinion, Applications of Data Science