TECH - Driven to Divisiveness by Algorithms

Kurtis Bell

The word ‘doomscrolling’ has entered the public lexicon over the past year or so. It describes the habit of endlessly scrolling on your phone through the stream of awfulness that has taken over the internet, and it’s a habit that has only grown as we’ve coped with a global pandemic.

[Photo by Maxim Ilyahov on Unsplash]

But not everyone has been exposed to the same tweets, status updates, or Insta snaps, and the process of deciding who sees what has become the next battleground between social media giants and governments. 

You may not have considered how Twitter, Facebook, Instagram and all the rest decide what posts they’re going to present you with when you open their app or website. We’re well past the point where an army of employees could even hope to make that decision, so instead it’s a job for thousands of computers using algorithms. When you see the word algorithm, just think 'a list of instructions'. 
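To make that ‘list of instructions’ idea concrete, here is a toy sketch in Python. It is purely illustrative: the scoring rule and the field names are made up for this article and bear no resemblance to any real platform’s code.

```python
# A toy 'list of instructions' for choosing which posts to show.
# Purely illustrative -- the scoring rule and fields are invented,
# not how any real platform actually works.

def choose_posts(posts, how_many=10):
    scored = []
    for post in posts:                                   # 1. look at every candidate post
        score = post["likes"] + 2 * post["comments"]     # 2. give it an engagement score
        scored.append((score, post))
    scored.sort(key=lambda pair: pair[0], reverse=True)  # 3. put the highest scores first
    return [post for _, post in scored[:how_many]]       # 4. keep only the top few

feed = choose_posts([
    {"title": "Nice holiday photos", "likes": 120, "comments": 4},
    {"title": "Furious political row", "likes": 80, "comments": 300},
])
print([post["title"] for post in feed])
```

Each numbered comment is one instruction; a real feed-ranking system simply has vastly more of them, tuned by far more data.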

Unfortunately for the Prime Minister, algorithms are not ‘mutant’; they simply do what they’re told to do, as last summer’s exam results fiasco showed.

Every company that uses some form of automated system to decide what content goes in front of its users has an algorithm making those decisions. In some cases these algorithms have become so advanced that they effectively train themselves on what content to promote, guided by a human telling them "Yes, more like this" or "No, less of that."

Engineers can tweak and adjust the algorithm to determine what material rises to the top of a user’s feed, with different users getting different recommendations depending on the profile that has been built up about them over time.
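To illustrate the per-user profile and the "more like this" feedback loop described above, here is a minimal, hypothetical sketch. The topics, weights and update rule are invented for the example; real recommendation systems are vastly more complex.

```python
# A minimal sketch of personalised ranking with a feedback loop.
# Every value here is made up purely for illustration.

from collections import defaultdict

profiles = defaultdict(lambda: defaultdict(float))  # user -> topic -> interest weight

def record_feedback(user, post, engaged):
    """'Yes, more like this' nudges the topic weight up; scrolling past nudges it down."""
    profiles[user][post["topic"]] += 0.1 if engaged else -0.05

def rank_feed(user, posts):
    """Order posts by how well they match the profile built up about this user."""
    return sorted(posts, key=lambda p: profiles[user][p["topic"]], reverse=True)

posts = [{"topic": "sport"}, {"topic": "politics"}, {"topic": "music"}]
record_feedback("alice", {"topic": "politics"}, engaged=True)   # Alice clicked a political post
record_feedback("alice", {"topic": "music"}, engaged=False)     # ...and scrolled past a music one
print(rank_feed("alice", posts))  # politics now rises to the top of Alice's feed
```

The key point is the loop: every interaction updates the profile, and the updated profile decides what gets shown next, so whatever holds your attention gets served back to you more and more often.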

The problems start when developers make decisions about what those algorithms will promote. Those developers answer to their managers, who answer to their managers, and so on up to the CEO, whose own beliefs, and obligation to shareholders, ultimately shape those choices.

In the case of Facebook, the company explicitly chose to continue using the algorithm settings that promoted extremism to its users. An internal presentation at the company said, "Our algorithms exploit the human brain’s attraction to divisiveness." All of this in the interest of more user time spent on the platform and more advertising revenue for shareholders.

If you’ve ever been scrolling through your news feed or timeline and seen a story or post that made you feel a strong emotion, chances are the algorithm put it in front of you on purpose. It turns out that divisive content generates the most clicks.

[Photo by Justin Taylor on Unsplash]

The events of January 2021 at the US Capitol in Washington DC showed us what a diet of extreme content can do to a person when it spills over from the digital environment into the real world. They also seem to have finally caught the attention of lawmakers, especially in the US, who are now calling on Facebook in particular to take responsibility for the information the service delivers to its users.

It’s quite similar to the current pandemic, except in this case it’s extreme emotions that are the most contagious.

So, what can we do about this? 

As students, we’re in the prime demographic for social media, exposed to what the platforms want us to see at the very time in our lives when we’re forming our opinions and beliefs about the world.

Tim Harford, in an interview about his new book, How to Make the World Add Up, offered a suggestion. When you read a tweet or a story, or look at a picture, try to sit with whatever emotion or feeling it stirs up for a second. Try to recognise, for example, "Ok, this story about Brexit is making me feel anxious/excited/disgusted."

Sit with that feeling for just a moment before you reach for the share button, and hopefully we can stop the epidemic of misinformation in its tracks. 



Kurtis Bell is an Aerospace Engineering student at Queen’s.