At Face Value: How AI Technology Is Worsening Systemic Oppression, Yet May Also Offer a Solution

Image: Rich Smith

How AI amplifies our own racial bias in the news and social media

AI

Both overtly and implicitly, algorithms are controlling our lives. From location-based dinner suggestions on our smartphones to ‘Discover Weekly’ on Spotify, recommender systems, and the algorithms that drive them, permeate nearly every online network, tracking and influencing our online behaviour. The omnipresence of algorithms in our daily lives has brought about an increased awareness of, and scepticism about, where these suggestions come from, forcing us to ask: ‘how does Spotify know me so well?’ and ‘is my phone listening in on my conversations?’.
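To make this concrete, the sketch below shows one simple way a recommender can work: a content-based approach that scores new items by their similarity to what a user has already clicked. It is a minimal illustration with invented feature vectors, not the actual system behind any of the services mentioned above.

```python
# Minimal, illustrative sketch of a content-based recommender:
# score candidate items by cosine similarity to a profile built
# from the user's past clicks. All vectors here are hypothetical.

import numpy as np

def build_user_profile(clicked_item_vectors: np.ndarray) -> np.ndarray:
    """Average the feature vectors of items the user engaged with."""
    return clicked_item_vectors.mean(axis=0)

def recommend(user_profile: np.ndarray, candidates: np.ndarray, k: int = 2):
    """Return indices of the k candidates most similar to the profile."""
    norms = np.linalg.norm(candidates, axis=1) * np.linalg.norm(user_profile)
    scores = candidates @ user_profile / np.where(norms == 0, 1, norms)
    return np.argsort(scores)[::-1][:k]

# Hypothetical topic-weight vectors for past clicks and new candidates.
past_clicks = np.array([[0.9, 0.1, 0.0],
                        [0.8, 0.2, 0.1]])
candidates = np.array([[0.9, 0.0, 0.1],    # close to past behaviour
                       [0.1, 0.9, 0.2],    # different topic
                       [0.0, 0.1, 0.9]])   # very different topic

profile = build_user_profile(past_clicks)
print(recommend(profile, candidates))  # items closest to what was clicked before
```

Notice the design choice: the system is rewarded for showing more of what the user already engaged with, a point that becomes important later in this piece.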

Although such questions can border on conspiracy theory, the people asking them have a point. Algorithms continually absorb our online data and record our every move: which sites we search for, which news outlets we rely on and in what format, how long we spend reading a page, and so on. From this seemingly meaningless data, algorithms can estimate with remarkable accuracy our age, gender, political viewpoint, education level, location (if your privacy settings allow it) and even our race. The result is a detailed profile of you as an individual, which allows advertising and content to be tailored personally to you.
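As a toy illustration of how such inferences can be drawn, the sketch below trains a classifier to guess a demographic attribute from browsing behaviour. Every feature, figure and label here is invented for demonstration; real profiling systems use far richer signals at vastly greater scale.

```python
# Toy illustration only: inferring a demographic attribute (here, an
# age bracket) from behavioural signals. The features and data are
# fabricated for demonstration purposes.

from sklearn.linear_model import LogisticRegression

# Hypothetical features per user:
# [avg seconds spent per page, share of news visits that are tabloid,
#  site visits per day]
X = [
    [45.0, 0.8, 30],
    [120.0, 0.2, 8],
    [50.0, 0.7, 25],
    [110.0, 0.3, 10],
]
y = ["under_30", "over_30", "under_30", "over_30"]  # known labels for training

model = LogisticRegression().fit(X, y)

# A new visitor's browsing pattern alone is enough to produce a guess.
print(model.predict([[55.0, 0.75, 28]]))  # likely "under_30"
```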

News and Race

The recent upsurge in support for the longstanding, worldwide Black Lives Matter movement, sparked by the brutal death of George Floyd in police custody, has raised further awareness of the racial biases prevalent in our society. More than ever, people are becoming conscious not only of how ethnic minorities are disproportionately represented at an institutional level, but of how often they are negatively portrayed by the media.

Although it is widely known that social media apps use algorithms to tailor content to their users, it is often overlooked that authoritative sources of information, such as digital news outlets, also dictate which stories we are presented with. Recently, older news articles have resurfaced that reveal the racial stereotypes still dominant in our society. The contrasting media treatment of Kate Middleton and Meghan Markle during pregnancy emphasises this point. The Daily Mail depicted Kate Middleton ‘tenderly cradling’ her bump as the ideal British woman, whereas images of Meghan Markle in a similar pose were spun to ask whether it was ‘Pride, Vanity, Acting – or a new age bonding technique?’.

I am not claiming that the journalist who wrote the Meghan article is actively racist, but the contrast makes one thing clear: we cannot take the content we are presented with online at face value, especially in the news. We must constantly be aware of the implicit racial bias within us, even if we are outwardly anti-racist.

More recently, racial bias in the news has been exposed through coverage of the Black Lives Matter protests across the UK. Despite thousands flocking to beaches nationwide over the past few months, news outlets have been quick to fear-monger, claiming that any potential increase in coronavirus cases will be undeniably linked to the protests.

Online media

Online media is intrinsically linked with ‘confirmation bias’, a cognitive bias common to all humans. This subconscious tendency leads us to seek out evidence which confirms what we already believe, to notice facts and statements that support our predetermined ideologies, and to disregard any evidence that points to a different view.

In this way, we are continually choosing what we see online, which in turn shapes our algorithmic recommendations. As Nick Diakopoulos claims about algorithms in journalism, “[algorithms] maximize for clicks by excluding other kinds of content, helping reinforce an existing worldview by diminishing a reader’s chance of encountering content outside of what they already know and believe.” This ‘filter bubble’ (a term coined by Eli Pariser in his 2011 book of the same name) generates a dangerous feedback loop which feeds your inherent bias, good or bad. Shockingly, this means that everyone receives different news, depending on the data algorithms have collected about them, leading to slanted versions of events and the reinforcement of racial biases, without the reader even being aware of it.
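The feedback loop is easy to see in a simulation. The sketch below is an illustrative model under simple assumptions, not any real platform’s ranking algorithm: a feed shows stories in proportion to past clicks, while a reader with confirmation bias mostly clicks stories matching their existing view, so over time the feed narrows around that view.

```python
# Minimal simulation of the 'filter bubble' feedback loop described
# above. An illustrative model only: the feed ranks two viewpoints by
# past engagement, and the reader clicks confirming stories far more
# often, so the feed drifts towards showing only one viewpoint.

import random

random.seed(0)
topics = ["view_A", "view_B"]
click_counts = {"view_A": 1, "view_B": 1}   # start roughly balanced
reader_preference = "view_A"                # the reader's prior belief

for step in range(200):
    # The algorithm shows a story in proportion to past engagement.
    shown = random.choices(topics, weights=[click_counts[t] for t in topics])[0]
    # Confirmation bias: the reader clicks confirming stories 90% of
    # the time and challenging ones only 10% of the time.
    p_click = 0.9 if shown == reader_preference else 0.1
    if random.random() < p_click:
        click_counts[shown] += 1

total = sum(click_counts.values())
for t in topics:
    print(f"{t}: share of engagement ≈ {click_counts[t] / total:.0%}")
```

Running this, the reader’s preferred viewpoint quickly dominates the engagement counts, and the feed that ranks by those counts ends up showing little else; neither the algorithm nor the reader ever made a deliberate choice to exclude the other view.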