Algorithms are amplifying inequalities


By: Calum Paton


The government’s decision to revert to predicted grades following the A-level results fiasco was based upon the significant impact that the grades algorithm had on university places for low-income students, entrenching the disadvantages that many already faced. With algorithms increasingly commonplace in decision making – particularly in the world of political advertising and finance – this issue is only likely to deepen in the coming years.

Before results were even awarded, concerns were raised that the algorithm would disproportionately downgrade the results of students from schools in more deprived areas, because the model used to determine grades factored in each school's average results in previous years.

This meant that students – even excellent students predicted top grades – faced significant downgrading if their school had historically performed below the national average, a problem that disproportionately impacted students from lower-income areas.
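
To see the mechanics, here is a minimal, hypothetical sketch – not Ofqual's actual model, and the weighting is invented – of how blending an individual's prediction with their school's history pulls strong students at historically weaker schools downwards:

```python
# Hypothetical illustration only -- not the real grading model.
# Grades on a numeric scale: A* = 6, A = 5, ..., U = 0.

def moderated_grade(predicted: float, school_hist_avg: float,
                    national_avg: float, weight: float = 0.6) -> float:
    """Blend a student's predicted grade with their school's
    historical performance relative to the national average.
    `weight` (invented here) is how much school history counts."""
    return predicted + weight * (school_hist_avg - national_avg)

NATIONAL_AVG = 4.0  # assume the national mean sits at a B

# The same A* prediction at two different schools:
print(moderated_grade(6.0, school_hist_avg=3.0, national_avg=NATIONAL_AVG))
# 5.4 -- downgraded, because the school historically underperformed
print(moderated_grade(6.0, school_hist_avg=5.0, national_avg=NATIONAL_AVG))
# 6.6 -- untouched (capped at A*), because the school historically excelled
```

Two identical students; only the postcode of their school differs.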

Already, algorithms control far more of our lives and our decisions than we would like to admit. One of the major debates for political and social scientists is Structure vs Agency: whether people are products of the environment in which they live, or fully in control of their actions and their lives. It is becoming increasingly clear that we are beholden not just to the environment we live in, but to algorithms fuelled by metadata.

One of the clearest examples of this is our consumption of news. Instead of selecting which outlet we get our information from, algorithms feed us the information they believe we most want: Google's news service converts your search trends into news articles on your feed, whilst Facebook has become notorious for creating news-based echo chambers, spawning increasingly histrionic conspiracy theories as consumers fall deeper into the rabbit hole.

Having worked on political campaigns, I am acutely aware of the impact that algorithms and data have on the consumption of political campaign ads; former Barack Obama campaign manager, Jim Messina, and Donald Trump’s recently fired campaign manager, Brad Parscale, both pioneered a targeted, data-driven form of political advertising that has since become the norm.


Algorithms have allowed voters to read from their own fact sheets and believe in their own truths, showing thousands of different versions of adverts to thousands of different categories of voters – categories determined by a series of data points such as income, job, or even favourite foods.
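
At its core, that targeting can be as simple as a lookup from voter segments to advert variants. A hypothetical sketch, with invented segments and creatives:

```python
# Hypothetical ad-targeting sketch: segments and adverts are invented.

AD_VARIANTS = {
    ("low_income", "rural"): "jobs_and_industry_advert",
    ("high_income", "urban"): "tax_cuts_advert",
    ("low_income", "urban"): "housing_costs_advert",
}

def choose_advert(income_band: str, area_type: str) -> str:
    """Serve the variant matched to a voter's data points."""
    return AD_VARIANTS.get((income_band, area_type), "generic_advert")

print(choose_advert("low_income", "rural"))   # jobs_and_industry_advert
print(choose_advert("high_income", "urban"))  # tax_cuts_advert
```

Real campaigns run thousands of such variants across thousands of segments, but the logic is the same: the data points pick the "truth" each voter sees.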

In his book, The People vs Tech, Jamie Bartlett creates a vision where smart fridges feed data back to political advertisers, telling campaigns precisely when an individual is hungry and therefore more susceptible to negative views on refugees – the perfect time to show them an advert of Syrian children huddled on a dinghy, framed against the white cliffs of Dover, motivating their support for an ethno-nationalist. The result is an increasingly divided body politic, set against ever-rising inequalities.

Although this is potentially perilous for our democracy, the possibility of algorithms entrenching divides in our society runs much deeper, with particular fears about their use in determining access to credit. Although positive for creating more accurate predictions of an individual's ability to pay off debts – resulting in fewer defaults and less unpaid debt – these algorithms could cut off access to credit for swathes of low-income people, entrenching wealth inequality on an unprecedented scale.
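
A toy illustration of how that exclusion happens – all weights and thresholds below are invented: the model never sees income directly, yet an area-level default rate acts as a proxy for it, so two applicants with identical repayment records get different answers:

```python
# Hypothetical credit-scoring sketch: weights and threshold are invented.

def credit_score(on_time_payment_rate: float,
                 postcode_default_rate: float) -> float:
    """Toy score: reward individual repayment history, penalise
    the default rate of the applicant's postcode (a stand-in for
    the aggregate features real models draw on)."""
    return 700 * on_time_payment_rate - 400 * postcode_default_rate

APPROVAL_THRESHOLD = 550

# Two applicants with identical personal repayment records (95% on time):
for area, default_rate in [("affluent postcode", 0.05),
                           ("deprived postcode", 0.30)]:
    score = credit_score(0.95, default_rate)
    verdict = "approved" if score >= APPROVAL_THRESHOLD else "declined"
    print(f"{area}: score {score:.0f} -> {verdict}")
# affluent postcode: score 645 -> approved
# deprived postcode: score 545 -> declined
```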

Technology such as Artificial Intelligence (AI) that uses algorithms to determine outcomes has been criticised as both heavily flawed and inherently unfair. A significant example comes from the United States, where AI has been used to decide which treatments patients should receive; a 2019 study highlighted that one widely used system had inbuilt racial biases that resulted in African Americans receiving lower-quality care.

In the United States, where health insurance determines treatment, algorithms are increasingly used to decide who should be insured and at what price, based on their likelihood of developing certain medical conditions. Although from an economic point of view it makes sense to charge more to individuals at higher risk of requiring treatment (such as those whose lifestyle is conducive to heart disease), the use of such algorithms strips out the moral judgment that underpins human compassion and bakes in inequalities – low-income individuals, for instance, are at significantly higher risk of heart disease, certain types of cancer, and other conditions.
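
As a hedged illustration – the loadings and risk factors below are invented – risk-rated pricing can be as simple as multiplying a base premium by each applicable risk factor, several of which track income:

```python
# Toy risk-rated premium; all figures are invented for illustration.

BASE_PREMIUM = 200.0  # assumed monthly base price

RISK_LOADINGS = {
    "heart_disease_risk": 1.6,
    "diet_related_conditions": 1.3,
    "deprived_postcode": 1.2,  # area-level health statistics as a proxy
}

def monthly_premium(risk_factors: list[str]) -> float:
    """Multiply the base premium by the loading for each risk factor."""
    premium = BASE_PREMIUM
    for factor in risk_factors:
        premium *= RISK_LOADINGS.get(factor, 1.0)
    return round(premium, 2)

print(monthly_premium([]))  # 200.0 -- low-risk applicant
print(monthly_premium(["heart_disease_risk",
                       "diet_related_conditions",
                       "deprived_postcode"]))  # 499.2 -- nearly 2.5x more
```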

The result is an even deeper poverty trap, where low-income individuals are at higher risk and have to pay more to receive treatment, but are also less able to access credit that might cover such costs – or even to find a job: another study highlighted the impact of algorithmic hiring on individuals with protected characteristics such as race, gender identity and disability.

Algorithms are not inherently evil; they can be extremely positive if the right data is fed into them. The problem is that the right data is not necessarily what they get.

They are perhaps best viewed as megaphones – whatever you feed into the microphone will be amplified by the speaker. Feed in data that is fraught with social inequalities and you will ingrain those inequalities in whatever the algorithm spits out the other end.

When you feed in a postcode lottery of educational attainment, the algorithm will spit out an inherently unequal bell curve of results – one that determines your outcome not by ability, but by the data points fed into the mathematical sequence, stamped with the added data-pizzazz seal of legitimacy.
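
A hypothetical sketch of that distribution-fitting, with invented quotas: rank the cohort, slot it into the school's historical grade shares, and the school's past – not the student – sets the ceiling:

```python
# Hypothetical distribution-fitting sketch; quotas are invented.

def fit_to_distribution(ranked_students: list[str],
                        historical_shares: dict[str, float]) -> dict[str, str]:
    """Assign grades by quota: `historical_shares` maps each grade to
    the fraction of the cohort that historically achieved it."""
    grades, start = {}, 0
    cohort = len(ranked_students)
    for grade, share in historical_shares.items():
        count = round(share * cohort)
        for student in ranked_students[start:start + count]:
            grades[student] = grade
        start += count
    for student in ranked_students[start:]:  # rounding remainder
        grades[student] = list(historical_shares)[-1]  # lowest listed grade
    return grades

# A school that never awarded an A* cannot award one now, however
# strong its best student is this year:
print(fit_to_distribution(
    ["top_student", "mid_student", "weak_student"],
    {"A": 0.33, "B": 0.33, "C": 0.34},
))
# {'top_student': 'A', 'mid_student': 'B', 'weak_student': 'C'}
```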

These algorithms are not, in themselves, the problem. Their use in our lives is unavoidable, and the slide towards diminished agency is all but inevitable; but failing to recognise how they amplify the inequalities already present in society will only produce a more deeply divided and unequal one.

We must address the inequality within society and be critical of the metadata fed into these mathematical sequences – or accept that the exam results fiasco is just the first high-profile example of our lives being beholden to data-driven algorithms and the big-data legitimacy trap.

______________________________________________________________________________

Calum Paton is a History and Politics graduate from the University of Warwick. His writing predominantly focuses on American and British politics. Twitter: @Paton_Calum
