
The Digitalisation of Public Services and Widening Inequality

Bethany Waterhouse-Bradley  |  Tue Jan 12 2021

As the public sector goes through the process of digitalisation – finding new, digital ways to do the jobs previously done by humans alone – more and more of our day-to-day lives are lived out online.

Sure, it might be convenient to shop for groceries online, but there is now an expectation that private and sensitive matters be handled through online, sometimes automated, systems. Things like banking, applying for jobs, seeking support from health professionals, and registering for benefits and social security are taking place online. The information collected online is also used to make important decisions that affect our lives. This means that a computer programme could be responsible for deciding whether you are eligible for benefits, get invited for a job interview, or get a line of credit.
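To make that concrete, here is a deliberately simplified, invented sketch of what an automated decision like this can look like. The rules, thresholds and function name below are hypothetical, not taken from any real benefits system.

```python
# A hypothetical, heavily simplified automated eligibility check.
# Every rule and threshold here is invented for illustration only;
# real systems are far more complex.

def eligible_for_benefit(income: float, savings: float, hours_worked: int) -> bool:
    """Return True only if the claimant passes every hard-coded rule."""
    if income > 16000:      # illustrative annual income cap
        return False
    if savings > 6000:      # illustrative savings cap
        return False
    if hours_worked > 16:   # illustrative weekly hours cap
        return False
    return True

# A claimant just over one threshold is rejected automatically,
# with no human judgement about their actual circumstances.
print(eligible_for_benefit(income=16001, savings=500, hours_worked=10))  # False
```

The point is not the particular rules but the shape of the system: a fixed set of conditions is applied to everyone, and there is no one on the other side to notice when a rule fits a person's circumstances badly.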

There are a lot of things to consider about automation and public services. People are increasingly being asked to hand over data, self-monitor or submit to surveillance by the state in order to receive benefits, security or services. There are many examples of how this is more likely to have serious consequences for working-class or minority groups. In the US, the digitisation and automation of welfare systems have led to significant drops in the number of eligible people actually receiving benefits, and tools used to decide prison sentences have been shown to be more likely to give harsh sentences to people of colour. Facial recognition technology has repeatedly been shown to be unreliable, but is still being used by police and security services in the UK. Even when users have been able to appeal computer decisions, the lengthy process has led to further financial hardship, debt and poor health.

The digitalisation of public services and inequality

Data analysis is being used to predict things like potential offending behaviours and to decide whether someone should go to prison. Predictive tools make decisions about whether or not people can get much-needed home improvement loans, which children might be referred to social services, and even medical diagnoses. These are huge decisions, but the tools being used to make them are often not good at making predictions that are accurate or fair for people from ethnic minority groups, those who identify as LGBTQ+, women, or anyone who falls outside a very narrow definition of ‘average’. Because the decisions are based on historical information (decisions made by humans), the chances of neutral or unbiased outcomes are low. Instead, they are likely to reinforce existing inequalities – repeating at scale the discrimination and biases in society.
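As a rough illustration of that last point, here is a minimal, invented sketch using synthetic data (the groups and numbers are made up) showing how even the simplest possible ‘model’ trained on past human decisions reproduces the bias in them:

```python
# Synthetic, invented data: past human decisions in which group A was
# approved far more often than group B in identical circumstances.
from collections import defaultdict

history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 40 + [("B", False)] * 60)

# 'Train' the simplest possible model: count past approvals and
# rejections for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, rejections]
for group, approved in history:
    counts[group][0 if approved else 1] += 1

def predict(group: str) -> bool:
    """Predict the majority outcome for this group in the historical data."""
    approvals, rejections = counts[group]
    return approvals >= rejections

print(predict("A"))  # True: group A gets approved
print(predict("B"))  # False: group B gets rejected, purely because of past bias
```

Nothing in this code mentions bias, and no one instructed it to discriminate; the outputs differ by group only because the historical decisions did. Real predictive tools are far more sophisticated, but the underlying problem is the same.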

There is also little information being shared with the public about how private companies and the government are obtaining, storing, and using our data. We are not being told how decisions about our lives and futures are being made. A UN report on extreme poverty highlighted concerns about the potential impact of automation on the poorest in the UK, and several reports have found that the UK government is not transparent about how it is using AI in the public sector. This is a particular problem for those of us who have little choice in whether we ‘consent’ to handing over personal data – like foreign nationals, people with chronic health conditions or disabilities, and those who rely on the social security system.

So what can we do about it? The idea of AI, algorithms or statistics may be intimidating to most of us, but it is important to understand the basics so that we know how these systems are likely to affect our lives. It is important to acknowledge that these tools are simply that – tools. They are not in themselves good or bad; they simply do the tasks they are asked to do, the way they are asked to do them. But in an environment where austerity, efficiencies, and conditional benefits are dominant, automation helps to scale up the negative impacts of these systems, as well as making them more difficult to challenge. The very people we expect to hold tech companies accountable oversee the systems in which these tools are being placed (the welfare state, the police) – meaning ‘biased’ technology is not as much of a problem as the bias in the institutions using that technology.

That’s why it is important that the average person has access to information on how and when AI and machine learning are being used and what effect they will have on their lives, and that we push for ways to opt out, for rapid (and human) appeals, and for oversight that makes space for the people most affected to have their say.