DWP using machine learning algorithm to decide whether people should receive universal credit

Warnings vulnerable people in danger of being ‘unfairly penalised’ by AI system

May Bulman
Social Affairs Correspondent
Monday 11 July 2022 19:41 BST

The government is trialling a machine learning algorithm to predict whether universal credit claimants should receive benefits, based on their perceived likelihood of committing fraud in the future.

Campaigners have warned that marginalised or vulnerable groups are in danger of being unfairly penalised and having their benefits stopped before they are even paid out under the algorithm, which the Department for Work and Pensions (DWP) has been trialling over the past year.

The department’s 2021-22 accounts, published last Thursday, revealed that it had trialled a “risk model” to “detect fraud” in universal credit advances claims by analysing information from historical fraud cases to predict which cases are likely to be fraudulent in the future.

The document states that this analysis was performed by a “machine learning algorithm”, which “builds a model based on historic fraud and error data in order to make predictions, without being explicitly programmed by a human being”.
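To illustrate in general terms how such a system works: a classifier is fitted to past claims that were later labelled fraudulent or legitimate, and is then asked to score new claims. The sketch below is purely illustrative; the features, data and software library are assumptions for the purpose of explanation, not details of the DWP's model.

```python
# A minimal, hypothetical sketch of the kind of model the accounts describe:
# a classifier fitted to historical fraud/error outcomes that then scores new
# claims. None of the features, data or thresholds here come from the DWP.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical data: each row is a past advance claim described by
# numeric features; the label records whether it was later found fraudulent.
X_history = rng.normal(size=(1000, 4))      # placeholder claim features
y_history = rng.integers(0, 2, size=1000)   # 1 = fraud/error found, 0 = legitimate

# "Builds a model based on historic fraud and error data ... without being
# explicitly programmed": the decision rules are learned from the labelled
# examples rather than hand-written by an official.
model = LogisticRegression().fit(X_history, y_history)

# Score a new claim before any payment is made: the output is a predicted
# probability of fraud, and a claim might be flagged above some threshold.
new_claim = rng.normal(size=(1, 4))
risk = model.predict_proba(new_claim)[0, 1]
print(f"Predicted fraud risk: {risk:.2f}")
```

The key design point is that the model's behaviour is determined entirely by the historical cases it is trained on, which is also the source of the bias concerns described below.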

In 2021-22 the model was run to detect fraud in advances claims already in payment, and the department expects to trial it early in 2022-23 on claims before any payment has been made.

“If successful, this could improve its ability to prevent fraud before these benefits are paid out, avoiding the need to seek recovery,” the accounts state.

A separate report by the National Audit Office on the DWP’s accounts, also published last Thursday, revealed that the DWP was aware of the potential for such a model to generate “biased outcomes” that could have an “adverse impact on certain claimants”.

“For instance, it is unavoidable that some cases flagged as potentially fraudulent will turn out to be legitimate claims. If the model were to disproportionately identify a group with a protected characteristic as more likely to commit fraud, the model could inadvertently obstruct fair access to benefits,” the report states.
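The disproportionality the NAO describes can be made concrete: one basic check is to compare how often the model flags claims across different groups. The sketch below is hypothetical; the group labels, risk scores and threshold are invented for illustration and do not reflect any DWP data.

```python
# A hypothetical sketch of the disproportionality concern the NAO raises:
# compare the rate at which claims are flagged across groups. All values
# here are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)
groups = rng.choice(["group_a", "group_b"], size=1000)  # hypothetical protected characteristic
scores = rng.uniform(size=1000)                         # hypothetical model risk scores
flagged = scores > 0.8                                  # claims flagged for review

for g in ("group_a", "group_b"):
    rate = flagged[groups == g].mean()
    print(f"{g}: {rate:.1%} of claims flagged")

# A markedly higher flag rate for one group would indicate the kind of
# "biased outcome" that, in the NAO's words, could obstruct fair access
# to benefits.
```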

It also pointed out the potential for legal risks if the department were found in breach of its obligations regarding transparency or data protection.

Ariane Adam, legal director of the Public Law Project, said: “Departments across government need to commit to a great deal more than just being ‘aware’ of the risks. We need a clear commitment that all government departments will be transparent about how they use algorithms.”

She said the lack of transparency around the new algorithm was “very problematic”.

“Despite many requests under the Freedom of Information Act, the DWP has previously refused to provide details about its use of automation to assess universal credit applications,” she said.

“Without transparency there can be no evaluation, and without evaluation it is not possible to tell if a system works reliably, lawfully or fairly.”

Ms Adam added that there was a “massive risk” that the policy would have a discriminatory impact.

“Using algorithms fed by historic big data to make decisions on welfare benefit claims carries a danger of unfairly penalising and discriminating against marginalised or vulnerable groups,” she said.

“In the midst of a cost-of-living crisis, people could have benefits stopped before they are even paid out because a computer algorithm said ‘no’.”

A DWP spokesperson said: “We do not use artificial intelligence to make decisions on how a universal credit claim should progress and continue to work hard to be as transparent as possible about our claims process without compromising our ability to identify fraud.

“It is right that we keep up with fraud in today’s digital age so we can prevent, detect and deter those who would try to cheat the system and more importantly, improve our support for genuine claimants.”
