When your mailbag brings 25,000 letters and emails every day, deciding which to answer first is daunting. When the pleas for help come from some of the country’s most vulnerable people, the stakes are even higher.
This is the challenge facing the Department for Work and Pensions (DWP) as correspondence floods in from benefit claimants – of whom there are more than 20 million in the UK, including pensioners. The DWP believes it has found a solution: using artificial intelligence to read everything first – including handwritten letters.
Humans used to take weeks to read it all, which could leave the most vulnerable waiting too long for help. But “White Mail” is an AI system that can do the same work in a day, supposedly prioritizing the most at-risk cases for officials to reach first.
Implicitly, it deprioritizes everyone else, so its accuracy and how it makes its judgments both matter – yet both remain opaque. Despite a ministerial mandate, it is one of numerous public sector algorithms still not recorded in central government’s algorithmic transparency register.
White Mail has been piloted since at least 2023, when the then welfare minister, Mel Stride, said it was “the most rapid way to direct those in need to the relevant person who can help them”. However, documents released to the Guardian under the Freedom of Information Act show claimants are not being told about its use.
An internal privacy impact assessment states that letter writers “do not need to be informed of their involvement in the initiative.”
The assessment says correspondence may include National Insurance numbers, dates of birth, addresses, telephone details, email addresses, benefit entitlement details, health insurance information, bank account details, racial and sexual characteristics and details about children such as their birth dates and special needs.
People who work with benefit claimants are now raising “serious concerns” about how the system handles sensitive personal data.
Meagan Levin, the policy and public affairs manager for Turn2us, a charity that helps people experiencing financial insecurity, said the system “raises concerns, particularly around the lack of transparency and the treatment of highly sensitive personal data, including medical records and financial details. The processing of such information without the knowledge and consent of applicants is deeply disturbing.”
According to the information published so far, the data is encrypted, the originals are deleted, and the material is held by the DWP and a cloud computing provider. The provider’s name is one of many pieces of information about the system that has been withheld.
The DWP’s data protection impact assessment also says informing people about this type of processing of their data “is not necessary as these solutions will increase the efficiency of processing”.
Officials say the AI sits alongside existing systems and flags correspondence, which is then reviewed by agents to determine whether the writer is in fact potentially vulnerable. The DWP said the AI made no decisions and processed no data without human oversight.
The text is also used by the DWP to identify trends and produce a “thematic analysis”, although little more has been published about what form these take and how the findings have been used.
“Prioritizing some cases inevitably deprioritizes others,” she added. “The DWP should publish data on the system’s performance and put in place safeguards, including regular audits and accessible appeals processes, to protect vulnerable claimants.
“Transparency and accountability must be at the heart of an AI system to ensure it supports, rather than harms, those who rely on it.”
The DWP has been contacted for comment.