
UK agrees to redesign 'racist' algorithm that decides visa applications

It's a win for campaigners challenging Home Office use of the algorithm, which for the past five years has been helping decide whether people are granted visas.

Katie Collins, Senior European Correspondent

The UK visa algorithm will be overhauled by the end of October.

Daniel Leal-Olivas/Getty Images

The UK government said Tuesday that it'll stop grading visa applications with an algorithm critics have called racist. From Friday of this week, a temporary system will be put in place to grade applications while the algorithm undergoes a redesign before being reintroduced by the end of October.

"We have been reviewing how the visa application streaming tool operates and will be redesigning our processes to make them even more streamlined and secure," said a Home Office spokeswoman in a statement.

The decision to suspend the use of the "streaming tool," which has been used by the UK Home Office since 2015, comes in direct response to a legal threat by tech accountability organization Foxglove and the Joint Council for the Welfare of Immigrants (JCWI). Together they allege the tool is racist due to its use of nationality as a basis on which to decide whether applicants are high risk.

Racial bias in algorithms is a well-documented issue in facial recognition technology, but it's also widely considered to be a problem in algorithms across the technology industry. The legal challenge by Foxglove and the JCWI comes at a time when governments around the world are increasingly requesting that private tech companies be radically transparent about the way their algorithms are built and how they work.

Critics of the UK government's lack of transparency believe this is hypocritical, as well as undemocratic. Decisions made by the algorithm could have far-reaching implications, they argue.

"It's about who gets to go to the wedding or the funeral and who misses it," one of Foxglove's directors, Cori Crider, said in an interview. "It's who gets to come and study, and who doesn't. Who gets to come to the conference, and get the professional opportunities and who doesn't.

"Potentially life-changing decisions are partly made by a computer program that nobody on the outside was permitted to see or to test," she said.

The streaming tool works by sorting through visa applications using a traffic light system to "assign risk" and siphon off flagged applications for human review, according to Chai Patel, legal director at the JCWI. 

If an application is categorized as red, human reviewers are given a long time to decide whether to grant a visa, which he said "gives them time to look for reasons to refuse." If they decide to grant a visa to one of these high-risk applicants anyway, that decision is then reviewed by a second person; a refusal is not.

Conversely, Patel added, if the algorithm categorizes applications as green, decisions have to be made more quickly, and are reviewed by a second person only if they're refused.

The tool is designed to continuously learn and adapt to decisions made about other applications, using nationality as a major factor. "That creates a feedback loop where if your nationality is high risk, you're more likely to be refused, and then in the future that's going to be used as a reason to increase the risk for your nationality," said Patel. 

Plus, he added, because it uses historic Home Office data to make decisions, it "sits on top of a system that was already extremely biased."
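The Home Office has not published the streaming tool, so the sketch below is only an illustration of the mechanism Patel describes: a hypothetical traffic-light scorer keyed on nationality, plus a feedback loop in which each refusal raises that nationality's risk score. Every name, threshold and adjustment value here is invented for illustration and should not be read as the actual system.

```python
from collections import defaultdict

# Hypothetical starting risk score per nationality; the real tool is said to be
# seeded from historic Home Office decisions, which is where critics say the
# existing bias comes in.
nationality_risk = defaultdict(lambda: 0.1)


def stream_application(nationality: str) -> str:
    """Assign a traffic-light category from the nationality's current risk score."""
    score = nationality_risk[nationality]
    if score >= 0.6:
        return "red"    # slower track; a grant must be signed off by a second reviewer
    if score >= 0.3:
        return "amber"
    return "green"      # faster track; only refusals get a second look


def record_outcome(nationality: str, refused: bool) -> None:
    """The feedback loop critics describe: each refusal nudges the score upward,
    so future applicants from that country are more likely to be flagged red."""
    if refused:
        nationality_risk[nationality] = min(1.0, nationality_risk[nationality] + 0.05)
    else:
        nationality_risk[nationality] = max(0.0, nationality_risk[nationality] - 0.01)
```

Even in this toy version, a run of refusals ratchets a nationality's score upward until every new applicant from that country lands in the red channel, which is the self-reinforcing pattern Patel describes.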

Carly Kind of the Ada Lovelace Institute, an independent AI ethics body, said over email that it's well established that AI and algorithms have the potential to amplify existing assumptions and discriminatory attitudes.

"When algorithmic systems are deployed in systems or organizations that have historical problems with bias and racism -- such as the Home Office and the UK's immigration system, as was well established in the Windrush Review -- there is a real risk that the algorithm will entrench existing social biases," she said.

It's not clear where the Home Office streaming tool originated, though researchers from Foxglove and the JCWI believe it was built in-house by the government rather than brought in from a private company. They allege that the government is being purposefully opaque about the algorithm because it discriminates based on the nationality of the applicant, and that it doesn't want to release a list of the countries it considers high risk into the public domain.

If that's the case, Foxglove and the JCWI say, the system could be contravening the UK Equality Act. Together they filed a judicial review claim back in June to challenge the legality of the streaming tool.

Though the Home Office responded directly to their complaint in a letter on Tuesday, it denied that any of the concerns they raised are valid. It also stressed that it's already started to move away from using the tool for some types of visa application.

"We do not accept the allegations Joint Council for the Welfare of Immigrants made in their Judicial Review claim and whilst litigation is still on-going it would not be appropriate for the Department to comment any further," said the Home Office spokeswoman.

In the longer letter, signed by an unnamed Treasury solicitor, the Home Office said it'll take into consideration the suggestions made by Foxglove and the JCWI during the redesign process, but it didn't elaborate on exactly what this might mean.

According to Kind, performing a Data Protection Impact Assessment would be a good start -- and is in fact required by law before public bodies deploy any technical systems. But even DPIAs, she added, are "not sufficient to give algorithmic systems the seal of approval." 

She listed a number of steps the Home Office should take if it wants to do its due diligence during and following the redesign process, including:

  • efforts to scrutinize the impacts of the system.
  • external scrutiny in the form of regulator oversight. 
  • evaluation and public assessment of the success of the tool.
  • provisions to ensure accountability and redress for those affected by the system.

Crider added that she hopes to see much more transparency from the Home Office in the future, as well as a consultation before the redesigned system is introduced.

"With all of this kind of decision by algorithm, we need to first have democratic debates about whether automation is appropriate, how much automation is appropriate and then how to design it so that it doesn't just replicate the biases of the world as it is," she said.