Two: Testing Systems


At dawn on 30 June 2014, Raja Noman Hussain awoke to find about 15 immigration and police officers raiding his house.1 Raja, a 22-year-old Pakistani man, had arrived in the UK several years earlier to study. Now he was being accused of cheating in an English language proficiency test approved by the Home Office, which he had sat in 2012 to meet a condition of his visa. After confirming his ID, the officers told him to grab some clothes, handcuffed him, and took him into immigration detention. Raja spent the next four months in detention, during which time he estimates he met over 100 other international students who had also been detained on the same basis. What followed was six years of legal battles over the cheating allegation, which disrupted his studies, estranged him from his family, and cost him around £30,000. Finally, in early 2021, Raja succeeded in clearing his name and confirming his right to be in the UK.

Raja was one of the tens of thousands of students whose visas were revoked or curtailed – and studies disrupted or ended – after the Home Office accused them of cheating in a government-approved English language test. This scandal eventually hit the headlines. The ensuing appeals and judicial reviews – which became known as the ‘ETS cases’ – have cost the government millions of pounds.2 What is less appreciated about this debacle is that much of it centred on a failed automated system: a voice recognition algorithm which the government used to identify suspected cheats. This chapter explores that side of the story.
