Is using an A.I. detector lawful?
Using A.I. tools for shortlisting is lawful in principle. Using a tool like GPTZero is lawful if you treat its analysis as one piece of evidence, you stay transparent, you keep a human in the loop, and you audit for bias. Problems arise when the detector becomes a ‘black-box gatekeeper’ or you fail to meet your data-protection or equality duties. Let’s look at the UK GDPR issues that arise when you use A.I. detection software.
Transparency
When you use GPTZero or Copyleaks, you are processing personal data, because you are uploading and analysing someone’s information. Under UK GDPR you need a lawful ground to process personal data. The most likely ground here is ‘legitimate interests’ – you’re verifying candidate authenticity. But a lawful ground alone is not enough to comply with the UK GDPR. Your privacy notice must state that you screen for A.I. writing, explain in plain language how the tool works, and tell applicants how to challenge a result.
Keeping a human in the loop
You don’t need a person to re-check every application – you just need to show meaningful human involvement. UK GDPR only requires full human review where a decision with significant effects (like rejection from a job) is made solely by automation, or where the candidate asks for human intervention. In practice, that human review can be achieved through sampling: a recruiting manager reviews a summary of the A.I. shortlisting decisions, checking for unusual rejection patterns or disproportionate rejection rates among certain demographics.
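As a minimal sketch of what that sampling review might look like – assuming a list of decision records with made-up field names like "group" and "decision", and an arbitrary 10% sample size – the snippet below pulls a random sample for a manager to read and summarises rejection rates per group:

```python
import random
from collections import defaultdict

# Illustrative sketch only: the field names and the 10% sample size are
# assumptions for this example, not requirements of UK GDPR.

def sample_for_review(decisions, sample_rate=0.10, seed=42):
    """Pick a random sample of automated shortlisting decisions
    for a recruiting manager to review by hand."""
    rng = random.Random(seed)
    k = max(1, int(len(decisions) * sample_rate))
    return rng.sample(decisions, k)

def rejection_rates_by_group(decisions):
    """Summarise rejection rates per demographic group so a human
    can spot unusual or disproportionate patterns."""
    totals, rejected = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        if d["decision"] == "reject":
            rejected[d["group"]] += 1
    return {g: rejected[g] / totals[g] for g in totals}

decisions = [
    {"candidate_id": 1, "group": "ESL", "decision": "reject"},
    {"candidate_id": 2, "group": "ESL", "decision": "shortlist"},
    {"candidate_id": 3, "group": "non-ESL", "decision": "shortlist"},
    {"candidate_id": 4, "group": "non-ESL", "decision": "shortlist"},
]

print(sample_for_review(decisions, sample_rate=0.5))
print(rejection_rates_by_group(decisions))
```

The point is not the code itself but the routine: a named person regularly looks at real decisions and at the aggregate pattern, and can intervene before the tool quietly becomes the sole decision-maker.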
Using bias testing
The Equality Act is also relevant. Detectors sometimes misclassify writing by people who have English as a second language, or neurodivergent writing styles, as A.I.-generated, so you must test regularly for disparate impact and adjust thresholds or add alternative assessments if a protected group is hit harder. Amazon had to abandon its CV-screening software after it was found to downgrade CVs containing words like “women’s” (e.g. women’s chess club captain) and to give lower scores to graduates of women-only colleges. Under the Equality Act, any rule or process that puts a protected group at a disadvantage risks an indirect discrimination claim.

The aim is to check whether the tool’s decisions consistently favour or disadvantage people who share a protected characteristic such as sex, ethnicity, disability or age: who gets shortlisted and who doesn’t? If, say, the algorithm shortlists 80% of male applicants but only 30% of female applicants, that’s a red flag. If you find bias, the tool may need adjusting or replacing. Document everything – this audit trail is essential if you’re ever challenged by a regulator or a candidate.
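To make that check concrete, here is a small sketch of the selection-rate comparison using the 80% vs 30% example above. The 0.8 threshold is an illustrative rule of thumb borrowed from the US ‘four-fifths’ guideline, not a test set out in the Equality Act, and the group labels and numbers are placeholders:

```python
# Minimal sketch of a selection-rate disparity check.
# The 0.8 threshold is an assumed rule of thumb, not an Equality Act test.

def selection_rates(shortlisted, applied):
    """Selection rate per group = shortlisted / applied."""
    return {g: shortlisted[g] / applied[g] for g in applied}

def flag_disparities(rates, threshold=0.8):
    """Compare each group's rate against the most-favoured group and
    flag any group whose ratio falls below the threshold."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

applied     = {"male": 100, "female": 100}
shortlisted = {"male": 80,  "female": 30}   # the 80% vs 30% example above

rates = selection_rates(shortlisted, applied)
print(rates)                    # {'male': 0.8, 'female': 0.3}
print(flag_disparities(rates))  # {'female': 0.375} -> red flag, investigate
```

A flagged group does not prove discrimination on its own, but it tells you where to investigate, adjust thresholds, or add an alternative assessment – and the saved output forms part of your audit trail.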
Carrying out a Data Protection Impact Assessment (DPIA)
Using A.I. to screen job applicants is likely to be classed as ‘high-risk processing’ under the UK GDPR, especially if it runs without human input. That means you should carry out a Data Protection Impact Assessment (DPIA). A DPIA shows you’ve considered the risks to candidates and how you will reduce them: set out the tool’s purpose, the data it uses, and safeguards like bias checks and human oversight. Store the DPIA centrally so it’s ready if the ICO, a candidate, or a tribunal asks for it. A good DPIA also helps you catch risks early.
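If it helps to keep the headline DPIA findings alongside your monitoring scripts, a simple structured record is one option. The fields and values below are illustrative assumptions, not an official template – the formal document should follow the ICO’s own DPIA guidance:

```python
# Illustrative, non-exhaustive summary of points a DPIA for an A.I.
# screening tool might capture. Field names and values are assumptions.

dpia_record = {
    "processing_activity": "A.I.-detection screening of job applications",
    "purpose": "Verify candidate authenticity before shortlisting",
    "lawful_basis": "Legitimate interests",
    "data_processed": ["application text", "candidate name", "detector score"],
    "high_risk_factors": ["automated screening of job applicants"],
    "safeguards": [
        "privacy notice explains the screening and how to challenge a result",
        "human review of sampled decisions",
        "regular bias testing across protected groups",
    ],
    "review_date": "2025-01-01",  # placeholder
}

print(dpia_record["safeguards"])
```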
Please give one of our expert team a call or get in touch via our Contact Form.