Usually, an employment relationship ends because the employee resigns to go to another job or retires. But sometimes it ends because of dismissal or redundancy. With A.I. in the mix, we have new twists on both the reason for discipline or dismissal (such as an employee misusing A.I. tools) and the process of dismissal (such as an employer relying on A.I. algorithms to select someone for redundancy). Let’s look at both:

(a) Employee Misuse of A.I. – Disciplinary Action

If an employee breaks your A.I. rules, despite clear policies and training, treat it as misconduct and follow normal disciplinary procedures. It’s essential to take these breaches seriously. Even if no harm is caused (for example, someone uploads client data but nothing bad happens), you should still discipline them. Dismissal might not be necessary, but a formal warning should be given. Why?

  • to make clear that the rules matter and must be followed; and
  • because if you let it go now, and later discipline someone else for the same thing, they could argue they’ve been treated unfairly or even bring a discrimination or unfair dismissal claim.

Make sure your A.I. rules are clear and that employees have seen and understood them – in writing and through training. If they haven’t, enforcement becomes difficult. But if they have, then the key legal question in any dismissal will be whether it was within the band of reasonable responses.

Example: An employee uses a tool like www.fyxer.com to reply to all emails and barely does any real work. If their work is poor and they hid how little they were doing, it might look like a performance issue, but it’s probably really misconduct. They could be in breach of their duty to use reasonable skill and care, and potentially their duty of good faith. Using A.I. isn’t wrong in itself, but using it to mislead your employer about your actual effort probably is.

(b) Dismissal by Algorithm

What if the employer is using A.I. to make dismissal decisions, rather than the employee misusing A.I.? It’s already happening. Two recent examples:

  • Uber: In 2021, Uber Eats courier Pa Edrissa Manjang had his account deactivated after repeated ‘failed’ selfie verifications by Microsoft’s facial recognition A.I. Mr Manjang is black, and the system struggled to match his selfies with his profile photo; there was no human review of the algorithm’s repeated mismatches before his account was deactivated. With backing from the Equality and Human Rights Commission, he challenged the decision as indirectly racially discriminatory, noting that facial recognition A.I. is less accurate for darker skin tones. Uber settled in 2024, reactivated his account, and paid compensation.
  • Estée Lauder: Three make-up artists in the UK were made redundant after failing a video assessment marked by an algorithm. They were never told how or why they failed. The company later reached a settlement.

These are extreme cases where no human judgment was involved. Legally, that’s a serious problem for three reasons:

(a) UK GDPR (Article 22): Significant decisions made solely by automated means (like firing someone) are usually unlawful. The individual can require human review of the outcome, and the process must have proper safeguards, such as meaningful human oversight.

(b) Employment Rights Act 1996: Dismissals must be for a fair reason and follow a fair procedure. Even if the reason is valid (such as performance), a fully automated process with no hearing or human involvement would almost certainly be procedurally unfair.

(c) Equality Act 2010: If an apparently neutral tool (like facial recognition) disproportionately affects people with a protected characteristic (like race), it’s indirectly discriminatory unless justified. Employers relying on third-party A.I. systems may struggle to justify outcomes they don’t fully understand or control.

That said, pure automation is rare. The more likely situation is a hybrid process, where A.I. contributes to the decision but doesn’t make it alone. It is reasonable to use A.I. to score written work in redundancy cases if employees can appeal the scores. Yes, A.I. can make mistakes – but so do human managers, and errors in redundancy scores don’t automatically make the process unfair.

What about whistleblowing?

Suppose an employee refuses to use an A.I. system and raises concerns: they believe it might breach the UK GDPR (for example, because it processes sensitive health data), so raising it is in the public interest. If they have voiced those concerns and disclosed them to management, that could qualify as a protected disclosure under whistleblowing law. Disciplining them because they’ve spoken up would be unlawful. But it’s a highly fact-sensitive issue. Distinguish that situation from an employee who is disciplined for refusing to use an A.I. system, rather than for speaking up about its risks. A refusal on its own might not be protected, because it isn’t a disclosure of information.

At present, there are no UK tribunal decisions squarely dealing with an A.I.-driven dismissal. But it’s coming. For now, fairness, transparency, and human oversight remain the key principles.

Please give one of our expert team a call or get in touch via our Contact Form.