Bias in UK Government AI System: An AI system used by the UK government to detect welfare fraud shows bias according to people's age, disability, marital status, and nationality, a fairness analysis obtained under the Freedom of Information Act reveals.
Internal Assessment Findings:
The system, designed to identify fraudulent claims for universal credit, disproportionately flags individuals from certain demographic groups.
This “statistically significant outcome disparity” was identified in February’s fairness analysis, despite earlier assurances from the Department for Work and Pensions (DWP) that the system posed no discrimination concerns.
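The DWP has not published the methodology behind its fairness analysis, so the sketch below is purely illustrative: it shows one common way an "outcome disparity" between demographic groups can be measured, by comparing flag rates and testing whether the difference is statistically significant. All group names and figures are hypothetical.

```python
# Illustrative only: the DWP's actual fairness methodology is not public.
# This compares the rate at which claims from two hypothetical groups are
# flagged and uses a chi-squared test to check whether the gap is
# statistically significant.

from scipy.stats import chi2_contingency

# Hypothetical counts: (group label, claims flagged, claims not flagged)
groups = [
    ("group_a", 320, 9_680),   # flag rate ~3.2%
    ("group_b", 610, 9_390),   # flag rate ~6.1%
]

# Build the 2x2 contingency table of flagged vs not-flagged counts.
table = [[flagged, not_flagged] for _, flagged, not_flagged in groups]
chi2, p_value, dof, expected = chi2_contingency(table)

for name, flagged, not_flagged in groups:
    rate = flagged / (flagged + not_flagged)
    print(f"{name}: flag rate = {rate:.1%}")

print(f"chi-squared = {chi2:.1f}, p = {p_value:.2g}")
# A very small p-value means the difference in flag rates between groups is
# unlikely to be due to chance, i.e. a "statistically significant outcome
# disparity" in the sense reported by the fairness analysis.
```

In practice such an analysis would be run across many demographic attributes and would also weigh whether the flagged claims actually turned out to be fraudulent, which raw flag rates alone do not capture.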
Concerns Over Limited Analysis:
The DWP has yet to assess the system for potential bias relating to race, sex, sexual orientation, religion, pregnancy, or gender reassignment.
Critics argue that the absence of a comprehensive bias analysis undermines the system’s fairness and inclusivity.
Campaigners’ Criticism:
Campaigners have condemned what they call the government's "hurt first, fix later" approach and demanded greater transparency about which demographic groups are most affected.
Although human caseworkers still make the final decision on each claim, campaigners called on ministers to be open about which groups the algorithm is most likely to wrongly suspect of trying to cheat the system.