Algorithmic bias arises when an AI system produces distorted or unfair outcomes for groups or individuals. LUCI studies logical methods for analysing fairness, comparing model behaviour, and tracking bias amplification across the full machine-learning pipeline.
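As a concrete illustration of group-level fairness analysis, the sketch below computes a demographic parity gap, i.e. the difference in favourable-outcome rates between two groups, for a binary classifier. The data and function names are hypothetical illustrations, not drawn from any LUCI tool.

```python
# Minimal sketch: measuring demographic (statistical) parity for a binary
# classifier. All names and data are hypothetical, not part of any
# LUCI/BRIO codebase.

def positive_rate(predictions, group_labels, group):
    """Fraction of positive predictions the model assigns to one group."""
    preds = [p for p, g in zip(predictions, group_labels) if g == group]
    return sum(preds) / len(preds)

# Toy predictions (1 = favourable outcome) and a sensitive attribute.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

rate_a = positive_rate(y_pred, group, "a")
rate_b = positive_rate(y_pred, group, "b")

# Demographic parity gap: 0 means both groups receive the favourable
# outcome at the same rate; larger values flag potential bias.
print(f"P(pos | a) = {rate_a:.2f}, P(pos | b) = {rate_b:.2f}")
print(f"parity gap = {abs(rate_a - rate_b):.2f}")
```

Metrics like this cover only one step of the pipeline; tracking bias amplification requires comparing such measurements across data collection, training, and deployment.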
BRIO line
The BRIO project develops bias-detection software grounded in logical tools such as TPTND (Trustworthy Probabilistic Typed Natural Deduction), combining data fairness analysis, model fairness analysis, and risk analysis.
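BRIO's internals are not reproduced here; as a rough sketch of the kind of distribution-level model comparison such a tool performs, the example below measures how far a model's output distribution on one sensitive group diverges from a reference distribution. The divergence measure, distributions, and threshold idea are all illustrative assumptions, not BRIO's actual API.

```python
import math

# Rough illustration of distribution-level fairness checking: compare the
# model's output distribution on a sensitive group against a reference
# distribution via KL divergence. Generic sketch only; all numbers are
# invented for illustration.

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) over the same finite outcome space."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Observed frequencies of model outputs for one sensitive group...
observed  = [0.70, 0.30]   # e.g. [P(approve), P(reject)] for group g
# ...versus a reference (e.g. the model's behaviour on the whole population).
reference = [0.55, 0.45]

score = kl_divergence(observed, reference)
print(f"divergence from reference behaviour: {score:.4f}")
# A score above a chosen risk threshold would flag the group for review.
```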
Why it matters
As AI systems increasingly shape decisions in healthcare, criminal justice, education, and credit scoring, formal methods for detecting and mitigating bias are essential for building more equitable and trustworthy systems.