Ainhize Barrainkua will defend her thesis on Friday, January 30th
- The defence will take place at Ada Lovelace Aretoa, Faculty of Informatics (EHU - Donostia) at 11:30 a.m.
Ainhize Barrainkua obtained her Bachelor's degrees in Physics and Electronic Engineering from the University of the Basque Country UPV/EHU in 2020 and her Master's degree in Computer Engineering and Artificial Intelligence from the same university in 2021. Since late 2021, she has been a Ph.D. student at the Basque Center for Applied Mathematics (BCAM) under the supervision of Prof. Jose A. Lozano and Dr. Novi Quadrianto. Her research focuses on integrating and modeling uncertainty within methodologies from the field of algorithmic fairness.
Her thesis, titled “New Perspectives on Machine Learning Fairness: Algorithmic Design and Evaluation”, is supervised by Prof. Jose A. Lozano (Scientific Director of BCAM and UPV/EHU Professor) and Dr. Novi Quadrianto (U. Sussex and BCAM). It will be defended on January 30th, 2026, at Ada Lovelace Aretoa, Faculty of Informatics (EHU - Donostia), at 11:30 a.m.
On behalf of all members of BCAM, we would like to wish her all the best for the future, both professionally and personally.
Abstract
Algorithmic systems are increasingly embedded in critical decision-making processes across finance, hiring, healthcare, and criminal justice. While these systems promise efficiency and consistency, they also risk perpetuating societal biases, often producing unequal outcomes for different demographic groups. Traditional approaches for assessing fairness and mitigating bias assume idealized conditions, including full access to sensitive attributes and identically distributed training and deployment data. In practice, these assumptions are frequently violated: sensitive information may be unavailable due to legal or privacy constraints, and real-world deployment environments often differ substantially from training conditions. Thus, the environments in which automated decision-making systems actually operate introduce new uncertainties into fairness assessment and bias mitigation, posing significant challenges for existing approaches, which are often ill-suited to such conditions.
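To make concrete what an "unequal outcome for different demographic groups" looks like in code, here is a minimal Python sketch of a standard textbook metric, the demographic parity difference (the gap in positive-prediction rates between two groups). It is only an illustration of the kind of quantity fairness audits measure, not a method taken from the thesis:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two demographic groups.

    y_pred : array of 0/1 classifier predictions
    group  : array of 0/1 group membership (a sensitive attribute)
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate in group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_0 - rate_1)

# Toy example: a classifier that favours group 1
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.5
```

A value of 0 would mean both groups receive positive predictions at the same rate; the 0.5 in this toy example signals a large disparity.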
This thesis tackles these limitations by systematically integrating such sources of uncertainty and non-ideal conditions into both the evaluation and enhancement of the fairness guarantees of classifiers. We first introduce a framework for reliably auditing fairness guarantees, addressing the inherent instability of fairness metrics and enabling robust comparisons across classifiers. Next, we provide a comprehensive review of methods for ensuring fairness under distribution shift, offering a novel taxonomy and a broad perspective on related work and open challenges. We then develop a fairness-enhancing intervention for settings where sensitive attributes are unavailable, providing theoretical guarantees that extend fairness assessment to contexts with partial access to demographic information. Finally, we examine fairness in recourse-aware systems, which deliver actionable feedback to individuals affected by negative algorithmic predictions. We demonstrate that widely used metrics for evaluating fairness in recourse frequently fail to capture significant fairness issues, and introduce a novel framework that provides a holistic perspective on biases, accompanied by a practical mitigation intervention to promote more equitable outcomes in these pipelines.
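The "inherent instability of fairness metrics" mentioned above can be illustrated with a generic bootstrap over a finite evaluation set. This is only a sketch under simple assumptions, not the auditing framework developed in the thesis: the same classifier can yield a wide range of measured disparities depending on which individuals happen to be sampled.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_disparity(y_pred, group, n_boot=1000):
    """Bootstrap 95% interval for the demographic parity difference."""
    n = len(y_pred)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)       # resample with replacement
        yp, g = y_pred[idx], group[idx]
        if len(set(g)) < 2:                    # skip resamples missing a group
            continue
        estimates.append(abs(yp[g == 0].mean() - yp[g == 1].mean()))
    return np.percentile(estimates, [2.5, 97.5])

# Toy evaluation set with random predictions and group labels
y_pred = rng.integers(0, 2, size=200)
group = rng.integers(0, 2, size=200)
low, high = bootstrap_disparity(y_pred, group)
print(f"95% bootstrap interval for the disparity: [{low:.3f}, {high:.3f}]")
```

Even on a few hundred evaluation points, the interval is typically wide, which is why point estimates of fairness metrics can be unreliable for comparing classifiers.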
Through a combination of theoretical analysis, practical algorithms, and extensive empirical evaluation, this thesis contributes a comprehensive framework for fair decision-making under more realistic conditions. The proposed methods extend the applicability of fairness interventions beyond idealized settings, providing robust, actionable tools for designing and auditing algorithms that operate responsibly in complex, real-world environments.