Bio:
Jean-Michel Loubes (https://perso.math.univ-toulouse.fr/loubes/) is a French mathematician and professor specializing in statistics and machine learning. He holds a professorship at the Université Toulouse III - Paul Sabatier and is affiliated with the Institut de Mathématiques de Toulouse (IMT). He is currently a Research Director at INRIA. His research focuses on mathematical statistics, machine learning, optimal transport, and the fairness and robustness of artificial intelligence systems.
Loubes completed his PhD in Applied Mathematics at Université Toulouse III in 2001, with a dissertation titled "Adaptive M-estimation" under the co-direction of Michel Ledoux and Sara van de Geer. He held positions as a CNRS researcher at Université Paris-Sud and Université Montpellier II before becoming a professor at Université Toulouse III in 2007.
In addition to his academic roles, Loubes has been actively involved in bridging academia and industry. From 2010 to 2016 he served as the regional manager for Occitanie of AMIES (Agence pour les Mathématiques en Interaction avec l'Entreprise et la Société), the CNRS agency promoting interactions between mathematics and industry.
He is also the holder of the "Fair and Robust Learning" Chair at the Artificial and Natural Intelligence Toulouse Institute (ANITI), where his research addresses issues of fairness and robustness in artificial intelligence.
Throughout his career, Loubes has contributed significantly to the fields of statistics and machine learning, with an extensive record of publications and citations. His work often applies optimal transport theory to machine learning and to the development of fair and robust AI systems.
Abstract:
As Artificial Intelligence (AI) systems continue to permeate our daily lives, ensuring their fairness has become both a legal necessity and an ethical imperative. This course provides a comprehensive exploration of bias in AI, beginning with core definitions and the evolving legal and regulatory landscape. Participants will investigate how bias originates in data and algorithms, and learn to evaluate and measure it through established fairness metrics. Special emphasis is placed on Optimal Transport (OT) theory and its role in detecting and mitigating bias, whether after model training ("a posteriori"), before training ("a priori"), or during training ("in-processing"). The course also examines the underlying causes of bias, enabling practitioners to make AI systems more interpretable. Finally, participants will learn to conduct comprehensive audits of AI algorithms, ensuring these systems adhere to fairness principles. By uniting theoretical constructs, practical tools, and ethical considerations, the course empowers students to develop and deploy AI solutions that promote equitable outcomes for all.
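To make the abstract's two central ideas concrete, here is a minimal sketch, not taken from the course materials: it computes one standard fairness metric (the demographic parity difference between two groups) and the one-dimensional Wasserstein (optimal transport) distance between the score distributions of two groups, the quantity on which OT-based bias detection typically rests. The data, function names, and the restriction to equal-size samples are illustrative assumptions.

```python
# Illustrative toy example, not the course's implementation.

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    def rate(g):
        preds = [p for p, s in zip(y_pred, group) if s == g]
        return sum(preds) / len(preds)
    return abs(rate(0) - rate(1))

def wasserstein_1d(xs, ys):
    """1-D Wasserstein (earth mover's) distance between two equal-size
    empirical samples: mean absolute difference of sorted values."""
    assert len(xs) == len(ys), "sketch assumes equal sample sizes"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Toy binary predictions with a binary sensitive attribute.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5

# OT view of bias: distance between the two groups' score distributions.
scores_g0 = [0.9, 0.8, 0.4, 0.7]
scores_g1 = [0.3, 0.2, 0.6, 0.1]
print(wasserstein_1d(scores_g0, scores_g1))  # 0.4
```

A Wasserstein distance of zero would mean the model's scores are identically distributed across groups (a strong form of demographic parity); the larger it is, the more "transport" is needed to repair the disparity, which is the intuition behind OT-based mitigation.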