Mario Martinez will defend his thesis on Wednesday, July 2nd

The defense will take place at 11:00 in the Ada Lovelace Room of the Faculty of Informatics, UPV/EHU Donostia Campus.

Mario Martinez has been a PhD student at the Basque Center for Applied Mathematics (BCAM) since June 2021, working within the Machine Learning group under the supervision of Dr. Iñaki Inza and Dr. Jose A. Lozano. His research focuses on advancing artificial intelligence in clinical contexts, particularly through Learning Using Privileged Information (LUPI). He explores techniques such as knowledge distillation and multi-task learning to improve supervised classification.

Martinez earned his Bachelor's degree in Physics from the Universidad de Murcia in 2019 and a Master's degree in Data Science from the Universitat Oberta de Catalunya in 2021.

His thesis, entitled Advances in Learning Using Privileged Information for Supervised Classification, is supervised by Dr. Jose A. Lozano (Scientific Director of BCAM) and Dr. Iñaki Inza (UPV/EHU).

On behalf of all BCAM members, we wish Mario the very best for the future, both professionally and personally.

Abstract

What if machine learning models could train with extra “privileged knowledge”? This dissertation explores how the Learning Using Privileged Information paradigm can significantly boost performance in supervised classification. First, two logistic regression-based methods that learn using privileged information are presented. Second, a privileged knowledge distillation approach is proposed: a teacher model (trained with both regular and privileged features) transfers knowledge to a student model that uses only regular features. However, the teacher model may not be entirely reliable or error-free, so the proposed distillation framework includes a mechanism to guide the student correctly. Finally, a multi-task privileged framework is introduced. In this framework, one task predicts the privileged features from the regular ones, while another uses the regular features together with the predicted privileged features to make the final prediction. This framework is also addressed using knowledge distillation techniques. It is important to note that privileged information does not inherently guarantee improved model performance. Consequently, each chapter introduces different approaches designed to maximize the advantages of privileged information and to provide a clearer understanding of its impact on model performance. All methods are validated on a range of datasets, demonstrating significant improvements over current state-of-the-art techniques. The work contributes both theoretical insights and practical solutions for leveraging privileged information in real-world scenarios.
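To make the teacher-student idea concrete, the following is a minimal sketch of privileged knowledge distillation with logistic regression, not the method developed in the thesis. The toy data, the imitation weight lam, and the helper functions (fit_logreg, predict_proba) are illustrative assumptions; in particular, the sketch omits the thesis's mechanism for handling an unreliable teacher.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, targets, lr=0.1, epochs=500):
    """Fit logistic regression by gradient descent on cross-entropy.
    `targets` may be hard labels {0, 1} or soft probabilities in [0, 1]."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - targets) / len(targets)
    return w

def predict_proba(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return sigmoid(Xb @ w)

# Toy data: X_reg is available at train and test time,
# X_priv only at training time (the privileged information).
rng = np.random.default_rng(0)
n = 400
X_priv = rng.normal(size=(n, 3))                                   # privileged features
X_reg = X_priv @ rng.normal(size=(3, 5)) + rng.normal(scale=1.5, size=(n, 5))
y = (X_priv.sum(axis=1) + 0.3 * rng.normal(size=n) > 0).astype(float)

# 1) Teacher: trained on regular + privileged features.
w_teacher = fit_logreg(np.hstack([X_reg, X_priv]), y)
soft = predict_proba(w_teacher, np.hstack([X_reg, X_priv]))        # teacher's soft labels

# 2) Student: sees only regular features; its targets blend the hard
#    labels with the teacher's soft predictions (imitation weight lam).
lam = 0.5
w_student = fit_logreg(X_reg, lam * soft + (1 - lam) * y)

# 3) Baseline trained on hard labels only, for comparison at test time.
w_baseline = fit_logreg(X_reg, y)
```

The design choice to blend hard labels with the teacher's soft predictions reflects the caveat in the abstract: when the teacher is imperfect, leaning entirely on its outputs can mislead the student, so the hard labels act as a safeguard.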