Mario Martinez will defend his thesis on Wednesday, July 2nd
The defence will take place in the Ada Lovelace Room at the Faculty of Informatics, on the Donostia UPV/EHU Campus, at 11:00 h
Mario Martinez has been a PhD student at the Basque Center for Applied Mathematics (BCAM) since June 2021, working within the Machine Learning group under the supervision of Dr. Iñaki Inza and Dr. Jose A. Lozano. His research is focused on advancing artificial intelligence in clinical contexts, particularly through learning using privileged information (LUPI). He explores techniques such as knowledge distillation and multi-task learning to enhance supervised classification. Martinez obtained a BSc in Physics from the University of Murcia in 2019 and an MSc in Data Science from the Open University of Catalonia in 2021.
His thesis, titled Advances in learning using privileged information for supervised classification, has been supervised by Dr. Jose A. Lozano (Scientific Director of BCAM) and Dr. Iñaki Inza (UPV/EHU).
On behalf of all members of BCAM, we would like to wish Mario all the best for the future, professionally and personally.
Abstract
What if machine learning models could train with extra “privileged knowledge”? This dissertation explores how the Learning Using Privileged Information (LUPI) paradigm can significantly boost performance in supervised classification. Firstly, two logistic regression-based methods that learn using privileged information are presented. Secondly, a privileged knowledge distillation approach is proposed: a teacher model, trained with both regular and privileged features, transfers knowledge to a student model that uses only regular features. However, the teacher model may not be entirely reliable or error-free, so the proposed distillation framework includes a mechanism to keep the student on track when the teacher errs. Finally, a multi-task privileged framework is introduced: one task predicts the privileged features from the regular ones, and another uses the regular features together with the predicted privileged features to make the final prediction. This framework is also addressed with knowledge distillation techniques. It is important to note that privileged information does not inherently guarantee improved model performance. Consequently, each chapter introduces approaches designed to maximize the advantages of privileged information and to provide a clearer understanding of its impact on model performance. All methods are validated on a variety of datasets, demonstrating significant improvements over current state-of-the-art techniques. The work contributes both theoretical insights and practical solutions for leveraging privileged information in real-world scenarios.
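To make the teacher-student idea in the abstract concrete, the sketch below shows a generic privileged knowledge distillation setup: a teacher is trained on regular plus privileged features, and a student that sees only the regular features is trained on a mixture of the true labels and the teacher's softened predictions. This is a minimal illustration in the spirit of standard generalized distillation, not the exact method of the dissertation; the network sizes, temperature, and mixing weight are illustrative assumptions.

```python
# Minimal sketch of privileged knowledge distillation (illustrative, not the thesis method).
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim, hidden=32):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

# Synthetic data: x_reg are regular features, x_priv are privileged features
# (available only at training time), y are class labels.
torch.manual_seed(0)
n, d_reg, d_priv, n_classes = 512, 10, 4, 3
x_reg = torch.randn(n, d_reg)
x_priv = torch.randn(n, d_priv)
y = torch.randint(0, n_classes, (n,))

# 1) Teacher: trained on regular + privileged features.
teacher = mlp(d_reg + d_priv, n_classes)
opt = torch.optim.Adam(teacher.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(teacher(torch.cat([x_reg, x_priv], dim=1)), y)
    loss.backward()
    opt.step()

# 2) Student: sees only regular features; learns from the hard labels and from the
#    teacher's softened predictions (temperature T, mixing weight alpha).
T, alpha = 2.0, 0.5
with torch.no_grad():
    soft_targets = F.softmax(teacher(torch.cat([x_reg, x_priv], dim=1)) / T, dim=1)

student = mlp(d_reg, n_classes)
opt = torch.optim.Adam(student.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    logits = student(x_reg)
    hard_loss = F.cross_entropy(logits, y)
    # KL divergence between the student and teacher distributions at temperature T.
    distill_loss = F.kl_div(F.log_softmax(logits / T, dim=1), soft_targets,
                            reduction="batchmean") * (T * T)
    loss = alpha * distill_loss + (1 - alpha) * hard_loss
    loss.backward()
    opt.step()

# At test time only the regular features are needed:
# predictions = student(x_reg_test).argmax(dim=1)
```

The dissertation's contribution goes beyond this baseline recipe, notably by adding a mechanism that protects the student when the teacher itself is unreliable; the sketch only captures the basic flow of privileged information from teacher to student.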