Machine Learning
Our goal is to develop novel and efficient machine learning algorithms that can address new practical data problems. We also pursue the mathematical modeling of these algorithms in order to provide theoretical guarantees on their performance.
In the machine learning research line we deal with data problems coming from different scenarios: industry, biosciences, health, economics, etc. We pursue the development of new machine learning algorithms that can efficiently tackle these problems. In particular, we consider problems that involve a variety of data types, from time series and streaming data to images and speech, and a wide range of modeling techniques and mathematical formalisms, such as probabilistic graphical models, Bayesian approaches, and deep learning.
The research carried out in the machine learning line is inspired by problems that arise in other scientific, technological, and economic disciplines. To solve this kind of problem, we develop new machine learning methods and algorithms for the main data analysis tasks, such as clustering, supervised classification, and feature subset selection. Based on the specific characteristics of the problem at hand, we design tailored yet general algorithms that extract as much information as possible from the available data, providing efficient machine learning models that solve the problem.
In addition, we develop mathematical tools to model the behavior and performance of these algorithms: studying their convergence, estimating their performance, and characterizing their computational time and memory requirements.
In recent years, the machine learning line has worked on a variety of machine learning problems and algorithms. In particular, we can highlight the work done in time series mining and data streaming, the adaptation of classical clustering algorithms such as k-means or k-medoids to massive data environments, the probabilistic modeling of permutations and ranked data, developments in anomaly detection, and the analysis of crowd learning environments.
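As an illustration of how a classical clustering algorithm like k-means can be adapted to massive data environments, the following is a minimal mini-batch k-means sketch. It is a hypothetical example, not the group's actual implementation: instead of full passes over the dataset, centroids are updated from small random batches with a decaying per-centroid learning rate.

```python
import numpy as np

def mini_batch_kmeans(X, k, batch_size=100, n_iters=100, seed=0):
    """Illustrative mini-batch k-means for large datasets: update
    centroids from small random batches instead of the full data."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct random data points
    centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
    counts = np.zeros(k)  # per-centroid update counts
    for _ in range(n_iters):
        batch = X[rng.choice(len(X), batch_size)]
        # Assign each batch point to its nearest centroid
        dists = np.linalg.norm(batch[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for x, c in zip(batch, labels):
            counts[c] += 1
            eta = 1.0 / counts[c]  # decaying learning rate per centroid
            centroids[c] = (1 - eta) * centroids[c] + eta * x
    return centroids
```

The decaying learning rate makes each centroid converge to the running mean of the points assigned to it, so the algorithm approximates batch k-means while touching only a small sample of the data per iteration.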
In terms of formalisms, we rely strongly on probabilistic modeling, using tools and techniques such as probabilistic graphical models and Gaussian processes, to name a few, which in most cases are learned from a Bayesian perspective. We also use deep learning when we consider it the most appropriate technique for the problem at hand.
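To illustrate what learning from a Bayesian perspective means in the simplest possible setting, here is a toy conjugate Beta-Bernoulli update. This is a generic textbook example, not one of the group's models: a Beta prior over a success probability is updated into a Beta posterior after observing binary data.

```python
import numpy as np

def beta_bernoulli_posterior(data, a=1.0, b=1.0):
    """Conjugate Bayesian update: prior Beta(a, b) over a success
    probability, observing binary outcomes in `data`."""
    heads = int(np.sum(data))
    # Posterior is Beta(a + successes, b + failures)
    return a + heads, b + len(data) - heads

a_post, b_post = beta_bernoulli_posterior([1, 0, 1, 1, 0, 1])
post_mean = a_post / (a_post + b_post)  # posterior mean of the success probability
```

The same prior-to-posterior logic underlies Bayesian learning of far richer models, such as probabilistic graphical models, where the posterior is typically approximated rather than computed in closed form.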

Hierarchical sequence optimization for spacecraft transfer trajectories based on the employment of meta-heuristics
Description: This video shows the simulation of hierarchical sequence optimization for spacecraft transfer trajectories based on meta-heuristics. Three types of evolutionary algorithms, "Genetic Algorithm", "Particle Swarm Optimization", and "Estimation of Distribution Algorithms", are used in an optimal guidance approach utilizing low-thrust trajectories. A different initial orbit is considered for each algorithm, while the orbits are expected to have the same shape and orientation at the end of the space mission.
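To give a sense of how an Estimation of Distribution Algorithm works, the following is a minimal continuous EDA on a toy objective. It is a didactic sketch, not the trajectory-optimization code from the video: each generation, a Gaussian model is fitted to the best individuals and the next population is sampled from it.

```python
import numpy as np

def gaussian_eda(objective, dim, pop_size=60, elite_frac=0.3,
                 n_gens=100, seed=0):
    """Toy continuous EDA (minimization): iteratively fit an
    axis-aligned Gaussian to the elite and resample from it."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.full(dim, 5.0)  # broad initial model
    n_elite = int(pop_size * elite_frac)
    for _ in range(n_gens):
        pop = rng.normal(mean, std, size=(pop_size, dim))
        scores = np.apply_along_axis(objective, 1, pop)
        elite = pop[np.argsort(scores)[:n_elite]]  # keep the best individuals
        # Re-estimate the probabilistic model from the elite
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

# Example: minimize a shifted sphere function with optimum at x = 3
best = gaussian_eda(lambda x: np.sum((x - 3.0) ** 2), dim=4)
```

Unlike a Genetic Algorithm, which recombines individuals directly, an EDA encodes the search state in an explicit probability distribution, which is what makes it a natural fit for a group focused on probabilistic modeling.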

Multi-impulse Long-Range Space Rendezvous via Evolutionary Discretized Lambert Approach
Transferring satellites between space orbits is a challenging task. Due to the complexity of space systems, finding optimal transfer trajectories requires efficient approaches. This video shows the simulation of satellite motion in a space mission, where a new evolutionary algorithm is developed and used for the orbit transfer. The satellite performs multiple transfers between intermediate orbits until it reaches the desired destination. Novel heuristic mechanisms are used in the development of the optimization algorithm for spacecraft trajectory design. Simulation results indicate the effectiveness of the algorithm in finding optimal transfer trajectories for the satellite.