Meta-learning to initialize genetic algorithm population for hyperparameter tuning of deep neural networks

Tatis Posada, David
Arzuaga, Emmanuel
College of Engineering
Department of Electrical and Computer Engineering
Artificial intelligence models such as Deep Neural Networks (DNNs) are designed taking into account the data, the available computational resources, and the problem to be solved (e.g., prediction or classification). Before training a model, it is necessary to set a series of hyperparameters: parameters of a Deep Neural Network that are fixed before training and not modified during the training process. Hyperparameters have a large impact on model results, so they must be carefully selected. Typical hyperparameters when designing a DNN model include the learning rate, number of epochs, batch size, activation function, number of hidden layers, units per layer, dropout rate for regularization, and more. Given such a large search space, it is necessary to implement strategies and algorithms that find better combinations of hyperparameters while reducing the number of models that must be trained. The objective of this thesis is to design, implement, and study an algorithm for hyperparameter tuning of Deep Neural Networks based on a meta-algorithm that learns from previous experiments (a meta-dataset) to initialize the population of a genetic algorithm. The genetic algorithm then iterates to find the combination of hyperparameters that improves a model's performance in a cost-effective way. To this end, different Deep Neural Network architectures are implemented and applied to active research problems (such as materials property prediction and image segmentation) to create a meta-dataset, which is used to train a Machine Learning model that predicts how well a combination of hyperparameters will perform for a given model, task, and dataset. In addition, a genetic algorithm for hyperparameter tuning is implemented and evaluated both with and without the meta-learner initializing its population.
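The core idea described above — oversampling random hyperparameter combinations and letting a meta-learner select the most promising ones to seed the genetic algorithm's population — can be sketched as follows. This is a minimal illustration, not the thesis implementation: the search space values are illustrative, and `meta_score` is a toy stand-in for the Machine Learning model that would actually be trained on the meta-dataset of previous experiments.

```python
import random

# Illustrative hyperparameter search space (values are assumptions,
# not the ranges used in the thesis)
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [16, 32, 64, 128],
    "hidden_layers": [1, 2, 3, 4],
    "dropout": [0.0, 0.2, 0.5],
}

def random_individual(rng):
    """Sample one hyperparameter combination uniformly at random."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def meta_score(individual):
    """Stand-in for the meta-learner. In the thesis, a model trained on a
    meta-dataset of past experiments would predict expected performance;
    this toy heuristic merely prefers smaller learning rates and
    moderate dropout so the example is runnable."""
    return -individual["learning_rate"] - abs(individual["dropout"] - 0.2)

def init_population(pop_size, candidate_pool, rng):
    """Meta-learned initialization: draw many random candidates, then
    keep the pop_size combinations the meta-learner ranks highest."""
    candidates = [random_individual(rng) for _ in range(candidate_pool)]
    candidates.sort(key=meta_score, reverse=True)
    return candidates[:pop_size]

rng = random.Random(42)
population = init_population(pop_size=10, candidate_pool=100, rng=rng)
```

The returned `population` would then feed the genetic algorithm's usual selection, crossover, and mutation loop; the contrast studied in the thesis is between seeding this way versus seeding with purely random individuals.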

Keywords
Artificial Intelligence, Deep Neural Networks, Hyperparameter Tuning, Genetic Algorithms
Usage Rights
Except where otherwise noted, this item’s license is described as Attribution-NonCommercial-NoDerivatives 4.0 International
Tatis Posada, D. (2023). Meta-learning to initialize genetic algorithm population for hyperparameter tuning of deep neural networks [Thesis]. Retrieved from