This is an archived topic that was announced in a previous semester.
Hungarian students are also welcome to apply for this topic.
Deep (convolutional) neural networks are often sensitive to the right choice of architectural properties and training parameters, e.g., the number of layers, the size of convolutional filters and filter banks, the learning rate, the dropout probability, etc. Slight differences in such hyper-parameters can translate into profound effects on prediction performance. It is therefore important to find hyper-parameter configurations that are optimal for the model and task at hand. In this topic we explore methods for efficient search of the hyper-parameter space, with the aim of improving the results of existing computer vision models.
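
To illustrate what such a search might look like, the sketch below performs a plain random search over a few common hyper-parameters. It is only a minimal example under assumed names and ranges: the search space and the `evaluate` function are placeholders standing in for training and validating an actual vision model, not part of the topic description itself.

```python
import math
import random

# Hypothetical search space: parameter names and ranges are illustrative assumptions.
SEARCH_SPACE = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -1),   # log-uniform sampling
    "dropout":       lambda: random.uniform(0.0, 0.7),
    "num_layers":    lambda: random.randint(2, 10),
    "filter_size":   lambda: random.choice([3, 5, 7]),
}

def sample_config():
    """Draw one hyper-parameter configuration from the search space."""
    return {name: sampler() for name, sampler in SEARCH_SPACE.items()}

def evaluate(config):
    """Placeholder for training a model with `config` and returning a
    validation score. A real implementation would train a CNN here."""
    # Toy surrogate objective so the script runs end to end.
    return (
        -abs(math.log10(config["learning_rate"]) + 3)   # prefers lr around 1e-3
        - abs(config["dropout"] - 0.5)                   # prefers dropout around 0.5
        - 0.05 * abs(config["num_layers"] - 6)           # prefers roughly 6 layers
    )

def random_search(num_trials=50, seed=0):
    """Evaluate `num_trials` random configurations and keep the best one."""
    random.seed(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(num_trials):
        config = sample_config()
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

if __name__ == "__main__":
    config, score = random_search()
    print("Best configuration found:", config)
    print("Surrogate score:", score)
```

More sample-efficient strategies (e.g., Bayesian optimization or successive halving) follow the same loop structure but choose the next configuration based on the scores observed so far, rather than sampling blindly.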