This is an archived topic that was announced in a previous semester.
Hungarian students are also welcome to apply for this topic.
A deep learning workflow consists of two main phases: a training phase, in which a neural network is optimized to solve a specific task, and an inference phase, in which the trained network is actually used. Training usually happens on very powerful computers equipped with graphics cards, while inference may happen on small, low-power, low-cost embedded devices with dedicated CNN accelerators. Networks are trained using floating-point numbers, but many embedded devices only work with fixed-point representations, so the floating-point weights must be converted to a fixed-point format. A conversion strategy needs to be developed: How many bits should be assigned to the fractional part in each layer? Can a dynamic fixed-point conversion be done? How is the accuracy of the network affected by this?
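To make the questions above concrete, here is a minimal sketch of one possible dynamic fixed-point scheme: for each layer, the number of fractional bits is chosen from the largest weight magnitude so that the values still fit in a signed integer of the target width. The function names and the 8-bit default are illustrative assumptions, not part of the topic description.

```python
import numpy as np

def choose_frac_bits(weights, total_bits=8):
    # Dynamic fixed-point: per layer, spend as many of the available
    # bits on the fraction as the largest magnitude allows.
    max_abs = float(np.max(np.abs(weights)))
    # Integer bits needed to cover max_abs (0 if all weights are < 1.0).
    int_bits = int(np.ceil(np.log2(max_abs))) if max_abs >= 1.0 else 0
    return total_bits - 1 - int_bits  # one bit is reserved for the sign

def quantize(weights, frac_bits, total_bits=8):
    # Round to the fixed-point grid and saturate to the integer range;
    # values at exact powers of two may clip by one LSB.
    scale = 2.0 ** frac_bits
    lo = -(2 ** (total_bits - 1))
    hi = 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(weights * scale), lo, hi)
    return q / scale  # dequantized values, as the fixed-point hardware would see them

# Example: small weights get more fractional bits than large ones.
small = np.array([0.5, -0.25, 0.125])
large = np.array([3.2, -1.7, 0.4])
fb_small = choose_frac_bits(small)   # 7 fractional bits
fb_large = choose_frac_bits(large)   # 5 fractional bits
err = np.max(np.abs(quantize(large, fb_large) - large))
```

The accuracy question can then be studied empirically: quantize each layer with its own fractional-bit count, run the network on a validation set, and compare the classification accuracy against the floating-point baseline.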