Hyperparameter Tuning, Batch Normalization and Programming Frameworks

Hyperparameter Tuning

As discussed earlier, hyperparameters (such as the learning rate, the number of hidden layers and units, and the mini-batch size) control the values that the parameters of a deep network ultimately take on.

It is, therefore, important to set the right values for these hyperparameters. Doing so can be a time-consuming process.

In traditional ML, we had far fewer hyperparameters, which made grid search feasible. In Deep Learning, however, we have a large number of hyperparameters, so we instead search over randomly sampled values. A coarse-to-fine approach may be employed: first search over random values across a wide range, then narrow the search to the region where the most promising values were found.
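A minimal sketch of this idea, assuming for illustration that we are tuning the number of layers and the number of hidden units (the ranges, trial count, and narrowing step below are hypothetical):

import numpy as np

# Coarse search: sample random hyperparameter combinations instead of a grid
for trial in range(20):
    num_layers = np.random.randint(2, 6)        # e.g. 2-5 hidden layers
    hidden_units = np.random.randint(50, 300)   # e.g. 50-299 units per layer
    # ... train a model with these values and record its validation error ...

# Fine search: narrow the ranges around the best-performing region
# (e.g. 3-4 layers, 100-150 units) and sample again within that region.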

Note that we must choose random hyperparameter values on an appropriate scale. For a hyperparameter like the learning rate, sampling uniformly on a log scale is more appropriate than sampling uniformly on a linear scale:

import numpy as np

r = -4 * np.random.rand()  # random exponent, uniformly distributed between -4 and 0
x = 10 ** r                # random value between 10^-4 and 10^0, uniform on a log scale

If we have limited computational resources, we must restrict ourselves to tuning the hyperparameters of a single model over several hours or days. If we have sufficient computational resources, however, we can train many models with different hyperparameter settings in parallel and choose the one that works best.

Batch Normalization

It was earlier discussed that normalizing the inputs could speed up training.

Batch normalization normalizes the z values of each layer, i.e. the values that are passed through the activation function and become the input to the next layer of a neural network. The normalized values are then scaled and shifted by learnable parameters gamma and beta, so each layer can still represent whatever mean and variance work best. This speeds up training.
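A minimal numpy sketch of this computation for one layer's z values on a mini-batch (the function name, shapes, and epsilon value are illustrative; gamma and beta are the learnable scale and shift parameters):

import numpy as np

def batch_norm_forward(Z, gamma, beta, eps=1e-8):
    # Z has shape (units, batch_size): normalize each unit across the mini-batch
    mu = np.mean(Z, axis=1, keepdims=True)
    var = np.var(Z, axis=1, keepdims=True)
    Z_norm = (Z - mu) / np.sqrt(var + eps)
    # gamma and beta let the network choose the mean and variance
    # of the values that are fed into the activation function
    return gamma * Z_norm + beta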

Softmax Regression

Logistic Regression is used for Binary Classification, whereas Softmax Regression can be used for multi-class classification.

Say we have C class labels. Softmax Regression must output C probabilities, one for each class.

So, in the last layer, we use the softmax activation function, which is as follows:
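In standard notation, for a final layer L with C output units:

$$a_i^{[L]} = \frac{e^{\,z_i^{[L]}}}{\sum_{j=1}^{C} e^{\,z_j^{[L]}}}, \qquad i = 1, \dots, C$$

Each a value lies between 0 and 1, and the C values sum to 1, so they can be interpreted as class probabilities.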

The class with the highest a value, i.e. the highest probability, is the predicted class.
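As a small numpy illustration (the z values below are made up):

import numpy as np

z = np.array([2.0, 1.0, 0.1, -1.0])   # example z values for C = 4 classes
t = np.exp(z)
a = t / np.sum(t)                      # softmax probabilities; they sum to 1
predicted_class = np.argmax(a)         # index of the highest probability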

For Softmax Regression, we have the following loss and cost functions:
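With y a one-hot vector of length C and ŷ the softmax output, the loss for a single example and the cost over m training examples are:

$$\mathcal{L}(\hat{y}, y) = -\sum_{j=1}^{C} y_j \log \hat{y}_j$$

$$J = \frac{1}{m} \sum_{i=1}^{m} \mathcal{L}\left(\hat{y}^{(i)}, y^{(i)}\right)$$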

Programming Frameworks

There are several deep learning frameworks that make it easier to apply deep learning, without having to implement everything from scratch. Some of them include:

  • Caffe/Caffe2

  • TensorFlow

  • Torch

  • Keras

  • Theano

  • CNTK

  • DL4J

  • Lasagne

  • MXNet

  • PaddlePaddle
