Hyperparameter Tuning, Batch Normalization and Programming Frameworks
Hyperparameter Tuning
As discussed earlier, hyperparameters control how the parameters of a deep network are learned.
It is, therefore, important to set the right values for these hyperparameters. Doing so can be a time-consuming process.
Some hyperparameters include $\alpha$ (the learning rate), $\beta$ (for the momentum algorithm), the number of layers, the number of hidden units, the mini-batch size, learning rate decay, and $\beta_1$, $\beta_2$, $\varepsilon$ for Adam Optimization, etc.
In traditional ML, we had far fewer hyperparameters, allowing us to use grid search. However, in Deep Learning, we have a large number of hyperparameters and must instead perform a search over a random set of values for the hyperparameters. A coarse-to-fine approach may be employed, where we first search over random values and then narrow the search to a region where more suitable values exist.
Note that we must use an appropriate scale while choosing hyperparameters randomly.
For example, to get a set of random values between 0.0001 and 1, i.e. between $10^{-4}$ and $10^{0}$, sample the exponent uniformly rather than sampling the value itself on a linear scale.
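A minimal NumPy sketch of this log-scale sampling (the hidden-unit range is only an illustrative contrast):

```python
import numpy as np

# Sample the learning rate on a log scale: r is uniform in [-4, 0],
# so alpha = 10**r is spread evenly across 10^-4 ... 10^0.
r = -4 * np.random.rand()
alpha = 10 ** r

# By contrast, a hyperparameter like the number of hidden units
# can reasonably be sampled uniformly on a linear scale.
n_hidden = np.random.randint(50, 101)
```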
If we have limited computational resources, we must restrict ourselves to hyperparameter tuning on a single model over several hours/days. However, if we have sufficient computational resources, we can afford to try out different hyperparameter settings on many models trained in parallel, and choose the one that works best.
Batch Normalization
It was earlier discussed that normalizing the inputs could speed up training.
Batch normalization normalizes the z values of each layer, which are then passed through the activation function and become the input to the next layer of the neural network. This speeds up training.
Given some intermediate values $z^{(1)}, \dots, z^{(m)}$ for a layer:

$$\mu = \frac{1}{m} \sum_{i} z^{(i)}$$

$$\sigma^2 = \frac{1}{m} \sum_{i} \left(z^{(i)} - \mu\right)^2$$

$$z^{(i)}_{norm} = \frac{z^{(i)} - \mu}{\sqrt{\sigma^2 + \epsilon}}$$

$$\tilde{z}^{(i)} = \gamma \, z^{(i)}_{norm} + \beta$$

where $\gamma$ and $\beta$ are learnable parameters used to ensure that $\tilde{z}$ does not have to keep the zero mean and unit variance produced by normalization.
(We use $\epsilon$ to avoid division by 0.)
Note that batch normalization is usually applied on mini-batches. When we use batch normalization, we can eliminate the $b$ values for each layer, since $\beta$ takes over the role of the bias. We must, however, also calculate $d\gamma^{[l]}$ and $d\beta^{[l]}$ during backpropagation and update $\gamma^{[l]}$ and $\beta^{[l]}$ as well, while updating the $W$ values for each layer.
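A minimal NumPy sketch of the batch-norm forward pass for one layer during training (the function and variable names, and the epsilon value, are illustrative):

```python
import numpy as np

def batchnorm_forward(Z, gamma, beta, eps=1e-8):
    """Normalize the pre-activations Z of one layer over a mini-batch.

    Z:     (n_units, m) pre-activations for a mini-batch of m examples
    gamma: (n_units, 1) learnable scale
    beta:  (n_units, 1) learnable shift
    """
    mu = np.mean(Z, axis=1, keepdims=True)       # per-unit mean over the batch
    var = np.var(Z, axis=1, keepdims=True)       # per-unit variance over the batch
    Z_norm = (Z - mu) / np.sqrt(var + eps)       # zero mean, unit variance
    Z_tilde = gamma * Z_norm + beta              # allow a different mean/variance
    return Z_tilde, mu, var
```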
At test time, since we have only one test example at a time, we can't calculate a mean and standard deviation. Instead, we must use $\mu$ and $\sigma^2$ estimated using an exponentially weighted average across the mini-batches seen during training.
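A sketch of how those running estimates could be tracked during training and then used at test time (the momentum value 0.9 is an illustrative choice):

```python
import numpy as np

def update_running_stats(running_mu, running_var, mu, var, momentum=0.9):
    # Exponentially weighted averages of the per-batch mean and variance.
    running_mu = momentum * running_mu + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return running_mu, running_var

def batchnorm_test(Z, gamma, beta, running_mu, running_var, eps=1e-8):
    # At test time there is no mini-batch to average over, so normalize
    # with the running estimates collected during training.
    Z_norm = (Z - running_mu) / np.sqrt(running_var + eps)
    return gamma * Z_norm + beta
```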
Softmax Regression
Logistic Regression is used for binary classification, whereas Softmax Regression can be used for multi-class classification.
Say we have C class labels. Softmax Regression must output C probabilities, one for each class.
So, in the last layer, we use the softmax activation function, which is as follows:
First calculate $t = e^{z^{[L]}}$ (element-wise).
Then,

$$a^{[L]}_i = \frac{t_i}{\sum_{j=1}^{C} t_j}$$
The class with the highest $a$ value, i.e. the highest probability, is the predicted class.
For Softmax Regression, we have the following loss and cost functions:

$$\mathcal{L}(\hat{y}, y) = -\sum_{j=1}^{C} y_j \log \hat{y}_j$$

$$J = \frac{1}{m} \sum_{i=1}^{m} \mathcal{L}\left(\hat{y}^{(i)}, y^{(i)}\right)$$
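A minimal NumPy sketch of the softmax activation and the cross-entropy loss for a single example (the function names and example logits are illustrative):

```python
import numpy as np

def softmax(z):
    # z: (C,) pre-activations of the output layer for one example.
    t = np.exp(z - np.max(z))    # subtract the max for numerical stability
    return t / np.sum(t)         # probabilities summing to 1

def cross_entropy_loss(y_hat, y):
    # y: (C,) one-hot true label, y_hat: (C,) predicted probabilities.
    return -np.sum(y * np.log(y_hat + 1e-12))

z = np.array([5.0, 2.0, -1.0, 3.0])   # example logits for C = 4 classes
a = softmax(z)
print(a, np.argmax(a))                # the highest probability gives the predicted class
```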
Programming Frameworks
There are several deep learning frameworks that make it easier to apply deep learning, without having to implement everything from scratch. Some of them include:
Caffe/Caffe2
TensorFlow
Torch
Keras
Theano
CNTK
DL4J
Lasagne
mxnet
PaddlePaddle
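As one illustration of how such a framework removes the need to write backpropagation by hand, here is a minimal TensorFlow 2 sketch that minimizes the toy cost $J(w) = w^2 - 10w + 25$ (the cost function and hyperparameters are illustrative choices):

```python
import tensorflow as tf

w = tf.Variable(0.0, dtype=tf.float32)                    # parameter to optimize
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(1000):
    with tf.GradientTape() as tape:
        # The framework differentiates this cost automatically.
        cost = w ** 2 - 10 * w + 25
    grads = tape.gradient(cost, [w])
    optimizer.apply_gradients(zip(grads, [w]))

print(w.numpy())  # converges towards 5.0, the minimum of the cost
```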