Training a Neuron
Batch training: training uses the full batch of data, so each time the weights are updated the gradient is computed over the entire dataset. This is only possible when all the data is already available. One full pass through the dataset constitutes an epoch.
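As a rough sketch of the idea (the notes do not give an implementation; the sigmoid activation, learning rate, and names such as `batch_train` are assumptions here), batch gradient descent for a single neuron could look like:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def batch_train(X, y, lr=0.1, epochs=100):
    """Batch training: every weight update uses the gradient over the whole dataset."""
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    b = 0.0
    for _ in range(epochs):           # one epoch = one pass over the full dataset
        y_hat = sigmoid(X @ w + b)    # predictions for all examples at once
        error = y_hat - y
        grad_w = X.T @ error / n_samples   # gradient averaged over the entire dataset
        grad_b = error.mean()
        w -= lr * grad_w              # a single update per epoch, using all the data
        b -= lr * grad_b
    return w, b
```

Note that the weights move only once per pass through the data, which is why all of it must be on hand before training starts.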
Online (stochastic) training: train using one example at a time and update the weights based on that example alone. Because the weights are adjusted after every example rather than after a full pass, this is often more efficient than batch training, and it lets the neuron learn as data arrives, i.e. in an on-line fashion.
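A matching sketch for the online case, under the same assumptions as above (sigmoid neuron, hypothetical name `online_train`), updates the weights immediately after each example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_train(X, y, lr=0.1, epochs=10):
    """Online (stochastic) training: update the weights after every single example."""
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    b = 0.0
    for _ in range(epochs):
        for i in np.random.permutation(n_samples):  # visit examples one at a time
            y_hat = sigmoid(X[i] @ w + b)           # prediction for this example only
            error = y_hat - y[i]
            w -= lr * error * X[i]                  # immediate update from one example
            b -= lr * error
    return w, b
```

The trade-off is noisier updates in exchange for many more of them per pass, and the loop works just as well if the examples come from a stream instead of a stored dataset.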