# Classic Networks

## LeNet-5

* It was developed by Yann LeCun et al. in 1998
* The goal of this network was to classify handwritten digits
* It was trained on 32x32x1 grayscale images

  ![](/files/-M5-0VF8lA7F-wL0A4jB)
* This net is smaller compared to today's standards; it had about 60K parameters
* It is interesting to note that as the height and width of the image decreased across the layers, the number of channels increased
* Also, the net applied a sigmoid/tanh non-linearity after its pooling layers, a design choice that is no longer used today
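The "about 60K parameters" figure can be verified with a back-of-the-envelope count. The sketch below assumes the commonly taught modern retelling of LeNet-5 (two 5x5 conv layers with 6 and 16 filters, then fully connected layers of 120, 84, and 10 units); the original 1998 paper's partial C3 connectivity and RBF output layer are ignored, which is why the count differs slightly from the paper's.

```python
# Rough parameter count for the simplified LeNet-5 variant described above.
# Assumption: conv(5x5, 6) -> pool -> conv(5x5, 16) -> pool -> FC(120) ->
# FC(84) -> output(10), as taught in the course.

def conv_params(f, c_prev, n_filters):
    """Each filter has f*f*c_prev weights plus one bias."""
    return (f * f * c_prev + 1) * n_filters

def fc_params(n_in, n_out):
    """Fully connected layer: weight matrix plus one bias per unit."""
    return n_in * n_out + n_out

total = (
    conv_params(5, 1, 6)           # CONV1: 5x5, 6 filters on a 32x32x1 input
    + conv_params(5, 6, 16)        # CONV2: 5x5, 16 filters
    + fc_params(5 * 5 * 16, 120)   # FC1: flattened 5x5x16 volume -> 120 units
    + fc_params(120, 84)           # FC2: 120 -> 84 units
    + fc_params(84, 10)            # output layer: 10 digit classes
)
print(total)  # 61706, i.e. "about 60K"
```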

## AlexNet

![](/files/-M5-0VFAYbqP0XbBlIIy)

* AlexNet was developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton in 2012
* It is much bigger than LeNet-5 and has about 60M parameters
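Most of AlexNet's size comes from its large fully connected layers, but its conv layers illustrate how spatial dimensions shrink from layer to layer. A minimal sketch of the standard output-size formula, applied to AlexNet's first stage (assuming the commonly used 227x227x3 input convention):

```python
def conv_output_size(n, f, p, s):
    """Spatial output size of a conv or pooling layer:
    floor((n + 2p - f) / s) + 1, for input size n, filter size f,
    padding p, and stride s."""
    return (n + 2 * p - f) // s + 1

# AlexNet's first conv layer: 227x227 input, 11x11 filters, stride 4, no padding
print(conv_output_size(227, 11, 0, 4))  # 55 -> a 55x55x96 volume

# followed by 3x3 max-pooling with stride 2
print(conv_output_size(55, 3, 0, 2))    # 27 -> a 27x27x96 volume
```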

## VGG-16

* It was developed by K. Simonyan and A. Zisserman in 2014
* Instead of having many different hyperparameters to tune, this model used a fixed, uniform layer configuration
  * All its convolutional layers had 3x3 filters with stride 1 and "same" padding
  * All its max-pooling layers were 2x2 and had stride 2
* Despite this simple design, it is a large network, with about 138M parameters
* Its architecture:

  ![](/files/-M5-0VFCfVh384rHLyOh)
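The fixed configuration makes VGG-16's activation shapes easy to trace by hand: 3x3 "same"-padded convolutions preserve height and width, and each 2x2 stride-2 max-pool halves them. A minimal sketch of this, assuming the standard VGG-16 conv stack (configuration "D" from the paper) and a 224x224x3 input:

```python
# VGG-16 conv stack: integers are conv layers ("same" padding, so H and W
# are unchanged; the integer is the new channel count), "pool" is a 2x2
# stride-2 max-pool that halves H and W.
VGG16_CONV_STACK = [64, 64, "pool",
                    128, 128, "pool",
                    256, 256, 256, "pool",
                    512, 512, 512, "pool",
                    512, 512, 512, "pool"]

size, channels = 224, 3  # 224x224x3 RGB input
for layer in VGG16_CONV_STACK:
    if layer == "pool":
        size //= 2          # pooling halves the spatial dimensions
    else:
        channels = layer    # "same" conv only changes the channel count

print(size, size, channels)  # 7 7 512
```

The final 7x7x512 volume is then flattened and fed into the fully connected layers, matching the pattern noted for LeNet-5: spatial dimensions shrink across the network while the channel count grows.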


