Add section on pooling (#32302)
Intuition is that pooling creates "fuzzier" images through subsampling (also known as down-sampling). A good example where reducing information actually makes an algorithm more effective by reducing overfitting.
@@ -8,6 +8,10 @@ Convolutional Neural Networks (ConvNets or CNNs) are a category of Neural Networ
CNNs are biologically inspired models of how mammals visually perceive things: when we see something, a layer of neurons is activated in the brain, and the same concept is at work here. The design of CNNs draws on local connectivity, layering, and spatial invariance. Compared to a feed-forward neural network of similar size, a CNN requires far fewer connections and parameters, so it is easier to train and faster to run. For example, a fully connected layer mapping a 32x32x3 image to 100 hidden units needs 32x32x3x100 ≈ 307,200 weights, while a convolutional layer with 100 filters of size 5x5x3 needs only 5x5x3x100 = 7,500 (plus biases), shared across all image positions. CNNs are effective at capturing both high-level and low-level features in a dataset. Another important factor is the depth of the layers.
### Pooling
Many effective CNN models add pooling layers between convolution layers. A pooling layer subsamples the image in some way, effectively making it "fuzzier" between convolutions. One example is "max pooling," where you run a kernel (any size works; 2x2 and 3x3 are common choices) over the input layer and pass only the highest value in each window to the next layer. This discards a lot of information, but it reduces the computational load and, because the number of parameters is reduced, forces the CNN to learn more general relationships (translational invariance). Pooling layers have no associated weights.
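As a concrete sketch of the idea, here is a minimal NumPy implementation of 2x2 max pooling with stride 2 (the function name `max_pool2d` and the toy feature map are illustrative, not part of any particular library):

```python
import numpy as np

def max_pool2d(image, size=2, stride=2):
    """Max-pool a 2-D array: slide a size x size window with the given
    stride and keep only the largest value in each window."""
    h, w = image.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.empty((out_h, out_w), dtype=image.dtype)
    for i in range(out_h):
        for j in range(out_w):
            window = image[i * stride : i * stride + size,
                           j * stride : j * stride + size]
            out[i, j] = window.max()
    return out

# A 4x4 feature map pooled with a 2x2 kernel becomes 2x2: each output
# cell keeps only the max of one non-overlapping window.
fmap = np.array([[1, 3, 2, 1],
                 [4, 6, 5, 2],
                 [7, 2, 9, 1],
                 [3, 1, 4, 8]])
print(max_pool2d(fmap))  # [[6 5]
                         #  [7 9]]
```

Note how three of the four values in each window are thrown away: the output is a "fuzzier" quarter-size summary, which is exactly the information loss that buys the reduced computation and translational invariance described above.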
### Suggested links:
- Stanford CS231n [Lecture 5 Convolutional Neural Networks](https://www.youtube.com/watch?v=bNb2fEVKeEo)
- Stanford CS231n [Lecture 9 CNN Architectures](https://www.youtube.com/watch?v=DAOcjicFr1Y&t=2384s)