diff --git a/guide/english/machine-learning/neural-networks/convolutional-neural-networks/index.md b/guide/english/machine-learning/neural-networks/convolutional-neural-networks/index.md
index 6599399f6d..1417fd7663 100644
--- a/guide/english/machine-learning/neural-networks/convolutional-neural-networks/index.md
+++ b/guide/english/machine-learning/neural-networks/convolutional-neural-networks/index.md
@@ -8,6 +8,10 @@ Convolutional Neural Networks (ConvNets or CNNs) are a category of Neural Networ
 CNNs are biologically inspired models on how mammals visually perceive things. When we see something a layer of neurons are activated in our brain. The very same concept is working here too. The birth of CNN is inspired by Local connection, layering, spatial invariance. Compared to the similar size of feed forward neural network the CNN only require much fewer connections and parameters, hence they are easier to train and also the time consumption is less. CNN is effective for both the high level and low level features in dataset. Another important factor of CNN is the depth of the layers.
+### Pooling
+
+Many effective CNN models insert pooling layers between convolution layers. A pooling layer subsamples its input, effectively making the feature map "fuzzier" between convolutions. One common example is "max pooling", where a kernel (it can be any size, but 2x2 and 3x3 are common choices) slides over the input and passes only the highest value in each window to the next layer. This discards a lot of information, but it reduces the computational load and encourages the network to learn more general relationships (translational invariance), since the size of the representation passed to later layers shrinks. Pooling layers have no associated weights.
+
 ### Suggested links :
 - Stanford CS231n [Lecture 5 Convolutional Neural Networks](https://www.youtube.com/watch?v=bNb2fEVKeEo)
 - Stanford CS231n [Lecture 9 CNN Architectures](https://www.youtube.com/watch?v=DAOcjicFr1Y&t=2384s)
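To make the max-pooling step described in the added section concrete, here is a minimal NumPy sketch; the function name `max_pool_2d`, the 2x2 window and stride-2 defaults, and the sample activation map are illustrative assumptions, not part of the guide article or the patch above.

```python
import numpy as np

def max_pool_2d(feature_map, pool_size=2, stride=2):
    """Max-pool a 2D feature map with a square window.

    Trailing rows/columns that do not fill a complete window are dropped,
    mirroring the common "valid" pooling behaviour.
    """
    h, w = feature_map.shape
    out_h = (h - pool_size) // stride + 1
    out_w = (w - pool_size) // stride + 1
    pooled = np.empty((out_h, out_w), dtype=feature_map.dtype)
    for i in range(out_h):
        for j in range(out_w):
            # Take the pool_size x pool_size window and keep only its maximum.
            window = feature_map[i * stride:i * stride + pool_size,
                                 j * stride:j * stride + pool_size]
            pooled[i, j] = window.max()
    return pooled

# A 4x4 activation map pooled with a 2x2 window and stride 2 becomes 2x2.
activations = np.array([[1, 3, 2, 1],
                        [4, 6, 5, 2],
                        [7, 2, 9, 1],
                        [3, 1, 4, 8]])
print(max_pool_2d(activations))
# [[6 5]
#  [7 9]]
```

The loop is only meant to show that each output value is the maximum of its window and that no learned weights are involved; in practice, frameworks provide optimized pooling layers that operate on whole batches of multi-channel feature maps.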