diff --git a/guide/english/machine-learning/neural-networks/multi-layer-perceptron/index.md b/guide/english/machine-learning/neural-networks/multi-layer-perceptron/index.md
index 17713c7f0a..ee766e1355 100644
--- a/guide/english/machine-learning/neural-networks/multi-layer-perceptron/index.md
+++ b/guide/english/machine-learning/neural-networks/multi-layer-perceptron/index.md
@@ -3,13 +3,12 @@ title: Multi Layer Perceptron
 ---
 ## Multi Layer Perceptron
-This is a stub. Help our community expand it.
+A Multi Layer Perceptron (MLP) is a type of feed-forward neural network consisting of many neurons. The network is essentially divided into three parts: an Input Layer, the Hidden Layers and an Output Layer. Here is an image of a simple MLP:
-This quick style guide will help ensure your pull request gets accepted.
+![A schematic diagram of a Multi Layer Perceptron](https://www.researchgate.net/profile/Junita_Mohamad-Saleh/publication/257071174/figure/download/fig3/AS:297526545666050@1447947264431/A-schematic-diagram-of-a-Multi-Layer-Perceptron-MLP-neural-network.png "Simple Multi Layer Perceptron")
-
+Here, you can see that the MLP consists of an Input Layer with 3 neurons, a single Hidden Layer with 4 neurons and finally an Output Layer with 2 neurons. Thus, the network takes three values as input and produces two values as output.
+The weights and biases of each layer are initialised with random values; over a number of training iterations on the given data, these values are adjusted using backpropagation to maximise the accuracy of the output.
-#### More Information:
+### More Information:
-
-
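
To make the 3-4-2 architecture described in the added text concrete, here is a minimal sketch of a forward pass through such a network. It assumes NumPy; the sigmoid activation, the random initialisation scheme, and all variable names are illustrative choices rather than part of the guide or of any particular library.

```python
import numpy as np

# Illustrative 3-4-2 MLP: 3 input neurons, one hidden layer of 4 neurons,
# 2 output neurons. Weights and biases start as random values; training
# with backpropagation would then adjust them.
rng = np.random.default_rng(0)

W1 = rng.normal(size=(3, 4))   # input -> hidden weights
b1 = np.zeros(4)               # hidden layer biases
W2 = rng.normal(size=(4, 2))   # hidden -> output weights
b2 = np.zeros(2)               # output layer biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Propagate a 3-value input through the network, returning 2 output values."""
    hidden = sigmoid(x @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

print(forward(np.array([0.5, -1.0, 2.0])))  # two output values
```

Training would then repeatedly run this forward pass, compare the outputs with the target values, and use backpropagation to adjust `W1`, `b1`, `W2` and `b2`, as the added section describes.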