Contribution on the topic. (#31767)

* Contribution on the topic.

Added some basic information on the concept of Multi Layer Perceptron. Added an image for better understanding of the concept.

* Added extra information.

Check out the following information on MLP.

* Update index.md

* Update index.md
Himadri Sankar Chatterjee
2019-07-20 03:08:33 +05:30
committed by Quincy Larson
parent bfc8d3471c
commit 9c8acd5a45


@ -3,13 +3,12 @@ title: Multi Layer Perceptron
---
## Multi Layer Perceptron
This is a stub. <a href='https://github.com/freecodecamp/guides/tree/master/src/pages/machine-learning/neural-networks/multi-layer-perceptron/index.md' target='_blank' rel='nofollow'>Help our community expand it</a>.
A Multi Layer Perceptron is a type of feed-forward neural network consisting of many neurons. The network is essentially divided into three parts: an Input Layer, the Hidden Layers and the Output Layer. Here is an image of a simple MLP:
<a href='https://github.com/freecodecamp/guides/blob/master/README.md' target='_blank' rel='nofollow'>This quick style guide will help ensure your pull request gets accepted</a>.
![A schematic diagram of a Multi Layer Perceptron](https://www.researchgate.net/profile/Junita_Mohamad-Saleh/publication/257071174/figure/download/fig3/AS:297526545666050@1447947264431/A-schematic-diagram-of-a-Multi-Layer-Perceptron-MLP-neural-network.png "Simple Multi Layer Perceptron")
<!-- The article goes here, in GitHub-flavored Markdown. Feel free to add YouTube videos, images, and CodePen/JSBin embeds -->
Here, you can see that the MLP consists of an Input Layer with 3 neurons, a single Hidden Layer with 4 neurons and finally an Output Layer with 2 neurons. Thus, the network takes three values as input and outputs two values.
The weights and biases of each layer are initialised with random values. Over a number of training passes on the given data, these values are adjusted using backpropagation to attain maximum accuracy in the output.
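For illustration, here is a minimal NumPy sketch of the forward pass through the 3-4-2 network described above. The sigmoid activation, random seed and layer sizes are assumptions for the example, and the backpropagation (training) step is omitted.

```python
# Minimal sketch of the 3-4-2 MLP described above (illustrative assumptions:
# sigmoid activation, NumPy, fixed random seed). Training is not shown.
import numpy as np

rng = np.random.default_rng(0)

# Weights and biases initialised with random values, as described in the text.
W1 = rng.standard_normal((4, 3))   # Input Layer (3) -> Hidden Layer (4)
b1 = rng.standard_normal(4)
W2 = rng.standard_normal((2, 4))   # Hidden Layer (4) -> Output Layer (2)
b2 = rng.standard_normal(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Feed-forward pass: three input values in, two output values out."""
    hidden = sigmoid(W1 @ x + b1)
    output = sigmoid(W2 @ hidden + b2)
    return output

# Three inputs produce two outputs, matching the diagram above.
print(forward(np.array([0.5, -1.2, 3.0])))
```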
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->