---
title: Multi Layer Perceptron
---
## Multi Layer Perceptron
A Multi Layer Perceptron (MLP) is a type of feed-forward neural network consisting of many neurons. The network is essentially divided into three parts: an Input Layer, one or more Hidden Layers, and an Output Layer. Here is an image of a simple MLP:

![A simple Multi Layer Perceptron](https://upload.wikimedia.org/wikipedia/commons/thumb/4/46/Colored_neural_network.svg/300px-Colored_neural_network.svg.png)

Here, you can see that the MLP consists of an Input Layer with 3 neurons, a single Hidden Layer with 4 neurons, and an Output Layer with 2 neurons. The network therefore takes three values as input and produces two values as output.

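To make the structure concrete, here is a minimal NumPy sketch (not part of the original article) of the 3-4-2 network shown in the figure. The tanh activation and the random, untrained weights are illustrative assumptions.

```python
import numpy as np

# Layer sizes taken from the figure: 3 inputs -> 4 hidden neurons -> 2 outputs.
# The weights and biases here are random placeholders, not trained values.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input  -> hidden
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)   # hidden -> output

def forward(x):
    """Feed three input values forward through the network."""
    hidden = np.tanh(x @ W1 + b1)   # hidden layer with a tanh activation
    return hidden @ W2 + b2         # two raw output values

print(forward(np.array([0.5, -1.0, 2.0])))  # prints an array of 2 numbers
```
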
The weights and biases of each layer are initialised with random values. Over a number of training iterations on a given dataset, these values are adjusted using backpropagation so that the network's output becomes as accurate as possible.

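As a hedged sketch of that training step, the snippet below uses scikit-learn's `MLPClassifier`, which initialises the weights randomly and adjusts them during `fit()` using gradients computed by backpropagation. The tiny XOR-style dataset and the chosen hyperparameters are illustrative assumptions, not part of the original article.

```python
from sklearn.neural_network import MLPClassifier

# Toy XOR-style data, used only to illustrate the training step.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# Weights start at random values; fit() repeatedly adjusts them using
# gradients computed by backpropagation to reduce the training error.
clf = MLPClassifier(hidden_layer_sizes=(4,), solver="lbfgs",
                    max_iter=2000, random_state=0)
clf.fit(X, y)

print(clf.predict([[1, 0]]))  # ideally predicts [1] for this toy problem
```
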
### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->