A Multi Layer Perceptron (MLP) is a type of feed-forward neural network consisting of many neurons. The network is essentially divided into three parts: an Input Layer, the Hidden Layers and the Output Layer. Here is an image of a simple MLP:
Here, you can see that the MLP consists of an Input Layer with 3 neurons, then a single Hidden Layer with 4 neurons and finally an Output Layer with 2 neurons. Thus, the network essentially takes three values as input and outputs two values.
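To make the figure concrete, here is a minimal sketch of a forward pass through such a 3-4-2 network in NumPy. The tanh activation, the random initial values and the example input are assumptions for illustration, not details taken from the figure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes matching the figure: 3 inputs, 4 hidden neurons, 2 outputs.
W1 = rng.standard_normal((3, 4))   # input -> hidden weights
b1 = np.zeros(4)                   # hidden biases
W2 = rng.standard_normal((4, 2))   # hidden -> output weights
b2 = np.zeros(2)                   # output biases

def forward(x):
    """Propagate a 3-value input through the network to get 2 output values."""
    h = np.tanh(x @ W1 + b1)       # hidden layer with tanh activation (an assumption)
    return h @ W2 + b2             # linear output layer

x = np.array([0.5, -1.0, 2.0])     # example input with three values
print(forward(x))                  # prints the two output values
```

Each layer is just a matrix multiplication plus a bias, followed by a non-linear activation in the hidden layer.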
The weights and biases of each layer are initialised with random values and, over a number of training iterations on the given data, these values are adjusted using backpropagation so that the network's output becomes as accurate as possible.
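Below is a minimal training-loop sketch for the same 3-4-2 network, showing random initialisation, backpropagation computed by hand, and gradient-descent updates of the weights and biases. The toy data, learning rate and mean-squared-error loss are assumptions chosen only to make the example runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random initialisation of weights and biases, as described above.
W1, b1 = rng.standard_normal((3, 4)) * 0.1, np.zeros(4)
W2, b2 = rng.standard_normal((4, 2)) * 0.1, np.zeros(2)

# Toy training data: 100 samples of 3 inputs mapped to 2 target values.
X = rng.standard_normal((100, 3))
Y = rng.standard_normal((100, 2))

lr = 0.05                              # learning rate (an assumption)
for epoch in range(500):
    # Forward pass
    H = np.tanh(X @ W1 + b1)           # hidden activations
    Y_hat = H @ W2 + b2                # network outputs
    loss = np.mean((Y_hat - Y) ** 2)   # mean squared error

    # Backpropagation: apply the chain rule from the loss back to each parameter
    dY = 2 * (Y_hat - Y) / len(X)
    dW2, db2 = H.T @ dY, dY.sum(axis=0)
    dH = dY @ W2.T
    dZ = dH * (1 - H ** 2)             # derivative of tanh
    dW1, db1 = X.T @ dZ, dZ.sum(axis=0)

    # Gradient-descent update: adjust the values to reduce the loss
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    if epoch % 100 == 0:
        print(f"epoch {epoch:3d}  loss {loss:.4f}")
```

Running this, the printed loss falls over the epochs, which is exactly the adjustment process described above: the random starting weights are gradually tuned by backpropagation until the outputs fit the training data as well as possible.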