Multilayer perceptron (MLP)
The multilayer perceptron (MLP) was originally introduced in [Rosenblatt, 1958]. It is the simplest type of feedforward artificial neural network, consisting of multiple layers of interconnected artificial neurons (perceptrons).
An MLP consists of neurons, and each neuron holds a number. In the picture above we can see an input layer of neurons \(x_1, \ldots, x_n\) and one neuron \(z\) of the output layer. To calculate \(z\) one needs to apply two operations:
- linear transformation
- activation function
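A minimal sketch of these two operations for a single output neuron. The bias term and the sigmoid activation are illustrative assumptions, not fixed by the text above:

```python
import math

def neuron(x, w, b):
    # linear transformation: weighted sum of the inputs plus a bias
    # (the bias term is a common convention, assumed here)
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    # activation function: here a sigmoid, squashing s into (0, 1)
    return 1.0 / (1.0 + math.exp(-s))

z = neuron([1.0, 2.0], [0.5, -0.25], 0.0)  # s = 0.5 - 0.5 = 0, sigmoid(0) = 0.5
```

Any other activation (ReLU, tanh, …) can be substituted for the sigmoid without changing the structure of the computation.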
If there are several neurons \(z_1, \ldots, z_m\) in the output layer, then the linear transformation between the layers can be written as

\[
    z_j = \sum\limits_{i=1}^n w_{ij} x_i, \quad j = 1, \ldots, m.
\]
Question
Denote \(\boldsymbol x^{\mathsf T} = (x_1, \ldots, x_n)\), \(\boldsymbol z^{\mathsf T} = (z_1, \ldots, z_m)\), \(\boldsymbol W = (w_{ij})\). How can (35) be rewritten in matrix form?
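As a numerical sanity check, the component-wise sum can be compared with a matrix-vector product. The convention assumed here is that \(w_{ij}\) connects input \(i\) to output \(j\), so \(\boldsymbol W\) has shape \(n \times m\); under a different index convention the transpose disappears:

```python
import numpy as np

n, m = 3, 2
rng = np.random.default_rng(0)
x = rng.standard_normal(n)        # input neurons x_1, ..., x_n
W = rng.standard_normal((n, m))   # weights w_ij: input i -> output j (assumed convention)

# component-wise: z_j = sum_i w_ij * x_i
z_sum = np.array([sum(W[i, j] * x[i] for i in range(n)) for j in range(m)])

# matrix form under this convention: z = W^T x
z_mat = W.T @ x

print(np.allclose(z_sum, z_mat))  # True
```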