Welcome to my solution for this exercise. Here, I had you calculate the output of our network using matrix multiplication. Remember, we wanted to use matrix multiplication because it's more efficient than doing the two separate operations of multiplication and summation. But to do the matrix multiplication, we actually needed to change the shape of our weights tensor. To do that, call `weights.view(5, 1)`, which reshapes the weights tensor to five rows and one column. If you remember, our features tensor has the shape of one row and five columns, so now we can do the matrix multiplication. It's just one operation that does the multiplication and the sum in one go, and then we again add our bias term, pass the result through the activation function, and we get our output.

As I mentioned before, you can actually stack these simple neural networks up into a multi-layer neural network, and this gives your network greater power to capture patterns and correlations in your data. Now, instead of a simple vector for our weights, we need a matrix. In this case, we have our input data x_1, x_2, x_3, which you can think of as a vector x of our features. Then we have weights that connect the input to each unit in this middle layer, usually called the hidden layer, and here we have two units in the hidden layer. If we have our features, our inputs, as a row vector and multiply it by the first column of the weight matrix, we get the value of h_1. If we take our features and multiply them by the second column, we get the value of h_2. So again, looking at this mathematically with matrices, vectors, and linear algebra, we see that to get the values for the hidden layer, we do a matrix multiplication between our feature vector, x_1 to x_n, and our weight matrix.
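The single-layer calculation described above can be sketched like this. This is a minimal sketch, not the exercise's exact notebook code: the `activation` helper stands in for the sigmoid function defined earlier in the lesson, and the random tensors stand in for the data generated there.

```python
import torch

torch.manual_seed(7)  # reproducible random values

def activation(x):
    # Sigmoid, standing in for the activation defined earlier in the lesson
    return 1 / (1 + torch.exp(-x))

# One sample with five input features, plus matching weights and a bias term
features = torch.randn((1, 5))
weights = torch.randn_like(features)
bias = torch.randn((1, 1))

# Reshape weights to (5, 1) so the shapes line up for matmul:
# (1, 5) @ (5, 1) -> (1, 1). One operation does the multiply and the sum.
output = activation(torch.mm(features, weights.view(5, 1)) + bias)
print(output.shape)
```

Because the sigmoid squashes its input, the single value in `output` always lands between 0 and 1.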
Then, as before, we pass these values through some activation function, or maybe not through an activation function at all, if we just want the raw output of our network. Here, I'm generating some random data: features, random weight matrices, and bias terms that you'll be using to calculate the output of a multi-layer network. What I've built has three input features, two hidden units, and one output unit. You can see that I've listed it here: our features vector has three features, so `n_input` equals three, and we have two hidden units and one output unit. The weight matrices are created using these values.

All right, I'll leave it up to you to calculate the output of this multi-layer network. Again, feel free to use the activation function defined earlier for the hidden layer and the output of your network. Cheers.
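One way to set up and solve the multi-layer exercise is sketched below. It is an illustration, not the notebook's exact code: the names `W1`, `W2`, `B1`, `B2` and the sigmoid `activation` helper are assumptions, chosen to match the shapes described above (three inputs, two hidden units, one output).

```python
import torch

torch.manual_seed(7)  # reproducible random values

def activation(x):
    # Sigmoid, standing in for the activation defined earlier in the lesson
    return 1 / (1 + torch.exp(-x))

# Network sizes: three input features, two hidden units, one output unit
n_input, n_hidden, n_output = 3, 2, 1

features = torch.randn((1, n_input))

# Weight matrices connecting input -> hidden and hidden -> output,
# plus one bias term per layer (names are illustrative)
W1 = torch.randn(n_input, n_hidden)
W2 = torch.randn(n_hidden, n_output)
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))

# Each column of W1 produces one hidden unit: (1, 3) @ (3, 2) -> (1, 2)
h = activation(torch.mm(features, W1) + B1)
# Then the hidden layer feeds the output: (1, 2) @ (2, 1) -> (1, 1)
output = activation(torch.mm(h, W2) + B2)
print(output.shape)
```

Note that the same pattern, multiply by a weight matrix, add the bias, apply the activation, repeats once per layer; stacking more layers just adds more of these steps.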