# 6 – 05 RNN FFNN Reminder B V6 Final

Let’s look at a basic model of an artificial neural network with only a single hidden layer. The inputs are each connected to the neurons in the hidden layer, and the neurons in the hidden layer are each connected to the neurons in the output layer, where each neuron represents a single output. We can look at it as a collection of mathematical functions: each input is connected mathematically to the hidden layer of neurons through a set of weights we need to modify, and each hidden-layer neuron is connected to the output layer in a similar way. There is no limit on the number of inputs, the number of hidden neurons in a layer, or the number of outputs, nor is there any correlation between those numbers, so we can have n inputs, m hidden neurons, and k outputs.

In a closer, even simpler look, we can see that each input is multiplied by its corresponding weight and summed at the next layer’s neuron, together with a bias. The bias is an external parameter of the neuron and can be modeled by adding an external input fixed at a constant value. This entire sum will usually go through an activation function on its way to the next layer or to the output.

But what is our goal? We can look at our system as a black box that has n inputs and k outputs. Our goal is to design the system in such a way that it will give us the correct output y for a specific input x. Our job is to decide what’s inside this black box. We know that we will use an artificial neural network, and we need to train it so that we eventually have a system that yields the correct output for a specific input. Well, correct most of the time. Essentially, what we really want is to find the optimal set of weights connecting the input to the hidden layer, and the optimal set of weights connecting the hidden layer to the output. We will never have a perfect estimation, but we can try to get as close to it as we can. To do that, we need to start a process you’re already familiar with, and that is the training phase.
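The structure described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a definitive implementation: the sizes n, m, k, the sigmoid activation, and the random initialization are all assumptions chosen just to make the weighted-sum-plus-bias pattern concrete.

```python
import numpy as np

# Illustrative sizes: n inputs, m hidden neurons, k outputs (assumed values).
n, m, k = 3, 4, 2

rng = np.random.default_rng(0)
W1 = rng.standard_normal((m, n))  # weights connecting inputs to the hidden layer
b1 = np.zeros(m)                  # hidden-layer biases (the "external fixed input")
W2 = rng.standard_normal((k, m))  # weights connecting the hidden layer to the outputs
b2 = np.zeros(k)                  # output-layer biases

def sigmoid(z):
    # One common choice of activation function (an assumption here).
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # Each hidden neuron multiplies every input by its corresponding weight,
    # sums the results together with a bias, and passes the sum through
    # the activation function.
    h = sigmoid(W1 @ x + b1)
    # The output layer repeats the same pattern on the hidden activations.
    return sigmoid(W2 @ h + b2)

x = np.array([0.5, -1.0, 2.0])  # one example input with n = 3 values
y = forward(x)                  # k = 2 outputs
```

Note that nothing ties n, m, and k to each other: changing any one of them only changes the shapes of the weight matrices.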
So, let’s look at the training phase, where we will find the best set of weights for our system. This phase includes two steps, feedforward and backpropagation, which we repeat as many times as we need until we decide that our system is as good as it can be. In the feedforward step, we calculate the output of the system. That output is compared to the correct output, giving us an indication of the error. There are a few ways to define the error; we will look at it mathematically in a bit. In the backpropagation step, we change the weights to try to minimize the error, and then start the feedforward process all over again. In the next few videos, we will take a closer look at the mathematical calculations in the feedforward and backpropagation steps. We will browse through what you already know, but hopefully also give you a deeper understanding.
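The repeat-until-good-enough loop above can be sketched as follows. This is a hedged illustration under assumed choices: a tiny made-up dataset, mean squared error as the error measure, sigmoid activations, and plain gradient-descent weight updates with an assumed learning rate.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 3))                        # 8 samples, n = 3 inputs
Y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # k = 1 target per sample

n, m, k = 3, 5, 1            # illustrative sizes
W1 = rng.standard_normal((n, m)) * 0.5
b1 = np.zeros(m)
W2 = rng.standard_normal((m, k)) * 0.5
b2 = np.zeros(k)
lr = 0.5                     # assumed learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

errors = []
for step in range(500):
    # Feedforward: calculate the output of the system for every sample.
    H = sigmoid(X @ W1 + b1)
    Yhat = sigmoid(H @ W2 + b2)

    # Compare to the correct output; here the error is mean squared error.
    err = np.mean((Yhat - Y) ** 2)
    errors.append(err)

    # Backpropagation: push the error back through the layers and nudge
    # each set of weights in the direction that reduces the error.
    dYhat = 2 * (Yhat - Y) / len(X) * Yhat * (1 - Yhat)
    dW2 = H.T @ dYhat
    db2 = dYhat.sum(axis=0)
    dH = dYhat @ W2.T * H * (1 - H)
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0)

    W2 -= lr * dW2
    b2 -= lr * db2
    W1 -= lr * dW1
    b1 -= lr * db1
```

After the loop, `errors` traces how the error shrinks as feedforward and backpropagation alternate; we never expect it to reach exactly zero, only to get close.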