So in the previous session we learned that we can add two linear models to obtain a third model. As a matter of fact, we did even more: we can take a linear combination of two models. So, the first model times a constant, plus the second model times a constant, plus a bias, and that gives us a non-linear model. That looks a lot like perceptrons, where we take a value times a constant, plus another value times a constant, plus a bias, and get a new value. And that's no coincidence. That's actually the building block of neural networks.

So, let's look at an example. Let's say we have a linear model whose equation is 5x1 - 2x2 + 8, represented by this perceptron. And we have another linear model with equation 7x1 - 3x2 - 1, represented by this perceptron over here. Let's draw them nicely here, and let's use another perceptron to combine the two models using the linear equation: seven times the first model, plus five times the second model, minus six. And now the magic happens when we join these together and we get a neural network. We clean it up a bit and we obtain this. All the weights are there. The weights on the left tell us what equations the linear models have, and the weights on the right tell us what linear combination of the two models gives the curved non-linear model on the right. So, whenever you see a neural network like the one on the left, think of what the non-linear boundary defined by that neural network could be.

Now, note that this was drawn using the notation that puts the bias inside the node. It can also be drawn using the notation that keeps the bias as a separate node. Here, what we do is, in every layer, we have a bias unit coming from a node with a one on it. So for example, the 8 on the top node becomes an edge labelled 8 coming from the bias node. We can also see that this neural network uses a sigmoid activation function in the perceptrons.
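The network described above can be sketched in a few lines of code. This is a minimal illustration, not the course's own implementation: it assumes a sigmoid activation at every node, uses the weights from the example (5, -2, 8 and 7, -3, -1 in the hidden layer; 7, 5, -6 in the output layer), and the function name `neural_network` is just a placeholder chosen here.

```python
import math

def sigmoid(z):
    """Sigmoid activation: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neural_network(x1, x2):
    # First linear model: 5*x1 - 2*x2 + 8, passed through a sigmoid
    h1 = sigmoid(5 * x1 - 2 * x2 + 8)
    # Second linear model: 7*x1 - 3*x2 - 1, passed through a sigmoid
    h2 = sigmoid(7 * x1 - 3 * x2 - 1)
    # Output perceptron: 7 times the first model, plus 5 times the
    # second model, minus 6, again through a sigmoid
    return sigmoid(7 * h1 + 5 * h2 - 6)
```

Because the hidden outputs pass through a non-linearity before being combined, the resulting decision boundary is curved rather than a straight line, which is exactly the point of stacking perceptrons into a network.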