We briefly mentioned multi-class classification in the last video, but let me be more specific. Neural networks work really well when the problem consists of classifying between two classes. For example, if the model predicts the probability of receiving a gift or not, then the answer comes directly as the output of the neural network. But what happens if we have more classes? Say we want the model to tell us if an image is a duck, a beaver, or a walrus.

Well, one thing we could do is build one neural network to predict whether the image is a duck, then another neural network to predict whether the image is a beaver, and a third neural network to predict whether the image is a walrus. Then we can just pick the answer that gives us the highest probability. But this seems like overkill, right? The first layers of a neural network should be enough to tell us things about the image, and maybe just the last layer should tell us which animal it is. As a matter of fact, as you'll see in the CNN section, this is exactly the case.

So what we need here is to add more nodes to the output layer, one per class, so that each node gives us a score for the image being that animal. Then we take those scores and apply the softmax function that we defined earlier to turn them into well-defined probabilities. This is how we get neural networks to do multi-class classification.
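To make this concrete, here is a minimal sketch of that last step: the output layer produces one raw score per class, and softmax turns those scores into probabilities. The score values below are made up for illustration, not from a trained network.

```python
import math

# Hypothetical raw scores from a three-node output layer,
# one per class: duck, beaver, walrus (example numbers only)
scores = [2.0, 1.0, 0.1]

def softmax(z):
    # Subtract the max score before exponentiating for numerical
    # stability; this does not change the resulting probabilities
    m = max(z)
    exps = [math.exp(s - m) for s in z]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(scores)
print(probs)  # three positive numbers that sum to 1
```

Note that the class with the highest raw score also gets the highest probability, so the prediction is simply the node with the largest softmax output.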