Welcome back. Here's my solution for the validation pass. The model, the loss, and the optimizer have all been defined, and I've set it to 30 epochs so we can see how the training loss drops and how the validation loss changes over time. The way this works is that after each epoch, after each pass through the training set, we do a validation pass. That's what this else here means: it's attached to the for loop, and the code under else runs once the for loop completes. As we've seen before, we want to turn off our gradients, so with torch.no_grad, and then we get our images and labels from the test set, pass the images into our model to get our log probabilities, and calculate our loss. Here I'm updating test_loss, which is just a running total of the loss on the test set as we do more of these validation passes, so we can track the test loss over all the epochs we're training. From the log probabilities, we can get the actual probability distributions using torch.exp, since taking the exponential of a log gives you back the probabilities. From that, we call ps.topk(1), which gives us the top_class, the predicted class from the network. Then, checking for equality, we can see where our predicted classes match the true classes from labels. Finally, we calculate the accuracy using torch.mean, after converting equals into a FloatTensor. I'm going to run this and let it train for a while, and then we can see what the training and validation losses actually look like as the network trains.
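The validation pass described here can be sketched as follows. This is a minimal, self-contained version: the small sequential model and the random tensors standing in for the test loader are placeholders for the real network and dataset from the lesson, and the layer sizes are assumptions for illustration.

```python
import torch
from torch import nn

# Stand-in classifier; the real lesson model is larger.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                      nn.Linear(128, 10), nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()

# Synthetic batches standing in for the real testloader.
testloader = [(torch.randn(16, 784), torch.randint(0, 10, (16,)))
              for _ in range(4)]

test_loss = 0
accuracy = 0
with torch.no_grad():                      # turn off gradients for validation
    for images, labels in testloader:
        log_ps = model(images)             # log-probabilities from the network
        test_loss += criterion(log_ps, labels)

        ps = torch.exp(log_ps)             # exp of log-probs -> probabilities
        top_p, top_class = ps.topk(1, dim=1)   # predicted class per example
        equals = top_class == labels.view(*top_class.shape)
        accuracy += torch.mean(equals.type(torch.FloatTensor))

print(f"Test loss: {test_loss/len(testloader):.3f}, "
      f"Accuracy: {accuracy/len(testloader):.3f}")
```

Note the labels.view(*top_class.shape): top_class has shape (batch, 1) while labels has shape (batch,), so without the reshape the equality check would broadcast incorrectly.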
Now that the network is trained, we can see how the validation loss and the training loss actually changed over time as we continued training on more and more data. What we see is that the training loss drops, but the validation loss actually starts going up over time. That's a clear sign of overfitting: our network is getting better and better on the training data, but it's actually starting to get worse on the validation data, because as it learns the training data, it fails to generalize to data outside of it. Okay, so this is what the phenomenon of overfitting looks like. The way we combat it, the way we try to avoid and prevent it, is by using regularization, and specifically dropout. The idea behind dropout is that we randomly drop input units between our layers. What this does is force the network to share information between the weights, and so it increases its ability to generalize to new data. In PyTorch, adding dropout is pretty straightforward. We just use the nn.Dropout module. So we can create our classifier like we had before, using linear transformations for our hidden layers, and then we just add self.dropout = nn.Dropout with some drop probability. In this case it's 20 percent, so that's the probability that a unit will be dropped. The forward method is pretty similar: we pass in x, our input tensor, make sure it's flattened, and then pass this tensor through each of our fully connected layers into a ReLU activation and then through dropout. Our last layer is the output layer, so we don't use dropout there. There's one more thing to note about this: when we're actually doing inference, when we're trying to make predictions with our network, we want to have all of our units available, right?
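The classifier with dropout described above can be sketched like this. The exact layer sizes (784 → 256 → 128 → 64 → 10) are an assumption based on the typical MNIST/Fashion-MNIST setup in this lesson.

```python
import torch
from torch import nn
import torch.nn.functional as F

class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Linear transformations for the hidden layers and output layer.
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 10)
        # Dropout module with a 20% probability of dropping a unit.
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        # Make sure the input tensor is flattened.
        x = x.view(x.shape[0], -1)
        # Each fully connected layer -> ReLU activation -> dropout.
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))
        # Output layer: no dropout here.
        x = F.log_softmax(self.fc4(x), dim=1)
        return x
```

Note that dropout is applied after each hidden activation but not on the output, exactly as described in the transcript.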
So, in this case, we want to turn off dropout when we're doing validation or testing, when we're trying to make predictions. To do that, we call model.eval(). Model.eval will turn off dropout, and this lets us get the most power, the highest performance, out of our network when we're doing inference. Then, to go back to train mode, we use model.train(). So the validation pass looks like this now: first, we turn off our gradients with torch.no_grad, then we set our model to evaluation mode, and then we do our validation pass through the test data. After all this, we want to make sure the model is set back to train mode, so we call model.train(). Okay. Now I'm going to leave it up to you to create your new model. Try adding dropout to it, then try training your model with dropout. Then check out the training and validation progress while using dropout. Cheers.
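The eval/train pattern for the validation pass can be sketched like this. The tiny model and the random tensors standing in for the test loader are placeholders, not the lesson's actual network or data.

```python
import torch
from torch import nn

# Small stand-in model that includes a dropout layer.
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Dropout(p=0.2),
                      nn.Linear(64, 10), nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()

# Synthetic batches standing in for the real testloader.
testloader = [(torch.randn(8, 784), torch.randint(0, 10, (8,)))
              for _ in range(2)]

# ... training pass for the epoch would go here ...

with torch.no_grad():        # turn off gradients for the validation pass
    model.eval()             # evaluation mode: dropout is disabled
    for images, labels in testloader:
        log_ps = model(images)
        loss = criterion(log_ps, labels)

model.train()                # back to train mode: dropout is re-enabled
```

In eval mode the network is deterministic (all units are available), so passing the same input twice gives identical outputs; in train mode, dropout makes repeated passes differ.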