20 – PyTorch V2 Part 8 Solution V1

Hi everyone, here is my solution for the transfer learning exercise. This one's going to be a little different: I'm going to be typing it out as I go, so you can see my thought process, which is really a combination of everything you've learned in this lesson.

The first thing I'm going to do is write the code in a device-agnostic way so that I can use a GPU if one is available. So I'll say device = torch.device("cuda" if torch.cuda.is_available() else "cpu"). What this does is, if our GPU is available, torch.cuda.is_available() is true and we get cuda; otherwise we get cpu. Now we can just pass device to all our tensors and models, and they'll automatically go to the GPU if we have one.

Next, I'm going to get our pre-trained model. Here I'm actually going to use ResNet. We already imported models from torchvision, so we can look through all the architectures it provides, and ResNet is there. I'm going to use a fairly small one, ResNet-50, with pretrained=True, and that gets us our model. Now if we print the model, it shows all the different operations and layers that are going on. If we scroll down to the end, we see fc: the final fully connected layer that acts as the classifier. It expects 2,048 input features and has 1,000 output features. Remember that this was trained on ImageNet, which has 1,000 different classes of images, but here we're only classifying cat versus dog, so our classifier only needs two output features.

So we can load the model like that, and now I'm going to make sure the model's parameters are frozen, so that they don't get updated while we're training. I'll run this to make sure it works. So now we can load the model and turn off gradients for it.

The next step is to define the new classifier, which is the part we'll actually be training. We can keep it pretty simple. You can define this in a lot of different ways; I'm just using nn.Sequential here. For the first layer, a linear layer, remember we need 2,048 inputs, and let's drop it down to 512. Then add a ReLU layer and a dropout layer. For the output layer we go from 512 to two, followed by log-softmax. I should name this variable classifier rather than model. Okay, that defines our classifier, and now we can attach it to our model: model.fc = classifier. If we look at the model again and scroll down to the bottom, the fully connected layer is now our sequential classifier: a linear operation, ReLU, dropout, another linear transformation, and then log-softmax.

Next I define the loss, my criterion, which is going to be the negative log-likelihood loss. Then I define our optimizer, optim.Adam, using only the parameters from our classifier, which is model.fc here, and set our learning rate. The final thing to do is move the model to whichever device we have available. So now we have the model all set up.
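Putting that setup together, here's a sketch of what the code might look like. The dropout probability and the learning rate are my assumptions, since the video doesn't state the exact values, and note that newer torchvision versions replace pretrained=True with a weights argument.

```python
import torch
from torch import nn, optim
from torchvision import models

# Use the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# ResNet-50 pre-trained on ImageNet
model = models.resnet50(pretrained=True)

# Freeze the pre-trained weights so training only updates the new classifier
for param in model.parameters():
    param.requires_grad = False

# Replace the 1000-class ImageNet head with a two-class (cat vs. dog) head
classifier = nn.Sequential(nn.Linear(2048, 512),
                           nn.ReLU(),
                           nn.Dropout(p=0.2),      # dropout probability: my assumption
                           nn.Linear(512, 2),
                           nn.LogSoftmax(dim=1))
model.fc = classifier

# Negative log-likelihood loss pairs with the LogSoftmax output above
criterion = nn.NLLLoss()

# Optimize only the new classifier's parameters; the learning rate is my assumption
optimizer = optim.Adam(model.fc.parameters(), lr=0.003)

# Move the whole model to whichever device we have
model.to(device)
```

Note that the parameters are frozen before model.fc is replaced, so the new classifier's parameters still have requires_grad=True by default and are the only ones the optimizer will update.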
Now it's time to train it. The first thing I'm going to do is define some variables we'll be using during training. I'm going to set epochs to one. I'll be tracking the number of training steps we do, so I'll set steps to zero. I'll also be tracking our running loss, so set that to zero as well. Finally, we want a variable, print_every, for how many steps to take before we print out the validation loss.

Now we loop through our epochs: for epoch in range(epochs). Then we loop through our data: for images, labels in trainloader, and every time we go through one of these batches, we increment steps. Now that we have our images and our labels, we want to move them over to the GPU if that's available, so images, labels = images.to(device), labels.to(device).

Next we write out the training loop itself. The first thing we need to do is zero our gradients; this is very important, don't forget to do it. Then we get our log probabilities from the model by passing in the images. With the log probabilities and the labels, we get our loss from the criterion, then do a backward pass, and finally take a step with the optimizer. Then we increment our running loss, so we can keep track of our training loss as we go through more and more data. All right, that is the training loop.

Every once in a while, as set by this print_every variable, we actually want to drop out of the training loop and test our network's loss and accuracy on the test dataset. So if steps % print_every == 0, we go into our validation loop. The first thing to do is call model.eval(). This puts our model into evaluation (inference) mode, which turns off dropout, so we can accurately use the network for making predictions; we also set test loss and accuracy variables to zero. Now we get our images and labels from the test data and run the validation pass: we pass the test images through the model to get the log probabilities, get the loss with our criterion, and keep track of it with test_loss += loss.item(). This lets us accumulate the test loss as we go through these validation batches.

Next we want to calculate our accuracy: ps = torch.exp(logps). Remember that our model returns log-softmax, so these are the log probabilities of our classes, and to get the actual probabilities we use torch.exp. Then we get our top probabilities and top classes from ps.topk(1, dim=1), which gives us the single largest value in our probabilities; we need to set the dimension to one so it looks for the top probability along the columns. With the top classes, we can check for equality against our labels, and with that equality tensor we can update our accuracy: once we convert it to a FloatTensor, we can take torch.mean and accumulate the result into the accuracy variable.

All right. So inside this if steps % print_every block, we now have a running loss for our training data, and a test loss and accuracy from passing our test data through the model.
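Here's a sketch of the full loop, including the metric printing that I'll walk through next. It assumes the setup above plus the trainloader and testloader built earlier in the notebook; the torch.no_grad() context is standard practice during validation rather than something called out in the narration, and the exact print format is my own.

```python
import torch

epochs = 1
steps = 0
running_loss = 0
print_every = 5   # print validation metrics every five training steps

for epoch in range(epochs):
    for images, labels in trainloader:
        steps += 1
        # Move the batch to the GPU if one is available
        images, labels = images.to(device), labels.to(device)

        optimizer.zero_grad()            # very important: zero the gradients
        logps = model(images)            # log probabilities (LogSoftmax output)
        loss = criterion(logps, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

        # Every print_every steps, drop out of training and validate
        if steps % print_every == 0:
            model.eval()                 # eval mode turns off dropout
            test_loss = 0
            accuracy = 0
            with torch.no_grad():        # no gradients needed for validation
                for images, labels in testloader:
                    images, labels = images.to(device), labels.to(device)
                    logps = model(images)
                    loss = criterion(logps, labels)
                    test_loss += loss.item()

                    # Log probabilities -> probabilities, then take the
                    # most likely class along the class dimension
                    ps = torch.exp(logps)
                    top_p, top_class = ps.topk(1, dim=1)
                    equality = top_class == labels.view(*top_class.shape)
                    accuracy += torch.mean(equality.type(torch.FloatTensor)).item()

            print(f"Epoch {epoch+1}/{epochs}.. "
                  f"Train loss: {running_loss/print_every:.3f}.. "
                  f"Test loss: {test_loss/len(testloader):.3f}.. "
                  f"Test accuracy: {accuracy/len(testloader):.3f}")
            running_loss = 0
            model.train()                # back to training mode
```

The labels.view(*top_class.shape) call reshapes the labels so the equality comparison lines up element-wise with the (batch_size, 1) shaped top_class tensor.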
So now we can print all of this out, and I'm just going to copy and paste it because it's a lot to type. Basically, we print the epoch, so we can keep track of where we are. Then running_loss divided by print_every: since we've been summing the training loss at every step, dividing by the number of steps gives the average training loss since the last print. Then test_loss divided by len(testloader): len(testloader) tells us how many batches are actually in our test dataset. Since we're summing up the losses for each of our batches, taking the total loss and dividing by the number of batches gives us our average loss. We do the same thing with accuracy: we're summing up the accuracy for each batch, then dividing by the total number of batches, which gives our average accuracy for the test set. Then at the end, we set our running loss back to zero, and we also want to put our model back into training mode.

Great, so that should be the training code, and we'll see if it works. Ah, this should be an if instead of a for. And here, I forgot to transfer my tensors over to the GPU; this happens a lot. Hopefully it will work now.

All right. Even pretty quickly, we see that the test accuracy gets above 95 percent. Remember that we're printing this out every five steps, so this is a total of 15 training batches used to update the model. So we're able to easily fine-tune these classifiers on top of a pre-trained network and get greater than 95 percent accuracy on our dataset.
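One detail in that averaging that's easy to miss: len(testloader) counts batches, not individual images, which is why dividing the summed per-batch losses and accuracies by it gives per-batch averages. A tiny self-contained check (toy tensors, my own example) makes the distinction concrete:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset: 10 samples in batches of 4 -> 3 batches (the last one partial)
data = TensorDataset(torch.randn(10, 3), torch.zeros(10, dtype=torch.long))
loader = DataLoader(data, batch_size=4)

print(len(data))    # 10 -> number of samples
print(len(loader))  # 3  -> number of batches; this is what we divide the
                    #      summed per-batch losses and accuracies by
```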
