10 – PyTorch V2 Part 3 Solution 2 V1

Hi again. So, here’s my solution for the training pass that I had you implement. Here we’re just defining our model like normal, then our negative log-likelihood loss, and using stochastic gradient descent as the optimizer, passing in our model’s parameters. Then here is our training pass. For each batch of images and labels in trainloader, we flatten the images, zero out the gradients using optimizer.zero_grad(), pass the images forward through the model to get the output, calculate the loss from that output, do a backward pass, and then finally, with the gradients computed, take an optimizer step. (A runnable sketch of this loop follows below.)

If I run this and wait a little bit for it to train, we can actually see the loss dropping over time. It starts out fairly high at 1.9, then drops continuously as we train, and after five epochs it’s much lower. If we kept training, our network would learn the data better and better, and the training loss would get even smaller.

Now, with our trained network, we can actually see what the network thinks it’s seeing in these images. Here we can pass in an image, in this case an image of the digit two, and this is what our network predicts. You can see pretty easily that it’s putting most of the probability, most of its prediction, into the class for the digit two. If we try it again and pass in an eight, it predicts eight. So we’ve managed to train our network to make accurate predictions for our digits. (A sketch of this inference check also follows below.)

Next up, you’ll write the code for training a neural network on a more complex dataset, and you’ll be doing the whole thing: defining the model, running the training loop, all of that. Cheers.
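Here is a minimal sketch of the training pass described above. The transcript doesn’t show the exact notebook code, so the architecture (784→128→64→10 with ReLU and a log-softmax output), the learning rate, and names like trainloader are assumptions chosen to match the steps narrated: define the model, use NLLLoss with an SGD optimizer, then flatten, zero gradients, forward, loss, backward, step.

```python
import torch
from torch import nn, optim
from torchvision import datasets, transforms

# Assumed setup: MNIST images normalized to [-1, 1]
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,))])
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True,
                          train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

# A small feedforward classifier ending in log-softmax, so NLLLoss applies
model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))

criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)  # lr is an assumption

epochs = 5
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        # Flatten each 28x28 image into a 784-long vector
        images = images.view(images.shape[0], -1)

        # Zero out gradients accumulated from the previous batch
        optimizer.zero_grad()

        # Forward pass, compute the loss, backward pass, optimizer step
        output = model(images)
        loss = criterion(output, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
    print(f"Training loss: {running_loss/len(trainloader)}")
```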
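And here is a sketch of the inference check at the end, assuming the log-softmax model above: because the network outputs log-probabilities, we exponentiate them to get the per-digit probabilities the transcript describes.

```python
# Grab one image from the loader and flatten it (illustrative, not the notebook's exact code)
images, labels = next(iter(trainloader))
img = images[0].view(1, 784)

# No gradients needed for inference
with torch.no_grad():
    logps = model(img)

# Exponentiate log-probabilities to get class probabilities
ps = torch.exp(logps)
print(ps)                # one probability per digit class 0-9
print(ps.argmax(dim=1))  # the predicted digit
```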
