5 – PyTorch V2 Part 2 V1

Hello everyone and welcome back. So, in this notebook and series of videos, I’m going to be showing you a more powerful way to build neural networks in PyTorch. In the last notebook, you saw how you can calculate the output of a network using tensors and matrix multiplication. But PyTorch has this nice module, nn, that has a lot of classes, methods, and functions that allow us to build large neural networks in a very efficient way. To show you how this works, we’re going to be using a dataset called MNIST. MNIST is a whole bunch of grayscale handwritten digits: 0, 1, 2, 3, 4 and so on through 9. Each of these images is 28 by 28 pixels, and the goal is to identify what the number in each image is. So, the dataset consists of these images, each labeled with the digit that appears in it. Ones are labeled one, twos are labeled two, and so on. What we can do is show our network an image along with the correct label, and it learns how to determine what the number in the image is.

This dataset is available through the torchvision package. This is a package that sits alongside PyTorch and provides a lot of nice utilities, like datasets and models, for computer vision problems. We can run this cell to download and load the MNIST dataset. What it does is give us back an object which I’m calling trainloader. With this trainloader, we can turn it into an iterator with iter and start getting data out of it, or we can just use it in a for loop, getting our images and labels out of this generator with for image, label in trainloader. One thing to notice is that when I created the trainloader, I set the batch size to 64. What that means is that every time we get a set of images and labels out, we’re actually getting 64 images from our data loader.
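The loading and iteration pattern described above can be sketched roughly like this. In the actual notebook, trainloader is built from torchvision’s datasets.MNIST; to keep this snippet self-contained and runnable offline, a random stand-in dataset with MNIST’s shapes is used instead, so the tensor sizes and dataset contents here are assumptions:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the real MNIST download (datasets.MNIST from torchvision):
# 256 fake grayscale 28x28 images with integer labels 0-9.
fake_images = torch.randn(256, 1, 28, 28)
fake_labels = torch.randint(0, 10, (256,))
trainset = TensorDataset(fake_images, fake_labels)

# batch_size=64 means each batch yields 64 images and 64 labels
trainloader = DataLoader(trainset, batch_size=64, shuffle=True)

# Option 1: turn the loader into an iterator and grab one batch
images, labels = next(iter(trainloader))
print(images.shape)  # torch.Size([64, 1, 28, 28])
print(labels.shape)  # torch.Size([64])

# Option 2: loop over the loader directly
for images, labels in trainloader:
    break  # each batch has the same shapes as above
```

With the real dataset, the DataLoader works exactly the same way; only the dataset object passed in changes.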
So, then if you look at the shape of these images, we’ll see that they are 64 by 1 by 28 by 28. That’s 64 images, then one color channel (so it’s grayscale), and then 28 by 28 pixels, and we can see that here. Our labels have a shape of 64, so it’s just a vector of 64 elements with a label for each of our images, and we can see what one of these images looks like. This is a nice number four.

What we’re going to do here is build a multi-layer neural network using the methods that we saw before. By that I mean you’re going to initialize some weight matrices and some bias vectors and use those to calculate the output of this multi-layer network. Specifically, we want to build this network with 784 input units, 256 hidden units, and 10 output units, one output unit for each of our classes. The 784 input units come from the fact that this type of network, called a fully connected network or a dense network, takes its inputs as one vector. Our images are 28 by 28, but we want to put a vector into our network, so we need to convert each 28 by 28 image into a vector, and 784 is 28 times 28. When we take a 28 by 28 image and flatten it into a vector, it’s going to be 784 elements long. So, what we need to do is take each of our batches, which is 64 by 1 by 28 by 28, and convert it into another tensor with shape 64 by 784. This is going to be the tensor that’s the input to our network. So, go and give this a shot. Again, build the network with 784 input units, 256 hidden units, and 10 output units, and you’re going to be generating your own random initial weight and bias matrices. Cheers.
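One possible solution to the exercise above can be sketched as follows. This is a minimal version under a few assumptions: a random batch stands in for the real MNIST images, the weights are drawn from a standard normal distribution, and a sigmoid activation is used on the hidden layer (as in the earlier tensor-based notebook); your own choices may differ:

```python
import torch

def activation(x):
    """Sigmoid activation, as used in the previous notebook."""
    return 1 / (1 + torch.exp(-x))

# Stand-in for one batch from the trainloader: 64 grayscale 28x28 images
images = torch.randn(64, 1, 28, 28)

# Flatten each 28x28 image into a 784-element vector: (64, 1, 28, 28) -> (64, 784)
inputs = images.view(images.shape[0], -1)

# Randomly initialized parameters for a 784 -> 256 -> 10 network
W1 = torch.randn(784, 256)
B1 = torch.randn(256)
W2 = torch.randn(256, 10)
B2 = torch.randn(10)

# Forward pass with matrix multiplication, as in the last notebook
h = activation(torch.mm(inputs, W1) + B1)  # hidden layer: shape (64, 256)
out = torch.mm(h, W2) + B2                 # output layer: shape (64, 10)
print(out.shape)  # torch.Size([64, 10])
```

The key step is the view call: it reshapes the batch without copying data, and the -1 tells PyTorch to infer the 784 from the remaining dimensions.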
