Hello everyone and welcome to this lesson on deep learning with PyTorch. Here I’ve built a bunch of Jupyter Notebooks that will lead you through actually writing the code to implement deep learning networks in Python. We’re using the PyTorch framework, which is somewhat newer than TensorFlow and Keras. It’s developed by the Facebook AI Research team and is completely open source. The reason we’ve chosen PyTorch over TensorFlow and Keras is that it’s more coherent with Python itself, with the way you program in Python, but also with the concepts of deep learning. So you’ll see as you work through these notebooks that the concepts you’ve learned about deep learning, like backpropagation, are actually very natural in PyTorch, whereas with TensorFlow and Keras you tend to have to write your code in a way that doesn’t match the conceptual mapping of deep learning. You can also find these notebooks in the classroom, in a workspace below the videos, and you can download the notebooks from GitHub if you’d like to work on them on your own computer. Before we get into the actual code itself, there’s one thing you need to understand: tensors. Tensors are basically a generalization of vectors and matrices. For instance, a vector is a 1-dimensional tensor, basically just a line of values. A matrix is a 2-dimensional tensor, a rectangle where you have rows and you have columns. So it’s numbers arrayed in a 2-dimensional space, because we have an x-coordinate and a y-coordinate. And then something like a color image is a 3-dimensional tensor. If you remember about color images, every pixel can be denoted with an x and a y, but it also has a red, a green, and a blue component. These tensors are the main data structure that you’ll be using in PyTorch.
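To make the dimensionality idea concrete, here’s a minimal sketch; the specific sizes (a length-3 vector, a 3 by 2 matrix, a 3-channel 28 by 28 image) are just illustrative choices, not anything fixed by the lesson:

```python
import torch

# A vector is a 1-dimensional tensor: a line of values.
vector = torch.tensor([1.0, 2.0, 3.0])

# A matrix is a 2-dimensional tensor: rows and columns.
matrix = torch.tensor([[1.0, 2.0],
                       [3.0, 4.0],
                       [5.0, 6.0]])

# A color image is a 3-dimensional tensor: here 3 color channels
# (red, green, blue) for a 28 x 28 pixel image.
image = torch.rand(3, 28, 28)

print(vector.dim(), matrix.dim(), image.dim())  # 1 2 3
```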
It helps a lot if you’re able to visualize them in your mind as you’re working with them, because you’ll be doing a lot of things where you’re looking at shapes and doing linear algebra operations on them. So it’s good to understand what these are and how they’re actually flowing through your network. Now we can start looking at how you use PyTorch to build neural networks. First though, I want you to understand how you actually work with tensors in PyTorch. As I was saying, tensors are the basic data structure in PyTorch, so to build your neural networks, a lot of the time you’re going to be working with these tensors. And so it’s a good idea to understand how they work and to gain a conceptual understanding of them. So, first things first, I’m going to import Numpy and PyTorch; you import torch to use the PyTorch modules. First off, I’m just going to create a random tensor. And we see we just get these random numbers, and it’s 3 by 2. Then we can create another one, the same size. To get the size of a tensor, you just call x.size(), and this gives us 3 by 2 as the size. And this creates a 2-dimensional tensor of all ones with the same size as x. Then we can add these together. This is pretty much what you should be used to from Numpy. In a lot of ways, using PyTorch is very similar to Numpy, which makes the learning curve very gentle, because if you have experience with Numpy, then using PyTorch becomes very natural. For instance, you can index your tensors, so get out the first row, and you can use slices, so if you want all the rows for the second column, that will do it. Now, tensors in PyTorch have two forms of methods. One form creates a new tensor. So z.add(1) just adds the number 1 to our tensor z. And what this does is it creates a new tensor, basically a copy of z with one added to it. And just to check, okay.
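The steps described above, put together, look roughly like this; the variable names x, y, and z follow the narration, but the exact cells in the notebook may differ:

```python
import torch

x = torch.rand(3, 2)          # random 3 x 2 tensor
y = torch.ones(x.size())      # tensor of all ones, same size as x
z = x + y                     # element-wise addition, just like Numpy

print(x.size())               # torch.Size([3, 2])
print(z[0])                   # index out the first row
print(z[:, 1:])               # slice: all rows, second column

new_z = z.add(1)              # returns a NEW tensor; z itself is unchanged
print(torch.equal(new_z, z))  # False
```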
So this is the old tensor, and this is the new tensor that was made. However, pretty much all the methods on these tensors also have an in-place version. What that means is that if we add one again, it looks like we get a new tensor, but it has actually changed our tensor. In-place means it changes the values in memory, the memory of this tensor. So, without an underscore, the method creates a new tensor; with an underscore, it does the operation in place, so you keep the same tensor and it just changes the values in the memory that the tensor is pointing to. Something you’ll be doing really often with PyTorch is looking at the size and shape of your tensors and then reshaping them. Again, to check the size or shape of one of your tensors, you just call .size(), and this tells you that it’s a 3 by 2 tensor. Say we want to resize it to make it 2 by 3: you do z.resize_(2, 3). You’ll notice that we have an underscore here, which means that this reshaping is done in place. So if you look at z again, it is now a 2 by 3 matrix, when originally it was a 3 by 2 matrix. Finally, one of my favorite features of PyTorch is that it’s really easy to convert from a Numpy array into a Torch tensor. This makes it really, really nice when you’re working with data, because most of the time your preprocessing and things like that are going to be done in Numpy. Then you’ll build your network in PyTorch, and then you’ll put the output back into Numpy to do the rest of your analysis, or create figures, or whatever, connecting it up to the rest of your program with Numpy. So, first, I’m just going to create a random array with Numpy: a random 4 by 3 array looks like that. To change this array into a Torch tensor, you call from_numpy and pass it the array, and that gives us a Torch tensor with the same values.
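Here’s a small sketch of the in-place versus non-in-place distinction and the resize_ call described above. Note that resize_ does exist, but in more recent PyTorch versions, .view or .reshape is the more common way to change a tensor’s shape:

```python
import torch

z = torch.ones(3, 2)
z.add_(1)               # trailing underscore: in place, z itself changes
print(z)                # every element is now 2

print(z.size())         # torch.Size([3, 2])
z.resize_(2, 3)         # in-place reshape from 3 x 2 to 2 x 3
print(z.size())         # torch.Size([2, 3])
```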
And then, if we want to take a PyTorch tensor and change it to a Numpy array, you call b.numpy(); the numpy method will return a Numpy array. One thing to know: when you take a Numpy array and change it into a tensor this way, the Numpy array and the tensor share memory. What this means is that if you change the values of your tensor in place, it’s also going to change the values of your Numpy array. For instance, if we multiply in place by two, we get that, and then if we look back at our Numpy array, it has also changed. So, just something to be aware of as you’re using this: if you are linking your Torch tensors and your Numpy arrays, they’re going to be sharing the same memory. Just something to look out for, so you don’t cause yourself bugs. I’ll see you in the next video. Cheers.
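The round trip and the shared-memory gotcha can be sketched like this; the copy at the end is one way to avoid the sharing, assuming you actually want independent data:

```python
import numpy as np
import torch

a = np.random.rand(4, 3)   # random 4 x 3 Numpy array
b = torch.from_numpy(a)    # Torch tensor sharing a's memory
back = b.numpy()           # back to a Numpy array, still shared

b.mul_(2)                  # in-place multiply changes the tensor...
print(a)                   # ...and the Numpy array has changed too

# To avoid the sharing, make an explicit copy instead:
c = torch.tensor(a)        # copies the data
c.mul_(2)                  # a is unaffected this time
```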