3 – PyTorch Script Tracing V1

Welcome to this lesson on using some really cool new features in PyTorch. Specifically, these are features being introduced in PyTorch 1.0. What they allow you to do is take the models you've trained in Python and export them to a format that you can then load up in C++. This is especially useful for deploying your models in production. Production environments tend to require very low latencies and have strict deployment requirements, and for the most part these environments are built with C++. So if you're going to deploy your model into production, for example on a mobile device, in a web app, or in an embedded system like a self-driving car, then typically your production environment is going to be in C++, and you need your model in a format that can be loaded into that C++ environment.

Now, in PyTorch 1.0 we have new capabilities to convert a model that we've built and trained in Python into an intermediate representation, enabled by this new thing called Torch Script. Torch Script is an intermediate representation that can be compiled and serialized by the Torch Script compiler. The idea is that you first develop and build your network in Python, train it there, find the best hyperparameters and so on, that whole workflow. But when you're ready to deploy it, you convert your PyTorch model into the Torch Script representation, and from there it can be compiled into a C++ representation.

There are two ways of converting your PyTorch model to Torch Script. The first one is known as tracing. The idea behind tracing is that you build your model, and then you pass some example data through it, doing a forward pass. Behind the scenes, PyTorch keeps track of all the operations that are being performed on your input tensor. In this way, it can build a graph of the operations performed on your inputs. Then, once it has that graph, it can convert it to Torch Script. To do this, you'll be using a new module called JIT, so torch.jit. JIT stands for Just-In-Time compiler.

We have an example given here. We're going to be using an example model, resnet18, which is a convolutional network used for classifying images. We can get this model from torchvision, so we'll just use it as an example. The model can be anything you've built and trained yourself; the same code applies. Next, we need an example input. The actual values in the tensor don't really matter. It just has to have the same shape as what you would normally pass through your model's forward method. In this case, resnet18 is an image classifier, so it expects images. In general, you would have some number of images per batch. Here we're just going to use one image, three color channels (red, green, and blue), and resnet18 typically accepts images of size 224 by 224. So we can construct a random tensor with the shape 1 by 3 by 224 by 224. Now, with our model and our example input, we use torch.jit.trace. We pass in the model and the example, and it returns a ScriptModule. The ScriptModule is the Torch Script representation.
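Here is a minimal sketch of the tracing workflow described above, using the torchvision resnet18 model and a random example input with the expected shape:

```python
import torch
import torchvision

# Load an example model; any model you have built and trained yourself
# works the same way.
model = torchvision.models.resnet18()

# An example input with the shape the model's forward method expects:
# 1 image per batch, 3 color channels, 224 x 224 pixels.
example = torch.rand(1, 3, 224, 224)

# Trace the model: PyTorch records the operations performed on the
# example tensor and returns a ScriptModule, the Torch Script representation.
traced_script_module = torch.jit.trace(model, example)
```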
But you can use it exactly like a regular PyTorch model: if you pass in a tensor like an image, it's going to give you the output that you would expect from a normal PyTorch module.
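As a quick sketch of what that looks like in practice (the output shape and the filename here are just illustrative):

```python
# The traced module is called like a regular PyTorch module.
output = traced_script_module(torch.rand(1, 3, 224, 224))
print(output.shape)  # torch.Size([1, 1000]) -- class scores from resnet18

# It can also be serialized to a file, which can later be loaded
# from a C++ environment (e.g. via torch::jit::load).
traced_script_module.save("traced_resnet18.pt")
```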
