Pytorch Cheat Sheet


By Uniqtech


Getting started with Pytorch using a cohesive, top-down approach cheatsheet. This cheatsheet should be easier to digest than the official documentation, and it is meant as a transitional tool to get students and beginners reading the documentation sooner. This article is being improved continuously: it is frequently updated and will remain under construction until it is significantly improved. Your feedback is appreciated at hi@uniqtech.co, and mistakes and typos will be promptly corrected.

Big news: we got published on the Medium Machine Learning and Data Science homepage. Please clap ← and comment to show your support. The cheatsheet below is primarily narrative. A PDF/JPEG version of a detailed cheat sheet will be released soon and posted in this article. Updated June 18, 2019: to make this cheat sheet / tutorial more cohesive, we will insert code snippets from a medal-winning Kaggle kernel to illustrate important Pytorch concepts: Malaria Detection with Pytorch, an image classification, computer vision Kaggle kernel [see Source 3 below] by authors devilsknight and vishnu aka qwertypsv.


Pytorch Defined in Its Own Words


Pytorch is “An open source deep learning platform that provides a seamless path from research prototyping to production deployment.”

According to Facebook Research [Source 1], PyTorch is a Python package that provides two high-level features:
Tensor computation (like NumPy) with strong GPU acceleration
Deep neural networks built on a tape-based autograd system
You can reuse your favorite Python packages such as NumPy, SciPy and Cython to extend PyTorch when needed.

Soumith Chintala, Facebook research engineer and creator of Pytorch, shared some interesting facts about Pytorch: autograd used to be written in Python, but the majority of the code was rewritten in C++ (for production readiness). He thinks interesting Pytorch 1.0 features include the hybrid front end and, for example, using the JIT compiler to parse models and get them production ready. Source: Chintala's interview with Udacity.

Key Features


Component | Description [Source 2]

  • torch: a Tensor library like NumPy, with strong GPU support
  • torch.autograd: a tape-based automatic differentiation library that supports all differentiable Tensor operations in torch
  • torch.jit: a compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code
  • torch.nn: a neural networks library deeply integrated with autograd designed for maximum flexibility
  • torch.multiprocessing: Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training
  • torch.utils: DataLoader and other utility functions for convenience

Key features: Hybrid Front-End, Distributed Training, Python-First, Tools & Libraries.

These features are elegantly illustrated with side-by-side code examples on the features page!


The features page in the Pytorch documentation shows elegant code samples to illustrate each feature. Also note the Python 3 shorthand for the dot product / matrix multiplication: the "@" operator.

Hybrid Front-End allows switching between eager mode and (computation) graph mode. Tensorflow used to be graph-mode only, which was considered fast and efficient but very hard to modify, prototype with and research. This gap is closing, since Tensorflow now also offers an eager mode (no more session.run).

Distributed Training: supports GPU and CPU and easy switching between the two. (Tensorflow additionally supports the TPU, its own Tensor Processing Unit.)
Python-First: built for Python developers, so you can easily create neural networks and run deep learning experiments in Pytorch. Tools & Libraries include robust computer vision libraries (convolutional neural networks and pretrained models), NLP and more.

Pytorch also includes great features: torch.tensor instantiation and computation; model definition, validation and scoring; autograd, which automatically calculates gradients and does all the backpropagation for you; transfer-learning-ready pretrained models and datasets (read our super short, effective article on transfer learning); and, let's not forget, GPU acceleration using CUDA.

When Should You Use Pytorch

GPU offerings on AWS and Google Cloud Platform support Pytorch as a first-class citizen.

Pytorch added production and cloud partner support for 1.0 for AWS, Google Cloud Platform, Microsoft Azure. You can now use Pytorch for any deep learning tasks including computer vision and NLP, even in production.

Because it is so easy to use and Pythonic, Senior Data Scientist Stefan Otte said "if you want to have fun, use pytorch".

Pytorch is also backed by Facebook AI Research, so if you want to work on data and ML at Facebook, you should know Pytorch. If you are great with Python and want to be an open source contributor, Pytorch is also the way to go.

Transfer Learning

Transfer learning uses a model trained on one dataset to make predictions on a dataset it wasn't trained on. It can significantly improve training time and accuracy, and it can also help in situations where the available training data is limited. Pytorch has a page dedicated to pre-trained models and their performance on industry-standard benchmark datasets. Read more in our transfer learning with Pytorch article.


Read about all the available models in the Pytorch documentation. Note that the top-1 error and top-5 error, i.e. the performance of the models, are also listed.

Data Science, Academic Research | Presentation Using Jupyter Notebook

Since Python has a huge developer community, it is easier to find Python talent among students and researchers to transition into data science and academic research, and even to write production code, using Pytorch. It eliminates the need to learn another language. Many data analysts and scientists are already familiar with Jupyter Notebook, on which Pytorch runs perfectly. Read more in our article on deploying a Pytorch model to Amazon Web Services SageMaker.

Pytorch is a Deep Learning Framework

Pytorch is a deep learning framework just like Tensorflow, which means: for traditional machine learning models, use another tool for now. Scikit-learn is a Pythonic machine learning framework with an extremely easy-to-use API. The documentation is quite good; each page has an example with code snippets at the bottom. Check it out. Did you know that many Kaggle users, including masters, still use sklearn's train_test_split() to split data and its scalers to pre-process it, and sklearn's Gradient Boosting Tree or Support Vector Machine to benchmark performance? The top-notch, high-performance XGBoost is the one notable thing missing from sklearn.

Tensorflow.js and Tensorflow Lite give Tensorflow wings in the browser and on mobile devices. Apple announced Create ML for Swift in June 2019. Mobile support is not yet native in Pytorch. Do not despair: scroll down to read about ONNX, an exchange format that is supported by almost all of the popular frameworks. Pytorch also has a tutorial on moving a model to mobile, though this road is still a bit more roundabout than Tensorflow's.

Installing Pytorch


For installation tips use the official Pytorch documentation first. https://pytorch.org/get-started/locally/ The above screenshot is an example of available installations.

Using Anaconda to install Pytorch is a great start across all systems, including Windows. We were able to install Pytorch with Anaconda on a gaming computer and start using its CUDA GPU feature right away. Read our Anaconda Cheatsheet here.


Hello World in Pytorch is as easy as launching a Google Colab notebook (yes, right on Google's turf) and running import torch; check out this shared, view-only notebook. Modern hosted data science notebooks like Kaggle Kernels and Google Colab all come with Pytorch pre-installed.

Look Ma: deep learning with no server!

Prefer Jupyter Notebook based tutorials instead? Get started using the Udacity Intro to Pytorch repo, found at the bottom of this article.

Prefer other installation methods? Binaries, building from source, and Docker images are covered in Source 2.

Code Snippets from Source 3


import torch is needed for core Pytorch tasks. You also see the torch neural network module nn, the optimizer module optim, the computer vision module torchvision with its data transformation pipeline transforms, and datasets and existing models being imported. In this case the Kaggle team also used a SubsetRandomSampler; you will see in a later snippet how it feeds into the data transformation and loading pipeline.
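A minimal sketch of what those imports might look like (the exact list in the Source 3 kernel may differ slightly):

```python
# Typical imports for an image classification task in Pytorch.
import numpy as np

import torch
import torch.nn as nn
import torch.optim as optim

import torchvision
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader
from torch.utils.data.sampler import SubsetRandomSampler
```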

Data Transformation


Code Snippets from Source 3

Note that the train, test and validation transformers are similar but not identical. To augment the data, training images are randomly rotated, resized and cropped, and even vertically flipped (in this case a flipped malaria cell does not negatively affect classification results). Because test and validation data should mimic real-world data, no random noise or flipping is introduced there: the images are used just as they are, with a center crop. Note that the output sizes must match across the pipelines. Deep learning is a lot of matrix multiplication, and dimension sizes always matter. In image classification tasks, we typically normalize images according to the pre-trained model or existing dataset we will be using.
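Below is a hedged sketch of such train and test/validation pipelines; the image size, rotation angle and exact augmentations are illustrative, not copied from the kernel:

```python
# Train pipeline augments; test/valid pipeline only resizes and center-crops.
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

train_transforms = transforms.Compose([
    transforms.RandomRotation(30),       # random augmentation, training only
    transforms.RandomResizedCrop(224),
    transforms.RandomVerticalFlip(),     # fine for malaria cells
    transforms.ToTensor(),               # don't forget this one
    normalize,
])

test_transforms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),          # no random noise, just as it is
    transforms.ToTensor(),
    normalize,
])
```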


Why the normalization? And what are those strange numbers? Mean and standard deviation.


Definitely don't forget ToTensor, which transforms everything into a Pytorch tensor.

Why? Because Pytorch expects it. Read this thread by Soumith Chintala (smth), creator of Pytorch [Source 5].

The input image is first loaded into the range [0, 1] and then this normalization is applied to the RGB image, as described in the torchvision documentation (Datasets, Transforms and Models specific to Computer Vision).

All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224.

The images have to be loaded in to a range of [0, 1] and then normalized using mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225]

An example of such normalization can be found in the imagenet example here [Source 4]

Loading Data Using Train and Test Loaders


Code Snippet from Source 3

datasets.ImageFolder and torch.utils.data.DataLoader work together to load the train, valid and test data separately, based on batch_size and the samplers, after the data transformations from the previous section are applied. Each dataset has its own loader.
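A sketch of how that wiring might look, reusing the transform pipelines sketched earlier; the folder paths, batch size and validation fraction are placeholders, and SubsetRandomSampler is used to carve a validation split out of the training folder:

```python
# One ImageFolder per split, one DataLoader per split.
import numpy as np
from torchvision import datasets
from torch.utils.data import DataLoader
from torch.utils.data.sampler import SubsetRandomSampler

batch_size, valid_fraction = 32, 0.2

train_data = datasets.ImageFolder('data/train', transform=train_transforms)
test_data  = datasets.ImageFolder('data/test',  transform=test_transforms)

# Shuffle indices and split them into train and validation subsets.
indices = np.arange(len(train_data))
np.random.shuffle(indices)
split = int(valid_fraction * len(train_data))
train_sampler = SubsetRandomSampler(indices[split:])
valid_sampler = SubsetRandomSampler(indices[:split])

train_loader = DataLoader(train_data, batch_size=batch_size, sampler=train_sampler)
valid_loader = DataLoader(train_data, batch_size=batch_size, sampler=valid_sampler)
test_loader  = DataLoader(test_data,  batch_size=batch_size)
```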


Pytorch Model in a Nutshell


Using Sequential is one easy way to quickly define a model. A named OrderedDict holds all the layers, which are encapsulated in nn.Sequential and then stored in the model.classifier variable. This is a quick way to define the bare bones of a model, though not necessarily the most Pythonic. It helps us illustrate that such a model consists of fully connected Linear layers with shapes specified as (input size, output size) tuples, ReLU activation layers, Dropout with 20% probability, and an output Softmax or LogSoftmax function. Don't worry about that now. All you need to know is that Softmax is usually the last layer of a deep learning model for multi-class classification tasks. The famous ImageNet dataset has 1000 classes, so its Softmax output has 1000 components.

A collection of fully connected layers with ReLU activations in between, some dropouts, and, at the end, another fully connected linear layer that feeds into a Softmax activation is very typical of a vanilla deep learning neural network.
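A sketch of such a classifier built from a named OrderedDict; the layer sizes here are illustrative, not the exact ones from the kernel:

```python
# A quick-and-dirty classifier head: Linear -> ReLU -> Dropout -> Linear -> LogSoftmax.
from collections import OrderedDict
import torch.nn as nn

classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(1024, 500)),      # fully connected (input size, output size)
    ('relu1', nn.ReLU()),
    ('dropout1', nn.Dropout(p=0.2)),    # drop 20% of activations during training
    ('fc2', nn.Linear(500, 2)),
    ('output', nn.LogSoftmax(dim=1)),   # output layer for multi-class classification
]))
# model.classifier = classifier   # attach to a torchvision model if desired
```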


In Pytorch it is easy to view the structure of your model: just use print(model_name). More on that later.

Code Snippet in Source 3

As previously mentioned in the transfer learning part of the why-Pytorch section, we can use a pretrained model such as resnet50, turning gradients off for all layers except the newly added, fully connected last layer.
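A minimal sketch of that pattern, assuming a binary malaria / no-malaria output:

```python
# Load a pretrained ResNet-50, freeze it, and replace the final layer
# with a new 2048 -> 2 fully connected layer for the two classes.
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)

for param in model.parameters():
    param.requires_grad = False                  # freeze all pretrained layers

model.fc = nn.Linear(model.fc.in_features, 2)    # in_features is 2048 for ResNet-50
```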


Note that the last layer is 2048 by 2 because we are classifying just two classes: true or false, malaria or not.

The model variable returns a massive ResNet model structure with our customized last layer.


In the code snippet below, we omitted a lot of details to make the model architecture fit on the screen. If you visit the source 3 kernel link, you will see quite a few bottlenecks and sequential layers.


Here we showed that the last customized layer is indeed 2048 by 2. Note that the layers before it (ending in ReLU and average pooling) output 2048 features, hence 2048 by 2.


Pytorch Training Loop


The training loop is perhaps the most characteristic part of Pytorch as a deep learning framework. In sklearn you can make it go away with fit(), and in Keras/Tensorflow with model.fit(); in Pytorch this part is much more involved.

model.train() tells your model that you are training it. This way, layers like dropout and batchnorm, which behave differently during training and testing, know what is going on and can behave accordingly.

More details: it sets the mode to train (see the source code). You can call either model.eval() or model.train(mode=False) to indicate that you are testing. It is somewhat intuitive to expect the train function to train the model, but it does not do that; it just sets the mode.

model.eval() will notify all your layers that you are in eval mode; that way, batchnorm and dropout layers will work in eval mode instead of training mode.

torch.no_grad() impacts the autograd engine and deactivates it. It will reduce memory usage and speed up computations, but you won't be able to backprop (which you don't want in an eval script). — albanD
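Putting the three together, a minimal sketch (assuming a model and a batch of images already exist):

```python
# train() / eval() only set the mode; no_grad() turns off gradient tracking.
model.train()                      # dropout and batchnorm in training behaviour
# ... run the training loop here ...

model.eval()                       # dropout and batchnorm in evaluation behaviour
with torch.no_grad():              # no gradient tracking: faster, less memory
    outputs = model(images)
```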

Code Snippet Training Loop Source 3


Check CUDA availability, set the criterion to CrossEntropyLoss(), and set the optimizer to Stochastic Gradient Descent, for example, before starting to train. Note that the learning rate starts small, at about lr=0.001.

Note that the training loop typically takes in the number of epochs, the model, the optimizer and the criterion. The running loss has to be zeroed out at the start of every epoch. The big loop iterates through the number of epochs. In each epoch, we put the model into training mode with model.train(), move the model and data to CUDA if needed, call optimizer.zero_grad() to zero out the gradients before starting, predict the output, use the criterion to calculate the loss, call loss.backward() to backpropagate and compute the gradients, and call optimizer.step() to take one optimization step and update the weights. When validating, switch the model to eval mode with model.eval(). Save the model if the validation loss has decreased, and keep track of the lowest validation loss.
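A condensed sketch of such a loop; the hyperparameters, loader names and checkpoint filename are illustrative, and model, train_loader and valid_loader are assumed to come from the earlier sketches:

```python
# Training loop skeleton: train, validate, save the best checkpoint.
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

n_epochs = 10
valid_loss_min = np.inf

for epoch in range(n_epochs):
    model.train()
    train_loss = 0.0                        # zero out the running loss
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()               # zero out gradients from the last step
        output = model(images)
        loss = criterion(output, labels)
        loss.backward()                     # backpropagate: compute gradients
        optimizer.step()                    # update the weights
        train_loss += loss.item()

    model.eval()
    valid_loss = 0.0
    with torch.no_grad():
        for images, labels in valid_loader:
            images, labels = images.to(device), labels.to(device)
            output = model(images)
            valid_loss += criterion(output, labels).item()

    if valid_loss < valid_loss_min:         # save when validation loss improves
        torch.save(model.state_dict(), 'model_best.pth')
        valid_loss_min = valid_loss
```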


Developer Tools

Training Loops:

Train and Forward

Free GPU in the Cloud

Thanks to new cloud technologies like Google Colab, once you set up the notebook you can continue training and monitor it from your mobile devices! Cloud choices include Google Colab, Kaggle and AWS. Local choices include your own laptop and your gaming computer.

We repurposed an MSI NVIDIA GTX 1060 previously used for Assassin's Creed Origins :D If you want to know more, let us know.

Being able to train on a GPU locally has been a major advantage. We were able to iterate through parameter tuning combinations fast, without interruption. Google Colab has a 12-hour timeout as well as a 12GB quota limit. If you are an advanced user, be sure to avoid constantly re-downloading the dataset; instead store it in your Google Drive. After deleting a model in Google Drive, be sure to empty the trash to actually delete it.

Regardless of where the model is trained, if the training loss has gone down close to zero but the validation loss does not decrease (and there's no test dataset), watch out: your model may be overfitting and memorizing the training data. Halt and tune your parameters. Even if you achieve 99% accuracy, your model may not generalize, and it is possible it cannot be used elsewhere. This paragraph is especially relevant for Udacity students doing scholarship challenges and nanodegrees. Be very suspicious of 99% accuracy, but do a brief dance to celebrate first.

Forwarding


The Pytorch forward pass is what actually calculates y = wx + b; before that, we are just declaring placeholder layers.
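A tiny illustration with nn.Linear:

```python
# Defining a layer only declares the computation; calling it runs the forward pass.
import torch
import torch.nn as nn

layer = nn.Linear(3, 1)      # placeholder: weight w (1x3) and bias b are created here
x = torch.randn(5, 3)        # a batch of 5 samples with 3 features each
y = layer(x)                 # forward pass: y = x @ w.t() + b, shape (5, 1)
```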

Useful Pytorch Libraries, Modules and Installations


The torchvision module is quite powerful. It has image processing and data processing code, convolutional neural net (CNN) architectures, and pretrained models like ResNet and VGG.

Transformation

  • transforms.ToTensor() converts a data array to a Pytorch tensor. Advantages include easy use with CUDA and GPU training.
  • Developers can build transformations and transformation pipelines in Pytorch. Pipeline is a fancy way of saying transformations that are chained together and performed sequentially.

Pytorch Convolutional Neural Networks (CNN)


This section is under construction… check back soon.

Use convolutional neural networks for computer vision and image recognition tasks. The transformation and data augmentation APIs are very important, especially when training data is limited. Alternatively, you can also use many of the preloaded model architectures; read our transfer learning section.


A maxpooling layer discards detailed spatial information contained in the original image.
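For illustration only, here is a minimal CNN sketch (not the architecture from the kernel) that alternates convolution and max-pooling; the layer sizes assume 224x224 RGB inputs:

```python
# Convolutions extract local features; max-pooling downsamples and discards fine detail.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc = nn.Linear(32 * 56 * 56, num_classes)   # for 224x224 inputs

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # 224 -> 112
        x = self.pool(F.relu(self.conv2(x)))   # 112 -> 56
        x = x.view(x.size(0), -1)              # flatten before the linear layer
        return self.fc(x)
```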

Inspect Pytorch Models


First, initialize the model.


In Pytorch, use print(model_name) to print out the model and its architecture. You can easily see what the model is all about.
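For example, a sketch using a torchvision VGG model:

```python
# Load a VGG model and print its full layer-by-layer structure.
from torchvision import models

model = models.vgg16(pretrained=True)
print(model)    # prints every named/numbered layer in the features and classifier blocks
```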


Note that each layer is named (numbered, and can be queried by this index). We used ellipses to omit model details so that the VGG model can fit on the screen.

Pro tip: inspecting the model architecture is a must. Transfer learning, in a nutshell, is about modifying the last few classifier layers. Read more in our transfer learning article.

Pro tip: for Tensorflow, use Keras model.summary() to review the entire model architecture. It even outputs the number of parameters and the output dimensions.

Advanced Features


Pytorch Versions

For Udacity projects, not all nanodegrees have moved to Pytorch 1.0, so it is important to use the right version for the right project. You may also need to change the kernel in Jupyter Notebook to use the corresponding version of Python. Udacity projects have moved on to Python 3. Not all projects in the real world have migrated to Python 3, but it's about time to move on from Python 2.

Using CUDA

Though we decided to put CUDA in the advanced section, the reality is that CUDA is very easy to use. Just use it… today! With Anaconda, Pytorch and CUDA, we were able to turn a gaming computer with an NVIDIA graphics card into a home deep learning powerhouse. No configuration needed! It just works, even on a Windows machine. (It is an MSI NVIDIA GTX 1060 previously used for Assassin's Creed Origins :D If you want to know more, let us know.)


One common error when using CUDA with Pytorch is not moving both the model and the data to CUDA, and, when needed, moving both of them back to the CPU. Generally your model and data should always live in the same place.
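A sketch of the pattern, assuming a model and a batch of images and labels already exist:

```python
# Keep model and data on the same device; move results back to CPU when needed.
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)                                  # move the model once

images, labels = images.to(device), labels.to(device)     # move every batch
output = model(images)

predictions = output.argmax(dim=1).cpu()                  # back to CPU for post-processing
```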

Deploying Pytorch in Production: there are two methods of turning existing Pytorch models into production-ready deployments, tracing and scripting. Tracing does not support complex models with control flow in the code. Scripting supports Pytorch code with control flow, but supports only a limited subset of Python.
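A hedged sketch of both routes, assuming model is a trained nn.Module and the example input shape is illustrative:

```python
# Tracing records one example run (no data-dependent control flow);
# scripting compiles the model's Python code into TorchScript.
import torch

example_input = torch.randn(1, 3, 224, 224)

traced_model = torch.jit.trace(model, example_input)    # tracing
scripted_model = torch.jit.script(model)                # scripting

traced_model.save('model_traced.pt')                    # serialized, loadable without Python
```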

Choosing the best Softmax result: in multi-class classification, the Softmax activation function is often used. Pytorch has a dedicated function to extract the top results, i.e. the most likely classes, from Softmax output: torch.topk(input, k, dim) returns the k largest values (and their indices) along the given dimension.
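A sketch, assuming output holds the raw logits produced by the model:

```python
# Pick the most likely class and its probability for each sample in the batch.
import torch

probs = torch.softmax(output, dim=1)        # logits of shape (batch, classes) -> probabilities
top_p, top_class = probs.topk(1, dim=1)     # top probability and its class index per sample
```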


Consume Pytorch Models on Other Platforms


“Export models in the standard ONNX (Open Neural Network Exchange) format for direct access to ONNX-compatible platforms, runtimes, visualizers, and more.” — Pytorch 1.0 Documentation
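A minimal export sketch, assuming model already exists and using an illustrative input shape and filename:

```python
# Export a trained model to the ONNX format.
import torch

dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx")
```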

More on Pytorch Transfer Learning

Using an existing model is equivalent to freezing some of its layers and parameters and not training those. Turn off gradient tracking by setting requires_grad to False, as shown in the ResNet-50 sketch above.


Hyperparameter Tuning

In addition to using the right optimizer and adjusting the learning rate accordingly, you can use a learning rate scheduler to dynamically adjust your learning rate.
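A sketch using the built-in StepLR scheduler; the step size and decay factor are illustrative:

```python
# Decay the learning rate by a factor of 0.1 every 7 epochs.
import torch.optim as optim
from torch.optim import lr_scheduler

optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)

for epoch in range(n_epochs):
    # ... train for one epoch ...
    scheduler.step()    # adjust the learning rate for the next epoch
```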



Model and Check Point Saving

Save and Load Model Checkpoint


Pro tip: did you know you can save and load models locally and in Google Drive? This way you don't have to start from scratch every time. For example, if you have already trained 5 epochs, you can save the weights and train another 5 epochs; now you have done 10 epochs total! Very convenient. The free GPU resources time out and get erased very often. Remember that incremental training is possible.

You can also save a checkpoint and load it locally. You may see both the .pt and .pth extensions.
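A sketch of the save/resume pattern; the checkpoint filename is illustrative:

```python
# Save the model weights, then load them later to resume training or to predict.
import torch

torch.save(model.state_dict(), 'checkpoint.pth')    # save locally or to Google Drive

state_dict = torch.load('checkpoint.pth')            # load it back
model.load_state_dict(state_dict)
```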


Code Snippet from Source 3



Note you must save any checkpoint on Google Colab to your Google Drive, else your data may be erased every 12 hours or sooner. Though the GPU access is free, the storage is temporary on Google Colab.

Making Predictions with Pytorch


Here's what we don't like about Pytorch: making predictions with Pytorch seems to be a bit patched together. Writing your own training loop is easy to customize; although it is harder to write than in Tensorflow, it makes sense to trade a bit of discomfort for customization. But prediction is strange, with some functions that appear to be hacked together. See the code snippets below to see what we mean:

Code Snippets from Source 3:


output.data.max, np.sum(np.squeeze()), cpu().numpy(), .unsqueeze(0), torch.argmax… WTF?! It's super frustrating.

The important takeaway is: we are working with logits converted to probabilities here, and there are tensors that need to be turned into matrices with extra brackets removed. We need to know which is the most likely class, using max or argmax. We need to move tensors back to the CPU, hence cpu(), and a tensor needs to be turned into an ndarray for ease of computation, hence numpy().

Pytorch is a deep learning framework, and deep learning is frequently about matrix math, which is about getting the dimensions right, so squeeze and unsqueeze have to be used to make dimensions match.


This part of the cheatsheet will save you a lot of headache, and you are welcome! np.squeeze() removes the extra set of brackets and the extra dimension: [[1,2,3]] of shape (1, 3) becomes [1,2,3] of shape (3,).

And unsqueeze turns [1] of shape (1,) into [[1]] of shape (1, 1).
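Putting it together, a sketch of the prediction plumbing, assuming a trained model and a single 3-channel image tensor (names are illustrative):

```python
# Add a batch dimension, predict, pick the most likely class, move back to CPU/NumPy.
import torch

image = image.unsqueeze(0)                  # (3, 224, 224) -> (1, 3, 224, 224)
output = model(image)                       # logits, shape (1, num_classes)
probs = torch.softmax(output, dim=1)
pred = torch.argmax(probs, dim=1)           # most likely class
pred = pred.cpu().numpy().squeeze()         # back to CPU, to ndarray, drop extra dims
```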

Further Reading

  • Pytorch Data Science Nanodegree: Deep Learning Intro to Pytorch notebooks and tutorials by Udacity.
  • Source 1: https://research.fb.com/downloads/pytorch/
  • Source 2: the Pytorch GitHub README https://github.com/pytorch/pytorch#getting-started also makes a great cheat sheet.
  • The extremely useful forum, with many active core contributors, supplements the documentation: https://discuss.pytorch.org/
  • Source 3: Malaria Detection with Pytorch Kaggle kernel https://www.kaggle.com/devilsknight/malaria-detection-with-pytorch
  • Source 4: https://github.com/pytorch/examples/blob/97304e232807082c2e7b54c597615dc0ad8f6173/imagenet/main.py#L197-L198
  • Source 5: Pytorch community forum thread on image transform normalization https://discuss.pytorch.org/t/whats-the-range-of-the-input-value-desired-to-use-pretrained-resnet152-and-vgg19/1683


Uniqtech: We write beginner-friendly cheatsheets like this all the time. Follow our profile and our most popular Data Science Bootcamp publication. Check out our one-page articles on Transfer Learning in Pytorch, Pytorch on Amazon SageMaker, and the Anaconda Cheatsheet for Data Science. We are primarily on Medium, a community we love and find strong affinity with. Medium treats its writers well and has a phenomenal reader community. If you would like to find out about our upcoming Data Science Bootcamp course releasing Fall 2019 or our scholarship for high-quality articles, or want to write for us or contribute feedback, please email us at hi@uniqtech.co. Thank you, Medium community!

Key author and contributor Sun.

Original. Reposted with permission.


Tensor creators

Concat and stack

Padding

When something is a leaf

  • Tensors that have requires_grad set to False will be leaf tensors by convention.
  • Tensors that have requires_grad set to True will be leaf tensors if they were created by the user.
  • This means that they are not the result of an operation, and so grad_fn is None.
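A quick illustration of these rules:

```python
# Leaf tensors are created by the user; results of operations are not leaves.
import torch

a = torch.randn(3)                          # requires_grad=False -> leaf
b = torch.randn(3, requires_grad=True)      # created by the user  -> leaf
c = b * 2                                   # result of an operation -> not a leaf

print(a.is_leaf, b.is_leaf, c.is_leaf)      # True True False
print(b.grad_fn, c.grad_fn)                 # None, <MulBackward0 ...>
```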

Assignment consideration


Comparison with NumPy

Finding the max argument


Check if matrix is symmetric


Checking SVD decomposition

Formula: input=U×diag(S)×V.t()

Note: SVD on CPU uses the LAPACK routine; SVD on GPU uses the MAGMA routine.
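A sketch of the reconstruction check using torch.svd (newer releases also provide torch.linalg.svd):

```python
# Check that input = U x diag(S) x V.t() holds for the computed factors.
import torch

a = torch.randn(4, 3)
U, S, V = torch.svd(a)
reconstructed = U @ torch.diag(S) @ V.t()
print(torch.allclose(a, reconstructed, atol=1e-5))   # True
```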

QR decomposition

Formula: input=QR
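A sketch of the check using torch.qr (newer releases also provide torch.linalg.qr):

```python
# Check that input = Q @ R holds for the computed factors.
import torch

a = torch.randn(4, 4)
Q, R = torch.qr(a)
print(torch.allclose(a, Q @ R, atol=1e-5))   # True
```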

LU factorization of a (system of linear eq. solver)

Solving a @ x = b, where LU contains the L and U factors from the LU factorization of a.
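A sketch using torch.lu and torch.lu_solve (newer releases offer torch.linalg.lu_factor and torch.linalg.lu_solve instead):

```python
# Factorize a once, then solve a @ x = b using the packed LU factors.
import torch

a = torch.randn(3, 3)
b = torch.randn(3, 1)

LU, pivots = torch.lu(a)                     # LU holds the packed L and U factors
x = torch.lu_solve(b, LU, pivots)
print(torch.allclose(a @ x, b, atol=1e-5))   # True
```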

nn.Module

Print the module

Check inner modules

The modules() method provides more info than children(): it iterates recursively over all submodules (including the module itself), while children() only yields the direct children.
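A quick illustration with a nested Sequential:

```python
# Print the module, then compare children() (direct only) with modules() (recursive).
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8),
                      nn.Sequential(nn.ReLU(), nn.Linear(8, 2)))
print(model)

print(list(model.children()))   # 2 entries: the Linear and the inner Sequential
print(list(model.modules()))    # 5 entries: model, Linear, inner Sequential, ReLU, Linear
```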

Creating optimizer modules:

Adam, RMSProp, AdaGrad, SGD…

Initialize the optimizer with the parameters it should optimize; internally, the optimizer converts every parameter into param groups.

Creating loss functions NLLLoss, MSELoss, CrossEntropyLoss…
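A sketch of both, assuming model already exists:

```python
# Create an optimizer over the model's parameters and a loss function.
import torch.nn as nn
import torch.optim as optim

optimizer = optim.Adam(model.parameters(), lr=0.001)   # also: SGD, RMSprop, Adagrad...
criterion = nn.CrossEntropyLoss()                      # also: NLLLoss, MSELoss...
```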

Using pre-trained models:

Setting the model in train or eval mode:

Condition based:

Creating the device on GPU:0:
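A sketch that falls back to the CPU when no GPU is available:

```python
# Create a device handle for the first GPU, or the CPU as a fallback.
import torch

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
```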

Save and load a tensor:

Writing PyTorch tensor to a file:
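A sketch covering both headings above; the filename is illustrative:

```python
# Write a tensor to a file with torch.save and read it back with torch.load.
import torch

t = torch.arange(6).reshape(2, 3)
torch.save(t, 'tensor.pt')
t_loaded = torch.load('tensor.pt')
```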

Getting actual size of the model:


Deconvolution

Convolution and max-pooling

addcdiv_


What just happened?


x[0] was 1, and we added 2*y[0]/z[0] to it, which here equals 4. The operation is in place, so x[0] ends up as 5. Note: addcdiv_ does per-element division.
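A sketch reproducing that arithmetic with the modern signature Tensor.addcdiv_(tensor1, tensor2, value=...); the y and z values below are hypothetical, chosen so that y[0]/z[0] = 2:

```python
# In-place: x += value * y / z, element-wise.
import torch

x = torch.tensor([1.0])
y = torch.tensor([4.0])
z = torch.tensor([2.0])

x.addcdiv_(y, z, value=2)    # x[0] = 1 + 2 * 4 / 2 = 5
print(x)                     # tensor([5.])
```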




