Points on PyTorch

Today, December 8th, 2018, PyTorch 1.0 stable was released. It is a milestone, and I'd like to keep notes as I learn and use PyTorch. The material mainly comes from the PyTorch official tutorial and Intro to Deep Learning with PyTorch on Udacity.

Tensors

Resizing the tensor

There are a few options to use: .reshape(), .resize_() and .view().

  • w.reshape(a, b) will return a new tensor with the same data as w with size (a, b). Sometimes it returns a view, and sometimes a clone, i.e., it copies the data to another part of memory.
  • w.resize_(a, b) returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
  • w.view(a, b) will return a new tensor with the same data as w with size (a, b); it throws an error if the new shape is incompatible with the total number of elements.

The above three methods are introduced in Intro to Deep Learning with PyTorch. The PyTorch official tutorial only introduces w.view(), so I generally use w.view() for resizing. A small sketch comparing them is below.
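A minimal sketch of the differences, with values of my own choosing:

import torch

w = torch.arange(6)    # tensor([0, 1, 2, 3, 4, 5])
v = w.view(2, 3)       # same data viewed with shape (2, 3)
r = w.reshape(3, 2)    # a view when possible, otherwise a copy
w.resize_(2, 2)        # in place; the last two elements are dropped
print(w)               # tensor([[0, 1], [2, 3]])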

Convenience of -1

When resizing a tensor, -1 is helpful when every dimension but one is known: PyTorch infers the remaining dimension from the other dimensions and the total number of elements.

E.g.,

import torch

x = torch.randn(4, 4)
y = x.view(-1, 8)

The size of y is torch.Size([2, 8]), just as we want.

In-place operation

An in-place operation directly changes the content of a given tensor without making a copy. In-place operations in PyTorch are always suffixed with a _, like .add_(). The .resize_() mentioned above is also an in-place operation.
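A quick sketch of the contrast (values are my own):

import torch

x = torch.ones(3)
x.add_(1)       # in place: x is now tensor([2., 2., 2.])
y = x.add(1)    # out of place: returns a new tensor, x is unchanged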

NumPy to Torch and back

PyTorch has a great feature for converting between NumPy arrays and Torch tensors. To create a tensor from a NumPy array, use torch.from_numpy(). To convert a tensor to a NumPy array, use the .numpy() method.

import numpy as np
import torch

a = np.random.rand(4, 3)
b = torch.from_numpy(a)
b.numpy()

The memory is shared between the NumPy array and the Torch tensor, so changing one in place also changes the other.
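A short sketch of the sharing, continuing the snippet above:

# Multiply the NumPy array in place; the tensor sees the change
np.multiply(a, 2, out=a)
print(b)   # values are doubled

# An in-place op on the tensor changes the array as well
b.mul_(2)
print(a)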

Neural Network

Autograd

The autograd package provides automatic differentiation for all operations on Tensors. If the requires_grad attribute of a torch.Tensor is set to True, it starts to track all operations on it. When you have finished your computation, you can call .backward() and have all the gradients computed automatically. The gradient for this tensor will be accumulated into the .grad attribute.
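A minimal sketch (the tensors and values are my own):

import torch

x = torch.ones(2, 2, requires_grad=True)
y = (x * 3).sum()   # every element of x contributes 3 * x_ij to y
y.backward()        # compute the gradients of y with respect to x
print(x.grad)       # tensor([[3., 3.], [3., 3.]]), accumulated in .grad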

For more information, refer to the autograd tutorial and the autograd docs.

Constructing Neural Networks

import torch.nn as nn
import torch.nn.functional as F

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        # Hidden layer: 784 inputs (a flattened 28x28 image) to 256 units
        self.hidden = nn.Linear(784, 256)
        # Output layer: 256 units to 10 classes
        self.output = nn.Linear(256, 10)

    def forward(self, x):
        # Sigmoid activation on the hidden layer, softmax over the classes
        x = F.sigmoid(self.hidden(x))
        x = F.softmax(self.output(x), dim=1)
        return x

It is mandatory to inherit from nn.Module when creating a class for our network. The name of the class itself can be anything.

PyTorch networks created with nn.Module must have a forward method defined. It takes in a tensor x and passes it through the operations you defined in the __init__ method. The backward function (where gradients are computed) is automatically defined for you using autograd.
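A quick usage sketch, assuming a hypothetical batch of 64 flattened 28x28 images:

import torch

model = Network()
images = torch.randn(64, 784)   # fake batch: 64 flattened images
probs = model(images)           # calling the model runs forward()
print(probs.shape)              # torch.Size([64, 10])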

The course also mentions another way to build networks: nn.Sequential (see the doc).
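For reference, a sketch of the same network built with nn.Sequential, using the layer sizes from the class above:

import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.Sigmoid(),
    nn.Linear(256, 10),
    nn.Softmax(dim=1),
)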