

slide-1
SLIDE 1

EE-559 – Deep learning

1b. PyTorch Tensors

François Fleuret
https://fleuret.org/dlc/

[version of: June 14, 2018]

ÉCOLE POLYTECHNIQUE FÉDÉRALE DE LAUSANNE

slide-2
SLIDE 2

PyTorch’s tensors

François Fleuret EE-559 – Deep learning / 1b. PyTorch Tensors 2 / 37

slide-3
SLIDE 3

A tensor is a generalized matrix, a finite table of numerical values indexed along several discrete dimensions.

  • A 1d tensor is a vector (e.g. a sound sample),
  • A 2d tensor is a matrix (e.g. a grayscale image),
  • A 3d tensor is a vector of identically sized matrices (e.g. a multi-channel image),
  • A 4d tensor is a matrix of identically sized matrices (e.g. a sequence of multi-channel images),
  • etc.

Tensors are used to encode the signal to process, but also the internal states and parameters of the “neural networks”.

Manipulating data through this constrained structure allows CPUs and GPUs to run at peak performance.

Compounded data structures can represent more diverse data types.
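As an illustrative aside (plain Python, not from the slides), the “finite table indexed along several discrete dimensions” can be modeled with nested lists, and a small helper — the name `shape` is made up here — recovers the list of sizes:

```python
# A d-dimensional tensor can be modeled as d levels of nested lists,
# all identically sized along each level.

def shape(t):
    """Return the list of sizes of a nested-list 'tensor'."""
    s = []
    while isinstance(t, list):
        s.append(len(t))
        t = t[0] if t else None
    return s

sound = [0.0, 0.1, -0.2]        # 1d: a sound sample
image = [[0, 255], [128, 64]]   # 2d: a grayscale image
rgb   = [image, image, image]   # 3d: a multi-channel image

print(shape(sound))  # [3]
print(shape(image))  # [2, 2]
print(shape(rgb))    # [3, 2, 2]
```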

slide-8
SLIDE 8

PyTorch is a Python library built on top of Torch’s THNN computational backend. Its main features are:

  • Efficient tensor operations on CPU/GPU,
  • automatic on-the-fly differentiation (autograd),
  • optimizers,
  • data I/O.

“Efficient tensor operations” encompass both standard linear algebra and, as we will see later, deep-learning specific operations (convolution, pooling, etc.)

A key specificity of PyTorch is the central role of autograd: tensor operations are specified dynamically as Python operations. We will come back to this.

slide-11
SLIDE 11

>>> from torch import Tensor
>>> x = Tensor(5)
>>> x.size()
torch.Size([5])
>>> x.fill_(1.125)

 1.1250
 1.1250
 1.1250
 1.1250
 1.1250
[torch.FloatTensor of size 5]

>>> x.sum()
5.625
>>> x.mean()
1.125
>>> x.std()
0.0

The default tensor type torch.Tensor is an alias for torch.FloatTensor, but there are others with greater/lesser precision and on CPU/GPU. It can be set to a different type with torch.set_default_tensor_type.

In-place operations are suffixed with an underscore.
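As an aside (not on the slides): the statistics above come out exact because 1.125 = 9/8 is exactly representable as a 32-bit float. A quick check with Python's standard struct module:

```python
import struct

def to_float32(v):
    """Round-trip a Python float (64-bit) through 32-bit storage,
    i.e. the precision of a torch.FloatTensor coefficient."""
    return struct.unpack('f', struct.pack('f', v))[0]

print(to_float32(1.125) == 1.125)  # True: 9/8 is a dyadic rational
print(to_float32(0.1) == 0.1)      # False: 0.1 is rounded in 32 bits
```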

slide-17
SLIDE 17

torch.Tensor.narrow creates a new tensor which is a sub-part of an existing tensor, by constraining one of the indexes. It shares its content with the original tensor, and modifying one modifies the other.

>>> a = Tensor(4, 5).zero_()
>>> a

 0  0  0  0  0
 0  0  0  0  0
 0  0  0  0  0
 0  0  0  0  0
[torch.FloatTensor of size 4x5]

>>> a.narrow(1, 2, 2).fill_(1.0)

 1  1
 1  1
 1  1
 1  1
[torch.FloatTensor of size 4x2]

>>> a

 0  0  1  1  0
 0  0  1  1  0
 0  0  1  1  0
 0  0  1  1  0
[torch.FloatTensor of size 4x5]
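The selection that narrow(dim, start, length) performs can be sketched in plain Python on a 2d nested list (the helper `narrow_2d` is made up here; unlike torch's narrow it returns a copy rather than a view on the same storage):

```python
def narrow_2d(t, dim, start, length):
    """Keep 'length' indexes starting at 'start' along dimension 'dim'
    of a 2d nested-list tensor. Returns a copy, whereas torch's narrow
    returns a view sharing the original storage."""
    if dim == 0:
        return [row[:] for row in t[start:start + length]]
    elif dim == 1:
        return [row[start:start + length] for row in t]
    raise ValueError("dim must be 0 or 1 for a 2d tensor")

a = [[0, 0, 0, 0, 0] for _ in range(4)]
print(narrow_2d(a, 1, 2, 2))  # four rows of two coefficients each
```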

slide-21
SLIDE 21

PyTorch provides interfacing to standard linear operations, such as linear system solving or Eigen-decomposition.

>>> y = Tensor(3).normal_()
>>> y

-1.6978
-0.6911
-1.1713
[torch.FloatTensor of size 3]

>>> m = Tensor(3, 3).normal_()
>>> q, _ = torch.gels(y, m)
>>> torch.mm(m, q)

-1.6978
-0.6911
-1.1713
[torch.FloatTensor of size 3x1]

slide-25
SLIDE 25

Example: linear regression


slide-26
SLIDE 26

Given a list of points (xn, yn) ∈ R × R, n = 1, . . . , N, can we find the “best line” f (x; a, b) = ax + b going “through the points”, e.g. minimizing the mean square error

\[
\operatorname*{argmin}_{a, b} \; \frac{1}{N} \sum_{n=1}^{N} \Bigl( \underbrace{a x_n + b}_{f(x_n; a, b)} - y_n \Bigr)^{2} .
\]

Such a model would allow us to predict the y associated to a new x, simply by calculating f (x; a, b).
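Minimizing this quadratic error has a closed-form solution. As an illustrative sketch in plain Python (the helper `fit_line` is made up here, and is not the torch-based route used later in the slides):

```python
def fit_line(points):
    """Least-squares fit of f(x) = a*x + b to a list of (x, y) pairs,
    via the closed-form solution of
    argmin_{a,b} (1/N) sum_n (a*x_n + b - y_n)^2."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Points exactly on y = 2x + 1 recover a = 2, b = 1.
print(fit_line([(0, 1), (1, 3), (2, 5)]))  # (2.0, 1.0)
```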

slide-29
SLIDE 29

bash> cat systolic-blood-pressure-vs-age.dat
39 144
47 220
45 138
47 145
65 162
46 142
67 170
42 124
67 158
56 154
64 162
56 150
59 140
34 110
42 128
48 130
45 135
17 114
20 116
19 124
36 136
50 142
39 120
21 120
44 160
53 158
63 144
29 130
25 125
69 175


slide-30
SLIDE 30

[Figure: scatter plot of the data, Systolic Blood Pressure (mmHg) vs. Age (years).]

slide-31
SLIDE 31

The problem is rewritten as a matrix equation: the data matrix stacks one point (xn, yn) per row, the parameters a and b are gathered in a column vector α, and the line should satisfy x α ≃ y:

\[
\underbrace{\begin{pmatrix} x_1 & 1.0 \\ x_2 & 1.0 \\ \vdots & \vdots \\ x_N & 1.0 \end{pmatrix}}_{x \,\in\, \mathbb{R}^{N \times 2}}
\underbrace{\begin{pmatrix} a \\ b \end{pmatrix}}_{\alpha \,\in\, \mathbb{R}^{2 \times 1}}
\simeq
\underbrace{\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix}}_{y \,\in\, \mathbb{R}^{N \times 1}},
\qquad
\underbrace{\begin{pmatrix} x_1 & y_1 \\ x_2 & y_2 \\ \vdots & \vdots \\ x_N & y_N \end{pmatrix}}_{\text{data} \,\in\, \mathbb{R}^{N \times 2}} .
\]

import torch, numpy

data = torch.from_numpy(numpy.loadtxt('systolic-blood-pressure-vs-age.dat')).float()

nb = data.size(0)

x, y = torch.Tensor(nb, 2), torch.Tensor(nb, 1)
x[:, 0] = data[:, 0]
x[:, 1] = 1
y[:, 0] = data[:, 1]

alpha, _ = torch.gels(y, x)
a, b = alpha[0, 0], alpha[1, 0]

slide-36
SLIDE 36

[Figure: the fitted line (Model) overlaid on the Data, Systolic Blood Pressure (mmHg) vs. Age (years).]

slide-37
SLIDE 37

Manipulating high-dimension signals


slide-38
SLIDE 38

Data type             CPU tensor           GPU tensor
32-bit float          torch.FloatTensor    torch.cuda.FloatTensor
64-bit float          torch.DoubleTensor   torch.cuda.DoubleTensor
8-bit int (unsigned)  torch.ByteTensor     torch.cuda.ByteTensor
64-bit int (signed)   torch.LongTensor     torch.cuda.LongTensor

>>> x = torch.LongTensor(12)
>>> type(x)
<class 'torch.LongTensor'>
>>> x = x.float()
>>> type(x)
<class 'torch.FloatTensor'>
>>> x = x.cuda()
>>> type(x)
<class 'torch.cuda.FloatTensor'>

Tensors of the torch.cuda types are physically in the GPU memory, and operations on them are done by the GPU. We will come back to that later.

slide-42
SLIDE 42

The default tensor type can be set with torch.set_default_tensor_type and used through torch.Tensor

>>> x = torch.Tensor()
>>> x
[torch.FloatTensor with no dimension]
>>> torch.set_default_tensor_type('torch.LongTensor')
>>> x = torch.Tensor()
>>> x
[torch.LongTensor with no dimension]

For concision we often start our code with from torch import Tensor and use Tensor in place of torch.Tensor.

Also, the new operator of a tensor allows one to create a tensor of the same type

>>> y = torch.ByteTensor(10)
>>> u = y.new(3).fill_(123)
>>> u

 123
 123
 123
[torch.ByteTensor of size 3]

This is key to writing functions able to handle all the tensor types.

slide-45
SLIDE 45

2d tensor (e.g. grayscale image)

[Figure: a 2d grid; the indexes [•, ·] and [·, •] run along rows and columns.]

slide-46
SLIDE 46

3d tensor (e.g. rgb image)

[Figure: a 3d array; the indexes [•, ·, ·], [·, •, ·] and [·, ·, •] run along channels, rows and columns.]

slide-47
SLIDE 47

4d tensor (e.g. sequence of rgb images)

[Figure: a 4d array; the indexes [•, ·, ·, ·], [·, •, ·, ·], [·, ·, •, ·] and [·, ·, ·, •] run along images, channels, rows and columns.]

slide-48
SLIDE 48

Here are some examples from the vast library of tensor operations:

Creation

  • torch.Tensor()
  • torch.Tensor(size)
  • torch.Tensor(sequence)
  • torch.eye(n)
  • torch.from_numpy(ndarray)

Indexing, Slicing, Joining, Mutating

  • torch.Tensor.view(*args)
  • torch.Tensor.expand(*sizes)
  • torch.cat(inputs, dimension=0)
  • torch.chunk(tensor, chunks, dim=0)
  • torch.index_select(input, dim, index, out=None)
  • torch.t(input, out=None)
  • torch.transpose(input, dim0, dim1, out=None)

Filling

  • Tensor.fill_(value)
  • torch.bernoulli(input, out=None)
  • torch.normal()

slide-49
SLIDE 49

Pointwise math

  • torch.abs(input, out=None)
  • torch.add()
  • torch.cos(input, out=None)
  • torch.sigmoid(input, out=None)
  • (+ many operators)

Math reduction

  • torch.dist(input, other, p=2, out=None)
  • torch.mean()
  • torch.norm()
  • torch.std()
  • torch.sum()

BLAS and LAPACK Operations

  • torch.eig(a, eigenvectors=False, out=None)
  • torch.gels(B, A, out=None)
  • torch.inverse(input, out=None)
  • torch.mm(mat1, mat2, out=None)
  • torch.mv(mat, vec, out=None)


slide-50
SLIDE 50

x = torch.LongTensor([ [ 1, 3, 0 ], [ 2, 4, 6 ] ])
x.t()

slide-51
SLIDE 51

x = torch.LongTensor([ [ 1, 3, 0 ], [ 2, 4, 6 ] ])
x.view(-1)

slide-52
SLIDE 52

x = torch.LongTensor([ [ 1, 3, 0 ], [ 2, 4, 6 ] ])
x.view(3, -1)

slide-53
SLIDE 53

x = torch.LongTensor([ [ 1, 3, 0 ], [ 2, 4, 6 ] ])
x.narrow(1, 1, 2)

slide-54
SLIDE 54

x = torch.LongTensor([ [ 1, 3, 0 ], [ 2, 4, 6 ] ])
x.view(1, 2, 3).expand(3, 2, 3)
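The view examples reinterpret the same row-major sequence of coefficients under a new shape. As a plain-Python sketch of that reinterpretation (the helpers `flatten` and `view_2d` are made up here; a real view shares storage instead of copying):

```python
def flatten(t):
    """Row-major flattening of a 2d nested-list tensor."""
    return [v for row in t for v in row]

def view_2d(t, rows, cols):
    """Reinterpret the flattened coefficients as a rows x cols table.
    Passing -1 for 'rows' infers it from the total number of elements."""
    flat = flatten(t)
    if rows == -1:
        rows = len(flat) // cols
    assert rows * cols == len(flat)
    return [flat[i * cols:(i + 1) * cols] for i in range(rows)]

x = [[1, 3, 0], [2, 4, 6]]
print(flatten(x))        # [1, 3, 0, 2, 4, 6]
print(view_2d(x, 3, 2))  # [[1, 3], [0, 2], [4, 6]]
```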

slide-55
SLIDE 55

x = torch.LongTensor([ [ [ 1, 2, 1 ], [ 2, 1, 2 ] ], [ [ 3, 0, 3 ], [ 0, 3, 0 ] ] ])
x.narrow(0, 0, 1)

slide-56
SLIDE 56

x = torch.LongTensor([ [ [ 1, 2, 1 ], [ 2, 1, 2 ] ], [ [ 3, 0, 3 ], [ 0, 3, 0 ] ] ])
x.narrow(2, 0, 2)

slide-57
SLIDE 57

x = torch.LongTensor([ [ [ 1, 2, 1 ], [ 2, 1, 2 ] ], [ [ 3, 0, 3 ], [ 0, 3, 0 ] ] ])
x.transpose(0, 1)

slide-58
SLIDE 58

x = torch.LongTensor([ [ [ 1, 2, 1 ], [ 2, 1, 2 ] ], [ [ 3, 0, 3 ], [ 0, 3, 0 ] ] ])
x.transpose(0, 2)

slide-59
SLIDE 59

x = torch.LongTensor([ [ [ 1, 2, 1 ], [ 2, 1, 2 ] ], [ [ 3, 0, 3 ], [ 0, 3, 0 ] ] ])
x.transpose(1, 2)

slide-60
SLIDE 60

PyTorch offers simple interfaces to standard image data-bases.

import torch
import torchvision

# Get the CIFAR10 train images, download if necessary
cifar = torchvision.datasets.CIFAR10('./data/cifar10/', train = True, download = True)

# Converts the numpy tensor into a PyTorch one
x = torch.from_numpy(cifar.train_data).transpose(1, 3).transpose(2, 3)

# Prints out some info
print(str(type(x)), x.size(), x.min(), x.max())

prints

Files already downloaded and verified
<class 'torch.ByteTensor'> torch.Size([50000, 3, 32, 32]) 0 255

[Figure: the resulting tensor, 50,000 images of 3 channels of 32 × 32 pixels.]

slide-63
SLIDE 63

# Narrow to the first 48 images, make the tensor Float, and move the
# values to [0, 1]
x = x.narrow(0, 0, 48).float().div(255)

# Save these samples as a single image
torchvision.utils.save_image(x, 'images-cifar-4x12.png', nrow = 12)

slide-64
SLIDE 64

# Switch the row and column indexes
x.transpose_(2, 3)

torchvision.utils.save_image(x, 'images-cifar-4x12-rotated.png', nrow = 12)

slide-65
SLIDE 65

# Kill the green (1) and blue (2) channels
x.narrow(1, 1, 2).fill_(-1)

torchvision.utils.save_image(x, 'images-cifar-4x12-rotated-and-red.png', nrow = 12)

slide-66
SLIDE 66

Broadcasting


slide-67
SLIDE 67

Broadcasting automagically expands dimensions of size 1 by replicating coefficients, when it is necessary to perform operations.

>>> A = Tensor([[1], [2], [3], [4]])
>>> A

 1
 2
 3
 4
[torch.FloatTensor of size 4x1]

>>> B = Tensor([[5, -5, 5, -5, 5]])
>>> B

 5 -5  5 -5  5
[torch.FloatTensor of size 1x5]

>>> C = A + B
>>> C

 6 -4  6 -4  6
 7 -3  7 -3  7
 8 -2  8 -2  8
 9 -1  9 -1  9
[torch.FloatTensor of size 4x5]

slide-69
SLIDE 69

A = Tensor([[1], [2], [3], [4]])
B = Tensor([[5, -5, 5, -5, 5]])
C = A + B

[Figure: A (4x1, column 1 2 3 4) and B (1x5, row 5 -5 5 -5 5) are broadcast by replicating A's column five times and B's row four times, and the two resulting 4x5 arrays are added to give C.]

slide-72
SLIDE 72

Precisely, broadcasting proceeds as follows:

  1. If one of the tensors has fewer dimensions than the other, it is reshaped by adding as many dimensions of size 1 as necessary in the front; then
  2. for every mismatch, if one of the two sizes is one, the tensor is expanded along this axis by replicating coefficients.

If there is a size mismatch for one of the dimensions and neither of the sizes is one, the operation fails.
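These two steps can be sketched as a plain-Python function computing the broadcast shape of two size tuples (an illustration of the rule, not PyTorch's actual implementation):

```python
def broadcast_shape(s1, s2):
    """Resolve the result shape of broadcasting two tensor shapes.
    Step 1: pad the shorter shape with dimensions of size 1 in front.
    Step 2: for each mismatch, a size of 1 expands to the other size;
    any other mismatch makes the operation fail."""
    n = max(len(s1), len(s2))
    s1 = (1,) * (n - len(s1)) + tuple(s1)
    s2 = (1,) * (n - len(s2)) + tuple(s2)
    result = []
    for a, b in zip(s1, s2):
        if a == b or b == 1:
            result.append(a)
        elif a == 1:
            result.append(b)
        else:
            raise ValueError("size mismatch: %d vs %d" % (a, b))
    return tuple(result)

print(broadcast_shape((4, 1), (1, 5)))  # (4, 5), as in the A + B example
print(broadcast_shape((5,), (3, 5)))    # (3, 5)
```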

slide-73
SLIDE 73

>>> x = Tensor([1, 2, 3, 4, 5])
>>> y = Tensor(3, 5).fill_(2.0)
>>> z = x + y
>>> z

 3  4  5  6  7
 3  4  5  6  7
 3  4  5  6  7
[torch.FloatTensor of size 3x5]

>>> a = Tensor(3, 1, 5).fill_(1.0)
>>> b = Tensor(1, 3, 5).fill_(2.0)
>>> c = a * b + a
>>> c

(0 ,.,.) =
  3  3  3  3  3
  3  3  3  3  3
  3  3  3  3  3

(1 ,.,.) =
  3  3  3  3  3
  3  3  3  3  3
  3  3  3  3  3

(2 ,.,.) =
  3  3  3  3  3
  3  3  3  3  3
  3  3  3  3  3
[torch.FloatTensor of size 3x3x5]

slide-75
SLIDE 75

Tensor internals


slide-76
SLIDE 76

A tensor is a view of a storage, which is a low-level 1d vector.

>>> q = Tensor(2, 4).zero_()
>>> q.storage()

 0.0
 0.0
 0.0
 0.0
 0.0
 0.0
 0.0
 0.0
[torch.FloatStorage of size 8]

>>> s = q.storage()
>>> s[4] = 1.0
>>> s

 0.0
 0.0
 0.0
 0.0
 1.0
 0.0
 0.0
 0.0
[torch.FloatStorage of size 8]

>>> q

 0  0  0  0
 1  0  0  0
[torch.FloatTensor of size 2x4]


slide-82
SLIDE 82

Multiple tensors can share the same storage. It happens when using operations such as narrow(), view(), expand() or transpose().

>>> r = q.view(2, 2, 2)
>>> r
(0 ,.,.) =
  0  0
  0  0

(1 ,.,.) =
  1  0
  0  0
[torch.FloatTensor of size 2x2x2]
>>> r[1, 1, 0] = 1.0
>>> q
 0  0  0  0
 1  0  1  0
[torch.FloatTensor of size 2x4]
>>> r.narrow(0, 1, 1).fill_(3.0)
(0 ,.,.) =
  3  3
  3  3
[torch.FloatTensor of size 1x2x2]
>>> q
 0  0  0  0
 3  3  3  3
[torch.FloatTensor of size 2x4]
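The same sharing behaviour can be checked in current PyTorch. A small sketch, starting from a zero tensor rather than the slide's running example:

```python
import torch

q = torch.zeros(2, 4)
r = q.view(2, 2, 2)            # same storage, reinterpreted as 2x2x2

# r has strides (4, 2, 1), so r[1, 1, 0] is storage index 6, i.e. q[1, 2].
r[1, 1, 0] = 1.0
print(q[1, 2].item())          # the write is visible through q

# narrow(0, 1, 1) selects the second 2x2 block (storage indices 4..7),
# and fill_ writes in place, so the whole second row of q becomes 3.
r.narrow(0, 1, 1).fill_(3.0)
print(q.tolist())
```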


slide-90
SLIDE 90

The first coefficient of a tensor is the one at storage_offset() in storage(). To increment index k by 1, you have to move by stride(k) elements in the storage.

>>> q = torch.arange(0, 20).storage()
>>> x = torch.Tensor().set_(q, storage_offset = 5, size = (3, 2), stride = (4, 1))
>>> x
  5   6
  9  10
 13  14
[torch.FloatTensor of size 3x2]

[Figure: the storage q holds 0, 1, ..., 19; x[0,0] sits at the storage offset, x[i,j] and x[i,j+1] are stride(1) = 1 elements apart, and x[i,j] and x[i+1,j] are stride(0) = 4 elements apart.]
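In current PyTorch, set_() with an explicit storage is rarely used directly; as_strided() builds the same offset/stride view of a tensor's buffer. A sketch under that assumption:

```python
import torch

q = torch.arange(0., 20.)      # flat buffer holding 0, 1, ..., 19

# 3x2 view starting at storage offset 5: consecutive rows are 4 elements
# apart in the buffer, consecutive columns 1 element apart.
x = q.as_strided((3, 2), (4, 1), 5)
print(x.tolist())              # [[5.0, 6.0], [9.0, 10.0], [13.0, 14.0]]
```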


slide-95
SLIDE 95

We can explicitly create different “views” of the same storage.

>>> n = torch.linspace(1, 4, 4)
>>> n
 1
 2
 3
 4
[torch.FloatTensor of size 4]
>>> Tensor().set_(n.storage(), 1, (3, 3), (0, 1))
 2  3  4
 2  3  4
 2  3  4
[torch.FloatTensor of size 3x3]
>>> Tensor().set_(n.storage(), 1, (2, 4), (1, 0))
 2  2  2  2
 3  3  3  3
[torch.FloatTensor of size 2x4]

This is in particular how transpositions and broadcasting are implemented.
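The two stride tricks above (a stride of 0 repeats the same data along a dimension) can be reproduced in current PyTorch with expand(), which creates exactly such stride-0 views. A sketch:

```python
import torch

n = torch.linspace(1, 4, 4)              # tensor([1., 2., 3., 4.])

# Stride (0, 1): the same three values 2, 3, 4 are re-read for every row.
a = n[1:].expand(3, 3)

# Stride (1, 0): each value is re-read across an entire row.
b = n[1:3].unsqueeze(1).expand(2, 4)

print(a.stride(), b.stride())            # (0, 1) (1, 0)
print(a.tolist())
print(b.tolist())
```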

slide-96
SLIDE 96

This organization explains the following (maybe surprising) error

>>> x = Tensor(100, 100)
>>> y = x.t()
>>> y.view(-1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: input is not contiguous at /home/fleuret/misc/git/pytorch/torch/lib/TH/generic/THTensor.c:231
>>> y.stride()
(1, 100)

t() creates a tensor that shares the storage with the original tensor. It cannot be “flattened” into a 1d contiguous view without a memory copy.
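A sketch of the usual workarounds in current PyTorch (the error message differs today, but the cause is the same): contiguous() makes an explicit copy into a fresh contiguous buffer, and reshape() performs that copy automatically only when it is needed.

```python
import torch

x = torch.zeros(100, 100)
y = x.t()                      # shares storage; strides become (1, 100)
assert not y.is_contiguous()   # so y.view(-1) would raise a RuntimeError

z = y.contiguous().view(-1)    # explicit copy, then a cheap view
w = y.reshape(-1)              # copies under the hood when necessary
print(z.shape, w.shape)
```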

slide-97
SLIDE 97

Practical session: https://fleuret.org/dlc/dlc-practical-1.pdf


slide-98
SLIDE 98

The end