PyTorch Functions
Tensor Operations
Tensor.scatter_(dim, index, src, reduce=None) → Tensor
dim (int) – the axis along which to index
index (LongTensor) – the indices of elements to scatter; can be either empty or of the same dimensionality as src. When empty, the operation returns self unchanged.
src (Tensor or float) – the source element(s) to scatter.
reduce (str, optional) – reduction operation to apply, can be either 'add' or 'multiply'.
The operation writes the values of src into self at the positions specified by index.
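Concretely, for a 3-D tensor the values are written according to:
self[index[i][j][k]][j][k] = src[i][j][k]  # if dim == 0
self[i][index[i][j][k]][k] = src[i][j][k]  # if dim == 1
self[i][j][index[i][j][k]] = src[i][j][k]  # if dim == 2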
>>> src = torch.arange(1, 11).reshape((2, 5))
>>> src
tensor([[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10]])
>>> index = torch.tensor([[0, 1, 2, 0]])
>>> torch.zeros(3, 5, dtype=src.dtype).scatter_(0, index, src)
tensor([[1, 0, 0, 4, 0],
[0, 2, 0, 0, 0],
[0, 0, 3, 0, 0]])
>>> index = torch.tensor([[0, 1, 2], [0, 1, 4]])
>>> torch.zeros(3, 5, dtype=src.dtype).scatter_(1, index, src)
tensor([[1, 2, 3, 0, 0],
[6, 7, 0, 0, 8],
[0, 0, 0, 0, 0]])
In the first example, dim is 0, so index specifies the target index along dimension 0; the remaining dimensions keep the positions they have in src.
Look at src[0][1] = 2: the corresponding index value is index[0][1] = 1, so src[0][1] is placed into self[1][1].
Look at src[0][2] = 3: the corresponding index value is index[0][2] = 2, so src[0][2] is placed into self[2][2].
Because the index tensor has shape [1, 4], only the first four elements of the first row of src are scattered; the second row of src (and src[0][4]) is not placed.
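The optional reduce argument combines each scattered value with the value already stored in self instead of overwriting it. A small sketch using the scalar-value form of scatter_ (values chosen arbitrarily):
>>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]), 1.23, reduce='multiply')
tensor([[2.0000, 2.0000, 2.4600, 2.0000],
        [2.0000, 2.0000, 2.0000, 2.4600]])
>>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]), 1.23, reduce='add')
tensor([[2.0000, 2.0000, 3.2300, 2.0000],
        [2.0000, 2.0000, 2.0000, 3.2300]])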
# Using scatter_ to create a one-hot encoding from an integer class label y
y = 3
torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1)
# tensor([0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])
torch.mean(input, dim, keepdim=False, *, dtype=None, out=None) → Tensor
input (Tensor) – the input tensor.
dim (int or tuple of ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not.
import torch

a = torch.arange(24, dtype=torch.float).reshape(2, 3, 4)
# 1
a.mean(dim=0).shape
# (3, 4)
# 2
a.mean(dim=1).shape
# (2, 4)
# 3
a.mean(dim=(0, 2)).shape
# (3,)
# think of a as an image with 2 channels, 3 by 4
Reducing over dimension 0, e.g., Output[0][0] is the mean of Input[0][0][0] and Input[1][0][0].
Reducing over dimension 1, e.g., Output[0][0] is the mean of Input[0][0][0], Input[0][1][0], and Input[0][2][0].
Reducing over dimensions 0 and 2, e.g., Output[0] is the mean of Input[0][0][0], Input[0][0][1], Input[0][0][2], Input[0][0][3], Input[1][0][0], Input[1][0][1], Input[1][0][2], and Input[1][0][3].
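A quick check of these reductions, continuing the example above (the last line just shows what keepdim does):
a.mean(dim=0)[0][0]                # tensor(6.)  -- Input[0][0][0] = 0, Input[1][0][0] = 12
(a[0][0][0] + a[1][0][0]) / 2      # tensor(6.)  -- same value
a.mean(dim=0, keepdim=True).shape  # torch.Size([1, 3, 4])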
torchvision.transforms.Normalize(mean, std, inplace=False)
mean (sequence) – Sequence of means for each channel.
std (sequence) – Sequence of standard deviations for each channel.
inplace (bool, optional) – Bool to make this operation in-place.
import torch
import torchvision

# a is a batch of 2 images, each with 2 channels of size 2x2
a = torch.arange(16, dtype=torch.float).reshape(2, 2, 2, 2)
# mean sequence: one mean per channel
m = torch.tensor([0., 1.])
fn = torchvision.transforms.Normalize(mean=m, std=torch.ones(1))
For every image, the transform subtracts 0 from every cell in channel 0 and 1 from every cell in channel 1, then divides by the std of 1, leaving the values otherwise unchanged.
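A quick check, continuing the example above (applying the transform to the whole batch at once, assuming a torchvision version whose Normalize accepts batched (N, C, H, W) tensors):
out = fn(a)
# channel 0 is unchanged (mean 0, std 1); channel 1 has 1 subtracted from every cell
(out[:, 0] == a[:, 0]).all()      # tensor(True)
(out[:, 1] == a[:, 1] - 1).all()  # tensor(True)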