Reshaping a tensor with padding in PyTorch
A module that might be clearer and more suitable for this question is torch.nn.ConstantPad1d, e.g.:
import torch
from torch import nn
x = torch.ones(30, 35, 49)
padded = nn.ConstantPad1d((0, 512 - 49), 0)(x)
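A quick sanity check of the resulting shape (an illustrative addition, assuming the tensors above):
print(padded.shape)  # torch.Size([30, 35, 512]) - only the last dimension was padded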
While @nemo's solution works fine, there is a PyTorch internal routine, torch.nn.functional.pad, that does the same, and which has a couple of properties that a torch.ones(*sizes) * pad_value solution does not (namely other forms of padding, like reflection padding or replicate padding; it also checks some gradient-related properties):
import torch
import torch.nn.functional as F
source = torch.rand((5,10))
# now we expand to size (7, 11) by appending a row of 0s at pos 0 and pos 6,
# and a column of 0s at pos 10
result = F.pad(input=source, pad=(0, 1, 1, 1), mode='constant', value=0)
The semantics of the arguments are:
input: the source tensor,
pad: a list of length 2 * len(source.shape) of the form (begin last axis, end last axis, begin 2nd-to-last axis, end 2nd-to-last axis, begin 3rd-to-last axis, ...) that states how many values should be added to the beginning and end of each axis,
mode: 'constant', 'reflect' or 'replicate' for the different kinds of padding. Default: 'constant',
value: the fill value for constant padding.
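For the 3-dimensional tensor from the question, a minimal sketch with F.pad (assuming the target last-dimension size of 512) could look like this:
import torch
import torch.nn.functional as F
x = torch.ones(30, 35, 49)
# pad only the last axis: 0 values before, 512 - 49 values after; the other axes get no padding
padded = F.pad(x, pad=(0, 512 - 49, 0, 0, 0, 0), mode='constant', value=0)
print(padded.shape)  # torch.Size([30, 35, 512])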
The simplest solution is to allocate a tensor with your padding value and the target dimensions and assign the portion for which you have data:
target = torch.zeros(30, 35, 512)
source = torch.ones(30, 35, 49)
target[:, :, :49] = source
Note that there is no guarantee that padding your tensor with zeros and then multiplying it with another tensor makes sense in the end; that is up to you.
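If a padding value other than zero is needed, the same pattern works with torch.full; a sketch under the assumption that -1.0 is the desired fill value:
target = torch.full((30, 35, 512), -1.0)  # preallocate with the fill value instead of zeros
source = torch.ones(30, 35, 49)
target[:, :, :49] = source  # copy the available data into the first 49 positions of the last axis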