How do I flatten a tensor in PyTorch?
flatten() uses reshape() under the hood in PyTorch's C++ code. With flatten() you can do things like this:
import torch
input = torch.rand(2, 3, 4).cuda()
print(input.shape) # torch.Size([2, 3, 4])
print(input.flatten(start_dim=0, end_dim=1).shape) # torch.Size([6, 4])
To do the same flattening with reshape, you would write:
print(input.reshape((6,4)).shape) # torch.Size([6, 4])
But usually you would just do a simple flatten, like this:
print(input.reshape(-1).shape) # torch.Size([24])
print(input.flatten().shape) # torch.Size([24])
Note: reshape() is more robust than view(). It will work on any tensor, while view() works only on a tensor t where t.is_contiguous() == True.
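To make that contiguity point concrete, here is a minimal sketch (transposing is just one common way to get a non-contiguous tensor):

import torch

t = torch.rand(2, 3)
nc = t.t()                    # transposing returns a non-contiguous view
print(nc.is_contiguous())     # False
print(nc.reshape(-1).shape)   # torch.Size([6]) -- reshape copies when it must
try:
    nc.view(-1)               # raises, because no valid view exists for these strides
except RuntimeError as e:
    print("view failed:", e)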
TL;DR: torch.flatten()

Use torch.flatten(), which was introduced in v0.4.1 and documented in v1.0rc1:
>>> t = torch.tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
>>> torch.flatten(t)
tensor([1, 2, 3, 4, 5, 6, 7, 8])
>>> torch.flatten(t, start_dim=1)
tensor([[1, 2, 3, 4],
        [5, 6, 7, 8]])
For v0.4.1 and earlier, use t.reshape(-1).

With t.reshape(-1):

If the requested view is contiguous in memory, this will be equivalent to t.view(-1) and memory will not be copied. Otherwise it will be equivalent to t.contiguous().view(-1).
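You can check which case you hit by comparing data pointers; here is a minimal sketch (data_ptr() reports the address of a tensor's first element, so equal pointers mean no copy was made):

>>> t = torch.rand(2, 3)                         # contiguous
>>> t.reshape(-1).data_ptr() == t.data_ptr()     # same storage, no copy
True
>>> nc = t.t()                                   # non-contiguous view
>>> nc.reshape(-1).data_ptr() == nc.data_ptr()   # a copy was made
False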
Other non-options:

- t.view(-1) won't copy memory, but may not work depending on the original size and stride
- t.resize(-1) gives a RuntimeError (see below)
- t.resize(t.numel()) gives a warning about being a low-level method (see discussion below)
(Note: pytorch's reshape() may change data but numpy's reshape() won't.)
t.resize(t.numel()) needs some discussion. The torch.Tensor.resize_ documentation says:

The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged)

Given that the current strides will be ignored with the new (1, numel()) size, the elements may appear in a different order than with reshape(-1). However, "size" may mean the memory size, rather than the tensor's size.
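Here is a minimal sketch of that reordering, using resize_ directly (assuming a PyTorch version that permits resizing a non-contiguous view in place):

>>> t = torch.arange(6).reshape(2, 3).t()  # non-contiguous, shape (3, 2)
>>> t.reshape(-1)                          # logical order of the transposed view
tensor([0, 3, 1, 4, 2, 5])
>>> t.resize_(6)                           # reinterprets the storage as C-contiguous
tensor([0, 1, 2, 3, 4, 5])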
It would be nice if t.resize(-1) worked for both convenience and efficiency, but with torch 1.0.1.post2, t = torch.rand([2, 3, 5]); t.resize(-1) gives:
RuntimeError: requested resize to -1 (-1 elements in total), but the given
tensor has a size of 2x2 (4 elements). autograd's resize can only change the
shape of a given tensor, while preserving the number of elements.
I raised a feature request for this here, but the consensus was that resize() was a low-level method, and that reshape() should be preferred.
Use torch.reshape, passing a single dimension to flatten the tensor. If you do not want that dimension to be hardcoded, you can pass -1 and the correct size will be inferred.
>>> x = torch.tensor([[1,2], [3,4]])
>>> x.reshape(-1)
tensor([1, 2, 3, 4])
EDIT:
For your example: