Tensorflow "map operation" for tensor?
As of version 0.8 there is `map_fn`. From the documentation:

```
map_fn(fn, elems, dtype=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)
```
map on the list of tensors unpacked from `elems` on dimension 0.

This map operator repeatedly applies the callable `fn` to a sequence of elements from first to last. The elements are made of the tensors unpacked from `elems`. `dtype` is the data type of the return value of `fn`. Users must provide `dtype` if it is different from the data type of `elems`.

Suppose that `elems` is unpacked into `values`, a list of tensors. The shape of the result tensor is `[len(values)] + fn(values[0]).shape`.
Args:

- `fn`: The callable to be performed.
- `elems`: A tensor to be unpacked to apply `fn`.
- `dtype`: (optional) The output type of `fn`.
- `parallel_iterations`: (optional) The number of iterations allowed to run in parallel.
- `back_prop`: (optional) True enables back propagation.
- `swap_memory`: (optional) True enables GPU-CPU memory swapping.
- `name`: (optional) Name prefix for the returned tensors.
Returns:

A tensor that packs the results of applying `fn` to the list of tensors unpacked from `elems`, from first to last.

Raises:

TypeError: if `fn` is not callable.

Example:
```
elems = [1, 2, 3, 4, 5, 6]
squares = map_fn(lambda x: x * x, elems)
# squares == [1, 4, 9, 16, 25, 36]
```
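As a minimal runnable sketch (the values are illustrative), here is the `dtype` argument in action when `fn` returns a different type than `elems`:

```
import tensorflow as tf

elems = tf.constant([1, 2, 3, 4, 5, 6])
# fn returns float32 while elems is int32, so dtype must be passed explicitly
halves = tf.map_fn(lambda x: tf.cast(x, tf.float32) / 2.0, elems, dtype=tf.float32)

with tf.Session() as sess:
    print(sess.run(halves))  # [0.5 1.  1.5 2.  2.5 3. ]
```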
There are a few answers - none quite as elegant as a map function. Which is best depends a bit on your desire for memory efficiency.
(a) You can use `enqueue_many` to throw them into a `tf.FIFOQueue`, then dequeue and `tf.image.resize_image_with_crop_or_pad` one image at a time, and concat it all back into one big tensor. This is probably slow, and requires N calls to `run` for N images.
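A hedged sketch of what (a) might look like; the image shape, target size, and the stand-in data are assumptions for illustration:

```
import numpy as np
import tensorflow as tf

# assumed: a batch of 100x100 RGB images fed from numpy
images_in = tf.placeholder(tf.float32, [None, 100, 100, 3])
queue = tf.FIFOQueue(capacity=64, dtypes=[tf.float32], shapes=[[100, 100, 3]])
enqueue = queue.enqueue_many(images_in)
one_resized = tf.image.resize_image_with_crop_or_pad(queue.dequeue(), 64, 64)

with tf.Session() as sess:
    raw_images = np.random.rand(10, 100, 100, 3).astype(np.float32)  # stand-in data
    sess.run(enqueue, feed_dict={images_in: raw_images})
    # N separate run calls for N images, then stitch them back together
    resized = np.stack([sess.run(one_resized) for _ in range(10)])
```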
(b) You could use a single placeholder, feeding and running one image at a time to resize and crop each on its way in from your original data source. This is possibly the best option from a memory perspective, because you never have to store the unresized data in memory.
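A sketch of (b), assuming the images arrive one at a time from some Python-side source (`load_images()` here is a hypothetical loader):

```
import tensorflow as tf

image_in = tf.placeholder(tf.float32, [None, None, 3])  # one unresized image
image_out = tf.image.resize_image_with_crop_or_pad(image_in, 64, 64)

with tf.Session() as sess:
    # only one unresized image is ever held in memory at a time
    resized = [sess.run(image_out, feed_dict={image_in: img})
               for img in load_images()]
```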
(c) You could use the `tf.control_flow_ops.While` op to iterate through the full batch and build up the result in a `tf.Variable`. Particularly if you take advantage of the parallel execution permitted by `While`, this is likely to be the fastest approach.
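The raw `While` op is quite low-level; as a hedged sketch, the same idea is easier to express with the later `tf.while_loop` plus a `tf.TensorArray` in place of the `tf.Variable` accumulator (shapes and sizes are assumptions):

```
import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 100, 100, 3])
n = tf.shape(images)[0]

def body(i, acc):
    # resize one image and append it to the accumulator
    img = tf.image.resize_image_with_crop_or_pad(images[i], 64, 64)
    return i + 1, acc.write(i, img)

_, acc = tf.while_loop(lambda i, _: i < n, body,
                       [tf.constant(0), tf.TensorArray(tf.float32, size=n)])
resized = acc.stack()  # shape [n, 64, 64, 3]
```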
I'd probably go for option (c) unless you want to minimize memory use, in which case filtering it on the way in (option b) would be a better choice.
TensorFlow provides a couple of higher-order functions, and one of them is `tf.map_fn`. The usage is very easy: you define your mapping and apply it to the tensor:
```
variable = tf.Variable(...)          # any tensor or variable to map over
mapping = lambda x: f(x)             # f is your per-element function
res = tf.map_fn(mapping, variable)   # applies f to each slice along dimension 0
```
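For example, a self-contained sketch (the values are made up) that sums each row of a matrix:

```
import tensorflow as tf

elems = tf.constant([[1.0, 2.0], [3.0, 4.0]])
row_sums = tf.map_fn(lambda row: tf.reduce_sum(row), elems)

with tf.Session() as sess:
    print(sess.run(row_sums))  # [3. 7.]
```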