How to do slice assignment in Tensorflow
Currently, you can do slice assignment for variables in TensorFlow. There is no specific named function for it, but you can select a slice and call assign on it:
my_var = my_var[4:8].assign(tf.zeros(4))
First, note that, according to the documentation, the return value of assign, even when applied to a slice, is always a reference to the whole variable after the update has been applied.
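As a quick illustration of that point (a sketch assuming TF2's eager mode, where variables support sliced assign directly), the returned value covers the full variable, not just the slice:

```python
import tensorflow as tf

v = tf.Variable(tf.ones(10))
result = v[4:8].assign(tf.zeros(4))

# The return value has the whole variable's shape, not the slice's
print(result.shape)   # (10,)
print(v.numpy())      # [1. 1. 1. 1. 0. 0. 0. 0. 1. 1.]
```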
EDIT: The information below is either deprecated, imprecise, or was always wrong. The fact is that the returned value of assign is a tensor that can be readily used and that already incorporates the dependency on the assignment, so simply evaluating it or using it in further operations will ensure it gets executed, with no need for an explicit tf.control_dependencies block.
Note, also, that this only adds the assignment op to the graph; it will not run unless it is explicitly executed or set as a dependency of some other operation. A good practice is to use it in a tf.control_dependencies context:
with tf.control_dependencies([my_var[4:8].assign(tf.zeros(4))]):
    my_var = tf.identity(my_var)
You can read more about it in TensorFlow issue #4638.
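A minimal end-to-end sketch of this pattern (written against tf.compat.v1's graph mode so it also runs under TF2; the variable name is illustrative):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # graph mode, as in the original answer

my_var = tf.compat.v1.get_variable("my_var", initializer=tf.ones(10))
assign_op = my_var[4:8].assign(tf.zeros(4))
# The control dependency forces the assignment to run before the read
with tf.control_dependencies([assign_op]):
    out = tf.identity(my_var)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    result = sess.run(out)
    print(result)  # entries 4..7 are zero, the rest are one
```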
Answer for TF2:
Unfortunately, there is still no elegant way to do this in TensorFlow 2 (TF2). The best way I found was to unstack, assign, and then restack:
x = tf.random.uniform(shape=(5,))
new_val = 7
y = tf.unstack(x)        # a Python list of 5 scalar tensors
y[2] = new_val
x_updated = tf.stack(y)  # rebuild the tensor with the new value at index 2
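As an alternative not mentioned in the original answer, tf.tensor_scatter_nd_update performs the same single-element update in one call, without unstacking:

```python
import tensorflow as tf

x = tf.range(5, dtype=tf.float32)  # [0., 1., 2., 3., 4.]
# Replace element at index 2 with 7, returning a new tensor
x_updated = tf.tensor_scatter_nd_update(x, indices=[[2]], updates=[7.0])
print(x_updated.numpy())  # [0. 1. 7. 3. 4.]
```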
The tf.scatter_update function can modify a tensor along its first dimension. As stated in the documentation:

indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
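A small sketch of that first-dimension restriction, using the TF2 Variable method of the same name in eager mode (the shapes here are illustrative):

```python
import tensorflow as tf

v = tf.Variable(tf.zeros([3, 2]))
# Indices address whole rows (the first dimension); rows 0 and 2 are replaced
v.scatter_update(tf.IndexedSlices(values=tf.ones([2, 2]),
                                  indices=tf.constant([0, 2])))
print(v.numpy())  # rows 0 and 2 are ones, row 1 stays zero
```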
You can use the scatter_nd_update function to do what you want, as shown below (which I have tested):

import numpy as np
import tensorflow as tf

a = tf.Variable(tf.zeros([10, 36, 36]))
value1 = np.random.randn(1, 36).astype(np.float32)  # match a's float32 dtype
e = tf.scatter_nd_update(a, [[0, 1]], value1)       # a[0, 1, :] = value1

sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(e)
print(sess.run(a))
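In TF2's eager mode, the same update can be sketched with the Variable's scatter_nd_update method, with no session needed (the ones-valued update here is illustrative):

```python
import tensorflow as tf

a = tf.Variable(tf.zeros([10, 36, 36]))
value1 = tf.ones([1, 36])
# Index [0, 1] addresses the sub-tensor a[0, 1, :], which value1 replaces
a.scatter_nd_update(indices=[[0, 1]], updates=value1)
print(a.numpy()[0, 1, :5])  # [1. 1. 1. 1. 1.]
```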
I believe what you need is the assign_slice_update discussed in ticket #206. It is not yet available, though.
UPDATE: This is now implemented. See jdehesa's answer: https://stackoverflow.com/a/43139565/6531137
Until assign_slice_update (or scatter_nd()) is available, you could build a block of the desired row containing the values you don't want to modify, along with the desired values to update, like so:
import tensorflow as tf

a = tf.Variable(tf.ones([10, 36, 36]))

i = 3
j = 5

# Gather values inside the a[i,...] block that are not on column j
idx_before = tf.concat([tf.reshape(tf.tile(tf.constant([i]), [j]), [-1, 1]),
                        tf.reshape(tf.range(j), [-1, 1])], axis=1)
values_before = tf.gather_nd(a, idx_before)
idx_after = tf.concat([tf.reshape(tf.tile(tf.constant([i]), [36 - j - 1]), [-1, 1]),
                       tf.reshape(tf.range(j + 1, 36), [-1, 1])], axis=1)
values_after = tf.gather_nd(a, idx_after)

# Build a subset of tensor `a` with the values that should not be touched and the values to update
block = tf.concat([values_before, 5 * tf.ones([1, 36]), values_after], axis=0)
d = tf.scatter_update(a, i, block)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(d)
    print(a.eval()[3, 4:7, :])  # Print a subset of the tensor to verify
The example generates a tensor of ones and performs a[i,j,:] = 5. Most of the complexity lies in getting the values that we don't want to modify, a[i,~j,:] (otherwise scatter_update() would replace those values).
If you want to perform T[i,k,:] = a[1,1,:] as you asked, you need to replace the 5*tf.ones([1, 36]) in the previous example with tf.gather_nd(a, [[1, 1]]).
Another approach would be to create a mask, use tf.where() to select the desired elements from it, and assign the result back to the variable, as such:

import tensorflow as tf

a = tf.Variable(tf.zeros([10, 36, 36]))

i = tf.constant([3])
j = tf.constant([5])

# Build a mask using indices to perform [i,j,:]
atleast_2d = lambda x: tf.reshape(x, [-1, 1])
indices = tf.concat([atleast_2d(tf.tile(i, [36])),
                     atleast_2d(tf.tile(j, [36])),
                     atleast_2d(tf.range(36))], axis=1)
mask = tf.cast(tf.sparse_to_dense(indices, [10, 36, 36], 1), tf.bool)

to_update = 5 * tf.ones_like(a)
out = a.assign(tf.where(mask, to_update, a))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(out)
    print(a.eval()[2:5, 5, :])

It is potentially less efficient in terms of memory, since it requires twice the memory to handle the a-like to_update tensor, but you could easily modify this last example to get a gradient-preserving operation from the tf.where(...) node. You might also be interested in looking at this other StackOverflow question: Conditional assignment of tensor values in TensorFlow.
Those inelegant contortions should be replaced with a call to the proper TensorFlow function as it becomes available.
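Under TF2's eager mode, the mask-and-where idea above can be sketched more directly (the small shapes and row index here are illustrative, not from the original answers):

```python
import tensorflow as tf

a = tf.zeros([4, 3])
# Boolean mask that is True exactly on row 2
mask = tf.broadcast_to(tf.range(4)[:, None] == 2, a.shape)
# Differentiable "assignment": row 2 becomes 5, everything else is kept
out = tf.where(mask, tf.fill(a.shape, 5.0), a)
print(out.numpy())
```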