tensorflow: how to assign a value to an EagerTensor slice? ---- 'tensorflow.python.framework.ops.EagerTensor' object does not support item assignment
As in NumPy or PyTorch, we can do something like the following, but how do we do it with TF 2.0? The code below raises this exception:
'tensorflow.python.framework.ops.EagerTensor' object does not support item assignment
prediction[:,:,0] = tf.math.sigmoid(prediction[:,:,0])
About this issue
- Original URL
- State: open
- Created 5 years ago
- Reactions: 25
- Comments: 55 (8 by maintainers)
This is a dramatic flaw of the framework that you can’t do item assignment
@aliutkus sure. Go to PyTorch, everything works fine.
PyTorch supports this, and TF needs it.
Can’t believe this is still an issue in 2021.
So choose pytorch
You can do this:
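The snippet this comment referred to was not preserved in the thread. One common workaround (a sketch, with illustrative shapes and tensor names) is a masked update with `tf.where`, which selects elementwise between the transformed tensor and the original:

```python
import tensorflow as tf

# Stand-in for the `prediction` tensor from the question; shape is illustrative.
prediction = tf.random.normal([4, 4, 3])

# Build a boolean mask that is True only on channel 0 of the last axis.
mask = tf.concat(
    [tf.ones_like(prediction[:, :, :1], dtype=tf.bool),
     tf.zeros_like(prediction[:, :, 1:], dtype=tf.bool)],
    axis=-1)

# Where the mask is True, take the sigmoid-transformed value; elsewhere keep
# the original. This builds a new tensor rather than mutating in place.
prediction = tf.where(mask, tf.math.sigmoid(prediction), prediction)
```

This computes `sigmoid` over the full tensor and discards most of it, so it trades some redundant compute for simplicity.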
I really think this should be addressed.
We are going into 2022 and this is still not addressed.
Not solved, TensorFlow’s development sucks!
After 2 years I still receive updates, so I think this issue should not be marked as closed; it should have some other status,
such as "feature not supported / not intended", but it cannot be marked as closed.
What do you think, guys?
No it’s not, you need to use tf.tensor_scatter_nd_update, it’s so much cooler.
2022, still got this error.
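A minimal sketch of the `tf.tensor_scatter_nd_update` approach mentioned above (shapes and values here are illustrative):

```python
import tensorflow as tf

t = tf.zeros([3, 3])

# Indices of the elements to overwrite (here: the whole first row),
# and the values that replace them.
indices = [[0, 0], [0, 1], [0, 2]]
updates = [1.0, 2.0, 3.0]

# Returns a NEW tensor with the updates applied; the original is untouched.
t = tf.tensor_scatter_nd_update(t, indices, updates)
# t is now [[1., 2., 3.], [0., 0., 0.], [0., 0., 0.]]
```

Note that for large slices you have to materialize an index for every element being written, which can get unwieldy compared to NumPy-style slice assignment.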
Here’s a small snippet that I used to work around the problem. It’s bad, I know.
It’s exactly the snippet I had used, so there are some rough edges.
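The snippet itself was not captured in this thread. One "rough edges" style workaround (a sketch, with assumed shapes) is to unstack the tensor into a Python list, edit the list, and restack:

```python
import tensorflow as tf

# Illustrative stand-in for the tensor being modified.
prediction = tf.random.normal([4, 4, 3])

# tf.unstack turns the last axis into a plain Python list of (4, 4) tensors,
# which CAN be item-assigned; tf.stack rebuilds the tensor afterwards.
channels = tf.unstack(prediction, axis=-1)
channels[0] = tf.math.sigmoid(channels[0])
prediction = tf.stack(channels, axis=-1)
```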
The RFC has been up for a bit. Adding a link here in case anyone wants to provide input.
Comments are invited to the pull request. You can view the design doc here, and also leave comments inline on the document source.
@Valret
It’s in the internal design phase, a public RFC will be sent out either this quarter or next quarter for comments/feedback.
Tensors in TensorFlow are not mutable; assignment to an EagerTensor is illegal.
It means EagerTensor is not as eager as advertised. Treat TF as a static solution for now; if you want dynamic behavior, go to PyTorch.
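Worth noting alongside the comment above: while `EagerTensor` is immutable, `tf.Variable` does support sliced assignment via `.assign` on a slice. A sketch (shapes are illustrative):

```python
import tensorflow as tf

# Variables, unlike eager Tensors, own mutable storage.
v = tf.Variable(tf.random.normal([4, 4, 3]))

# Slicing a Variable yields an assignable view: this mutates v in place.
v[:, :, 0].assign(tf.math.sigmoid(v[:, :, 0]))
```

This only helps when the data can reasonably live in a `tf.Variable`; intermediate results inside a computation are still plain immutable tensors.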
Convert it to a NumPy array and then you can do whatever you want, like this:
BUT, do you really need this?
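The snippet is missing from the thread; a sketch of the NumPy round-trip (note this leaves the autodiff graph, so gradients will not flow through the assignment):

```python
import tensorflow as tf

prediction = tf.random.normal([4, 4, 3])

# Copy to host memory as NumPy (.copy() avoids mutating memory the
# eager tensor may still reference), mutate freely, convert back.
arr = prediction.numpy().copy()
arr[:, :, 0] = tf.math.sigmoid(prediction[:, :, 0]).numpy()
prediction = tf.convert_to_tensor(arr)
```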
Big thumbs up for this request, having either JAX-like or PyTorch-like in-place assignment is a must. Reimplementing most SOTA models to TensorFlow is a nightmare because of the lack of this feature. Fixing this can be a key change for the research use of TensorFlow.
The fact that tensors are immutable creates serious performance problems. I want to preallocate one big tensor once and then fill it in a loop, to avoid expensive memory allocations for many small objects. In TensorFlow I can’t do this. For example, I am struggling with taking a batch of crops from the same image, since this creates lots of small tensors inside the loop; vectorized_map does not help. Everything is super slow. The ability to modify a tensor would solve this issue.
I’ve found that using TensorArray is a good way around this. There is a good example in the TF docs about accumulating values in a loop.
basically something like this
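The snippet was not captured here; a minimal sketch of the accumulation pattern with `tf.TensorArray` (sizes and values are illustrative):

```python
import tensorflow as tf

# Preallocate a TensorArray and fill it element by element in a loop,
# instead of mutating slices of a single preallocated Tensor.
ta = tf.TensorArray(tf.float32, size=3)
for i in range(3):
    # write() returns a new TensorArray handle, so reassign it each step.
    ta = ta.write(i, tf.fill([2], float(i)))

# stack() concatenates the written entries along a new leading axis.
result = ta.stack()  # shape (3, 2)
```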
@aptarmy
IMHO, it would be unwise to wait. Switching to PyTorch - is still valid. 😄
( Also, you might be interested in using Keras 3 with the torch backend. Maybe that’s all you need. I could run gradient-level custom training in Keras 3 with the torch backend, which I was unable to run with TensorFlow. )
any update on this? issue persists in 2.12.0 on Colab CPU.
@mohantym wrote a lot, then deleted it.
Better late than never.
thank you for your work
@sachinprasadhs OK, as mentioned in #56381, JAX-like solutions.
But why don’t you reply in this issue? So many devs are watching it. Write down your roadmap for how to solve this issue.
It’s Nov. 2021 and I still have to use concatenation for tensor slice assignment in TensorFlow.
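The concatenation workaround this comment refers to can be sketched like this (shapes are illustrative):

```python
import tensorflow as tf

prediction = tf.random.normal([4, 4, 3])

# "Assign" to prediction[:, :, 0] by rebuilding the tensor: transform the
# target slice and concatenate it with the untouched remainder.
first = tf.math.sigmoid(prediction[:, :, :1])  # keep the channel dim with :1
rest = prediction[:, :, 1:]
prediction = tf.concat([first, rest], axis=-1)
```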
Why doesn’t TensorFlow support assignment to EagerTensors just like PyTorch?