tensorflow: TypeError: Cannot convert a symbolic Keras input/output to a numpy array.
I don't quite understand why I'm getting this error. I came back to this project after updating some things, and now my code won't work. Below is the code for my model. Any idea how I can avoid this?
from tensorflow import keras
import tensorflow.keras.backend as K
import tensorflow as tf

LEARNING_RATE = 1e-4
HIDDEN_SIZE = 32
CLIPPING = 0.2
LOSS = 1e-5

# PPO loss function
def PPO_loss(advantage, old_prediction):
    def loss(y_true, y_pred):
        prob = y_true * y_pred
        old_prob = y_true * old_prediction
        r = prob / (old_prob + 1e-10)
        return -K.mean(K.minimum(r * advantage,
                                 K.clip(r, min_value=1 - CLIPPING,
                                        max_value=1 + CLIPPING) * advantage)
                       + LOSS * -(prob * K.log(prob + 1e-10)))
    return loss


class PPO:
    def __init__(self, statesize, num_intruders, actionsize, valuesize):
        self.statesize = statesize
        self.num_intruders = num_intruders
        self.actionsize = 5
        self.valuesize = valuesize
        self.model = self.__build_linear__()

    def __build_linear__(self):
        # Input of the aircraft of focus
        _input = keras.layers.Input(
            shape=(self.statesize,), name='input_state')
        # This is the input for the n_closest aircraft
        _input_context = keras.layers.Input(
            shape=(self.num_intruders, 7), name='input_context')
        # Empty layer
        empty = keras.layers.Input(shape=(HIDDEN_SIZE,), name='empty')
        # Input for advantages
        advantage = keras.layers.Input(shape=(1,), name='advantage')
        # Input for old predictions
        old_prediction = keras.layers.Input(
            shape=(self.actionsize,), name='old_predictions')
        # Flatten the context layer (as context is passed as an n*m tensor)
        flatten_context = keras.layers.Flatten()(_input_context)
        # Hidden layers
        # The 1st hidden layer applies to the context only
        h1 = keras.layers.Dense(
            HIDDEN_SIZE, activation='relu')(flatten_context)
        # Combine the input and the context
        combine = keras.layers.concatenate([_input, h1], axis=1)
        # Hidden layers 2 & 3 apply to all inputs
        h2 = keras.layers.Dense(256, activation='relu')(combine)
        h3 = keras.layers.Dense(256, activation='relu')(h2)
        # Output layer
        out = keras.layers.Dense(self.actionsize + 1, activation=None)(h3)
        # Policy and value layer processing
        policy = keras.layers.Lambda(
            lambda x: x[:, :self.actionsize],
            output_shape=(self.actionsize,))(out)
        value = keras.layers.Lambda(
            lambda x: x[:, self.actionsize:],
            output_shape=(self.valuesize,))(out)
        # Policy and value outputs
        policy_out = keras.layers.Activation(
            'softmax', name='policy_out')(policy)
        value_out = keras.layers.Activation(
            'linear', name='value_out')(value)
        # Optimizer
        optimizer = keras.optimizers.Adam(lr=LEARNING_RATE)
        # Produce the model
        model = keras.models.Model(
            inputs=[_input, _input_context, empty, advantage, old_prediction],
            outputs=[policy_out, value_out])
        self.estimator = keras.models.Model(
            inputs=[_input, _input_context, empty],
            outputs=[policy_out, value_out])
        # Compile the model
        model.compile(
            optimizer=optimizer,
            loss={'policy_out': PPO_loss(advantage=advantage,
                                         old_prediction=old_prediction),
                  'value_out': 'mse'})
        model.summary()
        return model
The main issue here is that you are using a custom loss function that takes an extra argument, advantage (from your data generator, most likely numpy arrays). Under TensorFlow 2 eager execution, the advantage argument will be numpy, whereas y_true and y_pred are symbolic. The way to solve this is to turn off eager execution. See the similar Stack Overflow issue.
This is one of the solutions if you use TF 2.x and you don't want to disable eager execution: convert your loss function into a loss layer, and feed the parameters advantage and old_prediction in as Input layers. For example:

class PPO_loss_layer(tensorflow.keras.layers.Layer):
    def call(self, y_true, y_pred, advantage, old_prediction):
        …

y_true = Input(…)
advantage = Input(…)
old_prediction = Input(…)
loss_layer = PPO_loss_layer()(y_true, y_pred, advantage, old_prediction)
model = Model(inputs=[y_true, advantage, old_prediction], outputs=loss_layer)
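To make this concrete, here is a minimal, self-contained sketch of the pattern. Layer sizes and names are illustrative (not the commenter's actual code); the loss body reuses the PPO math from the question.

from tensorflow import keras
import tensorflow.keras.backend as K

CLIPPING = 0.2
ENTROPY_BETA = 1e-5  # the question's LOSS constant

class PPOLossLayer(keras.layers.Layer):
    """Computes the PPO loss symbolically and registers it via add_loss."""
    def call(self, y_true, y_pred, advantage, old_prediction):
        prob = y_true * y_pred
        old_prob = y_true * old_prediction
        r = prob / (old_prob + 1e-10)
        clipped = K.clip(r, 1 - CLIPPING, 1 + CLIPPING)
        entropy = -(prob * K.log(prob + 1e-10))
        self.add_loss(-K.mean(K.minimum(r * advantage, clipped * advantage)
                              + ENTROPY_BETA * entropy))
        return y_pred  # pass predictions through unchanged

# Every argument of the loss becomes an Input, so they all stay symbolic together.
state = keras.layers.Input(shape=(8,), name='input_state')
y_true = keras.layers.Input(shape=(5,), name='y_true')
advantage = keras.layers.Input(shape=(1,), name='advantage')
old_prediction = keras.layers.Input(shape=(5,), name='old_prediction')

h = keras.layers.Dense(32, activation='relu')(state)
policy = keras.layers.Dense(5, activation='softmax', name='policy_out')(h)

out = PPOLossLayer()(y_true, policy, advantage, old_prediction)
model = keras.models.Model(
    inputs=[state, y_true, advantage, old_prediction], outputs=out)
model.compile(optimizer='adam')  # no loss argument: the layer supplies it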
Still not resolved, but I found out the root cause was keras.Input.
In my loss function, I used the keras.Input tensor for some calculations. I'm still looking for a way to convert the Keras tensor to a plain tf tensor.
@rcx986635 Could you maybe upload a more detailed example? 😄 I'm a bit new to this topic and I don't know if I fully understand your setup for this solution 😃
from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()

This did the job for me. Thanks!
The “add_loss” method stated in this answer seems to solve my problem. Below is my code, cited from here; I hope it can help 🙂
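As a hedged stand-in for the cited code: training a model whose loss is attached via add_loss looks roughly like this, assuming the loss-layer sketch earlier in the thread (the arrays are illustrative dummies).

import numpy as np

# Dummy batches matching the sketch's Input shapes.
states = np.random.rand(64, 8).astype('float32')
actions = np.eye(5, dtype='float32')[np.random.randint(0, 5, size=64)]
advantages = np.random.rand(64, 1).astype('float32')
old_preds = np.full((64, 5), 0.2, dtype='float32')

# No y argument: the loss is already registered inside PPOLossLayer.
model.fit([states, actions, advantages, old_preds], epochs=5, batch_size=32)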
Hi @dhyeythumar! Thanks, this fixed my problem! Now I can compute on 2.5. The remaining error is then fixed by:

encoder = preprocessing.CategoryEncoding(output_mode="binary", num_tokens=len(vocab) + 2)

For everyone who hits the original bug after migrating TF 2.4 => 2.5, you have to change the imports like this:
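Presumably the import change is the usual TF 2.5 one: import everything from tensorflow.keras instead of the standalone keras package, for example:

# Before (standalone Keras; mixing it with tf.keras breaks on TF 2.5):
# from keras.models import Model
# from keras.layers import Input, Dense

# After (everything from tensorflow.keras):
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.layers.experimental import preprocessing  # CategoryEncoding lives here in 2.5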
I also have the same issue when defining a custom loss function.

disable_eager_execution()

solves the issue but raises a new one:

FailedPreconditionError: Could not find variable training/Adam/beta_1. This could mean that the variable has been deleted. In TF1, it can also mean the variable is uninitialized. Debug info: container=localhost, status=Not found: Resource localhost/training/Adam/beta_1/class tensorflow::Var does not exist. [[{{node training/Adam/Identity_1/ReadVariableOp}}]]
Disabling eager execution seems to break other things as well: models that previously worked are also broken after disabling it.
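If disabling eager execution leaves stale graph state behind (like the missing training/Adam/beta_1 variable above), one workaround worth trying (an assumption, not confirmed in this thread) is to reset the Keras session before rebuilding the model:

import tensorflow as tf

# Clear all graph state, including old optimizer slot variables such as
# training/Adam/beta_1, then build the model from scratch.
tf.keras.backend.clear_session()
model = PPO(statesize=8, num_intruders=3, actionsize=5, valuesize=1).model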
Sorry, this is a Kaggle competition dataset. Attached: train.csv, test.csv
I am also getting similar errors, as filed under this issue: Custom loss function is not working.
I have found that the custom loss function works with TensorFlow v1.15.0 but doesn't work with TensorFlow v2.3.0 or v2.5.0 (both tested).
But it starts working when eager execution is disabled using:
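from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()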
I think this comment might be helpful:
This worked for me! Thank you @abhishekvenkat764
Hi, I just added “del model” before instantiating my model, and it resolved my issue.
Do give it a shot. I understand it sounds a bit silly, but it worked for me. What do we have to lose? 😃
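In context, that amounts to something like this (the constructor arguments are illustrative):

del model  # drop the stale model object before re-instantiating
model = PPO(statesize=8, num_intruders=3, actionsize=5, valuesize=1).model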