tensorflow: TypeError: Cannot convert a symbolic Keras input/output to a numpy array.

I don't quite understand why I'm getting this error. I came back to this project after updating some packages and now my code won't work. Below is the code for my model. Any idea how I can avoid this?

from tensorflow import keras
import tensorflow.keras.backend as K
import tensorflow as tf

LEARNING_RATE = 1e-4
HIDDEN_SIZE = 32
CLIPPING = 0.2
LOSS = 1e-5


# PPO loss function
def PPO_loss(advantage, old_prediction):
    def loss(y_true, y_pred):
        prob = y_true * y_pred
        old_prob = y_true * old_prediction
        r = prob/(old_prob + 1e-10)

        clipped = K.clip(r, min_value=1 - CLIPPING, max_value=1 + CLIPPING)
        entropy = -(prob * K.log(prob + 1e-10))
        return -K.mean(K.minimum(r * advantage, clipped * advantage) + LOSS * entropy)

    return loss


class PPO:
    def __init__(self, statesize, num_intruders, actionsize, valuesize):
        self.statesize = statesize
        self.num_intruders = num_intruders
        self.actionsize = 5
        self.valuesize = valuesize

        self.model = self.__build_linear__()

    def __build_linear__(self):
        # Input of the aircraft of focus
        _input = keras.layers.Input(
            shape=(self.statesize,), name='input_state')

        # This is the input for the n_closest aircraft
        _input_context = keras.layers.Input(
            shape=(self.num_intruders, 7), name='input_context')

        # Empty layer
        empty = keras.layers.Input(shape=(HIDDEN_SIZE,), name='empty')

        # Input for advantages
        advantage = keras.layers.Input(shape=(1,), name="advantage")

        # Input old prediction
        old_prediction = keras.layers.Input(
            shape=(self.actionsize,), name='old_predictions')

        # Flatten the context layer (As context is passed as an n*m tensor)
        flatten_context = keras.layers.Flatten()(_input_context)

        # Hidden Layers

        # 1st hidden applies to the context only
        h1 = keras.layers.Dense(
            HIDDEN_SIZE, activation='relu')(flatten_context)

        # Combine the input and the context
        combine = keras.layers.concatenate([_input, h1], axis=1)

        # Hidden layers 2 & 3 apply to all inputs
        h2 = keras.layers.Dense(256, activation='relu')(combine)
        h3 = keras.layers.Dense(256, activation='relu')(h2)

        # Output layer
        out = keras.layers.Dense(self.actionsize+1, activation=None)(h3)

        # Policy and value layer processing
        policy = keras.layers.Lambda(
            lambda x: x[:, :self.actionsize], output_shape=(self.actionsize,))(out)
        value = keras.layers.Lambda(
            lambda x: x[:, self.actionsize:], output_shape=(self.valuesize,))(out)

        # Policy and value outputs
        policy_out = keras.layers.Activation(
            'softmax', name='policy_out')(policy)
        value_out = keras.layers.Activation(
            'linear', name='value_out')(value)

        # Optimizer
        optimizer = keras.optimizers.Adam(lr=LEARNING_RATE)

        # Produce the model
        model = keras.models.Model(inputs=[
                                   _input, _input_context, empty, advantage, old_prediction], outputs=[policy_out, value_out])

        self.estimator = keras.models.Model(
            inputs=[_input, _input_context, empty], outputs=[policy_out, value_out])

        # Compile the model

        model.compile(optimizer=optimizer, loss={'policy_out': PPO_loss(
            advantage=advantage, old_prediction=old_prediction), 'value_out': 'mse'})

        model.summary()
        return model

Most upvoted comments

The main issue here is that you are using a custom loss function that takes an extra argument, advantage (which most likely comes from your data generator as numpy arrays). Under TensorFlow 2 eager execution, that advantage argument will be numpy, whereas y_true and y_pred are symbolic. The way to solve this is to turn off eager execution:

from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()
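The same switch is also available through the public compat API, which avoids importing from the private tensorflow.python namespace:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()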

See similar stackoverflow issue

This is one of the solutions if you use TF 2.x and don't want to disable eager execution: convert your loss function into a loss layer, and make the parameters advantage and old_prediction Input layers. For example:

class PPO_loss_layer(tensorflow.keras.layers.Layer):
    def call(self, y_true, y_pred, advantage, old_prediction):
        ...

y_true = Input(...)
advantage = Input(...)
old_prediction = Input(...)
loss_layer = PPO_loss_layer()(y_true, y_pred, advantage, old_prediction)
model = Model(inputs=[y_true, advantage, old_prediction], outputs=loss_layer)
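A more complete sketch of that pattern is below; the layer and variable names, the hidden sizes, and the constants are assumptions chosen only to illustrate the idea, not code from this thread:

import tensorflow as tf
from tensorflow import keras
import tensorflow.keras.backend as K

CLIPPING = 0.2       # assumed PPO clipping value
ENTROPY_BETA = 1e-5  # assumed entropy bonus weight


class PPOLossLayer(keras.layers.Layer):
    """Computes the PPO surrogate loss from symbolic tensors only and
    registers it via add_loss, so no numpy array is captured in a closure."""

    def call(self, inputs):
        y_true, y_pred, advantage, old_prediction = inputs
        prob = y_true * y_pred
        old_prob = y_true * old_prediction
        r = prob / (old_prob + 1e-10)
        clipped = K.clip(r, 1 - CLIPPING, 1 + CLIPPING)
        entropy = -K.mean(prob * K.log(prob + 1e-10))
        self.add_loss(-K.mean(K.minimum(r * advantage, clipped * advantage))
                      - ENTROPY_BETA * entropy)
        return y_pred  # pass the policy through so the layer has an output


ACTION_SIZE = 5  # assumed
STATE_SIZE = 8   # assumed

state_in = keras.layers.Input(shape=(STATE_SIZE,), name='state')
y_true_in = keras.layers.Input(shape=(ACTION_SIZE,), name='y_true')
advantage_in = keras.layers.Input(shape=(1,), name='advantage')
old_pred_in = keras.layers.Input(shape=(ACTION_SIZE,), name='old_prediction')

h = keras.layers.Dense(64, activation='relu')(state_in)
policy = keras.layers.Dense(ACTION_SIZE, activation='softmax')(h)
policy = PPOLossLayer()([y_true_in, policy, advantage_in, old_pred_in])

train_model = keras.models.Model(
    inputs=[state_in, y_true_in, advantage_in, old_pred_in], outputs=policy)
# The loss is already attached via add_loss, so compile() takes no loss
# argument and fit() is called with only the four input arrays (no separate y).
train_model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4))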

@AndyTsangChun I also encountered the issue when trying to upgrade TF1 code

I'm encountering the same issue here. I tried replacing every numpy component with tf ops in my custom loss function, but I still get the same error. I tried disable_eager_execution() as well, but then hit another Keras-backend-with-numpy issue when initializing random weights in some layers, which didn't appear previously. FYI, I was trying to upgrade my project from TF1 to TF2, and that weight-initialization issue didn't occur under TF1. TF version 2.4.1, NumPy 1.19.5.

Still not resolved, but I found that the root cause was keras.Input.

In my loss function I used the keras.Input tensor for some of the calculation. I'm still looking for a way to convert the Keras tensor to a plain tf tensor.

@rcx986635 Could you maybe upload a more detailed example? 😄 I’m a bit new to this topic and I don’t know if i fully understand your setup for this solution 😃

I am also getting similar errors as filed under this issue: Custom loss function is not working

I have found that the custom loss function works with the TensorFlow v1.15.0 but doesn’t work with TensorFlow v2.3.0 & 2.5.0 (both tested)

But it starts working when eager execution is disabled using:

from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()

I think this comment might be helpful:

from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()

This did the job for me, thanks!

The “add_loss” method stated in this answer seems to solve my problem. Below is my code, cited from here; I hope it helps 🙂

import tensorflow.keras.backend as K
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense, Flatten, Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

# clipping_val, critic_discount, entropy_beta and MOBILENET_IMG_SIZE are
# hyperparameters defined elsewhere in the cited code.

def ppo_loss(y_true, y_pred, oldpolicy_probs, advantages, rewards, values):
    newpolicy_probs = y_pred
    ratio = K.exp(K.log(newpolicy_probs + 1e-10) - K.log(oldpolicy_probs + 1e-10))
    p1 = ratio * advantages
    p2 = K.clip(ratio, min_value=1 - clipping_val, max_value=1 + clipping_val) * advantages
    actor_loss = -K.mean(K.minimum(p1, p2))
    critic_loss = K.mean(K.square(rewards - values))
    total_loss = critic_discount * critic_loss + actor_loss - entropy_beta * K.mean(
        -(newpolicy_probs * K.log(newpolicy_probs + 1e-10)))
    return total_loss

def get_model_actor(input_dims, output_dims):
    state_input = Input(shape=input_dims)
    oldpolicy_probs = Input(shape=(1, output_dims,))
    advantages = Input(shape=(1, 1,))
    rewards = Input(shape=(1, 1,))
    values = Input(shape=(1, 1,))

    n_actions = output_dims
    feature_extractor = MobileNetV2(
        input_shape=(*MOBILENET_IMG_SIZE, 3),
        weights='imagenet', include_top=False)
    for layer in feature_extractor.layers:
        layer.trainable = False
    x = Flatten(name='flatten')(feature_extractor(state_input))
    x = Dense(1024, activation='relu', name='fc1')(x)
    out_actions = Dense(n_actions, activation='sigmoid')(x)
    model_actor = Model(
        inputs=[state_input, oldpolicy_probs, advantages, rewards, values],
        outputs=[out_actions])
    # ==================================================
    # ==================================================
    model_actor.add_loss(ppo_loss(
        y_true=None,
        y_pred=out_actions,
        oldpolicy_probs=oldpolicy_probs,
        advantages=advantages,
        rewards=rewards,
        values=values))
    model_actor.compile(optimizer=Adam(lr=1e-4))
    # ==================================================
    # ==================================================
    return model_actor
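Because the loss is attached with add_loss and references the extra Input tensors, training passes those tensors as model inputs rather than as targets. A rough usage sketch, where the array names and sizes are placeholders rather than anything from the cited code:

# states, old_probs, advantages, rewards and values are numpy arrays shaped
# to match the Input layers defined in get_model_actor above.
actor = get_model_actor(input_dims=(*MOBILENET_IMG_SIZE, 3), output_dims=n_actions)
actor.fit(
    x=[states, old_probs, advantages, rewards, values],
    y=None,  # no target needed: the loss comes entirely from add_loss
    epochs=1)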

Hi @dhyeythumar! Thanks, this fixed my problem, and I can now run on 2.5. The remaining error is fixed by:

encoder = preprocessing.CategoryEncoding(output_mode="binary", num_tokens=len(vocab)+2)

For everyone who hits the original bug after migrating from TF 2.4 to 2.5, you have to change your imports like this:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, Embedding, Conv1D, GlobalMaxPooling1D, Flatten, Dropout, Input, Lambda, concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.layers.experimental import preprocessing
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization

I also have the same issue when defining a custom loss function. disable_eager_execution() solves it but raises a new one:

FailedPreconditionError: Could not find variable training/Adam/beta_1. This could mean that the variable has been deleted. In TF1, it can also mean the variable is uninitialized. Debug info: container=localhost, status=Not found: Resource localhost/training/Adam/beta_1/class tensorflow::Var does not exist. [[{{node training/Adam/Identity_1/ReadVariableOp}}]]

Disabling eager execution seems to break other things as well; models which previously worked are also broken after disabling it.

Sorry, this is a Kaggle competition dataset. Attached: train.csv and test.csv.

Hi, I just added “del model” before instantiating my model which in this case is:

del model
model = build_model(h, w, channels, actions)

And it resolved my issue.

Do give it a shot. I understand it sounds a bit silly, but it worked for me.

What do we have to lose? 😃

This worked for me! Thank you @abhishekvenkat764
