keras: WARNING:tensorflow:Early stopping conditioned on metric `val_binary_accuracy` which is not available. Available metrics are:

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 18.04):
  • TensorFlow backend (yes / no): yes
  • TensorFlow version: 2.0.0
  • Keras version: 2.2.4
  • Python version: 3.7.6
  • CUDA/cuDNN version: 10.1/7.6.5
  • GPU model and memory: nvidia 2080ti

Train on 58271 samples, validate on 10284 samples Epoch 1/50 32/58271 […] - ETA: 1:15:08 WARNING:tensorflow:Early stopping conditioned on metric `val_binary_accuracy` which is not available. Available metrics are:


UnknownError Traceback (most recent call last) <ipython-input-15-af98d4292a82> in <module> 3 earlystop = EarlyStopping(monitor = 'val_binary_accuracy',patience =4,mode = 'max') 4 #history = model.fit(X_train, Y_train,batch_size=15,validation_data=(X_val,Y_val),class_weight=train_weights,epochs=50,callbacks=[earlystop]) ----> 5 history = model.fit(X_train, Y_train,batch_size=32,validation_split=0.15,class_weight=train_weights,epochs=50,callbacks=[earlystop])

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs) 726 max_queue_size=max_queue_size, 727 workers=workers, --> 728 use_multiprocessing=use_multiprocessing) 729 730 def evaluate(self,

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs) 322 mode=ModeKeys.TRAIN, 323 training_context=training_context, --> 324 total_epochs=epochs) 325 cbks.make_logs(model, epoch_logs, training_result, ModeKeys.TRAIN) 326

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in run_one_epoch(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs) 121 step=step, mode=mode, size=current_batch_size) as batch_logs: 122 try: --> 123 batch_outs = execution_function(iterator) 124 except (StopIteration, errors.OutOfRangeError): 125 # TODO(kaftan): File bug about tf function and errors.OutOfRangeError?

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in execution_function(input_fn) 84 # numpy translates Tensors to values in Eager mode. 85 return nest.map_structure(_non_none_constant_value, ---> 86 distributed_function(input_fn)) 87 88 return execution_function

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds) 455 456 tracing_count = self._get_tracing_count() --> 457 result = self._call(*args, **kwds) 458 if tracing_count == self._get_tracing_count(): 459 self._call_counter.called_without_tracing()

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds) 518 # Lifting succeeded, so variables are initialized and we can run the 519 # stateless function. --> 520 return self._stateless_fn(*args, **kwds) 521 else: 522 canon_args, canon_kwds = \

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in __call__(self, *args, **kwargs) 1821 """Calls a graph function specialized to the inputs.""" 1822 graph_function, args, kwargs = self._maybe_define_function(args, kwargs) -> 1823 return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access 1824 1825 @property

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _filtered_call(self, args, kwargs) 1139 if isinstance(t, (ops.Tensor, 1140 resource_variable_ops.BaseResourceVariable))), -> 1141 self.captured_inputs) 1142 1143 def _call_flat(self, args, captured_inputs, cancellation_manager=None):

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager) 1222 if executing_eagerly: 1223 flat_outputs = forward_function.call( -> 1224 ctx, args, cancellation_manager=cancellation_manager) 1225 else: 1226 gradient_name = self._delayed_rewrite_functions.register()

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in call(self, ctx, args, cancellation_manager) 509 inputs=args, 510 attrs=("executor_type", executor_type, "config_proto", config), --> 511 ctx=ctx) 512 else: 513 outputs = execute.execute_with_cancellation(

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 65 else: 66 message = e.message ---> 67 six.raise_from(core._status_to_exception(e.code, message), None) 68 except TypeError as e: 69 keras_symbolic_tensors = [

~/anaconda3/envs/tf/lib/python3.7/site-packages/six.py in raise_from(value, from_value)

UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node model/conv1d/conv1d (defined at /home/subhashnerella/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]] [Op:__inference_distributed_function_3729]

Function call stack: distributed_function

I am not using keras but tensorflow.keras. My code is a ConvLSTM that was working before, but suddenly it is not. I am getting the warning and the error shown above. I tried reducing the batch size, but that did not help. What is the reason for this issue, and how can I fix it?

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 17

Most upvoted comments

I figured it out for my case. It actually wasn't EarlyStopping, it was ModelCheckpoint.

In ModelCheckpoint, if you use the save_freq parameter to control when checkpoints are written and the value happens to line up with the end of an epoch, then the last on_batch_end() call of that epoch invokes _save_model() (because the number of elapsed batches equals save_freq) and tries to decide whether to save the model (based on val_loss or another monitored metric). This is premature, because on_epoch_end() has not yet done the work of computing the validation metrics and appending them to the epoch logs.
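A minimal sketch of the mismatch described above, with placeholder data and a made-up model sized so that an integer save_freq lands exactly on an epoch boundary (nothing here is the issue author's actual code); the save_freq='epoch' variant is what avoids saving before the val_* metrics exist:

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 320 training samples / batch_size 32 = 10 batches per epoch.
X_train = np.random.rand(320, 10).astype("float32")
Y_train = np.random.randint(0, 2, size=(320, 1)).astype("float32")
X_val = np.random.rand(64, 10).astype("float32")
Y_val = np.random.randint(0, 2, size=(64, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["binary_accuracy"])

# Problematic: save_freq=10 batches coincides with the final on_batch_end() of the
# epoch, so _save_model() looks for 'val_binary_accuracy' before on_epoch_end()
# has computed the validation metrics.
ckpt_batch_aligned = tf.keras.callbacks.ModelCheckpoint(
    "model.h5", monitor="val_binary_accuracy", mode="max",
    save_best_only=True, save_freq=10)

# Safer: save once per epoch, after validation has run and the val_* logs exist.
ckpt_per_epoch = tf.keras.callbacks.ModelCheckpoint(
    "model.h5", monitor="val_binary_accuracy", mode="max",
    save_best_only=True, save_freq="epoch")

model.fit(X_train, Y_train, batch_size=32, epochs=2,
          validation_data=(X_val, Y_val), callbacks=[ckpt_per_epoch])
```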

tf.compat.v1.disable_eager_execution() lets it run.
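If you want to try that workaround, the call has to happen before the model is built; a minimal sketch with a placeholder model:

```python
import tensorflow as tf

# Must run before any layers or models are created.
tf.compat.v1.disable_eager_execution()

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(10,)),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["binary_accuracy"])
# model.fit(...) then runs on the graph (non-eager) execution path.
```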

Same problem here. I changed the batch_size so that it no longer divides the number of samples (the number of sequences, in my case of LSTM modelling) evenly, and the problem was solved. For example: I had the problem with numSequences=300 and batch_size=6, and it went away with numSequences=300 and batch_size=16. So there may be a bug in the tf.keras.callbacks.EarlyStopping() code. A sketch of the setup follows below.
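For reference, the only change in that workaround is the batch size; a rough sketch with placeholder shapes (300 sequences, which 6 divides evenly but 16 does not; the LSTM model and random data below are illustrative, not the original code):

```python
import numpy as np
import tensorflow as tf

num_sequences, timesteps, features = 300, 20, 8  # placeholder shapes
X = np.random.rand(num_sequences, timesteps, features).astype("float32")
Y = np.random.randint(0, 2, size=(num_sequences, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(16, input_shape=(timesteps, features)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["binary_accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_binary_accuracy", patience=4, mode="max")

# batch_size=6 divides the sample count evenly and reportedly triggered the
# warning; batch_size=16 leaves a partial final batch and reportedly did not.
model.fit(X, Y, batch_size=16, validation_split=0.2, epochs=5,
          callbacks=[early_stop])
```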