tensorflow: Error: Cannot convert 'auto' to EagerTensor of dtype float

Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.

The TensorFlow docs are open source! To get involved, read the documentation contributor guide: https://www.tensorflow.org/community/contribute/docs

URL(s) with the issue:

Please provide a link to the documentation entry, for example: https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction?version=stable

Description of issue (what needs changing):

I intend to build a custom loss function as follows:

```python
from __future__ import absolute_import, division, print_function, unicode_literals
import functools

import numpy as np
import tensorflow as tf


class GeneralDiceLoss(tf.keras.losses.Loss):
	def __init__(self, reduction=tf.keras.losses.Reduction.AUTO, name='GeneralDiceLoss'):
		super().__init__(reduction=reduction, name=name)
		self.epsilon = 1e-16 
	
	
	def get_config(self):
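		# epsilon is a hard-coded constant rather than a constructor argument,
		# so the parent config is sufficient for serialization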
		config = super(GeneralDiceLoss, self).get_config()
		return config
	
	def call(self, yPred, yTrue):
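		# Note: the base class invokes call(y_true, y_pred), so with these
		# parameter names the first argument received is actually the ground truth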
		#yTrue =tf.dtypes.cast(yTrue, dtype=yPred.dtype)
		# Dot product yPred and yTrue and sum them up for each datum and class
		crossProd=tf.multiply(yPred, yTrue)
		crossProdSum=tf.math.reduce_sum(crossProd, axis=np.arange(2, yTrue.ndim))
		# Calculate weight for each datum and class 
		weight = tf.math.reduce_sum(yTrue, axis=np.arange(2, yTrue.ndim))
		weight = tf.math.divide(1, tf.math.square(weight)+self.epsilon)
		# Weighted sum over classes
		numerator = 2*tf.math.reduce_sum(tf.multiply(crossProdSum, weight), axis=1)
		# Squared summation
		yySum = tf.math.reduce_sum(tf.math.square(yPred) + tf.math.square(yTrue), axis=np.arange(2, yTrue.ndim))
		# Weighted sum over classes
		denominator = tf.math.reduce_sum(tf.multiply(weight, yySum), axis=1)
		loss = 1 - tf.math.divide(numerator, denominator+self.epsilon)
		#loss = tf.math.reduce_mean(1 - tf.math.divide(numerator, denominator+self.epsilon))
		
		return loss

```

Then I create variables to test it:

```python

GeneralDiceLoss()
yPred = tf.random.uniform(shape=(16, 3, 4, 4, 4))
yTrue = tf.round(tf.random.uniform(shape=(16, 3, 4, 4, 4)))

loss=GeneralDiceLoss(yPred, yTrue)

```

But I got an error:

```

  File "...\keras-gpu\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 96, in convert_to_eager_tensor
	return ops.EagerTensor(value, ctx.device_name, dtype)

TypeError: Cannot convert 'auto' to EagerTensor of dtype float

```

In the doc above:

  1. There is NO clear indication or warning about this conversion issue, especially since there is no explicit dtype conversion in my code at all.
  2. There is NO clear example indicating which option, AUTO or SUM_OVER_BATCH_SIZE, should be adopted when one's minibatch size is greater than 1. In my case, assuming my batch size is 16 as exhibited in yPred and yTrue above, shall I use

     loss = 1 - tf.math.divide(numerator, denominator + self.epsilon)

     or

     loss = tf.math.reduce_mean(1 - tf.math.divide(numerator, denominator + self.epsilon))

     and with which reduction option?

Building a custom layer/loss function is already a tough task for many practitioners, so could the doc provide more detailed explanations and examples to make users' lives a little easier? Many thanks.

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 16 (2 by maintainers)

Most upvoted comments

You need to create an instance of your loss class before you call it. As written, `loss = GeneralDiceLoss(yPred, yTrue)` invokes the constructor, so `yPred` is bound to the `reduction` parameter and `yTrue` to `name`. When Keras then validates the reduction, it compares that float tensor against the string `'auto'` (`tf.keras.losses.Reduction.AUTO`), which appears to be what produces the `Cannot convert 'auto' to EagerTensor of dtype float` message.

```python
loss_func = GeneralDiceLoss()
loss_val = loss_func(yPred, yTrue)
```
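
On the AUTO vs. SUM_OVER_BATCH_SIZE question: the convention is that `call` returns one loss value per sample and the `Loss` wrapper applies the reduction, so the variant without `reduce_mean` is the one to keep. `AUTO` resolves to `SUM_OVER_BATCH_SIZE` outside of a distribution-strategy scope, which here means averaging the 16 per-sample values. A minimal sketch of how that plays out (my own addition, assuming TF 2.x and the corrected instantiation above):

```python
import tensorflow as tf

loss_func = GeneralDiceLoss()  # default reduction AUTO -> SUM_OVER_BATCH_SIZE

yPred = tf.random.uniform(shape=(16, 3, 4, 4, 4))
yTrue = tf.round(tf.random.uniform(shape=(16, 3, 4, 4, 4)))

per_sample = loss_func.call(yPred, yTrue)  # shape (16,): one loss per sample
reduced = loss_func(yPred, yTrue)          # scalar: reduced over the batch

# SUM_OVER_BATCH_SIZE divides the summed per-sample losses by their count,
# i.e. the wrapper computes the batch mean for you:
tf.debugging.assert_near(reduced, tf.reduce_mean(per_sample))
```

If `call` already returned a scalar via `reduce_mean`, the wrapper's reduction would simply pass it through, so both variants yield the same number here; returning per-sample losses is still preferable because it keeps `sample_weight` and distributed reductions working as intended.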